In this latest post in the FAQ series, I’ve been asked to help answer a question on the “ethical dilemmas faced by news organisations when considering the use of AI in reporting stories”.
A good place to start when looking at the ethical dilemmas surrounding a new technology is to take existing ethical issues in journalism and consider how a new technology — in this case generative AI — impacts on those. Those would include things like:
- Accuracy
- Objectivity
- Public interest vs privacy/private interest
- Editorial independence
- Holding power to account
- Providing a forum for public debate
- Engaging a public so that they can make informed choices
(Bill Kovach and Tom Rosenstiel’s “The Elements of Journalism” is a good source of some of these themes.)
The ethics of generative AI: accuracy and objectivity
Accuracy is probably the main ethical issue that comes up when people talk about journalistic use of generative AI: it cannot be relied upon to generate factual content, and it has a tendency to “hallucinate”.
Generative AI is designed to predict the next word in a sentence, not to check whether the sequence of words is factually correct.
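To make that concrete, here is a minimal sketch using the open-source GPT-2 model via the Hugging Face transformers library (the model choice is illustrative, not what any newsroom tool actually runs). It shows that a language model simply scores candidate next words by probability; nothing in the process checks those words against the facts.

```python
# A minimal sketch of next-word prediction, using the open-source GPT-2
# model via the Hugging Face transformers library (illustrative choice).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Winston Churchill was born in"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# The model assigns a probability to every possible next token;
# at no point does it consult a source of facts.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.3f}")
```

Whichever candidate scores highest gets written, whether or not it is true.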
Objectivity is more subtle and requires a bit more thought about what objectivity is and how it is achieved.
There’s a lot of literature on this, from differing perspectives, which is worth exploring, but broadly speaking we can say that journalists take steps to try to make their work more objective than non-journalistic writing.
For example, seeking out a range of voices and offering a right of reply on stories helps ensure that they don’t just represent one side.
Now the large language models behind generative AI are trained on millions of webpages, but only some of their developers make any attempt to ensure that the weighting of those documents reflects these factors, and even where they do, the results can often be clumsy and “unacceptable”.
So journalists have a responsibility to be conscious of the voices that aren’t present in that training material, as well as the weight of factual evidence behind what they’re saying.
As part of our research into these issues at BCU we found that when prompted “What are the important events in the life of Winston Churchill?”, the AI powering Bing failed to mention any of his controversial positions: his views on race, his role in the Bengal famine, or his attitudes towards Jews and Islam.
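A hypothetical sketch of how that kind of omission check might be automated follows; this is not the BCU methodology, and the client library, model name, and topic list are illustrative assumptions.

```python
# A hypothetical sketch (not the BCU methodology) of an "omission check":
# prompt a model, then search the reply for topics a human researcher
# has flagged as important.
from openai import OpenAI  # assumes the OpenAI Python client and an API key

client = OpenAI()

PROMPT = "What are the important events in the life of Winston Churchill?"
# Topics the test described above found missing from Bing's answer.
EXPECTED_TOPICS = ["race", "Bengal famine", "Jews", "Islam"]

reply = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": PROMPT}],
).choices[0].message.content

for topic in EXPECTED_TOPICS:
    present = topic.lower() in reply.lower()
    print(f"{topic}: {'mentioned' if present else 'MISSING'}")
```

A simple keyword match like this only flags candidate omissions; a human still has to judge whether a topic was genuinely absent or merely worded differently.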
Can generative AI ‘hold power to account’?
Holding power to account is an especially interesting issue: generative AI’s training data reflects the wider power structures of society in a number of ways, as do the people who work in the industry.
Language (and images) reflect social stereotypes, so journalists risk perpetuating those stereotypes if they don’t recognise them and take steps to address them.
On the positive side, AI can be used to check your own biases, too, by asking it what biases a range of audiences might see in your writing.
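A hypothetical sketch of that kind of bias check follows; the prompt wording, file name, and model name are all illustrative assumptions, not a tested workflow.

```python
# A hypothetical sketch of using a chat model as a bias checker for a draft.
from openai import OpenAI  # assumes the OpenAI Python client and an API key

client = OpenAI()

draft = open("draft_story.txt").read()  # hypothetical file holding your copy

audiences = [
    "younger readers",
    "older readers",
    "readers outside the UK",
    "readers from minority communities",
]

prompt = (
    "You are reviewing a news story for bias. For each of these audiences ("
    + ", ".join(audiences)
    + "), describe any biases or missing perspectives they might see:\n\n"
    + draft
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Treat the output as a prompt for your own reflection rather than a verdict: the model’s view of bias is itself shaped by its training data.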
Engaging a public so that they can make informed choices is an interesting one: we can use AI to improve and speed up our writing, so it may open up opportunities to make stories more engaging than we could have made them without it.
Relying on generative AI also raises editorial independence issues: it means delegating some of the information gathering, and with it some of the control.
There are also issues around transparency: if you use AI in generating an article and don’t disclose that to an audience, are you misleading them and omitting vital context?
And copyright: if AI generates articles or images based on material whose creators have not given permission for it to be used in that way (especially where you might otherwise have paid an illustrator or writer to do that work), are you denying them proper financial or moral recognition for their work?
Those are just some of the ethical issues I think we should be considering, and they come from journalism’s own frameworks alone. We should also be looking towards ethics frameworks from computing (especially where we may be holding AI itself to account as a form of power): the output of the Oxford Institute for Ethics in AI, among others, is a good place to start.