
Many people, including me, are quite uncomfortable with generative AI. Most of this discomfort can be traced to the various ethical challenges that AI raises. But an understanding of the different schools of ethics can help us both to better understand those challenges and to decide what to do about them.
Three different ethical approaches
The first thing to say about the ethics of AI is that there is no single ‘ethics’. When we engage with ethical issues there are typically at least three different systems that might be in play:
- Deontological ethics, or the ethics of duty (regardless of consequences, and so “non-consequentialist”). This is the school that, for example, would say killing or lying is always wrong, regardless of the outcomes: even lying to prevent a murder is wrong.
- Virtue ethics. In this system, decisions are guided by a person’s own virtues and character, rather than by external duties or outcomes.
- Consequentialist ethics (part of teleological ethics), where outcomes are important. In this school it might be justified to lie if it would save a life, or to kill if it would save many lives.
Any discussion of the ‘ethics of AI’ needs to first identify what ethical approach is being adopted.
For example, one ethical issue with AI is the amount of energy and water used by the technology. A response to this might fall into one of the following camps:
- “It is our duty not to consume excessive energy and water. No one should use AI for any reason.” (deontological ethics)
- “I don’t want to harm the environment, so I won’t use AI.” (virtue ethics)
- “Energy and water usage has negative outcomes, but there may be some contexts where the energy usage of some AI use also has beneficial outcomes (such as allowing journalists to highlight problems in society), or where not using it results in negative outcomes (such as missed diagnoses in health). I will use genAI where the overall benefit of doing so is greater than the damage caused by the carbon footprint of the interaction, but if that is not the case I will use an alternative approach with a smaller footprint.” (consequentialist ethics)
Whatever position is held, identifying which ethical approach it comes from will help when discussing it with others: at the very least you will understand that the disagreement may be more about the nature of ethics itself than about generative AI specifically.
Industry guidelines: duties, virtues, or consequences?

Industry guidelines around the use of AI fall into all three categories. Many, for example, focus on the duty not to enter “confidential information, trade secrets, or personal data into AI tools.” Duties such as transparency and accountability also frequently recur.
Elements of virtue ethics can also be found in Thomson Reuters’ guidelines, even as they continue to talk about duties around how to treat people:
“Thomson Reuters will strive to maintain meaningful human involvement, and design, develop and deploy AI products and services and use data in a manner that treats people fairly.”
Outside of areas such as data protection and diversity, however, the guidelines become more consequentialist: basic principles are expanded to outline exceptions or acceptable use. William Reed’s statement about the use of AI is one of the clearest examples of this:
“We may experiment with AI-generated images or video under certain conditions, ensuring that artists incorporate significant creative input and avoid unlawful imitation or copyright infringement. We respect the value of stock photography and will not replace it with AI-generated images until creators are fairly compensated by AI companies.”
The BBC’s extensive editorial guidance also includes many examples of “acceptable” reasons (outcomes) for using generative AI:
“Examples of acceptable use might include creating a synthesised voice to deliver text based content, where it does not seek to replicate the voice of another individual, or a ‘deepfake’ face used to preserve anonymity in a documentary.”
Beyond the guidelines: individual feelings about AI

Many individual arguments about AI will revolve around more personal principles. Some journalists may feel that their creativity (a virtue) is challenged by AI; others may feel that the same creativity requires them to experiment with it.
The same thing, by the way, happened with the introduction of another technology: photography — which was seen by Baudelaire as “art’s most mortal enemy” but by Poe as “an enormous stride forward”.
Some journalists may feel that truth and accuracy (a duty) are threatened by genAI’s tendency to hallucinate and its association with deepfakes; others may see genAI as a way of meeting that duty by tackling problems in the accuracy of reporting.
If I think about my own work with AI I can see a number of these dimensions coming into play. Curiosity, for example, is both an important personal ‘virtue’ and a journalistic ‘duty’ that drove the decision to research genAI (both theoretically and practically) in order to be properly informed about it.
Other duties also come into play: the duty to better serve audiences through exploring possibilities for improving journalism, or to keep journalism economically viable.
For a trainer and educator there is a duty to accurately inform the journalists and students being trained, and to tackle problems such as diversity and bias.
But the environmental impacts of large language models introduce a tension with another duty: the duty to limit harm to the environment. And there are concerns around AI’s impact on the rights of creators and on jobs (where research suggests it will create jobs but also reduce demand for certain work).
Exploring and resolving those tensions meant engaging with consequentialist ethics on a case-by-case basis: for example, why use genAI for tasks that can be achieved through a less energy-intensive Google search?
Consequences are complex
While non-consequentialist ethics offer the attraction of simple black-and-white moralities, the difficulty with consequentialist decisions is that consequences are ultimately predictions, and therefore to a degree subjective.
We generally cannot know for certain that a particular usage is going to have a particular impact — or factor in unforeseen consequences, good or bad.
The environmental issue is a particularly tricky one in this regard, not least because, outside of extreme examples such as curing diseases, it is very difficult to weigh the environmental impact of a specific use of genAI against the wider benefits of the same use.
Any discussion about consequences has to acknowledge this and, ultimately, the fact that we cannot ever be right or wrong about something that cannot be known for certain: we can only disagree and, at best, wait to find out if we are proved right or wrong.
