
In the latest in a series of posts on using generative AI, I look at how tools such as ChatGPT and Claude.ai can help identify potential bias and check story drafts against relevant guidelines.
We are all biased — it’s human nature. It’s the reason stories are edited; it’s the reason that guidelines require journalists to stick to the facts, to be objective, and to seek a right of reply. But as the Columbia Journalism Review noted two decades ago: “Ask ten journalists what objectivity means and you’ll get ten different answers.”
Generative AI is notoriously biased itself — but it has also been trained on more material on bias than any human likely has. So, unlike a biased human, when you explicitly ask it to identify bias in your own reporting, it can perform surprisingly well.
It can also be very effective in helping us consider how relevant guidelines might be applied to our reporting — a checkpoint that should be just as baked in as the right of reply.
In this post I’ll go through some template prompts and tips on each. First, a recap of the rules of thumb I introduced in the previous post.
- Provide context about story format, audience, and outlet
- Be specific about what aspect of the text you want it to review
- Ask the genAI tool to draw your attention to what can be improved
- Ask genAI not to make any changes itself
- Focus on one section at a time
- Consider copyright, data protection and information security
Also, a reminder that Claude is generally the better-performing tool when it comes to these sorts of tasks (and Google Gemini does worst).
Using generative AI to identify bias

I’ve already written about six principles for using generative AI in journalism that address diversity and inclusion — well, here’s a seventh: use genAI to identify biases in your own reporting.
That’s what Trusting News explored last year: “Can ChatGPT offer counterarguments to polarizing or controversial narratives?” they asked.
Three prompts seemed to be effective:
Is there bias in this story?
Would this news story feel fair to people with different views on [abortion]?
How could this news article be written better in order for a [climate skeptic] to consume and understand it?
But those prompts are lacking context. Here’s a better prompt which is more ‘CAREful’ (Context, Ask, Rules, Examples):
You are a sub-editor at a [UK regional newspaper] with years of experience in editing stories. You like to see journalists become better writers by receiving advice from you on how to improve their stories, and you can be very frank and honest about the changes that they need to make.
Part of your role involves ensuring that stories are free of bias. One of your reporters has asked you for feedback on this [news] story: provide feedback on 1) Any elements where there might be bias; 2) Whether the story would feel fair to people with different views on the issue. Do not make changes to the story.
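If you want to run this kind of check outside the chat interface, the same prompt can be sent through an API. Below is a minimal sketch using Anthropic’s Python SDK; the model name, file name and variable names are my own illustrative assumptions, not part of the prompt template above.

```python
# A minimal sketch of running the bias-check prompt against Claude's API.
# Assumes the `anthropic` package is installed and ANTHROPIC_API_KEY is set;
# the model name and file name are illustrative, not prescriptive.
import anthropic

client = anthropic.Anthropic()

SYSTEM_PROMPT = (
    "You are a sub-editor at a UK regional newspaper with years of experience "
    "in editing stories. You like to see journalists become better writers by "
    "receiving advice from you on how to improve their stories, and you can "
    "be very frank and honest about the changes that they need to make."
)

story = open("draft_story.txt").read()  # the draft you want reviewed

first_user_message = (
    "Part of your role involves ensuring that stories are free of bias. "
    "One of your reporters has asked you for feedback on this news story: "
    "provide feedback on 1) any elements where there might be bias; "
    "2) whether the story would feel fair to people with different views "
    "on the issue. Do not make changes to the story.\n\n" + story
)

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # illustrative; use whichever model is current
    max_tokens=1024,
    system=SYSTEM_PROMPT,
    messages=[{"role": "user", "content": first_user_message}],
)
print(response.content[0].text)
```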
A more detailed and wider-ranging response can be generated by explaining exactly what we mean by bias (the ‘Examples’ part of a CAREful prompt) and asking for advice on steps that could be taken. For example:
The publication has just introduced a policy of ensuring that all stories reflect the diversity of the communities it seeks to serve. Those communities include people of different faiths, social class, ethnicity, gender, and sexuality. It includes both rural and urban readers, old and young, and people with disabilities.
Identify any potential bias in this article, and the steps the journalist can take to address it.
The response is likely to be so comprehensive that putting all the suggestions into practice would actually result in a sprawling piece which is trying to tell too many stories at the same time.
But the point of the detailed prompt isn’t to generate a to-do list; it’s to help look at the reporting through a number of different lenses. There might be some changes we can make to the article, some suggestions that we reject or adapt, and some that we can consider for future stories.
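For those scripting the exchange rather than working in the chat window, the follow-up works the same way: keep the running message list, append the model’s first reply, and send the new instruction as the next turn. A hedged continuation of the earlier sketch (it reuses `client`, `SYSTEM_PROMPT`, `first_user_message` and `response` from that block):

```python
# Continuing the same conversation with the diversity-policy follow-up.
# Reuses `client`, `SYSTEM_PROMPT`, `first_user_message` and `response`
# from the sketch above; the model name remains an illustrative choice.
follow_up = (
    "The publication has just introduced a policy of ensuring that all "
    "stories reflect the diversity of the communities it seeks to serve. "
    "Those communities include people of different faiths, social class, "
    "ethnicity, gender, and sexuality. It includes both rural and urban "
    "readers, old and young, and people with disabilities.\n\n"
    "Identify any potential bias in this article, and the steps the "
    "journalist can take to address it."
)

second = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # illustrative model name
    max_tokens=1024,
    system=SYSTEM_PROMPT,
    messages=[
        {"role": "user", "content": first_user_message},
        {"role": "assistant", "content": response.content[0].text},
        {"role": "user", "content": follow_up},
    ],
)
print(second.content[0].text)
```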
Note: bias is also addressed in the previous post on using genAI for sourcing and research.
Applying guidelines to stories with the help of genAI

Many stories touch on issues where guidelines have been written to help journalists. Examples range from reporting on elections, surveys, protests, accidents and deaths, to stories involving children, disability, disfigurements, race and racism, mental health and suicide, migration, poverty, religion, sexuality and gender, and sexual violence.
If the story touches on any of these, it is a good idea to inject the relevant guidelines into the conversation and ask that they be specifically considered. For example, after establishing the personality and role of the editor, a second par might add this:
You are editing a story which relates to migration. Here are the guidelines that the story needs to follow: [PASTE GUIDELINES]
Remember that genAI tools have a limit on the amount of information you can enter in any one prompt. If the guidelines are summarised in a bullet list, that is the best thing to paste; otherwise you can paste a few hundred words of the guidelines in the first prompt, and then further sections in subsequent prompts, indicating “Here is more of the guidelines”.
Once the guidelines have been communicated, you can paste the article (or the first part of it, if the article is too long) that you want guidance on, prompting:
Here is the [first part of the] article that needs editing. Identify any aspects of the article which the guidelines relate to, and where improvements could be made, make suggestions for what those improvements might be. Do not make any changes to the article.
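To make those length limits concrete, here is one way the whole workflow might be scripted: split the guidelines into chunks, feed them in turn by turn, then paste the article and ask for the review. Everything in this sketch (the file names, the chunk size, the stub acknowledgements, the model name) is an assumption for illustration rather than a tested recipe.

```python
# Sketch: feeding long guidelines to the model in chunks, then asking it
# to review an article against them. File names, chunk size and model
# name are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()
CHUNK_SIZE = 2000  # characters per chunk; adjust to the tool's limits

guidelines = open("migration_guidelines.txt").read()
article = open("draft_article.txt").read()

chunks = [guidelines[i:i + CHUNK_SIZE] for i in range(0, len(guidelines), CHUNK_SIZE)]

# Build the conversation turn by turn: the first chunk arrives with the
# framing, later chunks are flagged as continuations. Short stub
# acknowledgements keep the user/assistant turns alternating.
messages = [{
    "role": "user",
    "content": "You are editing a story which relates to migration. Here are "
               "the guidelines that the story needs to follow:\n\n" + chunks[0],
}]
for chunk in chunks[1:]:
    messages.append({"role": "assistant", "content": "Noted. Please continue."})
    messages.append({"role": "user", "content": "Here is more of the guidelines:\n\n" + chunk})

# Finally, paste the article and ask for suggestions rather than rewrites.
messages.append({"role": "assistant", "content": "Noted. Please share the article."})
messages.append({
    "role": "user",
    "content": "Here is the article that needs editing. Identify any aspects "
               "of the article which the guidelines relate to, and where "
               "improvements could be made, make suggestions for what those "
               "improvements might be. Do not make any changes to the "
               "article.\n\n" + article,
})

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # illustrative; use whichever model is current
    max_tokens=1500,
    messages=messages,
)
print(response.content[0].text)
```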
This is not a replacement for reading and applying the guidelines yourself, of course, but rather a way of ensuring that you haven’t missed anything, and of providing suggestions you may not have considered. It can also help you develop a better understanding of the guidelines over the longer term.
Have you used generative AI to help with any editing or reviewing tasks? Please let me know in the comments or on social media.
