Identifying bias in your writing — with generative AI

[Image: pyramid of applications of genAI in the journalism process, with the third 'Production' tier highlighted: "Identify jargon and bias; improve spelling, grammar, structure and brevity"]

In the latest in a series of posts on using generative AI, I look at how tools such as ChatGPT and Claude.ai can help identify potential bias and check story drafts against relevant guidelines.

We are all biased — it’s human nature. It’s the reason stories are edited; it’s the reason that guidelines require journalists to stick to the facts, to be objective, and to seek a right of reply. But as the Columbia Journalism Review noted two decades ago: “Ask ten journalists what objectivity means and you’ll get ten different answers.”

Generative AI is notoriously biased itself — but it has also been trained on more material on bias than any human likely has. So, unlike a biased human, when you explicitly ask it to identify bias in your own reporting, it can perform surprisingly well.

It can also be very effective in helping us consider how relevant guidelines might apply to our reporting — a checkpoint that should be just as baked-in as the right of reply.

In this post I’ll go through some template prompts and tips on each. First, a recap of the rules of thumb I introduced in the previous post.

  1. Provide context about story format, audience, and outlet
  2. Be specific about what aspect of the text you want it to review
  3. Ask the genAI tool to draw your attention to what can be improved
  4. Ask genAI not to make any changes itself
  5. Focus on one section at a time
  6. Consider copyright, data protection and information security

Also, a reminder that Claude is generally the better-performing tool when it comes to these sorts of tasks (and Google Gemini does worst).

Using generative AI to identify bias

  • Emotive language: The use of phrases like "vile messages" and "innocent person" could be seen as editorializing rather than objective reporting. Step: Stick to factual descriptions and allow readers to form their own opinions.
  • Implicit association: By mentioning Alum Rock and Bordesley Green, areas with significant ethnic minority populations, without providing context, the article might reinforce stereotypes. Step: If these areas are relevant, explain why. If not, consider whether mentioning them is necessary.
  • Overemphasis on ethnicity: The prominence of the suspect's name (which suggests South Asian heritage) without similar details about other individuals mentioned could be seen as biased. Step: Ensure consistent treatment of all individuals mentioned in the article.

When asked to identify potential bias and steps to take, Claude is able to identify a range of issues.

I’ve already written about six principles for using generative AI in journalism that address diversity and inclusion — well, here’s a seventh: use genAI to identify biases in your own reporting.

That’s what Trusting News explored last year: “Can ChatGPT offer counterarguments to polarizing or controversial narratives?” they asked.

Three prompts seemed to be effective:

  • Is there bias in this story?
  • Would this news story feel fair to people with different views on [abortion]?
  • How could this news article be written better in order for a [climate skeptic] to consume and understand it?

But those prompts are lacking context. Here’s a better prompt which is more ‘CAREful‘ (Context, Ask, Rules, Examples):

You are a sub editor at a [UK regional newspaper] with years of experience in editing stories. You like to see journalists become better writers by receiving advice from you on how to improve their stories, and you can be very frank and honest about the changes that they need to make.

Part of your role involves ensuring that stories are free of bias. One of your reporters has asked you for feedback on this [news] story: provide feedback on 1) Any elements where there might be bias; 2) Whether the story would feel fair to people with different views on the issue. Do not make changes to the story.
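
If you want to run this kind of check repeatedly, the same prompt can be sent programmatically. Below is a minimal sketch using Anthropic's Python SDK; the model name, token limit and draft filename are assumptions on my part, and the same structure works with OpenAI's SDK for ChatGPT.

    import anthropic

    # The SDK reads ANTHROPIC_API_KEY from the environment
    client = anthropic.Anthropic()

    # The 'CAREful' context and rules go in the system prompt
    system_prompt = (
        "You are a sub editor at a UK regional newspaper with years of "
        "experience in editing stories. You like to see journalists become "
        "better writers by receiving advice from you on how to improve their "
        "stories, and you can be very frank and honest about the changes "
        "that they need to make. Part of your role involves ensuring that "
        "stories are free of bias."
    )

    draft = open("draft.txt").read()  # assumed filename for the story draft

    first_prompt = (
        "One of your reporters has asked you for feedback on this news "
        "story: provide feedback on 1) Any elements where there might be "
        "bias; 2) Whether the story would feel fair to people with "
        "different views on the issue. Do not make changes to the story."
        "\n\n" + draft
    )

    response = client.messages.create(
        model="claude-3-5-sonnet-20240620",  # assumed; use whichever model is current
        max_tokens=1000,
        system=system_prompt,
        messages=[{"role": "user", "content": first_prompt}],
    )
    print(response.content[0].text)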

A more detailed and wider-ranging response can be generated by explaining exactly what we mean by bias (with examples), and by asking for advice on steps that could be taken. For example:

The publication has just introduced a policy of ensuring that all stories reflect the diversity of the communities it seeks to serve. Those communities include people of different faiths, social class, ethnicity, gender, and sexuality. It includes both rural and urban readers, old and young, and people with disabilities.

Identify any potential bias in this article and steps the journalist can take to address those.
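
In a chat interface this is simply the next message in the same thread; over the API you carry the conversation history forward yourself. A sketch of that follow-up turn, reusing the client, system_prompt, first_prompt and response from the sketch above:

    # Resend the history (original prompt plus Claude's reply) with the new request
    follow_up = client.messages.create(
        model="claude-3-5-sonnet-20240620",  # assumed, as before
        max_tokens=1000,
        system=system_prompt,
        messages=[
            {"role": "user", "content": first_prompt},
            {"role": "assistant", "content": response.content[0].text},
            {"role": "user", "content": (
                "The publication has just introduced a policy of ensuring "
                "that all stories reflect the diversity of the communities "
                "it seeks to serve. Identify any potential bias in this "
                "article and steps the journalist can take to address those."
            )},
        ],
    )
    print(follow_up.content[0].text)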

The response is likely to be so comprehensive that putting all the suggestions into practice would result in a sprawling piece trying to tell too many stories at the same time.

But the point of the detailed prompt isn’t to generate a to-do list; it’s to help look at the reporting through a number of different lenses. There might be some changes we can make to the article, some suggestions that we reject or adapt, and some that we can consider for future stories.

Note: bias is also addressed in the previous post on using genAI for sourcing and research.

Applying guidelines to stories with the help of genAI

  Thank you for sharing this article. As a sub editor, I have several concerns about its content and presentation. I'll provide a detailed critique focusing on the ethical guidelines for migration reporting that you've shared:

  Facts not bias:
  • The headline and opening paragraph are sensationalist and potentially misleading. The £150,000 figure is presented without context, which could lead to bias against low-skilled migrants.
  • The article fails to provide a balanced view of the economic impact of migration, focusing heavily on costs without adequately explaining benefits.
  • The comparison between migrants and British-born workers is not like-for-like, as it doesn't account for lifelong contributions of British-born individuals.

  Know the law:
  • The article uses the term "low-skilled migrants" without defining it. We should use more precise terminology, such as "low-paid migrant workers" or "migrants in low-wage occupations".
  • There's no mention of the legal rights of migrants or their status under international, regional, or national law.

When provided with guidelines on reporting migration, Claude provides this feedback on a story.

Many stories touch on issues where guidelines have been written to help journalists. Examples range from reporting on elections, surveys, protests, accidents and deaths, to stories involving children, disability, disfigurements, race and racism, mental health and suicide, migration, poverty, religion, sexuality and gender, and sexual violence.

If the story touches on issues where guidelines exist, it is a good idea to inject that into the conversation and ask that it be specifically considered. For example, after establishing the personality and role of the editor, a second par might add this:

You are editing a story which relates to migration. Here are the guidelines that the story needs to follow: [PASTE GUIDELINES]

Remember that genAI tools limit the amount of information you can enter in any one prompt. If the guidelines are summarised in a bullet list, that is the best thing to paste; otherwise you can paste a few hundred words of the guidelines in the first prompt, and then further sections in follow-up prompts, indicating “Here is more of the guidelines” (see the sketch after the prompt below).

Once the guidelines have been communicated, you can paste the article (or first part of the article, if too long) which you want guidance on, prompting:

Here is the [first part of the] article that needs editing. Identify any aspects of the article which the guidelines relate to, and where improvements could be made, make suggestions for what those improvements might be. Do not make any changes to the article.
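
Scripted, the whole workflow might look like the sketch below: feed the guidelines in manageable chunks, then the article, resending the full history with each call. The chunking approach, chunk size, filenames and model name are all assumptions, and textwrap splits on whitespace rather than at section boundaries, so treat it as a rough starting point.

    import textwrap

    import anthropic

    client = anthropic.Anthropic()
    MODEL = "claude-3-5-sonnet-20240620"  # assumed; use whichever model is current
    SYSTEM = (
        "You are a sub editor at a UK regional newspaper. You are editing "
        "a story which relates to migration."
    )

    guidelines = open("migration_guidelines.txt").read()  # assumed filenames
    article = open("draft.txt").read()

    # Break long guidelines into roughly 2,000-character pieces so no
    # single message is too large; real limits vary by tool and model
    chunks = textwrap.wrap(guidelines, width=2000, break_long_words=False)

    history = []
    for i, chunk in enumerate(chunks):
        lead = ("Here are the guidelines that the story needs to follow:"
                if i == 0 else "Here is more of the guidelines:")
        history.append({"role": "user", "content": lead + "\n\n" + chunk})
        reply = client.messages.create(
            model=MODEL, max_tokens=200, system=SYSTEM, messages=history,
        )
        history.append({"role": "assistant", "content": reply.content[0].text})

    # Now the article itself, with the editing instruction
    history.append({"role": "user", "content": (
        "Here is the article that needs editing. Identify any aspects of "
        "the article which the guidelines relate to, and where improvements "
        "could be made, make suggestions for what those improvements might "
        "be. Do not make any changes to the article.\n\n" + article
    )})
    feedback = client.messages.create(
        model=MODEL, max_tokens=1500, system=SYSTEM, messages=history,
    )
    print(feedback.content[0].text)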

This is not a replacement for reading and applying the guidelines yourself, of course, but rather a way of ensuring that you haven’t missed anything, and providing suggestions you may not have considered. It can also help develop a better understanding of the guidelines longer term.

Have you used generative AI to help with any editing or reviewing tasks? Please let me know in the comments or on social media.

