Tag Archives: bias

“Journey prompts” and “destination prompts”: how to avoid becoming deskilled when using AI

Photo: Tiana

How do you use AI without becoming less creative, more stupid, or deskilled? One strategy is to check whether your prompts are focused on building the skills that will get you somewhere, or only on an endpoint that you’re trying to get to — what I call “journey prompts” and “destination prompts”.

In creative work, for example, you might be looking for an idea, or aiming to produce a story or image. In journalism or learning, a ‘destination’ might be key facts, or an article or report.

But prompts that focus only on those destinations are less likely to help us learn, more likely to deskill us — and more likely to add errors to our work.

To avoid those pitfalls, it is better to focus on how we get to those destinations. What, in other words, are the journeys?

Continue reading

AI and “editorial independence”: a risk — or a distraction?

When you have a hammer does everything look like a nail? Photo by Hunter Haley on Unsplash

TL;DR: By treating AI as a biased actor rather than a tool shaped by human choices, we risk ignoring more fundamental sources of bias within journalism itself. Editorial independence lies in how we manage tools, not which ones we use.

Might AI challenge editorial independence? It’s a suggestion made in some guidance on AI — and I think a flawed one.

Why? Let me count the ways. The first problem is that it contributes to a misunderstanding of how AI works. The second is that it reinforces a potentially superficial understanding of editorial independence and objectivity. But the main danger is that it distracts from the broader problems of bias and independence in our own newsrooms.

Continue reading

Why I’m no longer saying AI is “biased”

TL;DR: Saying "AI has biases" or "biased training data" is preferable to "AI is biased" because it reduces the risk of anthropomorphism and focuses on potential solutions, not problems.

Searches for "AI bias" peaked in 2025. In March 2025 twice as many searches were made for "AI bias" as 12 months before.

For the last two years I have been standing in front of classes and conferences saying the words “AI is biased” — but a couple of months ago, I stopped.

As journalists, we are trained to be careful with language — and “AI is biased” is a sloppy piece of writing. It is a cliche, often used without really thinking about what it means, or how it might mislead.

Because yes, AI is “biased” — but it’s not biased in the way most people might understand that word.

Continue reading

Google Sheets has a new AI function — how does it perform on classification tasks?

A new AI function is being added to Google Sheets that could make most other functions redundant. But is it any good? And what can it be used for? Here’s what I’ve learned in the first week…

AI has been built into Google Sheets for some time now in the Clippy-like form of Gemini in Sheets. But this new AI function is different.

Available to a limited number of users for now, it allows you to incorporate AI prompts directly into a formula rather than having to rely on Gemini to suggest a formula using existing functions. 

At the most basic level that means the AI function can be used instead of functions like SUM, AVERAGE or COUNT by simply including a prompt like “Add the numbers in these cells” (or “calculate an average for” or “count”). But more interesting applications come in areas such as classification, translation, analysis and extraction, especially where a task requires a little more ‘intelligence’ than a more literally-minded function can offer.
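
To give a flavour of how that looks in practice, a formula takes a prompt plus an optional reference to the cell or range it should apply to. (A caveat: the function is still in limited release, so the exact syntax may change, and these prompts are my own illustrations rather than Google’s.)

=AI("Add the numbers in these cells", A2:A10)
=AI("Classify this response as positive, negative or neutral", B2)

The second formula can then be filled down a column to categorise every row.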

I put the AI function through its paces with a series of classification challenges to see how it performed. Here’s what happened — and some ways in which the risks of generative AI need to be identified and addressed.

Continue reading

Identifying bias in your writing — with generative AI

Applications of genAI in the journalism process: a pyramid diagram with the third tier, 'Production', highlighted ("Identify jargon and bias; improve spelling, grammar, structure and brevity")

In the latest in a series of posts on using generative AI, I look at how tools such as ChatGPT and Claude.ai can help identify potential bias and check story drafts against relevant guidelines.

We are all biased — it’s human nature. It’s the reason stories are edited; it’s the reason that guidelines require journalists to stick to the facts, to be objective, and to seek a right of reply. But as the Columbia Journalism Review noted two decades ago: “Ask ten journalists what objectivity means and you’ll get ten different answers.”

Generative AI is notoriously biased itself — but it has also been trained on more material about bias than any human is likely to have read. So when you explicitly ask it to identify bias in your own reporting it can, unlike a biased human, perform surprisingly well.

It can also be very effective in helping us consider how relevant guidelines might be applied to our reporting — a checkpoint that should be just as baked in as the right of reply.
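
To give a sense of the general shape, a rough illustration on my part rather than one of the templates covered below, a prompt might read something like:

"You are a sub-editor. Check the draft below against our editorial guidelines: identify any loaded or emotive language, unsupported assertions, or perspectives missing a right of reply, quoting the relevant passage and suggesting a more neutral alternative in each case."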

In this post I’ll go through some template prompts and tips on each. First, a recap of the rules of thumb I introduced in the previous post.

Continue reading

This is how I’ll be teaching journalism students ChatGPT (and generative AI) next semester

Image by kjpargeter on Freepik

I’m speaking at the Broadcast Journalism Teaching Council’s summer conference this week about artificial intelligence — specifically generative AI. It’s a deceptively huge area that presents journalism educators with a lot to adapt to in their teaching, so I decided to put those challenges in order of priority.

Each of these priorities could form the basis for part of a class, or a whole module – and you may have a different ranking. But at least you know which one to do first…

Priority 1: Understand how generative AI works

The first challenge in teaching about generative AI is that most people misunderstand what it actually is — so the first priority is to tackle those misunderstandings.

Continue reading

AllSides’s John Gable: from the Dark Ages of the internet to bursting bubbles


AllSides uses a bias rating system

As part of a series of articles on the innovators tackling the filter bubble phenomenon, Andrew Brightwell interviews John Gable, founder and CEO of AllSides, a website that has devised its own way to present alternative perspectives on American news.

When a man who helped build the first successful web browser says there’s something wrong with the Internet, it probably pays to listen.

“The internet is broken.”

John Gable’s diagnosis has authority: he has more than 30 years in the tech business, including stints at Microsoft and AOL and a spell as a product manager for Netscape Navigator.

Now he is founder and CEO of AllSides Inc, a news website with a distinct mission. Visit AllSides.com and it offers the news you’d expect on any US politics site, except that its lead stories include a choice of articles: one each from the left, centre and right.

 “The headlines are so radically different that even reading [them together] tells you more about that topic than reading one story all the way through.”

Continue reading

Newspaper bias: just another social network

Profit maximising slant

There’s a fascinating study on newspaper bias by University of Chicago professors Matthew Gentzkow and Jesse Shapiro which identifies the political bias of particular newspapers based on the frequency with which certain phrases appear.

The professors then correlate that measure of slant with the political leanings of the newspaper’s own markets, and find

“That the most important variable is the political orientation of people living within the paper’s market. For example, the higher the vote share received by Bush in 2004 in the newspaper’s market (horizontal axis below), the higher the Gentzkow-Shapiro measure of conservative slant (vertical axis).”

Interestingly, ownership is found to be statistically insignificant once those other factors are accounted for.

James Hamilton, blogging about the study, asks:

“How slant gets implemented at the ground level by individual reporters. My guess is that most reporters know that they are introducing some slant in the way they’ve chosen to frame and report a story, but are unaware of the full extent to which they do so because they are underestimating the degree to which the other sources from which they get their information and beliefs have all been doing a similar filtering. The result is social networks that don’t recognize that they have developed a groupthink that is not centered on the truth.” [my emphasis]

In other words, the ‘echo chamber’ argument (academics would call it a discourse) that we’ve heard made so many times about the internet.

It’s nice to be reminded that social networks are not an invention of the web, but rather the other way around.

h/t Azeem Azhar