Category Archives: AI

“I don’t want it to be easy” and other objections to using AI

In September I took part in a panel at the African Journalism Education Network conference. The most interesting moment came when members of the audience were asked if they didn’t use AI — and why.

Continue reading

How to stop AI making you stupid: hybrid destination-journey prompting

A local map-style illustration where a pinned "answer" destination is visible, but the route is overlaid with checkpoints labelled “confidence”, “sources”, “counter-arguments”, “verify”, “edit” (image generated by ChatGPT).

Last month I wrote about destination and journey prompts, and the strategy of designing AI prompts to avoid deskilling. In some situations a third, hybrid approach can also be useful. In this post I explain how such hybrid destination-journey prompting works in practice, and where it might be most appropriate.

Continue reading

FAQ: How has journalism been transformed?

In the latest FAQ, I’m publishing answers to some questions from a Turkish PR company (also published on LinkedIn)…

Q: In your view, what has been the most significant transformation in digital journalism in recent years? 

There have been so many major transformations in the last 15 years. Mobile phones in particular have radically transformed both production and consumption. But having been through all those changes, AI feels like a bigger transformation than any of them.

It’s not just playing a role in transforming the way we produce stories; it’s also involved in major changes around what happens with those stories in terms of how they are distributed, consumed, and even how they are perceived: the rise of AI slop and AI-facilitated misinformation is going to radically accelerate the erosion of trust in information (not just in the media specifically). I’m being careful to say ‘playing a role’ because of course the technology itself doesn’t do anything: it’s how that technology is designed by people and used by people.

Continue reading

7 prompt design techniques for generative AI that every journalist should know

Tools like ChatGPT may seem to speak your language, but they actually speak a language of probability and educated guesses. You can make yourself better understood, and get more professional results, with a few simple prompting techniques. Here are the key ones to add to your toolkit. (This post was translated from the original English using Claude Sonnet 4.5 as part of an experiment. Please let me know if you spot any errors or mistranslations.)

Prompt design techniques for generative AI

Role prompting

One-shot prompting

Recursive prompting

Retrieval-augmented generation

Chain of thought

Meta prompting

Negative prompting

Role prompting

Role prompting involves assigning a specific role to your AI. For example, you might say “You are an experienced education correspondent” or “You are the editor of a British national newspaper” before outlining what you are asking it to do. The more detail, the better.

Research on the effectiveness of role prompting is contradictory, but at the most basic level, providing a role is a good way of ensuring that you provide context, which makes a big difference to the relevance of the responses.
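If you work with the API rather than a chat interface, the same technique maps onto the ‘system’ message. Here is a minimal sketch using the OpenAI Python SDK; the model name, role and task are illustrative, not prescriptive:

```python
# A minimal sketch of role prompting via the OpenAI Python SDK.
# The model name, role and task are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # hypothetical choice; use whichever model you have access to
    messages=[
        # The system message carries the role: the more detail, the better
        {
            "role": "system",
            "content": "You are an experienced education correspondent "
                       "on a British national newspaper.",
        },
        {
            "role": "user",
            "content": "Suggest three story angles on this year's school admissions data.",
        },
    ],
)
print(response.choices[0].message.content)
```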

Continue reading

At its best, AI can help us to reflect on our humanity. At its worst, it can lead us to forget it

Frankenstein's monster and Maria

Almost all conversations around AI come down to these hopes and fears: that at its best, AI can help us to reflect on our humanity, and that at its worst, it can lead us to forget it, or subjugate it.

When AI is dismissed as flawed, it is often through a concern that it will make us less human — or redundant.

The problem with this approach is that it can overlook the very real problems, and risks, in being human.

When people talk about the opportunities in using AI, it is often because they hope it will address the very human qualities of ignorance, bias, human error — or simply lack of time.

The problem with this approach is that it overlooks the very real problems, and risks, in removing tasks from a human workflow, including deskilling and job satisfaction.

So every debate on the technology should come back to this question: are we applying it (or dismissing it) in a way that leads us to ignore our humanity — or in a way that forces us to address our very human strengths and weaknesses?

4 ways you can ‘role play’ with AI

4 roleplay design techniques for genAI:
Rubber ducking: using AI for ‘self explanation’ to work through a problem.
Critical friend/mentor: using AI for feedback or guidance while avoiding deskilling.
Red teaming/devil’s advocate: using AI to identify potential lines of attack by an adversary, or potential flaws/gaps in a story.
Audience personas: using AI to review content from the position of the target audience.

One of the most productive ways of using generative AI tools is role playing: asking Copilot or ChatGPT etc. to adopt a persona in order to work through a scenario or problem. In this post I work through four of the most useful role playing approaches for journalists: “rubber ducking”, mentoring, “red teaming” and audience personas, and identify key techniques for each.

Role playing sits in a particularly good position when it comes to AI’s strengths and weaknesses. It plays to the strengths of AI around counter-balancing human cognitive biases and ‘holding up a mirror’ to workflows and content — and scores low on most measures of risk in using AI, being neither audience-facing nor requiring high accuracy.
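As a rough illustration, here is how the four personas might be phrased as reusable prompts. The wording is a sketch of my own, not a fixed formula:

```python
# Illustrative persona prompts for the four role play techniques.
# The wording is a sketch, not a prescribed formula.
PERSONAS = {
    "rubber_duck": (
        "Ask me one question at a time that helps me explain my problem "
        "step by step. Do not suggest solutions yourself."
    ),
    "critical_friend": (
        "Act as a mentor: give feedback on my draft and prompt me to "
        "improve it myself, rather than rewriting it for me."
    ),
    "red_team": (
        "Play devil's advocate: identify the strongest lines of attack "
        "against this story, and any flaws or gaps in it."
    ),
    "audience_persona": (
        "You are a time-poor 25-year-old commuter. React to this article: "
        "what would make you stop reading?"
    ),
}

# Example: prepend the red team persona to a draft before pasting into a chat tool
prompt = PERSONAS["red_team"] + "\n\n" + "PASTE DRAFT HERE"
```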

Continue reading

AI and “editorial independence”: a risk — or a distraction?

Tools
When you have a hammer, does everything look like a nail? Photo by Hunter Haley on Unsplash

TL;DR: By treating AI as a biased actor rather than a tool shaped by human choices, we risk ignoring more fundamental sources of bias within journalism itself. Editorial independence lies in how we manage tools, not which ones we use.

Might AI challenge editorial independence? It’s a suggestion made in some guidance on AI — and I think a flawed one.

Why? Let me count the ways. The first problem is that it contributes to a misunderstanding of how AI works. The second is that it reinforces a potentially superficial understanding of editorial independence and objectivity. But the main danger is that it distracts from the broader problems of bias and independence in our own newsrooms.

Continue reading

How to ask AI to perform data analysis

Consider the model: Some models are better for analysis — check that it has run code

Name specific columns and functions: Be explicit to avoid ‘guesses’ based on your most probable meaning

Design answers that include context: Ask for a top/bottom 10 instead of just one answer

'Ground' the analysis with other docs: Methodologies, data dictionaries, and other context

Map out a method using CoT: Outline the steps that need to be taken to reduce risk

Use prompt design techniques to avoid gullibility and other risks: N-shot prompting (examples), role prompting, negative prompting and meta prompting can all reduce risk

Anticipate conversation limits: Regularly ask for summaries you can carry into a new conversation

Export data to check: Download analysed data to check against the original

Ask to be challenged: Use adversarial prompting to identify potential blind spots or assumptions

In a previous post I explored how AI performed on data analysis tasks — and the importance of understanding the code that it used to do so. If you do understand code, here are some tips for using large language models (LLMs) for analysis — and addressing the risks of doing so.
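To give a flavour of the ‘export data to check’ tip above, here is a minimal sketch in pandas; the file names and column name are hypothetical:

```python
# A minimal sketch of the "export data to check" tip: compare the file
# downloaded from the AI tool against your original using pandas.
# File names and the column name are hypothetical.
import pandas as pd

original = pd.read_csv("original.csv")
analysed = pd.read_csv("ai_output.csv")

# Were any rows silently dropped or duplicated?
print(len(original), len(analysed))

# Spot-check a figure the model quoted against your own calculation
print(original["amount"].sum())
```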

Continue reading

I tested AI tools on data analysis — here’s how they did (and what to look out for)

Mug with 'Data or it didn't happen' on it
Photo: Jakub T. Jankiewicz | CC BY-SA 2.0

TL;DR: If you understand code, or would like to understand code, genAI tools can be useful for data analysis — but results depend heavily on the context you provide, and the likelihood of flawed calculations means the code needs checking. If you don’t understand code (and don’t want to), don’t do data analysis with AI.

ChatGPT used to be notoriously bad at maths. Then it got worse at maths. And the recent launch of its newest model, GPT-5, showed that it’s still bad at maths. So when it comes to using AI for data analysis, it’s going to mess up, right?

Well, it turns out that the answer isn’t that simple. And the reason why it’s not simple is important to explain up front.

Generative AI tools like ChatGPT are not calculators. They use language models to predict a sequence of words based on examples from their training data.

But over the last two years AI platforms have added the ability to generate and run code (mainly Python) in response to a question. This means that, for some questions, they will try to predict the code that a human would probably write to solve your question — and then run that code.

When it comes to data analysis, this has two major implications:

  1. Responses to data analysis questions are often (but not always) the result of calculations, rather than a predicted sequence of words. The algorithm generates code, runs that code to calculate a result, then incorporates that result into a sentence.
  2. Because we can see the code that performed the calculations, it is possible to check how those results were arrived at (see the sketch below).
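To make this concrete, this is the kind of code such a tool might generate and run. It is a hypothetical illustration: the file, columns and question are all made up:

```python
# A hypothetical illustration of the kind of Python a tool such as ChatGPT
# generates and runs for "which region had the highest average spend?".
# The file name and column names are made up.
import pandas as pd

df = pd.read_csv("spending.csv")
result = df.groupby("region")["spend"].mean().sort_values(ascending=False)
print(result.head(10))  # a top 10 gives context, not just a single answer
```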
Continue reading

This is what happened when I asked journalism students to keep an ‘AI diary’

Last month I wrote about my decision to use an AI diary as part of assessment for a module I teach on the journalism degrees at Birmingham City University. The results are in — and they are revealing.

AI diary screenshots, including AI diary template which says:
Use this document to paste and annotate all your interactions with genAI tools. 

Interactions should include your initial prompt and response, as well as follow up prompts (“iterations”) and the responses to those. Include explanatory and reflective notes in the right hand column. Reflective notes might include observations about potential issues such as bias, accuracy, hallucinations, etc. You can also explain what you did outside of the genAI tool, in terms of other work. 

At least some of the notes should include links to literature (e.g. articles, videos, research) that you have used in creating the prompt or on reflecting on it. You do not need to use Harvard referencing - but the link must go directly to the material. See the examples on Moodle for guidance.

To add extra rows place your cursor in the last box and press the Tab key on your keyboard, or right-click in any row and select ‘add new row’.
Excerpts from AI diaries

What if we just asked students to keep a record of all their interactions with AI? That was the thinking behind the AI diary, a form of assessment that I introduced this year for two key reasons: to increase transparency about the use of AI, and to increase critical thinking.

Continue reading