Category Archives: AI
In September I took part in a panel at the African Journalism Education Network conference. The most interesting moment came when members of the audience were asked if they didn’t use AI — and why.
How to stop AI making you stupid: hybrid destination-journey prompting

Last month I wrote about destination and journey prompts, and the strategy of designing AI prompts to avoid deskilling. In some situations a third, hybrid approach can also be useful. In this post I explain how such hybrid destination-journey prompting works in practice, and where it might be most appropriate.
Continue reading
FAQ: How has journalism been transformed?
In the latest FAQ, I’m publishing here answers to some questions from a Turkish PR company (published on LinkedIn here)…
Q: In your view, what has been the most significant transformation in digital journalism in recent years?
There have been so many major transformations in the last 15 years. Mobile phones in particular have radically transformed both production and consumption — but having been through all those changes, AI feels like a bigger transformation than all the changes we’ve already been through.
It’s not just playing a role in transforming the way we produce stories, it’s also involved in major changes in how those stories are distributed, consumed, and even perceived: the rise of AI slop and AI-facilitated misinformation is going to radically accelerate the erosion of trust in information (not just in the media specifically). I’m being careful to say ‘playing a role’ because of course the technology itself doesn’t do anything: it’s how that technology is designed by people and used by people.
Continue reading
7 prompt design techniques for generative AI that every journalist should know
Tools like ChatGPT may seem to speak your language, but they actually speak a language of probability and educated guesses. You can make yourself better understood, and get more professional results, with a few simple prompting techniques. Here are the main ones to add to your toolkit. (This post was translated from the English original using Claude Sonnet 4.5 as part of an experiment. Please let me know if you spot any errors or mistranslations.)

Role prompting
Role prompting involves assigning a specific role to your AI. For example, you might say “You are an experienced education correspondent” or “You are the editor of a British national newspaper” before outlining what you are asking it to do. The more detail, the better.
Research on the effectiveness of role prompting is mixed, but at the most basic level, providing a role is a good way of ensuring you provide context, which makes a big difference to the relevance of the responses.
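To make this concrete, here is a minimal sketch of role prompting using the OpenAI Python SDK. The model name, role description and request are illustrative assumptions rather than examples from the post; the same technique works in any chat interface by simply typing the role line before your request.

```python
# Minimal sketch of role prompting with the OpenAI Python SDK.
# Model name, role and request are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model: use whichever you have access to
    messages=[
        # The system message assigns the role; more detail is better.
        {
            "role": "system",
            "content": "You are an experienced education correspondent "
                       "at a British national newspaper.",
        },
        # The user message carries the actual request.
        {
            "role": "user",
            "content": "Suggest three story angles on school exam results.",
        },
    ],
)

print(response.choices[0].message.content)
```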
Continue reading
At its best, AI can help us to reflect on our humanity. At its worst, it can lead us to forget it

Almost all conversations around AI come down to these hopes and fears: that at its best AI can help us to reflect on our humanity. At its worst, it can lead us to forget it — or subjugate it.
When AI is dismissed as flawed, it is often through a concern that it will make us less human — or redundant.
The problem with this approach is that it can overlook the very real problems, and risks, in being human.
When people talk about the opportunities in using AI, it is often because they hope it will address the very human qualities of ignorance, bias, human error — or simply lack of time.
The problem with this approach is that it overlooks the very real problems, and risks, in removing tasks from a human workflow, including deskilling and the loss of job satisfaction.
So every debate on the technology should come back to this question: are we applying it (or dismissing it) in a way that leads us to ignore our humanity — or in a way that forces us to address our very human strengths and weaknesses?
4 ways you can ‘role play’ with AI

One of the most productive ways of using generative AI tools is role playing: asking Copilot or ChatGPT etc. to adopt a persona in order to work through a scenario or problem. In this post I work through four of the most useful role playing techniques for journalists: “rubber ducking”, mentoring, “red teaming” and audience personas, and identify key techniques for each.
Role playing sits in a particularly good position when it comes to AI’s strengths and weaknesses. It plays to the strengths of AI around counter-balancing human cognitive biases and ‘holding up a mirror’ to workflows and content — and scores low on most measures of risk in using AI, being neither audience-facing nor requiring high accuracy.
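As a hedged illustration (the wording below is mine, not taken from the post), here is how those four personas might be phrased as prompts, sketched as a small Python mapping:

```python
# Illustrative persona prompts for the four role-playing techniques
# named above. The wording is an assumption, not from the post.
personas = {
    "rubber ducking": (
        "Ask me questions that help me talk through my story plan "
        "step by step. Don't offer solutions; prompt me to think aloud."
    ),
    "mentoring": (
        "You are a senior editor mentoring a trainee journalist. "
        "Review my draft and give specific, constructive feedback."
    ),
    "red teaming": (
        "You are a sceptical reader looking for flaws, gaps and "
        "unsupported claims in this article. List every weakness."
    ),
    "audience persona": (
        "You are a 19-year-old student who rarely reads news. Tell me "
        "which parts of this story you would skip, and why."
    ),
}

# Example: prepend the chosen persona to a request in any chat tool.
prompt = personas["red teaming"] + "\n\n" + "ARTICLE TEXT GOES HERE"
print(prompt)
```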
Continue reading
AI and “editorial independence”: a risk — or a distraction?

TL;DR: By treating AI as a biased actor rather than a tool shaped by human choices, we risk ignoring more fundamental sources of bias within journalism itself. Editorial independence lies in how we manage tools, not which ones we use.
Might AI challenge editorial independence? It’s a suggestion made in some guidance on AI — and I think a flawed one.
Why? Let me count the ways. The first problem is that it contributes to a misunderstanding of how AI works. The second is that it reinforces a potentially superficial understanding of editorial independence and objectivity. But the main danger is that it distracts from the broader problems of bias and independence in our own newsrooms.
Continue reading
How to ask AI to perform data analysis

In a previous post I explored how AI performed on data analysis tasks — and the importance of understanding the code that it used to do so. If you do understand code, here are some tips for using large language models (LLMs) for analysis — and addressing the risks of doing so.
Continue reading
I tested AI tools on data analysis — here’s how they did (and what to look out for)

TL;DR: If you understand code, or would like to understand code, genAI tools can be useful for data analysis — but results depend heavily on the context you provide, and the likelihood of flawed calculations means the code needs checking. If you don’t understand code (and don’t want to), don’t do data analysis with AI.
ChatGPT used to be notoriously bad at maths. Then it got worse at maths. And the recent launch of its newest model, GPT-5, showed that it’s still bad at maths. So when it comes to using AI for data analysis, it’s going to mess up, right?
Well, it turns out that the answer isn’t that simple. And the reason why it’s not simple is important to explain up front.
Generative AI tools like ChatGPT are not calculators. They use language models to predict a sequence of words based on examples from their training data.
But over the last two years AI platforms have added the ability to generate and run code (mainly Python) in response to a question. This means that, for some questions, they will try to predict the code that a human would probably write to answer your question — and then run that code.
When it comes to data analysis, this has two major implications:
- Responses to data analysis questions are often (but not always) the result of calculations, rather than a predicted sequence of words. The algorithm generates code, runs that code to calculate a result, then incorporates that result into a sentence.
- Because we can see the code that performed the calculations, it is possible to check how those results were arrived at.
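To illustrate that second point, here is a hedged sketch of the kind of pandas code an AI tool might generate for a question such as “what is the average salary by department?”. The file and column names are invented for this example; the comments flag the sorts of decisions worth checking in generated code.

```python
# Sketch of the kind of code an AI tool might generate for
# "what is the average salary by department?".
# File name and column names are invented for illustration.
import pandas as pd

df = pd.read_csv("salaries.csv")

# Worth checking: mean() skips missing values (NaN) by default,
# which silently changes what the "average" is calculated over.
averages = df.groupby("department")["salary"].mean()

# Also worth checking: duplicate rows and placeholder values
# (e.g. 0) that should be excluded before averaging.
print(averages.sort_values(ascending=False))
```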
This is what happened when I asked journalism students to keep an ‘AI diary’
Last month I wrote about my decision to use an AI diary as part of assessment for a module I teach on the journalism degrees at Birmingham City University. The results are in — and they are revealing.

What if we just asked students to keep a record of all their interactions with AI? That was the thinking behind the AI diary, a form of assessment that I introduced this year for two key reasons: to increase transparency about the use of AI, and to increase critical thinking.
Continue reading