Category Archives: AI

7 generative AI prompt design techniques every journalist should know

Tools like ChatGPT may seem to speak your language, but they actually speak a language of probability and educated guesses. You can make yourself better understood, and get more professional results, with a few simple prompting techniques. Here are the main ones to add to your toolkit. (This post was translated from the original English using Claude Sonnet 4.5 as part of an experiment. Please let me know if you spot any errors or mistranslations.)

Prompt design techniques for generative AI

Role prompting

One-shot prompting

Recursive prompting

Retrieval-augmented generation

Chain of thought

Meta prompting

Negative prompting

Role prompting

Role prompting involves assigning a specific role to your AI. For example, you might say "You are an experienced education correspondent" or "You are the editor of a national British newspaper" before outlining what you are asking it to do. The more detail, the better.

There is contradictory research on the effectiveness of role prompting, but at the most basic level, providing a role is a good way of making sure you supply context, which makes a big difference to the relevance of responses.
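As an illustration of what this looks like through an API rather than a chat window, here is a minimal sketch using the OpenAI Python client; the model name and the wording of the role are illustrative assumptions, not recommendations from the post.

```python
# Minimal sketch of role prompting via the OpenAI Python client.
# Model name and role wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative choice of model
    messages=[
        # The system message assigns the role before the request itself
        {
            "role": "system",
            "content": "You are an experienced education correspondent "
                       "at a national British newspaper.",
        },
        {
            "role": "user",
            "content": "Suggest five story angles on today's GCSE results.",
        },
    ],
)
print(response.choices[0].message.content)
```

The same principle applies in a chat interface: the role goes at the start of the conversation, before the task itself.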

Continue reading

At its best, AI can help us to reflect on our humanity. At its worst, it can lead us to forget it

Frankenstein's monster and Maria

Almost all conversations around AI come down to these hopes and fears: that at its best AI can help us to reflect on our humanity. At its worst, it can lead us to forget it — or subjugate it.

When AI is dismissed as flawed, it is often through a concern that it will make us less human — or redundant.

The problem with this approach is that it can overlook the very real problems, and risks, in being human.

When people talk about the opportunities in using AI, it is often because they hope it will address the very human qualities of ignorance, bias, human error — or simply lack of time.

The problem with this approach is that it overlooks the very real problems, and risks, in removing tasks from a human workflow, including deskilling and job satisfaction.

So every debate on the technology should come back to this question: are we applying it (or dismissing it) in a way that leads us to ignore our humanity — or in a way that forces us to address our very human strengths and weaknesses?

4 ways you can ‘role play’ with AI

4 roleplay design techniques for genAI
Rubber ducking: using AI for 'self explanation' to work through a problem.
Critical friend/mentor: using AI for feedback or guidance while avoiding deskilling.
Red teaming/devil's advocate: using AI to identify potential lines of attack by an adversary, or potential flaws/gaps in a story.
Audience personas: using AI to review content from the position of the target audience.

One of the most productive ways of using generative AI tools is role playing: asking Copilot or ChatGPT etc. to adopt a persona in order to work through a scenario or problem. In this post I work through four of the most useful role playing techniques for journalists: “rubber ducking”, mentoring, “red teaming” and audience personas, and identify key techniques for each.

Role playing sits in a particularly good position when it comes to AI’s strengths and weaknesses. It plays to the strengths of AI around counter-balancing human cognitive biases and ‘holding up a mirror’ to workflows and content — and scores low on most measures of risk in using AI, being neither audience-facing nor requiring high accuracy.
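To give a flavour (the wording here is my own illustration, not a prompt from the post), a red-teaming prompt might read: "You are a lawyer acting for the company named in this story. List every claim you would challenge and the evidence you would demand for each." The same structure, a persona plus a task, underpins the other three techniques.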

Continue reading

AI and “editorial independence”: a risk — or a distraction?

When you have a hammer does everything look like a nail? Photo by Hunter Haley on Unsplash

TL;DR: By treating AI as a biased actor rather than a tool shaped by human choices, we risk ignoring more fundamental sources of bias within journalism itself. Editorial independence lies in how we manage tools, not which ones we use.

Might AI challenge editorial independence? It’s a suggestion made in some guidance on AI — and I think a flawed one.

Why? Let me count the ways. The first problem is that it contributes to a misunderstanding of how AI works. The second is that it reinforces a potentially superficial understanding of editorial independence and objectivity. But the main danger is it distracts from the broader problems of bias and independence in our own newsrooms.

Continue reading

How to ask AI to perform data analysis

Consider the model: Some models are better for analysis — check it has run code

Name specific columns and functions: Be explicit to avoid 'guesses' based on your most probable meaning

Design answers that include context: Ask for a top/bottom 10 instead of just one answer

'Ground' the analysis with other docs: Methodologies, data dictionaries, and other context

Map out a method using CoT (chain of thought): Outline the steps that need to be taken to reduce risk

Use prompt design techniques to avoid gullibility and other risks: N-shot prompting (examples), role prompting, negative prompting and meta prompting can all reduce risk

Anticipate conversation limits: Regularly ask for summaries you can carry into a new conversation

Export data to check: Download analysed data to check against the original

Ask to be challenged: Use adversarial prompting to identify potential blind spots or assumptions

In a previous post I explored how AI performed on data analysis tasks — and the importance of understanding the code that it used to do so. If you do understand code, here are some tips for using large language models (LLMs) for analysis — and addressing the risks of doing so.
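As a sketch of the "export data to check" tip in practice: assuming the model lets you download its analysed data as a CSV, you can recompute the same figures yourself and compare. The file and column names below are invented for illustration.

```python
# Sketch: checking an AI-generated summary against the original data.
# File and column names are illustrative assumptions.
import pandas as pd

original = pd.read_csv("spending_data.csv")          # the data you uploaded
ai_summary = pd.read_csv("ai_exported_summary.csv")  # the model's exported result

# Independently recompute the same aggregation
check = (
    original.groupby("department")["spend"]
    .mean()
    .round(2)
    .reset_index()
)

# Merge the two tables and flag any rows where the figures disagree
merged = check.merge(ai_summary, on="department", suffixes=("_check", "_ai"))
mismatches = merged[merged["spend_check"] != merged["spend_ai"]]
print(f"{len(mismatches)} rows disagree with the AI's figures")
```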

Continue reading

I tested AI tools on data analysis — here’s how they did (and what to look out for)

Mug with 'Data or it didn't happen' on it
Photo: Jakub T. Jankiewicz | CC BY-SA 2.0

TL;DR: If you understand code, or would like to understand code, genAI tools can be useful for data analysis — but results depend heavily on the context you provide, and the likelihood of flawed calculations means code needs checking. If you don’t understand code (and don’t want to) — don’t do data analysis with AI.

ChatGPT used to be notoriously bad at maths. Then it got worse at maths. And the recent launch of its newest model, GPT-5, showed that it’s still bad at maths. So when it comes to using AI for data analysis, it’s going to mess up, right?

Well, it turns out that the answer isn’t that simple. And the reason why it’s not simple is important to explain up front.

Generative AI tools like ChatGPT are not calculators. They use language models to predict a sequence of words based on examples from their training data.

But over the last two years AI platforms have added the ability to generate and run code (mainly Python) in response to a question. This means that, for some questions, they will try to predict the code that a human would probably write to answer your question — and then run that code.

When it comes to data analysis, this has two major implications:

  1. Responses to data analysis questions are often (but not always) the result of calculations, rather than a predicted sequence of words. The algorithm generates code, runs that code to calculate a result, then incorporates that result into a sentence.
  2. Because we can see the code that performed the calculations, it is possible to check how those results were arrived at.
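For example (a hypothetical illustration, not one of the tests from the post), asking "what is the average salary by department?" against an uploaded spreadsheet might produce, and run, code along these lines, which you can then inspect:

```python
# Hypothetical example of code an LLM might generate and execute
# in response to "what is the average salary by department?"
import pandas as pd

df = pd.read_csv("salaries.csv")  # illustrative file name

# Group by department and calculate the mean salary, rounded for display
average_salary = df.groupby("department")["salary"].mean().round(2)
print(average_salary)
```

If the model picked the wrong column, or averaged where it should have summed, the code is where that mistake becomes visible.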
Continue reading

This is what happened when I asked journalism students to keep an ‘AI diary’

Last month I wrote about my decision to use an AI diary as part of assessment for a module I teach on the journalism degrees at Birmingham City University. The results are in — and they are revealing.

AI diary screenshots, including AI diary template which says:
Use this document to paste and annotate all your interactions with genAI tools. 

Interactions should include your initial prompt and response, as well as follow up prompts (“iterations”) and the responses to those. Include explanatory and reflective notes in the right hand column. Reflective notes might include observations about potential issues such as bias, accuracy, hallucinations, etc. You can also explain what you did outside of the genAI tool, in terms of other work. 

At least some of the notes should include links to literature (e.g. articles, videos, research) that you have used in creating the prompt or on reflecting on it. You do not need to use Harvard referencing - but the link must go directly to the material. See the examples on Moodle for guidance.

To add extra rows place your cursor in the last box and press the Tab key on your keyboard, or right-click in any row and select ‘add new row’.
Excerpts from AI diaries

What if we just asked students to keep a record of all their interactions with AI? That was the thinking behind the AI diary, a form of assessment that I introduced this year for two key reasons: to increase transparency about the use of AI, and to increase critical thinking.

Continue reading

How to reduce the environmental impact of using AI

Generative AI: reducing environmental impact
Disable AI or switch tool
Compare AI vs non-AI
Compare models
Prompt planning
Prompt design and templating
Measuring and reviewing
Run locally

One of the biggest concerns over the use of generative AI tools like ChatGPT is their environmental impact. But what is that impact — and what strategies are there for reducing it? Here is what we know so far — and some suggestions for good practice.

What exactly is the environmental impact of using generative AI? It’s not an easy question to answer, as the MIT Technology Review’s James O’Donnell and Casey Crownhart found when they set out to find some answers.

“The common understanding of AI’s energy consumption,” they write, “is full of holes.”

Continue reading

9 takeaways from the Data Journalism UK conference

Attendees in a lecture theatre with 'data and investigative journalism conference 2025 BBC Shared Data Unit' on the screen.

Last month the BBC’s Shared Data Unit held its annual Data and Investigative Journalism UK conference at the home of my MA in Data Journalism, Birmingham City University. Here are some of the highlights…

Continue reading

Teaching journalism students generative AI: why I switched to an “AI diary” this semester

The Thinker statue
Image by Fredrik Rubensson CC BY-SA 2.0

As universities adapt to a post-ChatGPT era, many journalism assessments have tried to address the widespread use of AI by asking students to declare and reflect on their use of the technology in some form of critical reflection, evaluation or report accompanying their work. But having been there and done that, I didn’t think it worked.

So this year — my third time round teaching generative AI to journalism students — I made a big change: instead of asking students to reflect on their use of AI in a critical evaluation alongside a portfolio of journalism work, I ditched the evaluation entirely.

Continue reading