In a previous post I explored how AI performed on data analysis tasks — and the importance of understanding the code that it used to do so. If you do understand code, here are some tips for using large language models (LLMs) for analysis — and addressing the risks of doing so.
TL;DR: If you understand code, or would like to understand code, genAI tools can be useful for data analysis — but results depend heavily on the context you provide, and the likelihood of flawed calculations means the code needs checking. If you don’t understand code (and don’t want to) — don’t do data analysis with AI.
ChatGPT used to be notoriously bad at maths. Then it got worse at maths. And the recent launch of its newest model, GPT-5, showed that it’s still bad at maths. So when it comes to using AI for data analysis, it’s going to mess up, right?
Well, it turns out that the answer isn’t that simple. And the reason why it’s not simple is important to explain up front.
On their own, LLMs don’t calculate: they predict likely sequences of words. But over the last two years AI platforms have added the ability to generate and run code (mainly Python) in response to a question. This means that, for some questions, they will try to predict the code that a human would probably write to answer your question — and then run that code.
When it comes to data analysis, this has two major implications:
Responses to data analysis questions are often (but not always) the result of calculations, rather than a predicted sequence of words. The algorithm generates code, runs that code to calculate a result, then incorporates that result into a sentence.
Because we can see the code that performed the calculations, it is possible to check how those results were arrived at.
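To illustrate what that generated code typically looks like, here is a minimal sketch of the kind of pandas snippet a chatbot might produce when asked "which region saw the biggest rise?". The data, column names and figures are hypothetical stand-ins (a real session would load your uploaded file, e.g. with `pd.read_csv`), but the shape of the code — calculate, then fold the result into a sentence — is the point worth checking.

```python
import pandas as pd

# Hypothetical figures standing in for an uploaded file
# (a real session would start with something like pd.read_csv("figures.csv"))
df = pd.DataFrame({
    "region": ["North", "South", "East", "West"],
    "2023": [120, 95, 110, 80],
    "2024": [132, 90, 118, 84],
})

# Calculate the year-on-year percentage change for each region
df["pct_change"] = (df["2024"] - df["2023"]) / df["2023"] * 100

# Find the region with the largest rise — this is the figure the model
# would then incorporate into its written answer
biggest = df.loc[df["pct_change"].idxmax(), "region"]
print(biggest)  # → North
```

Being able to read a block like this is what makes the result checkable: you can see whether the model divided by the right baseline, whether it silently dropped rows, and whether "biggest rise" was interpreted as a percentage or an absolute change.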
In the latest in a series of posts on using generative AI, I look at how tools such as ChatGPT and Claude.ai can help identify potential bias and check story drafts against relevant guidelines.
We are all biased — it’s human nature. It’s the reason stories are edited; it’s the reason that guidelines require journalists to stick to the facts, to be objective, and to seek a right of reply. But as the Columbia Journalism Review noted two decades ago: “Ask ten journalists what objectivity means and you’ll get ten different answers.”
Generative AI is notoriously biased itself — but it has also been trained on more material on bias than any human likely has. So, unlike a biased human, when you explicitly ask it to identify bias in your own reporting, it can perform surprisingly well.
It can also be very effective in helping us consider how relevant guidelines might be applied to our reporting — a checkpoint in our reporting that should be just as baked-in as the right of reply.
In this post I’ll go through some template prompts and tips on each. First, a recap of the rules of thumb I introduced in the previous post.
In the fifth of a series of posts from a workshop at the Centre for Investigative Journalism Summer School, I look at using generative AI tools such as ChatGPT and Google Gemini to help with reviewing your work to identify ways it can be improved, from technical tweaks and tightening your writing to identifying jargon.
Having an editor makes you a better writer. At a basic level, an editor is able to look at your work with fresh eyes and without emotional attachment: they will not be reluctant to cut material just because it involved a lot of work, for example.
An editor should also be able to draw on more experience and knowledge — identifying mistakes and clarifying anything that isn’t clear.
But there are good editors, and there are bad editors. There are lazy editors who don’t care about what you’re trying to achieve, and there are editors with great empathy and attention to detail. There are editors who make you a better writer, and those who don’t.
Generative AI can be a bad editor. Making sure it isn’t requires careful prompting and a focus on ensuring that it’s not just the content that improves, but you as a writer.