Paul teaches data journalism at Birmingham City University and is the author of a number of books and book chapters about online journalism and the internet, including the Online Journalism Handbook, Mobile-First Journalism, Finding Stories in Spreadsheets, Data Journalism Heist and Scraping for Journalists.
From 2010 to 2015 he was a Visiting Professor in Online Journalism at City University London, and from 2009 to 2014 he ran Help Me Investigate, an award-winning platform for collaborative investigative journalism. Since 2015 he has worked with the BBC England and BBC Shared Data Units based in Birmingham, UK. He also advises and delivers training to a number of media organisations.
In many countries public data is limited: access to data is restricted, or the information provided by authorities is not credible. So how do you obtain data for a story? Here are some techniques used by reporters around the world.
Tools like ChatGPT might seem to speak your language, but they actually speak a language of probability and educated guesswork. You can make yourself better understood — and get more professional results — with a few simple prompting techniques. Here are the key ones to add to your toolkit.
Data-driven storytelling can be divided into seven main categories, according to an analysis of 200 articles. In the first of two posts I demonstrate the four angles most commonly used in news stories, how they can give you more options as a reporter, and how they can help you work more efficiently with data.
Most datasets can tell many stories, so many that for some people it can feel overwhelming or distracting. Identifying which stories are possible, and choosing the best story within the time and skills available to you, is an important editorial skill.
Many newcomers to data journalism look first for stories about relationships (cause and effect), but these stories are difficult and time-consuming. You might want to tell a story about things getting worse or better, but lack the data to tell it. If you have very little time and want to get started with data journalism, the quickest and easiest stories you can tell with data are stories about scale.
There are three broad paths in ethics. Image by pfly CC BY-SA 2.0
Many people — including me — are quite uncomfortable with generative AI. Most of this discomfort can be traced to the various ethical challenges that AI raises. But an understanding of the different schools of ethics can help us both to see those challenges more clearly and to decide what to do about them.
Three different ethical approaches
The first thing to say about the ethics of AI is that there is no single ‘ethics’. When we engage with ethical issues there are typically at least three different systems that might be in play:
Journalists are no strangers to disagreement: the job regularly involves reporting on conflicts, putting one party’s point of view to another, or engaging with audience challenges around bias and veracity.
The Bureau of Investigative Journalism’s Big Tech Reporter Niamh McIntyre has been working with data for eight years — but it all stemmed from an “arbitrary choice” at university. She spoke to MA Data Journalism student Leyla Reynolds about how she got started in the field, why you don’t need to be a maths whizz to excel, and navigating the choppy waters of the newsroom.
Starting out on any new path can be daunting, but in the minutes before my phone call with Niamh McIntyre, I’m acutely aware that upping sticks to Birmingham and training in data journalism at the grand old age of 29 is nothing less than a tremendous luxury.
Strong factual storytelling relies on good idea development. In this video, part of a series of video posts made for students on the MA in Data Journalism at Birmingham City University, I explain how to generate good ideas by avoiding common mistakes, applying professional techniques and considering your audience.
In the latest in a series of posts on using generative AI, I look at how tools such as ChatGPT and Claude.ai can help identify potential bias and check story drafts against relevant guidelines.
We are all biased — it’s human nature. It’s the reason stories are edited; it’s the reason that guidelines require journalists to stick to the facts, to be objective, and to seek a right of reply. But as the Columbia Journalism Review noted two decades ago: “Ask ten journalists what objectivity means and you’ll get ten different answers.”
Generative AI is notoriously biased itself — but it has also been trained on more material on bias than any human likely has. So, unlike a biased human, when you explicitly ask it to identify bias in your own reporting, it can perform surprisingly well.
It can also be very effective in helping us consider how relevant guidelines might be applied to our reporting — a checkpoint in our reporting that should be just as baked-in as the right of reply.
In this post I’ll go through some template prompts and tips on each. First, a recap of the rules of thumb I introduced in the previous post.
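To give a sense of what such a template prompt might look like in practice, here is a minimal sketch in Python. The wording and the `build_bias_check_prompt` helper are my own illustrative assumptions, not prompts taken from the post; the resulting string could be pasted into ChatGPT or Claude.ai, or sent via an API.

```python
# A hypothetical helper that assembles a bias-check prompt combining a
# story draft with the editorial guidelines it should be checked against.
# The wording is illustrative only.

def build_bias_check_prompt(draft: str, guidelines: str) -> str:
    """Combine a story draft and editorial guidelines into one prompt."""
    return (
        "You are an experienced newsroom editor. Review the story draft "
        "below and identify any potential bias: loaded language, missing "
        "perspectives, or claims presented without a right of reply.\n\n"
        f"Editorial guidelines to check against:\n{guidelines}\n\n"
        f"Story draft:\n{draft}\n\n"
        "List each issue alongside the sentence it relates to, "
        "and suggest a fix for each."
    )

prompt = build_bias_check_prompt(
    draft="Campaigners slammed the council's reckless decision.",
    guidelines="Stick to the facts; attribute opinions; seek a right of reply.",
)
print(prompt)
```

Keeping the guidelines inside the prompt, rather than assuming the model knows them, is what makes this a checkpoint against *your* organisation's standards rather than the model's own notion of bias.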
One of the techniques that can be used to come up with more creative story ideas is the SCAMPER method.
SCAMPER is a mnemonic for seven different actions that can be applied to ideas in order to improve those ideas or generate more interesting alternatives. It is a technique adapted from design and engineering circles — but with just a little thought it can be applied to journalism too.
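To make the mnemonic concrete, here is a small sketch (my own illustration, not code from the post) that pairs each of the seven SCAMPER actions with the kind of question a reporter might ask about an existing story idea:

```python
# The seven SCAMPER actions, each paired with an illustrative question
# a reporter might ask about a story idea. The questions are assumptions
# for demonstration, not a canonical list.
SCAMPER_ACTIONS = {
    "Substitute": "What element of the story could be swapped for another?",
    "Combine": "Could this idea be merged with another story or dataset?",
    "Adapt": "What has worked on a similar beat that could be adapted here?",
    "Modify": "Could the scale, angle or format be changed?",
    "Put to another use": "Could the reporting serve a different audience?",
    "Eliminate": "What could be cut to sharpen the focus?",
    "Reverse": "What happens if the premise or the roles are flipped?",
}

def scamper_questions(idea: str) -> list[str]:
    """Generate one SCAMPER question per action for a given story idea."""
    return [
        f"{action}: {question} (idea: {idea})"
        for action, question in SCAMPER_ACTIONS.items()
    ]

for line in scamper_questions("local bus route cuts"):
    print(line)
```

Running the seven questions against a single idea is a quick way to force yourself past the first, most obvious treatment of a story.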
In the fifth of a series of posts from a workshop at the Centre for Investigative Journalism Summer School, I look at using generative AI tools such as ChatGPT and Google Gemini to help with reviewing your work to identify ways it can be improved, from technical tweaks and tightening your writing to identifying jargon.
Having an editor makes you a better writer. At a basic level, an editor is able to look at your work with fresh eyes and without emotional attachment: they will not be reluctant to cut material just because it involved a lot of work, for example.
An editor should also be able to draw on more experience and knowledge — identifying mistakes and clarifying anything that isn’t clear.
But there are good editors, and there are bad editors. There are lazy editors who don’t care about what you’re trying to achieve, and there are editors with great empathy and attention to detail. There are editors who make you a better writer, and those who don’t.
Generative AI can be a bad editor. Ensuring it isn’t requires careful prompting and a focus on ensuring that it’s not just the content that improves, but you as a writer.