Tag Archives: factchecking

Using generative AI to help review your reporting: subbing, jargon, brevity and factchecking

Pyramid diagram: applications of genAI in the journalism process, with the third 'Production' tier highlighted ("Identify jargon and bias; improve spelling, grammar, structure and brevity")

In the fifth of a series of posts from a workshop at the Centre for Investigative Journalism Summer School, I look at using generative AI tools such as ChatGPT and Google Gemini to help with reviewing your work to identify ways it can be improved, from technical tweaks and tightening your writing to identifying jargon.

Having an editor makes you a better writer. At a basic level, an editor is able to look at your work with fresh eyes and without emotional attachment: they will not be reluctant to cut material just because it involved a lot of work, for example.

An editor should also be able to draw on more experience and knowledge — identifying mistakes and clarifying anything that isn’t clear.

But there are good editors, and there are bad editors. There are lazy editors who don’t care about what you’re trying to achieve, and there are editors with great empathy and attention to detail. There are editors who make you a better writer, and those who don’t.

Generative AI can be a bad editor. Ensuring it isn’t requires careful prompting and a focus on ensuring that it’s not just the content that improves, but you as a writer.
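One way to steer a tool like ChatGPT or Gemini towards being a "good editor" is to spell out that role in the prompt itself. The wording below is purely illustrative, not a prompt from the workshop, but it sketches the kind of careful prompting described above: asking for flagged problems and reasons rather than a silent rewrite, so that you improve as a writer rather than just outsourcing the fix.

```
Act as a subeditor reviewing the draft article below. Do not rewrite it.
Instead, list:
1. Any jargon a general reader might not understand, with a plainer alternative
2. Spelling or grammar errors
3. Sentences or passages that could be cut or tightened for brevity
4. Any wording that suggests bias
For each point, quote the original text and briefly explain the problem,
so I can make the change myself and learn from it.

[paste draft here]
```

Asking for an itemised critique rather than a finished rewrite keeps the writer in control of the final text, and makes it easier to spot where the model's suggestions are wrong.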

Continue reading

Investigative journalism’s AI challenges: accuracy and bias, explainability and resources

screenshots of guidelines on AI

Having outlined the range of ways in which artificial intelligence has been applied to journalistic investigations in a previous post, some clear challenges emerge. In this second part of a forthcoming book chapter, I look at those challenges and other themes: from accuracy and bias to resources and explainability.

Continue reading

AI in investigative journalism: mapping the field

screenshots of various examples of AI being used in journalism, including Serenata de Amor, Leprosy of the Land and The Implant Files

Investigative journalists have been among the earliest adopters of artificial intelligence in the newsroom, and pioneered some of its most compelling — and award-winning — applications. In this first part of a draft book chapter, I look at the different branches of AI and how they’ve been used in a range of investigations.

Continue reading

How should journalists report “fiddling the figures” on coronavirus tests?

The BBC’s live stream included an alert that 122,347 tests had been “carried out” yesterday. In fact 40,000 of those had merely been sent out.

When a prominent UK politician announced on live TV that the Government had hit its target of 100,000 coronavirus tests a day by the end of April, on the very last day of that month, no less, journalists faced a challenge.

Two hours earlier, the specialist publication Health Service Journal had revealed that the figures had been fudged: instead of counting the number of tests that had been conducted on samples, a source informed them, the Government had quietly changed its own metric so that a test that had been sent out in the post — and not returned or tested — could now be added to the figures.

40,000 tests were then sent out in one day.
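The gap between the headline figure and the target is simple arithmetic. A minimal sketch, using the figures reported above (122,347 "carried out", of which around 40,000 had merely been posted out):

```python
# Figures from the BBC alert described above
claimed_total = 122_347  # headline number reported as tests "carried out"
posted_out = 40_000      # tests sent out in the post, not yet returned or processed

# Tests actually conducted on samples, on a stricter reading
tests_conducted = claimed_total - posted_out

print(tests_conducted)             # 82347
print(tests_conducted >= 100_000)  # False: below the 100,000-a-day target
```

On that stricter counting, roughly 82,000 tests were conducted that day, comfortably short of the 100,000 target the Government claimed to have hit.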

By any reasonable understanding, a test sent was not the same thing as a test done, as a raft of jokes — from people saying they had marked their students’ homework by sticking it in the mail, or paid their tax by receiving a letter from the taxman — pointed out.

And yet there was the Government making its claim — at length and without question, on the national broadcaster, and on the websites of national news organisations.

It was 20 minutes before the claim was queried by a reporter, by which time many viewers had switched off.

How journalists responded to this announcement — in different ways, at different times, and in different places — provides a valuable case study for anyone dealing with numbers and the claims that politicians make about them.

Continue reading

Data journalism at the 2015 UK General Election: geeks bearing gifts

BBC election quiz

This has been the election when the geeks came in from the cold. There may be no Nate Silver-style poster boy for the genre this side of the pond – but instead, I believe we’ve finally seen the culmination of a decade of civic hacking outside the newsroom. And if anyone deserves credit for that, it is not the Guardian or the Telegraph, but MySociety, Tweetminster, and Democracy Club.

Looking back at my review of online election reporting in 2010 it’s striking how much has changed. Back then data journalism’s contribution was all about interactive presentation of results, but little else.

In the time between that election and this one, however, two things have changed within the news industry: firstly, a more code-literate workforce, including dedicated data project teams; and secondly, the rise of mobile, social media-driven consumption and, as part of that, visual journalism.

Continue reading