No, the explainer isn’t dead. It just needs a reason to live.

A collection of explainer headlines

Marie Gilot says the explainer is dead. Because AI.

“Today, our readers query AI for all that stuff,” she writes. “They like the AI answers well enough and they don’t click on article links.”

Here are the types of content losing to AI: explainers, how-tos, evergreens, aggregated news, resource lists, hours of operation for government offices, recipes.

Gilot is right, of course. But only partly.

It's true that the commercial imperative to produce explainers (low cost, high traffic) is going to come under severe challenge at one end.

But that doesn't mean the explainer is dead. It just means explainers need a reason to fight for their lives beyond money.

A reason for explainers to live

And one of those reasons? Because AI.

When Hilke Schellmann tested AI tools’ performance on journalistic research tasks in August, she concluded that their inconsistency raised “concerns about how these tools define relevance or importance in a field.

“If [someone] relies on these tools to understand the context surrounding new research, they risk misunderstanding and misrepresenting [new information], omitting published critiques, and overlooking prior work that challenges the findings.”

Abandoning this territory to the large language models is like saying we shouldn't do product reviews because content creators are already doing them for their very generous sponsors (or that we shouldn't cover politics, or sport, and so on).

Carefully curated explainers might not boost the ego of a reporter who sees themselves as a dogged news hound, but we should probably still be writing them to serve audiences and compete with AI-generated alternatives — especially given their propensity to repeat false claims.

That is particularly the case if we have any mission to give a voice to the voiceless, who are by definition underrepresented in AI training data.

There remain some commercial arguments in explainers' favour, too. They retain a useful function in improving metrics such as bounce rates: if a reader has a question about some element of a story, and the story links to an explainer that answers it, they never have to leave the site to ask an AI instead.

If readers discover that they enjoy the creativity, freshness, rigour or wit of your explainers in a way that they don’t warm to the dryness, verbosity, sycophancy or gullibility of AI, they may be more likely to keep coming back.

Explainers can play an important role in reaching new audiences post-search too, on video platforms like TikTok and Instagram, where users may stumble across your content without necessarily looking for it.

And then there’s the unmeasured value of branding and trust: a well-designed explainer can signal to an audience that we are interested in solving their problems and answering their questions, rather than just telling their stories.
