FAQ: How has journalism been transformed?

In the latest FAQ, I’m publishing answers to some questions from a Turkish PR company (published on LinkedIn here)…

Q: In your view, what has been the most significant transformation in digital journalism in recent years? 

There have been so many major transformations in the last 15 years. Mobile phones in particular have radically transformed both production and consumption. But having been through all those changes, AI feels like a bigger transformation than any of them. 

It’s not just playing a role in transforming the way we produce stories; it’s also involved in major changes in how those stories are distributed, consumed and even perceived: the rise of AI slop and AI-facilitated misinformation is going to radically accelerate the erosion of trust in information (not just in the media specifically). I’m being careful to say ‘playing a role’ because, of course, the technology itself doesn’t do anything: what matters is how that technology is designed by people and used by people. 

Q: How do you evaluate its main differences from traditional journalism?

I think there’s very little difference now between digital journalism and “traditional” journalism. Every journalist is a digital journalist: you would struggle to find a reporter who only produces stories for print or broadcast, and that’s been the case for many years. Multiplatform reporting is now the norm. 

In terms of how modern reporting differs from pre-internet reporting, there’s a much stronger sense of the audience. Audience feedback was very basic before the internet. Now we have a very clear picture of what stories people read or watch or listen to, for how long, and what they do with those stories. 

Modern journalism has also lost significant control over how it reaches an audience, with that control now held by search engines, social media and AI platforms. Because the business models and cultures of companies like Google, Meta and OpenAI play a major role in shaping which stories audiences are likely to see, they have in turn shaped the business models and cultures of news organisations. 

So journalists must now consider the extent to which their stories fit into those infrastructures and the extent to which they are optimised for search or for social. But there’s also been a growth in forms that have more direct relationships with audiences, such as email newsletters.

Investigative journalism is much easier for modern reporters than it was in the analogue era, as journalists are able to gather and analyse more information, faster, and with more powerful tools.

Data journalism and OSINT are two new forms of journalism that embody this, and they also embody the much wider range of formats and genres involved in modern journalism.

Podcasts and ‘scrollytelling’ are just two examples of how modern journalism has moved away from formulaic reporting towards more in-depth narrative journalism.

And interactivity has opened up all sorts of new ways of telling stories, too, not only through personalisation, maps, calculators and games, but also in the way that audiences can play an active role in newsgathering, from providing user generated content (UGC) and tip-offs to crowdsourcing. 

Q: From the time you founded the Online Journalism Blog until today, how do you see the trajectory of digital reporting?

Aside from the changes listed above, we’ve seen journalism go through a number of phases. One of the most fundamental – which is often overlooked because of all the changes that followed – was the hyperlinked nature of the web, which made us think about how our reporting connected with the rest of that network, both informationally (linking to sources, and to further information) and socially (connecting with audiences). 

Blogging was another early major change which introduced additional competition that forced journalists and publishers to up their game. It was no longer enough to merely report what people were saying, because our sources now had a direct channel to our audiences. It’s easy to forget how stenographic most journalism was before.

Journalists then had to adapt to new forms of information including data and UGC, so we’ve seen Computer Assisted Reporting become data journalism, an expansion of factchecking, and the emergence of OSINT. Information overload created both supply and demand for curation as a skillset.

Mobile phones added to this, transforming newsgathering by putting a piece of kit into every reporter’s hands that meant writers had to learn how to film and record clear audio. And because that kit was also in the hands of audiences, and because bandwidth has increased and algorithms have been tweaked to prioritise visual material, the language of news has become less textual and much more visual.

There’s been a tragedy of the commons as initial optimism about the internet has come up against increasing pollution and weaponisation of a public sphere built on algorithms that have learned that hate and anger are the most basic emotional triggers for keeping us on the page.

We may well look back at this time as a digital cold war, with dozens of countries funding armies of trolls and sock puppets to destabilise adversaries, and most people completely unaware or unbelieving that the content they were sharing was not authentic. 

But it’s easy to focus on what we’ve lost and forget what we have gained. The voiceless have more of a voice than they did three decades ago; there are more tools to hold power to account, and they are cheaper and more accessible than ever. We have more creative ways to tell stories than we ever did, and as a result the stories are richer and more diverse than they ever were. 

Q: What are your thoughts on the use of artificial intelligence and automation tools in journalism? How do these developments affect the ethical boundaries of the profession?

Automation is just one application of AI, and journalism has used automation for over two decades now, so the ethical challenges are not new. It has always been important to ensure that the outputs of automated steps in a workflow are regularly checked and edited (what is now called ‘Human In The Loop’), and that any algorithm itself is regularly checked and updated. 

Those checks address the key ethical consideration of accuracy, and help safeguard another: editorial independence. But if the subject matter or material is likely to raise other issues, automation is normally not used in any major way.

For example, automation might be used to generate stories for earthquake alerts or financial results or football matches, where the source is reliable, factual and authoritative, and there is no need for ‘balance’. But in less clear-cut contexts, it might be used only to alert journalists, in much the same way that a press release, email alert or newswire would. Dataminr, for example, uses machine learning to classify tweets as newsworthy and alert journalists to them. 
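
To make that concrete, here is a minimal sketch of the kind of rule-based story automation described above, using a structured earthquake alert. The field names, the magnitude threshold and the wording are illustrative assumptions rather than any specific newsroom’s system.

```python
# A minimal sketch of rule-based story automation from a structured feed.
# The field names (magnitude, place, time) and the 4.0 threshold are
# hypothetical, not taken from any specific newsroom system.

def earthquake_story(event: dict) -> str | None:
    """Turn a structured earthquake alert into a short templated story."""
    if event["magnitude"] < 4.0:
        return None  # below the newsworthiness threshold: no story generated
    return (
        f"A magnitude {event['magnitude']} earthquake struck "
        f"{event['place']} at {event['time']}. "
        "Details will be updated as official reports are confirmed."
    )

# Example usage: the same input always produces the same output,
# which is what makes this kind of automation straightforward to check.
alert = {"magnitude": 5.2, "place": "western Turkey", "time": "03:14 local time"}
print(earthquake_story(alert))
```

Because the logic is deterministic, an editor can test it once against a set of sample alerts and trust it to behave the same way every time — which is the contrast drawn with generative AI below.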

What is different about automation with genAI is that it is a probabilistic tool. Whereas previous automation technologies made mistakes in a systematic way, with the same input always leading to the same output, generative AI does not: the same input can lead to different outputs; it’s a roll of the dice each time. That means it requires closer monitoring of both input and output, especially in relation to known weaknesses of AI such as accuracy and bias.
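
As a simple illustration of that difference (with random sampling standing in for an LLM’s token sampling, rather than any real model API):

```python
import random

# Deterministic automation: the same input always yields the same output.
def templated_headline(team: str, score: str) -> str:
    return f"{team} win {score}"

# Probabilistic generation: random sampling stands in for an LLM's token
# sampling, so the same input can yield different outputs on each call.
def sampled_headline(team: str, score: str) -> str:
    verb = random.choice(["win", "triumph", "beat their rivals"])
    return f"{team} {verb} {score}"

print(templated_headline("City", "2-1"))  # always "City win 2-1"
print(sampled_headline("City", "2-1"))    # output varies from run to run
print(sampled_headline("City", "2-1"))    # may differ from the line above
```

It is that variability, rather than automation itself, that demands the closer monitoring described above.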

Another major difference is our reliance on third-party algorithms. When we automate with someone else’s LLMs (OpenAI, Google, Anthropic etc) we have no control over those algorithms, and may not know if they are changed while we are using them, especially given that they are essentially being constantly trained on the inputs of a world of users.

For that reason it is important to ‘train’ the model as much as possible, whether that is through building your own LLM, or creating a custom GPT, or through prompt design techniques. All of those are examples of asserting some editorial control.
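
As a hedged sketch of what ‘prompt design as editorial control’ might look like in practice (the rules, the send_to_model parameter and the review step are illustrative assumptions, not any particular product’s workflow):

```python
# A sketch of prompt design as editorial control: a fixed system prompt
# encodes style and sourcing rules, and every draft is routed to a human
# editor rather than being published automatically. send_to_model() is a
# placeholder for whatever LLM service or local model is actually used.

EDITORIAL_SYSTEM_PROMPT = """\
You are drafting copy for a news organisation.
- Use only facts present in the supplied source material; do not add any.
- Attribute every claim to its source.
- If the source material is insufficient, reply exactly: INSUFFICIENT SOURCE.
- Write in plain English, in the active voice, in no more than 200 words.
"""

def draft_story(source_material: str, send_to_model) -> str:
    """Combine the editorial prompt with source material and return a draft."""
    prompt = f"{EDITORIAL_SYSTEM_PROMPT}\nSOURCE MATERIAL:\n{source_material}"
    draft = send_to_model(prompt)
    # Human in the loop: the draft is flagged for review, never auto-published.
    return f"[DRAFT FOR EDITORIAL REVIEW]\n{draft}"
```

The point of the design is that the editorial constraints live in one place, can be versioned and audited, and sit in front of a mandatory human review step.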

Editorial independence is a complex issue here. Like most ethical considerations, it has to be weighed alongside other ethical issues. For example, news organisations give up some editorial independence by allowing their content to be indexed by Google (letting Google’s algorithms decide how important their stories are), but the ethical considerations of accessibility and informing audiences (and having large enough audiences to be financially sustainable) win out over that.

We can see similar trade-offs with AI: some reporting would not be possible without it, and AI can be a useful tool in improving how well we consider ethical issues (it can help with checking accuracy and bias, even though those are also among its general weaknesses).

The second part of this FAQ, focusing on data journalism and open data, is published tomorrow.
