This week’s GEN Summit marked a breakthrough moment for artificial intelligence (AI) in the media industry. The topic dominated the agenda of the first two days of the conference, from the opening keynote by Facebook’s Antoine Bordes to sessions on voice AI, bots, monetisation and verification – and it dominated my timeline too.
At times it felt like being at a conference in the 1980s discussing how ‘computers’ could be used in the newsroom, or listening to people talking about the use of mobile phones for journalism in the noughties — in other words, it feels very much like early days. But important days nonetheless.
Ludovic Blecher’s slide on the AI-related projects that received Google Digital News Initiative funding illustrated those early days best, with proposals counted in categories as specific as ‘personalisation’ and as vague as ‘hyperlocal’.
Digging deeper, then, here are some of the most concrete points I took away from Lisbon — and what journalists and publishers can take from those.
The difference between ‘intelligence’ and robots that you can teach
Benedict Evans’s Thursday keynote was certainly one of the most articulate presentations at the summit, managing to move quickly from orientation (‘what is machine learning?’) via potential applications to problematisation.
The talk echoed other discussions I’ve been involved in recently around the need to move from talking about ‘AI’ to terms like machine learning: recognising patterns, using those patterns to find needles in haystacks, automating repetitive behaviour, predicting possibilities and likelihoods, and so on.
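To make “recognising patterns” concrete: at its simplest, a machine learning classifier builds word-frequency profiles from labelled examples and scores unseen text against them. A toy sketch in Python (the snippets, labels and function names here are all invented for illustration):

```python
from collections import Counter

def train(examples):
    """Build per-label word-frequency profiles from (text, label) pairs."""
    profiles = {}
    for text, label in examples:
        profiles.setdefault(label, Counter()).update(text.lower().split())
    return profiles

def classify(profiles, text):
    """Score an unseen text against each profile by shared-word counts."""
    words = text.lower().split()
    scores = {label: sum(counts[w] for w in words)
              for label, counts in profiles.items()}
    return max(scores, key=scores.get)

examples = [
    ("council budget vote housing", "local"),
    ("parliament election minister", "politics"),
    ("school funding council meeting", "local"),
    ("prime minister cabinet reshuffle", "politics"),
]
profiles = train(examples)
print(classify(profiles, "council housing meeting"))  # -> local
```

Real systems use far richer features and models, but the shape is the same: learn a pattern from examples, then apply it at a scale and consistency no human could match.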
Where ‘artificial intelligence’ threatens robot overlords and robot journalism, machine learning more clearly relates to the process of teaching a relatively ‘dumb’ (but also, in relative terms, smart) robot how to do parts of our job we don’t have the time or consistency to do.
But with great power comes great responsibility — and Evans made a useful distinction between our responsibility for the accuracy of the algorithms we create (shaped particularly by their inputs) and the potential social impact of the results. Even an accurate algorithm can have undesirable outcomes.
Algorithmic accountability and the route to intelligibility
Jonathan Albright’s work on ‘crisis actor’ YouTube videos and Google search suggestions — as well as ProPublica’s Machine Bias series — is well worth exploring as an introduction to the field. What may have started as a feature of technology and politics beats will clearly eventually play an important role in every area: from reporting on crime and housing to education and health — and even music and film.
It helps that media companies have been among the earliest, and most regular, victims of algorithms. We are acutely aware of how a small change in those recipes can shape whether, and how, our audiences receive our journalism, as Emily Bell’s talk showed.
What is needed now is not only an awareness of how our communities can be affected by algorithms (and a desire to report on that), but also a reflective attitude to our own reliance on algorithms — and the need for transparency around that.
The work of Nicholas Diakopoulos has laid some important groundwork on the subject, including the limitations of algorithmic transparency and potential disclosure mechanisms, while AP’s Stuart Myles has spoken on the same subject with regard to algorithmic news, identifying four levels of algorithmic transparency.
The commoditisation of (some) data journalism — and the move beyond automation to augmentation
Frames, Grafiti, RADAR and Le Parisien’s LiveCity project showed just how far data journalism has shifted from the exceptional to the commodified.
Frames supplies ready-made charts for news organisations to embed alongside their articles, complete with a revenue-sharing business model in which charts are also sponsored. The company has been working with a Portuguese news organisation and claims that including its charts leads to higher sharing and recirculation.
Grafiti, meanwhile, is aiming to make social creation of charts easier while building a charts-as-data search engine. RADAR has built one of the UK’s biggest data journalism teams to supply copy newswire-style to local publishers; and Le Parisien is pulling city data into articles in the form of personalised widgets.
In other words, data journalism is starting to scale. For data journalists this means two things: either moving from the low-hanging fruit to more complex or investigative stories, or moving into the scaling of data journalism itself (i.e. coding).
…Or, of course, doing both.
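The core mechanic behind newswire-style data journalism of the kind RADAR produces is localising a story template against each row of a dataset. RADAR’s actual pipeline is more sophisticated and not public; this is only a minimal sketch, with invented figures:

```python
def localise(template, row):
    """Fill a story template with one locality's data."""
    return template.format(**row)

template = ("Unemployment in {area} {direction} to {rate}% in May, "
            "compared with a national average of {national}%.")

rows = [
    {"area": "Preston", "rate": 4.1, "national": 3.8},
    {"area": "Luton", "rate": 3.2, "national": 3.8},
]

for row in rows:
    # Derive wording from the data itself, so each locality reads naturally.
    row["direction"] = "rose" if row["rate"] > row["national"] else "fell"
    print(localise(template, row))
```

One template plus one national dataset yields hundreds of locally relevant stories — which is exactly why this kind of work commodifies so quickly, and why the human effort shifts to writing better templates and finding better datasets.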
Chatbots and narrative, bots and newsrooms
Chatbots, meanwhile, are moving in the opposite direction. Although many news industry chatbots have until now been little more than glorified RSS alerts, Quartz’s John Keefe and the BBC’s Paul Sargeant showed that we are increasingly seeing a more sophisticated application of editorial skill to the form.
“The success to a good bot,” as Keefe explained, “is really good humans.”
In other words, chatbots represent not only a technical challenge but a narrative one too. Sargeant spoke about the BBC’s experimentation with chatbots as a storytelling device in articles such as a piece on The unspoken alcohol problem among UK Punjabis, while Keefe highlighted the “very talented humans writing the storylines for the bots” — and the demand for those skills from advertisers, which has seen the bot team split into editorial and commercial arms.
“We’re finding there’s a market for our talents and our storytelling with some of our clients.”
The approach is certainly effective: BBC articles on the royal wedding containing in-story chatbots saw 20% of users engaging with the widgets, “usually [asking] up to five or six questions”. Chatbots may not increase reach, Sargeant said, “but will increase engagement.”
At Quartz Keefe similarly reported that “90% of people who start go all the way through.
“The real value is not in reaching more people, but rather in deepening the relationship with the people you reach.”
The use of narrative techniques, however, seems to introduce a tension between user expectations of interactivity and the journalistic pressure to communicate particular facts.
In a separate session BBC Visual Journalism’s Bella Hurrell would note that testers of their in-story bots missed the option “to write text directly, to ask their own questions of the bot.”
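That limitation follows from how in-story bots are usually built: a hand-authored branching script rather than free conversation. A minimal sketch (the storyline content below is invented, not the BBC’s):

```python
# Each node pairs a bot message with the reader-selectable options that
# lead to the next node -- there is no free-text understanding at all.
STORY = {
    "start": {
        "say": "Why does this issue so often go unreported?",
        "options": {"Tell me more": "stigma", "How big is the problem?": "scale"},
    },
    "stigma": {
        "say": "Stigma keeps many families from seeking help.",
        "options": {"How big is the problem?": "scale"},
    },
    "scale": {
        "say": "Researchers believe the problem is significantly under-counted.",
        "options": {},
    },
}

def play(story, pick=lambda opts: next(iter(opts))):
    """Walk the script from 'start', using pick() to simulate reader taps."""
    node, transcript = "start", []
    while True:
        transcript.append(story[node]["say"])
        opts = story[node]["options"]
        if not opts:
            return transcript
        node = opts[pick(opts)]

for line in play(STORY):
    print("BOT:", line)
```

Every “question” a reader can ask has to be anticipated and written by an editor — which is why the craft lies in the storylines, and why readers bump against the edges when they try to type their own questions.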
Perhaps the increasing use of AI in the industry will help to partly resolve that tension — indeed, Sargeant said that he sees chatbots as “a transitional format. We know we are going to integrate AI and chat together.
“Doing these [chatbots] you’re learning a lot about tone, and how you structure those conversations and storytelling — [putting you] in a position for when AI is on the way.”
Meanwhile, an increasing number of bots are being developed for internal use.
Quartz’s Quackbot (code on GitHub) helps journalists cache copies of webpages and suggests sources of data, while the BBC used bots to automatically generate election graphics and tweet those on the @bbcelection account, and Dagens Nyheter’s Martin Jönsson has created a ‘gender bot’ (“Genusroboten”) to help reporters see how representative their coverage is.
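Dagens Nyheter has not published Genusroboten’s internals, but the core of any such representation check is counting gendered references in copy. A crude English-language sketch (pronouns alone are only a rough proxy — a real tool would also resolve names and titles):

```python
import re

# Map gendered pronouns to the group they reference. This is a toy
# illustration, not Dagens Nyheter's actual method.
GENDERED = {
    "she": "women", "her": "women", "hers": "women",
    "he": "men", "him": "men", "his": "men",
}

def representation(text):
    """Count gendered pronouns in a piece of copy."""
    counts = {"women": 0, "men": 0}
    for word in re.findall(r"[a-z']+", text.lower()):
        if word in GENDERED:
            counts[GENDERED[word]] += 1
    return counts

print(representation("He said his department agreed with her, and she confirmed it."))
# -> {'women': 2, 'men': 2}
```

Even this crude tally, run over a month of bylined copy, gives a reporter a number to react to — which is the point of such internal bots: surfacing patterns in our own output that we would otherwise never count.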
The Associated Press has been exploring the potential of AI for verification, transcription, personalisation, image recognition and bots. Lisa Gibbs, who leads its artificial intelligence strategy group, said: “I can draw a direct line between AI [for automating parts of journalism] and the increase in investigative journalism our business reporters are able to do.”
If there is a field where the automation-versus-augmentation battle is being fought — a place to put algorithmic accountability into practice and to experiment with the possibilities of AI and narrative — there are probably few better places to look than bots.