There’s a story out this week on the BBC website about dialogue and gender in Game of Thrones. It uses data generated by artificial intelligence (AI) — specifically, machine learning — and it’s a good example of some of the challenges that journalists are increasingly going to face as they come to deal with more and more algorithmically generated data.
Information and decisions generated by AI are qualitatively different from the sort of data you might find in an official report, but journalists may fall back on treating data as inherently factual.
Here, then, are some of the ways the article dealt with that — and what else we can do as journalists to adapt.
Margins of error: journalism doesn’t like vagueness
The story draws on data from an external organisation, Ceretai, which “uses machine learning to analyse diversity in popular culture.” The organisation claims to have created an algorithm which “has learned to identify the difference between male and female voices in video and provides the speaking time lengths in seconds and percentages per gender.”
Crucially, the piece notes that:
“Like most automatic systems, it doesn’t make the right decision every time. The accuracy of this algorithm is about 85%, so figures could be slightly higher or lower than reported.”
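To see why an 85%-accurate classifier makes reported figures “slightly higher or lower”, here is a back-of-envelope sketch. It assumes a simplified model — each second of speech is labelled independently, and male and female voices are mislabelled at the same rate — which is my assumption for illustration, not something Ceretai has stated about its algorithm.

```python
def true_share(reported_share, accuracy=0.85):
    """Invert a symmetric misclassification model.

    Assumes each decision is independent and the classifier mislabels
    both genders at the same rate (1 - accuracy). Under that model:
        reported = true * accuracy + (1 - true) * (1 - accuracy)
    which rearranges to the formula below.
    """
    error = 1 - accuracy
    return (reported_share - error) / (accuracy - error)

# If the algorithm reports women speaking 30% of the time, the
# underlying share under these assumptions would be closer to 21%:
print(round(true_share(0.30), 3))  # -> 0.214
```

The point for journalists: a headline figure of “30%” from an 85%-accurate system is not a measurement with a neat ±15% band — the error can systematically pull minority shares towards 50%, which is exactly the kind of caveat the BBC piece was right to flag.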
This week’s GEN Summit marked a breakthrough moment for artificial intelligence (AI) in the media industry. The topic dominated the agenda of the first two days of the conference, from the opening keynote by Facebook’s Antoine Bordes to voice AI, bots, monetisation and verification — and it dominated my timeline too.
At times it felt like being at a conference in the 1980s discussing how ‘computers’ could be used in the newsroom, or listening to people talking about the use of mobile phones for journalism in the noughties — in other words, it felt very much like early days. But important days nonetheless.
Ludovic Blecher’s slide on the AI-related projects that received Google Digital News Initiative funding illustrated the problem best, with proposals counted in categories as specific as ‘personalisation’ and as vague as ‘hyperlocal’.
In a guest post for OJB Maria Crosas points out three main takeaways that newsrooms should consider when aiming for a complete chatbot experience.
Over the past year I’ve been frequently invited to share ideas around how bots can help newsrooms to deliver news, and advice on how to build engaging chatbot experiences. And throughout these classes, I’ve also had challenging questions on how these technologies are pushing the boundaries of ethics, artificial intelligence and storytelling.
I’ve boiled down these experiences into 3 takeaways for newsrooms that want to begin the chatbot journey. Here they are…
This week I’m rounding off the first semester of classes on the new MA in Data Journalism with a session on artificial intelligence (AI) and machine learning. Machine learning is a subset of AI — and an area which holds enormous potential for journalism, both as a tool and as a subject for journalistic scrutiny.
So I thought I would share part of the class here, showing some examples of how the 3 types of machine learning — supervised, unsupervised, and reinforcement — have already been used for journalistic purposes, and using those examples to explain each type along the way.
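By way of illustration (this is my own toy sketch, not an example from the class), supervised learning in miniature: a model learns from examples that a human has already labelled, then labels new items by analogy. The nearest-neighbour approach below is one of the simplest supervised methods; the feature values are invented purely for demonstration.

```python
# A toy supervised-learning example: 1-nearest-neighbour classification.
# The "training data" is labelled by hand; the model then labels new
# items by finding the closest known example. Real newsroom classifiers
# would use far richer features and far more data.

def distance(a, b):
    """Squared Euclidean distance between two feature tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(training_data, features):
    """Return the label of the nearest labelled example."""
    _, label = min(training_data, key=lambda item: distance(item[0], features))
    return label

# Invented features: (average pitch in Hz, words per minute).
training_data = [
    ((120, 150), "male"),
    ((210, 160), "female"),
    ((130, 140), "male"),
    ((220, 155), "female"),
]

print(predict(training_data, (200, 158)))  # -> female
```

The key characteristic — and the reason it is called *supervised* — is that a human supplied the answers for the training examples; unsupervised methods, by contrast, find groupings without any labels at all.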
The best-known examples of data journalism tend to be based around text and visuals — but it’s harder to find data journalism in video and audio. Ahead of the launch of my new MA in Data Journalism I thought I would share my list of the examples of video data journalism that I use with students in exploring data storytelling across multiple platforms. If you have others, I’d love to hear about them.
FOI stories in broadcast journalism
Freedom of Information stories are one of the most common situations in which broadcasters will have to deal with more in-depth data. These are often brought to life through case studies and interviews with experts.
Every year Nic Newman asks a bunch of people for their reflections on the last 12 months and their anticipations for the year ahead. Here’s what I’ve said this year — as always, to be taken with significant doses of salt.
What surprised you most in 2016?
Perhaps the sheer number of significant developments (compare the posts for 2015 and 2014). It was the year when bots went mainstream very quickly, and platforms took further significant steps towards becoming regulated as publishers.
It was a year of renewed innovation in audio. 2016 saw the launch of a number of new audio apps, including Anchor, Pundit, Clyp and Bumpers.fm, as various companies attempted to be the ‘Facebook of audio’. The only problem: Facebook wants to be the Facebook of audio too, and at the end of the year it introduced live audio.
1. You can’t use the assistant without giving it permission
Whereas other chat apps like Telegram and Facebook Messenger make it possible to interact with bots, Google is making bots central to Allo. Specifically, the Google Assistant.
When you first open the app you are introduced to the assistant. It wants to help, it says, but it will only do so if you agree to give it a whole bunch of creepy permissions. Until you give it those, it will not answer any questions directly.