Local data journalism in the UK has undergone a quiet revolution over the last 12 months, and 2018 has already seen a number of landmarks in its opening months. Here are some of the highlights from just its first 12 and a half weeks…
January: BBC Shared Data Unit publishes its first secondee-led investigation
The BBC Shared Data Unit had already been producing stories before it took on its first three-month secondees from the news industry in late 2017. Over the next 12 weeks they received training in data journalism and worked on a joint investigation. Continue reading →
The latest set of questions in the semi-regular FAQ section on this blog are about UGC, and come from a student at Liverpool John Moores. Here they are…
Is UGC more helpful or harmful to journalism?
Helpful, of course! Journalism has always relied on information and media (photos, video, audio) from readers/the audience and sources. The difference is that we now have access to a much larger amount of that information. Continue reading →
In a guest post for OJB Maria Crosas interviews Ferran Morales, the journalist behind The Story of Zainab, to understand how he tackled the challenge of processing and visualising data about refugees.
Ferran Morales showing infographics from Zainab’s story
Ferran Morales is a data journalist and graphic designer at El Mundo Deportivo. In February, with the team at Media Lab Prado, he published The Story of Zainab, a data-driven narrative following an 11-year-old refugee and her family, who had to leave their home in 2011 because of the war in Syria.
The project was created as part of Visualizar 2017, a workshop for prototyping data visualisation projects, and drew on data on refugees.
Women represent 49.5% of the world’s population, but they do not have a corresponding public, political and social influence. In recent years, more and more women have raised their voices, making society aware of their challenges — data journalists included. To commemorate International Women’s Day, Carla Pedret presents a list of data journalism projects that detail the sacrifices, injustices and prejudices that women still have to face in the 21st century.
The deadline for the Data Journalism Awards is now just 3 weeks away. One category for educators and young journalists to look out for is the ‘Student and young data journalist of the year’, which seeks to shine a light on “the outstanding work of a new talent in data journalism, for projects done while they are still studying or early in their professional careers.”
The category is open to all data journalists under the age of 27 — but not students over that age (who I’m told should apply for the Best Individual Portfolio category). Submissions can include one or as many as ten pieces of data journalism. Winners get $1801 (the year William Playfair reportedly created the pie chart) and a trophy.
In a guest post for OJB, Barbara Maseda looks at how the media has used text-as-data to cover State of the Union addresses over the last decade.
State of the Union (SOTU) addresses are amply covered by the media, from traditional news reports and full transcripts to summaries and highlights. But like other events involving speeches, SOTU addresses are also analyzable using natural language processing (NLP) techniques to identify and extract newsworthy patterns.
Every year, a new speech is added to this small collection of texts, which some newsrooms process to add a fresh angle to the avalanche of coverage.
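To make that concrete, here is a minimal sketch of one common text-as-data angle: comparing word frequencies in a new speech against earlier ones to surface terms it uses unusually often. This is an illustration only, not any newsroom’s actual pipeline, and the speech strings and stopword list are invented placeholders rather than real SOTU text.

```python
# Hypothetical sketch: which words does a new speech use far more
# often than all the past speeches combined?
import re
from collections import Counter

# Tiny placeholder stopword list; real analyses use much larger ones.
STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "for", "we", "our", "is"}

def word_counts(text):
    """Lowercase, tokenise on letters/apostrophes, drop stopwords."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in STOPWORDS)

def distinctive_words(new_speech, past_speeches, top_n=3):
    """Words the new speech uses more than all past speeches combined."""
    new = word_counts(new_speech)
    past = Counter()
    for speech in past_speeches:
        past.update(word_counts(speech))
    # Score each word by how much its count exceeds its past total.
    diffs = {w: c - past.get(w, 0) for w, c in new.items()}
    ranked = sorted(diffs.items(), key=lambda kv: -kv[1])
    return [w for w, d in ranked if d > 0][:top_n]

# Invented example texts, not real speeches:
past = ["we will invest in jobs and jobs again", "our economy and our jobs"]
new = "trade trade trade and tariffs tariffs for our economy"
print(distinctive_words(new, past))  # → ['trade', 'tariffs']
```

Real projects refine this basic idea with normalisation for speech length, lemmatisation, and statistical measures such as TF-IDF, but the underlying comparison is the same.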
Barbara Maseda is on a John S. Knight Journalism Fellowship project at Stanford University, where she is working on designing text processing solutions for journalists. In a special guest post she explains what she’s found so far — and why she needs your help.
Over the last few months, I have been talking to journalists about their trials and tribulations with textual sources, trying to get as detailed a picture as possible of their processes, namely:
how and in what format they obtain the text,
how they find newsworthy information in the documents,
using what tools,
for what kinds of stories,
…among other details.
What I’ve found so far is fascinating: from tech-savvy reporters who write their own code when they need to analyze a text collection, to old-school investigative journalists convinced that printing and highlighting are the most reliable and effective options — and many shades of approaches in between.
What’s your experience?
If you’ve ever dug a story out of a pile of text, please let me know using this questionnaire. It doesn’t matter whether the tools you used were sophisticated or simple.