In an earlier post I wrote about four of the angles most often used to tell stories about data. In this second part I look at the three remaining angles: stories that focus on relationships; 'metadata' angles that focus on data being missing, poorly collected or of poor quality; and exploratory articles that combine several angles or offer a chance to get to know the data itself.
Data journalism projects can be broken down into individual steps, and each step brings its own challenges. To help, I developed the "Inverted Pyramid of Data Journalism". It shows how to turn an idea into a focused data story. I walk you through what to look out for at each step, and offer tips on how to avoid common pitfalls.
Last month the BBC’s Shared Data Unit held its annual Data and Investigative Journalism UK conference at the home of my MA in Data Journalism, Birmingham City University. Here are some of the highlights…
In many countries public data is limited, and access to data is either restricted, or information provided by the authorities is not credible. So how do you obtain data for a story? Here are some techniques used by reporters around the world.
Data-driven storytelling can be divided into seven main categories, according to an analysis of 200 articles. In the first of two posts I demonstrate the four angles most commonly used in news stories, how they can open up more options for you as a reporter, and how they can help you work more efficiently with data.
Most datasets can tell many stories, so many that it can feel overwhelming or distracting. Identifying which stories are possible, and choosing the best one within the time and skills you have available, is an important editorial skill.
Many newcomers to data journalism often look first for stories about relationships (cause and effect), but these are difficult and time-consuming to tell. You may want to tell a story about things getting worse or better, but lack the data to tell it. If you have very little time and want to get started with data journalism, the quickest and simplest stories you can tell with data are stories about scale.
The Bureau of Investigative Journalism's Big Tech Reporter Niamh McIntyre has been working with data for eight years — but it all stemmed from an "arbitrary choice" at university. She spoke to MA Data Journalism student Leyla Reynolds about how she got started in the field, why you don't need to be a maths whizz to excel, and navigating the choppy waters of the newsroom.
Starting out on any new path can be daunting, but in the minutes before my phone call with Niamh McIntyre, I’m acutely aware that upping sticks to Birmingham and training in data journalism at the grand old age of 29 is nothing less than a tremendous luxury.
Strong factual storytelling relies on good idea development. In this video, part of a series of video posts made for students on the MA in Data Journalism at Birmingham City University, I explain how to generate good ideas by avoiding common mistakes, applying professional techniques and considering your audience.
In the latest in a series of posts on using generative AI, I look at how tools such as ChatGPT and Claude.ai can help identify potential bias and check story drafts against relevant guidelines.
We are all biased — it’s human nature. It’s the reason stories are edited; it’s the reason that guidelines require journalists to stick to the facts, to be objective, and to seek a right of reply. But as the Columbia Journalism Review noted two decades ago: “Ask ten journalists what objectivity means and you’ll get ten different answers.”
Generative AI is notoriously biased itself — but it has also been trained on more material on bias than any human likely has. So, unlike a biased human, when you explicitly ask it to identify bias in your own reporting, it can perform surprisingly well.
It can also be very effective in helping us consider how relevant guidelines might be applied to our reporting — a checkpoint in our reporting that should be just as baked-in as the right of reply.
In this post I’ll go through some template prompts and tips on each. First, a recap of the rules of thumb I introduced in the previous post.
One of the most common reasons a journalist might need to learn to code is scraping: compiling information from across multiple webpages, or from one page across a period of time.
But scraping is tricky: it requires time learning some coding basics, and then further time learning how to tackle the particular problems that a specific scraping task involves. If the scraping challenge is anything but simple, you will need help to overcome trickier obstacles.
Large language models (LLMs) like ChatGPT are especially good at providing this help because writing code is a language challenge, and material about coding makes up a significant amount of the material that these models have been trained on.
This can make a big difference in learning to code: in the first year that I incorporated ChatGPT into my data journalism Masters at Birmingham City University I noticed that students were able to write more advanced scrapers earlier than previously — and also that students were less likely to abandon their attempts at coding.
You can also start scraping pretty quickly with the right prompts (Google Colab allows you to run Python code within Google Drive). Here are some tips on how to do so…
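To make that concrete, here is a minimal sketch of the kind of scraper a well-worded prompt might produce, which you could paste into a Google Colab cell. The URL and the CSS selectors are hypothetical placeholders, not taken from any real page; swap in the site and elements you actually want to collect.

```python
# Minimal scraping sketch for a Google Colab cell.
# The URL and CSS selectors below are hypothetical placeholders:
# replace them with the page and elements you actually want to scrape.
import requests
from bs4 import BeautifulSoup
import pandas as pd

url = "https://example.com/reports"  # placeholder URL
response = requests.get(url, timeout=30)
response.raise_for_status()  # stop early if the page didn't load

soup = BeautifulSoup(response.text, "html.parser")

rows = []
for item in soup.select("div.report"):  # placeholder selector
    title = item.select_one("h2")
    date = item.select_one("span.date")
    rows.append({
        "title": title.get_text(strip=True) if title else None,
        "date": date.get_text(strip=True) if date else None,
    })

# Gather the results into a table and save as CSV within the Colab session
df = pd.DataFrame(rows)
df.to_csv("scraped_reports.csv", index=False)
print(df.head())
```

The libraries used here (requests, BeautifulSoup and pandas) come pre-installed in Colab, so a sketch like this can be run without any setup beyond pasting it into a new notebook cell.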