
The Bureau of Investigative Journalism’s Big Tech Reporter Niamh McIntyre has been working with data for eight years — but it all stemmed from an “arbitrary choice” at university. She spoke to MA Data Journalism student Leyla Reynolds about how she got started in the field, why you don’t need to be a maths whizz to excel, and navigating the choppy waters of the newsroom.
Starting out on any new path can be daunting, but in the minutes before my phone call with Niamh McIntyre, I’m acutely aware that upping sticks to Birmingham and training in data journalism at the grand old age of 29 is nothing less than a tremendous luxury.
A younger me might have — would have — quaked at such a scenario, so I’m keen to know more about Niamh’s work, which ranges from investigating the gig work industry to private children’s homes.
Niamh currently works for The Bureau of Investigative Journalism (TBIJ), where she is a reporter on their Big Tech team, investigating powerful technology companies. Before joining TBIJ, she spent four years as a data journalist at the Guardian, covering political advertising and dark money, local councils’ secretive use of algorithms, and online misinformation.
And before that, she studied data journalism.
“I didn’t even really learn to code until I was at the Guardian”
Niamh describes her decision to focus on data journalism as “a slightly arbitrary choice” following a friend’s decision to switch course: “He’s a really smart guy [so] I was like: well, if he’s doing that, maybe I’ll do that as well. And as I started learning about data journalism, I really enjoyed it.”
Niamh had studied English as an undergraduate and says she “hated” maths at school. “I started from a position of zero technical skill. But I just really liked the stories that you could tell with it.
“I had some sessions that were led by Pamela Duncan, who eventually became my editor on the Guardian’s data team, and she just made it really inspiring and interesting. I just started off learning Excel and I didn’t even really learn to code until I was at the Guardian. But there’s still a lot of great stories you can do, even if you’re just primarily using Excel.”
Now, Niamh says, she can do “basic stuff” in Python. “I can use APIs to get data and I can do some basic data analysis. I definitely never became a super technical person and that’s fine. Good journalism is the more important bit of data journalism.
“I’ve definitely met the odd person who was a programmer first and then moved into data journalism but I would say it’s much more common to come from the other direction: someone who’s interested in journalism first and then picks up data skills.”
Standing out in the journalism jobs market
Data was key in helping Niamh stand out in a competitive jobs market when starting out. “Even if it’s not something you want to do forever,” she says, “it was definitely a really helpful experience.
“I got a full-time job with the Guardian when I was 23, which there’s no way in hell I would have ever got if I was just trying to be a news reporter first.”

I mention to Niamh how much I’m enjoying learning about data journalism, despite my lack of natural mathematical ability. There’s something really satisfying about it.
“Yeah, I agree,” she says. “And particularly when you do come from a more magazine-y background it is kind of satisfying.
“It is also quite stressful. Checking that your findings are correct and, if it’s a complicated story, having to be on top of every stage and the potential mistakes that you could make in that stage is important. But overall it’s definitely really rewarding and a lot of the stories I did on the Guardian’s data team do feel really valuable and worthwhile.”
The balance between quick-turnaround and longer-term stories
Niamh joined the Guardian as part of the Data Projects team, which had a remit to pursue longer-term stories than the newspaper’s pioneering Datablog. The Covid pandemic shifted the focus to “much more quick turnaround stuff” for a period. “But since there now aren’t a million different Covid datasets to analyse, I think it’s gone back to that original model.”

Niamh describes her work for the Guardian as “a real mix of finding stories in pre-existing datasets — but I definitely had scope to create my own datasets and those are probably the best stories I did.
“For example, I did one about fossil fuels: I worked with someone to build a scraper for Google ads on climate-related search terms, and did analysis into how much the fossil fuel industry was spending on terms like ‘net zero’ and ‘clean energy’. That was really fun. I also wrote a scraper for schools using crowdfunders.
“When you work on the data team, you will have to do quick turnaround stuff, but there will be scope to do more ambitious and longer term stuff as well. I don’t know exactly what the balance is now, but there’s scope for both.”
I ask Niamh whether she ever has to abandon story ideas because she assumed the end result prematurely. “No,” she replies. “I would always start with a hypothesis. Sometimes there is a bit of looking at data sources and just seeing what they might show. And that might be, for example, looking at the biggest spenders on Facebook ads or data on political donations or school exclusions. But I would say it’s very important to test your hypothesis and try to check everybody’s natural confirmation bias.”
“Datasets are always incomplete”

Alongside the need to check for confirmation bias, Niamh notes that “datasets are always to a greater or lesser extent incomplete” — missing the context, measurements or timeliness that might provide a fuller picture.
“So, I guess, to acknowledge that incompleteness we would try to address it by briefly describing the methodology or adding in a caveat flagging its limitations in some way.”
She highlights the value of matching datasets: “If you’re just working off one public dataset, then anyone else can easily reach the same conclusions. But if you can add extra data sources, you can reach more interesting conclusions and come up with something that other people don’t have.”
She notes, however, that matching across datasets in the UK can “often be a nightmare”, particularly when trying to match between different geographies such as local authorities and health areas, or between different local authorities with different data structures.
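As a rough illustration of the kind of matching Niamh describes — joining two datasets on shared ONS geography codes rather than on names, which often differ in spelling between sources — here is a minimal pandas sketch. The figures below are invented for the example; only the shape of the technique is the point.

```python
import pandas as pd

# Hypothetical spending data, keyed by ONS local authority code
spending = pd.DataFrame({
    "la_code": ["E08000025", "E08000026", "E06000023"],
    "la_name": ["Birmingham", "Coventry", "Bristol, City of"],
    "spend_gbp": [1_200_000, 450_000, 800_000],
})

# Hypothetical population data from a different source, keyed the same way
population = pd.DataFrame({
    "la_code": ["E08000025", "E08000026", "E06000023"],
    "population": [1_144_900, 345_300, 472_400],
})

# An outer merge with indicator=True keeps unmatched rows visible —
# that is where the "nightmare" mismatch cases surface in practice
merged = spending.merge(population, on="la_code", how="outer", indicator=True)
merged["spend_per_head"] = merged["spend_gbp"] / merged["population"]

print(merged[["la_name", "spend_per_head", "_merge"]])
```

Matching on codes sidesteps name variants (“Bristol” versus “Bristol, City of”), though it does not help when the two sources use entirely different geographies, such as local authorities versus health areas — that usually needs a separate lookup table mapping one to the other.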
I ask Niamh if she has ever done a story on a complete lack of data. “We did stuff speaking to that in the pandemic,” she recalls. “Particularly in the earlier stages when there weren’t really any data sources.
“My first editor at the Guardian — Caelainn Barr — and Gary Younge did a cool project [Beyond the Blade] about the lack of good data on knife crime.
“They worked to build their own dataset to try to address the gaps [in the data], while also part of the story focused on the fact that there was a lack of data.
“That’s a really good example of trying to address the lack of data by building your own dataset, but also drawing attention to the lack of data and making that part of the story. That theme is a really potent one for data journalism.”
You can find Niamh’s recent work at The Bureau of Investigative Journalism.
