This, says FSB, is a lie that demonstrates the “pretence” that “‘crunching the numbers’ is somehow an abstract, scientific, mathematical task”.
There are two problems with this: the first is that I’ve never heard a data journalist make this claim; and the second is that the ‘lie’ does not come from a data journalist (they generally don’t write headlines). It is, in short, a straw man.
The story itself is, however, perfectly valid. While FSB points to the exclusion of students, for example, The Guardian’s story mentions that exclusion early on.
It’s fair to say that those who are economically inactive should not be included in unemployment figures. Indeed, the Datablog post which expands on the data does a very good job of explaining how that classification works:
“Youth unemployment figures are always slightly odd, and as with many things in life, it’s students that get the blame.
“Students can be counted in three different ways: a full-time student doing an evening job in a bar counts as employed. A student who wants bar work but can’t get it is unemployed. A full-time student who’s not topping up his income with a job (and isn’t trying to) is economically inactive.”
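The quoted classification can be sketched as a simple decision rule. This is a hypothetical illustration only: the function name and the two boolean inputs are my own, and the real ONS Labour Force Survey applies fuller criteria (availability for work, hours, search activity) than this toy captures.

```python
def classify_student(has_job: bool, seeking_work: bool) -> str:
    """Toy sketch of the three-way scheme quoted above (not the real ONS logic).

    - working at all (e.g. an evening bar job)  -> "employed"
    - not working but looking for work          -> "unemployed"
    - not working and not looking               -> "economically inactive"
    """
    if has_job:
        return "employed"
    if seeking_work:
        return "unemployed"
    return "economically inactive"


# The three examples from the quote:
print(classify_student(has_job=True, seeking_work=False))   # student with a bar job
print(classify_student(has_job=False, seeking_work=True))   # wants bar work, can't get it
print(classify_student(has_job=False, seeking_work=False))  # full-time study only
```

The point of the sketch is the one the Datablog makes: only the middle case counts towards the unemployment figure, which is why headline numbers shift depending on how students are handled.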
Fleet Street Blues uses the raw data published by the Datablog to highlight a number of other ways of interpreting the data, all of which are interesting – and in fact, I’ll probably use them in future as an example of how the same data can tell many different stories.
More than one story
But again, this proves nothing about the ‘pretence’ of data journalism. All it proves is that there is more than one story to be found in a dataset, and that journalists will pick the one that is most newsworthy for their particular market.
In fact, not only journalists, but politicians, PR staff, marketers, scientists, lobbyists and anyone else who wants to tell a story.
It’s because of this that data journalism is not something which should be snootily written off as a “fad”. Data is important. Journalists need to be able to interrogate it and find the stories that are not being told.
That is exactly what The Guardian have done. Yes, the headline could be more accurate* – but how many times has a headline writer omitted key details due to the limitations of space (on every type of story)? And yes, as one FSB commenter points out, the inclusion of whole numbers would have added further context.
But the irony is that it’s precisely because The Guardian isn’t trying to pretend to be ‘The Only Truth’ that FSB and its commenter can interrogate the data, and that the reader can understand the subtleties in how data is gathered and classified.
If there is a pretence about data journalism, it is a wider one: a belief in society that somehow numbers equate to truth. A belief which is exploited by politicians but which is coming – and should come – under increasing scrutiny from journalists. (The story, for example, began as a column by a Labour politician in The Guardian, was fact-checked by Channel 4 News, and was followed up by The Guardian’s journalists.)
The more good data journalism we have, the less that anyone – including journalists – can sustain the pretence of a “scientific” process.
*(Notably, the online version includes a second headline which is clearer: “Unemployment rate for black 16 to 24-year-olds available for work now double that for white counterparts, ONS data shows”)