Is data journalism ‘time consuming’ or ‘resource intensive’? The excuse – and I think it is an excuse – seems to come up at an increasing number of events whenever data journalism is discussed. “It’s OK for the New York Times/Guardian/BBC,” goes the argument. “But how can our small team justify the resources – especially in a time of cutbacks?”
The idea that data journalism inherently requires extra resources is flawed – but understandable. Spectacular interactives, large-scale datasets and investigative projects are the headliners of data journalism’s recent history. We have oohed and aahed over what has been achieved by programmer-journalists and data sleuths…
Instead it’s falling to the likes of Tony Hirst (an Open University academic), Dan Herbert (an Oxford Brookes academic) and Chris Taggart (a developer who used to be a magazine publisher) to fill the scrutiny gap. Recently all three have written pieces shining a light on the move towards transparency and open data, which anyone with an interest in information would be advised to read.
What all three highlight is how control of information still represents the exercise of power, and how shifts in that control as a result of the transparency/open data/linked data agenda are open to abuse, gaming, or spin.
There have been quite a few scraping-related stories that I’ve been meaning to blog about – so many that I’ve decided to write a round-up instead. Together they demonstrate the increasing role that scraping is playing in journalism – and the possibilities for those who haven’t yet explored it:
So here’s person number 4: Gary Becker, a Nobel prize-winning economist.
Fifty years ago he used the phrase ‘human capital’ to refer to the economic value that companies should ascribe to their employees.
These days, of course, it is common sense to invest time in recruiting, training and retaining good employees. But at the time employees were seen as a cost.
We need a similar change in the way we see our readers – not as a cost on our time but as a valuable part of our operations that we should invest in recruiting, developing and retaining.
I have now released the source code behind Help Me Investigate, meaning others can adapt it, install it, and add to it if they wish to create their own crowdsourcing platform or support the idea behind it.
I’m looking for collaborators and coders to update the code to Rails 3, write documentation to help users install it, improve the code and its tests, or even take on project management.
Over the past 18 months the site has surpassed my expectations. It’s engaged hundreds of people in investigations, furthered understanding and awareness of crowdsourcing, and been runner-up for Multimedia Publisher of the Year. In the process it attracted attention from around the world – people wanting to investigate everything from drug running in Mexico to corruption in South Africa.
Having the code on one site meant we couldn’t help those people: making it open source opens up the possibility, but it needs other people to help make that a reality.
If you know anyone who might be able to help, please shoot them a link. Or email me at paul(at)helpmeinvestigate.com
Many thanks to Chris Taggart and Josh Hart for their help with moving the code across.
“Few parts of the corporate world are limited to a single country, and so the world needs a way of bringing the information together in a single place, and more than that, a place that’s accessible to anyone, not just those who subscribe to proprietary datasets.”
Taggart and McKinnon are well placed to do this. In addition to charities data, Taggart has created websites that make it easier to interrogate council spending data and hyperlocal websites; McKinnon has done the same for the New Zealand parliament and for UK lobbying.