I was very excited recently to read on the Scraperwiki mailing list that the website was working on making it possible to create an RSS feed from a SQL query.
Yes, that’s the sort of thing that gets me excited these days.
But before you reach for a blunt object to knock some sense into me, allow me to explain…
Scraperwiki has, until now, done very well at making it easier to get hold of hard-to-reach data. It has done this in two ways: firstly by creating an environment which lowers the technical barrier to creating scrapers (these get hold of the data); and secondly by lowering the social barrier to creating scrapers (by hosting a space where journalists can ask developers for help in writing them).
This move, however, does something different.
It allows you to ask questions – of any dataset on the site. Not only that, but it allows you to receive updates as those answers change. And those updates come in an RSS feed, which opens up all sorts of possibilities around automatically publishing those answers.
The blog post explaining the development already has a couple of examples of this in practice:
Anna, for example, has scraped data on alcohol licence applications. The new feature not only allows her to get a constant update of new applications in her RSS reader – but you could also customise that feed to tell you about licence applications on a particular street, or from a particular applicant, and so on.
You will need to know some SQL – a language widely used in data journalism, particularly in the US – but it’s pretty simple to learn: as a query language, it is designed to ask questions like ‘Select all the applications from that dataset where the application is of this status and the applicant has this name’.
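To give a flavour of that kind of question, here is a minimal sketch using Python’s built-in sqlite3 module. The table and column names are invented for illustration – they are not the actual schema of Anna’s scraper or any Scraperwiki dataset:

```python
import sqlite3

# Hypothetical dataset of alcohol licence applications,
# standing in for a scraped Scraperwiki table.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE applications (applicant TEXT, street TEXT, status TEXT)"
)
conn.executemany(
    "INSERT INTO applications VALUES (?, ?, ?)",
    [
        ("The Red Lion", "High Street", "pending"),
        ("Corner Shop Ltd", "Mill Road", "granted"),
        ("The Red Lion", "Mill Road", "pending"),
    ],
)

# 'Select all the applications where the application is of this
# status and the applicant has this name':
rows = conn.execute(
    "SELECT * FROM applications WHERE status = ? AND applicant = ?",
    ("pending", "The Red Lion"),
).fetchall()

for row in rows:
    print(row)
```

A query like this is what would sit behind a customised RSS feed: swap the `WHERE` clause to filter by street instead, and the feed updates only for that street.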
And because RSS is so flexible, Stuart can use the same technology to publish live updates on restaurant inspections to @EatSafeWalsall (it could also feed a widget on a blog or website, or a map, a Facebook page, or an email newsletter).
So you can put that blunt object away. This makes Scraperwiki useful in wholly new ways: asking questions, and publishing and distributing the results, automatically.