In our latest interview with hyperlocal practitioners, Damian Radcliffe speaks to Mark Baynes from Love Wapping. A journalist, professional photographer and user experience designer, Mark explains how his twin loves of data and wildlife have manifested themselves in this East London hyperlocal site.
The fifth in our new series of Hyperlocal Voices explores the work done by the team behind the Londonist. Despite having a large geographic footprint – Londonist covers the whole of Greater London – the site is full of ultra-local content, as well as featuring stories and themes which span the whole of the capital.
Having worked for the BBC News Entertainment website for a decade, Darryl Chamberlain took voluntary redundancy and set up the wildly successful 853 blog. As part of the Hyperlocal Voices series he shares some of the secrets of his success.
1) Who were the people behind the blog, and what were their backgrounds?
853’s all mine. My background’s actually in showbiz news. I worked for the BBC News website’s entertainment desk for a decade in a variety of roles – mainly sub-editing and being the daily editor, but also reporting and feature writing.
I took voluntary redundancy and a career break in 2009 – standing in a council election in May 2010, and doing odd bits of freelance work. While standing in an election will probably leave me hopelessly biased in many eyes, it helped introduce me to local issues which simply weren’t being touched, and potential contacts of all political hues. After my glorious defeat, I realised I could do a bit more for my local area by sticking to what I was good at – finding things out and writing about them.
Tonight I had the pleasure of chairing an extremely informative panel discussion on data and the future of journalism at the first London Linked Data Meetup. On the panel were:
- Martin Belam (Information Architect, The Guardian; blogger, Currybet)
- John O’Donovan (Chief Architect, BBC News Online)
- Dan Brickley (Friend of a Friend project; VU University, Amsterdam; SpyPixel Ltd; ex-W3C)
- Leigh Dodds (Talis)
What follows is a series of notes from the discussion, which I hope are of some use.
For a primer on Linked Data there is A Skim-Read Introduction to Linked Data; Linked Data: The Story So Far (PDF) by Tom Heath, Christian Bizer and Tim Berners-Lee; and this TED video by Sir Tim Berners-Lee (who was on the panel before this one).
To set some brief context, I talked about how 2009 was, for me, a key year in data and journalism – largely because it has been a year of crisis in both publishing and government. The seminal point in all of this has been the MPs’ expenses story, which demonstrated both the power of data in journalism and the need for transparency from government – seen, for example, in the government’s appointment of Sir Tim Berners-Lee, its call for developers to suggest things to do with public data, and the imminent launch of Data.gov.uk around the same issue.
Even before then, the New York Times and the Guardian both launched APIs at the beginning of the year; MSN Local and the BBC have both been working with Wikipedia; and we’ve seen the launch of a number of startups and mashups around data, including Timetric, Verifiable, BeVocal, OpenlyLocal, MashTheState, the open source release of Everyblock, and Mapumental.
Q: What are the implications of paywalls for Linked Data?
The general view was that Linked Data – specifically standards like RDF – would allow users and organisations to access information about content even if they couldn’t access the content itself. To give a concrete example: rather than linking to a ‘wall’ that simply requires payment, it would be clearer what the content beyond that wall related to (e.g. key people, organisations, the author, and so on).
Leigh Dodds felt that using standards like RDF would allow organisations to more effectively package content in commercially attractive ways, e.g. ‘everything about this organisation’.
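As a rough illustration of what such a machine-readable description might look like, here is a minimal sketch in plain Python (no RDF library) that emits statements about a hypothetical paywalled article as N-Triples. All the URIs, property names and values are illustrative assumptions, loosely based on Dublin Core terms.

```python
# Publish machine-readable statements about a paywalled article, so that
# readers and aggregators can see what lies behind the wall without
# accessing the text itself. Everything below is a made-up example.
DCT = "http://purl.org/dc/terms/"
article = "http://example.com/articles/expenses-latest"

triples = [
    (article, DCT + "title", '"MPs\' expenses: the latest figures"'),
    (article, DCT + "creator", '"A. Reporter"'),
    (article, DCT + "subject",
     "<http://dbpedia.org/resource/United_Kingdom_parliamentary_expenses_scandal>"),
    (article, DCT + "accessRights", '"subscription required"'),
]

def to_ntriples(triples):
    """Serialise (subject, predicate, object) tuples as N-Triples lines."""
    lines = []
    for s, p, o in triples:
        # Literals arrive pre-quoted; bare URIs get angle brackets.
        obj = o if o.startswith(("<", '"')) else f"<{o}>"
        lines.append(f"<{s}> <{p}> {obj} .")
    return "\n".join(lines)

print(to_ntriples(triples))
```

In a real deployment this description would sit alongside (or inside) the article page, while the article body itself stayed behind the paywall.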
Q: What can bloggers do to tap into the potential of Linked Data?
This drew some blank responses, but Leigh Dodds was most forthright, arguing that the onus lay with developers to do things that would make it easier for bloggers to, for example, visualise data. He also pointed out that currently if someone does something with data it is not possible to track that back to the source and that better tools would allow, effectively, an equivalent of pingback for data included in charts (e.g. the person who created the data would know that it had been used, as could others).
Q: Given that the problem for publishing lies in advertising rather than content, how can Linked Data help solve that?
Dan Brickley suggested that OAuth technologies (where you use a single login identity for multiple sites that contains information about your social connections, rather than creating a new ‘identity’ for each) would allow users to state more precisely how they experience content, for instance: ‘I only want to see article comments by users who are also my Facebook and Twitter friends.’
The same technology would allow for more personalised, and therefore more lucrative, advertising.
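A minimal sketch of the comment-filtering idea, assuming the site already knows the reader’s contacts from each network (all names and data structures here are made-up stand-ins, and I have taken the union of the two networks, though the quote could equally imply an intersection):

```python
# Sketch: show a reader only the comments written by people they already
# follow. In practice the friend lists would come from an OAuth-style login
# against each network; here they are hard-coded placeholders.
facebook_friends = {"alice", "bob"}
twitter_friends = {"bob", "carol"}
my_contacts = facebook_friends | twitter_friends  # union of both networks

comments = [
    {"author": "bob", "text": "Good breakdown of the expenses data."},
    {"author": "mallory", "text": "First!"},
    {"author": "carol", "text": "The Guardian API exposes this too."},
]

def comments_from_contacts(comments, contacts):
    """Keep only comments whose author is in the reader's contact set."""
    return [c for c in comments if c["author"] in contacts]

visible = comments_from_contacts(comments, my_contacts)
```

The same contact set could just as easily drive which adverts are shown, which is where the more lucrative personalisation comes in.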
John O’Donovan felt the same could be said about content itself – more accurate data about content would allow for more specific selling of advertising.
Martin Belam quoted James Cridland on radio: “[The different operators] agree on technology but compete on content”. The same was true of advertising but the advertising and news industries needed to be more active in defining common standards.
Leigh Dodds pointed out that semantic data was already being used by companies serving advertising.
I asked members of the audience who they felt were the heroes and villains of Linked Data in the news industry. The Guardian and BBC came out well – The Daily Mail were named as repeat offenders who would simply refer to “a study” and not say which, nor link to it.
Martin Belam pointed out that The Guardian is increasingly asking itself ‘How will that look through an API?’ when producing content, representing a key shift in editorial thinking. He added that if users of the platform were swallowing up significant bandwidth or driving significant traffic, that would probably warrant talking to them about a more formal relationship (either customer-provider or partnership).
A number of references were made to the problem of provenance – being able to identify where a statement came from. Dan Brickley specifically spoke of the problem with identifying the source of Twitter retweets.
Dan also felt that the problem of journalists not linking would be solved by technology. In conversation previously, he also talked of “subject-based linking” and the impact of SKOS and linked data style identifiers. He saw a problem in that, while new articles might link to older reports on the same issue, older reports were not updated with links to the new updates. Tagging individual articles was problematic in that you then had the equivalent of an overflowing inbox.
(I’ve invited all four participants to correct any errors and add anything I’ve missed.)
Finally, here’s a bit of video from the very last question addressed in the discussion (filmed with thanks by @countculture):
I’ve very quickly created a Yahoo! Pipes mashup for today’s council and London mayoral elections in the UK. All it does at the moment is:
- takes the RSS feeds for Tweetscan searches for ‘election’, ‘voted’, ‘voting’, ‘vote’, ‘Ken Livingstone’ and ‘Boris Johnson’,
- gets rid of duplicate results,
- and spits out a single feed.
- UPDATE: it now also takes feeds from Google News and Technorati searches for the local elections and the two London mayoral candidates,
- and filters out anything mentioning ‘Zimbabwe’, as reports on those elections were coming through.
I’d like to invite you to clone the mashup and make improvements. Or you can just suggest them here.
Some things I’d like to do are: add images; geo information and mapping; other feeds; filtering based on user input (e.g. location).
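For anyone who would rather experiment outside Pipes, the merge, deduplicate and filter steps above can be sketched in a few lines of Python. The feed URLs are placeholders, the fetching helper is an assumption, and only the filtering logic is spelled out in full:

```python
import urllib.request
import xml.etree.ElementTree as ET

# Placeholder URLs standing in for the Tweetscan, Google News and
# Technorati search feeds used in the Pipes version.
FEED_URLS = [
    "http://example.com/search.rss?q=election",
    "http://example.com/search.rss?q=boris+johnson",
]

def fetch_items(url):
    """Download one RSS feed and return its (title, link) pairs."""
    with urllib.request.urlopen(url) as response:
        root = ET.fromstring(response.read())
    return [(item.findtext("title", ""), item.findtext("link", ""))
            for item in root.iter("item")]

def dedupe_and_filter(items):
    """Drop duplicate links, and anything mentioning Zimbabwe."""
    seen, kept = set(), []
    for title, link in items:
        if link in seen or "zimbabwe" in title.lower():
            continue
        seen.add(link)
        kept.append((title, link))
    return kept

def build_feed(urls=FEED_URLS):
    """Merge all the feeds into one cleaned list of items."""
    merged = []
    for url in urls:
        merged.extend(fetch_items(url))
    return dedupe_and_filter(merged)
```

Adding the geo information and location filtering mentioned above would mean parsing extra RSS elements per item, which is where this quickly stops being a five-minute job.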
On the evening of Thursday May 29th I’ll be on a discussion panel at ‘Power Your Business with Web 2.0’. That’s at the Technology Innovation Centre (B4 7XG) from 6pm to 10pm. Email email@example.com or register online at www.creativenetworksonline.com
On Friday June 13th I’ll be at a conference at Westminster University. The day will include the official launch of the second edition of the book Investigative Journalism, for which I’ve written a chapter on ‘Investigative Journalism and Blogs’. I’ll be on a panel discussing “What is the point of investigative journalism in the online media world?”