Open data in Spain – guest post by Ricard Espelt

Ahead of speaking this week in Barcelona, I spoke to a few people in Spain about the situation regarding open data in the country. One of those people is Ricard Espelt, a member of Nuestracausa, “a group of people who wanted to work on projects like MySociety [in Spain]”. The group broke up and Ricard now runs Redall Comunicacao. Among Ricard’s projects is Copons 2.0: an “approach to consensus decision making”.

This is what Ricard had to say about the problems around open data, e-democracy and bottom-up projects in Spain:

I think there are three points to bear in mind when we try to analyse how these tools are changing politics & public administration:

  • The process by which governments review and release data, so that it is easier for all citizens to use. Open data.
  • The process by which governments involve citizens in decisions. E-democracy.
  • The action of citizens (individuals or groups) to engage other citizens in working for the community. This is a good way to lobby and to influence government decisions.

Spain, like other countries, has been developing all these points with different levels of success.

Hyperlocal voices: the Worst of Perth

Having already interviewed hyperlocal bloggers in the US and the Netherlands, this week’s Hyperlocal Voices profiles an Australian blogger: The Worst of Perth. Launched three years ago to criticise a local newspaper, the blog is approaching a million views this year and is having an impact on the local political scene.

Who were the people behind the blog, and what were their backgrounds?

Just me. I have a background in stand-up comedy and photography amongst many things, with a bit of dabbling in graphic design and art too.

I worked in video production for quite a while (as well as a few occasions as best boy/lighting assistant on a tax write-off kung fu/zombie movie or two). I currently work for Curtin University and am also a student of Mandarin.

What made you decide to set up the blog?

Heh. Well, amusingly from an online journalism point of view, my very first motivation was to label a senior print journo “Australia’s worst journalist”!

Perth has a single daily newspaper, The West Australian (circulation, I think, about 250,000 daily), which in many people’s opinion has not been best served by being the monopoly daily provider. The paper and its journalists used to be a frequent target of TWOP, but not so much anymore.

The reason for this is at the heart of what’s happening to journalism around the world. Because The West was the only daily paper, in pre-news blog times, people used to be passionate about its faults.

Now no-one really cares how bad it is, because they can get their real news elsewhere. The paper hasn’t got any better; in fact it’s consistently worse. But the difference now is that nobody really cares that much.

Guest post – launching hyperlocal startups: Opinion 250 and Locally Informed

In a guest post for the Online Journalism Blog, Shane Redlick shares his experiences of launching two hyperlocal startups: one, launched five years ago, based on a traditional advertising model; the second, launched this year, seeking to innovate with a broker-based model and crowdsourcing technologies.

2005: Opinion 250 News

In 2005 I, along with two partners, launched the hyperlocal startup Opinion 250 News in Prince George, British Columbia (Canada). My company and I handled technical development, admin and financial tasks, while the other two partners (long-time media industry people, semi-retired) did all the reporting and managed a small team of topical/weekly writers.

All the local news content is original. We had a lot going for us and we managed to make some good gains in the first year. To date the company is profitable and can pay modest salaries to those involved, but it has taken the better part of four years to reach that point.

The effect we were having locally was significant (read comments to story here, for instance). The biggest challenge for us was building monthly ad revenue.

We did not sell on a CPC or CPM basis; it was a flat monthly cost. We had a couple of people selling the ads, and we had quite a bit of local goodwill and resulting support via ads. Even with a lot going for us, however, this was a big challenge. In fact, in the first month, when we launched, we’d sold nearly $10,000 CAD (monthly recurring) in ads.

Help Me Investigate – anatomy of an investigation

Earlier this year Andy Brightwell and I conducted some research into one of the successful investigations on my crowdsourcing platform Help Me Investigate. I wanted to know what had made the investigation successful – and how (or if) we might replicate those conditions for other investigations.

I presented the findings (presentation embedded above) at the Journalism’s Next Top Model conference in June. This post sums up those findings.

The investigation in question was ‘What do you know about The London Weekly?’ – an investigation into a free newspaper that was, they claimed, about to launch in London (part of the investigation was to establish whether this was a hoax).

The people behind the paper had made a number of claims about planned circulation, staffing and investment that most of the media reported uncritically. Martin Stabe, James Ball and Judith Townend, however, wanted to dig deeper. So, after an exchange on Twitter, Judith logged onto Help Me Investigate and started an investigation.

A month later members of the investigation had unearthed a wealth of detail about the people behind The London Weekly and the facts behind their claims. Some of the information was reported in MediaWeek and The Media Guardian podcast Media Talk; some formed the basis for posts on James Ball’s blog, Journalism.co.uk and the Online Journalism Blog. Some has, for legal reasons, remained unpublished.

Manchester Police tweets and the MEN – local data journalism part 2

Manchester Evening News visualisation of Police incident tweets

A week ago I blogged about how the Manchester Evening News were using data visualisation to provide a deeper analysis of the local police force’s experiment in tweeting incidents for 24 hours. In that post Head of Online Content Paul Gallagher said he thought the real benefit would “come afterwards when we can also plot the data over time”.

Now that data has been plotted, and you can see the results here.

In addition, you can filter the results by area, type (crime or ‘social work’) and category (the specific sort of crime or social issue). To give the technical background: Carl Johnstone put the data into a MySQL database, wrote some Perl code for the filters and used a Flash applet for the graphs.
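The original implementation used MySQL and Perl; as a rough illustration, the same area/type/category filter logic might look like this in Python. The field names here are assumptions for the sketch, not the actual schema used.

```python
# Sketch of filtering incident records by optional area, type and
# category, as described above. Field names are illustrative only.

def filter_incidents(incidents, area=None, type_=None, category=None):
    """Return only the incidents matching every filter that was supplied."""
    results = []
    for incident in incidents:
        if area and incident["area"] != area:
            continue
        if type_ and incident["type"] != type_:
            continue
        if category and incident["category"] != category:
            continue
        results.append(incident)
    return results

incidents = [
    {"area": "North", "type": "crime", "category": "burglary"},
    {"area": "South", "type": "social work", "category": "missing person"},
    {"area": "North", "type": "crime", "category": "vehicle"},
]

print(filter_incidents(incidents, area="North", type_="crime"))
```

In the real site this filtering would of course happen in a database query (a `WHERE` clause per supplied filter) rather than in application code.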

Creating an emergency notification system in 15 hours

I’ve written a post on the Scraperwiki blog about a hackathon I attended where a small group of developers and people with experience of crowdsourcing in emergencies created a fantastic tool to inform populations in an emergency.

The primary application is non-journalistic, but the subject matter has obvious journalistic potential for any event that requires exchanges of information. Here are just some that spring to mind:

  • A protest, where protesters and local residents can find out where it is at any moment and which streets are closed.
  • A football match with potential for violence (e.g. a local derby), where supporters can be alerted to any trouble and told which routes to use to avoid it.
  • A music festival, where you could text the names of the bands you want to see and receive alerts of scheduled appearances and any delays.
  • A conference, where you could receive all the above – as well as text updates on presentations that you’re missing (taken from hashtagged tweets, even).

There are obvious commercial applications for some of the above too – you might have to register your mobile ahead of the event and pay a fee to ensure you receive the texts.
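The core of all these scenarios is a keyword subscription list matched against outgoing alerts. A hypothetical sketch, with invented phone numbers and message formats (the hackathon tool’s actual design may differ):

```python
# Hypothetical keyword-subscription sketch: people register a phone
# number against a keyword, and an alert fans out to every subscriber.

subscribers = {}  # keyword -> set of phone numbers

def subscribe(phone, keyword):
    """Register a phone number for alerts on a keyword (case-insensitive)."""
    subscribers.setdefault(keyword.lower(), set()).add(phone)

def alert(keyword, message):
    """Return the (phone, message) pairs an SMS gateway would then send."""
    recipients = sorted(subscribers.get(keyword.lower(), set()))
    return [(phone, message) for phone in recipients]

subscribe("+447700900001", "roadworks")
subscribe("+447700900002", "Roadworks")
print(alert("roadworks", "High St closed until 6pm"))
```

A paid version would simply gate `subscribe` behind registration and payment, as suggested above.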

Not bad for 15 hours’ work.

You can read the blog post in full here.

A template for '100 percent reporting'

progress bar for 100 percent reporting

Last night Jay Rosen blogged about a wonderful framework for networked journalism – what he calls the ‘100 percent solution’:

“First, you set a goal to cover 100 percent of… well, of something. In trying to reach the goal you immediately run into problems. To solve those problems you often have to improvise or innovate. And that’s the payoff, even if you don’t meet your goal.”

In the first example he mentions a spreadsheet. So I thought I’d create a template for that spreadsheet: one that tells you just how far you are towards your 100% goal, makes it easier to organise newsgathering across a network of actors, and introduces game mechanics to make the process more pleasurable.
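The spreadsheet’s logic can be sketched in a few lines: track items against the 100% goal, compute a progress figure, and rank contributors as a simple game mechanic. The column names here are assumptions for illustration, not the template’s actual layout.

```python
# Minimal sketch of the '100 percent' spreadsheet: a task list, a
# progress percentage, and a contributor leaderboard (game mechanic).

tasks = [
    {"item": "Ward A results", "done": True,  "by": "alice"},
    {"item": "Ward B results", "done": True,  "by": "bob"},
    {"item": "Ward C results", "done": False, "by": None},
    {"item": "Ward D results", "done": True,  "by": "alice"},
]

def progress(tasks):
    """Percentage of items covered so far, rounded to a whole number."""
    done = sum(1 for t in tasks if t["done"])
    return round(100 * done / len(tasks))

def leaderboard(tasks):
    """Contributors ranked by how many items they have covered."""
    counts = {}
    for t in tasks:
        if t["done"]:
            counts[t["by"]] = counts.get(t["by"], 0) + 1
    return sorted(counts.items(), key=lambda kv: -kv[1])

print(progress(tasks))
print(leaderboard(tasks))
```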

Review: Yahoo! Pipes tutorial ebook

Pipes Tutorial ebook

I’ve been writing about Yahoo! Pipes for some time, and am consistently surprised that there aren’t more books on the tool. Pipes Tutorial – an ebook currently priced at $14.95 – is clearly aiming to address that gap.

The book has a simple structure: it is, in a nutshell, a tour around the various ‘modules’ that you combine to make a pipe.

Some of these will pull information from elsewhere – RSS feeds, CSV spreadsheets, Flickr, Google Base, Yahoo! Local and Yahoo! Search, or entire webpages.

Some allow the user to input something themselves – for example, a search phrase, or a number to limit the type of results given.

And others do things with all the above – combining them, splitting them, filtering, converting, translating, counting, truncating, and so on.

When combined, the modules make for some powerful possibilities – unfortunately, the book’s one-dimensional structure means it doesn’t show enough of them.
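Pipes itself is a visual tool, but the combination pattern – fetch, filter, truncate – can be sketched in code. Here is a rough Python analogue over feed-style items; the items and field names are invented for illustration.

```python
# A chain of Pipes-style modules over a list of feed items.
# Each function mimics one module; chaining them mimics a pipe.

items = [
    {"title": "Council cuts budget", "source": "feed-a"},
    {"title": "Local team wins cup", "source": "feed-b"},
    {"title": "Council elects mayor", "source": "feed-a"},
    {"title": "Weather warning", "source": "feed-b"},
]

def filter_items(items, keyword):
    """Keep items whose title contains the keyword (like the Filter module)."""
    return [i for i in items if keyword.lower() in i["title"].lower()]

def truncate(items, limit):
    """Keep only the first `limit` items (like the Truncate module)."""
    return items[:limit]

# Chaining the modules, as a pipe would:
result = truncate(filter_items(items, "council"), 1)
print(result)
```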

Modules in isolation

While the book offers a good introduction to the functionality of the various parts of Yahoo! Pipes, it rarely demonstrates how those parts can be combined. Typically, tutorial books take you through a project that utilises the power of the tools covered, but Pipes Tutorial lacks this vital element. Sometimes modules are combined in the book, but mainly because that is the only way to show how a single module works, rather than for any broader pedagogical objective.

At other times a module is explained in isolation, with no explanation of how the results might actually be used. The Fetch Page module, for example – which is extremely useful for scraping content from a webpage – is explained without reference to how to publish the results, with only a passing mention that the reader will have to use ‘other modules’ to assign data to types, and that Regex will be needed to clean it up.
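That scrape-then-clean pattern is simple enough to sketch. Here is a rough Python analogue of Fetch Page plus Regex, run against an inline HTML snippet rather than a live page; the markup is invented for illustration.

```python
import re

# Fetch-Page-plus-Regex pattern in miniature: pull a fragment out of a
# page, then strip leftover markup to leave clean text.

html = '<div class="story"><h2>Headline here</h2><p>Body text.</p></div>'

# Step 1: grab what sits between the <h2> tags (the "fetch" of a fragment).
match = re.search(r"<h2>(.*?)</h2>", html)

# Step 2: clean up with regex, removing any remaining tags.
headline = re.sub(r"<[^>]+>", "", match.group(1)).strip() if match else None
print(headline)
```

For anything beyond quick one-offs, a proper HTML parser is more robust than regex, but this mirrors the approach the book describes.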


Hyperlocal voices: James Rudd, Towcester News


James Rudd launched his website covering “Towcester and the villages of NN12” after conducting research for a newspaper group. “Their mentality was one of territory and regions,” he explains, and they didn’t listen to his suggestion of a hyperlocal focus – so he went ahead and launched it independently. This is the latest in a series of interviews with hyperlocal publishers.

Who were the people behind the blog, and what were their backgrounds?

I originally worked in the family business of free-distribution newspapers in the late 70s and early 80s (and after that spent years on the media side of the pre-press world, mostly working on magazines and catalogues), so the concept was quite clear in my mind.

What made you decide to set up the blog?

I did some research for a newspaper group on the internet and discovered that their mentality was one of territory and regions. This, however, only suited national and large local companies for advertising. I suggested that they produce hyperlocal websites providing advertising opportunities and content in smaller areas.

Practical steps for improving visualisation

Here’s a useful resource for anyone involved in data journalism and visualising the results. ‘Dataviz’ – a site for “improving data visualisation in the public sector” – features a step by step guide to good visualisation, as well as case studies and articles.

Although it’s aimed at public sector workers, the themes in the former provide a good starting point for journalists: “What do we need to do?”; “How do we do it?” and “How did we do?” Each provides a potential story angle. Clicking through those themes takes you through some of the questions to ask of the data, taking you to a gallery of visualisation possibilities. Even if you never get that far, it’s a good way to narrow the question you’re asking – or find other questions that might result in interesting stories and insights.