Monthly Archives: October 2010

Creating an emergency notification system in 15 hours

I’ve written a post on the Scraperwiki blog about a hackathon I attended where a small group of developers and people with experience of crowdsourcing in emergencies created a fantastic tool to inform populations in an emergency.

The primary application is non-journalistic, but the subject matter has obvious journalistic potential for any event that requires exchanges of information. Here are just some that spring to mind:

  • A protest, where protesters and local residents can find out where it is at any given moment and which streets are closed.
  • A football match with potential for violence (e.g. a local derby), where supporters can be alerted to any trouble and told which routes to use to avoid it.
  • A music festival, where you could text the names of the bands you want to see and receive alerts of scheduled appearances and any delays.
  • A conference, where you could receive all of the above – as well as text updates on presentations that you’re missing (taken from hashtagged tweets, even).

There are obvious commercial applications for some of the above too – you might have to register your mobile ahead of the event and pay a fee to ensure you receive the texts.

Not bad for 15 hours’ work.

You can read the blog post in full here.

A template for '100 percent reporting'

progress bar for 100 percent reporting

Last night Jay Rosen blogged about a wonderful framework for networked journalism – what he calls the ‘100 percent solution’:

“First, you set a goal to cover 100 percent of… well, of something. In trying to reach the goal you immediately run into problems. To solve those problems you often have to improvise or innovate. And that’s the payoff, even if you don’t meet your goal.”

In the first example, he mentions a spreadsheet. So I thought I’d create a template for that spreadsheet that tells you just how far you are in achieving your 100% goal, makes it easier to organise newsgathering across a network of actors, and introduces game mechanics to make the process more pleasurable. Continue reading
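The core of such a template is simple arithmetic: count what’s covered, divide by the target list, and show the result as a progress figure. As a rough sketch of that calculation (the field names here are hypothetical, not taken from Rosen’s post or the template itself):

```python
# Sketch: computing the headline figure for a "100 percent" goal.
# Each item is something to cover (a ward, a school, a street)
# with a flag for whether it has been reported yet.
items = [
    {"name": "Ward 1", "covered": True},
    {"name": "Ward 2", "covered": False},
    {"name": "Ward 3", "covered": True},
]

def percent_covered(items):
    """Return coverage as a percentage of the total list."""
    done = sum(1 for item in items if item["covered"])
    return 100 * done / len(items)

print(f"{percent_covered(items):.0f}% covered")  # prints "67% covered"
```

In a spreadsheet the same figure is a single formula (e.g. a COUNTIF divided by a COUNT), which is what drives the progress bar.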

Review: Yahoo! Pipes tutorial ebook

Pipes Tutorial ebook

I’ve been writing about Yahoo! Pipes for some time, and am consistently surprised that there aren’t more books on the tool. Pipes Tutorial – an ebook currently priced at $14.95 – is clearly aiming to address that gap.

The book has a simple structure: it is, in a nutshell, a tour around the various ‘modules’ that you combine to make a pipe.

Some of these will pull information from elsewhere – RSS feeds, CSV spreadsheets, Flickr, Google Base, Yahoo! Local and Yahoo! Search, or entire webpages.

Some allow the user to input something themselves – for example, a search phrase, or a number to limit the type of results given.

And others do things with all the above – combining them, splitting them, filtering, converting, translating, counting, truncating, and so on.
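The three kinds of module map neatly onto a familiar programming pattern: a source, some user input, and a chain of operators. A rough stand-alone sketch of the same idea in plain Python (the feed here is a hard-coded list standing in for a fetched RSS feed, and the function names are mine, not Pipes terminology):

```python
# Sketch: the Pipes pattern - fetch a source, take user input,
# then filter and truncate - expressed as plain Python functions.
def fetch(items):
    # stands in for a Fetch Feed / Fetch Page module
    return list(items)

def filter_items(items, keyword):
    # stands in for a Filter module: keep items whose title
    # contains the user's search phrase
    return [i for i in items if keyword.lower() in i["title"].lower()]

def truncate(items, n):
    # stands in for a Truncate module: keep only the first n results
    return items[:n]

feed = [{"title": "Data journalism tools"},
        {"title": "Football results"},
        {"title": "Data visualisation tips"}]

# the "user input" modules: a search phrase and a result limit
result = truncate(filter_items(fetch(feed), "data"), 1)
```

In Pipes you wire these steps together visually rather than writing them, but the flow of data from module to module is the same.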

When combined, this makes for some powerful possibilities – unfortunately, the book’s one-dimensional structure means it doesn’t show enough of them.

Modules in isolation

While the book offers a good introduction to the functionality of the various parts of Yahoo! Pipes, it rarely demonstrates how those parts can be combined. Tutorial books typically take you through a project that draws on the full power of the tools covered, but Pipes Tutorial lacks this vital element. Modules are sometimes combined in the book, but mainly because that is the only way to show how a single module works, rather than for any broader pedagogical objective.

At other times a module is explained in isolation, with no explanation of how the results might actually be used. The Fetch Page module, for example – extremely useful for scraping content from a webpage – is explained without reference to how to publish the results; there is only a passing mention that the reader will have to use ‘other modules’ to assign data to types, and that Regex will be needed to clean it up.
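For readers unfamiliar with that workflow, the Fetch Page pattern amounts to pulling in raw HTML and cleaning it with a regular expression. A minimal offline sketch of the equivalent in code (the HTML is inline here so it runs without a network connection; a real version would fetch the page first, and the markup is invented for illustration):

```python
import re

# Stand-in for a fetched page; a real scraper would pull this with
# urllib.request.urlopen(url).read() before cleaning it.
html = '<div class="story"><h2>Council cuts budget</h2><p>Details...</p></div>'

# The regex cleanup step: pull out just the headline text,
# using a non-greedy match between the h2 tags.
match = re.search(r"<h2>(.*?)</h2>", html)
headline = match.group(1) if match else None
```

That missing “and then what?” step – from fetched page to usable data – is exactly what the book leaves the reader to work out.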

Continue reading

Hyperlocal voices: James Rudd, Towcester News

Hyperlocal voices: Towcester News

James Rudd launched his website covering “Towcester and the villages of NN12” after conducting research for a newspaper group. “Their mentality was one of territory and regions,” he explains, and they didn’t listen to his suggestion of a hyperlocal focus – so he went ahead and launched it independently. This is the latest in a series of interviews with hyperlocal publishers.

Who were the people behind the blog, and what were their backgrounds?

I originally worked in the family business of free distribution newspapers in the late 70s and early 80s (after that, I spent years on the media side of the pre-press world, mostly working on magazines and catalogues), so the concept was quite clear in my mind.

What made you decide to set up the blog?

I did some research for a newspaper group on the internet and discovered that their mentality was one of territory and regions. This, however, only suited national and large local companies for advertising. I suggested that they produce hyperlocal websites providing advertising opportunities and content in smaller areas. Continue reading

Practical steps for improving visualisation

Here’s a useful resource for anyone involved in data journalism and visualising the results. ‘Dataviz’ – a site for “improving data visualisation in the public sector” – features a step-by-step guide to good visualisation, as well as case studies and articles.

Although it’s aimed at public sector workers, the themes in that guide provide a good starting point for journalists: “What do we need to do?”, “How do we do it?” and “How did we do?” Each provides a potential story angle. Clicking through those themes takes you through some of the questions to ask of the data, leading to a gallery of visualisation possibilities. Even if you never get that far, it’s a good way to narrow the question you’re asking – or find other questions that might result in interesting stories and insights.

Lessons in community from community managers #12: Lorna Mitchell

It’s been a while since the last in the community management series. In this latest post Lorna Mitchell gives her three tips. Lorna is co-project lead on an open source development project for gathering event feedback. She says: “The other project lead is Chris Cornutt, a guy I’ve met three times over three years, who lives in a timezone six hours out from mine.”

Lorna worked as a telecommuter for a number of years and did community relations in that role, and was involved in running PHPWomen, a global user group bringing together women programming PHP, “with all the cultural and linguistic variations that brings.”

Lorna’s tips are:

Keep communicating

A running commentary of what you are doing and thinking is essential when you are working with people who can’t see you and may not have met you.

Communicate appropriately

Don’t hold a discussion over Twitter that would be better conducted at length over email. Make a phone call rather than having days of comment and response on a bug tracker.

Be inclusive

Nothing turns newcomers off faster than lots of in-jokes or references to people they don’t know or places they didn’t go.

Mapping the budget cuts

budget cuts map

Richard Pope and Jordan Hatch have been building a very useful site tracking recent budget cuts, building up to this week’s spending review.

Where Are The Cuts? uses the code behind the open source Ushahidi platform (covered previously on OJB by Claire Wardle) to present a map of the UK representing where cuts are being felt. Users can submit their own reports of cuts, or add details to others via a comments box.

It’s early days in the project – currently many of the cuts are to national organisations with local-level impacts yet to be dug out.

Closely involved is the public expenditure-tracking site Where Does My Money Go? which has compiled a lot of relevant data.

Meanwhile, in Birmingham a couple of my MA Online Journalism students have set up a hyperlocal blog for the 50,000 public sector workers in the region, primarily to report those budget cuts and how they are affecting people. Andy Watt, who – along with Hedy Korbee – is behind the site, has blogged about the preparation for the site’s launch here. It’s a good example of how journalists can react to a major issue with a niche blog. Andy and Hedy will be working with the local newspapers to combine expertise.

Hyperlocal voices: Bart Brouwers, Telegraaf hyperlocal project, Netherlands

Bart Brouwers has been overseeing the establishment of a whole group of hyperlocal sites in the Netherlands with the Telegraaf Media Group. As part of the Hyperlocal Voices series, he explains the background to the project and what they’ve learned so far. Two presentations on the project can be seen above.

Who were the people behind the blog, and what were their backgrounds?

About a year ago, I came up with the plan for a hyperlocal, hyperpersonal news and data network covering all of the Netherlands. My dream was to give every single Dutchman (we have 16 million & counting…) his own platform for local relevance.

I wanted to roll it out myself, and in order to get it financed I made contact with the board of directors of the Telegraaf Media Groep. I was already working for them (as editor-in-chief of the national free newspaper Sp!ts and, before that, as editor-in-chief of the regional newspaper Dagblad De Limburger), so it felt natural to tell and ask them before pitching my idea somewhere else.

What I didn’t know was that TMG was already working on a hyperlocal platform, so after a few talks we decided to combine the two plans. So instead of quitting TMG and starting my own company, I’m still an employee.

What made you decide to set up the blogs?

I was convinced local relevance would be a strong force in media. The combination of local business and local information (news, data) could easily become the trigger for a fine enterprise. Continue reading

Hyperlocal voices: Warren Free, Tamworth Blog

Hyperlocal blog: Tamworth Blog

In the latest in the hyperlocal voices series, Tamworth Blog’s Warren Free talks about how frustration with a lack of timely local coverage – and the example set by the nearby Lichfield Blog – led him to start publishing last year.

Who were the people behind the blog, and what were their backgrounds?

I started the blog after seeing what was happening around the Midlands, primarily in Lichfield, and realised the concept could give us something in Tamworth where we could communicate the news as it happened. At the time I was working from home, so I was in Tamworth the majority of the time.

My background, though, isn’t one littered with journalism experience. My only brush with journalism was during my GCSEs, where I studied Media Studies: we took part in a national newspaper competition, where we came in the top 20. That’s kind of where I left it, until Tamworth Blog was set up in 2009.

What made you decide to set up the blog?

I saw what was happening in Lichfield and suffered the same frustration: local news in Tamworth wasn’t accessible unless you purchased the weekly newspaper – great if you wanted to find out what happened on Saturday a week later. So I set out to provide this service to people in Tamworth. Continue reading

Manchester police tweets – live data visualisation by the MEN

Manchester police tweets - live data visualisation

Greater Manchester Police (GMP) have been experimenting today with tweeting every incident they deal with. The novelty value of the initiative has been widely reported – but local newspaper the Manchester Evening News has taken the opportunity to ask some deeper questions of the data generated by experimenting with data visualisation.

A series of bar charts – generated from Google spreadsheets and updated throughout the day – provide a valuable – and instant – insight into the sort of work that police are having to deal with.

In particular, the newspaper is testing the police’s claim that they spend a great deal of time dealing with “social work” as well as crime. At the time of writing, it certainly does take up a significant proportion – although not the “two-thirds” mentioned by GMP chief Peter Fahy. (Statistical disclaimer: the data does not yet even represent 24 hours, so is not yet going to be a useful guide. Fahy’s statistics may be more reliable).

Also visualised are the areas responsible for the most calls, the social-crime breakdown of incidents by area, and breakdowns of social incidents and serious crime incidents by type.

I’m not sure how much time they had to prepare for this, but it’s a good quick hack.

That said, the visualisation could be improved: 3D bars are never a good idea, for instance, and the divisional breakdown showing serious crime versus “social work” is difficult to interpret visually (percentages of the whole would be easier to compare directly). The breakdowns of serious crimes and “social work”, meanwhile, should be ranked from most frequent downwards, with labelling used rather than colour.
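To make the “percentages of the whole” point concrete: converting each division’s counts to shares of its own total puts areas with very different call volumes on the same scale. A quick sketch with made-up figures (the division names and numbers are illustrative, not GMP’s data):

```python
# Hypothetical incident counts per division - raw totals differ,
# which makes side-by-side bars hard to compare.
divisions = {
    "North": {"social": 120, "crime": 60},
    "South": {"social": 30, "crime": 30},
}

def as_percentages(counts):
    """Convert raw counts to percentages of that division's total."""
    total = sum(counts.values())
    return {k: round(100 * v / total, 1) for k, v in counts.items()}

shares = {name: as_percentages(c) for name, c in divisions.items()}
# North and South now compare directly despite different call volumes
```

The same normalisation is a one-line formula per cell in the Google spreadsheet driving the charts.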

Head of Online Content Paul Gallagher says that it’s currently a manual exercise that requires a page refresh to see updated visuals. But he thinks “the real benefit of this will come afterwards when we can also plot the data over time”. Impressively, the newspaper plans to publish the raw data and will be bringing it to tomorrow’s Hacks and Hackers Hackday in Manchester.

More broadly, the MEN is to be commended for spotting this more substantial angle to what could easily be dismissed as a gimmick by the GMP. Although that doesn’t stop me enjoying the headlines in coverage elsewhere (shown below).

UPDATE: The data is also visualised as a word cloud and line chart at Data Driven.

Manchester police twitter headlines