Category Archives: blogs

Crowdsourcing investigative journalism: a case study (part 2)

Continuing the serialisation of the research underpinning a new Help Me Investigate project, in this second part I describe the basis for the way that the original site was constructed – and the experiences of its first few months. (Part 1 is available here)

Building the site

By 2008 two members had joined the Help Me Investigate team: web developer Stef Lewandowski and community media specialist Nick Booth. That year the project won funding from Channel 4’s 4iP fund and regional development agency Screen West Midlands.

Two part-time members of ‘staff’ were recruited to work one day per week on the site throughout the 12-week funded ‘proof of concept’ period: a support journalist and a community manager.

Site construction began in April 2009, starting by expanding the four target user profiles in the bid document into 12 profiles of users who might be attracted to the site, identifying what they would want to do with the site and how the design might facilitate that – or prevent it (as in the case, for example, of users who might want to hijack or hoax the site).

This was followed by rapid site development, and testing for 6 weeks with a small private beta. The plan was to use ‘agile’ principles of web development – launching when the site was not ‘finished’ to gain an understanding of how users actually interacted with the technology, and saving the majority of the development budget for ‘iterations’ of the software in response to user demand.

The resulting site experience worked as follows. A user coming across the site was presented with two choices: to join an existing investigation, or to start their own. If they started an investigation they would be provided with suggestions for ways of breaking it down into smaller tasks and of building a community around the question being pursued. If they joined an existing investigation they would be presented with those tasks – called ‘challenges’ – that needed completing to take the investigation forward. They could then choose to accept a particular challenge and share the results of their progress underneath.

Development also took account of the concepts of Actor-Network Theory (Paterson and Domingo, 2008), which describes how the ‘inventors’ of a technology are not the only actors that shape its use: the technology itself (its limitations, its relationship with other technologies, and institutional and funding factors) and the people who use it would also be vital in shaping what happened from there.

Reserving the majority of the development budget to account for the influence of these ‘actors’ on the development of the technology was a key part of the planning of the site. This proved to be a wise strategy, as user behaviour differed in some respects from the team’s expectations, and development was able to adapt accordingly.

For legal reasons, casual visitors to the site (and search engines) could only see investigation titles (which were pre-moderated) and, later, the Reports and KnowledgeBase sections of the site (which were written by site staff). Challenges and updates (the results of challenges) – which were only post-moderated – could only be seen by registered users of the site.

A person could only become a user of the site if they were invited by another user. There was also a ‘request an invite’ section on the homepage. Non-UK requests were refused for legal reasons but most other requests were granted. At this stage the objective was not to build a huge user base but to develop a strong culture on the site that would then influence its healthy future development. This was a model based on the successful development of the constructive Seesmic video blogging community.

On July 1 HelpMeInvestigate.com went live with no promotion. The day after launch one tweet was published on Twitter, linking to the site. By the end of the week the site was investigating what would come to be one of the biggest stories of the summer in Birmingham – the overspend of £2.2m by the city council on a new website. It would go on to complete further investigations into parking tickets and the use of surveillance powers, as well as much smaller-scale questions such as how a complaint was handled, or why two bus companies were charging different prices on the same route.

In the next part I look at the strengths and limitations of the site’s model of working, and how people used the site in practice.

Crowdsourcing investigative journalism: a case study (part 1)

As I begin on a new Help Me Investigate project, I thought it was a good time to share some research I conducted into the first year of the site, and the key factors in how that project tried to crowdsource investigative and watchdog journalism.

The findings of this research have been key to the development of this new project. They also form the basis of a chapter in the book Face The Future, and another due to be published in the Handbook of Online Journalism next year (not to be confused with my own Online Journalism Handbook). Here’s the report:

In both academic and mainstream literature about the world wide web, one theme consistently recurs: the lowering of the barrier allowing individuals to collaborate in pursuit of a common goal. Whether it is creating the world’s biggest encyclopedia (Lih, 2009), spreading news about a protest (Morozov, 2011) or tracking down a stolen phone (Shirky, 2008), the rise of the network has seen a decline in the role of the formal organisation, including news organisations.

Two examples of this phenomenon were identified while researching a book chapter on investigative journalism and blogs (De Burgh, 2008). The first was an experiment by the Florida News-Press: when it started receiving calls from readers complaining about high water and sewage connection charges for newly constructed homes, the newspaper, short on in-house resources to investigate the leads, decided to ask its readers to help. The result is by now familiar as a textbook example of “crowdsourcing” – outsourcing a project to ‘the crowd’ or what Brogan & Smith (2009, p136) describe as “the ability to have access to many people at a time and to have them perform one small task each”:

“Readers spontaneously organized their own investigations: Retired engineers analyzed blueprints, accountants pored over balance sheets, and an inside whistle-blower leaked documents showing evidence of bid-rigging.” (Howe, 2006a)

The second example concerned contaminated pet food in the US, and did not involve a mainstream news organisation. In fact, it was frustration with poor mainstream ‘churnalism’ (see Davies, 2009) that motivated bloggers and internet users to start digging into the story. The resulting output from dozens of blogs ranged from useful information for pet owners and the latest news to the compilation of a database that suggested the official numbers of pet deaths recorded by the US Food and Drug Administration were short by several thousand. One site, Itchmo.com, became so popular that it was banned in China, the source of the pet food in question.

What was striking about both examples was not simply that people could organise to produce investigative journalism, but that this practice of ‘crowdsourcing’ had two key qualities that were particularly relevant to journalism’s role in a democracy. The first was engagement: in the case of the News-Press, for six weeks the story generated more traffic to its website than “ever before, excepting hurricanes” (Weise, 2007). Given that investigative journalism often concerns very ‘dry’ subject matter that has to be made appealing to a wider audience, these figures were surprising – and encouraging for publishers.

The second quality was subject: the contaminated pet food story was, in terms of mainstream news values, unfashionable and unjustifiable in terms of investment of resources. It appeared that the crowdsourcing model of investigation might provide a way to investigate stories which were in the public interest but which commercial and public service news organisations would not consider worth their time. More broadly, research on crowdsourcing suggested that it worked “best in areas that are not core to your product or central to your business model” (Tapscott and Williams, 2006, p82).

Investigative journalism: its history and discourses

De Burgh (2008, p10) defines investigative journalism as “distinct from apparently similar work [of discovering truth and identifying lapses from it] done by police, lawyers and auditors and regulatory bodies in that it is not limited as to target, not legally founded and usually earns money for media publishers.” The term is notoriously problematic and contested: some argue that all journalism is investigative, or that the recent popularity of the term indicates the failure of ‘normal’ journalism to maintain investigative standards. This contestation is a symptom of the various factors underlying the growth of the genre, which range from journalists’ own sense of a democratic role, to professional ambition and publishers’ commercial and marketing objectives.

More recently investigative journalism has been used to defend traditional print journalism against online publishing, with publishers arguing that true investigative journalism cannot be maintained without the resources of a print operation. This position has become harder to defend as online-only operations and journalists have won increasing numbers of awards for their investigative work – Clare Sambrook in the UK and VoiceOfSanDiego.com and Talking Points Memo in the US are three examples – while new organisations have been established to pursue investigations without any associated print operation, including Canada’s OpenFile, the UK’s Bureau of Investigative Journalism, and a number of bodies in the US such as ProPublica, the Florida Center for Investigative Reporting, and the Huffington Post’s investigative unit.

In addition, computer technology has started to play an increasingly important role in print investigative journalism: Stephen Grey’s investigation into the CIA’s ‘extraordinary rendition’ programme (Grey, 2006) was facilitated by the use of software such as Analyst’s Notebook, which allowed him to analyse large amounts of flight data and identify leads. The Telegraph’s investigation into MPs’ expenses was made possible by digitisation of data and the ability to store large amounts on a small memory stick. And newspapers around the world collaborated with the Wikileaks website to analyse ‘warlogs’ from Iraq and Afghanistan, and hundreds of thousands of diplomatic cables. More broadly the success of Wikipedia inspired a raft of examples of ‘Wiki journalism’ where users were invited to contribute to editorial coverage of a particular issue or field, with varying degrees of success.

Meanwhile, investigative journalists such as The Guardian’s Paul Lewis have been exploring a more informal form of crowdsourcing, working with online communities to break stories including the role of police in the death of newspaper vendor Ian Tomlinson; the existence of undercover agents in the environmental protest movement; and the death of a man being deported to Angola (Belam, 2011b).

This is part of a broader move to networked journalism explored by Charlie Beckett (2008):

“In a world of ever-increasing media manipulation by government and business, it is even more important for investigative journalists to use technology and connectivity to reveal hidden truths. Networked journalists are open, interactive and share the process. Instead of gatekeepers they are facilitators: the public become co-producers. Networked journalists “are ‘medium agnostic’ and ‘story-centric’”. The process is faster and the information sticks around longer.” (2008, p147)

As one of its best-known practitioners Paul Lewis talks particularly of the role of technology in his investigations – specifically Twitter – but also the importance of the crowd itself and journalistic method:

“A crucial factor that makes crowd-sourcing a success [was that] there was a reason for people to help, in this case a perceived sense of injustice and that the official version of events did not tally with the truth. Six days after Tomlinson’s death, Paul had twenty reliable witnesses who could be placed on a map at the time of the incident – and only one of them had come from the traditional journalistic tool of a contact number in his notebook.” (Belam, 2011b)

A further key skill identified by Lewis is listening to the crowd – although he sounds a note of caution in its vulnerability to deliberately placed misinformation, and the need for verification.

“Crowd-sourcing doesn’t always work […] The most common thing is that you try, and you don’t find the information you want […] The pattern of movement of information on the internet is something journalists need to get their heads around. Individuals on the web in a crowd seem to behave like a flock of starlings – and you can’t control their direction.” (Belam, 2011b)

Conceptualising Help Me Investigate

The first plans for Help Me Investigate were made in 2008 and were further developed over the next 18 months. They built on research into crowdsourced investigative journalism, as well as other research into online journalism and community management. In particular the project sought to explore concepts of “P2P journalism” which enables “more engaged interaction between and amongst users” (Bruns, 2005, p120, emphasis in original) and of “produsage”, whose affordances included probabilistic problem solving, granular tasks, equipotentiality, and shared content (Bruns, 2008, p19).

A key feature in this was the ownership of the news agenda by users themselves (who could be either members of the public or journalists). This was partly for reasons identified above in research into the crowdsourced investigation into contaminated pet food. It would allow the site to identify questions that would not be considered viable for investigation within a traditional newsroom; but the feature was also implemented because ‘ownership’ was a key area of contestation identified within crowdsourcing research (Lih, 2009; Benkler, 2006; Surowiecki, 2005) – ‘outsourcing’ a project to a group of people raises obvious issues regarding claims of authorship, direction and benefits (Bruns, 2005).

These issues were considered carefully by the founders. The site adopted a user interface with three main modes of navigation for investigations: most-recent-top; most popular (those investigations with the most members); and two ‘featured’ investigations chosen by site staff, selected on the basis that they were the most interesting editorially, or because they were attracting particular interest and activity from users at that moment. There was therefore an editorial role, but this was limited to only two of the 18 investigations listed on the ‘Investigations’ page, and was at least partly guided by user activity.

In addition there were further pages where users could explore investigations through different criteria such as those investigations that had been completed, or those investigations with particular tags (e.g. ‘environment’, ‘Bristol’, ‘FOI’, etc.).

A second feature of the site was that ‘journalism’ was intended to be a by-product: the primary objective was the investigation process itself, which would inform users. Research suggested that if users were to be attracted to the site, it must perform the function that they needed it to (Porter, 2008) – which, as became apparent, was one of project management. The ‘problem’ that the site was attempting to ‘solve’ needed to be user-centric rather than publisher-centric: ‘telling stories’ would clearly be lower down the priority list for users than it was for journalists and publishers. Of higher priority were the needs to break down a question into manageable pieces; to find others to investigate with; and to get answers. This was eventually summarised in the strapline to the site: “Connect, mobilise, uncover”.

Thirdly, there was a decision to use ‘game mechanics’ that would make the process of investigation inherently rewarding. As the site and its users grew, the interface was changed so that challenges started on the left hand side of the screen, coloured red, then moved to the middle when accepted (the colour changing to amber), and finally to the right column when complete (now with green border and tick icon). This made it easier to see at a glance what needed doing and what had been achieved, and also introduced a level of innate satisfaction in the task. Users, the idea went, might grow to like the feeling of moving those little blocks across the screen, and the positive feedback (see Graham, 2010 and Dondlinger, 2007) provided by the interface.
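The challenge workflow described above amounts to a simple three-state progression. As a rough illustration only – the names, colours-as-data and structure here are my own sketch, not Help Me Investigate’s actual code – it could be modelled like this:

```python
# A minimal sketch of the challenge workflow described above.
# State names and structure are illustrative assumptions, not
# Help Me Investigate's actual implementation.

# Each state maps to the column and colour used in the interface
STATES = {
    "open":     {"column": "left",   "colour": "red"},
    "accepted": {"column": "middle", "colour": "amber"},
    "complete": {"column": "right",  "colour": "green"},
}

# Challenges only ever move forward: open -> accepted -> complete
TRANSITIONS = {"open": "accepted", "accepted": "complete"}

class Challenge:
    def __init__(self, title):
        self.title = title
        self.state = "open"  # every new challenge starts on the left, in red

    def advance(self):
        """Move the challenge to the next column, if there is one."""
        if self.state in TRANSITIONS:
            self.state = TRANSITIONS[self.state]
        return self.state

    @property
    def display(self):
        """How the interface would render this challenge."""
        return STATES[self.state]

challenge = Challenge("File an FOI request for the website contract")
challenge.advance()       # a user accepts the challenge
print(challenge.display)  # now shown in the middle column, in amber
```

The point of the design is visible in the code: each user action maps to exactly one small, satisfying state change, which is what made progress legible at a glance.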

Similar techniques were coincidentally explored at the same time by The Guardian’s MPs’ expenses app (Bradshaw, 2009). This provided an interface for users to investigate MP expense claim forms that used many conventions of game design, including a ‘progress bar’, leaderboards, and button-based interfaces. A second iteration of the app – created when a second batch of claim forms was released – saw a redesigned interface based on a stronger emphasis on positive feedback. As developer Martin Belam explains (2011a):

“When a second batch of documents were released, the team working on the app broke them down into much smaller assignments. That meant it was easier for a small contribution to push the totals along, and we didn’t get bogged down with the inertia of visibly seeing that there was a lot of documents still to process.

“By breaking it down into those smaller tasks, and staggering their start time, you concentrated all of the people taking part on one goal at a time. They could therefore see the progress dial for that individual goal move much faster than if you only showed the progress across the whole set of documents.”

These game mechanics are not limited to games: many social networking sites have borrowed the conventions to provide similar positive feedback to users. Jon Hickman (2010, p2) describes how Help Me Investigate uses these genre codes and conventions:

“In the same way that Twitter records numbers of “followers”, “tweets”, “following” and “listed”, Help Me Investigate records the number of “things” which the user is currently involved in investigating, plus the number of “challenges”, “updates” and “completed investigations” they have to their credit. In both Twitter and Help Me Investigate these labels have a mechanistic function: they act as hyperlinks to more information related to the user’s profile. They can also be considered culturally as symbolic references to the user’s social value to the network – they give a number and weight to the level of activity the user has achieved, and so can be used in informal ranking of the user’s worth, importance and usefulness within the network.” (2010, p8)

This was indeed the aim of the site design, and was related to a further aim of the site: to allow users to build ‘social capital’ within and through the site: users could add links to web presences and Twitter accounts, as well as add biographies and ‘tag’ themselves. They were also ranked in a ‘Most active’ table; and each investigation had its own graph of user activity. This meant that users might use the site not simply for information-gathering reasons, but also for reputation-building ones, a characteristic of open source communities identified by Bruns (2005) and Leadbeater (2008) among others.

There were plans to take these ideas much further which were shelved during the proof of concept phase as the team concentrated on core functionality. For example, it was clear that users needed to be able to give other users praise for positive contributions, and they used the ‘update feature’ to do so. A more intuitive function allowing users to give a ‘thumbs up’ to a contribution would have made this easier, and also provided a way to establish the reputation of individual users, and encourage further use.

Another feature of the site’s construction was a networked rather than centralised design. The bid document to 4iP proposed to aggregate users’ material:

“via RSS and providing support to get users to use web-based services. While the technology will facilitate community creation around investigations, the core strategy will be community-driven, ‘recruiting’ and supporting alpha users who can drive the site and community forward.”

Again, this aggregation functionality was dropped as part of focusing the initial version of the site. However, the basic principle of working within a network was retained, with many investigations including a challenge to blog about progress on other sites, or use external social networks to find possible contributors. The site included guidance on using tools elsewhere on the web, and many investigations linked to users’ blog posts.

In the second part I discuss the building of the site and reflections on the site’s initial few months.

Announcing Help Me Investigate: Networks


Today I’m announcing the launch of a new Help Me Investigate project.

Help Me Investigate: Networks aims to make it easier to investigate public interest questions by providing resources and support, links to investigations across the web, and most importantly: a community.

The project is launching with a focus on 3 areas: Education, Health and Welfare. We’ll be providing tips from practising journalists, updates on ongoing investigations, and useful documents and data.

The existing site blog will continue to provide general advice on investigative journalism.

A launchpad and gathering place

This is an attempt to build a scalable network of journalists, developers and active citizens who are passionate about public interest issues.

Although we’re starting with a focus on three of those, if anyone is willing to manage sites covering other areas, including geographical ones, we may be able to host those too (and some are already being planned).

Unlike the original Help Me Investigate, most investigations will not take place on the HMI:Network sites – instead taking place on other blogs, or through private correspondence – although the tips, documents and data gathered in those investigations will be shared on the site.

How people are contributing

Different people are contributing to the project in different ways.

  • Journalists and bloggers who need help with getting answers to a question (extra eyes on data, legwork), or finding the questions themselves, are using the network as a place to connect.
  • Journalism tutors are tapping into the network for class projects.
  • Journalism students and graduates who want to explore a public interest issue are using it as a place to find others, get help, and publish what they find.

If you want to join in or find out more, please email me on paul@helpmeinvestigate.com or message me on Twitter @paulbradshaw. Or just tell someone about the project. They might find it useful.

Meanwhile, in the following days I’ll be publishing a series of posts about what I learned from the first version of Help Me Investigate, and how that has informed this new project.

Customising your blog – some basic principles (Online Journalism Handbook)

A customised car. Like a customised blog, only bigger. Image by Steve Metz

Although I cover blogging in some depth in my online journalism book, I thought I should write a supplementary section on what happens when you decide to start customising your blog.

Specifically, I want to address 3 key languages which you are likely to encounter, what they do, and how they work.

What’s the difference? HTML, CSS, and PHP

Most blog platforms use a combination of HTML, CSS and PHP (or similar scripting language). These perform very different functions, so it saves you a lot of time and effort if you know which one you might need to customise. Here’s what those functions are:

  • HTML is concerned with content.
  • CSS is concerned with style.
  • And PHP is concerned with functionality.

If you want to change how your blog looks, then, you will need to customise the CSS.

If you want to change what it does, you will need to customise the PHP.

And if you want to change how content is organised or classified, then you need to change the HTML.

All 3 are interrelated: PHP will generate much of the HTML, and the CSS will style the HTML. I’ll explain more about this below.
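As a hypothetical illustration of how the layers fit together – the tags, class names and styles below are invented for this example, not taken from any particular blog theme – here is the kind of HTML a template’s PHP loop might output for one post, and the CSS that would then style it:

```html
<!-- What a blog template's PHP loop might output: one HTML block per post.
     The tag structure (content) is what you would change to reorganise a page. -->
<article class="post">
  <h2 class="post-title">My first post</h2>
  <p>Hello, world.</p>
</article>

<!-- The CSS, usually kept in a separate stylesheet, controls only appearance.
     Changing these rules changes how the same HTML looks. -->
<style>
  .post       { border: 1px solid #ccc; padding: 1em; }
  .post-title { color: darkred; font-size: 1.4em; }
</style>
```

Notice that you could restyle the post completely by editing only the CSS rules, without touching the HTML or the PHP that generated it – which is why knowing which layer to edit saves so much time.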

But before I do so, it’ll help if you have 3 windows open on your computer to see how this works on your own blog. They are:

  1. On your blog, right-click and select ‘View source’ (or a similar option) so you can see the HTML for that page.
  2. Open another window, log in to your blog, and find the customisation option (you may have to Google around to find out where this option is). You should be able to see a page of code.
  3. Open a third window which you will use to search for useful resources to help you as you customise your blog.

When will we stop saying “Pictures from Twitter” and “Video from YouTube”?

Image from YouTube

Over the weekend the BBC had to deal with the embarrassing ignorance of someone in their complaints department who appeared to believe that images shared on Twitter were “public domain” and “therefore … not subject to the same copyright laws” as material outside social networks.

A blog post, from online communities adviser Andy Mabbett, gathered thousands of pageviews in a matter of hours before the BBC’s Social Media Editor Chris Hamilton quickly responded:

“We make every effort to contact people, as copyright holders, who’ve taken photos we want to use in our coverage.

“In exceptional situations, ie a major news story, where there is a strong public interest in making a photo available to a wide audience, we may seek clearance after we’ve first used it.”

(Chris also published a blog post yesterday expanding on some of the issues, the comments on which are also worth reading)

The copyright issue – and the existence of a member of BBC staff who hadn’t read the Corporation’s own guidelines on the matter – was a distraction. What really rumbled through the 170+ comments – and indeed Andy’s original complaint – was the issue of attribution.


7 books that journalists working online should read?

Image by B_Zedan

While it’s one thing to understand interactive storytelling, community management, or the history of online journalism, the changes that are affecting journalism are wider than the industry itself. So although I’ve written previously on essential books about online journalism, I wanted to also compile a list of books which I think are essential for those wanting to gain an understanding of wider dynamics affecting the media industries and, by extension, journalism.

These are books that provide historical context to the hysteria surrounding technologies; that give an insight into the cultural movements changing society; that explore key philosophical issues such as privacy; or that explore the commercial dynamics driving change.

But they’re just my choices – please add your own.


An experiment in creating an ‘Auto-Debunker’ twitter account

As the conspiracy theories flew around last Friday, one in particular caught fire: the idea that the News Of The World might have been closed down because it would then allow for its assets – i.e. incriminating evidence – to be destroyed.

Perhaps because it was published under the Reuters brand (although the byline absolved them of any responsibility for its contents) by the end of the day it had accumulated over 4,000 retweets.

I had already personally tweeted a couple of those users to point out that comments on the article had quickly debunked its argument. And by 6.26 that evening David Allen Green had published an explanation of the flaws in a piece at the New Statesman.

But people were still retweeting: how to connect the two?

Creating @autodebunker

It took me all of 20 minutes to hack together a simple automated service that would reply to people retweeting the Reuters blog post.
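The post doesn’t include the actual script, but the core of such a service is straightforward: find recent retweets of the offending link and reply to each user with a pointer to the debunking. A rough sketch of that reply-building logic follows – the function name, tweet format and placeholder URL are my own assumptions for illustration; a real service would also need Twitter API credentials, the actual search and posting calls, and rate-limit handling:

```python
# A rough sketch of an 'auto-debunker' bot's core logic.
# Data shapes and names are illustrative assumptions; the real service
# would use the Twitter API to search for retweets and post the replies.

DEBUNK_URL = "http://example.com/debunking-post"  # placeholder link

def build_replies(tweets, already_replied):
    """Compose one reply per retweeting user, pointing at the debunking,
    and never reply to the same user twice."""
    replies = []
    for tweet in tweets:
        user = tweet["user"]
        if user in already_replied:
            continue  # avoid spamming people with duplicate replies
        replies.append({
            "to": user,
            "text": "@%s That story has been debunked: %s" % (user, DEBUNK_URL),
        })
        already_replied.add(user)
    return replies

# Example: three retweets, one of them a duplicate user
tweets = [{"user": "alice"}, {"user": "bob"}, {"user": "alice"}]
for reply in build_replies(tweets, already_replied=set()):
    print(reply["text"])
```

Keeping track of who has already been answered is the part that matters most in practice: an automated account that replies to the same person repeatedly is itself indistinguishable from spam.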

Why your mark doesn’t matter (and why it does)

It’s that time of year when students get their marks and the usual protests are made. I say “usual” because these tend to follow a particular pattern – and I want to explore why that happens, because I think students and academics often have very different perceptions of what marks mean.

So here are four reasons why your mark does not matter in the way you think it does – as well as some pointers to making sure things are kept in perspective.

1. Marks are not a high score table

Marks measure a number of things, but primarily they (should) measure whether you can demonstrate that you have learned key principles covered in the course. They do not measure your ability. They are not a measure of you as a person. They measure a very specific thing in very specific ways.

This is often the hardest thing to explain to students – particularly those who are extremely talented, but have received bad marks. I might know you are first class; you know you are first class; but that has to be demonstrable and transparent in a piece of work and accompanying documentation not just to me but to a second marker, a moderator and an external examiner (in the UK at least).

This is particularly important when the skills being taught are not just craft skills but involve issues such as research, law, project management, and analysis.

These are skills that sometimes have to be explicitly demonstrated outside of the project they relate to in an evaluation or report. In the field of online journalism – where things are still in flux and part of your skill is being able to follow those changes – I think they are particularly important. (They can also often generate objections on the grounds of being too ‘theoretical’, but the point is that these are not objects of abstract study but are intended to make you a better practitioner.)

The key advice here is: read the brief, and make sure any documentation explicitly addresses the areas mentioned.

That said, don’t think that you can blag a pass mark through documentation alone – the project will give the lie to that.

2. Effort does not equal success

You can spend twice as much time as somebody else on a project, and still get lower marks. Some people are naturally good at things, and for others it takes a long time. Life’s a bitch.

But also, often, it’s about a lack of focus and planning: spending 20 hours writing blog posts is not going to be as successful as if you spent half of that time reading other blog posts to get a feel for the medium; researching your subject matter; and re-writing what you have written to make it better.

Put another way: it’s better to do something wrong once, then review it and do it better a second time, than do it wrong 10 times without reflection.

3. Success does not mean a good mark

If your article is published in a magazine or newspaper, that’s good – but it doesn’t mean that it’s of professional quality. Some editors have low standards – especially if they aren’t paying and need to fill space at short notice.

Likewise, your blog post may have accumulated 3,000 hits – but that doesn’t necessarily mean that it meets the requirements of the brief.

To reiterate point 1: marks measure something very specific. They do not measure you as a person, or even the project as a whole. That’s not to undermine your achievements in getting so many hits or selling an article – those are things you should be rightly proud of (and mention at every interview).

4. Marks don’t matter

I exaggerate, of course. But you shouldn’t take marks too seriously. If you only wanted to pass a module and move on, then move on – it’s quite likely your lack of investment in the subject that was the reason for low marks. If you want a good mark because you want a good degree classification, then you should be using your time effectively and reading feedback from previous assignments – but also be aware of the way that degrees are classified (it’s often more complex than you might assume).

But if you want to be better in the areas that were being measured, then read the feedback – and ask for more if you need to (academics have learned that lengthy written feedback doesn’t tend to be read by students and so keep it short, but are generally happy to talk to you in depth if you want to).

Sometimes the feedback will sound much better than the mark indicates. This quite often comes down to language and style: being ‘sound’ might seem good to you, but in the language of the assessment bands it typically means average or, more often, below average; ‘good’ and ‘excellent’ are equivalents for higher categories. Negative feedback is often sandwiched between positive feedback.

Don’t underestimate how hard it is to get a high mark at undergraduate – and especially postgraduate – level. You may be used to high grades, but the bar is raised at each level. Anything above 60% (in the UK) is actually very good. The average is generally around 58% – anything higher or lower will require explanation for external examiners (who check that marking is consistent across institutions).

To get a first class mark typically means you have to perform at the top level *in every category being assessed*. That’s in italics for a reason: you might be the greatest writer the world has ever known, but your research isn’t first rate. You might have spent hours scouring official documents for an international scoop (yes, I’m exaggerating again) but your understanding of the legal ramifications was flawed. And so on.

But here’s the thing: given the choice between a great mark and a great story, I think you should go for the latter.

Caveats: this is assuming that your objective is to get a job in journalism, and you are confident of passing. Also, always have a backup plan if the story falls through.

The marks are only a signpost along your journey through education. If you write a blinding blog post, be proud of it regardless of where it ranks against the criteria it was being formally assessed on.

Marks, and the accompanying feedback, are there to focus your attention on blind spots and weak spots in your work. Use them in that way – but don’t use them to rate yourself or to compare yourself with others. Once you start playing a game of high score rankings, you’ve already lost.

Hyperlocal Voices: South Norwich News

South Norwich News

It’s been a while – here’s a new Hyperlocal Voices interview, with South Norwich News, an 18-month-old site set up by former BBC journalist Claire Wood and her husband Tom when she “wanted to test the hypothesis that people’s interest in local news actually only spans a relatively small area.” In the process they discovered the power of social networks and how to avoid the deadline-induced reliance on press releases.

Who were the people behind the blog?

South Norwich News was set up by myself and my husband, Tom Wood, who runs a design agency that specialises in advising online clients on how to make their websites more user-friendly. Using his knowledge, we drew up the information architecture for the site and then found a web developer who could put our ideas into place.

What made you decide to set up the blog?

We came across the idea of hyperlocal news sites being set up and growing in popularity in the States. As a former BBC journalist, I wanted to test the hypothesis that people’s interest in local news actually only spans a relatively small area and that their interest wanes when the stories come from further afield.

In a way I wanted to reclaim the idea of “local news” to mean news that actually matters to you because it’s happening near to where you live. I want to make the news relevant for readers.

I also wanted a way to return to the patch reporting I did in my early days with the BBC, step away from “churnalism” and start setting my own news agenda, dependent on what I believed local people were interested in.

When did you set up the blog and how did you go about it?

We launched in January 2010. In the six months before that I started building up contacts, exploring what sort of stories might be of interest to people who live in the area and building the website.

Our site is built on WordPress with some customised changes.

What other blogs, bloggers or websites influenced you?

We looked at sites in the USA. There were two or three being set up in New York, for example, sometimes funded as an experiment by traditional news titles.

How did – and do – you see yourself in relation to a traditional news operation?

We’re very much in the news business but on a very small scale. We wanted to get away from deadlines and pressures that cause papers and news bulletins to churn out the same press releases across the day.

Some big stories we can’t avoid covering along with the local paper or radio station, but we always try to find a different angle. There’s little point covering the same stories that people can find elsewhere.

As we become more established, it becomes easier to set our own agenda. We aim to delve a little deeper into stories which matter to people locally which other news outlets might not be able to do in such detail.

What have been the key moments in the blog’s development editorially?

Google Analytics gives us a really good insight into which stories interest people the most. We were often surprised by the stories which gained the most traffic. People like hearing about things that are new to the area and also like detailed information on events such as the parade route for Norwich City Football Club’s successful promotion-winning team. We’ve adapted the sort of stories we cover in response to Google Analytics research.

After about 6 months, we launched a new Features section, for stories which aren’t strictly “news” but are still of interest to our readers. This allows us to run advertorial features in this section too, which is one of our revenue streams.

What sort of traffic do you get and how has that changed over time?

Last month we had close to 6,000 unique visitors, with 18,000 page views. This has grown month on month since setting up.

I wouldn’t call it a blog. It’s based on WordPress and takes the form of a “blog” but we offer an online news service on a very local level.

Anecdotally, our readers like the service we provide. I think Twitter and Facebook have had a huge impact on our ability to spread our stories to a wider audience, without which we might have floundered.

A new tool for online verification: Google’s ‘Search by Image’

Google have launched a ‘Search by Image’ service which allows you to find images by uploading, dragging over, or pasting the URL of an existing image.

The service should be particularly useful to journalists seeking to verify or debunk images they’re not sure about.

(For examples where it may have been useful, look no further than this week’s Gay Syrian Blogger story, as well as the ‘dead’ Osama Bin Laden images that so many news outlets fell for.)

TinEye, a website and Firefox plugin, does the same thing – but it will be interesting to see if Google’s service is more or less powerful (let me know how you get on with it). Find it here; video here.