Category Archives: user generated content

Gatewatching for local news

Among the many good things about Internet news consumption is the fact that audiences can seek any sort of information to suit their interests and inclinations. No longer stifled by editorial, corporate or advertiser monopoly, readers browse everything from obscure blogs to mainstream news sites to get the information they want.

Ever since Internet media started going mainstream, however, many have raised the question of whether this vast and tolerant space is causing people to replace news that informs and educates with that which merely entertains. One has only to look at the slew of sensational Internet videos that go viral, or the latest online reiteration of Jessica Simpson’s gaffe to accept that this is a legitimate concern. In addition, people have more options than ever before to confine themselves to fragmented communities and echo chambers to get the news they want in lieu of what they need.

As Charlie Beckett points out in Supermedia, while the diversity provided by the Internet with regard to information dissemination is important, it also tends to further the divide between those looking for real, relevant information and those who merely want instant gratification through the latest celebrity gossip.

Of course, blaming new media for its endless possibilities would be rather like blaming a decadent chocolate cake for existing. Just because it is there doesn’t mean you need to seek it out.

This is an even greater concern with regard to local news. Citizens might focus on the latest iPhone application released by Apple at the expense of important news happening at home – information that is vital to them as participants in a democracy.

But while lack of reader interest is a problem, it is often spurred on by a scarcity of engaging content from news organizations – if all a local paper can provide is a string of wire service accounts and press releases, how can it expect to keep readers motivated? This was hard enough to accept in an age when the newspaper or the evening news broadcast was the only source of information. It is simply untenable in the Web 2.0 world, where readers can get actual eyewitness accounts from the people they follow on Twitter and view firsthand pictures through Flickr groups. In other words, in this age of social media and online networks, local journalists seem almost out of touch with the community they live in.

The question then is, can residents of a community do well as their own gatewatchers?

The New York-based site NYC.is, which functions as a “Digg” for the city and its surrounding areas, is trying to do just that. “Our goal is to connect bloggers, independent reporters and activists in different parts of the five boroughs, rewarding the best work by sending it traffic and increasing potential for impact,” reads the mission statement.

I got a chance to talk to Susannah Vila, a graduate student at Columbia University, who launched the site. “The inspiration behind the concept is [it provides] ways of democratizing the Web. This was part of what excited me about making the site,” she says.

Readers themselves direct attention to local news that they deem important, while also channeling traffic to independent bloggers, regional Web sites and mainstream sites. Anything from New York City mayor Michael Bloomberg’s job approval ratings to rising prices of a pizza slice in Brooklyn can turn up on the front page. “The point is, it is not just one type of story that gets popular. There is a lot of range,” says Vila. The common thread is relevance to people of the community. In true Digg fashion, the top contributors get a mention on the home page, as do the most popular stories.

Can this go one step further, and actually motivate people to do original reporting or gather data for a new story? “Once I get more of a community on the site with more engaged readers there is definitely a possibility to prompt them to investigate certain things or to [urge them] to go to community board meetings,” Vila says. “It would also be cool to let people vote on ideas for stories.”

A gatewatching site at a local community level may not be sufficient to provide all the information residents need, but it certainly allows a comprehensive look at what readers are looking for, and what is important to them as residents, and as citizens: it can sometimes be an aspiring young band, or the New York Mets’ dismal season, but more often than not, it is about hard issues, such as the annual decline in household incomes, grassroots candidates for City Council, and governmental oversight of local schools.

Add context to news online with a wiki feature

In journalism school you’re told to find the way that best relates a story to your readers. Make it easy to read and understand. But don’t just give the plain facts, also find the context of the story to help the reader fully understand what has happened and what that means.

What better way to do that than having a Wikipedia-like feature on your newspaper’s web site? Since the web is the greatest causer of serendipity, says Telegraph Communities Editor Shane Richmond, reading a story online will often send a reader elsewhere in search of more context wherever they can find it.

Why can’t that search start and end on your web site?

What happens today

Rather than explain this in the abstract, here’s a scenario:

While scanning the news on your newspaper’s web site, one story catches your eye. You click through and begin to read. It’s about a new shop opening downtown.

As you read, you begin to remember things about what once stood where the new shop now is. You’re half-way through the story and decide you need to know what was there, so you turn to your search engine of choice and begin hunting for clues.

By now you’ve closed the window with the story you were reading and are looking for context instead. You don’t return to the newspaper’s web site: by the time you find the information you were looking for, you have landed on a different news story on a different news web site.

Here’s what the newspaper has lost in the above scenario: site stickiness, page views, unique visitors (the reader could have forwarded the story to a friend), and reader interaction through potential story comments. Monetarily, this all translates into lower ad rates. That’s where it hurts the most.

How it could be

Now here’s how it could be if a newspaper web site had a wiki-like feature:

The story about the new shop opening downtown intrigues you because, if memory serves, something else used to be there years ago. On the story there’s a link to another page (additional page views!) that shows all of the information about that site that is available in public records.

You find the approximate year you’re looking for, click on it, and you see that before the new shop appeared downtown, many years ago it was a restaurant you visited as a child.

It was owned by a friend of your father’s and it opened when you were six years old. Since you’re still on the newspaper web site (better site stickiness!), you decide to leave a comment on the story about what was once there and why it was relevant to you (reader interaction!). Then you remember that a friend often went there with you, so you email it to them (more uniques!) to see if they too will remember.

Why it matters to readers

For consumers, news is the pursuit of truth and context. Both the news organization and the journalists it employs are obligated to give that to them. The hardest part of this is disseminating public records and putting them online.

The option of crowdsourcing it, much as Wikipedia does with its records, could work out well. But even the act of putting public records online in a way that makes them contextually relevant would be a big step forward. It’s time-consuming, but the rewards are great.

Newspapers on Twitter – how the Guardian, FT and Times are winning

National newspapers have a total of 1,068,898 followers across their 120 official Twitter accounts – with the Guardian, Times and FT the only three papers in the top 10. That’s according to a massive count of newspapers’ Twitter accounts I’ve done (there’s a table of all 120 at that link).

The Guardian’s the clear winner, as its place on the Twitter Suggested User List means that its @GuardianTech account has 831,935 followers – 78% of the total …

@GuardianNews is 2nd with 25,992 followers, @TimesFashion is 3rd with 24,762 and @FinancialTimes 4th with 19,923.

Screenshot of the data

Other findings

  • Glorified RSS. Out of 120 accounts, just 16 do something other than run as a glorified RSS feed. The other 104 do no retweeting, no replying to other tweets, etc. (you can see which are which on the full table).
  • No following. These newspaper accounts don’t do much following. Leaving @GuardianTech aside, the accounts have 236,963 followers between them but follow just 59,797. They’re mostly pumping RSS feeds straight to Twitter, and see no reason to engage with the community.
  • Rapid drop-off. There are only six Twitter accounts with more than 10,000 followers. I suspect many of these accounts are invisible to most people as the newspapers aren’t engaging much – no RTing of other people’s tweets means those other people don’t have an obvious way to realise the newspaper accounts exist.
  • Sun and Mirror are laggards. The Sun and Mirror have work to do – they don’t seem to have much talent at this so far and have few accounts with any followers. The Mail only seems to have one account, but it is the 20th largest in terms of followers.

The full spreadsheet of data is here (and I’ll keep it up to date with any accounts the papers forgot to mention on their own sites)… It’s based on official Twitter accounts – not individual journalists’. I’ve rounded up some other Twitter statistics if you’re interested.
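The headline numbers above are internally consistent; here’s a quick sanity check (all figures are from the post, the computation is mine):

```python
# Sanity-check of the follower figures quoted in the post.
total_followers = 1_068_898   # across all 120 official accounts
guardian_tech = 831_935       # @GuardianTech alone

# @GuardianTech's share of the total ("78% of the total")
share = guardian_tech / total_followers
print(f"{share:.0%}")  # prints 78%

# Followers across the remaining accounts ("236,963 followers")
print(f"{total_followers - guardian_tech:,}")  # prints 236,963
```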

8% of Telegraph.co.uk traffic from social sites

Telegraph.co.uk gets an amazing 8% of its visitors from social sites like Digg, Delicious, Reddit and Stumbleupon, Julian Sambles, Head of Audience Development, has revealed.

The figure helps explain how the Telegraph became the most popular UK newspaper site.

75,000 visitors a day

The Telegraph had about 28 million unique visitors in March, which means social sites are sending it almost 75,000 unique visitors a day.

Search engines are responsible for about a third of the Telegraph’s traffic, Julian also revealed – or about 300,000 unique visitors a day.

This means the Telegraph gets 1 social visitor for every 4 search ones – an astonishingly high ratio.
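The per-day figures follow from the monthly total. A rough reconstruction (the monthly total and percentages are from the post; the per-day split assumes March’s 31 days):

```python
# Back-of-the-envelope reconstruction of the Telegraph traffic figures.
monthly_uniques = 28_000_000          # unique visitors in March
daily_uniques = monthly_uniques / 31  # roughly 903,000 a day

social_daily = daily_uniques * 0.08   # 8% from social sites
search_daily = daily_uniques / 3      # "about a third" from search

print(round(social_daily))   # ~72,000 a day: "almost 75,000"
print(round(search_daily))   # ~301,000 a day: "about 300,000"
print(round(search_daily / social_daily, 1))  # ~4.2 search visitors per social one
```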

You can read more of what Julian said about the Telegraph’s social media strategy here. The statistics were originally given for an article on social sites on FUMSI.

Do blogs make reporting restrictions pointless?

The leaked DNA test on 13-year-old alleged dad Alfie Patten has revealed a big problem with court-ordered reporting restrictions in the internet age. (NB This is a cut down version of a much longer original post on blogging and reporting restrictions that was featured on the Guardian).

Court orders forbidding publication of certain facts apply only to people or companies who have been sent them. But this means there is nothing to stop bloggers publishing material that mainstream news organisations would risk fines and prison for publishing.

Even if a blogger knows that there is an order, and so could be considered bound by it, an absurd catch-22 means they can’t find out the details of the order – and so they risk contempt of court and prison.

Despite the obvious problem, the Ministry of Justice have told me they have no plans to address the issue.

3 weeks in: launching a Midlands environmental news site

3 weeks ago my class of online journalism students were introduced to the website they were going to be working on: BirminghamRecycled.co.uk – environmental news for Birmingham and the West Midlands.

The site has been built by final year journalism degree student Kasper Sorensen, who studied the online journalism module last year.

In building and running the service Kasper has done a number of clever, networked things I thought I should highlight. They include:

  • Creating a Delicious network for the site – every journalist in the team has a Delicious account; this gathers together all of the useful webpages that journalists are bookmarking
  • Tweetgrid of all journalists’ tweets – again, every journalist has a Twitter account. This pulls them all together.
  • Twitter account @bhamrecycled
  • Kasper sent the whole team an OPML file of subscriptions to RSS feeds of searches for every Midlands area and environmentally related keywords. In other words, journalists could import this into their Google Reader and at a stroke be monitoring any mention of certain key words (e.g. ‘pollution’, ‘recycling’) in Birmingham areas.
  • He also shared a Google calendar of relevant events
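For anyone wanting to replicate the OPML trick: the sketch below generates a similar subscription file. It’s a minimal illustration only – the post doesn’t say which feed service Kasper used, so the Google News search RSS URL here is my assumption, as are the example keywords.

```python
# Generate an OPML file of RSS search feeds for every area/keyword combination.
# The feed URL pattern (Google News search RSS) and the keywords are illustrative.
import xml.etree.ElementTree as ET
from urllib.parse import quote_plus

keywords = ["pollution", "recycling", "landfill"]
areas = ["Birmingham", "West Midlands"]

opml = ET.Element("opml", version="2.0")
head = ET.SubElement(opml, "head")
ET.SubElement(head, "title").text = "Midlands environment feeds"
body = ET.SubElement(opml, "body")

# One <outline> element per keyword-and-area search feed
for area in areas:
    for keyword in keywords:
        query = f"{keyword} {area}"
        ET.SubElement(body, "outline", type="rss",
                      text=f"News search: {query}",
                      xmlUrl="https://news.google.com/rss/search?q=" + quote_plus(query))

ET.ElementTree(opml).write("feeds.opml", encoding="utf-8", xml_declaration=True)
```

Importing the resulting feeds.opml into a feed reader subscribes the whole team to every combination in one step – exactly the “at a stroke” monitoring shortcut described above.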

The site itself is clever too.

  • The About page has a list of all contributing journalists with individual RSS feeds.
  • In addition, each author has a link to their own profile page which not only displays their articles but pulls Twitter tweets, Delicious bookmarks and blog posts.

Kasper wanted to explicitly follow a Mashable-style model rather than a traditional news service: he felt an overly formal appearance would undermine his attempts to build a community around the site.

And community is key. When unveiling the site to the journalists, Kasper made a presentation that was a wonderful distillation of how journalists need to approach news in a networked world.

External links: the 8 stages of linking out denial

Do you have a link problem? You can handle linking. It’s just one post/article/page without a link. You can link whenever you want to. Or can you?

Where are you on this scale …? (Originally posted here.)

1 Don’t link to anyone

Link to other sites? But people will leave my site. They won’t click on my ads. They won’t read other pages. I’ll leak page rank. No way.

2 Add URLs but don’t make them hyperlinks

OK, that’s a bit ridiculous. If I’m talking about other organisations, I can’t pretend they don’t have a website. I know, I’ll put web addresses in. But I won’t make them hyperlinks. Brilliant, yes?

3 Add an ‘external links’ box

Even I’m finding that no-hyperlink thing annoying when I go back to an old page and have to copy and paste the damn things.

I suppose I should have some links on my page. I’ll put them in a box. Over there (down a bit …). I’m going to use some sort of internal redirect or annoying javascript, though, to make sure I don’t pass any page rank. Mwah, hah hah.

4 Put some links in the copy

I don’t seem to be getting many inbound links. I guess I’m not playing fair. I know, I’ll sort out my workflow so that it’s possible to add links easily inside the actual copy. But I’m still not passing any pagerank. I’m going to put “rel=nofollow” on every link.

5 Give my users some google juice

Commenters seem thin on the ground. Maybe I’ll let them link to their own sites. I’ll use some annoying javascript to hide the links from google though. Most of my commenters are probably spammers, and I can’t trust them to police their own community, after all.

6 Link when I have to. And remove nofollow and any other annoying tricks

That seemed to make everyone happier. There are a few proper links on my pages. And people seem to want to link to me now that I’m playing fair with my links.

7 Acknowledge my sources

Oops. Spoke too soon. Been outed as pinching someone else’s idea and not attributing it. From now on, I’m going to make sure I always link to everyone I should.

8 Enlightenment: Make linking part & parcel of what I do

Internet. Inter as in inter-connected. Net as in network.

I get it now. I’m going to become a trusted source for information and advice, AND of what else people should read elsewhere on the internet. Blimey, more and more people are visiting my site and linking to it.

Could moderators collect potential leads from comments?

Guardian community moderator Todd Nash* makes an interesting suggestion on his blog about the difficulties journalists face in wading through comments on their stories:

“there is potential for news stories to come out of user activity on newspaper websites. Yet, as far as I know, it is not a particularly well-utilised area. Time is clearly an issue here. How many journalists have time to scroll through all of their comments to search for something that could well resemble a needle in a haystack? It was commented that, ironically, freelancers may make better use of this resource as their need for that next story is greater than their staff member counterparts.

“The moderation team at guardian.co.uk now has a Twitter feed @GuardianVoices which highlights good individual comments and interesting debate. Could they be used as a tool to collect potential leads? After all, moderators will already be reading the majority of content of the publication they work for. However, it would require a rather different mindset to look out for story leads compared to the more usual role of finding and removing offensive content.”

It’s an idea worth considering – although, as Todd himself concludes:

“Increased interactivity with users builds trust, which in turn produces a higher class of debate and, with it, more opportunities for follow-up articles. Perhaps it is now time for the journalists to take inspiration from their communities as well.”

That aside, could this work? Could moderators work to identify leads?

*Disclosure: he’s also a former student of mine

User generated content and citizen journalism (Online Journalism lesson #4)

Lesson 4 in this series of Online Journalism classes looks at User Generated Content (UGC) and Citizen Journalism. Now the students have to think creatively of ways to engage communities in the issues they’re covering (and vice versa):

Guardian tops Reddit submissions list

The Guardian has had more stories submitted to Reddit.com than any other major newspaper site.

The news follows the Telegraph topping the Digg list and the Times topping the StumbleUpon list.

The graph shows how many pages have been submitted to Reddit for each site. It’s based on an analysis of newspapers’ Reddit submissions that also suggests the Telegraph is catching up with the Guardian – they tied for the number of stories submitted over the last week.

Submissions to Reddit: Guardian wins