The end of objectivity – web 2.0 version

paul bradshaw's facebook network

This week a new nail was driven into the coffin of the notion of journalistic objectivity. The culprit? The Washington Post’s leaked social media policy.

The policy is aimed at preserving the appearance of objectivity rather than its actual existence. It focuses on what journalists are perceived to be, rather than what they actually do.

And in doing so, it hits upon the very reason why their attempt is doomed from the start: Continue reading

What thelondonpaper’s death means for freesheets on the web

On 18 September 2009, beloved London evening freesheet thelondonpaper folded. In its wake, London Lite remains.

While the closure is part of a larger effort by owners News International to trim the fat from their portfolio and erect paywalls around profitable titles, it also speaks to the future of freesheets on the web.

Back in April, thelondonpaper relaunched its website. What was interesting at the time was that London Lite had effectively no website. It still doesn’t — just an ‘e-edition’. Its content is “incorporated” with morning freesheet Metro.co.uk. Looking back, one has to wonder what would have happened if the money hadn’t been sunk into the web presence. Would thelondonpaper still be around?

In a comment on a Guardian article about the closure, a now-former londonpaper web developer had the following to say about the redesign: Continue reading

When the lack of comments damages your news brand

If you want to skip the background, go to the next subheading

Last week the BBC Education website published a piece about a report into the use of technology by schoolchildren: “Tech addiction ‘harms learning'”:

“Technology addiction among young people is having a disruptive effect on their learning, researchers have warned,” the intro led, before describing the results of the study. No one other than the study authors was quoted.

But GP and Clinical Lecturer AnneMarie Cunningham, hearing of the report on Twitter, felt the headline and content of the article didn’t match up: “The headline suggests a causal relationship which a cross-sectional study could not establish, but the body of the text doesn’t really support any relationship between addiction and learning”, she wrote, and she started digging:

“It … was clear that none of the authors had an education background. The 2 main authors, Nadia and Andrew Kakabadse, have a blog showcasing their many interests but education doesn’t feature amongst them. They describe themselves as “experts in top team and board consulting, training and development”.”

AnneMarie bought the report for $24.99 – the only way to read it – and started reading. This is what she found: Continue reading

Today’s online news: too much surface area, but too little depth?

Even though I had followed the latest financial crisis since its inception on every news site of relevance, I had to wait for the Atlantic’s cover story on the topic to understand where Wall Street had gone wrong (at least to the extent that anyone understood it).

While online news as it exists today is great for 24/7 access, real-time updates, increased transparency, and multiperspectival discussions, it still lacks the depth and detail of a feature story in a print magazine.

As a proponent of digital communication, I can appreciate the pervasiveness of news coverage in the online age, but as a student of journalism I often crave the completeness of long-form journalism, which is lacking on the Internet.

In a very enlightening article in the fall edition of Nieman Reports, Matt Thompson brings up this very point about digital journalism. Thompson writes that while each new day brings with it an array of breaking news stories on various topics, virtually none of them attempts to explain the significance, context or relevance of the subject at hand. Continue reading

Daily Mail has joined the American lunatic fringe

It’s Wednesday and the Daily Mail is still carrying a factually inaccurate story published the previous Sunday morning.

And it’s not like they haven’t been told it’s inaccurate: comment after comment (279 so far) points out exactly why they are wrong.

What’s interesting is exactly how they came to be wrong. Continue reading

How newspapers SEOed Patrick Swayze’s death

When news breaks, if you want to do well in Google for relevant searches, publish early, publish often and put your keywords at the front.

The Guardian's Patrick-Swayze tag page

From an SEO point of view, the more stories you can pump out targeting different (or even the same) keywords, the more chance you have of appearing at the top of Google’s search results – and scooping up the traffic.

Get it right, and you can appear twice in the web results – and twice in the news results that Google often shows above them for breaking-news-related searches.
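The keyword front-loading tactic can be sketched as a quick check (a hypothetical helper for illustration, not anything the papers actually run):

```python
def front_loads(title: str, keyword: str) -> bool:
    """Return True if the headline begins with the target keyword,
    ignoring case -- the front-loading tactic described above."""
    return title.lower().startswith(keyword.lower())

headlines = [
    "Patrick Swayze dies aged 57",
    "Dirty Dancing - time of your life",
]
results = [front_loads(h, "Patrick Swayze") for h in headlines]
print(results)  # [True, False]
```

The second headline targets a related search instead, which is why it scores False here.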

Some of the newspapers may have taken this a little too far with news of Patrick Swayze’s death:

  • The Guardian published 15 stories today (Tuesday 15th), all available from its existing Patrick Swayze tag page. Do we really need 15 stories on this?!? About half had a title that began with ‘Patrick Swayze’.
  • The Telegraph published 10 pages, and while it doesn’t have as many tag pages as the Guardian, it did feature one of its two obituaries (here and here) as a link from its ‘hot topics’ list on its home page, giving it a boost in Google’s web-result rankings. The screenshot, below, shows that it may have run out of ideas to get to 10 pages – the two bottom ones shown are very similar. Also, nine out of 10 of these stories have a title beginning with ‘Patrick Swayze’. The other is just called ‘Dirty Dancing – time of your life’. Now that is front-loading keywords.
  • The Mirror pumped out 5 pages today, and also set up a tag page at some point during the day (they didn’t have one before lunch), hoping to target the searches for ‘patrick swayze’ (yes, they forgot to capitalise it in their haste to set it up). The titles of all 5 begin with ‘Patrick Swayze’.
  • The Independent published 4 pages.
  • The Times managed just 3 pages – maybe with a paywall coming they are less interested in SEO these days …
  • The Sun published only 2 pages.
  • The Mail published just 1 massively long story – on top of its existing tag page for the actor. Interestingly, the paper recently claimed it wasn’t interested in using celebrity stories to drive traffic (although I claimed Michael Jackson was behind its June ABCe success).

The papers weren’t all that successful in their SEO efforts.

The 4th and 5th most viewed stories seem a little bit similar ...

US sites dominated Google’s results for searches on ‘Patrick Swayze’ and ‘Patrick Swayze death’. The Telegraph did, though, take the top two web-search spots for a search on ‘Patrick Swayze obituary’.

Keith Floyd has also died – and the volume of coverage was similar. The Telegraph, for instance, published 8 stories and the Guardian, via its tag page, published 9. The Guardian pipped the Telegraph to win the results for a search on ‘Keith Floyd obituary’.

If you ever want to target what people are searching for around breaking news, I recently compared the different Google tools for a search on X-factor related terms. And if you want to see SEO taken to the dark side, check out this method involving newspapers and paid links.

The 100-25-10 Rule

A curious piece of data emerged from a conference at the American Press Institute: it seems that in “nearly all markets, newspaper websites receive 2.5 visits and 10 pageviews for each unique visitor.” Is this a 90-9-1 rule for the newspaper industry?

If you want to make it snappier, multiply by 10, so it becomes: 100 pageviews and 25 visits for every 10 visitors. The 100-25-10 rule.
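The arithmetic behind the snappier version, using the figures quoted above:

```python
# Averages reported at the API conference, per unique visitor
visits_per_visitor = 2.5
pageviews_per_visitor = 10

# Scale to 10 visitors to get whole numbers: the 100-25-10 rule
visitors = 10
visits = int(visits_per_visitor * visitors)   # 25 visits
pageviews = pageviews_per_visitor * visitors  # 100 pageviews

print(pageviews, visits, visitors)  # 100 25 10
```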

Google’s Fast Flip – a cruel joke on the news industry

So Google launched Fast Flip yesterday, a Labs experiment that allows you to ‘experience’ news websites in a similar way to their analogue equivalents. Yes, you can ‘flick’ through pages of news.

Woo-hoo.

Superficially this appears to be little more than a repeat of many similar experiments over the past decade from publishers who thought readers wanted an analogue experience online, and who commissioned disproportionately expensive technologies that let you ‘turn the page’ on-screen (I turned down one such technology myself as a magazine editor as long ago as ten years). Things have moved on so much that anyone can now have this flashy technology for free by going to Issuu.

So why are the web-native minds of Google wasting time on such an analogue-mindset concept?

Here’s the laughable quote that I think is key:

“To make money, Fast Flip also serves up contextual adverts around the screenshots.

“Publishers who have signed up to provide content to the service will share in that revenue; that was proof, said Ms Mayer, that Google was keen to help the industry at a time when it was clearly struggling.”

Oh yes, that’s concrete proof alright.

Allow me to call bullshit. If this is concrete proof of anything, it is proof that Google are prepared to cash in on the blind panic of the news industry in the midst of a crisis. Add in their recently mooted micropayments system and it’s almost as if Google are having a bit of fun tormenting ants with a magnifying glass.

Until now Google has walked a fine line in claiming that it is not the parasite the news industry says it is. It does not sell adverts on Google News, it is generally the major source of traffic to news websites, and publishers are free to remove themselves from Google’s listings through a simple piece of script.
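That ‘simple piece of script’ is usually a robots.txt rule. A minimal sketch, assuming a publisher wanted out of Google’s index entirely (most would want something narrower):

```
# robots.txt at the site root: tells Google's crawler to skip the whole site
User-agent: Googlebot
Disallow: /
```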

Fast Flip and the micropayments system are moves that take Google over that line. Despite the claims to be ‘helping’ the news industry, any relationship is likely to be skewed in Google’s favour, as anyone who has tried to make a living from AdSense will tell you. Note that, like AdSense:

“Google is running banner ads alongside the article thumbnails, the proceeds of which will be split with publishers (though Google won’t disclose the terms of the revenue split).”

Of course, by hosting screenshots Google are eating into one of the key metrics that publishers use to sell advertising: the time a user spends on your site. And given that many readers don’t read beyond the first few pars, there’s a good chance it will eat into the numbers clicking through to the actual page at all. So unless Google’s ad rates are significantly higher, what reason at all would a commercial publisher have to sign up to a scheme that devalues their own ad inventory in exchange for some pennies from Google? Blind panic in the midst of a crisis, that’s all.

In defence of paywalls redux: what he said

Back in June I posted ‘In defence of paywalls (a thought experiment)‘ where I said: “When you’re driving a tanker and you see a big rock ahead – do you ask everyone on the ship to rebuild it as an aeroplane? Or do you start steering away in the hope that your part of the tanker will somehow avoid the worst?”

I’ve only just come across an essay written in the same month by Michael Nielsen which makes the same points far more rigorously, in the course of a broader discussion of disruption (h/t Jo Geary). It’s well worth reading in full, but here’s how he puts it so much better than I do:

Continue reading

Data and the future of journalism panel discussion: Linked Data London

Tonight I had the pleasure of chairing an extremely informative panel discussion on data and the future of journalism at the first London Linked Data Meetup. On the panel were:

  • Martin Belam
  • Leigh Dodds
  • Dan Brickley
  • John O’Donovan

What follows is a series of notes from the discussion, which I hope are of some use.

For a primer on Linked Data there is ‘A Skim-Read Introduction to Linked Data’; ‘Linked Data: The Story So Far’ (PDF) by Tom Heath, Christian Bizer and Tim Berners-Lee; and this TED video by Sir Tim Berners-Lee (who was on the panel before this one).

To set some brief context, I talked about how 2009 has been, for me, a key year for data and journalism – largely because it has been a year of crisis in both publishing and government. The seminal point in all of this has been the MPs’ expenses story, which demonstrated both the power of data in journalism and the need for transparency from government – reflected, for example, in the government’s appointment of Sir Tim Berners-Lee, its call for developers to suggest things to do with public data, and the imminent launch of Data.gov.uk around the same issue.

Even before that, the New York Times and the Guardian had both launched APIs at the beginning of the year, MSN Local and the BBC had both been working with Wikipedia, and we had seen the launch of a number of startups and mashups around data, including Timetric, Verifiable, BeVocal, OpenlyLocal, MashTheState, the open source release of Everyblock, and Mapumental.

Q: What are the implications of paywalls for Linked Data?

The general view was that Linked Data – specifically standards like RDF – would allow users and organisations to access information about content even if they couldn’t access the content itself. To give a concrete example, rather than linking to a ‘wall’ that simply requires payment, it would be clear what the content beyond that wall related to (key people, organisations, the author and so on).

Leigh Dodds felt that using standards like RDF would allow organisations to more effectively package content in commercially attractive ways, e.g. ‘everything about this organisation’.
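As a sketch of the idea, metadata for a paywalled article exposed as RDF might look something like this in Turtle (the article URI, subject and author here are invented for illustration; Dublin Core and FOAF are real vocabularies):

```turtle
@prefix dc:   <http://purl.org/dc/terms/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

# Machine-readable description of an article whose body sits behind a paywall
<http://example.com/articles/1234>
    dc:title   "MPs' expenses: the full story" ;
    dc:creator [ foaf:name "Jane Reporter" ] ;
    dc:subject <http://dbpedia.org/resource/Parliament_of_the_United_Kingdom> ;
    dc:rights  "Subscription required" .
```

A crawler or aggregator could read the title, author and subject without ever seeing the article text itself.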

Q: What can bloggers do to tap into the potential of Linked Data?

This drew some blank responses, but Leigh Dodds was most forthright, arguing that the onus lay with developers to do things that would make it easier for bloggers to, for example, visualise data. He also pointed out that currently if someone does something with data it is not possible to track that back to the source and that better tools would allow, effectively, an equivalent of pingback for data included in charts (e.g. the person who created the data would know that it had been used, as could others).

Q: Given that the problem for publishing lies in advertising rather than content, how can Linked Data help solve that?

Dan Brickley suggested that OAuth technologies (where you use a single login identity across multiple sites, one that carries information about your social connections, rather than creating a new ‘identity’ for each) would allow users to specify more precisely how they experience content, for instance: ‘I only want to see article comments by users who are also my Facebook and Twitter friends.’

The same technology would allow for more personalised, and therefore more lucrative, advertising.

John O’Donovan felt the same could be said about content itself – more accurate data about content would allow for more specific selling of advertising.

Martin Belam quoted James Cridland on radio: “[The different operators] agree on technology but compete on content”. The same was true of advertising but the advertising and news industries needed to be more active in defining common standards.

Leigh Dodds pointed out that semantic data was already being used by companies serving advertising.

Other notes

I asked members of the audience who they felt were the heroes and villains of Linked Data in the news industry. The Guardian and BBC came out well – The Daily Mail were named as repeat offenders who would simply refer to “a study” and not say which, nor link to it.

Martin Belam pointed out that The Guardian is increasingly asking itself ‘How will that look through an API’ when producing content, representing a key shift in editorial thinking. If users of the platform are swallowing up significant bandwidth or driving significant traffic then that would probably warrant talking to them about more formal relationships (either customer-provider or partners).

A number of references were made to the problem of provenance – being able to identify where a statement came from. Dan Brickley specifically spoke of the problem with identifying the source of Twitter retweets.

Dan also felt that the problem of journalists not linking would be solved by technology. In conversation previously, he also talked of “subject-based linking” and the impact of SKOS and linked data style identifiers. He saw a problem in that, while new articles might link to older reports on the same issue, older reports were not updated with links to the new updates. Tagging individual articles was problematic in that you then had the equivalent of an overflowing inbox.

(I’ve invited all 4 participants to correct any errors and add anything I’ve missed)

Finally, here’s a bit of video from the very last question addressed in the discussion (filmed with thanks by @countculture):

Linked Data London 090909 from Paul Bradshaw on Vimeo.