Author Archives: Paul Bradshaw

Why offering free wifi could be one way for publishers to save journalism

The recent announcement that Swindon will be the first UK town to offer free wifi to all its citizens has piqued my curiosity on a number of levels. MA Online Journalism student Andrew Brightwell first got me thinking when he pointed out that the ability for the local council (which owns a 35% stake) to sell advertising represented a new threat to the local paper.

But think beyond the immediate threat and you have an enormous opportunity here, because offering universal wifi could allow publishers to recapture some of the qualities that made their print products so successful.

Ditching the template: the rise of the ‘blogazine’

Take a look at this:

[Image: blogazine screengrab]

These are blog posts, tackled with the attitude of a magazine designer. There’s a whole lot more in this post at Smashing Magazine, which looks at the rise of the ‘blogazine’, and interviews four of its leading exponents. Stunning stuff – well worth a read. Now, is there a plugin that makes this as easy to do as magazine layout?

Does news aggregation benefit consumers? Does it harm journalists? (another response to govt)

Here’s a second question I’m expecting on Thursday when I give evidence to the Department for Culture, Media and Sport committee’s sixth evidence session on The future for local and regional media. As Google are on before me (some act to follow) and aggregators are being waved around as the Big Baddie of traditional journalism, the question’s going to be asked: are aggregators really bad? And for whom?

What is aggregation?

The first point I’ll need to pick apart is what is meant by aggregation. The biggest news aggregators are local broadcasters and national newspapers, who habitually lift stories from local newspapers to fill their newshole. ‘But we add value!’ they might cry. Yes, and so do Google News and most of the other aggregators out there: by displaying stories alongside each other for context, and by using algorithms to identify which ones came first or are most linked to.
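
To make that last point concrete, here is a minimal sketch (my own illustration, not any aggregator’s actual code; the outlets, figures and weighting are all invented) of how versions of the same story might be ordered by how early they appeared and how many links they attracted:

    // Illustrative sketch only: ranking versions of one story by recency and inbound links.
    // The outlets, figures and weighting are invented; real systems are far more complex.

    interface StoryVersion {
      outlet: string;
      url: string;
      publishedAt: Date;
      inboundLinks: number; // how many other pages link to this version
    }

    // Earlier publication and more inbound links both push a version up the list.
    function rank(versions: StoryVersion[]): StoryVersion[] {
      const earliest = Math.min(...versions.map(v => v.publishedAt.getTime()));
      const score = (v: StoryVersion) =>
        v.inboundLinks - (v.publishedAt.getTime() - earliest) / 3_600_000; // lose a point per hour of lag
      return [...versions].sort((a, b) => score(b) - score(a));
    }

    const versions: StoryVersion[] = [
      { outlet: "Local paper (original)", url: "http://example-local.co.uk/story", publishedAt: new Date("2009-11-23T08:00Z"), inboundLinks: 30 },
      { outlet: "National rewrite", url: "http://example-national.co.uk/story", publishedAt: new Date("2009-11-23T14:00Z"), inboundLinks: 15 },
    ];
    rank(versions).forEach(v => console.log(`${v.outlet}: ${v.url}`));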

Of course the biggest value that aggregators add is by driving traffic back to the original material. Given that a) around a third of traffic to a typical news site comes from search engines and aggregators, and b) most news sites have visitor numbers far in excess of their print readerships, it’s fair to say that aggregators are not “parasites” eating into news website traffic. A more accurate description would be symbiotes, using content for mutual benefit.

Many of the objections to aggregators appear to me to come down to control and monopoly: Google is making a lot of money out of advertising, and newspapers are not. That’s simply the result of a competitor doing something better than you, which in this case is selling advertising.

They do not sell that advertising against newspaper content; they sell it against their index and their functionality. A good analogy is map-makers: the owners of Tussaud’s Waxwork Museum don’t sue mapmakers for making money from featuring their attraction on their map, because visitors still have to go into the museum to enjoy its content. (And yes, any website publisher can instantly take itself off Google with a simple robots.txt file anyway.)
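
For anyone unfamiliar with that file, it sits at the site’s root and tells crawlers what they may index; a minimal, purely illustrative example that removes a whole site from Google’s crawler would be:

    # robots.txt placed at the site root
    # Tells Google's crawler not to index any page on the site
    User-agent: Googlebot
    Disallow: /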

A second objection, it feels to me, comes from the fact that Google actually makes it much easier for you to bypass the parasites of the news industry who pass off other people’s hard work as their own, and go straight to the source. So if a local paper has a story, you can read that instead of the national newspaper’s rewrite; if a scientific organisation did the research, you can read that instead of the sensationalist write-up; and the politician’s statement can be read in full, not out of context.

Is it good for consumers? Bad for journalists?

For the reasons given above, I don’t feel that aggregators are bad for consumers, although it would be over-simplistic to suggest they are therefore purely good. Aggregators require a different media literacy and are subject to the oddities of a selection process based on mathematical formulae. In short, it’s not worse, or better, just different. But as a potential avenue to more information around a story, as well as a way of highlighting just how much news organisations reproduce the same material, I welcome them.

As for local journalists, aggregators don’t make things worse. In terms of their work processes, journalists benefit from aggregation by being able to find out information more quickly and efficiently. The downside, of course, is that so can their readers, and so rewriting copy from elsewhere becomes less appropriate. Journalists have to add value, which seems to me a better and more rewarding use of their skills. On a basic level that might be through becoming expert aggregators themselves – better than the algorithms – or it may be by adding extra information, context or analysis that they can see is missing elsewhere. Either way, I can’t see how that is a bad thing.

But these are just my thoughts on the question of aggregation and its influence on the news industry. I’d welcome other perspectives as I prepare my responses.

What quality guarantees do blogs have? (response to government)

On Thursday I’ll be giving evidence to the Department for Culture, Media and Sport committee’s sixth evidence session on The future for local and regional media. Based on the series of responses to their consultation earlier this year, I expect to be asked questions around particular themes. One of these revolves around the quality of blogs and how you guarantee that.

The quality issue is an interesting one that I expect to rear its head increasingly as hyperlocal startups are taken more seriously, lobby for equal treatment, and compete with established players for funding and advertising. We’ve already seen it, in fact, in some of the talk by ITN and PA around the bidding for local news consortia, with their emphasis on experience and reliability. The implication, of course, is that you can’t expect that from these ‘Johnny Come Latelies’.

When you look at it, the mainstream media can actually lay claim to guarantees of quality (regardless of whether that quality exists) through a number of avenues: firstly, by being answerable to the market and to regulators; secondly, through professional codes of conduct, training and internal procedures; and finally, through membership of professional organisations like the NUJ.

Bloggers, by contrast, can’t call on any of those same guarantees of ‘quality’. Many come from journalistic backgrounds and so have the same standards, but they don’t generally adhere to a formal code. Any time a ‘Bloggers’ Code of Conduct’ has been mooted, it’s been greeted with derision because of the sheer diversity of practitioners. Still, I do think having individual codes that express your values and how people can obtain redress could count for a lot here.

What guarantees the quality of blogs?

Bloggers’ guarantees of quality, it appears to me, are enshrined in two key generic practices: the right of reply (comments) and transparency (linking). And a key overarching guarantee: accountability.

I’m not sure how to conceptualise this accountability, but it’s something intrinsic to the web that needs exploration. You might call this ‘Google Juice’ or PageRank, or simply reputation – what I’m trying to express is that the medium itself makes it difficult to get away with Bad Journalism as often as happened in less conversational media.

There’s also another guarantee of quality: lack of pressure from production deadlines, sales, proprietors and the need to fill space. I’m not sure how long these will last, and in many cases they don’t apply (e.g. blogs that churn content for hits), but still, broadly, they deserve mention. Bloggers can pursue a story on its own merits and, indeed, when the collaboration of users is a major factor, they are reliant on serving users’ interests rather than those of advertisers or owners. I guess that’s another aspect of accountability.

Production versus Post-Publication

Looking at those claims, you’ll notice that there’s a clear divide between Old and New Media. Almost all of Old Media’s guarantees of quality relate to the production phase of journalism: once it’s published, there is very little ‘guarantee’ of quality at all. If it’s wrong, it’s wrong, and there’s little chance of that being changed.

New Media’s guarantees are more about post-publication – bloggers can’t guarantee that a post will be balanced, but they can guarantee that it will be fixed quickly if there’s something not quite correct, or missing, or that’s happened since.

Once again it’s the divide between the filter-then-publish and the publish-then-filter models.

And this brings us to the fact that the whole question rests on what you assume is ‘quality’. I can guess that MPs will assume that ‘quality’ means, for example, ‘objectivity’ and ‘balance’. I’m not saying that those are not good qualities to have, but we should be careful of assuming they are the only qualities, or that they carry the same importance in a world of universal publishing as they did in a world where you could count the number of publishers on two hands.

In short, the importance of traditional values of news quality is changing and that needs to be recognised.

Equally, then, there are the qualities of being ‘accurate’, ‘up to date’, ‘comprehensive’ and ‘correctable’. The quality of being ‘up to date’, for example, had little meaning beyond the production deadline in a pre-web world; it matters far more now that content is always accessible. ‘Accuracy’ was a quality subject to the limitations of time, sources and newsroom knowledge, but now it’s possible for experts and eyewitnesses to contribute. I could go on.

But for now let me hang this question out and, in the spirit of its subject, invite you to improve the quality of this blog post and answer the question: what guarantees can blogs draw on for their quality? What exactly is quality in a networked age? And how do we articulate that to those from a different era?

What would Google do? AOL has the answer: the algorithm as editor

AOL is making plans for its post-Time Warner life that show just how news could be organised if you started with a blank canvas and two words: user data:

In December, when it becomes a stand-alone company, AOL will begin to tap a new digital-newsroom system that uses a series of algorithms to predict the types of stories, videos and photos that will be most popular with consumers and marketers.

The predictions, it says, are based on a wide swath of data AOL collects, from the Web searches people make on its site to the sites visited by subscribers to its Internet services.

The system is designed to track breaking news and trends and identify the best times to write about seasonal events, such as Halloween or Monday Night Football.

Based on these recommendations, the company’s editorial staff, which totals about 500, will assign articles to a network of free-lancers across the country via a new Web site called Seed.com. AOL says it now works with about 3,000 free-lancers, but it is hoping to sharply increase that number through the Web site, which is open to anyone looking to submit a story.

It’s brave stuff. For years we’ve heard traditional publishers state flatly that, while user data is useful, they would never think of handing over the editorial agenda. Whether that’s pride, vanity, professionalism, or all three, AOL doesn’t have it.

And I lied: it’s not two words on that blank canvas, but four: user and advertiser data. The article goes on:

AOL says it will pay free-lancers based on how much its technology predicts marketers will pay to advertise next to their articles or videos. It says that will range from nothing upfront, with a promise to share ad revenues the article generates, to more than $100 per item.

In addition to selling standard ads to run alongside the story or video on a Web page, AOL says it will offer custom content. For instance, AOL says, if its algorithms show consumers are searching for information about the Zhu Zhu Pets robotic hamster, a retailer could pay AOL to sponsor an article about where to find the hot toy. Some traditional media outlets, including magazines and TV studios, offer similar services.

This is Google’s auction-based contextual advertising model applied to journalism, essentially matching supply and demand from readers and advertisers to set the market rate. The one variable that is notable by its absence is the supply of journalists: AOL don’t say whether payment rates will go up if no one decides to volunteer their writing for a mere ‘share of ad revenues’ (I’m guessing that in that instance one of AOL’s editors will have to write it themselves – but at least they’ll be paid. Hopefully).
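
To make the economics concrete, here is a minimal sketch of how a model like this might turn predicted advertiser demand into a freelance fee. To be clear, this is my own illustration, not AOL’s actual system: the inputs, thresholds and revenue share are invented purely to show the shape of the calculation.

    // Hypothetical illustration of an algorithm-as-editor payment model.
    // Not AOL's real code: the figures and thresholds are invented for illustration.

    interface StoryIdea {
      topic: string;
      predictedPageViews: number; // derived from search and traffic data
      predictedCpm: number;       // what marketers are predicted to pay per 1,000 impressions, in dollars
    }

    // Predicted ad revenue for one story = (page views / 1,000) * CPM.
    function predictedRevenue(idea: StoryIdea): number {
      return (idea.predictedPageViews / 1000) * idea.predictedCpm;
    }

    // Below a floor, offer a share of ad revenue only; otherwise a flat fee, capped.
    function offerForWriter(idea: StoryIdea): string {
      const revenue = predictedRevenue(idea);
      if (revenue < 20) {
        return `"${idea.topic}": no upfront fee, share of ad revenue only`;
      }
      const fee = Math.min(Math.round(revenue * 0.25), 150);
      return `"${idea.topic}": flat fee of $${fee}`;
    }

    console.log(offerForWriter({ topic: "Monday Night Football preview", predictedPageViews: 80000, predictedCpm: 4 }));
    console.log(offerForWriter({ topic: "Allotment society AGM report", predictedPageViews: 1500, predictedCpm: 2 }));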

Indeed, with an upper rate of ‘more than $100 per item’ you wonder how large the supply of writers will be – yes, there are lots of people writing for nothing online, but they generally write out of choice and for pleasure, not based on the arbitrary demand of an algorithm. And clearly, based on the number of editors they look set to employ, AOL are not expecting writers with great knowledge and talent (the payment model also sounds similar to that of the content factories of the search engine optimisation industry).

Ryan Singel points out that Demand Media are already doing something similar. That’s true, but AOL have access to data that Demand could only dream of, along with a number of growing brands.

Ultimately, it’s a clever idea, but one that looks like it has already been taken too far for advertisers who like to see their brand next to quality journalism. A lot rests on whether AOL can manage the churn of contributors, and the bottleneck of editing, long enough for advertisers to get used to the model. It’s a peculiarly new media model, with its own downfall built in.

FAQ: How can news organisations compete at a hyperlocal level? (and other questions from AOP)

These questions were submitted to me in advance of the next AOP meeting, on ‘Microlocal Media’, and have been published on the AOP site. As usual, I’m republishing here as part of my FAQ series.

Q. How can publishers compete with zero-cost base community developed and run sites?

They can’t – and they shouldn’t. When it comes to the web, the value lies in the network, not in the content. Look at the biggest web success story: Google. Google’s value – contrary to the opinion of AP or Rupert Murdoch or the PCC – is not in its content. It is in its connections; its links; its network. You don’t go to Google to read; you go there to find. The same is true of so many things on the internet. One of the problems for publishers is that people use the web as a communications channel first, as a tool second, and as a destination after that. The successful operations understand the other two uses and work on those by forging partnerships, and linking, linking, linking.

FAQ: How would paywalls affect advertisers? (and other questions)

More questions from a student that I’m publishing as part of the FAQ section:

1. If News Corp starts charging for news stories, do you think readers would pay or they would just go to different newspapers?

Both, but mostly the latter. Previous experiments with paywalls saw audiences drop by between 60% and 97%. And you also have to figure in that a paywall will likely make content invisible to search engines (either directly or indirectly, because no one will link to it, which will drop its ranking). Search engines are responsible for a significant proportion of visits (even the Wall Street Journal receives a quarter of its traffic from Google). Still, some people will always pay – the question is: how many?

FAQ: What do you see in the future for investigative journalism?

Here’s another collection of questions from a University of Montana student that I’m answering here as part of my FAQ section:

Q: What do you see as the future for investigative journalism? Do you still see it as having a home at newspapers?

I think the future of investigative journalism is already here – it’s just unevenly distributed, as William Gibson would say. Nonprofit organisations (such as Amnesty or Human Rights Watch) are an increasingly significant source of investigative journalism. Then there are the more general investigative journalism operations, funded by foundations and donations, such as ProPublica. Crowdfunding projects such as Spot.us are going to be increasingly important. And then there are crowdsourcing operations such as those done by Talking Points Memo and, of course, my own project Help Me Investigate.

Dear Mandy … An Open Letter to Peter Mandelson from Dan Bull

Dan Bull is clearly a star in the making…

An open letter to Peter Mandelson regarding the newly announced Digital Economy Bill.

And an interesting use of video, whatever your view.

If you disapprove of the Bill, sign the petition at http://petitions.number10.gov.uk/dont

Write your own message to Lord Mandelson at http://threestrikes.openrightsgroup.org/

Dan Bull’s home page: http://www.myspace.com/danbull

Follow Dan on Twitter @itsDanBull – share the message with the #dearmandy tag.

Could Mashlogic be the answer to infoglut in the Web 2.0 world?

Combating information overload in the Internet age can be a tricky thing. The reader is often overwhelmed by the plethora of Web sites and news portals, and the publisher has to come up with a way to retain loyal users who will stick to their brand even while they are taken from hyperlink to hyperlink through an endless loop of news stories on a single topic of interest.

Mashlogic, a tool that allows users to personalize their Web searches and define information on their own terms, promises to change that. The site assures readers that it can bring relief to their “RSS indigestion” woes.

Consumer version

In addition to allowing the user to choose his or her most trusted sources of news on the Web, the consumer version of Mashlogic, which can be downloaded as a plugin for the Firefox or IE browser, permits readers to outline topics of interest in order to adapt Web-surfing to their needs.

“Mashlogic adds a layer of contextual information to casual viewing experience on a Web site,” says John Bryan, vice president of business development.

Users can go to the Mashlogic site and build their own “mashes.” Here, they can customize source feeds, which may include everything from brand names such as the Guardian or the New York Times, to aggregate mixes, which may incorporate celebrity news and sports teams they follow, and content from bloggers and tweeters. Everything from Wikipedia definitions to LinkedIn profiles of people mentioned in articles can be tracked based on a user’s interest. Mashlogic also allows readers to highlight and choose sources and order them based on their priorities. Little wonder, then, that TechCrunch is calling it a “Swiss Army Knife for hyperlinks.” Behind the scenes, the tool scans RSS and XML feeds from the chosen sites for “strings of words” in Web pages based on the user’s pre-selected choices.
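
As a rough illustration of what that feed-scanning boils down to, here is a short sketch. It is not Mashlogic’s code, and the feed URLs and terms are placeholders: a “mash” is essentially a matter of matching a user’s chosen terms against items in the feeds they trust.

    // Rough sketch of a "mash": scan a user's trusted RSS feeds for their chosen terms.
    // Not Mashlogic's implementation; the feed URLs and terms below are placeholders.

    const trustedFeeds = [
      "http://www.guardian.co.uk/rss",
      "http://feeds.nytimes.com/nyt/rss/HomePage",
    ];
    const chosenTerms = ["Peyton Manning", "health care bill"];

    // Pull <title> elements out of an RSS feed with a simple regex (good enough for a sketch).
    async function feedTitles(url: string): Promise<string[]> {
      const xml = await (await fetch(url)).text();
      return [...xml.matchAll(/<title>(.*?)<\/title>/g)].map(m => m[1]);
    }

    // Return the feed items that mention any of the user's chosen terms.
    async function buildMash(): Promise<string[]> {
      const matches: string[] = [];
      for (const url of trustedFeeds) {
        for (const title of await feedTitles(url)) {
          if (chosenTerms.some(t => title.toLowerCase().includes(t.toLowerCase()))) {
            matches.push(`${title} (${url})`);
          }
        }
      }
      return matches;
    }

    buildMash().then(items => items.forEach(item => console.log(item)));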

2Stay Tool

Internet readers trying to cut through information overload on the Web aren’t the only ones who can take advantage of Mashlogic. Companies and news sites that are interested in preserving their brand, retaining readers and generating page views and revenue can utilize the company’s more recent tool, aptly named “2Stay.”

Here, the publisher takes a few lines of JavaScript and embeds them on a page. On a site with the embedded script, the tool looks for matching terms, and a branded box alerting the reader to relevant articles from the site itself pops up as the user moves the cursor over specific terms. It gives site owners a way to let users navigate news on their site without having to rely on search engines, which can often turn up irrelevant information from untrusted sources. The technology works on two levels – it looks at direct tags, which redirect the reader to articles based solely on words or phrases, and it also contextually scans tags around a term, yielding associated tags and hence secondary stories. This not only prompts the user to stay on a site longer, but also directs traffic to more popular – and hence more profitable – parts of a Web site.
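
To picture what those few lines of publisher-side JavaScript might look like, here is an entirely hypothetical browser-side sketch; the class name, endpoint and markup convention are my inventions rather than Mashlogic’s published interface. The idea is simply to watch for the cursor passing over tagged terms and attach a callout of related stories from the same site.

    // Hypothetical sketch of a 2Stay-style embed; not Mashlogic's actual script.
    // Assumes the publisher wraps known terms in <span class="related-term" data-term="...">.

    // Invented helper: a real system would call the vendor's matching service instead.
    async function relatedArticles(term: string): Promise<{ title: string; url: string }[]> {
      const res = await fetch(`/related-articles?term=${encodeURIComponent(term)}`); // placeholder endpoint
      return res.json();
    }

    // Build a branded callout listing related stories, opened in new tabs so the reader keeps their place.
    function showCallout(anchor: HTMLElement, items: { title: string; url: string }[]): void {
      const box = document.createElement("div");
      box.className = "branded-callout";
      box.innerHTML = items.map(i => `<a href="${i.url}" target="_blank">${i.title}</a>`).join("<br>");
      anchor.appendChild(box);
    }

    // Wire up every tagged term on the page.
    document.querySelectorAll<HTMLElement>(".related-term").forEach(el => {
      el.addEventListener("mouseover", async () => {
        if (!el.querySelector(".branded-callout")) {
          showCallout(el, await relatedArticles(el.dataset.term ?? el.textContent ?? ""));
        }
      });
    });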

“It keeps people on the site for longer and allows people to navigate around a site. It’s a way of drilling down archival content,” says Bryan. “What’s really cool about it from the publisher’s perspective is that we have the ability to drive people from a low cpm area to a high cpm area.”

When I ask him how this is different from the “most popular” or “most commented” articles that most sites showcase, Bryan reminds me that it’s not a contest: “We don’t see Mashlogic as being a replacement to any of the other tools that you have on your site.”

Nevertheless, he is quick to point out that a lot of such lists are usually buried at the end of an article on conventional Web sites, or that they often take a reader through a maze of related stories, without the option of going back to the original article. The Mashlogic tool, on the other hand, opens up relevant stories in different tabs, aiding the horizontal reading experience, literally.

“What we offer the user is a way of quickly finding the associated article without leaving the page.” The tool is also intuitive in the sense that it recognizes terms that would be of interest to the user, and the longer one spends on a site, the deeper it starts to reference buried content.

One of the places this technology works best, according to Bryan, is in the case of celebrity news. As if to reinforce this point, he shows me how you can follow stories tagged with Indianapolis football star Peyton Manning on the citizen sports site Bleacher Report. Merely moving the cursor over the quarterback’s name prompts a callout, which gleans Manning stories from all around the site – a list that includes everything from his team’s latest victory to his place on the NFL power rankings.

But could this excess of Peyton Manning news, so characteristic of niche information and fragmented audiences in the online world, carry with it the very real danger of obscuring the more important news items? Would this entice readers to spend too much time on Manning and too little on the health care bill, for instance?

“I’d like to think they’d use it for both,” says Bryan. In the age of democratization of the Web, the user should indeed be able to choose what he reads and where he reads it. And Mashlogic allows him to do this well. If, in fact, a user were interested in healthcare, the technology would allow him to access the leading magazines, sites, blogs, forums and even tweets on the topic, to create a 360-degree view. “Mashlogic does that better than anybody else because we would scour all the sources that you said you trusted or wanted to reference,” Bryan says.

2Go Tool

The company’s third product, “2Go,” is for the ultimate brand fanatic. The brand can be anything from a preferred site to a favorite sports team or celebrity, or even a topic of interest. Readers would be required to download a button from their chosen sites, which would offer one-click access from anywhere on the Web.

Hence, 2Go is for the reader what 2Stay is for the publisher. “As a user, I have opted in to have the ability to jump back, never be more than one click away from my favorite site,” explains Bryan.

Sure enough, as we traverse the ESPN site for news, a Bleacher Report-branded callout pops up, with related stories on B/R, ready to take the B/R fan back to his preferred source with one click. Mashlogic is currently in negotiations with about ten companies to install this tool, and according to Bryan, it’s being pretty well received.

Thus, what the three technologies being offered collectively do is adapt a reader’s experience to his preferences while allowing publishers to retain their most loyal users on their sites. “Mashlogic does not affect the way a site works, in any shape or form, the site works just the way it works,” Bryan says, as he closes an annoying popup ad.

The company has developed a pretty savvy e-commerce strategy for revenue generation. Any references to books or music in articles can take the user directly to the Amazon or iTunes site to purchase a specific item. The technology also cleverly uses third-party sites to play sample music for the user before he chooses to buy it. The feature can reference video, audio and text URLs. Hence, an NPR callout can jump the reader straight to a podcast from their broadcasts. Bryan also envisions having the callouts sponsored by advertisers. What would be more apt than having a Clorox callout advising a reader about environmentally friendly Green Works products as he reads about the H1N1 virus, he reasons.

Too much distraction, perhaps? In an Internet age where readers are already in danger of encountering endlessly tantalizing hyperlinks, one too many sidebars, and interactive rich-media advertising, do they need more? But, on the other hand, don’t you want to be alerted to that contextual piece on Sarah Palin as you glance through an article about her latest gaffe on a news show?

“A lot of the content, which is still very relevant, tends to fall off the radar due to breaking news stories; it’s still pretty relevant, it’s just not current,” as Bryan points out. Mashlogic has the potential to combat the low attention span of the Internet age and bring that content to readers’ attention. In addition, it can provide them with the hundred and seventy-sixth article on Jon and Kate that they may have missed. What’s not to love about that?