Tag Archives: BBC

BBC regional sites to consider including links to hyperlocal blogs

Image: old BBC North ident (from MHP's The Ident Zone)

The BBC’s social media lead for the English Regions, Robin Morley, has invited requests from “reputable hyperlocal websites” that want links to their stories included in the BBC’s regional news websites.

Tinkering With Scraperwiki – The Bottom Line, OpenCorporates Reconciliation and the Google Viz API

Having got to grips with adding a basic sortable table view to a Scraperwiki view using the Google Chart Tools (Exporting and Displaying Scraperwiki Datasets Using the Google Visualisation API), I thought I’d have a look at wiring in an interactive dashboard control.

You can see the result at BBC Bottom Line programme explorer:

The page loads in the contents of a source Scraperwiki database (so only good for smallish datasets in this version) and pops them into a table. The search box is bound to the Synopsis column and allows you to search for terms or phrases within the Synopsis cells, returning the rows for which there is a hit.

Here’s the function that I used to set up the table and search control, bind them together and render them:

    // Load the Google Visualization API with the interactive controls package
    google.load('visualization', '1.1', {packages:['controls']});

    google.setOnLoadCallback(drawTable);

    function drawTable() {

      // %(json)s is filled in server-side by the (Python) Scraperwiki view template
      var json_data = new google.visualization.DataTable(%(json)s, 0.6);

      var json_table = new google.visualization.ChartWrapper({
        'chartType': 'Table',
        'containerId': 'table_div_json',
        'options': {allowHtml: true}
      });
      // I expected this limit on the view to work, but setColumns is a
      // DataView method rather than a ChartWrapper one; the ChartWrapper
      // 'view' option (e.g. 'view': {'columns': [0,1,2,3,4,5,6,7]}) looks
      // to be the supported way of limiting the displayed columns
      //json_table.setColumns([0,1,2,3,4,5,6,7])

      // Rewrite column 1 (the programme ID) as a link to its /programmes page
      var formatter = new google.visualization.PatternFormat('<a href="http://www.bbc.co.uk/programmes/{0}">{0}</a>');
      formatter.format(json_data, [1]);

      // Rewrite column 7 as a link whose URL is taken from column 8
      formatter = new google.visualization.PatternFormat('<a href="{1}">{0}</a>');
      formatter.format(json_data, [7,8]);

      // A free-text search control bound to the Synopsis column
      var stringFilter = new google.visualization.ControlWrapper({
        'controlType': 'StringFilter',
        'containerId': 'control1',
        'options': {
          'filterColumnLabel': 'Synopsis',
          'matchType': 'any'
        }
      });

      // Bind the search control to the table and render both in the dashboard
      var dashboard = new google.visualization.Dashboard(document.getElementById('dashboard'));
      dashboard.bind(stringFilter, json_table);
      dashboard.draw(json_data);
    }

The formatters are used to linkify the two URLs. However, I couldn’t get the table to hide the final column (the OpenCorporates URI) in the displayed table. (Doing something wrong, somewhere… quite possibly that setColumns belongs to DataView rather than ChartWrapper, in which case the ChartWrapper ‘view’ option noted in the comments above would be the route to try.) You can find the full code for the Scraperwiki view here.

Now you may (or may not) be wondering where the OpenCorporates ID came from. The data used to populate the table is scraped from the JSON version of the BBC programme pages for the OU co-produced business programme The Bottom Line (Bottom Line scraper). (I’ve been pondering for some time whether there is enough content there to try to build something that might usefully support or help promote OUBS/OU business courses, or link across to free OU business courses on OpenLearn…) Supplementary content items for each programme identify the name of each contributor and the company they represent in a conventional way. Their role is also described in what looks to be a conventionally constructed text string, though I didn’t try to extract this explicitly – yet. (I’m guessing the Reuters OpenCalais API would also make light work of that?)
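For reference, here’s roughly the shape of that lookup. This is a minimal sketch rather than the scraper itself: it assumes the .json variant of a BBC /programmes page and uses a placeholder PID:

    import urllib
    import simplejson

    # Hypothetical PID for illustration only; the scraper collects real PIDs
    # by walking The Bottom Line's episode listings
    pid = 'b00xxxxx'

    # BBC /programmes pages also serve a machine-readable view of the same
    # data at the page URL with a .json suffix
    url = 'http://www.bbc.co.uk/programmes/%s.json' % pid
    data = simplejson.load(urllib.urlopen(url))

    # Episode metadata, including the synopsis the table's search control
    # filters on, sits under the top-level 'programme' key
    programme = data['programme']
    print programme['pid'], programme['short_synopsis']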

Having got access to the company name, I thought it might be interesting to try to get a corporate identifier back for each one using the OpenCorporates (Google Refine) Reconciliation API (Google Refine reconciliation service documentation).

Here’s a fragment from the scraper showing how to lookup a company name using the OpenCorporates reconciliation API and get the data back:

    import urllib
    import simplejson

    # record is the current scraped programme record (set elsewhere in the scraper).
    # Strip non-ASCII characters from the company name (a quick and dirty
    # sanitisation) and build the OpenCorporates reconciliation query URL
    ocrecURL = 'http://opencorporates.com/reconcile?query=' + urllib.quote_plus("".join(i for i in record['company'] if ord(i) < 128))
    try:
        recData = simplejson.load(urllib.urlopen(ocrecURL))
    except:
        # If the lookup fails for any reason, fall back to an empty result set
        recData = {'result': []}
    print ocrecURL, [recData]
    # Take the top-ranked candidate if its relevance score is high enough
    if len(recData['result']) > 0:
        if recData['result'][0]['score'] >= 0.7:
            record['ocData'] = recData['result'][0]
            record['ocID'] = recData['result'][0]['uri']
            record['ocName'] = recData['result'][0]['name']

The ocrecURL is constructed from the company name, sanitised in a hackish fashion. If we get any results back, we check the (relevance) score of the first one. (The results seem to be ordered in descending score order; I didn’t check whether this is defined behaviour or just convention.) If it seems relevant, we go with it. From a quick skim of company reconciliations, I noticed at least one false positive – Reed – but on the whole it seemed to work fairly well. (If we look up more details about the company from OpenCorporates, and get back the company URL, for example, we might be able to compare the domain with the domain given in the link on the Bottom Line page. A match would suggest quite strongly that we have got the right company…)
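That domain check might look something like this minimal sketch, in which hypothetical URL values stand in for two lookups the scraper doesn’t yet make:

    import urlparse

    def domain(url):
        # Reduce a URL to its bare host name, dropping any leading 'www.'
        host = urlparse.urlparse(url).netloc.lower()
        return host[4:] if host.startswith('www.') else host

    # Hypothetical values: the company website from an OpenCorporates record,
    # and the link given on the Bottom Line programme page
    ocCompanyURL = 'http://www.example.co.uk/about'
    bottomLineURL = 'http://example.co.uk/'

    # Matching domains would strongly suggest we reconciled the right company
    if domain(ocCompanyURL) == domain(bottomLineURL):
        print 'Domain match - looks like the right company'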

As @stuartbrown suggested in a tweet, a possible next step is to link the name of each guest to a Linked Data identifier for them, for example using DBpedia (although I wonder – is @opencorporates also minting IDs for company directors?). I also need to find some way of pulling out proper, detailed subject tags for each episode that could be used to populate a drop-down list filter control…

PS for more Google Dashboard controls, check out the Google interactive playground…

PPS see also: OpenLearn Glossary Search and OpenLearn Learning Outcomes Search

Games are just another storytelling device

Whenever people talk about games as a potential journalistic device, there is a reaction against the idea of ‘play’ as a method for communicating ‘serious’ news.

Malcolm Bradbrook’s post on the News:Rewired talk by Newsgames author Bobby Schweizer is an unusually thoughtful exploration of that reaction, where he asks whether the use of games might contribute to the wider tabloidisation of news, the key aspects of which he compares with games as follows:

  1. “Privileging the visual over analysis – I think this is obvious where games are concerned. Actual levels of analysis will be minimal compared to the visual elements of the game
  2. “Using cultural knowledge over analysis – the game will become a shared experience, just as the BBC’s One in 7bn was in October. But how many moved beyond typing in their date of birth to reading the analysis? It drove millions to the BBC site but was it for the acquisition of understanding or something to post on Facebook/Twitter?
  3. “Dehistoricised and fragmented versions of events - as above, how much context can you provide in a limited gaming experience?”

These are all good points, and designers of journalism games should think about them carefully, but I think there’s a danger of seeing games in isolation.

Hooking the user – and creating a market

With the BBC’s One in 7bn interactive, for example, I’d want to know how many users would have read the analysis if there had been no interactive at all. Yes, many people will not have gone further than typing in their date of birth – but that doesn’t mean all of them didn’t. 10% of a lot (and that interactive attracted a huge audience) can be more than 100% of a few.

What’s more, the awareness driven by that interactive creates an environment for news discussion that wouldn’t otherwise exist. Even if 90% of users (pick your own proportion, it doesn’t matter) never read the analysis directly, they are still more likely to discuss the story with others, some of whom would then be able to talk about the analysis the others missed.

Without that social context, the ‘serious’ news consumer has less opportunity to discuss what they’ve read.

News is multi-purpose

Then there’s the idea that people read the news for “acquisition of understanding”. I’m not sure how much news consumption is motivated by that, and how much by the need to be able to operate socially (discussing current events) or professionally (reacting to them) or even emotionally (being stimulated by them).

As someone who has tried various techniques to help students “acquire understanding”, I’m aware that the best method is not always to present them with facts, or a story. Sometimes it’s about creating a social environment; sometimes it’s about simulating an experience or putting people in a situation where they are faced with particular problems (all of which are techniques used by games).

Bradbrook ends with a quote from Jeremy Paxman on journalism’s “first duty” as disclosure. But if you can’t get people to listen to that disclosure then it is purposeless (aside from making the journalist feel superior). That is why journalists write stories, and not research documents. It is why they use case studies and not just statistics.

Games are another way of communicating information. Like all the other methods, they have their limitations as well as strengths. We need to be aware of these, and think about them critically, but to throw out the method entirely would be a mistake, I think.

UPDATE: Some very useful tweets from Mary Hamilton, Si Lumb, Chris Unitt and Mark Sorrell drew my attention to some equally useful posts on games and storytelling more generally.

Sorrell’s post Games Good Stories Bad, for example, includes this passage:

“Games can create great stories, don’t get me wrong. But they are largely incapable of telling great stories. Games are about interaction and agency, about choice and self-determination. One of the points made by fancy-pants French sociologist Roger Caillois when defining what a game is, was that the outcome of a game must be uncertain. The result cannot be known in advance. When you try and tell a story in a game, you must break that rule, you must make the outcome of events pre-determined.”

And while reading Lumb’s blog I came across this post with this point:

“A story as an entity, as a thing doesn’t exist until some event, some imagination, some narrative is constructed, relived, shared or described. It must be told. It is “story telling”, after all. Only at the point that you tell someone about that something does it become real, does it become a story. It is always from your perspective, it is always your interpretation, it is a gift you wish to share and that is how it comes to be.

“In a game you can plant narrative as discoverable, you can have cut scenes, you can have environments and situations and mechanics and toys and rules and delight and wonderful play – and in all of this you hide traditional “stories” from visual and textual creators (until read or viewed they don’t exist) and you have the emergence of events that may indeed become stories when you share with another person.”

And finally, if you just want to explore these issues in a handy diagram, there’s this infographic tweeted by Lumb:

A Model of Play - Dubberly Design Office

For more background on games in journalism, see my Delicious bookmarks at http://delicious.com/paulb/gamejournalism

Are Sky and BBC leaving the field open to Twitter competitors?

At first glance, Sky’s decision that its journalists should not retweet information that has “not been through the Sky News editorial process” and the BBC’s policy to prioritise filing “written copy into our newsroom as quickly as possible” seem logical.

For Sky it is about maintaining editorial control over all content produced by its staff. For the BBC, it seems to be about making sure that the newsroom, and by extension the wider organisation, takes priority over the individual.

But there are also blind spots in these strategies that they may come to regret.

Our content?

The Sky policy articulates an assumption about ‘content’ that’s worth picking apart.

We accept as journalists that what we produce is our responsibility. When it comes to retweeting, however, it’s not entirely clear what we are doing. Is that news production, in the same way that quoting a source is? Is it newsgathering, in the same way that you might repeat a lead to someone to find out their reaction? Or is it merely distribution?

The answer, as I’ve written before, is that retweeting can be, and often is, all three.

Writing about a similar policy at the Oregonian late last year, Steve Buttry made the point that retweets are not endorsements. Jeff Jarvis argued that they were “quotes”.

I don’t think it’s as simple as that (as I explain below), but I do think it’s illustrative: if Sky News were to prevent journalists from using any quote on air or online where they could not verify its factual basis, then nothing would get broadcast. Live interviews would be impossible.

The Sky policy, then, seems to treat retweets as pure distribution, and – crucially – to treat the tweet in isolation. Not as a quote, but as a story, consisting entirely of someone else’s content, which has not been through Sky editorial processes but which is branded or endorsed as Sky journalism.

There’s a lot to admire in the pride in their journalism that this shows – indeed, I would like to see the same rigour applied to the countless quotes that are printed and broadcast by all media without being compared with any evidence.

But do users really see retweets in the same way? And if they do, will they always do so?

Curation vs creation

There’s a second issue here which is more about hard commercial success. Research suggests that successful users of Twitter tend to combine curation with creation. Preventing journalists from retweeting leaves them – and their employers – without a vital tool in their storytelling and distribution.

The tension surrounding retweeting can be illustrated in the difference between two broadcast journalists who use Twitter particularly effectively: Sky’s own Neal Mann, and NPR’s Andy Carvin. Andy retweets habitually as a way of seeking further information. Neal, as he explained in this Q&A with one of my classes, feels that he has a responsibility not to retweet information he cannot verify (from 2 mins in).

Both approaches have their advantages and disadvantages. But both combine curation with creation.

Network effects

A third issue that strikes me is how these policies fit uncomfortably alongside the networked ways that news is experienced now.

The BBC policy, for example, appears at first glance to prevent journalists from diving right into the story as it develops online. Social media editor Chris Hamilton does note, importantly, that they have “a technology that allows our journalists to transmit text simultaneously to our newsroom systems and to their own Twitter accounts”. However, this is coupled with the position that:

“Our first priority remains ensuring that important information reaches BBC colleagues, and thus all our audiences, as quickly as possible – and certainly not after it reaches Twitter.”

This is an interesting line of argument, and there are a number of competing priorities underlying it that I want to understand more clearly.

Firstly, it implies a separation of newsroom systems and Twitter. If newsroom staff are not following their own journalists on Twitter as part of their systems, why not? Sky pioneered the use of Twitter as an internal newswire, and the man responsible, Julian March, is now doing something similar at ITV. The connection between internal systems and Twitter is notable.

Then there’s that focus on “all our audiences” in opposition to those early adopter Twitter types. If news is “breaking news, an exclusive or any kind of urgent update”, being first on Twitter can give you strategic advantages that waiting for the six o’clock – or even typing a report that’s over 140 characters – won’t. For example:

  • Building a buzz (driving people to watch, listen to or search for the fuller story)
  • Establishing authority on Google (which ranks first reports over later ones)
  • Establishing traditional authority by being known as the first to break the story
  • Making it easier for people on the scene to get in touch (if someone has just experienced a newsworthy event, or heard about it from someone who has, how likely is it that they’ll search Twitter to see who else was there? You want to be the journalist they find and contact)

Hamilton’s post does anticipate some of these points:

“When the technology [to inform the newsroom and generate a tweet at the same time] isn’t available, for whatever reason, we’re asking them to prioritise telling the newsroom before sending a tweet.

“We’re talking a difference of a few seconds. In some situations.

“And we’re talking current guidance, not tablets of stone. This is a landscape that’s moving incredibly quickly, inside and outside newsrooms, and the guidance will evolve as quickly.”

Everything at the same time

There’s another side to this, which is evidence of news organisations taking a strategic decision that, in a world of information overload, they should stop trying to be the first (an increasingly hard task), and instead seek to be more authoritative. To be able to say, confidently, “Every atom we distribute is confirmed”, or “We held back to do this spectacularly as a team”.

There’s value in that, and a lot to be admired. I’m not saying that these policies are inherently wrong. I don’t know the full thinking that went into them, or the subtleties of their implementation (as Rory Cellan-Jones illustrates in his example, which contrasts with what can actually happen). I don’t think there is a right and a wrong way to ‘do Twitter’. Every decision is a trade off, because so many factors are in play. I just wanted to explore some of those factors here.

As soon as you digitise information you remove the physical limitations that necessitated the traditional distinctions between the editorial processes of newsgathering, production, editing and distribution.

A single tweet can be doing all at the same time. Social media policies need to recognise this, and journalists need to be trained to understand the subtleties too.

Location, Location, Location

In this guest post, Damian Radcliffe highlights some recent developments at the intersection of hyper-local and SoLoMo (social, local, mobile). His more detailed slides looking at 20 developments across the sector during the last two months of 2011 are cross-posted at the bottom of this article.

Facebook’s recent purchase of the location-based service Gowalla (Slide 19 below) suggests that the social network still thinks there is a future for this type of “check in” service. Touted as “the next big thing” ever since Foursquare launched at SXSW in 2009, Location Based Services (LBS) have to date not quite lived up to the hype.

Certainly there’s plenty of data to suggest that the public don’t quite share the enthusiasm of many Silicon Valley investors. Yet.

Part of their challenge is that not only is awareness of services relatively low – just 30% of respondents in a survey of 37,000 people by Forrester (Slide 27) – but their benefits are also not necessarily clearly understood.

In 2011, a study by youth marketing agency Dubit found about half of UK teenagers are not aware of location-based social networking services such as Foursquare and Facebook Places, with 58% of those who had heard of them saying they “do not see the point” of sharing geographic information.

Safety concerns may not be the primary concern of Dubit’s respondents, but as the “Please Rob Me” website says: “….on one end we’re leaving lights on when we’re going on a holiday, and on the other we’re telling everybody on the internet we’re not home… The danger is publicly telling people where you are. This is because it leaves one place you’re definitely not… home.”

Reinforcing this concern are several stories from both the UK and the US of insurers refusing to pay out after a domestic burglary, where victims have announced via social networks that they were away on holiday – or having a beer downtown.

For LBS to go truly mass market – and Forrester (see Slide 27) found that only 5% of mobile users were monthly LBS users – smartphone growth will be a key part of the puzzle. Recent Ofcom data reported that:

  • Ownership nearly doubled in the UK between February 2010 and August 2011 (from 24% to 46%).
  • 46% of UK internet users also used their phones to go online in October 2011.

For now at least, most of our location based activity would seem to be based on previous online behaviours. So, search continues to dominate.

Google in a recent blog post described local search ads as “so hot right now” (Slide 22, Sept-Oct 2011 update). The search giant launched hyper-local search ads a year ago, along with a “News Near You” feature in May 2011. (See: April-May 2011 update, Slide 27.)

Meanwhile, BIA/Kelsey forecast that local search advertising revenues in the US will increase from $5.1 billion in 2010 to $8.2 billion in 2015. Their figures suggest that by 2015, 30% of search will be local.

The other notable growth area, location-based mobile advertising, also offers a different slant on the typical “check in” service in which Gowalla et al tend to specialise. Borrell forecasts that this space will increase 66% in the US during 2012 (Slide 22).

The most high profile example of this service in the UK is O2 More, which triggers advertising or deals when a user passes through certain locations – offering a clear financial incentive for sharing your location.

Perhaps this – along with tailored news and information manifest in services such as News Near You, Postcode Gazette and India’s Taazza – is the way forward.

Jiepang, China’s leading Location-Based Social Mobile App, offered a recent example of how to do this. Late last year they partnered with Starbucks, offering users a virtual Starbucks badge if they “checked in” at a Starbucks store in Shanghai, Jiangsu or Zhejiang. When the number of badges issued hit 20,000, all badge holders got a free festive upgrade to a larger cup size. Coupled with the ease of the NFC technology deployed to allow users to “check in”, it’s easy to understand the consumer benefit of such a service.

Mine’s a venti gingerbread latte. No cream. Xièxiè.

2011: the UK hyper-local year in review

In this guest post, Damian Radcliffe highlights some topline developments in the hyper-local space during 2011. He also asks for your suggestions of great hyper-local content from 2011. His more detailed slides looking at the previous year are cross-posted at the bottom of this article.

2011 was a busy year across the hyper-local sphere, with a flurry of activity online as well as on more traditional platforms such as TV, radio and newspapers.

The Government’s plans for local TV have developed considerably, following the Shott Review just over a year ago. We now have a clearer indication of the areas which will be first on the list for these new services, and how Ofcom might award these licences. What we don’t know is who will apply for the licences, or what their business models will be. But this should become clear in the second half of the year.

Whilst the Leveson Inquiry hasn’t directly been looking at local media, it has been a part of the debate. Claire Enders outlined some of the challenges facing the regional and local press in a presentation showing declining revenue, jobs and advertising over the past five years. Her research suggests that the impact of “the move to digital” has been greater at a local level than at the nationals.

Across the board, funding remains a challenge for many. But new models are emerging, with Daily Deals starting to form part of the revenue mix alongside money from foundations and franchising.

And on the content front, we saw Jeremy Hunt cite a number of hyper-local examples at the Oxford Media Convention, as well as record coverage for regional press and many hyper-local outlets as a result of the summer riots.

I’ve included more on all of these stories in my personal retrospective for the past year.

One area where I’d really welcome feedback is examples of hyper-local content you produced – or read – in 2011. I’m conscious that a lot of great material may not necessarily reach a wider audience, so do post your suggestions below and hopefully we can begin to redress that.

The strikes and the rise of the liveblog

Liveblogging the strikes: Twitter's #n30 stream

Today sees the UK’s biggest strike in decades as public sector workers protest against pension reforms. Most news organisations are covering the day’s events through liveblogs: that web-native format which has so quickly become the automatic choice for covering rolling news.

To illustrate just how dominant the liveblog has become, take a look at the BBC, Channel 4 News, The Guardian’s ‘Strikesblog‘ or The Telegraph. The Independent’s coverage is hosted on their own live.independent.co.uk subdomain, while Sky have embedded their liveblog in other articles. There’s even a separate Storify liveblog for The Guardian’s Local Government section, and on Radio 5 Live you can find an example of radio reporters liveblogging.

Regional newspapers such as the Chronicle in the north east and the Essex County Standard are liveblogging the local angle, while the Huffington Post liveblogs the political face-off at Prime Minister’s Question Time and the PoliticsHome blog liveblogs both. Leeds Student are liveblogging too. And it’s not just news organisations: campaigning organisation UK Uncut have their own liveblog, as do the public sector workers’ union UNISON and Pensions Justice (on Tumblr).

So dominant so quickly

The format has become so dominant so quickly because it satisfies both editorial and commercial demands: liveblogs are sticky. People stay on them much longer than on traditional articles, in the same way that they tend to leave streams of information from Twitter or Facebook on in the background on their phone, tablet or PC – or indeed, the way that they leave 24-hour television on when there are big events.

It also allows print outlets to compete in the 24-hour environment of rolling news. The updates of the liveblog are equivalent to the ‘time-filling’ of 24-hour television, with this key difference: that updates no longer come from a handful of strategically-placed reporters, but rather (when done well) hundreds of eyewitnesses, stakeholders, experts, campaigners, reporters from other news outlets, and other participants.

The results (when done badly) can be more noise than signal – incoherent, disconnected, fragmented. When done well, however, a good liveblog can draw clarity out of confusion, chase rumours down to facts, and draw multiple threads into something resembling a canvas.

At this early stage liveblogging is still a form finding its feet. More static than broadcast, it does not require the same cycle of repetition; more dynamic than print, it does, however, demand regular summarising.

Most importantly, it takes place within a network. The audience are not sat on their couches watching a single piece of coverage; they may be clicking between a dozen different sources; they may be present at the event itself; they may have friends or family there, sending them updates from their phone. If they are hearing about something important that you’re not addressing, you have a problem.

The list of liveblogs above demonstrates this particularly well, and it doesn’t include the biggest liveblog of all: the #n30 thread on Twitter (and as Facebook users we might also be consuming a liveblog of sorts, made up of our friends’ updates).

More than documenting

In this situation the journalist is needed less to document what is taking place, and more to build on the documentation that is already being done: by witnesses, and by other journalists. That might mean aggregating the most important updates, or providing analysis of what they mean. It might mean enriching content by adding audio, video, maps or photography. Most importantly, it may mean verifying accounts that hold particular significance.

Liveblogging: adding value to the network

These were the lessons that I sought to teach my class last week when I reconstructed an event in the class and asked them to liveblog it (more in a future blog post). Without any briefing, they made predictable (and planned) mistakes: they thought they were there purely to document the event.

But now, more than ever, journalists are not there solely to document.

On a day like today you do not need to be a journalist to take part in the ‘liveblog’ of #n30. If you are passionate about current events, if you are curious about news, you can be out there getting experience in dealing with those events – not just reporting them, but speaking to the people involved, recording images and audio to enrich what is in front of you, creating maps and galleries and Storify threads to aggregate the most illuminating accounts, and seeking reaction and verification for the most challenging ones.

The story is already being told by hundreds of people, some better than others. It’s a chance to create good journalism, and be better at it. I hope every aspiring journalist takes it, and the next chance, and the next one.