Tag Archives: BBC

Schofield’s list, the mob and a very modern moral panic

Someone, somewhere right now will be writing a thesis, dissertation or journal paper about the very modern moral panic playing out across the UK media.

What began as a story about allegations of sexual abuse by TV and radio celebrity Jimmy Savile turned into a story about that story being covered up, then into one about how the abuse could take place (at the BBC in the 1970s, but also in hospitals and schools), and then into wider allegations of a paedophile ring involving politicians.


“Genuinely digital content for the first time” – the BBC under Entwistle

For those who missed it, from George Entwistle’s speech to BBC staff this week, a taste of the corporation’s priorities under the new DG:

“It’s the quest for this – genuinely new forms of digital content – that represents the next profound moment of change we need to prepare for if we’re to deserve a new charter.

“As we increasingly make use of a distribution model – the internet – principally characterised by its return path, its capacity for interaction, its hunger for more and more information about the habits and preferences of individual users, then we need to be ready to create content which exploits this new environment – content which shifts the height of our ambition from live output to living output.

“We need to be ready to produce and create genuinely digital content for the first time. And we need to understand better what it will mean to assemble, edit and present such content in a digital setting where social recommendation and other forms of curation will play a much more influential role.”

BBC regional sites to consider including links to hyperlocal blogs

Old BBC North ident (image from MHP The Ident Zone)

The BBC’s social media lead for the English Regions, Robin Morley, has invited requests from “reputable hyperlocal websites” that want links to their stories included in the BBC’s regional news websites.

Tinkering With Scraperwiki – The Bottom Line, OpenCorporates Reconciliation and the Google Viz API

Having got to grips with adding a basic sortable table view to a Scraperwiki view using the Google Chart Tools (Exporting and Displaying Scraperwiki Datasets Using the Google Visualisation API), I thought I’d have a look at wiring in an interactive dashboard control.

You can see the result at BBC Bottom Line programme explorer:

The page loads in the contents of a source Scraperwiki database (so only good for smallish datasets in this version) and pops them into a table. The search box is bound to the Synopsis column and allows you to search for terms or phrases within the Synopsis cells, returning the rows for which there is a hit.

Here’s the function that I used to set up the table and search control, bind them together and render them:

    google.load('visualization', '1.1', {packages: ['controls']});

    google.setOnLoadCallback(drawTable);

    function drawTable() {

      var json_data = new google.visualization.DataTable(%(json)s, 0.6);

      var json_table = new google.visualization.ChartWrapper({'chartType': 'Table', 'containerId': 'table_div_json', 'options': {allowHtml: true}});
      // I expected this to limit the columns shown, but ChartWrapper has no setColumns
      // method; the 'view' option ({'view': {'columns': [0, 1, 2, 3, 4, 5, 6, 7]}}) looks
      // to be the supported route.
      //json_table.setColumns([0,1,2,3,4,5,6,7])

      // Linkify the programme ID in column 1.
      var formatter = new google.visualization.PatternFormat('<a href="http://www.bbc.co.uk/programmes/{0}">{0}</a>');
      formatter.format(json_data, [1]);

      // Use the URL in column 8 as the link target for the label in column 7.
      formatter = new google.visualization.PatternFormat('<a href="{1}">{0}</a>');
      formatter.format(json_data, [7, 8]);

      var stringFilter = new google.visualization.ControlWrapper({
        'controlType': 'StringFilter',
        'containerId': 'control1',
        'options': {
          'filterColumnLabel': 'Synopsis',
          'matchType': 'any'
        }
      });

      // Bind the search control to the table, then draw the dashboard.
      var dashboard = new google.visualization.Dashboard(document.getElementById('dashboard'));
      dashboard.bind(stringFilter, json_table);
      dashboard.draw(json_data);

    }

The formatter is used to linkify the two URLs. However, I couldn’t get the table to hide the final column (the OpenCorporates URI) in the displayed table. (I’m doing something wrong, somewhere…) You can find the full code for the Scraperwiki view here.

Now you may (or may not) be wondering where the OpenCorporates ID came from. The data used to populate the table is scraped from the JSON version of the BBC programme pages for the OU co-produced business programme The Bottom Line (Bottom Line scraper). (I’ve been pondering for some time whether there is enough content there to try to build something that might usefully support or help promote OUBS/OU business courses, or link across to free OU business courses on OpenLearn…) Supplementary content items for each programme identify the name of each contributor and the company they represent in a conventional way. (Their role is also described in what looks to be a conventionally constructed text string, though I didn’t try to extract this explicitly – yet. I’m guessing the Reuters OpenCalais API would also make light work of that?)
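For anyone wanting to poke at the same data, the JSON version is simply the programme page URL with .json appended. A minimal sketch in modern Python (the function names and the b00xxxxx programme ID are illustrative – the original scraper ran on Scraperwiki’s Python 2):

```python
import json
from urllib.request import urlopen


def programme_json_url(pid):
    """Build the JSON view URL for a BBC /programmes page.

    The BBC exposes a machine-readable version of each programme page
    at the same path with '.json' appended.
    """
    return "http://www.bbc.co.uk/programmes/%s.json" % pid


def load_programme(pid):
    """Fetch and parse the JSON for a programme (makes a network call)."""
    with urlopen(programme_json_url(pid)) as resp:
        return json.load(resp)
```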

Having got access to the company name, I thought it might be interesting to try to get a corporate identifier back for each one using the OpenCorporates (Google Refine) Reconciliation API (Google Refine reconciliation service documentation).

Here’s a fragment from the scraper showing how to lookup a company name using the OpenCorporates reconciliation API and get the data back:

    import urllib, simplejson

    ocrecURL = 'http://opencorporates.com/reconcile?query=' + urllib.quote_plus("".join(i for i in record['company'] if ord(i) < 128))
    try:
        recData = simplejson.load(urllib.urlopen(ocrecURL))
    except Exception:
        recData = {'result': []}
    print ocrecURL, [recData]
    if len(recData['result']) > 0:
        if recData['result'][0]['score'] >= 0.7:
            record['ocData'] = recData['result'][0]
            record['ocID'] = recData['result'][0]['uri']
            record['ocName'] = recData['result'][0]['name']

The ocrecURL is constructed from the company name, sanitised in a hack fashion. If we get any results back, we check the (relevance) score of the first one. (The results seem to be ordered in descending score order. I didn’t check to see whether this was defined or by convention.) If it seems relevant, we go with it. From a quick skim of company reconciliations, I noticed at least one false positive – Reed – but on the whole it seemed to work fairly well. (If we look up more details about the company from OpenCorporates, and get back the company URL, for example, we might be able to compare the domain with the domain given in the link on the Bottom Line page. A match would suggest quite strongly that we have got the right company…)
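That domain-matching check is straightforward to sketch (in modern Python; the function names are mine, and real URLs would need more careful normalisation than just stripping a leading “www.”):

```python
from urllib.parse import urlparse


def site_domain(url):
    """Extract a comparable domain from a URL, stripping any leading 'www.'."""
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host


def likely_same_company(opencorporates_url, programme_link_url):
    """A match between the two domains suggests the reconciliation found
    the right company; a mismatch flags it for manual checking."""
    return site_domain(opencorporates_url) == site_domain(programme_link_url)
```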

As @stuartbrown suggested in a tweet, a possible next step is to link the name of each guest to a Linked Data identifier for them, for example using DBpedia (although I wonder – is @opencorporates also minting IDs for company directors?). I also need to find some way of pulling out proper, detailed subject tags for each episode that could be used to populate a drop-down list filter control…

PS for more Google Dashboard controls, check out the Google interactive playground…

PPS see also: OpenLearn Glossary Search and OpenLearn Learning Outcomes Search

Games are just another storytelling device

Whenever people talk about games as a potential journalistic device, there is a reaction against the idea of ‘play’ as a method for communicating ‘serious’ news.

Malcolm Bradbrook’s post on the News:Rewired talk by Newsgames author Bobby Schweizer is an unusually thoughtful exploration of that reaction, where he asks whether the use of games might contribute to the wider tabloidisation of news, the key aspects of which he compares with games as follows:

  1. “Privileging the visual over analysis – I think this is obvious where games are concerned. Actual levels of analysis will be minimal compared to the visual elements of the game
  2. “Using cultural knowledge over analysis – the game will become a shared experience, just as the BBC’s One in 7bn was in October. But how many moved beyond typing in their date of birth to reading the analysis? It drove millions to the BBC site but was it for the acquisition of understanding or something to post on Facebook/Twitter?
  3. “Dehistoricised and fragmented versions of events – as above, how much context can you provide in a limited gaming experience?”

These are all good points, and designers of journalism games should think about them carefully, but I think there’s a danger of seeing games in isolation.

Hooking the user – and creating a market

With the BBC’s One in 7bn interactive, for example, I’d want to know how many users would have read the analysis if there was no interactive at all. Yes, many people will not have gone further than typing in their date of birth – but that doesn’t mean all of them didn’t. 10% of a lot (and that interactive attracted a huge audience) can be more than 100% of a few.

What’s more, the awareness driven by that interactive creates an environment for news discussion that wouldn’t otherwise exist. Even if 90% of users (pick your own proportion, it doesn’t matter) never read the analysis directly, they are still more likely to discuss the story with others, some of whom would then be able to talk about the analysis the others missed.

Without that social context, the ‘serious’ news consumer has less opportunity to discuss what they’ve read.

News is multi-purpose

Then there’s the idea that people read the news for “acquisition of understanding”. I’m not sure how much news consumption is motivated by that, and how much by the need to be able to operate socially (discussing current events) or professionally (reacting to them) or even emotionally (being stimulated by them).

As someone who has tried various techniques to help students “acquire understanding”, I’m aware that the best method is not always to present them with facts, or a story. Sometimes it’s about creating a social environment; sometimes it’s about simulating an experience or putting people in a situation where they are faced with particular problems (all of which are techniques used by games).

Bradbrook ends with a quote from Jeremy Paxman on journalism’s “first duty” as disclosure. But if you can’t get people to listen to that disclosure then it is purposeless (aside from making the journalist feel superior). That is why journalists write stories, and not research documents. It is why they use case studies and not just statistics.

Games are another way of communicating information. Like all the other methods, they have their limitations as well as strengths. We need to be aware of these, and think about them critically, but to throw out the method entirely would be a mistake, I think.

UPDATE: Some helpful tweets from Mary Hamilton, Si Lumb, Chris Unitt and Mark Sorrell drew my attention to some very useful posts on games and storytelling more generally.

Sorrell’s post Games Good Stories Bad, for example, includes this passage:

“Games can create great stories, don’t get me wrong. But they are largely incapable of telling great stories. Games are about interaction and agency, about choice and self-determination. One of the points made by fancy-pants French sociologist Roger Caillois when defining what a game is, was that the outcome of a game must be uncertain. The result cannot be known in advance. When you try and tell a story in a game, you must break that rule, you must make the outcome of events pre-determined.”
And while reading Lumb’s blog I came across this post, which makes this point:

“A story as an entity, as a thing doesn’t exist until some event, some imagination, some narrative is constructed, relived, shared or described. It must be told. It is “story telling”, after all. Only at the point that you tell someone about that something does it become real, does it become a story. It is always from your perspective, it is always your interpretation, it is a gift you wish to share and that is how it comes to be.

“In a game you can plant narrative as discoverable, you can have cut scenes, you can have environments and situations and mechanics and toys and rules and delight and wonderful play – and in all of this you hide traditional “stories” from visual and textual creators (until read or viewed they don’t exist) and you have the emergence of events that may indeed become stories when you share with another person.”

And finally, if you just want to explore these issues in a handy diagram, there’s this infographic tweeted by Lumb:

A Model of Play - Dubberly Design Office

For more background on games in journalism, see my Delicious bookmarks at http://delicious.com/paulb/gamejournalism

Are Sky and BBC leaving the field open to Twitter competitors?

At first glance, Sky’s decision that its journalists should not retweet information that has “not been through the Sky News editorial process” and the BBC’s policy to prioritise filing “written copy into our newsroom as quickly as possible” seem logical.

For Sky it is about maintaining editorial control over all content produced by its staff. For the BBC, it seems to be about making sure that the newsroom, and by extension the wider organisation, takes priority over the individual.

But there are also blind spots in these strategies that they may come to regret.

Our content?

The Sky policy articulates an assumption about ‘content’ that’s worth picking apart.

We accept as journalists that what we produce is our responsibility. When it comes to retweeting, however, it’s not entirely clear what we are doing. Is that news production, in the same way that quoting a source is? Is it newsgathering, in the same way that you might repeat a lead to someone to find out their reaction? Or is it merely distribution?

The answer, as I’ve written before, is that retweeting can be, and often is, all three.

Writing about a similar policy at the Oregonian late last year, Steve Buttry made the point that retweets are not endorsements. Jeff Jarvis argued that they were “quotes”.

I don’t think it’s as simple as that (as I explain below), but I do think it’s illustrative: if Sky News were to prevent journalists from using any quote on air or online where they could not verify its factual basis, then nothing would get broadcast. Live interviews would be impossible.

The Sky policy, then, seems to treat retweets as pure distribution, and – crucially – to treat the tweet in isolation. Not as a quote, but as a story, consisting entirely of someone else’s content, which has not been through Sky editorial processes but which is branded or endorsed as Sky journalism.

There’s a lot to admire in the pride in their journalism that this shows – indeed, I would like to see the same rigour applied to the countless quotes that are printed and broadcast by all media without being compared with any evidence.
But do users really see retweets in the same way? And if they do, will they always do so?

Curation vs creation

There’s a second issue here which is more about hard commercial success. Research suggests that successful users of Twitter tend to combine curation with creation. Preventing journalists from retweeting leaves them – and their employers – without a vital tool in their storytelling and distribution.

The tension surrounding retweeting can be illustrated in the difference between two broadcast journalists who use Twitter particularly effectively: Sky’s own Neal Mann, and NPR’s Andy Carvin. Andy retweets habitually as a way of seeking further information. Neal, as he explained in this Q&A with one of my classes, feels that he has a responsibility not to retweet information he cannot verify (from 2 mins in).

Both approaches have their advantages and disadvantages. But both combine curation with creation.

Network effects

A third issue that strikes me is how these policies fit uncomfortably alongside the networked ways that news is experienced now.

The BBC policy, for example, appears at first glance to prevent journalists from diving right into the story as it develops online. Social media editor Chris Hamilton does note, importantly, that they have “a technology that allows our journalists to transmit text simultaneously to our newsroom systems and to their own Twitter accounts”. However, this is coupled with the position that:

“Our first priority remains ensuring that important information reaches BBC colleagues, and thus all our audiences, as quickly as possible – and certainly not after it reaches Twitter.”

This is an interesting line of argument, and there are a number of competing priorities underlying it that I want to understand more clearly.

Firstly, it implies a separation of newsroom systems and Twitter. If newsroom staff are not following their own journalists on Twitter as part of their systems, why not? Sky pioneered the use of Twitter as an internal newswire, and the man responsible, Julian March, is now doing something similar at ITV. The connection between internal systems and Twitter is notable.

Then there’s that focus on “all our audiences” in opposition to those early adopter Twitter types. If news is “breaking news, an exclusive or any kind of urgent update”, being first on Twitter can give you strategic advantages that waiting for the six o’clock – or even typing a report that’s over 140 characters – won’t. For example:

  • Building a buzz (driving people to watch, listen to or search for the fuller story)
  • Establishing authority on Google (which ranks first reports over later ones)
  • Establishing traditional authority by being known as the first to break the story
  • Making it easier for people on the scene to get in touch (if someone’s just experienced a newsworthy event or heard about it from someone who was, how likely is it that they search Twitter to see who else was there? You want to be the journalist they find and contact)

Hamilton’s post goes on to address the timing directly:

“When the technology [to inform the newsroom and generate a tweet at the same time] isn’t available, for whatever reason, we’re asking them to prioritise telling the newsroom before sending a tweet.

“We’re talking a difference of a few seconds. In some situations.

“And we’re talking current guidance, not tablets of stone. This is a landscape that’s moving incredibly quickly, inside and outside newsrooms, and the guidance will evolve as quickly.”

Everything at the same time

There’s another side to this, which is evidence of news organisations taking a strategic decision that, in a world of information overload, they should stop trying to be the first (an increasingly hard task), and instead seek to be more authoritative. To be able to say, confidently, “Every atom we distribute is confirmed”, or “We held back to do this spectacularly as a team”.

There’s value in that, and a lot to be admired. I’m not saying that these policies are inherently wrong. I don’t know the full thinking that went into them, or the subtleties of their implementation (as Rory Cellan-Jones illustrates in his example, which contrasts with what can actually happen). I don’t think there is a right and a wrong way to ‘do Twitter’. Every decision is a trade off, because so many factors are in play. I just wanted to explore some of those factors here.

As soon as you digitise information you remove the physical limitations that necessitated the traditional distinctions between the editorial processes of newsgathering, production, editing and distribution.

A single tweet can be doing all at the same time. Social media policies need to recognise this, and journalists need to be trained to understand the subtleties too.

Location, Location, Location

In this guest post, Damian Radcliffe highlights some recent developments at the intersection of hyper-local and SoLoMo (social, location, mobile). His more detailed slides, looking at 20 developments across the sector during the last two months of 2011, are cross-posted at the bottom of this article.

Facebook’s recent purchase of location-based service Gowalla (Slide 19 below) suggests that the social network still thinks there is a future for this type of “check in” service. Touted as “the next big thing” ever since Foursquare launched at SXSW in 2009, Location-Based Services (LBS) haven’t, to date, quite lived up to the hype.

Certainly there’s plenty of data to suggest that the public don’t quite share the enthusiasm of many Silicon Valley investors. Yet.

Part of their challenge is that not only is awareness of services relatively low – just 30% of respondents in a survey of 37,000 people by Forrester (Slide 27) – but their benefits are also not necessarily clearly understood.

In 2011, a study by youth marketing agency Dubit found about half of UK teenagers are not aware of location-based social networking services such as Foursquare and Facebook Places, with 58% of those who had heard of them saying they “do not see the point” of sharing geographic information.

Safety may not be the primary concern of Dubit’s respondents, but as the “Please Rob Me” website puts it: “…on one end we’re leaving lights on when we’re going on a holiday, and on the other we’re telling everybody on the internet we’re not home… The danger is publicly telling people where you are. This is because it leaves one place you’re definitely not… home.”

Reinforcing this concern are several stories from both the UK and the US of insurers refusing to pay out after a domestic burglary, where victims have announced via social networks that they were away on holiday – or having a beer downtown.

For LBS to go truly mass market – and Forrester (see Slide 27) found that only 5% of mobile users were monthly LBS users – smartphone growth will be a key part of the puzzle. Recent Ofcom data reported that:

  • Ownership nearly doubled in the UK between February 2010 and August 2011 (from 24% to 46%).
  • 46% of UK internet users also used their phones to go online in October 2011.

For now at least, most of our location based activity would seem to be based on previous online behaviours. So, search continues to dominate.

Google in a recent blog post described local search ads as “so hot right now” (Slide 22, Sept-Oct 2011 update). The search giant launched hyper-local search ads a year ago, along with a “News Near You” feature in May 2011. (See: April-May 2011 update, Slide 27.)

Meanwhile, BIA/Kelsey forecast that local search advertising revenues in the US will increase from $5.1 billion in 2010 to $8.2 billion in 2015. Their figures suggest by 2015, 30% of search will be local.

The other notable growth area, location-based mobile advertising, also offers a different slant on the typical “check in” service in which Gowalla et al tend to specialise. Borrell forecasts that this space will grow 66% in the US during 2012 (Slide 22).

The most high profile example of this service in the UK is O2 More, which triggers advertising or deals when a user passes through certain locations – offering a clear financial incentive for sharing your location.

Perhaps this – along with tailored news and information manifest in services such as News Near You, Postcode Gazette and India’s Taazza – is the way forward.

Jiepang, China’s leading location-based social mobile app, offered a recent example of how to do this. Late last year it partnered with Starbucks, offering users a virtual Starbucks badge if they “checked in” at a Starbucks store in Shanghai or the Jiangsu and Zhejiang provinces. When the number of badges issued hit 20,000, all badge holders got a free festive upgrade to a larger cup size. Couple that with the ease of the NFC technology deployed to let users “check in”, and it’s easy to understand the consumer benefit of such a service.

Mine’s a venti gingerbread latte. No cream. Xièxiè.