Tag Archives: BBC

Charging for journalism – crowdfunder SA Mathieson’s experience

SA Mathieson Beacon page

If you assumed that the future of journalism would only be free (or at least advertiser-funded), says SA Mathieson, you’re wrong. In a guest post for OJB Mathieson – who recently successfully crowdfunded his own project to report on the Scottish referendum – explains why the web turns out to be capable of charging for access too.

The Columbia Journalism Review recently reported that the Financial Times now has nearly twice as many digital subscribers as print ones, having added 99,000 online customers in 2013.

They pay significant amounts for access: the cheapest online subscription to the FT is £5.19 a week. A free registration process does allow access to 8 articles a month – but try to access a ninth and you have to pay.
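The metering mechanism itself is simple. As an illustrative sketch (in Python, and emphatically not any publisher's actual code), the core is just a per-reader counter checked against a free-article quota:

```python
# Illustrative sketch only: a metered paywall reduces to a per-reader
# counter checked against a free-article quota each month.

class MeteredPaywall:
    def __init__(self, free_limit=8):
        self.free_limit = free_limit   # e.g. the FT's 8 free articles a month
        self.reads = {}                # reader id -> articles read this month

    def can_read(self, reader_id):
        """True while the reader is still under the free quota."""
        return self.reads.get(reader_id, 0) < self.free_limit

    def record_read(self, reader_id):
        self.reads[reader_id] = self.reads.get(reader_id, 0) + 1


paywall = MeteredPaywall(free_limit=8)
for _ in range(8):
    paywall.record_read("reader-1")

print(paywall.can_read("reader-1"))  # False: the ninth article needs a subscription
```

In practice the counter is usually kept client-side (cookies) or against a registration account, which is why the FT requires free registration before metering begins.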

The FT was earlier than most to charge online, but many publishers have followed suit. Only a few – such as The Times – lock up everything, but titles including the Telegraph, New York Times and Economist all use metering, allowing non-paying readers access to a limited number of articles before a subscription is required. They have been joined by increasing numbers of trade and local publications.

This isn’t just an option for established titles: as a freelance journalist I write for Beacon, a start-up used by more than 100 journalists in more than 30 countries to publish their reporting. It has “more than several thousand” subscribers after five months’ operation, co-founder Adrian Sanders told the New York Times recently.


“I don’t do maths”: how j-schools teach statistics to journalists

Image by Simon Cunningham

Teresa Jolley reports from a conference for teaching statistics to journalism students

I am not a great ‘numbers’ person, but even I was surprised by the attitudes that journalism lecturers at the Statistics in Journalism conference reported in their students.

‘I don’t do numbers’ and ‘I hate maths’ were depressingly common expressions, perhaps unsurprisingly. People wanting to study journalism enjoy the use of language and rarely expect that numbers will be vital to the stories they are telling.

So those responsible for journalism education have a tricky task. Rather like wrapping a bitter pill in a sweet coating, lecturers (it was said) need to be adept at finding ingenious ways to teach the practical, relevant use of numbers without ever mentioning the M (maths) or S (statistics) words.

Lyra McKee: why more journalists are going direct to readers

Lyra looked to crowdfunding when writing a book on the murder of the Reverend Robert Bradford

Lyra McKee* is an investigative journalist in Northern Ireland. In this post, originally published on The Muckraker, she explains why she feels journalists are turning away from traditional outlets in favour of building their own brands while exploring crowdfunding and micropublishing.

When I talk to older journalists (older being over the age of 30), they ask me the same question: who do you write for?

It’s an awkward question. If it was 2009, I’d tell them I’d been published in (or had pieces broadcast on) the Belfast Telegraph, Private Eye, BBC, Sky News – a dozen or so news outlets that took my work back then.

In 2013 the answer is: none.

I’m part of a generation of “digital native” journalists who sell their work directly to readers, bypassing traditional news outlets like newspapers and broadcasters. Increasingly, reporters are using services like Beacon, Kickstarter and Woopie to raise funds directly from their readers and publish their work.

Why are they doing this?

Ethics in data journalism: accuracy

The following is the first in a series of extracts from a draft book chapter on ethics in data journalism. This is a work in progress, so if you have examples of ethical dilemmas, best practice, or guidance, I’d be happy to include it with an acknowledgement.

Data journalism ethics: accuracy

Probably the most basic ethical consideration in data journalism is the need to be accurate, and to provide proper context to the stories we tell. That can influence how we analyse data, how we report data stories, and how (or whether) we publish the data itself.

In late 2012, for example, data journalist Nils Mulvad finally got his hands on veterinary prescriptions data that he had spent seven years fighting for. But he decided not to publish the data when he realised that it was full of errors.

Daily Mail users think it’s less unbiased than Twitter/Facebook

Daily Mail impartiality compared against BBC, Twitter, Facebook and others

Is the Daily Mail less impartial than social media? That’s the takeaway from one of the charts (shown above) in Ofcom’s latest Communications Market Report.

The report asked website and app users to rate 7 news websites against 5 criteria. The Daily Mail comes out with the lowest proportion of respondents rating it highly for ‘impartiality and unbiased‘, ‘Offers range of opinions‘, and ‘Importance‘.

This is particularly surprising given that two of the other websites are social networks: 28% rated Facebook and Twitter highly on impartiality, compared with 26% for the Daily Mail.

Schofield’s list, the mob and a very modern moral panic

Someone, somewhere right now will be writing a thesis, dissertation or journal paper about the very modern moral panic playing out across the UK media.

What began as a story about allegations of sexual abuse by TV and radio celebrity Jimmy Savile turned into a story about that story being covered up, into how the abuse could take place (at the BBC too, in the 1970s, but also in hospitals and schools), then into wider allegations of a paedophile ring involving politicians.


“Genuinely digital content for the first time” – the BBC under Entwistle

For those who missed it, from George Entwistle’s speech to BBC staff this week, a taste of the corporation’s priorities under the new DG:

“It’s the quest for this – genuinely new forms of digital content – that represents the next profound moment of change we need to prepare for if we’re to deserve a new charter.

“As we increasingly make use of a distribution model – the internet – principally characterised by its return path, its capacity for interaction, its hunger for more and more information about the habits and preferences of individual users, then we need to be ready to create content which exploits this new environment – content which shifts the height of our ambition from live output to living output.

“We need to be ready to produce and create genuinely digital content for the first time. And we need to understand better what it will mean to assemble, edit and present such content in a digital setting where social recommendation and other forms of curation will play a much more influential role.”

BBC regional sites to consider including links to hyperlocal blogs

Old BBC North ident

Image from MHP The Ident Zone

The BBC’s social media lead for the English Regions Robin Morley has invited requests from “reputable hyperlocal websites” who want links to their stories included in the BBC’s regional news websites.

Tinkering With Scraperwiki – The Bottom Line, OpenCorporates Reconciliation and the Google Viz API

Having got to grips with adding a basic sortable table view to a Scraperwiki view using the Google Chart Tools (Exporting and Displaying Scraperwiki Datasets Using the Google Visualisation API), I thought I’d have a look at wiring in an interactive dashboard control.

You can see the result at BBC Bottom Line programme explorer:

The page loads the contents of a source Scraperwiki database (so this version is only good for smallish datasets) and pops them into a table. The searchbox is bound to the Synopsis column and allows you to search for terms or phrases within the Synopsis cells, returning rows for which there is a hit.

Here’s the function that I used to set up the table and search control, bind them together and render them:

    google.load('visualization', '1.1', {packages: ['controls']});

    google.setOnLoadCallback(drawTable);

    function drawTable() {
      // %(json)s is filled in server-side by the Scraperwiki view template
      var json_data = new google.visualization.DataTable(%(json)s, 0.6);

      var json_table = new google.visualization.ChartWrapper({
        'chartType': 'Table',
        'containerId': 'table_div_json',
        'options': {allowHtml: true}
      });
      //I expected this limit on the view to work?
      //json_table.setColumns([0,1,2,3,4,5,6,7])

      // Turn the programme ID in column 1 into a link to its BBC programme page
      var formatter = new google.visualization.PatternFormat('<a href="http://www.bbc.co.uk/programmes/{0}">{0}</a>');
      formatter.format(json_data, [1]);

      // Display column 7 as a link using the URL held in column 8
      formatter = new google.visualization.PatternFormat('<a href="{1}">{0}</a>');
      formatter.format(json_data, [7, 8]);

      // Text filter bound to the Synopsis column
      var stringFilter = new google.visualization.ControlWrapper({
        'controlType': 'StringFilter',
        'containerId': 'control1',
        'options': {
          'filterColumnLabel': 'Synopsis',
          'matchType': 'any'
        }
      });

      // Bind the filter to the table and draw the dashboard
      var dashboard = new google.visualization.Dashboard(document.getElementById('dashboard'));
      dashboard.bind(stringFilter, json_table);
      dashboard.draw(json_data);
    }

The formatter is used to linkify the two URLs. However, I couldn’t get the table to hide the final column (the OpenCorporates URI) in the displayed table? (Doing something wrong, somewhere…) You can find the full code for the Scraperwiki view here.

Now you may (or may not) be wondering where the OpenCorporates ID came from. The data used to populate the table is scraped from the JSON version of the BBC programme pages for the OU co-produced business programme The Bottom Line (Bottom Line scraper). (I’ve been pondering for some time whether there is enough content there to try to build something that might usefully support or help promote OUBS/OU business courses, or link across to free OU business courses on OpenLearn…) Supplementary content items for each programme identify the name of each contributor and the company they represent in a conventional way. Their role is also described in what looks to be a conventionally constructed text string, though I didn’t try to extract this explicitly – yet. (I’m guessing the Reuters OpenCalais API would also make light work of that?)

Having got access to the company name, I thought it might be interesting to try to get a corporate identifier back for each one using the OpenCorporates (Google Refine) Reconciliation API (Google Refine reconciliation service documentation).

Here’s a fragment from the scraper showing how to lookup a company name using the OpenCorporates reconciliation API and get the data back:

    import urllib
    import simplejson

    # Strip non-ASCII characters from the company name (quick-and-dirty
    # sanitisation) and build the reconciliation query URL
    ocrecURL = 'http://opencorporates.com/reconcile?query=' + urllib.quote_plus("".join(i for i in record['company'] if ord(i) < 128))
    try:
        recData = simplejson.load(urllib.urlopen(ocrecURL))
    except:
        recData = {'result': []}
    print ocrecURL, [recData]
    # Accept the top hit if its relevance score is high enough
    if len(recData['result']) > 0:
        if recData['result'][0]['score'] >= 0.7:
            record['ocData'] = recData['result'][0]
            record['ocID'] = recData['result'][0]['uri']
            record['ocName'] = recData['result'][0]['name']

The ocrecURL is constructed from the company name, sanitised in hacky fashion. If we get any results back, we check the (relevance) score of the first one. (The results seem to be ordered in descending score order; I didn’t check whether this is defined behaviour or just convention.) If it seems relevant, we go with it. From a quick skim of company reconciliations, I noticed at least one false positive – Reed – but on the whole it seemed to work fairly well. (If we look up more details about the company from OpenCorporates, and get back the company URL, for example, we might be able to compare the domain with the domain given in the link on the Bottom Line page. A match would suggest quite strongly that we have the right company…)
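That domain-comparison sanity check could be sketched along these lines (Python 3 here for brevity, with made-up URLs; the scraper itself is Python 2):

```python
# Sketch of the domain-comparison check suggested above: if the company
# website held by OpenCorporates shares a registered domain with the link
# on the Bottom Line page, the reconciliation match is probably right.
from urllib.parse import urlparse

def same_domain(url_a, url_b):
    """Compare hostnames, ignoring scheme, path and a leading 'www.'."""
    def domain(url):
        host = urlparse(url).netloc.lower()
        return host[4:] if host.startswith('www.') else host
    return domain(url_a) == domain(url_b)

print(same_domain('http://www.example.com/about-us', 'https://example.com/'))  # True
```

A stricter version would compare registrable domains (e.g. via a public-suffix list) rather than raw hostnames, to cope with subdomains like corporate.example.com.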

As @stuartbrown suggested in a tweet, a possible next step is to link the name of each guest to a Linked Data identifier for them, for example using DBpedia (although I wonder – is @opencorporates also minting IDs for company directors?). I also need to find some way of pulling out proper, detailed subject tags for each episode that could be used to populate a drop-down list filter control…

PS for more Google Dashboard controls, check out the Google interactive playground…

PPS see also: OpenLearn Glossary Search and OpenLearn LEarning Outcomes Search

Games are just another storytelling device

Whenever people talk about games as a potential journalistic device, there is a reaction against the idea of ‘play’ as a method for communicating ‘serious’ news.

Malcolm Bradbrook’s post on the News:Rewired talk by Newsgames author Bobby Schweizer is an unusually thoughtful exploration of that reaction, where he asks whether the use of games might contribute to the wider tabloidisation of news, the key aspects of which he compares with games as follows:

  1. “Privileging the visual over analysis – I think this is obvious where games are concerned. Actual levels of analysis will be minimal compared to the visual elements of the game
  2. “Using cultural knowledge over analysis – the game will become a shared experience, just as the BBC’s One in 7bn was in October. But how many moved beyond typing in their date of birth to reading the analysis? It drove millions to the BBC site but was it for the acquisition of understanding or something to post on Facebook/Twitter?
  3. “Dehistoricised and fragmented versions of events - as above, how much context can you provide in a limited gaming experience?”

These are all good points, and designers of journalism games should think about them carefully, but I think there’s a danger of seeing games in isolation.

Hooking the user – and creating a market

With the BBC’s One in 7bn interactive, for example, I’d want to know how many users would have read the analysis if there was no interactive at all. Yes, many people will not have gone further than typing in their date of birth – but that doesn’t mean all of them didn’t. 10% of a lot (and that interactive attracted a huge audience) can be more than 100% of a few.

What’s more, the awareness driven by that interactive creates an environment for news discussion that wouldn’t otherwise exist. Even if 90% of users (pick your own proportion, it doesn’t matter) never read the analysis directly, they are still more likely to discuss the story with others, some of whom would then be able to talk about the analysis the others missed.

Without that social context, the ‘serious’ news consumer has less opportunity to discuss what they’ve read.

News is multi-purpose

Then there’s the idea that people read the news for “acquisition of understanding”. I’m not sure how much news consumption is motivated by that, and how much by the need to be able to operate socially (discussing current events) or professionally (reacting to them) or even emotionally (being stimulated by them).

As someone who has tried various techniques to help students “acquire understanding”, I’m aware that the best method is not always to present them with facts, or a story. Sometimes it’s about creating a social environment; sometimes it’s about simulating an experience or putting people in a situation where they are faced with particular problems (all of which are techniques used by games).

Bradbrook ends with a quote from Jeremy Paxman on journalism’s “first duty” as disclosure. But if you can’t get people to listen to that disclosure then it is purposeless (aside from making the journalist feel superior). That is why journalists write stories, and not research documents. It is why they use case studies and not just statistics.

Games are another way of communicating information. Like all the other methods, they have their limitations as well as strengths. We need to be aware of these, and think about them critically, but to throw out the method entirely would be a mistake, I think.

UPDATE: Some very useful tweets from Mary Hamilton, Si Lumb, Chris Unitt and Mark Sorrell drew my attention to some equally useful posts on games and storytelling more generally.

Sorrell’s post Games Good Stories Bad, for example, includes this passage:

“Games can create great stories, don’t get me wrong. But they are largely incapable of telling great stories. Games are about interaction and agency, about choice and self-determination. One of the points made by fancy-pants French sociologist Roger Caillois when defining what a game is, was that the outcome of a game must be uncertain. The result cannot be known in advance. When you try and tell a story in a game, you must break that rule, you must make the outcome of events pre-determined.”
And while reading Lumb’s blog I came across this post with this point:

“A story as an entity, as a thing doesn’t exist until some event, some imagination, some narrative is constructed, relived, shared or described. It must be told. It is “story telling”, after all. Only at the point that you tell someone about that something does it become real, does it become a story. It is always from your perspective, it is always your interpretation, it is a gift you wish to share and that is how it comes to be.

“In a game you can plant narrative as discoverable, you can have cut scenes, you can have environments and situations and mechanics and toys and rules and delight and wonderful play – and in all of this you hide traditional “stories” from visual and textual creators (until read or viewed they don’t exist) and you have the emergence of events that may indeed become stories when you share with another person.”

And finally, if you just want to explore these issues in a handy diagram, there’s this infographic tweeted by Lumb:

A Model of Play - Dubberly Design Office

For more background on games in journalism, see my Delicious bookmarks at http://delicious.com/paulb/gamejournalism