Tag Archives: BBC

Lyra McKee: why more journalists are going direct to readers

Lyra looked to crowdfunding when writing a book on the murder of the Reverend Robert Bradford

Lyra McKee is an investigative journalist in Northern Ireland. In this post, originally published on The Muckraker, she explains why she feels journalists are turning away from traditional outlets in favour of building their own brands while exploring crowdfunding and micropublishing.

When I talk to older journalists (older being over the age of 30), they ask me the same question: who do you write for?

It’s an awkward question. If it was 2009, I’d tell them I’d been published in (or had pieces broadcast on) the Belfast Telegraph, Private Eye, BBC, Sky News – a dozen or so news outlets that took my work back then.

In 2013 the answer is: none.

I’m part of a generation of “digital native” journalists who sell their work directly to readers, bypassing traditional news outlets like newspapers and broadcasters. Increasingly, reporters are using services like Beacon, Kickstarter and Woopie to raise funds directly from their readers and publish their work.

Why are they doing this?

Ethics in data journalism: accuracy

The following is the first in a series of extracts from a draft book chapter on ethics in data journalism. This is a work in progress, so if you have examples of ethical dilemmas, best practice, or guidance, I’d be happy to include them with an acknowledgement.

Data journalism ethics: accuracy

Probably the most basic ethical consideration in data journalism is the need to be accurate, and to provide proper context to the stories that we tell. That can influence how we analyse the data, how we report on data stories, and how we publish the data itself.

In late 2012, for example, data journalist Nils Mulvad finally got his hands on veterinary prescriptions data that he had spent seven years fighting for. But he decided not to publish the data when he realised that it was full of errors.

Daily Mail users rate it as less impartial than Twitter/Facebook

Daily Mail impartiality compared against BBC, Twitter, Facebook and others

Is the Daily Mail less impartial than social media? That’s the takeaway from one of the charts (shown above) in Ofcom’s latest Communications Market Report.

The report asked website and app users to rate 7 news websites against 5 criteria. The Daily Mail comes out with the lowest proportion of respondents rating it highly for ‘impartial and unbiased’, ‘offers a range of opinions’ and ‘importance’.

This is particularly surprising given that two of the other websites are social networks. 28% rated Facebook and Twitter highly on impartiality, compared to 26% for the Daily Mail.

Schofield’s list, the mob and a very modern moral panic

Someone, somewhere right now will be writing a thesis, dissertation or journal paper about the very modern moral panic playing out across the UK media.

What began as a story about allegations of sexual abuse by TV and radio celebrity Jimmy Savile turned into a story about that story being covered up, then into a story about how the abuse could take place (at the BBC, in the 1970s, but also in hospitals and schools), and then into wider allegations of a paedophile ring involving politicians.


“Genuinely digital content for the first time” – the BBC under Entwistle

For those who missed it, from George Entwistle’s speech to BBC staff this week, a taste of the corporation’s priorities under the new DG:

“It’s the quest for this – genuinely new forms of digital content – that represents the next profound moment of change we need to prepare for if we’re to deserve a new charter.

“As we increasingly make use of a distribution model – the internet – principally characterised by its return path, its capacity for interaction, its hunger for more and more information about the habits and preferences of individual users, then we need to be ready to create content which exploits this new environment – content which shifts the height of our ambition from live output to living output.

“We need to be ready to produce and create genuinely digital content for the first time. And we need to understand better what it will mean to assemble, edit and present such content in a digital setting where social recommendation and other forms of curation will play a much more influential role.”

BBC regional sites to consider including links to hyperlocal blogs

Old BBC North ident – image from MHP The Ident Zone

The BBC’s social media lead for the English Regions, Robin Morley, has invited requests from “reputable hyperlocal websites” that want links to their stories included in the BBC’s regional news websites.

Tinkering With Scraperwiki – The Bottom Line, OpenCorporates Reconciliation and the Google Viz API

Having got to grips with adding a basic sortable table view to a Scraperwiki view using the Google Chart Tools (Exporting and Displaying Scraperwiki Datasets Using the Google Visualisation API), I thought I’d have a look at wiring in an interactive dashboard control.

You can see the result at BBC Bottom Line programme explorer:

The page loads the contents of a source Scraperwiki database (so it is only good for smallish datasets in this version) and pops them into a table. The searchbox is bound to the Synopsis column and allows you to search for terms or phrases within the Synopsis cells, returning rows for which there is a hit.

Here’s the function that I used to set up the table and search control, bind them together and render them:

    // load the Google Visualization API along with the controls package
    google.load('visualization', '1.1', {packages: ['controls']});
    google.setOnLoadCallback(drawTable);

    function drawTable() {
      // %(json)s is filled in server-side by the Scraperwiki view
      // with the scraped dataset
      var json_data = new google.visualization.DataTable(%(json)s, 0.6);

      var json_table = new google.visualization.ChartWrapper({
        'chartType': 'Table',
        'containerId': 'table_div_json',
        'options': {allowHtml: true}
      });
      //i expected this limit on the view to work?
      //json_table.setColumns([0,1,2,3,4,5,6,7])

      // display column 1 (the programme ID) as a link to its BBC programme page
      var formatter = new google.visualization.PatternFormat('<a href="http://www.bbc.co.uk/programmes/{0}">{0}</a>');
      formatter.format(json_data, [1]);

      // display column 7 as a link pointing at the URL held in column 8
      formatter = new google.visualization.PatternFormat('<a href="{1}">{0}</a>');
      formatter.format(json_data, [7, 8]);

      // free-text search control bound to the Synopsis column
      var stringFilter = new google.visualization.ControlWrapper({
        'controlType': 'StringFilter',
        'containerId': 'control1',
        'options': {
          'filterColumnLabel': 'Synopsis',
          'matchType': 'any'
        }
      });

      // wire the filter to the table and render both in the dashboard div
      var dashboard = new google.visualization.Dashboard(document.getElementById('dashboard'));
      dashboard.bind(stringFilter, json_table);
      dashboard.draw(json_data);
    }

The formatter is used to linkify the two URLs. However, I couldn’t get the table to hide the final column (the OpenCorporates URI) in the displayed table. (Doing something wrong, somewhere…) You can find the full code for the Scraperwiki view here.
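Incidentally, a likely culprit is that ChartWrapper has no setColumns method; column limiting is done through a view specification instead. Here is a minimal, untested sketch of that fix (column indices assumed from the code above):

    // untested sketch: pass a view spec to the ChartWrapper so the final
    // (OpenCorporates URI) column is omitted from the rendered table
    json_table.setView({'columns': [0, 1, 2, 3, 4, 5, 6, 7]});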

Now you may (or may not) be wondering where the OpenCorporates ID came from. The data used to populate the table is scraped from the JSON version of the BBC programme pages for the OU co-produced business programme The Bottom Line (Bottom Line scraper). (I’ve been pondering for some time whether there is enough content there to try to build something that might usefully support or help promote OUBS/OU business courses, or link across to free OU business courses on OpenLearn…) Supplementary content items for each programme identify the name of each contributor and the company they represent in a conventional way. Their role is also described in what looks to be a conventionally constructed text string, though I didn’t try to extract this explicitly – yet. (I’m guessing the Reuters OpenCalais API would also make light work of that?)
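For anyone wanting to reproduce the scrape, here’s a minimal sketch of grabbing a single programme page as JSON – the pid is a placeholder and the key names are my assumptions about the /programmes JSON structure, so check them against a live response:

    import urllib
    import simplejson

    # hypothetical episode pid - substitute a real one from the episode guide
    pid = 'b0xxxxxx'

    # BBC programme pages also serve JSON if you append .json to the URL
    url = 'http://www.bbc.co.uk/programmes/%s.json' % pid
    data = simplejson.load(urllib.urlopen(url))

    # 'programme' and 'short_synopsis' are assumed key names - inspect a
    # live response before relying on them
    print data['programme']['short_synopsis']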

Having got access to the company name, I thought it might be interesting to try to get a corporate identifier back for each one using the OpenCorporates (Google Refine) Reconciliation API (Google Refine reconciliation service documentation).

Here’s a fragment from the scraper showing how to lookup a company name using the OpenCorporates reconciliation API and get the data back:

    import urllib
    import simplejson

    # strip non-ASCII characters from the company name (a quick hack) and
    # URL-encode it for the OpenCorporates reconciliation endpoint
    ocrecURL='http://opencorporates.com/reconcile?query='+urllib.quote_plus("".join(i for i in record['company'] if ord(i)<128))
    try:
        recData=simplejson.load(urllib.urlopen(ocrecURL))
    except:
        # fall back to an empty result on any fetch/parse error
        recData={'result':[]}
    print ocrecURL,[recData]
    # accept the top-ranked candidate only if its relevance score is high enough
    if len(recData['result'])>0:
        if recData['result'][0]['score']>=0.7:
            record['ocData']=recData['result'][0]
            record['ocID']=recData['result'][0]['uri']
            record['ocName']=recData['result'][0]['name']

The ocrecURL is constructed from the company name, sanitised in a hacky fashion. If we get any results back, we check the (relevance) score of the first one. (The results seem to be ordered in descending score order; I didn’t check whether this is defined behaviour or just convention.) If it seems relevant, we go with it. From a quick skim of company reconciliations, I noticed at least one false positive – Reed – but on the whole it seemed to work fairly well. (If we look up more details about the company from OpenCorporates, and get back the company URL, for example, we might be able to compare the domain with the domain given in the link on the Bottom Line page. A match would suggest quite strongly that we have got the right company…)
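As a sketch of that domain check – assuming we’ve already retrieved a company website URL from OpenCorporates (I haven’t confirmed which fields, if any, carry one) and the link URL from the Bottom Line page:

    from urlparse import urlparse

    def _host(url):
        # normalise to a bare hostname, dropping any leading 'www.'
        h = urlparse(url).netloc.lower()
        return h[4:] if h.startswith('www.') else h

    def same_domain(url1, url2):
        # a matching hostname is strong evidence we reconciled the right company
        return _host(url1) != '' and _host(url1) == _host(url2)

    print same_domain('http://www.example.com/about-us', 'http://example.com')  # True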

As @stuartbrown suggested in a tweet, a possible next step is to link the name of each guest to a Linked Data identifier for them, for example using DBpedia (although I wonder – is @opencorporates also minting IDs for company directors?). I also need to find some way of pulling out some proper, detailed subject tags for each episode that could be used to populate a drop-down list filter control…
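By way of illustration, a naive first pass at that linking might just build a candidate DBpedia resource URI from the guest’s name and check that it resolves – the URI pattern is real, but this does nothing about disambiguation, so treat it as a placeholder for a proper lookup:

    import urllib
    import urllib2

    def dbpedia_uri(name):
        # DBpedia resource URIs follow Wikipedia article naming,
        # e.g. http://dbpedia.org/resource/Evan_Davis
        candidate = 'http://dbpedia.org/resource/' + urllib.quote(name.replace(' ', '_'))
        try:
            urllib2.urlopen(candidate)
            return candidate
        except urllib2.URLError:
            return None

    print dbpedia_uri('Evan Davis')  # presenter of The Bottom Line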

PS for more Google Dashboard controls, check out the Google interactive playground…

PPS see also: OpenLearn Glossary Search and OpenLearn Learning Outcomes Search