Tag Archives: cross-post

Do hyperlocal and student websites fall foul of the new press regulator and libel laws?


The DCMS published this image to clarify the definition of “a relevant publisher” under proposals published in early 2013.

Nick Booth left a Press Recognition Panel consultation under the impression that non-profit hyperlocals were going to be exposed by the new regulation system. Then legal experts suggested he’d got it wrong. So which is it? In a special post cross-published from Podnosh, Nick tries to tease out a complex law and asks: ‘when someone sues now, who pays?’.

Last week I spent a couple of hours at a consultation in Birmingham run by the Press Recognition Panel, the body set up to oversee the creation of (a?) new press regulator(s) following the Leveson Inquiry and the Royal Charter. (I know this has already got a bit “what?”, but stick with me.)

I was there because I’m interested in what it means for hyperlocal websites (which we have helped people set up over a number of years), especially the implications for those run for the love of their community rather than for money – sites like B31voices or WV11. Talk About Local has already questioned whether hyperlocals fall within Leveson, and I wanted to be clear one way or the other…

So this is how my thinking has evolved… if you find an asterisk next to an assertion, it means I’m not 100% sure it’s right. Continue reading

FAQ: a review of 2012 with Data Driven Journalism.net

The Data Driven Journalism website asked me a few questions as part of their end-of-2012 roundup. You can find the article there, but for the sake of archiving, my responses are copied below (without the helpful pictures they added):

What do you do?

I’m a data journalism trainer and lecturer. I run the MA in Online Journalism at Birmingham City University and am a visiting professor in online journalism at City University London. I’m also the author of Scraping for Journalists.

What was your biggest data driven achievement this year?

An investigation into the allocation of Olympic torchbearer places. The investigation came about as a result of scraping details on torchbearers from the official website. But it was also a great example of collaboration between non-journalists and journalists, and of a number of techniques outside core data journalism.

The investigation led to questions in Parliament and international media coverage. In the final week of the Olympic torch relay we published a short ebook about the affair, with all proceeds going to the Brittle Bone Society.

What was your favourite data journalism project this year and why?

I really liked Landportal.info, which is attempting to map land ownership – it highlights a global trend of companies buying up land in Africa that would be easy for journalists to overlook. The New York Times’s multimedia treatment of performance data in three Olympic events across more than a century was really well done. And I’m always looking at how data journalism can be used in softer news, where Anna Powell-Smith’s What Size Am I? is a great example of fashion/consumer data journalism.

For sheer significance I can’t avoid mentioning Nate Silver’s work on the US election – that was a watershed for data journalism and an embarrassment for many political pundits.

More broadly – what excites you in this field at the moment? Any interesting developments that you’d like to mention?

There’s a lot of consolidation at the moment, so fewer spectacular developments – but I am excited at how data journalism is being taken up by a wider range of companies. This year I’ve spent a lot more time training staff at consumer magazine publishers, for example.

I’m also excited about some of the new journalism startups based on public data like Rafat Ali’s Skift. In terms of tools, it’s great to see network analysis added to Fusion Tables, and the Knight Digital Media Center’s freeDive makes it very easy indeed to create a public database from a Google Doc.

What about disappointments?

I am constantly disappointed by publishers who say they don’t have the resources to do data journalism. That shows a real lack of imagination and of understanding of what data journalism really is. It doesn’t have to be a spectacular interactive data visualisation – it can simply be about getting to better stories more quickly, more accurately and in more depth through a few basic techniques.

Any predictions about what the future holds for data journalism in 2013?

I’ve just been training someone from Chile so I’m hoping to see more data journalism there!

Anything else you’d like to share with everyone?

Happy Christmas!

Q&A: 5 questions about the pros and cons of data journalism (Cross-post)

The following Q&A is cross-posted from a post on the Media And Digital Enterprise project of the School of Journalism, Media and Communication at the University of Central Lancashire.

Why do journalists need to learn data skills?

For two key reasons: firstly because information is more widely available, and data skills are one of the few remaining ways for journalists to establish their value in that environment.

And secondly, because data is becoming a very important source of both news and the business case for media organisations. Continue reading

How to teach a journalist programming

Cross-posted from Data Driven Journalism.

Earlier this year I set out to tackle a problem that was bothering me: journalists who had started to learn programming were giving up.

They were hitting a wall. In trying to learn the more advanced programming techniques – particularly those involved in scraping – they seemed to fall into one of two camps:

  • People who had learned programming but were taking far too long to apply it, and so were losing momentum – the generalists
  • People who had learned how to write one scraper but could not extend it to others, and so became frustrated – the specialists

Continue reading

Two reasons why every journalist should know about scraping (cross-posted)

This was originally published on Journalism.co.uk – cross-posted here for convenience.

Journalists rely on two sources of competitive advantage: being able to work faster than others, and being able to get more information than others. For both of these reasons, I love scraping: it is both a great time-saver and a great source of stories no one else has. Continue reading
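
By way of illustration – and not from the original post – a minimal scraper in Python might look something like the sketch below, using the requests and BeautifulSoup libraries; the URL and table layout here are placeholders, not a real dataset.

    # A minimal illustrative scraper: fetch a page and pull every row
    # of the first HTML table into a CSV file.
    # (The URL and column layout are placeholders, not a real source.)
    import csv

    import requests
    from bs4 import BeautifulSoup

    URL = "https://example.com/some-public-table"  # placeholder URL

    response = requests.get(URL, timeout=30)
    response.raise_for_status()

    soup = BeautifulSoup(response.text, "html.parser")
    table = soup.find("table")  # assumes the page contains at least one table

    with open("scraped_data.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        for row in table.find_all("tr"):
            cells = [cell.get_text(strip=True) for cell in row.find_all(["th", "td"])]
            if cells:
                writer.writerow(cells)

Run on a schedule, even a script this short saves hours of copying and pasting – and the resulting spreadsheet can itself be the starting point for a story.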

Model for the 21st Century Newsroom Redux: part 1 on BBC College of Journalism blog

The BBC College of Journalism asked me to revisit my Model for the 21st Century Newsroom four years on. You can now read the first part of the results on their blog – with further substantial parts to follow next week. Thoughts welcome.