Tag Archives: scraping

Ethics in data journalism: mass data gathering – scraping, FOI and deception


Automated mapping of data – ChicagoCrime.org – image from Source

This is the third in a series of extracts from a draft book chapter on ethics in data journalism. The first looked at how the ethics of accuracy play out in data journalism projects, and the second at culture clashes, privacy, user data and collaboration. This is a work in progress, so if you have examples of ethical dilemmas, best practice, or guidance, I’d be happy to include them with an acknowledgement.

Mass data gathering – scraping, FOI, deception and harm

The data journalism practice of ‘scraping’ – getting a computer to capture information from online sources – raises some ethical issues around deception and minimisation of harm. Some scrapers, for example, ‘pretend’ to be a particular web browser, or pace their scraping activity more slowly to avoid detection. But the deception is practised on another computer, not a human – so is it deception at all? And if the ‘victim’ is a computer, is there harm? Continue reading
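
To make those choices concrete, here is a minimal Python sketch – my own illustration, not an extract from the chapter – of the two behaviours mentioned above: what a scraper sends as its User-Agent header, and how it paces its requests. The URLs and contact details are placeholders.

    import time
    import requests

    # Placeholder pages -- not a real source
    PAGES = [f"https://example.com/register?page={n}" for n in range(1, 4)]

    # A scraper can identify itself honestly in the User-Agent header, or
    # 'pretend' to be an ordinary browser; both strings are shown here, and
    # the choice between them is the ethical question raised above.
    HONEST_UA = "DataJournalismScraper/0.1 (contact: reporter@example.com)"
    BROWSER_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"

    for url in PAGES:
        response = requests.get(url, headers={"User-Agent": HONEST_UA})
        print(url, response.status_code)
        # Pacing requests spreads the load on the target server -- one way
        # of minimising any harm done to the 'victim' computer.
        time.sleep(5)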

How to think like a computer: 5 tips for a data journalism workflow part 3

This is the final part of a series of blog posts. The first explains how using feeds and social bookmarking can make for a quicker data journalism workflow. The second looks at how to anticipate and prevent problems; and how collaboration can improve data work.

Workflow tip 5. Think like a computer

The final workflow tip is all about efficiency. Computers deal with processes in a logical way, and good programming is often about completing processes in the simplest way possible.

If you have any tasks that are repetitive, break them down and work out what patterns might allow you to do them more quickly – or for a computer to do them. Continue reading
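
As a tiny illustration – my own invented example, not one from the series – here is what that looks like in Python: a step you might otherwise repeat hundreds of times by hand, written once and applied to every entry.

    # Invented entries: the same small task repeated over many lines --
    # exactly the kind of pattern a computer can take over.
    entries = [
        "REF-0001: Planning application approved",
        "REF-0002: Planning application refused",
        "REF-0003: Planning application withdrawn",
    ]

    for entry in entries:
        # The repetitive step, written once: split off the reference code.
        code, decision = entry.split(": ", 1)
        print(code, "|", decision)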

It’s finished! Scraping for Journalists now complete (for now)

Scraping for Journalists book

Last night I published the final chapter of my first ebook: Scraping for Journalists. Since I started publishing it in July, over 40 ‘versions’ of the book have been uploaded to Leanpub, a platform that allows users to receive updates as a book develops – but more importantly, to input into its development.

I’ve been amazed at the consistent interest in the book – last week it passed 500 readers: 400 more than I ever expected to download it. Their comments have directly shaped, and in some cases been reproduced in, the book – something I expect to continue as I keep updating it.

As a result I’ve become a huge fan of this form of ebook publishing, and plan to do a lot more with it (some hints here and here). The format combines the best qualities of traditional book publishing with those of blogging and social media (there’s a Facebook page too).

Meanwhile, there’s still more to do with Scraping for Journalists: publishing to other platforms and in other languages for starters… If you’re interested in translating the book into another language, please get in touch.


2 how-tos: researching people and mapping planning applications

Mapping planning applications

Sid Ryan’s planning applications map

Sid Ryan wanted to see if planning applications near planning committee members were more or less likely to be accepted. In two guest posts on Help Me Investigate he shows how to research people online (in this case the councillors), and how to map planning applications to identify potential relationships.

The posts take in a range of techniques including:

  • Scraping using Scraperwiki and the Google Drive spreadsheet function importXML (a rough Python sketch of this step follows the list)
  • Mapping in Google Fusion Tables
  • Registers of interests
  • Using advanced search techniques
  • Using Land Registry enquiries
  • Using Companies House and Duedil
  • Other ways to find information on individuals, such as Hansard, LinkedIn, 192.com, Lexis Nexis, whois and FriendsReunited
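
To give a rough idea of the scraping step in the first bullet, here is a short Python sketch written against an assumed page structure – the council URL and table layout are invented, and the guest posts themselves use Scraperwiki and importXML rather than this exact code.

    import requests
    import lxml.html

    # Placeholder URL -- standing in for a council planning register
    URL = "https://example.com/planning/decisions"

    page = lxml.html.fromstring(requests.get(URL).content)

    # Assumes one table row per application, with the reference, address
    # and decision in the first three cells.
    for row in page.xpath("//table//tr[td]"):
        cells = [cell.text_content().strip() for cell in row.xpath("td")]
        if len(cells) >= 3:
            reference, address, decision = cells[:3]
            print(reference, address, decision)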

If you find it useful, please let me know – and if you can add anything… please do.

7 laws journalists now need to know – from database rights to hate speech

Law books image by Mr T in DC


When you start publishing online, you move from the well-thumbed areas of defamation and libel, contempt of court, privilege and privacy to a whole new world of laws and licences.

This is a place where laws you never knew existed can be applied to your work – while others can prove surprisingly useful. Here are the key ones:

Continue reading

How-to: Scraping ugly HTML using ‘regular expressions’ in an OutWit Hub scraper


Regular Expressions cartoon from xkcd

The following is the first part of an extract from Chapter 10 of Scraping for Journalists. It introduces a particularly useful tool in scraping – regex, short for ‘regular expressions’ – which lets you look for patterns such as specific words, prefixes or particular types of code. I hope you find it useful.

This tutorial will show you how to scrape a particularly badly formatted piece of data. In this case, the UK Labour Party’s publication of meetings and dinners with donors and trade union general secretaries.
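
As a taster of what regex can do – using my own invented lines rather than the actual publication, and plain Python rather than OutWit Hub – here is a single pattern pulling dates out of inconsistently formatted text.

    import re

    # Invented lines in the spirit of badly formatted meetings data
    lines = [
        "Dinner with J Smith (donor), 12 March 2012",
        "Meeting: Unite general secretary, 3 April 2012",
        "Reception - A N Other, 15/05/2012",
    ]

    # One pattern matching either a written-out date or a numeric one
    date_pattern = re.compile(r"\d{1,2} \w+ \d{4}|\d{2}/\d{2}/\d{4}")

    for line in lines:
        match = date_pattern.search(line)
        if match:
            print(match.group(0), "<-", line)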

To do this, you’ll need to install the free scraping tool OutWit Hub. Regex can be used in other tools and programming languages as well, but OutWit Hub is a good way to learn it without knowing any other programming. Continue reading

Two reasons why every journalist should know about scraping (cross-posted)

This was originally published on Journalism.co.uk – cross-posted here for convenience.

Journalists rely on two sources of competitive advantage: being able to work faster than others, and being able to get more information than others. For both of these reasons, I love scraping: it is both a great time-saver, and a great source of stories no one else has. Continue reading

My first ebook: Scraping For Journalists (and programming too)

Next week I will start publishing my first ebook: Scraping for Journalists.

Although I’ve written about scraping before on the blog, this book is designed to take the reader step by step through a series of tasks (a chapter each) which build a gradual understanding of the principles and techniques for tackling scraping problems. Everything has a direct application for journalism, and each principle is related to its application in scraping for newsgathering.

For example: the first scraper requires no programming knowledge, and can be working within 5 minutes of reading.

I’m using Leanpub for this ebook because it allows you to publish in instalments and update the book for users – which suits a book like this perfectly, as I’ll be publishing chapters week by week, Codecademy-style.

If you want to be alerted when the book is ready, register on the book’s Leanpub page.

Create a council ward map with Scraperwiki

Mapping council wards

With local elections looming, this is a great 20-30 minute project for any journalist wanting to create an interactive Google map of council ward boundaries.

For this you will need: