Monthly Archives: November 2011

The strikes and the rise of the liveblog

Liveblogging the strikes: Twitter's #n30 stream

Today sees the UK’s biggest strike in decades as public sector workers protest against pension reforms. Most news organisations are covering the day’s events through liveblogs: that web-native format which has so quickly become the automatic choice for covering rolling news.

To illustrate just how dominant the liveblog has become, take a look at the BBC, Channel 4 News, The Guardian’s ‘Strikesblog‘ or The Telegraph. The Independent’s coverage is hosted on their own live.independent.co.uk subdomain, while Sky have embedded their liveblog in other articles. There’s even a separate Storify liveblog for The Guardian’s Local Government section, and on Radio 5 Live you can find an example of radio reporters liveblogging.

Regional newspapers such as the Chronicle in the north east and the Essex County Standard are liveblogging the local angle, while the Huffington Post liveblogs the political face-off at Prime Minister’s Question Time and the PoliticsHome blog liveblogs both. Leeds Student are liveblogging too. And it’s not just news organisations: campaigning organisation UK Uncut have their own liveblog, as do the public sector workers’ union UNISON and Pensions Justice (on Tumblr).

So dominant so quickly

The format has become so dominant so quickly because it satisfies both editorial and commercial demands: liveblogs are sticky – people stick around on them much longer than they do on traditional articles, in the same way that they leave streams of information from Twitter or Facebook running in the background on their phone, tablet or PC – or indeed, the way that they leave 24-hour television on when there are big events.

It also allows print outlets to compete in the 24-hour environment of rolling news. The updates of the liveblog are equivalent to the ‘time-filling’ of 24-hour television, with this key difference: that updates no longer come from a handful of strategically-placed reporters, but rather (when done well) hundreds of eyewitnesses, stakeholders, experts, campaigners, reporters from other news outlets, and other participants.

The results (when done badly) can be more noise than signal – incoherent, disconnected, fragmented. When done well, however, a good liveblog can draw clarity out of confusion, chase rumours down to facts, and draw multiple threads into something resembling a canvas.

At this early stage liveblogging is still a form finding its feet. More static than broadcast, it does not require the same cycle of repetition; more dynamic than print, it does, however, demand regular summarising.

Most importantly, it takes place within a network. The audience are not sat on their couches watching a single piece of coverage; they may be clicking between a dozen different sources; they may be present at the event itself; they may have friends or family there, sending them updates from their phone. If they are hearing about something important that you’re not addressing, you have a problem.

The list of liveblogs above demonstrates this particularly well, and it doesn’t include the biggest liveblog of all: the #n30 stream on Twitter (and as Facebook users we might also be consuming a liveblog of sorts, made up of our friends’ updates).

More than documenting

In this situation the journalist is needed less to document what is taking place, and more to build on the documentation that is already being done: by witnesses, and by other journalists. That might mean aggregating the most important updates, or providing analysis of what they mean. It might mean enriching content by adding audio, video, maps or photography. Most importantly, it may mean verifying accounts that hold particular significance.

Liveblogging: adding value to the network

These were the lessons that I sought to teach my class last week when I reconstructed an event in class and asked them to liveblog it (more on that in a future blog post). Without any briefing, they made predictable (and planned-for) mistakes: they thought they were there purely to document the event.

But now, more than ever, journalists are not there solely to document.

On a day like today you do not need to be a journalist to take part in the ‘liveblog’ of #n30. If you are passionate about current events, if you are curious about news, you can be out there getting experience in dealing with those events – not just reporting them, but speaking to the people involved, recording images and audio to enrich what is in front of you, creating maps and galleries and Storify threads to aggregate the most illuminating accounts, and seeking reaction and verification for the most challenging ones.

The story is already being told by hundreds of people, some better than others. It’s a chance to create good journalism, and be better at it. I hope every aspiring journalist takes it, and the next chance, and the next one.


How to deal with a PR man who emails like a lawyer

There’s a fascinating case study playing out across some skeptics’ blogs on dealing with legal threats from another country.

The Quackometer and Rhys Morgan have – among others – received emails from Marc Stephens, who claims to “represent” the Burzynski Clinic in Houston, Texas, and threatens them with legal action for libel, among other things.

What is notable is how both bloggers have researched both Stephens and the law, and composed their responses accordingly. From Rhys Morgan:

“I have carried out some internet research, and I have not been able to establish whether or not Mr. Stephens is a lawyer; certainly he does not appear to be a member of the California Bar nor the Texas Bar in the light of my visit to the California Bar Association’s and the State Bar of Texas’s websites.”

From Quackometer:

“This foam-flecked angry rant did not look like the work of a lawyer to me. And indeed it is not. Marc Stephens appears to work for Burzynski in the form of PR, marketing and sponsorship.”

There’s plenty more in each post, including reference to case law and the pre-action defamation protocol, which provide plenty of material if you’re ever in a similar situation – or hosting a classroom discussion on libel law.

via Neurobonkers

Accessing and Visualising Sentencing Data for Local Courts

A recent provisional data release from the Ministry of Justice contains sentencing data from English(?) courts, at the offence level, for the period July 2010-June 2011: “Published for the first time every sentence handed down at each court in the country between July 2010 and June 2011, along with the age and ethnicity of each offender.” Criminal Justice Statistics in England and Wales [data]

In this post, I’ll describe a couple of ways of working with the data to produce some simple graphical summaries of the data using Google Fusion Tables and R…

…but first, a couple of observations:

– the web page subheading is “Quarterly update of statistics on criminal offences dealt with by the criminal justice system in England and Wales.”, but the sidebar includes the link to the 12 month set of sentencing data;
– the URL of the sentencing data is http://www.justice.gov.uk/downloads/publications/statistics-and-data/criminal-justice-stats/recordlevel.zip, which does not contain a time reference, although the data is time bound. What URL will be used if data for the period 7/11-6/12 is released in the same way next year?

The data is presented as a zipped CSV file, 5.4MB in the zipped form, and 134.1MB in the unzipped form.
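
If you want to grab and unpack the file programmatically, something like the following R fragment should do it – a sketch, assuming the URL above stays live and that the archive contains recordlevel.csv as described below:

# Download and unzip the sentencing data (URL as given above)
url <- "http://www.justice.gov.uk/downloads/publications/statistics-and-data/criminal-justice-stats/recordlevel.zip"
download.file(url, destfile="recordlevel.zip", mode="wb")
unzip("recordlevel.zip")  # should yield recordlevel.csv in the working directory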

The unzipped CSV file is too large to upload to a Google Spreadsheet or a Google Fusion Table, which are two of the tools I use for treating large CSV files as a database, so here are a couple of ways of getting in to the data using tools I have to hand…

Unix Command Line Tools

I’m on a Mac, so like Linux users I have ready access to a console and several common Unix command-line tools that are ideally suited to wrangling text files (on Windows, I suspect you need to install something like Cygwin; a search for windows unix utilities should turn up other alternatives too).

In Playing With Large (ish) CSV Files, and Using Them as a Database from the Command Line: EDINA OpenURL Logs and Postcards from a Text Processing Excursion I give a couple of examples of how to get started with some of the Unix utilities, which we can crib from in this case. So for example, after unzipping the recordlevel.csv document I can look at the first 10 rows by opening a console window, changing directory to the directory the file is in, and running the following command:

head recordlevel.csv

Or I can pull out rows that contain a reference to the Isle of Wight using something like this command:

grep -i wight recordlevel.csv > recordsContainingWight.csv

(The -i reads: “ignoring case”; grep is a command that identifies rows containing the search term (wight in this case). The > recordsContainingWight.csv part says “send the result to the file recordsContainingWight.csv”.)

Having extracted rows that contain a reference to the Isle of Wight into a new file, I can upload this smaller file to a Google Spreadsheet, or to a Google Fusion Table such as this one: Isle of Wight Sentencing Fusion table.

Isle of Wight sentencing data

Once in the fusion table, we can start to explore the data. So for example, we can aggregate the data around different values in a given column and then visualise the result (aggregate and filter options are available from the View menu; visualisation types are available from the Visualize menu):

Visualising data in Google Fusion Tables

We can also introduce filters to allow us to explore subsets of the data. For example, here are the offences committed by females aged 35+:

Data exploration in Google Fusion Tables

Looking at data from a single court may be of passing local interest, but the real data journalism is more likely to be focussed around finding mismatches in sentencing behaviour across different courts. (Hmm, unless we can get data on who passed sentences at a local level, and look to see if there are differences there?) That said, at a local level we could try to look for outliers, maybe? As far as making comparisons go, we do have Court and Force columns, so it would be possible to compare Force against Force and, within a Force area, Court with Court.
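
As a rough sketch of that sort of comparison – jumping ahead to the R approach described in the next section, and assuming force and court columns named as below (check the CSV header for the actual names; "hampshire" is just an illustrative value) – we might tabulate offence types by court within a single force area:

# Hypothetical sketch: compare offence-type counts across courts in one
# force area; the column names and "hampshire" are assumptions, not checked
recordlevel <- read.csv("~/data/recordlevel.csv")
oneForce <- subset(recordlevel, grepl("hampshire", force, ignore.case=TRUE))
courtOffences <- table(oneForce$Offence_type, oneForce$court)
barplot(courtOffences, beside=TRUE, las=3, cex.names=0.5,
        main="Offence types by court in one force area",
        legend=rownames(courtOffences))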

R/RStudio

If you really want to start working the data, then R may be the way to go… I use RStudio to work with R, so it’s a simple matter to just import the whole of the recordlevel.csv dataset.

Once the data is loaded in, I can use a regular expression to pull out the subset of the data corresponding once again to sentencing on the Isle of Wight (I apply the regular expression to the contents of the court column):

recordlevel <- read.csv("~/data/recordlevel.csv")  # load the full dataset
# keep only rows whose court column mentions "wight", ignoring case
iw=subset(recordlevel,grepl("wight",court,ignore.case=TRUE))

We can then start to produce simple statistical charts based on the data. For example, a bar plot of the sentencing numbers by age group:

age=table(iw$AGE)  # count sentences in each age band
barplot(age, main="IW: Sentencing by Age", xlab="Age Range")

R - bar plot

We can also start to look at combinations of factors. For example, how do offence types vary with age?

ageOffence=table(iw$AGE, iw$Offence_type)  # cross-tabulate age band by offence type
barplot(ageOffence,beside=T,las=3,cex.names=0.5,main="Isle of Wight Sentences", xlab=NULL, legend = rownames(ageOffence))

R barplot - offences on IW

If we remove the beside=T argument, we can produce a stacked bar chart:

barplot(ageOffence,las=3,cex.names=0.5,main="Isle of Wight Sentences", xlab=NULL, legend = rownames(ageOffence))

R - stacked bar chart

If we import the ggplot2 library, we have even more flexibility over the presentation of the graph, as well as what we can do with this sort of chart type. So for example, here’s a simple plot of the number of offences per offence type:

require(ggplot2)
# You may need to install ggplot2 first if it isn't already available:
# install.packages("ggplot2")
ggplot(iw, aes(factor(Offence_type)))+ geom_bar() + opts(axis.text.x=theme_text(angle=-90))+xlab('Offence Type')
# (opts()/theme_text() were current at the time of writing; newer ggplot2
# versions use theme(axis.text.x=element_text(angle=-90)) instead)

GGPlot2 in R

Alternatively, we can break down offence types by age:

ggplot(iw, aes(AGE))+ geom_bar() +facet_wrap(~Offence_type)

ggplot facet barplot

We can bring a bit of colour into a stacked plot that also displays the gender split on each offence:

ggplot(iw, aes(AGE,fill=sex))+geom_bar() +facet_wrap(~Offence_type)

ggplot with stacked factor

One thing I’m not sure how to do is rip the data apart in a ggplot context so that we can display percentage breakdowns – so that we could compare the percentage breakdown by offence type of sentences awarded to males vs. females, for example. If you do know how to do that, please post a comment below 😉
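
(A plausible route, sketched here rather than tested against this data: geom_bar(position="fill") rescales each bar to a total of 1, so each offence-type bar shows the male/female split as a proportion rather than a count:)

# Sketch: proportional stacked bars - each offence-type bar is scaled to 1,
# so the fill shows the within-group gender split as a proportion
ggplot(iw, aes(factor(Offence_type), fill=sex)) +
  geom_bar(position="fill") +
  xlab('Offence Type') + ylab('Proportion')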

PS Here’s an easy way of getting started with ggplot… use the online hosted version at http://www.yeroon.net/ggplot2/ using this data set: wightCrimRecords.csv; download the file to your computer then upload it as shown below:

yeroon.net/ggplot2

PPS I got a little way towards identifying percentage breakdowns using a crib from here. The following command generates a (multidimensional) array for the response variable (Offence_type) grouped by the grouping variable (sex):

iwp=tapply(iw$Offence_type, iw$sex, function(x){prop.table(table(x))})

I don’t know how to generate a single data frame from this, but we can create separate ones for each sex as follows:

iwpMale=data.frame(iwp['Male'])
iwpFemale=data.frame(iwp['Female'])

We can then plot these percentages using constructions of the form:

ggplot(iwpMale)+geom_bar(aes(x=Male.x, y=Male.Freq), stat="identity")

(Note that the y values are supplied directly here, so geom_bar needs stat="identity".) What I haven’t worked out how to do is elegantly map from the multidimensional array to a single data.frame. If you know how, please add a comment below… (I also posted a question on Cross Validated, the stats bit of Stack Exchange…)
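
(One way round the reshaping problem, sketched under the same column-name assumptions as above: build the two-way proportion table directly and let as.data.frame() flatten it into a single data frame:)

# Sketch: a single data frame of within-sex offence-type proportions;
# prop.table(..., margin=1) makes each row (each sex) sum to 1
iwprop <- as.data.frame(prop.table(table(iw$sex, iw$Offence_type), margin=1))
names(iwprop) <- c("sex", "Offence_type", "prop")
ggplot(iwprop) + geom_bar(aes(x=Offence_type, y=prop, fill=sex),
                          stat="identity", position="dodge")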

Maps “in the public interest” now exempt from Google Maps API charge

If you thought you couldn’t use the Google Maps API any more as a journalist, this update to the Google Geo Developers Blog should make you reconsider. From Nieman Journalism Lab:

“Certain web apps will be given blanket exemptions from charging. Here’s Google: “Maps API applications developed by non-profit organisations, applications deemed by Google to be in the public interest, and applications based in countries where we do not support Google Checkout transactions or offer Maps API Premier are exempt from these usage limits.” So nonprofit news orgs look to be in the clear, and Google could declare other news org maps apps to be “in the public interest” and free to run. (It also notes that nonprofits could be eligible for a free Maps API Premier license, which comes with extra goodies around advertising and more.)”

The best piece of Bad Journalism debunking I’ve ever seen

I’ve just stumbled across Neurobonkers’s blog post The worst piece of drugs reporting I have ever read and wanted to share it here.

The post uses an animated Prezi presentation to take the reader through 10 errors in an article in the Hull Daily Mail on the dangers of a “cheap new drug” (notably, the article is no longer online). I won’t add spoilers by revealing what those errors are – but this is a particularly engaging way to teach journalism students not only about accuracy in reporting on stories such as these, but why it’s important.

Enjoy the presentation.

Finding Common Terms around a Twitter Hashtag

@aendrew sent me a link to a StackExchange question he’s just raised, in a tweet asking: “Anyone know how to find what terms surround a Twitter trend/hashtag?”

I’ve dabbled in this area before, though not addressing this question exactly, using Yahoo Pipes to find what hashtags are being used around a particular search term (Searching for Twitter Hashtags and Finding Hashtag Communities) or by members of a particular list (What’s Happening Now: Hashtags on Twitter Lists; that post also links to a pipe that identifies names of people tweeting around a particular search term.).

So what would we need a pipe to do that finds terms surrounding a Twitter hashtag?

Firstly, we need to search on the tag to pull back a list of tweets containing that tag. Then we need to split the tweets into atomic elements (i.e. separate words). At this point, it might be useful to count how many times each one occurs, and display the most popular. We might also need to generate a “stop list” containing common words we aren’t really interested in (for example, the or and).

So here’s a quick hack at a pipe that does just that (Popular words round a hashtag).

For a start, I’m going to construct a string tokeniser that just searches for 100 tweets containing a particular search term, and then splits each tweet up into separate words, where words are things that are separated by white space. The pipe output is just a list of all the words from all the tweets that the search returned:

Twitter string tokeniser

You might notice the pipe also allows us to choose which page of results we want…

We can now use the helper pipe in another pipe. Firstly, let’s grab the words from a search that returns 200 tweets on the same search term. The helper pipe is called twice, once for the first page of results, once for the second page of results. The wordlists from each search query are then merged by the union block. The Rename block relabels the .content attribute as the .title attribute of each feed item.

Grab 200 tweets and check we have set the title element

The next thing we’re going to do is identify and count the unique words in the combined wordlist using the Unique block, and then sort the list according to the number of times each word occurs.

Preliminary parsing of a wordlist

The above pipe fragment also filters the wordlist so that only words containing alphabetic characters are allowed through, as well as words with four or more characters. (The regular expression .{4,} reads: allow any string of four or more ({4,}) characters of any type (.). An expression .{5,7} would say – allow words through with length 5 to 7 characters.)

I’ve also added a short routine that implements a stop list. The regular expression pattern (?i)\b(word1|word2|word3)\b says: ignoring case ((?i)), try to match any of the words word1, word2, word3. (\b denotes a word boundary.) Note that in the filter below, some of the words in my stop list are redundant – the ones with three or fewer characters. (Remember, we have already filtered the word list to show only words of length four or more characters.)

Stop list

I also added a user input that allows additional stop terms to be added (they should be pipe (|) separated, with no spaces between them). You can find the pipe here.
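
If you’d rather prototype the same logic outside Pipes, here’s a rough R sketch of the steps described above – tokenise on whitespace, keep alphabetic words of four or more characters, drop stop-listed terms, then count. (Fetching the tweets themselves is left out; the tweets vector below is a stand-in for whatever your search returns.)

# Rough sketch of the pipe's word-counting logic; 'tweets' stands in for
# the text of tweets returned by a hashtag search
tweets <- c("Example tweet about the #n30 strikes",
            "Another tweet, with more words about the strikes")
words <- tolower(unlist(strsplit(tweets, "\\s+")))  # tokenise on whitespace
words <- words[grepl("^[a-z]+$", words)]            # alphabetic words only
words <- words[nchar(words) >= 4]                   # four or more characters
stoplist <- c("with", "about", "this", "that")      # add your own stop terms
words <- words[!(words %in% stoplist)]
sort(table(words), decreasing=TRUE)                 # most frequent words first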

Strategies vs tools redux

Yesterday I chaired a panel on ‘UGC and Social Media’ at Birmingham’s Hello Culture event. Determined that it should not descend into the all-too-common obsession with tools that often characterises such discussions, I framed it from the start with the questions “Why should we care? Why should users care?”

The panellists were grateful – and the tactic seemed to work. We talked about the tension between creating content and building relationships; between the urge to ‘get people on our platform’ and going to their platforms instead. We discussed how the experience of designing physical spaces might inform how we approach designing digital ones; and about revisiting strategic priorities as a whole instead of simply trying to ‘find time’ to ‘do the online stuff’.

In other words we talked about people rather than technology, and strategies rather than tools.

So this morning it was good to be brought back down to earth and reminded just how embedded the technology-driven mindset is by Richard Millington.

Richard writes about a ‘State of Branded Online Communities’ report that uses Bravo TV as an example of a “successful” online community. The problem is that by any sensible measure, it isn’t. And I think Richard’s quotes on just how flawed the example is are worth reproducing here at length:

“If simply posting a standardized thread each week and leaving people to their own endeavours is seen as good community management practice, what exactly is bad community management? This is community management by autopilot.

“… You judge a community’s success by it’s stage in the life cycle, the number of interactions it generates, it’s members sense of community and the ROI it offers the organization. ComBlu defines success by what features the platform offers. By that assessment, nearly all of the most successful communities would be considered failures. [They struggle to get more than 10 members participating in a community at any one time.]

“ComBlu credits Bravo with an array of successes which have no impact on the community’s success. Only one suggestion is offered:

“[..] On our Bravo wish list? A better gamification or reputation management system.”

“There are a variety of things the community needs, a better gamification system certainly isn’t one of them.

“How about hiring a community manager to take responsibility for stimulating discussions […]?

“… Content sites branded as communities are still content sites.”

Ah, gamification: I’ll tip that to be next year’s QR code/Facebook page. How about an iPhone app? Everyone else is doing it so why shouldn’t we? Remember when everyone had to have a space in Second Life?

It’s a point I’ve made before in Technology is not a strategy: it’s a tool (and its follow-up), and one which is explored at length in my Online Journalism book. Too often in an organisation or in a student project someone decides that they must launch a Facebook page or ‘be on Twitter’.

I recently compared this to someone approaching a TV producer, saying they wanted to make a documentary, and explaining that their strategy would be to “use a camera”.

No producer would accept that, and we need an equally critical attitude to the use of new technology. Otherwise we’re just hammers walking around seeing nails.