As I introduce the day I will be thinking about two pieces of research in particular: the Tow Center's Caitlin Petre's work on the use of Chartbeat; and Checking, Sharing, Clicking and Linking, a piece of research into news consumption.
What do you do when you’ve been using a hashtag for some time and another one comes along with the potential to be more popular? Do you jump on board – or do you stick with the hashtag you’ve built up? How do you measure the best hashtag to use for your work?
That’s the question that a team of my undergraduate journalism students at Birmingham City University faced last month. And here’s how they addressed it.
First, some background: in February this year the students launched their election coverage under the hashtag #brumvote.
The hashtag worked well – it took in everything from BuzzFeed-style listicles to hustings liveblogs and data-driven analysis of local MPs’ expenses and voting patterns.
Then last month a similar hashtag appeared: the BBC launched their own youth-targeting election project, with the hashtag #brumvotes.
At this point the students faced three choices:
- Keep using the #brumvote hashtag
- Adopt the new #brumvotes hashtag
- Use both
If you want to know how to test what works in social media, the Office for National Statistics has put together one of the best pieces I’ve seen on the topic.
Journalism courses often expect students to spend a large part of their final year or semester producing an independent project. Here, for those about to embark on such a project online, or putting together a proposal for one, I list some common pitfalls to watch out for…
Most research on news consumption annoys me. Most research on news consumption – like Pew’s State of the News Media – relies on surveys of people self-reporting how they consume news. But surveys can only answer the questions that they ask. And as any journalist with a decent bullshit detector should know: the problem is people misremember, people forget, and people lie.
The most interesting news consumption research uses ethnography: this involves watching people and measuring what they actually do – not what they say they do. To this end AP’s 2008 report A New Model for News is still one of the most insightful pieces of research into news consumption you’ll ever read – because it picks out details like the role that email and desktop widgets play, or the reasons why people check the news in the first place (they’re bored at work, for example).
Now, six years on, two Dutch researchers have published a paper summarising various pieces of ethnographic and interview-based consumption research (£) from the last decade – providing some genuine insights into just how varied news ‘consumption’ actually is.
Irene Costera Meijer and Tim Groot Kormelink’s focus is not on what medium people use, or when they use it, but rather on how engaged people are with the news.
To do this they have identified 16 different news consumption practices which they give the following very specific names:
Below is my attempt to summarise those activities, why they’re important for journalists and publishers, and the key issues they raise for the way that we publish.
What do my Facebook friends have in common in terms of the things they have Liked, or in terms of their music or movie preferences? (And does this say anything about me?!) Here’s a recipe for visualising that data…
After discovering via Martin Hawksey that the recent (December, 2011) 2.5 release of Google Refine allows you to import JSON and XML feeds to bootstrap a new project, I wondered whether it would be able to pull in data from the Facebook API if I was logged in to Facebook (Google Refine does run in the browser after all…)
Looking through the Facebook API documentation whilst logged in to Facebook, it’s easy enough to find exemplar links to things like your friends list (https://graph.facebook.com/me/friends?access_token=A_LONG_JUMBLE_OF_LETTERS) or the list of likes someone has made (https://graph.facebook.com/me/likes?access_token=A_LONG_JUMBLE_OF_LETTERS); replacing me with the Facebook ID of one of your friends should pull down a list of their friends, or likes, etc.
(Note that validity of the access token is time limited, so you can’t grab a copy of the access token and hope to use the same one day after day.)
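If you want to poke at those endpoints outside the browser, a minimal Python sketch (here in Python 3) might look like the following. The `graph_url` helper just rebuilds the same URL pattern shown above; the token value is a placeholder you would paste in yourself, and `fetch` will only work with a live, valid token:

```python
import json
import urllib.request

GRAPH = "https://graph.facebook.com"

def graph_url(node, connection, access_token):
    """Build a Graph API URL of the form used above.
    `node` is "me" or a friend's Facebook ID; `connection`
    is e.g. "friends" or "likes"."""
    return "%s/%s/%s?access_token=%s" % (GRAPH, node, connection, access_token)

def fetch(url):
    """Fetch and parse a Graph API JSON response.
    (Needs a currently valid access token in the URL.)"""
    return json.load(urllib.request.urlopen(url))

# e.g. fetch(graph_url("me", "friends", "A_LONG_JUMBLE_OF_LETTERS"))
```

The same helper works for any of the connections mentioned above – swap `"friends"` for `"likes"`, or `"me"` for a friend’s ID.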
Grabbing the link to your friends on Facebook is simply a case of opening a new project, choosing to get the data from a Web Address, and then pasting in the friends list URL:
Click on next, and Google Refine will download the data, which you can then parse as a JSON file, and from which you can identify individual record types:
If you click the highlighted selection, you should see the data that will be used to create your project:
You can now click on Create Project to start working on the data – the first thing I do is tidy up the column names:
We can now work some magic – such as pulling in the Likes our friends have made. To do this, we need to create the URL for each friend’s Likes using their Facebook ID, and then pull the data down. We can use Google Refine to harvest this data for us by creating a new column containing the data pulled in from a URL built around the value of each cell in another column:
The Likes URL has the form https://graph.facebook.com/me/likes?access_token=A_LONG_JUMBLE_OF_LETTERS which we’ll tinker with as follows:
The throttle control tells Refine how long to wait between calls. I set this to 500ms (that is, half a second), so it takes a few minutes to pull in my couple of hundred or so friends (I don’t use Facebook a lot ;-). I’m not sure what rate limit the Facebook API is happy with; if you hit it too fast (i.e. set the throttle time too low), you may find the Facebook API stops returning data to you for a cooling-down period…
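If you wanted to replicate that throttled harvesting outside Refine, the pattern is just a loop with a sleep between calls. A sketch, where `fetch_likes` is a stand-in for whatever function you use to call the API for one friend:

```python
import time

def harvest(friend_ids, fetch_likes, delay=0.5):
    """Call fetch_likes once per friend ID, sleeping `delay`
    seconds between calls - mirroring Refine's 500ms throttle."""
    results = {}
    for fid in friend_ids:
        results[fid] = fetch_likes(fid)
        time.sleep(delay)
    return results

# e.g. harvest(["12345", "67890"], my_fetch_function)
```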
Having imported the data, you should find a new column:
At this point it should, in theory, be possible to generate a new row from each of the records/Likes in the imported data. In practice I found this caused Refine to hang, so instead I exported the data using the default Templating… export format, which produces a JSON-like output…
I then used this Python script to generate a two column data file where each row contained a (new) unique identifier for each friend and the name of one of their likes:
```python
import simplejson, csv

writer = csv.writer(open('fbliketest.csv', 'wb+'), quoting=csv.QUOTE_ALL)

fn = 'my-fb-friends-likes.txt'
data = simplejson.load(open(fn, 'r'))

id = 0
for d in data['rows']:
    id = id + 1
    # 'interests' is the column name containing the Likes data
    interests = simplejson.loads(d['interests'])
    for i in interests['data']:
        print str(id), i['name'], i['category']
        writer.writerow([str(id), i['name'].encode('ascii', 'ignore')])
```
[I think this R script, in answer to a related @mhawksey Stack Overflow question, also does the trick: R: Building a list from matching values in a data.frame]
I could then import this data into Gephi and use it to generate a network diagram of what they commonly liked:
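As an aside, the “commonly liked” projection can also be computed before you get to Gephi: two friends are linked if they share a Like, weighted by how many Likes they share. A rough sketch over the (friend id, like name) pairs produced by the script above – the filenames are just the ones used earlier:

```python
import csv
from itertools import combinations
from collections import defaultdict

def shared_like_edges(rows):
    """Given (friend_id, like_name) pairs, return friend-friend
    edges weighted by the number of Likes the pair has in common."""
    fans = defaultdict(set)            # like name -> set of friend ids
    for fid, like in rows:
        fans[like].add(fid)
    weights = defaultdict(int)
    for like, ids in fans.items():
        for a, b in combinations(sorted(ids), 2):
            weights[(a, b)] += 1
    return weights

# e.g. reading the CSV produced by the script above:
# with open('fbliketest.csv') as f:
#     edges = shared_like_edges(csv.reader(f))
```

The resulting weighted edge list can be written out as a CSV for Gephi’s import dialog.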
Rather than returning Likes, I could equally have pulled back lists of the movies, music or books they like, their own friends lists (permissions settings allowing), etc etc, and then generated friends’ interest maps on that basis.
PS dropping out of Google Refine and into a Python script is a bit clunky, I have to admit. What would be nice would be something like a “create new rows with new column from column” pattern that would let you set up an iterator through the contents of each of the cells in the column you want to generate the new column from, and for each pass of the iterator:

- duplicate the original data row to create a new row;
- add a new column;
- populate the cell with the contents of the current iteration state.

Or something like that…
PPS Related to the PS request, there is a sort of related feature in the 2.5 release of Google Refine that lets you merge data from across rows with a common key into a newly shaped data set: Key/value Columnize. Seeing this, it got me wondering what a fusion of Google Refine and RStudio might be like (or even just R support within Google Refine?)
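For a sense of the shape of that Key/value Columnize transform, here is a rough Python equivalent – loosely mimicking the Refine operation rather than reproducing it exactly – that pivots (record key, field name, value) triples into one record per key:

```python
from collections import defaultdict

def columnize(rows):
    """Pivot (record_id, key, value) triples into one dict per
    record - a loose sketch of Refine's Key/value Columnize."""
    records = defaultdict(dict)
    for record_id, key, value in rows:
        records[record_id][key] = value
    return dict(records)
```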
PPPS this could be interesting – looks like you can test to see if a friendship exists given two Facebook user IDs.