Tag Archives: analytics

Thinking of doing your student project online? Here are 5 mistakes to avoid

Journalism courses often expect students to spend a large part of their final year or semester producing an independent project. Here, for those about to embark on such a project online, or putting together a proposal for one, I list some common pitfalls to watch out for…

16 reasons why this research will change how you look at news consumption

Most research on news consumption annoys me. Most of it – like Pew’s State of the News Media – relies on surveys of people self-reporting how they consume news. But surveys can only answer the questions that they ask. And as any journalist with a decent bullshit detector should know, the problem is that people misremember, people forget, and people lie.

The most interesting news consumption research uses ethnography: watching people and measuring what they actually do – not what they say they do. On that score, AP’s 2008 report A New Model for News is still one of the most insightful pieces of research into news consumption you’ll ever read – because it picks out details like the role that email and desktop widgets play, or the reasons why people check the news in the first place (they’re bored at work, for example).

Now, six years on, two Dutch researchers have published a paper summarising various pieces of ethnographic and interview-based consumption research (£) from the last decade – providing some genuine insights into just how varied news ‘consumption’ actually is.

Irene Costera Meijer and Tim Groot Kormelink’s focus is not on what medium people use, or when they use it, but rather on how engaged people are with the news.

To do this they have identified 16 different news consumption practices which they give the following very specific names:

  1. Reading
  2. Watching
  3. Viewing
  4. Listening
  5. Checking
  6. Snacking
  7. Scanning
  8. Monitoring
  9. Searching
  10. Clicking
  11. Linking
  12. Sharing
  13. Liking
  14. Recommending
  15. Commenting
  16. Voting

Below is my attempt to summarise those activities, why they’re important for journalists and publishers, and the key issues they raise for the way that we publish.

Social Interest Positioning – Visualising Facebook Friends’ Likes With Data Grabbed Using Google Refine

What do my Facebook friends have in common in terms of the things they have Liked, or in terms of their music or movie preferences? (And does this say anything about me?!) Here’s a recipe for visualising that data…

After discovering via Martin Hawksey that the recent (December 2011) 2.5 release of Google Refine allows you to import JSON and XML feeds to bootstrap a new project, I wondered whether it would be able to pull in data from the Facebook API if I was logged in to Facebook (Google Refine does run in the browser, after all…).

Looking through the Facebook API documentation whilst logged in to Facebook, it’s easy enough to find exemplar links to things like your friends list (https://graph.facebook.com/me/friends?access_token=A_LONG_JUMBLE_OF_LETTERS) or the list of likes someone has made (https://graph.facebook.com/me/likes?access_token=A_LONG_JUMBLE_OF_LETTERS); replacing me with the Facebook ID of one of your friends should pull down a list of their friends, or likes, etc.

(Note that validity of the access token is time limited, so you can’t grab a copy of the access token and hope to use the same one day after day.)
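
Under the hood these are just HTTP GETs returning JSON, so if you want to sanity-check a feed before pointing Refine at it, the same request is easy enough to make from Python. A quick sketch – assuming the requests library, with the access token as a placeholder:

import requests

ACCESS_TOKEN = 'A_LONG_JUMBLE_OF_LETTERS'  # placeholder - remember, tokens expire

# Fetch the logged-in user's friends list from the Graph API
resp = requests.get('https://graph.facebook.com/me/friends',
                    params={'access_token': ACCESS_TOKEN})
for friend in resp.json().get('data', []):
    print(friend['id'], friend['name'])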

Grabbing the link to your friends on Facebook is simply a case of opening a new project, choosing to get the data from a Web Address, and then pasting in the friends list URL:

[Image: Google Refine – import Facebook friends list]

Click on next, and Google Refine will download the data, which you can then parse as a JSON file, and from which you can identify individual record types:

[Image: Google Refine – import Facebook friends]

If you click the highlighted selection, you should see the data that will be used to create your project:

[Image: Google Refine – click to view the data]

You can now click on Create Project to start working on the data – the first thing I do is tidy up the column names:

[Image: Google Refine – rename columns]

We can now work some magic – such as pulling in the Likes our friends have made. To do this, we need to create the URL for each friend’s Likes using their Facebook ID, and then pull the data down. We can use Google Refine to harvest this data for us by creating a new column containing the data pulled in from a URL built around the value of each cell in another column:

[Image: Google Refine – new column from URL]

The Likes URL has the form https://graph.facebook.com/me/likes?access_token=A_LONG_JUMBLE_OF_LETTERS which we’ll tinker with as follows:

[Image: Google Refine – crafting URLs for new column creation]

The throttle control tells Refine how long to wait between successive calls. I set this to 500ms (that is, half a second), so it takes a few minutes to pull in my couple of hundred or so friends (I don’t use Facebook a lot;-). I’m not sure what rate limit the Facebook API is happy with, but if you hit it too fast (i.e. set the throttle time too low), you may find the API stops returning data to you for a cooling-off period.
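
For what it’s worth, the equivalent harvesting loop in Python is just a GET per friend ID with a sleep in between – a rough sketch along the same lines (again assuming the requests library; the IDs here are stand-ins for the values in the ID column):

import time
import requests

ACCESS_TOKEN = 'A_LONG_JUMBLE_OF_LETTERS'  # placeholder
friend_ids = ['12345', '67890']  # stand-ins for the Facebook IDs in the ID column

likes_by_friend = {}
for fid in friend_ids:
    resp = requests.get('https://graph.facebook.com/' + fid + '/likes',
                        params={'access_token': ACCESS_TOKEN})
    likes_by_friend[fid] = resp.json()
    time.sleep(0.5)  # 500ms throttle, mirroring the Refine setting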

Having imported the data, you should find a new column:

[Image: Google Refine – new data imported]

At this point, it should in theory be possible to generate a new column from each of the records/Likes in the imported data… I found this caused Refine to hang, though, so instead I exported the data using the default Templating… export format, which produces a JSON-like output…

I then used the following Python script to generate a two-column data file, where each row contains a (new) unique identifier for each friend and the name of one of their Likes:

import csv
import json

# Parse Google Refine's Templating... export and write a two-column CSV:
# one row per (anonymised friend id, Like name) pair.
fn = 'my-fb-friends-likes.txt'

with open(fn) as f:
    data = json.load(f)

with open('fbliketest.csv', 'w', newline='') as out:
    writer = csv.writer(out, quoting=csv.QUOTE_ALL)
    friend_id = 0
    for row in data['rows']:
        friend_id += 1  # fresh identifier, so friends aren't named in the output
        # 'interests' is the column name containing the Likes data
        interests = json.loads(row['interests'])
        for like in interests['data']:
            print(friend_id, like['name'], like['category'])  # progress/debug output
            writer.writerow([friend_id, like['name']])

[I think this R script, in answer to a related @mhawksey Stack Overflow question, also does the trick: R: Building a list from matching values in a data.frame]

I could then import this data into Gephi and use it to generate a network diagram of what they commonly liked:

[Image: Sketching common Likes amongst my Facebook friends]
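
If you’d rather build the graph file programmatically than go via Gephi’s CSV importer, something like this networkx sketch should produce a GEXF file that Gephi can open directly (it assumes the two-column CSV produced by the script above):

import csv
import networkx as nx

# Bipartite graph: anonymised friends on one side, Likes on the other
G = nx.Graph()
with open('fbliketest.csv') as f:
    for friend_id, like_name in csv.reader(f):
        G.add_node('friend-' + friend_id, kind='friend')
        G.add_node(like_name, kind='like')
        G.add_edge('friend-' + friend_id, like_name)

nx.write_gexf(G, 'fblikes.gexf')  # Gephi opens GEXF files directly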

Rather than returning Likes, I could equally have pulled back lists of the movies, music or books they like, their own friends lists (permissions settings allowing), and so on, and then generated friends’ interest maps on that basis.

[See also: Getting Started With The Gephi Network Visualisation App – My Facebook Network, Part I and how to visualise Google+ networks]

PS Dropping out of Google Refine and into a Python script is a bit clunky, I have to admit. What would be nice is something like a “create new rows with new column from column” pattern: an iterator over the contents of each cell in the column you want to generate the new column from, where each pass of the iterator 1) duplicates the original data row to create a new row; 2) adds a new column; and 3) populates the cell with the contents of the current iteration state. Or something like that…
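
(For what it’s worth, pandas has something close to this pattern nowadays – a toy sketch with made-up column names, parsing a JSON-holding column and “exploding” it into one row per element:)

import json
import pandas as pd

df = pd.DataFrame({
    'friend_id': [1, 2],
    'interests': ['{"data": [{"name": "BBC News"}, {"name": "Gephi"}]}',
                  '{"data": [{"name": "Arduino"}]}'],
})
# one list of Like names per cell...
df['like'] = df['interests'].apply(lambda s: [d['name'] for d in json.loads(s)['data']])
# ...then one row per (friend, Like) pair
print(df.explode('like')[['friend_id', 'like']])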

PPS Related to the PS request, there is a sort-of-related feature in the 2.5 release of Google Refine that lets you merge data from across rows with a common key into a newly shaped dataset: Key/value Columnize. Seeing this got me wondering what a fusion of Google Refine and RStudio might look like (or even just R support within Google Refine?).
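
(Roughly speaking, Key/value Columnize is the long-to-wide pivot familiar from R or pandas – a toy illustration with hypothetical data:)

import pandas as pd

long_form = pd.DataFrame({
    'friend': [1, 1, 2, 2],
    'key':    ['name', 'category', 'name', 'category'],
    'value':  ['BBC News', 'Media/News', 'Arduino', 'Product'],
})
# one row per friend, one column per key
print(long_form.pivot(index='friend', columns='key', values='value'))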

PPPS This could be interesting – it looks like you can test whether a friendship exists given two Facebook user IDs.

Content or design? Using analytics to identify your problem

[Image: editorial analytics]

As an industry, online publishing has gone through a series of obsessions: from ‘Content is King’ to information architecture (IA), and from SEO (search engine optimisation) to SMO (social media optimisation).

Most people’s view of online publishing is skewed towards one of these areas. For journalists, it’s likely to be SEO; for designers or developers, it’s probably user experience (UX). As a result, when things aren’t going smoothly we’re highly influenced by fashion, and we tend to ignore potential solutions outside our own area.

Content agency Contentini are blogging about how they use analytics to look at websites and identify which of the elements above might be worth focusing on. It’s a useful distillation of common site problems – and an equally useful prompt to stop yourself reaching for the wrong fix.

The post is worth reading in full, and probably pinning to a wall. But here are the bullet points, with a toy sketch of the decision logic after the list:

  • If you have a high bounce rate and people spend little time on your site, it might be an information architecture problem.
  • If people start things but don’t finish them on your site, it’s probably a UX problem.
  • If people aren’t sharing your content, it may be a content issue. (See image above; this part of their framework could do with fleshing out.)
  • If you’re getting less than a third of your traffic from search engines, you need to look at SEO.
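
As promised, here’s that decision logic as a toy Python sketch – the one-third search-traffic figure comes from the post; every other threshold is purely illustrative:

def diagnose(bounce_rate, avg_time_s, completion_rate, share_rate, search_share):
    problems = []
    if bounce_rate > 0.6 and avg_time_s < 30:  # illustrative thresholds
        problems.append('information architecture')
    if completion_rate < 0.5:                  # illustrative threshold
        problems.append('user experience (UX)')
    if share_rate < 0.01:                      # illustrative threshold
        problems.append('content')
    if search_share < 1 / 3.0:                 # 'a third' is from the post
        problems.append('SEO')
    return problems or ['nothing obvious']

print(diagnose(0.7, 20, 0.8, 0.005, 0.25))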

The solutions are in the post itself. Anything you’d add to them?