Tag Archives: analytics

How to: analyse your Twitter or Facebook analytics for the best days or times to post

Twitter’s analytics service is a useful tool for journalists to understand which tweets are having the biggest impact. The dashboard at analytics.twitter.com provides a general overview under tabs like ‘tweets’ and ‘audiences’, and you can download raw data for any period then sort it in a spreadsheet to see which tweets performed best against a range of metrics.

However, if you want to perform any deeper analysis – such as finding out which days are best for tweeting, or which times perform best – you’ll need to get stuck in. Here’s how to do it.
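By way of illustration, here’s a minimal sketch of that deeper analysis in Python, assuming a CSV export from analytics.twitter.com with ‘time’ and ‘engagements’ columns (column names can vary between exports, so check yours first):

import pandas as pd

# Load the raw export downloaded from analytics.twitter.com
tweets = pd.read_csv('tweets.csv')
tweets['time'] = pd.to_datetime(tweets['time'])

# Break each tweet's timestamp into day-of-week and hour-of-day
tweets['day'] = tweets['time'].dt.day_name()
tweets['hour'] = tweets['time'].dt.hour

# Average engagements per tweet for each day and each hour
print(tweets.groupby('day')['engagements'].mean().sort_values(ascending=False))
print(tweets.groupby('hour')['engagements'].mean().sort_values(ascending=False))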

Snapchat for journalists (part 4): sharing and measuring your story

In the previous parts of this series I covered different types of stories, tools, and thinking about narrative. In this extract from the ebook Snapchat for Journalists I cover the practicalities of storing, sharing and measuring your Snapchat stories.


You can read more in the ebook (also available in the Kindle Store)

Sharing your Snapchat Story

Each snap in a story only lasts for 24 hours, so it’s worth sharing your story as early as possible, and regularly, before the snaps expire.

You cannot share a link to your Snapchat story: people need to be following you on Snapchat and checking it for notifications. Whenever you add a new snap to your story, they will receive a subtle notification within Snapchat.

To share it you have a number of options.

What they said: analytics, bots and devices

When you see a complex issue summed up in a few tweets, it’s worth saving. So I’m doing just that below: via Rasmus Kleis Nielsen, Mary Hamilton, Neil Thackray and Steffen Konrath.


Metrics and the media: we can measure it – but can we manage it?

Today I will be chairing the ‘Data Strategy’ track of talks at the Monetising Media conference: people from every part of the industry talking about how metrics now inform not just content strategy but also revenue, advertising, and customer relations.

As I introduce the day I will be thinking about two pieces of data in particular: research by the Tow Center’s Caitlin Petre into the use of Chartbeat; and Checking, Sharing, Clicking and Linking, a piece of research into consumption.

Tips on choosing the right Twitter hashtag: a tale of 5 hashtags

brumvote related tags

What do you do when you’ve been using a hashtag for some time and another one comes along with the potential to be more popular? Do you jump on board – or do you stick with the hashtag you’ve built up? How do you measure the best hashtag to use for your work?

That’s the question that a team of my undergraduate journalism students at Birmingham City University faced last month. And here’s how they addressed it. 

First, some background: in February this year the students launched their election coverage under the hashtag #brumvote.

The hashtag worked well – it took in everything from BuzzFeed-style listicles to hustings liveblogs and data-driven analysis of local MPs’ expenses and voting patterns.

Then last month a similar hashtag appeared: the BBC launched their own youth-targeting election project, with the hashtag #brumvotes.

At this point the students faced 3 choices:

  1. Keep using the #brumvote hashtag
  2. Adopt the new #brumvotes hashtag
  3. Use both

Changing hashtags would involve editing dozens of posts from previous coverage, but would the clout of the BBC mean missing out on a potentially more successful hashtag?
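One rough way to put numbers on that question is to compare recent tweet volume on each tag. Here’s a hedged sketch using the tweepy library and the Twitter API v2 counts endpoint – my own suggestion, not necessarily what the students used – with a placeholder bearer token:

import tweepy

# Placeholder token: create one in the Twitter developer portal
client = tweepy.Client(bearer_token='YOUR_BEARER_TOKEN')

for tag in ('#brumvote', '#brumvotes'):
    # Tweet counts over the past seven days, bucketed by day
    counts = client.get_recent_tweets_count(query=tag, granularity='day')
    total = sum(bucket['tweet_count'] for bucket in counts.data)
    print(tag, total)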

Thinking of doing your student project online? Here are 5 mistakes to avoid

Journalism courses often expect students to spend a large part of their final year or semester producing an independent project. Here, for those about to embark on such a project online, or putting together a proposal for one, I list some common pitfalls to watch out for…

16 reasons why this research will change how you look at news consumption

Most research on news consumption annoys me. Most research on news consumption – like Pew’s State of the News Media – relies on surveys of people self-reporting how they consume news. But surveys can only answer the questions that they ask. And as any journalist with a decent bullshit detector should know: the problem is people misremember, people forget, and people lie.

The most interesting news consumption research uses ethnography: this involves watching people and measuring what they actually do – not what they say they do. To this end AP’s 2008 report A New Model for News is still one of the most insightful pieces of research into news consumption you’ll ever read – because it picks out details like the role that email and desktop widgets play, or the reasons why people check the news in the first place (they’re bored at work, for example).

Now, six years on, two Dutch researchers have published a paper summarising various pieces of ethnographic and interview-based consumption research (£) from the last decade – providing some genuine insights into just how varied news ‘consumption’ actually is.

Irene Costera Meijer and Tim Groot Kormelink‘s focus is not on what medium people use, or when they use it, but rather on how engaged people are with the news.

To do this they have identified 16 different news consumption practices which they give the following very specific names:

  1. Reading
  2. Watching
  3. Viewing
  4. Listening
  5. Checking
  6. Snacking
  7. Scanning
  8. Monitoring
  9. Searching
  10. Clicking
  11. Linking
  12. Sharing
  13. Liking
  14. Recommending
  15. Commenting
  16. Voting

Below is my attempt to summarise those activities, why they’re important for journalists and publishers, and the key issues they raise for the way that we publish.

Social Interest Positioning – Visualising Facebook Friends’ Likes With Data Grabbed Using Google Refine

What do my Facebook friends have in common in terms of the things they have Liked, or in terms of their music or movie preferences? (And does this say anything about me?!) Here’s a recipe for visualising that data…

After discovering via Martin Hawksey that the recent (December 2011) 2.5 release of Google Refine allows you to import JSON and XML feeds to bootstrap a new project, I wondered whether it would be able to pull in data from the Facebook API if I was logged in to Facebook (Google Refine does run in the browser, after all…).

Looking through the Facebook API documentation whilst logged in to Facebook, it’s easy enough to find exemplar links to things like your friends list (https://graph.facebook.com/me/friends?access_token=A_LONG_JUMBLE_OF_LETTERS) or the list of likes someone has made (https://graph.facebook.com/me/likes?access_token=A_LONG_JUMBLE_OF_LETTERS); replacing me with the Facebook ID of one of your friends should pull down a list of their friends, or likes, etc.

(Note that the validity of the access token is time-limited, so you can’t grab a copy of the access token and hope to use the same one day after day.)
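As an illustration outside Refine, here’s a minimal Python sketch of the same friends-list request, assuming the requests library and a valid, freshly copied access token (the token value is a placeholder):

import requests

TOKEN = 'A_LONG_JUMBLE_OF_LETTERS'  # placeholder: paste in a current token

# Fetch the logged-in user's friends list from the Graph API
friends = requests.get('https://graph.facebook.com/me/friends',
                       params={'access_token': TOKEN}).json()

for f in friends.get('data', []):
    print(f['name'], f['id'])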

Grabbing your friends list from Facebook is simply a case of opening a new project, choosing to get the data from a Web Address, and then pasting in the friends list URL:

Google Refine - import Facebook friends list

Click on next, and Google Refine will download the data, which you can then parse as a JSON file, and from which you can identify individual record types:

Google Refine - import Facebook friends

If you click the highlighted selection, you should see the data that will be used to create your project:

Google Refine - click to view the data

You can now click on Create Project to start working on the data – the first thing I do is tidy up the column names:

Google Refine - rename columns

We can now work some magic – such as pulling in the Likes our friends have made. To do this, we need to create the URL for each friend’s Likes using their Facebook ID, and then pull the data down. We can use Google Refine to harvest this data for us by creating a new column containing the data pulled in from a URL built around the value of each cell in another column:

Google Refine - new column from URL

The Likes URL has the form https://graph.facebook.com/me/likes?access_token=A_LONG_JUMBLE_OF_LETTERS which we’ll tinker with as follows:

Google Refine - crafting URLs for new column creation

The throttle control tells Refine how long to wait between calls. I set this to 500ms (that is, half a second), so it takes a few minutes to pull in my couple of hundred or so friends (I don’t use Facebook a lot;-). I’m not sure what rate limit the Facebook API is happy with; if you hit it too fast (i.e. set the throttle time too low), you may find the Facebook API stops returning data to you for a cooling-off period…
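For comparison, the same throttled harvesting loop can be sketched in a few lines of Python, assuming the requests library; the half-second pause mirrors the 500ms throttle setting above:

import time
import requests

def fetch_likes(friend_ids, token, throttle=0.5):
    # Fetch each friend's Likes in turn, pausing between calls so as
    # not to trip the Facebook API's rate limits
    likes = {}
    for fid in friend_ids:
        url = 'https://graph.facebook.com/%s/likes' % fid
        likes[fid] = requests.get(url, params={'access_token': token}).json()
        time.sleep(throttle)
    return likes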

Having imported the data, you should find a new column:

Google Refine - new data imported

At this point, it is possible to generate a new column from each of the records/Likes in the imported data… in theory (or maybe not…). I found this caused Refine to hang, though, so instead I exported the data using the default Templating… export format, which produces some sort of JSON output…
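For reference, the script below assumes the Templating export has roughly this shape: a top-level ‘rows’ list, with each row holding the raw JSON string that was fetched into the ‘interests’ column (this structure is inferred from the script rather than from any documented format):

# Assumed structure of the Templating export (inferred, not documented)
example = {
    "rows": [
        {
            "name": "A Friend",
            "interests": '{"data": [{"name": "Some Page", "category": "Interest"}]}'
        }
    ]
}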

I then used this Python script to generate a two-column data file, where each row contained a (new) unique identifier for each friend and the name of one of their Likes:

import simplejson, csv

# Write out each (friend id, Like name) pair as a quoted CSV row
writer = csv.writer(open('fbliketest.csv','wb+'), quoting=csv.QUOTE_ALL)

# The JSON file exported from Google Refine's Templating exporter
fn = 'my-fb-friends-likes.txt'

data = simplejson.load(open(fn,'r'))
id = 0
for d in data['rows']:
    # Use a simple counter as an anonymised identifier for each friend
    id = id + 1
    # 'interests' is the column name containing the Likes data,
    # stored as a raw JSON string fetched from the Graph API
    interests = simplejson.loads(d['interests'])
    for i in interests['data']:
        print str(id), i['name'], i['category']
        # Only the id and Like name are saved; the category is just printed
        writer.writerow([str(id), i['name'].encode('ascii','ignore')])

[I think this R script, in answer to a related @mhawksey Stack Overflow question, also does the trick: R: Building a list from matching values in a data.frame]

I could then import this data into Gephi and use it to generate a network diagram of what they commonly liked:

Sketching common likes amongst my facebook friends
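As an aside, the same bipartite friend-to-Like graph could be built programmatically. Here’s a hedged sketch using the networkx library (an alternative I’m suggesting, not part of the workflow above), reading the CSV generated earlier and saving a GEXF file that Gephi opens directly:

import csv
import networkx as nx

G = nx.Graph()
with open('fbliketest.csv') as f:
    for friend_id, like in csv.reader(f):
        # Friends and Likes become two kinds of node...
        G.add_node('friend-' + friend_id, kind='friend')
        G.add_node(like, kind='like')
        # ...joined whenever a friend has Liked that page
        G.add_edge('friend-' + friend_id, like)

nx.write_gexf(G, 'fb-likes.gexf')  # Gephi can open GEXF files directly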

Rather than returning Likes, I could equally have pulled back lists of the movies, music or books they like, their own friends lists (permissions settings allowing), and so on, and then generated friends’ interest maps on that basis.

[See also: Getting Started With The Gephi Network Visualisation App – My Facebook Network, Part I and how to visualise Google+ networks]

PS dropping out of Google Refine and into a Python script is a bit clunky, I have to admit. What would be nice would be to be able to do something like a “create new rows with new column from column” pattern that would let you set up an iterator through the contents of each of the cells in the column you want to generate the new column from, and for each pass of the iterator: 1) duplicate the original data row to create a new row; 2) add a new column; 3) populate the cell with the contents of the current iteration state. Or something like that…
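In Python terms, the pattern I have in mind would look something like this (a rough sketch of the hypothetical feature, not anything Refine currently offers):

def explode_column(rows, column, parse):
    # For each row, parse the chosen column into individual records and
    # emit one duplicated row per record, with that record in a new column
    out = []
    for row in rows:
        for record in parse(row[column]):
            new_row = dict(row)          # 1) duplicate the original data row
            new_row['record'] = record   # 2) and 3) add and populate the new column
            out.append(new_row)
    return out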

PPS Related to the PS request, there is a sort-of-related feature in the 2.5 release of Google Refine that lets you merge data from across rows with a common key into a newly shaped dataset: Key/value Columnize. Seeing this got me wondering what a fusion of Google Refine and RStudio might be like (or even just R support within Google Refine?)

PPPS this could be interesting – looks like you can test to see if a friendship exists given two Facebook user IDs.