
Location, Location, Location

In this guest post, Damian Radcliffe highlights some recent developments at the intersection of hyper-local and SoLoMo (social, local, mobile). His more detailed slides, looking at 20 developments across the sector during the last two months of 2011, are cross-posted at the bottom of this article.

Facebook’s recent purchase of location-based service Gowalla (Slide 19 below) suggests that the social network still thinks there is a future for this type of “check in” service. Yet although they have been touted as “the next big thing” ever since Foursquare launched at SXSW in 2009, to date Location Based Services (LBS) haven’t quite lived up to the hype.

Certainly there’s plenty of data to suggest that the public don’t quite share the enthusiasm of many Silicon Valley investors. Yet.

Part of their challenge is that awareness of these services is relatively low – just 30% of respondents in a 37,000-person Forrester survey had heard of them (Slide 27) – and their benefits are not necessarily clearly understood either.

In 2011, a study by youth marketing agency Dubit found that about half of UK teenagers were not aware of location-based social networking services such as Foursquare and Facebook Places, with 58% of those who had heard of them saying they “do not see the point” of sharing geographic information.

Safety may not be the primary concern of Dubit’s respondents, but as the “Please Rob Me” website puts it: “….on one end we’re leaving lights on when we’re going on a holiday, and on the other we’re telling everybody on the internet we’re not home… The danger is publicly telling people where you are. This is because it leaves one place you’re definitely not… home.”

Reinforcing this concern are several stories from both the UK and the US of insurers refusing to pay out after a domestic burglary, where victims have announced via social networks that they were away on holiday – or having a beer downtown.

For LBS to go truly mass market – and Forrester (see Slide 27) found that only 5% of mobile users were monthly LBS users – smartphone growth will be a key part of the puzzle. Recent Ofcom data reported that:

  • Ownership nearly doubled in the UK between February 2010 and August 2011 (from 24% to 46%).
  • 46% of UK internet users also used their phones to go online in October 2011.

For now at least, most of our location-based activity seems to follow previous online behaviours. So, search continues to dominate.

Google, in a recent blog post, described local search ads as “so hot right now” (Slide 22, Sept–Oct 2011 update). The search giant launched hyper-local search ads a year ago, followed by a “News Near You” feature in May 2011 (see the April–May 2011 update, Slide 27).

Meanwhile, BIA/Kelsey forecast that local search advertising revenues in the US will increase from $5.1 billion in 2010 to $8.2 billion in 2015. Their figures suggest that by 2015, 30% of all search will be local.

The other notable growth area, location-based mobile advertising, offers a different slant on the typical “check in” service in which Gowalla et al tend to specialise. Borrell forecasts that this space will grow 66% in the US during 2012 (Slide 22).

The most high profile example of this service in the UK is O2 More, which triggers advertising or deals when a user passes through certain locations – offering a clear financial incentive for sharing your location.

Perhaps this – along with tailored news and information manifest in services such as News Near You, Postcode Gazette and India’s Taazza – is the way forward.

Jiepang, China’s leading location-based social mobile app, offered a recent example of how to do this. Late last year it partnered with Starbucks, offering users a virtual Starbucks badge if they “checked in” at a Starbucks store in Shanghai, Jiangsu or Zhejiang. When the number of badges issued hit 20,000, all badge holders got a free festive upgrade to a larger cup size. Coupled with the ease of the NFC technology deployed to let users “check in”, it’s easy to understand the consumer benefit of such a service.

Mine’s a venti gingerbread latte. No cream. Xièxiè.

Social Interest Positioning – Visualising Facebook Friends’ Likes With Data Grabbed Using Google Refine

What do my Facebook friends have in common in terms of the things they have Liked, or in terms of their music or movie preferences? (And does this say anything about me?!) Here’s a recipe for visualising that data…

After discovering via Martin Hawksey that the recent (December 2011) 2.5 release of Google Refine allows you to import JSON and XML feeds to bootstrap a new project, I wondered whether it would be able to pull in data from the Facebook API if I was logged in to Facebook (Google Refine does run in the browser, after all…).

Looking through the Facebook API documentation whilst logged in to Facebook, it’s easy enough to find exemplar links to things like your friends list (https://graph.facebook.com/me/friends?access_token=A_LONG_JUMBLE_OF_LETTERS) or the list of Likes someone has made (https://graph.facebook.com/me/likes?access_token=A_LONG_JUMBLE_OF_LETTERS). Replacing me with the Facebook ID of one of your friends should pull down a list of their friends, or their Likes, and so on.

(Note that the access token is time-limited, so you can’t grab a copy of it and hope to use the same one day after day.)
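If you want to sanity-check those calls outside the browser first, here’s a minimal sketch in Python 2 (the same simplejson library crops up later in this post; the token value is, of course, a placeholder for a current one):

import urllib2, simplejson

# A current access token, copied from the API documentation page (they expire quickly)
ACCESS_TOKEN = 'A_LONG_JUMBLE_OF_LETTERS'

# Pull down the friends list for the logged-in user ('me')
url = 'https://graph.facebook.com/me/friends?access_token=' + ACCESS_TOKEN
friends = simplejson.load(urllib2.urlopen(url))

# Each record in the 'data' list has an 'id' and a 'name'
for f in friends['data']:
    print f['id'], f['name']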

Grabbing the link to your friends on Facebook is simply a case of opening a new project, choosing to get the data from a Web Address, and then pasting in the friends list URL:

Google Refine - import Facebook friends list

Click on Next, and Google Refine will download the data; you can then parse it as a JSON file and identify the individual record types:

Google Refine - import Facebook friends

If you click the highlighted selection, you should see the data that will be used to create your project:

Google Refine - click to view the data

You can now click on Create Project to start working on the data – the first thing I do is tidy up the column names:

Google Refine - rename columns

We can now work some magic – such as pulling in the Likes our friends have made. To do this, we need to create the URL for each friend’s Likes using their Facebook ID, and then pull the data down. We can use Google Refine to harvest this data for us by creating a new column containing the data pulled in from a URL built around the value of each cell in another column:

Google Refine - new column from URL

The Likes URL has the form https://graph.facebook.com/me/likes?access_token=A_LONG_JUMBLE_OF_LETTERS which we’ll tinker with as follows:

Google Refine - crafting URLs for new column creation

The throttle control tells Refine how long to wait between calls. I set this to 500ms (that is, half a second), so it takes a few minutes to pull in my couple of hundred or so friends (I don’t use Facebook a lot;-). I’m not sure what rate the Facebook API is happy with: if you hit it too fast (i.e. set the throttle time too low), you may find it stops returning data to you for a cooling-down period…
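For reference, the expression typed into that dialog is a one-liner in GREL (Refine’s expression language), something along these lines – here value is the contents of the friend-ID column, and the access token is the same placeholder as before:

"https://graph.facebook.com/" + value + "/likes?access_token=A_LONG_JUMBLE_OF_LETTERS"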

Having imported the data, you should find a new column:

Google Refine - new data imported

At this point it should, in theory, be possible to generate a new column from each of the records/Likes in the imported data… I found this caused Refine to hang, though, so instead I exported the data using the default Templating… export format, which produces a JSON-like output…

I then used this Python script to generate a two-column data file, where each row contains a (new) unique identifier for each friend and the name of one of their Likes:

import simplejson, csv

# Write out a two-column CSV file: a (new) unique identifier for each friend,
# and the name of one of their Likes
writer = csv.writer(open('fbliketest.csv', 'wb+'), quoting=csv.QUOTE_ALL)

# The file exported from Google Refine using the Templating... exporter
fn = 'my-fb-friends-likes.txt'

data = simplejson.load(open(fn, 'r'))
uid = 0
for d in data['rows']:
    uid = uid + 1
    # 'interests' is the column name containing the Likes data;
    # each cell holds raw JSON from the Graph API, so it needs parsing again
    interests = simplejson.loads(d['interests'])
    for i in interests['data']:
        # Print each Like to the console as a progress check
        print str(uid), i['name'], i['category']
        writer.writerow([str(uid), i['name'].encode('ascii', 'ignore')])
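(Saved as, say, fblikes.py – a filename I’m inventing here – and run in the same directory as the Refine export, the script prints each Like to the console as a progress check and leaves rows of the form "1","Some Band" in fbliketest.csv: exactly the two-column file needed for the next step.)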

[I think this R script, in answer to a related @mhawksey Stack Overflow question, also does the trick: R: Building a list from matching values in a data.frame]

I could then import this data into Gephi and use it to generate a network diagram of what they commonly liked:

Sketching common likes amongst my facebook friends
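(A note for anyone following along: Gephi will read a two-column CSV like this as an edge list – add a Source,Target header row and each friend–Like pair becomes an edge, so Likes shared by several friends appear as the hubs of the resulting network.)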

Rather than returning Likes, I could equally have pulled back lists of the movies, music or books they like, their own friends lists (permissions settings allowing), and so on, and then generated friends’ interest maps on that basis.

[See also: Getting Started With The Gephi Network Visualisation App – My Facebook Network, Part I and how to visualise Google+ networks]

PS Dropping out of Google Refine and into a Python script is a bit clunky, I have to admit. It would be nice to have something like a “create new rows with new column from column” pattern that would let you set up an iterator through the contents of each of the cells in the column you want to generate the new column from, and for each pass of the iterator: 1) duplicate the original data row to create a new row; 2) add a new column; 3) populate the cell with the contents of the current iteration state. Or something like that…

PPS Related to the PS request, there is a somewhat related feature in the 2.5 release of Google Refine that lets you merge data from across rows with a common key into a newly shaped data set: Key/value Columnize. Seeing this got me wondering what a fusion of Google Refine and RStudio might be like (or even just R support within Google Refine?).

PPPS This could be interesting – it looks like you can test whether a friendship exists given two Facebook user IDs.

Choosing a strategy for content: 4 Ws and a H

Something interesting happened to journalism when it moved from print and broadcast to the web. Aspects of the process that we barely thought about started to be questioned: the ‘story’ itself seemed less than fundamental. Decisions that you didn’t need to make as a journalist – such as what medium you would use – were becoming part of the job.

In fact, a whole raft of new decisions now needed to be made.

For those launching a new online journalism project, these questions are now increasingly tackled with a content strategy, a phrase and approach which, it seems to me, began outside of the news industry (where the content strategy had been settled on so long ago that it became largely implicit) and has steadily been rediscovered by journalists and publishers.

‘Web first’, for example, is a content strategy; the Seattle Times’s decision to focus on creation, curation and community is a content strategy. Reed Business Information’s reshaping of its editorial structures is, in part, a content strategy.

Why does a journalist need a content strategy?

I’ve written previously about the style challenge facing journalists in a multi platform environment: where before a journalist had few decisions to make about how to treat a story (the medium was given, the formats limited, the story supreme), now it can be easy to let old habits restrict the power, quality and impact of reporting.

Below, I’ve tried to boil these new decisions down into 4 different types – and one overarching factor influencing them all. These are decisions that often have to be made quickly in the face of changing circumstances – I hope that fleshing them out in this way will help in making those decisions more quickly and effectively.

1. Format (“How?”)

We’re familiar with formats: the news in brief; the interview; the profile; the in-depth feature; and so on. They have their conventions and ingredients. If you’re writing a report you know that you will need a reaction quote, some context, and something to wrap it up (a quote; what happens next; etc.). If you’re doing an interview you’ll need to gather some colour about where it takes place, and how the interviewee reacts at various points.

Formats are often at their most powerful when they are subverted: a journalist who knows the format inside out can play with it, upsetting the reader’s expectations for the most impact. This is the tension between repetition and contrast that underlies not just journalism but good design, and even music.

As online journalism develops, dozens of new formats have become available. Here are just a few:

  • the liveblog;
  • the audio slideshow;
  • the interactive map;
  • the app;
  • the podcast;
  • the explainer;
  • the portal;
  • the aggregator;
  • the gallery.

Formats are chosen because they suit the thing being covered, its position in the publisher’s news environment, and the resources of the publisher.

Historically, for example, when a story first broke for most publishers a simple report was the only realistic option. But after that, they might commission a profile, interview, or deeper feature or package – if the interest and the resources warranted that.

The subject matter would also be a factor. A broadcaster might be more inclined to commission a package on a story if colourful characters or locations were involved and were accessible. They might also send a presenter down for a two-way.

These factors still come into play now we have access to a much wider range of formats – but a wider understanding of those formats is also needed.

  • Does the event take place across a geographical area, with users wanting to see movement or to focus on a particular location? Then a map might be most appropriate.
  • Are things changing so fast that a traditional ‘story’ format is going to be inadequate? Then a liveblog may work better.
  • Is there a wealth of material out there being produced by witnesses? A gallery, portal or aggregator might all be good choices.
  • Have you secured an interview with a key character, and a set of locations or items that tell their own story? Is it an ongoing or recurring story? An audio slideshow or video interview may be the most powerful choice of format.
  • Are you on the scene and raw video of the event is going to have the most impact? Grab your phone and film – or stream.

2. Medium (“What?”)

Depending on what format has been chosen, the medium may be chosen for you too. But a podcast can be audio or video; a liveblog can involve text and multimedia; an app can be accessed on a phone, a webpage, a desktop widget, or Facebook.

This is not just about how you convey information about what’s going on (you’ll notice I avoid the use of ‘story’, as this is just one possible choice of format) but how the user accesses it and uses it.

A podcast may be accessed on the move; a Facebook app on mobile, in a social context; and so on. These are factors to consider as you produce your content.

3. Platform (“Where?”)

Likewise, the platforms where the content is to be distributed need careful consideration.

A liveblog’s reporting might be done through Twitter and aggregated on your own website. A map may be compiled in a Google spreadsheet but published through Google Maps and embedded on your blog.

An audioboo may have subscribers on iTunes or on the Audioboo app itself, and its autoposting feature may attract large numbers of listeners through Twitter.

Some call the choice of platform a choice of ‘channel’ but that does not do justice to the interactive and social nature of many of these platforms. Facebook or Twitter are not just channels for publishing live updates from a blog, but a place where people engage with you and with each other, exchanging information which can become part of your reporting (whether you want it to or not).

(Look at these tutorials for copy editors on Twitter to get some idea of how that platform alone requires its own distinct practices)

Your content strategy will need to take account of what happens on those platforms: which tweets are most retweeted or argued with; how you react to information posted in your blog or liveblog comments; and so on.

[UPDATE, March 25: This video from NowThisNews's Ed O'Keefe explains how this aspect plays out in his organisation]

4. Scheduling (“When?”)

The choice of platform(s) will also influence your choice of timing. There will be different optimal times for publishing to Facebook, Twitter, email mailing lists, blogs, and websites.

There will also be optimal times for different formats (as the Washington Post found). A short news report may suit morning commuters; an audio slideshow or video may be best scheduled for the evening. Something humorous may play best on a Friday afternoon; something practical on a Wednesday afternoon once the user has moved past the early week slog.

Infographic: The Best Times To Post To Twitter & Facebook

This webcast on content strategy gives a particular insight into how they treat scheduling – not just across the day but across the week.

5. “Why?”

Print and broadcast rest on objectives so implicit that we barely think about them. The web, however, may have different objectives. Instead of attracting the widest numbers of readers, for example, we may want to engage users as much as possible.

That makes a big difference in any content strategy:

  • The rapid rise of liveblogs and explainers as formats can be partly explained by their stickiness when compared with traditional news articles.
  • Advertiser demand for video content has exceeded supply for some publishers, because video allows advertising to be embedded with content in a way that isn’t possible with text.
  • Infographics have exploded as they lend themselves so well to viral distribution.

Distribution is often one answer to ‘why?’, and introduces two elements I haven’t mentioned so far: search engine optimisation and social media optimisation. Blogs as a platform and text as a medium are generally better optimised for search engines, for example. But video and images are better optimised for social network platforms such as Facebook and Twitter.

And the timing of publishing might be informed by analytics of what people are searching for, updating Facebook about, or tweeting about right now.

The objective(s), of course, should recur as a consideration throughout all the stages above. And some stages will have different objectives: for distribution, for editorial quality, and for engagement.

Just to confuse things further, the objectives themselves are likely to change as the business models around online and multiplatform publishing evolve.

If I’m going to sum up all of the above in one line, then, it’s this: “Take nothing for granted.”

I’m looking for examples of content strategies for future editions of the book – please let me know if you’d like yours to be featured.

New Facebook news apps: bring the news to your users, or invite users to your news?

There’s a salient quote in Journalism.co.uk’s report on Facebook’s “new class of news apps” launched today:

“As we worked with different news organisations there were two camps: people that wanted to bring the social experience onto their sites, like Yahoo [News] and the Independent; and those that wanted the social news experience on Facebook, like Guardian, the Washington Post and the Daily,” director of Facebook’s platform partnerships Christian Hernandez told Journalism.co.uk.

So which is better? An initial play with the apps of The Independent and The Guardian demonstrates the difference well. Here, for example, is the Facebook app widget as it appears on The Independent – or rather, as it almost appears: various other editorial and commercial choices push it below the fold:

The Independent's new Facebook App in action

The Guardian app, meanwhile, hands over editorial control to the users in a customarily clean design:

Guardian Facebook app

But hold on, what’s this in my news/activity/information overload stream next to The Guardian’s article?

The Guardian news app with Independent stories in the user's news stream

It appears that The Independent app takes the news to the users as well.

20 recent hyperlocal developments (June-August 2011)

Ofcom’s Damian Radcliffe produces a regular round-up of developments in hyperlocal publishing. In this guest post he cross-publishes his latest presentation, covering this summer, along with the background to the reports.

Ofcom’s 2009 report on Local and Regional Media in the UK identified the increasing role that online hyperlocal media is playing in the local and regional media ecology.

New research in the report found that:

“One in five consumers claimed to use community websites at least monthly, and a third of these said they had increased their use of such websites over the past two years.”

That was two years ago, and since then, this nascent sector has continued to evolve, with the web continuing to offer a space and platform for community expression, engagement and empowerment.

The diversity of these offerings is manifest in the Hyperlocal Voices series found on this website, as well as Talk About Local’s Ten Questions feature, both of which speak to hyperlocal practitioners about their work.

For a wider view of developments in this sector, you may want to look at the series of slides I publish on SlideShare every two months.

Each set of slides typically outlines 20 recent hyperlocal developments; usually 10 from the UK and 10 from the US.

Topics in the current edition include Local TV, hyperlocal coverage of the recent England riots, the rise of location based deals and marketing, as well as the FCC’s report on The Information Needs of Communities.

Feedback and suggestions for future editions – including omissions from current slides – are actively welcomed.

When will we stop saying “Pictures from Twitter” and “Video from YouTube”?

Image from YouTube

Over the weekend the BBC had to deal with the embarrassing ignorance of someone in their complaints department who appeared to believe that images shared on Twitter were “public domain” and “therefore … not subject to the same copyright laws” as material outside social networks.

A blog post, from online communities adviser Andy Mabbett, gathered thousands of pageviews in a matter of hours before the BBC’s Social Media Editor Chris Hamilton quickly responded:

“We make every effort to contact people, as copyright holders, who’ve taken photos we want to use in our coverage.

“In exceptional situations, ie a major news story, where there is a strong public interest in making a photo available to a wide audience, we may seek clearance after we’ve first used it.”

(Chris also published a blog post yesterday expanding on some of the issues, the comments on which are also worth reading)

The copyright issue – and the existence of a member of BBC staff who hadn’t read the Corporation’s own guidelines on the matter – was a distraction. What really rumbled through the 170+ comments – and indeed Andy’s original complaint – was the issue of attribution.


Can we go beyond ‘Share on Facebook’?

ProPublica have created a rather wonderful news app around education data. As Nieman reports:

“The app invites both macro and micro analysis, with an implicit focus on personal relevance: You can parse the data by state, or you can drill down to individual schools and districts — the high school you went to, or the one that’s in your neighborhood. And then, even more intriguingly, you can compare schools according to geographical proximity and/or the relative wealth and poverty of their student bodies.”

This is exactly what data journalism is great at.

What’s more, the Nieman article talks breathlessly about ProPublica aiming to make data “more social”. What they describe is basically an embedded ‘Share this’ text box (admittedly nicely seamless) and a hashtag. But the news app page actually has a lot more to it: for example, once you’ve given it permission to access your Facebook account, it tells you how many friends have used the app, and appears to try to connect you to schools in your profile. This is how that’s presented on the homepage:

This came as a refreshing relief, because the ‘share this’ strategy reminds me of organisations who say their social media strategy is to ‘get everyone on Twitter’.

Still, it made me think of the range of challenges that Facebook and other social media platforms present. For example, if you land on one of the comparison pages, the offering isn’t so compelling: the reason to install the Facebook app is just “Share this”.

As I’ve written before, technology is a tool, not a strategy, so here are some other opportunities that might be explored:

  1. Publish your school’s scores to Facebook graphically, not just the generic link. Images work particularly well in news feeds, and would be much better than the dry list of names that is generated by the ‘Share this’ button.
  2. Turn conventional news values on their head: be positive. This is a curious one: positive headlines seem to get shared more on social media, so could users celebrate their school’s ratings as much as bemoan them? Could they generate a virtual report card with a ‘Try harder!’ line? Imagine a Facebook editor who asks “Where can we put the exclamation mark?” Yes, I know, it makes me feel uncomfortable too – but I also hear Yoda’s voice saying “You must unlearn what you have learned…”
  3. Build on where they’ve come from: if a friend has used the app to send them to a comparison page, can you build on that in the way you invite the user to connect through Facebook? Could they add something to what the friend has done, and correspond back and forth?
  4. A Facebook-based quiz which sees how well you guess where your school rates on different scales. Perhaps you could compete against your current or former classmates…
  5. A campaigning tool that would allow people to use data on their local school to petition for more support;
  6. Or a collaboration tool to help parents and students raise money, or organise provision.

Competition, fun, campaigning, conversation, collaborating – those are genuinely social applications of technology. It would be interesting to start a discussion about what else might suit a news app’s integration with Facebook. Any ideas?

What I learned from the Facebook Page experiment – and what happens next

Paul Bradshaw Facebook page

Cross-posted from the BBC College of Journalism blog:

Last week my experiment in running a blog entirely through a Facebook Page quietly came to the end of its allotted four weeks. It’s been a useful exercise, and I’m going to adapt the experiment slightly. Here’s what I’ve learned:

It suits emotive material

The most popular posts during that month were simple links that dealt with controversy – the Isle of Wight council talking about withdrawing accreditation if a blogger refused to pre-moderate comments; and the wider issue of being denied access to public documents or meetings on the basis of blogging.

This isn’t a shock – research into Facebook tends to draw similar conclusions about the value of ‘social’ content.

That said, it’s hard to draw firm conclusions because the Insights data only gives numbers on posts after June 9 (when I posted a book chapter as a series of Notes), and the network effects will have changed as the page accumulated Likes.

UPDATE: Scrolling down the page, each update does show impressions and interaction data in light grey – I’m not sure why these are not included in the Insights data (perhaps that service only kicks in after a certain number of Likes). But they do confirm that links get much higher traffic than Notes.

It requires more effort than most blogs

With most blogging it’s quite easy to ‘just do it’ and then figure out the bells and whistles later. With a Facebook Page I think a bit of preparation goes a long way – especially to avoid problems later on.

Firstly, there’s the choice of whether to start one from scratch or convert an existing Facebook account into a Page.

Secondly, there’s the page name itself: at first you can edit this, but after 100 Likes you can’t. That leaves my ‘Paul Bradshaw’s Online Journalism Blog on FB for 1 month‘ looking a bit silly 5 weeks later. (It would be nice if Facebook warned you that this was going to happen.)

Thirdly, if you want to write more than 420 characters, you’ll need to use Notes (ideally when logged on as the Page itself, which will result in the Note being auto-posted to the wall). And if you want to link phrases without littering the note with ugly URLs, you’ll need to use HTML code – a standard anchor tag such as <a href="http://example.com/">your linked phrase</a>.

Next, there’s integration with other online presences. Here are the apps I used:

  1. RSS Graffiti (for auto-posting RSS feeds from elsewhere)
  2. Slideshare (adds a new tab for your presentations on that site)
  3. Cueler YouTube (pulls new updates from your YouTube account)
  4. Tweets to Pages (pulls from your Twitter account into a new tab)

There’s also Smart Twitter for Pages which publishes page updates to Twitter; or you can use Facebook’s own Twitter page to link pages to Twitter.

Finally, I was thankful that I had used a Feedburner account for the Online Journalism Blog RSS feed. That allowed me to change the settings so that subscribers to the blog would still receive updates from the Facebook page (which also has an RSS feed) – and change it back afterwards.

It’s not suited for anything you might intend to find later

Although Vadim Lavrusik pointed out that you can find the Facebook page through Google or Facebook’s own search, individual posts are rather more difficult to track down.

The lack of tags and categories also makes it difficult to retrieve updates and notes – and highlights the problems this poses for search engine optimisation.

This created a curious tension: on the one hand, short term traffic to individual posts was probably higher than I would normally get on the blog outside Facebook. On the other, there was little opportunity for long term traffic: there was no footprint of inbound links for Google to follow.

This may not be a problem for local, hard news organisations which have a rapid turnover of content, no need to rank in Google News, and little value in the archives.

But there are too many drawbacks for most to move (as Rockville Central’s blog recently did) completely to Facebook. It simply leaves you too isolated, too ephemeral, and too vulnerable to changes in Facebook’s policies.

Part of a network strategy

So in short: while it’s great for short-term traffic, it’s bad for traffic long-term. It’s better suited to ongoing work and linking than to more finished articles. And it shouldn’t be viewed in isolation from the rest of the web, but rather as one more prong in a distributed strategy: just as I tweet some things, tumblelog others, and simply share or bookmark the rest, Facebook Pages fit in somewhere amidst all of that.

Now I just need to keep on working out exactly how.

Which blog platform should I use? A blog audit

When people start out blogging they often ask what blogging platform they should use – WordPress or Blogger? Tumblr or Posterous? It’s impossible to give an answer, because the first questions should be: who is going to use it, how, and what and who for?

To illustrate how the answers to those questions can help in choosing the best platform, I decided to go through the 35 or so blogs I have created, and why I chose the platforms that they use. As more and more publishing platforms have launched and new features have been added, some blogs have changed platforms, while newer ones have made different choices from older ones.