New research from Indiana University showed that 54% of URL requests had no referrer. That means that most of the time people do not click on links: they pick a site from their favorites or type a URL into the address bar. A mere 5% of URL requests came from search engines.
The figures can hardly be doubted. The study monitored 100,000 users over nine months, the largest sample yet. What is more, the share of URL requests without a referrer actually increased over the course of the study.
Users seem less Google-prone than is often claimed. They spend little time surfing and prefer to go directly to destinations they know.
The share of users who asked Google for “bbc.co.uk” actually rose over the years, which suggests that web literacy did not increase. Some users still do not understand the difference between the address bar and a search engine. As internet penetration continues to grow, late adopters are forced to go online, and their behavior is far from the rosy picture constantly painted by geeky web consultants.
Should we jump to conclusions and discard all the theories on network building and SEO? Rich Gordon, from the Readership Institute, argues for instance that the answer for news outlets lies in building destinations, not bridges: nesting deeper into the user’s head so that she comes back more regularly.
Even though the figure of 54% of URL requests without a referrer is impressive (and growing), it does not imply that web users are stuck in some 1995-like behavior. Asked by e-mail about the discrepancy between his research and what some webmasters report, with Google traffic being their main concern, Mark Meiss offers several answers.
Quite surprised himself, he first admits that his experimental design counted requests from AJAX pages or RSS readers as having no referrer. He also stressed the difference between some heavily visited websites, such as Facebook, and those on the news market. News-seeking traffic can be dwarfed by recently emerged usages that concentrate on a few websites (read: social networking). The full interview can be read here.
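To see why that design choice matters, here is a minimal sketch of the kind of referrer classification such a study implies. The function name and the list of search-engine hosts are my own illustrative assumptions, not the study’s actual code:

```python
from urllib.parse import urlparse

# Hypothetical search-engine hosts used for classification;
# the study's real list is not reproduced here.
SEARCH_ENGINES = {"www.google.com", "search.yahoo.com", "www.bing.com"}

def classify_request(referrer):
    """Bucket a request by its HTTP Referer header.

    Note: requests from RSS readers and many AJAX calls arrive with no
    Referer at all, so they fall into 'direct' even though the user never
    typed the URL -- exactly the discrepancy Meiss points out.
    """
    if not referrer:
        return "direct"
    host = urlparse(referrer).netloc.lower()
    if host in SEARCH_ENGINES:
        return "search"
    return "link"
```

Under a scheme like this, every referrer-less request inflates the “direct” bucket, whatever its real origin.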
Although users do not adopt the newest surfing technologies as fast as the geek elite would like them to, strategic visions centered on the power of the link should not be dismissed. AOL’s locked-in system failed years ago, and so will Facebook’s.
Journalists should not take this research as confirmation that they are the ultimate destination for news, as some of them would like to think. Success lies in information flow, not in puddles of still articles.
I’m glad you found the Indiana research as interesting as I did.
But I think you may have misinterpreted one aspect of my post.
I am definitely not arguing for “building destinations, not bridges.” In fact, I still believe that “build a network, not a destination” is the right strategy for online media.
I do think it’s interesting, though, that so much online usage seems to be driven by people visiting the same sites over and over, typing in URLs or clicking on bookmarks.
As you probably know, research (for instance, by Matthew Hindman of Arizona State) has shown that online media usage is even more concentrated than usage of traditional media — which is counter-intuitive considering that there is so much more choice online.
Research by network theorists has suggested this concentration of online traffic is due to the link-based network structure of the Web, and I still believe that’s a big part of the explanation. But so is the fact that people go back to the same sites over and over again — presumably the ones they find most interesting, relevant and useful. Web publishers need to figure out how to build this kind of repeat usage.
Keep up the good work …
As I read a comment or a blog entry, I figure that embedded links are references. If a claim is made and hyperlinked but sounds mildly plausible, I go on my merry way. If it sounds fishy, I’ll click on it. Only on a few blogs do I click all of the links.
With the advent of the feed reader, and considering most of my online time is spent reading blogs, I find that the only times I really click a referral link are on those rare occasions when I scan the headlines at Google News or, more commonly, when a blogger specifically links a Link Of Interest that actually interests me.
For example, I was referred here by Dr. Scott McLeod’s blog, via his “recent comments” widget. As someone with a journalism degree and a fetish for all things online, the Online Journalism Blog caught my interest. Worth noting: the vast majority of links don’t.
Anecdotally, I know that I’m much more likely to click on a link at the end of a posted comment than on one embedded in the text of a comment or in the commenter’s name.
It’s a great way to advertise one’s Web site, at the very least.
I’ve corrected my mistake, hope this new wording is more in line with what you had in mind!
Regarding web usage, some figures also indicate that more websites are being visited. If we take Facebook and MySpace out of the picture, I think we’ll see a tail getting much longer, but not fatter, than offline.
For commoditized products in huge demand, there’s no reason to have several suppliers (e.g. bbc.co.uk is by very, very far the UK’s #1 online news brand, even among people who prefer reading the Guardian or the Times offline). On the other hand, the web still excels at providing niche content.
From what I’ve researched and what I see, I think there’s just no room on the web for mid-sized players. It’d be interesting to see how news outlets that are not competing for the top spot will manage to combine into a network no less valuable than the sum of its parts.
Very interesting topic, thanks for bringing this up. I would like to play with the data. I assume that, if we filter out the 1,000 biggest websites, the results are different.
I assume that for ‘big’ things, people have default websites. As an example:
social network: myspace
These websites will be visited often, and their addresses will be entered by hand. The concept (news) and the target (bbc) are explicitly related in a cognitive model: thinking of news generates the address.
For ‘small’ things, the place to go is less clear, or not clear at all. There, people have to rely on search engines (for specific things they want to know) or on websites that link to a page (for things that suddenly seem interesting). These pages will be visited less often, and together they make up the long tail.
Cutting off the head of the distribution (bbc, myspace & last.fm, in this example), the situation should change dramatically with respect to the visitors’ origin (direct or via a referrer). Hence, I would love to see this data with more parameters to play with, such as ‘topic of the website’ and ‘size of the website’. Otherwise, small websites will incorrectly start believing that their most important source of traffic is people who enter the URL directly, which is only true for very big websites.
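That head-cutting experiment is easy to sketch. The toy request log and site names below are invented for illustration; the point is only that excluding a few big sites can swing the “direct traffic” share dramatically:

```python
# Toy request log: (site, came_direct) pairs -- invented for illustration.
requests = [
    ("bbc.co.uk", True), ("bbc.co.uk", True), ("myspace.com", True),
    ("last.fm", True), ("tinyblog.example", False),
    ("nichewiki.example", False), ("bbc.co.uk", False),
]

def direct_share(log, exclude=()):
    """Fraction of requests with no referrer, optionally excluding big sites."""
    kept = [direct for site, direct in log if site not in exclude]
    return sum(kept) / len(kept) if kept else 0.0

overall = direct_share(requests)  # head sites included
tail_only = direct_share(requests,
                         exclude={"bbc.co.uk", "myspace.com", "last.fm"})
```

In this toy data, most direct visits belong to the head sites, so the tail-only direct share collapses once they are excluded, which is the commenter’s hypothesis in miniature.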
What are the thoughts of others on this?
You’re totally right. I’m doing my dissertation on the fragmentation of online news brands, and the tail is definitely getting longer – loads of data back this up.
Now, a distinction should be made between content and service websites. There’s no reason why more than a few brands should coexist on the market for services.
There isn’t an indefinite number of ways to differentiate a social network. MySpace is for pre-teens, Facebook for students, Skyblogs for the French and Orkut for Brazilians.
Now, when it comes to content, there are as many ways you can be unique as there are users. No two people have the same view on things – that’s why I’m arguing that we’re heading for a world of one-person brands.
On the content-side, that is.
Max Headroom here we come
I’d agree with the conclusions about networks of referral and so on, but on search traffic I think the study is unconvincing at best. They say that search traffic is marginal (5%), yet their introduction includes the statement:
“In particular, ranking Web pages and sites is one of, if not the most critical task of any search engine.”
1 – How can optimising for 5% be “critical”?
2 – That sort of statement should be in a conclusion. In the introduction it becomes an assumption.
On the traffic figures, their analysis flies in the face of (for example) consistent work by Hitwise.
On a further note, the “representative” study sample:
>We report on our analysis of Web traffic from a large and representative sample of real users over an extended period of time.
It turns out to be based on internal traffic in a US university (Indiana):
>all traffic passing between the eight campuses of Indiana University and both Internet2 and the commodity Internet, representing the combined Internet traffic of about 100,000 users.
Needs a rigorous critique, but I don’t have time at present.
I think they are overstating their scope.