Tag Archives: research

Summary of "Magazines and their websites" – Columbia Journalism Review study by Victor Navasky and Evan Lerner

The first study (PDF) of magazines and their various approaches to websites, undertaken by Columbia Journalism Review, found that publishers are still trying to work out how best to utilise the online medium.

There is no general standard or guidelines for magazine websites and little discussion between industry leaders as to how they should most effectively be approached.

Drawing on responses to the multiple-choice questionnaire and to these open-ended questions –

  • What do you consider to be the mission of your website, and does this differ from the mission of your print magazine?
  • What do you consider to be the best feature or aspect of your website?
  • What feature of your website do you think most needs improvement or is not living up to its potential?

– the researchers called for a collective, informed and contemporary approach to magazine websites, supported by professional bodies.

The findings were separated into the following six categories: Continue reading

Newspaper bias: just another social network

Profit maximising slant

There’s a fascinating study on newspaper bias by University of Chicago professors Matthew Gentzkow and Jesse Shapiro which identifies the political bias of particular newspapers based on the frequency with which certain phrases appear.

The professors then correlate that measure of slant with the political leanings of each newspaper’s market, and find

“That the most important variable is the political orientation of people living within the paper’s market. For example, the higher the vote share received by Bush in 2004 in the newspaper’s market (horizontal axis below), the higher the Gentzkow-Shapiro measure of conservative slant (vertical axis).”

Interestingly, ownership is found to be statistically insignificant once those other factors are accounted for.
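To make that kind of measure concrete, here is a minimal sketch of a phrase-frequency slant score. The phrase lists, newspapers and numbers below are invented purely for illustration; the actual study derives its diagnostic phrases statistically from the Congressional Record and uses far more careful econometrics:

```python
import re
from statistics import mean

# Hand-picked phrase pairs for illustration only; Gentzkow and Shapiro
# derive their diagnostic phrases from the Congressional Record.
RIGHT_PHRASES = ["death tax", "partial-birth abortion", "illegal aliens"]
LEFT_PHRASES = ["estate tax", "late-term abortion", "undocumented workers"]

def slant_score(text: str) -> float:
    """Fraction of loaded phrases in `text` that lean right (0.5 = neutral)."""
    text = text.lower()
    right = sum(len(re.findall(re.escape(p), text)) for p in RIGHT_PHRASES)
    left = sum(len(re.findall(re.escape(p), text)) for p in LEFT_PHRASES)
    return right / (right + left) if right + left else 0.5

def pearson(xs, ys):
    """Pearson correlation between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented figures: each paper's market's Bush 2004 vote share against
# the average slant score of a sample of that paper's articles.
bush_vote_share = [0.62, 0.45, 0.51, 0.58]
avg_slant = [0.58, 0.41, 0.53, 0.55]
print(f"correlation: {pearson(bush_vote_share, avg_slant):.2f}")
```

The study’s finding is, in effect, that a regression of this kind shows market preferences predicting slant, while ownership adds no explanatory power.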

James Hamilton, blogging about the study, wonders:

“How slant gets implemented at the ground level by individual reporters. My guess is that most reporters know that they are introducing some slant in the way they’ve chosen to frame and report a story, but are unaware of the full extent to which they do so because they are underestimating the degree to which the other sources from which they get their information and beliefs have all been doing a similar filtering. The result is social networks that don’t recognize that they have developed a groupthink that is not centered on the truth.” [my emphasis]

In other words, the ‘echo chamber’ argument (academics would call it a discourse) that we’ve heard made so many times about the internet.

It’s nice to be reminded that social networks are not an invention of the web, but rather the other way around.

h/t Azeem Azhar

Research: news execs still think they have a monopoly

Statistics from the American Press Institute paint a stark picture of the disconnect between news executives and readers, covering

  • how highly execs and readers each value the content;
  • how easy the two camps think it is to find alternative sources of news; and
  • where readers would go if the website were turned off.

That last question shows the biggest disconnect. As reproduced below, an incredible 75% of news execs think switching off their websites would drive people to their newspapers. Readers, however, say they would go to another local website, with other prominent alternatives including regional and national websites, TV and radio (note that news execs also feel that ‘local media sites’ would benefit, but users disagree): Continue reading

Online video viewing has no ‘peak times’, says research

“Unlike television consumption, which mostly happens during hours of 8 pm to 11 pm, people across all demographics are watching online videos consistently throughout the day and night, with the exception of dinnertime… this fundamental shift in consumer behavior opens up opportunities… [to] leverage online video to reach target audiences more often than just once a week.”

Full post with statistics here.

Did Michael Jackson’s kids make the Daily Mail the most visited UK newspaper site in June?

The Daily Mail surprisingly overtook the Telegraph and Guardian in the June ABCes – with more unique visitors than any other UK newspaper (this is a cross-post of my original June ABCe analysis on my blog).

However, it was only fourth in terms of UK visitors. Figures from Compete.com, which tracks Americans’ internet use, show that, of the 4.7 million unique users the Mail added from May to June, 1.2 million were from the USA. American and other foreign visitors searching for Michael Jackson’s kids – the Mail tops Google.com results for that search – drove this overseas growth.
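That 1.2 million works out at roughly a quarter of the Mail’s total month-on-month gain (1.2 / 4.7 ≈ 26%), before counting visitors from other countries.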

US traffic to UK newspaper sites

Of the big three UK newspaper sites, this is what happened to their US traffic from May to June:

[Chart: compete-mail-traffic – Compete.com US traffic figures]

This dramatic increase, far larger than its rivals’, helps explain how the Mail leapfrogged the Guardian and Telegraph.

Google.com was the main referrer to the Mail – responsible for 22.7% of its traffic. More on this below. Next up was drudgereport.com (a large US news aggregation site), followed by Yahoo.com and Facebook.com.

What was behind this rise in US traffic?

So what led to this sudden increase for the Mail? Compete also shows you the main search terms that lead US visitors to sites. Continue reading

Even “heavy newspaper readers” spend a quarter of their media time online

Some research from The Media Audit makes a pretty strong point about how quickly media consumers’ behaviour is changing:

“The Internet now represents 32.5% of the typical “media day” for all U.S. adults when compared to daily exposure to newspaper, radio, TV and outdoor advertising.

“Even those who are considered heavy newspaper readers spend about as much time online today as the typical U.S. adult. According to the report, heavy newspaper readers, those who spend more than an hour per day reading, currently spend 3.7 hours per day online. In 2006 the Internet represented only 18.4% of a heavy newspaper reader’s “media day,” but today it represents 28.4%.”
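A quick back-of-envelope check on those figures: if 3.7 hours online represents 28.4% of a heavy newspaper reader’s ‘media day’, the full media day works out at roughly 3.7 / 0.284 ≈ 13 hours of daily media exposure.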

But there’s good news for some US newspapers that have made the most of their online presence to achieve an impressive reach “of 80% or more when the past 30-day website visitor figure is combined with the past month print readership figure.”

It will be interesting to see how paywall experiments might result in quite different reach stats for other newspapers in the coming months.

More at MediaPost.

Do blogs make reporting restrictions pointless?

The leaked DNA test on 13-year-old alleged dad Alfie Patten has revealed a big problem with court-ordered reporting restrictions in the internet age. (NB This is a cut-down version of a much longer original post on blogging and reporting restrictions that was featured on the Guardian.)

Court orders forbidding publication of certain facts apply only to people or companies who have been sent them. But this means there is nothing to stop bloggers publishing material that mainstream news organisations would risk fines and prison for publishing.

Even if a blogger knows that there is an order, and so could be considered bound by it, an absurd catch-22 means they can’t find out the details of the order – and so they risk contempt of court and prison.

Despite the obvious problem the Ministry of Justice have told me they have no plans to address the issue. Continue reading

The services of the ‘semantic web’

Many of the services being developed as part of the ‘semantic web’ are necessarily works in progress, but each contributes to the growth of this burgeoning area of technology. There are plenty more popping up all the time, but for the purposes of this post I have loosely grouped some prominent sites into specialities – social networking, search and browsing – before briefly explaining their uses.

Continue reading

The next step to the ‘semantic web’

There are billions of pages of unsorted and unclassified information online, amounting to millions of terabytes of data with almost no organisation.  It is not that some of this information is inherently valuable while the rest is worthless; that judgement depends on who wants it.  At the moment, the most common way to access any of it is through the hegemonic search engines, which act as an entry point.

Yet, despite Google’s dominance of the market and culture, the methodology of search still isn’t satisfactory.  Leading technologists see the next stage of development coming, in which computers become capable of effectively analysing and understanding data rather than just presenting it to us.  Search engine optimisation will eventually be replaced by the ‘semantic web’.
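To make ‘data computers can understand’ concrete, here is a minimal sketch using Python’s rdflib library, which stores facts as machine-readable subject–predicate–object triples that can be queried precisely, rather than matched against keywords. The article, author and vocabulary below are invented for the demo:

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF, RDF

g = Graph()
EX = Namespace("http://example.org/")  # hypothetical vocabulary for the demo

# Facts as triples: statements a machine can reason over, not just display.
article = URIRef("http://example.org/articles/semantic-web")
g.add((article, RDF.type, EX.Article))
g.add((article, EX.topic, Literal("semantic web")))
g.add((article, EX.author, EX.alice))
g.add((EX.alice, RDF.type, FOAF.Person))
g.add((EX.alice, FOAF.name, Literal("Alice")))

# Because the data is structured, we can ask a precise question --
# "who wrote articles about the semantic web?" -- via SPARQL.
results = g.query("""
    SELECT ?name WHERE {
        ?a <http://example.org/topic> "semantic web" ;
           <http://example.org/author> ?person .
        ?person <http://xmlns.com/foaf/0.1/name> ?name .
    }
""")
for row in results:
    print(row.name)  # -> Alice
```

This is the shift the post describes: a keyword engine sees only strings, while a semantic store can answer questions about the relationships between things.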

Continue reading