With so much content online, what criteria are users employing to judge article relevance from an abundance of results?

My post on user behaviour on e-journal sites noted the problem faced by e-journal users in filtering the vast amount of web content available to them (particularly given that articles are easily the most important medium sought by researchers, ahead of books, conference proceedings or datasets). As sites move from a journal-based model to an article-based model (where articles are published as and when they are ready, rather than waiting for a new print volume to be issued), this sort of issue is only likely to intensify. Given that, it seems worth looking at the criteria users employ when judging article relevance.

The study by Carol Tenopir for the Center for Information and Communication Studies concludes that journal reputation or brand may well be a distinguishing characteristic, noting that a paper published in a journal with a high impact factor receives approximately twice as many citations as the same paper published in a low-impact journal. The key findings:

>  Topic of the article was ranked by all demographic groups as the most important characteristic that helps in choosing an article to read.

>  After topic, the next most important characteristics selected were online accessibility and source of article.

>  Author(s), type of publisher, and author(s)’ institution were consistently ranked last.

>  Articles from known top authors or unknown authors are more likely to be read than those from known but weaker authors. Articles from top-tier or lower-tier peer-reviewed journals are more likely to be read than those from non-peer-reviewed journals, and readers are less likely to read an article from a non-peer-reviewed journal than from a non-journal source.

>  The highest rated conjoint profile was "Written by an author I recognize as a top scholar, in a top-tier peer-reviewed journal, and available online at no [personal] cost"; while the lowest profile was "Written by an author I recognize as a weaker scholar, in a journal that is not peer-reviewed, and available online at some cost".

The usual caveat applies to this sort of survey-based research: it captures what users say rather than what they do, and many respondents would likely behave quite differently when faced with an actual Google Scholar results page than their self-reported behaviour suggests. However, it does give a good starting point for the kind of quality cues that need to be prioritised on article pages.

One small point leaps out at me: when users were asked for feedback on other issues they had with journal sites, page readability came back as a top item, above considerations such as 'impact factor'. While academic articles obviously tend to be much longer than content published on other sites, I wouldn't expect readability to be raised anywhere near as prominently for content-rich sites in other fields (e.g. newspapers), which suggests it deserves more attention than it currently gets.