We have just added a free Primo code extension to the CodeShare area in EL Commons, the Ex Libris collaborative platform. It enables libraries using Primo to embed altmetrics from altmetric.com into the search results. I am very excited about this new feature. Read more about my take on altmetrics below the picture.
The altmetric.com score on Primo:
Let’s start with a brief description for those who are not so familiar with altmetrics. Altmetrics stands for alternative metrics. Generally speaking, these metrics are counts of usage events for scholarly articles, gathered from various sites. For a specific article, such event counts may include the number of times it was cited in Scopus, CrossRef, and Wikipedia; the number of bookmarks on Mendeley and CiteULike; the number of PDF downloads or HTML views on a publisher’s website; and the number of mentions on Twitter. A splendid example of such metrics can be found on PLoS, where every article shows a Metrics tab. Further examples and details can be found in The Altmetrics Collection by Jason Priem, Paul Groth, and Dario Taraborelli (2012) and on the ImpactStory website.
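To make the idea of per-article event counts concrete, here is a minimal Python sketch that reads a JSON response shaped like the one returned by altmetric.com’s free API (GET api.altmetric.com/v1/doi/&lt;doi&gt;) and pulls out a few counts. The sample values and exact field names below are illustrative assumptions, not the actual code of the Primo extension.

```python
import json

# Illustrative sample, shaped like a response from Altmetric's free
# DOI endpoint; the values and field names here are assumptions for
# demonstration only, not real data for this DOI.
sample_response = json.dumps({
    "doi": "10.1371/journal.pone.0029797",
    "score": 42.5,
    "cited_by_tweeters_count": 30,
    "cited_by_fbwalls_count": 4,
    "readers": {"mendeley": 125, "citeulike": 7},
})

def summarize_altmetrics(raw: str) -> dict:
    """Extract the headline score and a few per-source event counts."""
    data = json.loads(raw)
    return {
        "doi": data.get("doi"),
        "score": data.get("score", 0),
        "tweets": data.get("cited_by_tweeters_count", 0),
        "mendeley_readers": data.get("readers", {}).get("mendeley", 0),
    }

summary = summarize_altmetrics(sample_response)
print(summary["score"])  # 42.5
```

A display layer, such as a Primo results page, would then render these counts next to the record, typically keyed on the article’s DOI.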
To better understand the significance of altmetrics, it’s important to look at the wider context. Scholarly communication is undergoing dramatic changes in our web-driven world. Open access policies are increasingly being endorsed by institutions and funding agencies, and there is a growing demand for indicators showing the impact of research output. Social media play an ever more important role in how research is disseminated. Through social networks, users post comments, reviews, and conversations, and provide feedback immediately after, or even before, publication. Stacy Konkiel and Bob Noel state in their excellent presentation Altmetrics and Librarians: How Changes in Scholarly Communication will affect our Profession (2012) that the “feedback loop is shortened, accelerating research”. Other interesting materials evaluating scholars’ use of social media include Carolyn Hank’s The Scholar Blogs of Today, Tomorrow: Practices and Perceptions of Value, Impact and Stewardship (2012) and John Conway’s Blogs, Twitter, Wikis and other online tools (2011).
With the rapidly multiplying number of available scholarly publications, many of which are open access, readers need ways to sift through this material. Traditionally, the Impact Factor played a key role in researchers’ decisions about what to read and cite. But as Cameron Neylon and Shirley Wu state in their article Article-Level Metrics and the Evolution of Scientific Impact, “… the impact factor … is simply not designed to capture qualities of individual papers.” With the rise of the web, measures based on usage and social media have become available that capture impact in a different and more immediate way. Such measures are derived from the usage of a far bigger community than the group the Impact Factor is based on (which relies only on scholars who publish and thus cite), and they are applicable at the article level.
The impact of research and scholarly publications is difficult to define, since it can take different shapes and forms. The UK Research Excellence Framework (REF) states in its 2011 report Decisions on assessing research impact that “The impact element will include all kinds of social, economic and cultural benefits and impacts beyond academia”. In short, measuring, qualifying, and quantifying impact, and using it for recognition (tenure, grants, and promotions) and for the selection of material, is very important but equally a difficult and multifaceted task.
Since articles are used and reviewed in many different places, altmetrics are also created at many different sites. A key benefit of looking at a variety of altmetrics is that together they can provide a picture of usage that goes beyond any individual measure. They also start appearing immediately upon, or even before, formal publication, and the values they provide are usually quite transparent. Altmetrics cannot replace qualitative peer review, but they often point to it, for example in the form of comments on social media or reviews on F1000. Altmetrics enhance rather than replace other evaluation methods, and they provide a rich picture suited to today’s rich and diverse environment. Jonathan Eisen, professor at UC Davis, gives an interesting example of the use of altmetrics in his blog post Playing with Impact Story to look at Alt Metrics for my papers, data, etc.
Altmetrics do have their weaknesses. First, usage data is subject to gaming, that is, manipulation by an individual or a machine. Sites can take preventive measures, but gaming cannot be ruled out entirely; however, since usage comes from different places, it is very unlikely that all the numbers would be manipulated in the same way. Second, usage is distributed, and not all places provide altmetrics. Hence, a lack of altmetrics, or low numbers, does not necessarily mean that an article is not used. Finally, usage is subject to interpretation: the intention behind a bookmark, tweet, HTML view, or PDF download cannot be measured at this time; these events do, however, indicate explicit interest in an item. Even with these caveats, there is no question in my mind that altmetrics provide a significant enrichment of the user experience. In the future, we will add our own metrics from the bX Usage-Based Services to further enhance the usage picture for an article, drawing on the usage data that we gather from hundreds of institutions around the world.