Research Metrics: Altmetrics

Mar 19, 2019

Reflecting impact, or just empty buzz?

In the ‘altmetrics manifesto’, Priem et al. pose the following challenge: “Researchers must ask if altmetrics really reflect impact, or just empty buzz.” This moment of self-reflection in the founding document of the metric is an interesting departure from the style of Hirsch and Garfield, the midwives of the h-index and the journal impact factor (JIF) respectively. While those earlier authors certainly acknowledge the limitations of their metrics, they never question their validity in such a way.

This difference in attitude perhaps reflects the radical shift in focus that altmetrics represent. These metrics capture not just activity within the academy, but also the reaction of the community beyond it. That kind of interaction is made possible by a world in which research is increasingly available to the general public through open access arrangements.

As one might imagine, a class of metrics that spans both the academy and the general community contains an astounding number of data types. An exhaustive list of these is well beyond the scope of this post. To complicate matters, there are many ways to classify them. For the purposes of this post, however, altmetrics will be split into three broad classes:

  1. Direct interactions with the manuscript
  2. Non-traditional academic citations
  3. Discussion of the manuscript

Long before the altmetrics manifesto was penned, it had been suggested that metrics based on direct interaction with the manuscript, such as article downloads and views, might give a faster reflection of an article’s impact than citation count. It has indeed been shown that, after a lag period of around three years, article downloads correlate very well with citation count. This suggests that download count provides a faster read on an article’s scholarly impact simply by measuring the same underlying variable as citation count, only earlier.
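To make the idea of ‘measuring the same underlying variable’ concrete, a correlation check of this kind might look something like the sketch below. The download and citation figures are invented purely for illustration; a real analysis would draw on publisher or repository logs and a citation database.

```python
# Minimal sketch: do early article downloads track later citation counts?
# The numbers below are invented for illustration only.
from scipy.stats import spearmanr

articles = {
    # article id: (downloads in first year, citations after ~3 years)
    "A": (1200, 45),
    "B": (300, 8),
    "C": (5400, 130),
    "D": (150, 2),
    "E": (900, 30),
}

downloads = [d for d, _ in articles.values()]
citations = [c for _, c in articles.values()]

rho, p_value = spearmanr(downloads, citations)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```

A rank correlation is used here because both counts are heavily skewed; a high rho on a large sample is the kind of result that underpins the claim above.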

Priem et al. imagined altmetrics working as a ‘filter to select the most relevant and significant sources from the rest’. Given the above results, it would seem that download count might be a metric on which a useful filter could be built. But perhaps there is a better way. It has been shown that, in some cases, citation count poorly reflects academic ratings of ‘significance’. If it can be assumed that academics want such a manuscript filter to catch the most ‘significant’ papers, then raw view count might not be the most useful metric.

In this context, ‘non-traditional academic citations’ might be more appropriate. This category includes metrics such as citations in teaching materials or academic recommendations on sites such as f1000. Citations of this type might also be expected to represent a deeper level of engagement with a manuscript than a simple citation, and thus to track ‘significance’ more closely. However, since these metrics are still in their infancy, the research surrounding them is sparse.

Whether the final category of altmetric, ‘discussion of the manuscript’, can currently be used to help create useful filters for readers is much more problematic. These metrics involve tracking article mentions on various media platforms, and the conclusions that can be drawn from them are contested. In 2013, the proportion of published manuscripts mentioned on social media at all was found to be only between 15% and 24%, which makes drawing conclusions about any correlation between social media mentions and ‘impact’ quite challenging.

It has been shown that the number of tweets related to an article correlates very poorly with peer assessments of it. It would seem, then, that the metric ‘number of tweets’ measures something quite different from ‘citation count’. It has been argued that this metric might correlate with the attention an article garners in the broader community; intuitively, however, a more fine-grained metric than raw mention counts would probably be needed to assess this. Twitter is by no means the only media platform measured by altmetrics providers, but it is perhaps the best researched, and as such might serve as a proxy for other platforms such as Facebook, blogs or traditional media.

Providers such as Altmetric.com, Impactstory and Plum Metrics attempt to quantify altmetrics scores. The first two focus on ‘discussion of the manuscript’, while Plum Metrics covers all three categories. Altmetric.com and Impactstory track, collate, and weight references to a work on news sites, Facebook, Twitter and blogs, among other forums. The front pages of these services offer variously to ‘tell the story of your research’, to find out ‘who is talking about your research’, or to ‘discover the online impact of your research’. These different approaches are worthy of a post of their own. Tellingly, however, none of these providers yet offers a useful filter for readers in the vein of the altmetrics manifesto.
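As a rough illustration of what ‘collate and weight’ can mean in practice, a provider-style attention score might be computed along the following lines. The source weights here are invented for the sake of the example and are not the weights used by Altmetric.com, Impactstory or Plum Metrics, which apply their own (and periodically revised) schemes.

```python
# Illustrative sketch of a weighted attention score built from mention counts.
# The weights are invented for illustration; real providers use their own.

SOURCE_WEIGHTS = {
    "news": 8.0,       # mainstream news outlets count for more
    "blog": 5.0,       # research and science blogs
    "twitter": 1.0,    # individual tweets
    "facebook": 0.25,  # public Facebook posts
}

def attention_score(mentions):
    """Collate mention counts per source and apply a weight to each."""
    return sum(SOURCE_WEIGHTS.get(source, 0.0) * count
               for source, count in mentions.items())

# Example: 2 news stories, 1 blog post and 40 tweets about a paper.
print(attention_score({"news": 2, "blog": 1, "twitter": 40}))  # 61.0
```

The key design question such a score raises is exactly the one discussed above: whatever the weights, the result summarises attention, not necessarily ‘significance’.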

Altmetrics are an extraordinarily broad category of metric, so disparate that the term itself resists useful definition. The term was born in a moment when the web suddenly allowed the interactions between manuscripts and the wider community to be tracked in ways that had never been possible before. At least three new families of metrics emerged, each measuring different variables. As these metrics expand and develop in the coming years, it will be fascinating to see how they are interpreted and used.


See our two previous articles in the “Research Metrics” series: Journal Impact Factors and the h-index.
