Abstract

If scholarly communication is broken, how will we fix it? At Impactstory—a non-profit devoted to helping scholars gather and share evidence of their research impact by tracking online usage of scholarship via blogs, Wikipedia, Mendeley, and more—we believe that incentivizing web-native research via altmetrics is the place to start. In this article, we describe the current state of the art in altmetrics and its effects on publishing, share Impactstory’s plan to build an open infrastructure for altmetrics, and describe our company’s ethos and actions.


 
“Scholarly communication is broken.” We’ve heard this refrain for close to twenty years now, but what does it mean?

Academic publishing is still mostly a slow, arduous, and closed process. Researchers have little incentive to experiment with new forms of scholarly communication or make their research freely available at the speed of science, since they’re mainly recognized for publishing journal articles and books: a narrow, very traditional form of scholarly impact.

Most arguments attribute academic publishing’s problems to a system that benefits corporate interests or to perverse incentives for tenure and promotion. The solution? Open up research and update our incentive systems accordingly.

For too long now, academic publishing has relied on a closed infrastructure that was architected to serve commercial interests. Researchers who attempt to practice open science can find it difficult to get recognition for the impact of open access (OA) publications and research products beyond the journal article, products that include scientific software, data, and so on.

Some have already imagined a better future for scholarly communication, one where OA is the norm and a new, open infrastructure serves the diverse needs of scholars throughout the research lifecycle. The decoupled journal is slowly becoming a reality,[1] OA publications continue to gain market share,[2] and measuring the impact of a diverse set of scholarly outputs through altmetrics is becoming an increasingly common practice among scholars.[3]

We founded Impactstory with this future in mind. Impactstory is a non-profit, open source web application that helps researchers gather, understand, and share with others the impact of all their scholarly outputs. We believe that Impactstory and other services that support scholarly communication are essential to the future of academia.

In this article, we’ll describe the current state of the art in altmetrics and its effects on publishing, share our plan to build an open infrastructure for altmetrics, and describe our company’s ethos and actions.

The current publishing ecosystem—and why it needs to be changed

Altmetrics—sometimes called “alternative metrics” and defined by Priem, Piwowar, & Hemminger as social media-based metrics for scholarly works[4]—are having a major effect on traditional scholarly publishing, but not for all of the reasons you might expect.

Traditional academic publishers are masters of vertical integration. Once a manuscript is submitted to a traditional journal for publication, that journal coordinates peer-review, copy-edits, publishes, markets, manages copyright for, and provides scores of other services[5] for the published article.

In general, this system has done its job of publishing pay-to-read journals relatively well to date. But it has also produced a publishing ecosystem that can be harmful to scholars and the public[6]: toll-access journals with exorbitant subscription fees (as for-profit publishers seek to expand their already-wide profit margins[7]), and journal impact factors used as a proxy for the quality of individual articles when evaluating scholars’ work (not the fault of the publishers alone, to be sure, but they nonetheless contribute to the problem by promoting and sustaining JIF hype).

What if we imagined a web-native publishing ecosystem that functioned in an open, networked manner, similar to how much research itself is conducted nowadays? What if we decoupled the services that many journals provide from the journal itself, and had scores of businesses that could provide many of the essential services that authors need, like peer-review, copy editing, marketing—with less overhead and greater transparency?

Such a system has the opportunity to foster a scholarly communication environment that benefits scholars and the public, freeing the literature via Open Access publishing, improving the literature through open and post-publication peer review, and understanding the literature’s impact through article-level metrics and altmetrics.

Luckily, that new system is in the process of being built. Every day, game-changing publishing services debut: Publons and Rubriq (stand-alone peer-review services[8]), Annotum and PressForward (publishing platforms), Dryad and Figshare (data-sharing platforms), and Kudos (an article marketing service). And altmetrics services like Impactstory, Altmetric, PlumX, and PLOS ALMs are starting to be widely adopted by publishers and scholars alike.

The rise of altmetrics

Altmetrics are a solution to a problem that increasingly plagues scholars: even in situations where scholarship may be best served by publishing a dataset, blog post, or other web-native scholarly product, one’s own career is often better served by putting that effort into traditional article-writing instead. If we want to move to a more efficient, web-native science, we must make that dilemma disappear: what is good for scholarship must become good for the scholar. Instead of assessing only paper-native articles, books, and proceedings, we must build a new system where all types of scholarly products are evaluated and rewarded.

The key to this new reward system is altmetrics: a broad suite of online impact indicators that goes beyond traditional citations to measure impacts of diverse products, in diverse platforms, on diverse groups of people.[9] Altmetrics leverage the increasing centrality of the Web in scholarly communication, mining evidence of impact across a range of online tools and environments such as blogs, Wikipedia, Mendeley, and Twitter.

These and other altmetrics promise to bridge the gap between the potential of web-native scholarship and the limitations of the paper-native scholarly reward system. A growing body of research supports the validity and potential usefulness of altmetrics.[10][11][12][13] Eventually, these new metrics may power not only research evaluation, but also web-native filtering and recommendation tools.[14][15][16]

However, this vision of efficient, altmetrics-powered, and web-native scholarship will not occur accidentally. It requires advocacy to promote the value of altmetrics and web-native scholarship, online tools to demonstrate the immediate value of altmetrics as an assessment approach today, and an open data infrastructure to support developers as they create a new, web-native scholarly ecosystem. This is where Impactstory comes in.

Impactstory

A vibrant, altmetrics-powered world of web-native scholarship requires early guidance and public infrastructure. The market is not providing this. Impactstory aims to fill that gap.

Impactstory is a mission-driven non-profit. We incorporated as such in 2012 because we recognized the need to keep altmetrics open. Freed from the need to turn a profit for stockholders, we can focus on building a product that meets users’ needs rather than one that aims primarily at profitability. To date we’ve been funded by the Open Knowledge Foundation, JISC, the Alfred P. Sloan Foundation, and the National Science Foundation.

As a non-profit, we’re governed by a Board of Directors. We’ve been fortunate to have some of the best and brightest minds in Open Science on our Board: Cameron Neylon (PLOS), Heather Joseph (SPARC), John Wilbanks (Sage Bionetworks), and Ethan White (University of Utah).

We’re far from the only altmetrics provider. The Public Library of Science’s Article Level Metrics (ALMs)[17] webapp was the first altmetrics service to gain traction in 2009, backed by the non-profit’s push to reduce academia’s dependence on the journal impact factor.[18] Two years later, the first commercial altmetrics providers, Altmetric.com[19] and Plum Analytics[20] were founded, and Impactstory also began operating under the name “Total-Impact.” Altmetrics and bibliometrics researchers have also created a number of apps over the years, including ScienceCard,[21] ReaderMeter,[22] and PaperCritic,[23] many of which have since been deprecated.

All services to date have provided unique insights into research impact. Some highlights include:

  • Impactstory: citations, downloads and page views, and altmetrics for a broad array of web-native research products, in a profile format designed to meet the needs of individual researchers.
  • PLOS ALMs: citations, downloads and page views, and altmetrics for all PLOS articles, relative to the performance of other articles in their corpus.
  • Altmetric.com: mentions of articles in mainstream media and policy documents, along with a number of other metrics for publications, displayed via a platform designed to provide business intelligence primarily for institutions and publishers.
  • PlumX (powered by Plum Analytics): WorldCat holdings for books and institutional repository downloads and page views for all scholarly outputs, alongside other metrics for a variety of research outputs. Designed to give insights primarily to funders and institutions.

What sets Impactstory and Plum Analytics apart from most other providers is that our aim is to provide altmetrics for web-native research products, beyond journal articles and preprints.

Why we’re building the altmetrics commons

Existing for-profit providers have approached altmetrics data as a commodity to be sold. This stance supports a relatively straightforward business model, and so is understandably attractive to investor-backed startups. It leads, however, to a negative externality: a fragmented landscape of tightly guarded silos containing mutually incompatible data (an outcome we have already seen in the citation database market). It is an approach on the wrong side of history.

The real value of altmetrics data, like other Big Data streams, is not in the numbers themselves; the value is in what the community can build on top of the data: a new generation of web-native assessment, filtering, and evaluation tools.[24] In this way, open altmetrics data is quite like the network of open protocols and providers behind the World Wide Web: it is essential infrastructure for a revolutionized communication ecosystem. Impactstory is designed to build and sustain this infrastructure—to create an altmetrics data commons. Our work takes a two-pronged approach, focusing on advocacy and an open source altmetrics webapp, with an eye towards future improvements, including building an open altmetrics data platform.

Advocacy

Our advocacy efforts to date have had two aims: to help scholars and librarians understand web-native scholarship and its importance, and to help them understand the benefits of altmetrics.

With these goals in mind, we have given talks at universities and conferences, led workshops, published articles in high-profile journals like Nature, and pursued research describing and validating altmetrics. More importantly, though, our advocacy helps shape the altmetrics movement, keeping openness and interoperability central to emerging conversations.

Webapp

The impactstory.org webapp provides users with an online, metrics-driven CV that has several key features.

Diverse metrics for diverse products, with context

Few researchers create only papers in the course of research. They collect data, maybe write a script to parse and analyze it, present their findings at a conference using a slide deck, put a preprint up on arXiv to get feedback on an initial draft of the paper describing their findings, and then (finally) publish that paper in a peer-reviewed journal. These outputs all have impacts that leave traces on the web—other researchers will reuse the script, favorite the slide deck for later review, comment on the preprint, and possibly cite the published paper. Yet impact is usually measured only in citations of the published paper.

We think that by measuring only citations, academics are missing the fuller, richer picture. That’s why Impactstory is designed to capture impacts for all research outputs, at all stages of research.

Our webapp finds and displays a variety of metrics across a number of services where web-native research outputs live: Dryad, Figshare, Mendeley, CrossRef, Scopus, GitHub, Slideshare, Vimeo, and more. We report on recommendations, citations, discussions, saves, and views for data, software, papers (both published and unpublished), slide decks, and videos.

The Impactstory webapp also provides important context for raw metrics. We do this by sorting metrics by engagement type (recommendations, citations, discussions, saves, or views) and audience (public or scholars). We also use percentiles: metrics on each product are compared to those of all products of the same age and type across the profiles of all Impactstory users.
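As a rough illustration of how that percentile context can be computed (a minimal sketch, not Impactstory’s production code; the product fields and sample numbers are invented for the example):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Product:
    title: str
    year: int          # year the product was made public
    product_type: str  # e.g. "dataset", "slides", "article"
    views: int         # one raw metric; others are handled the same way

def view_percentile(product: Product, all_products: List[Product]) -> float:
    """Percentile of this product's views among products of the same age and type."""
    peers = [p for p in all_products
             if p.year == product.year and p.product_type == product.product_type]
    if not peers:
        return 0.0
    below = sum(1 for p in peers if p.views < product.views)
    return 100.0 * below / len(peers)

# A 2013 dataset is compared only against other 2013 datasets.
mine = Product("my dataset", 2013, "dataset", views=420)
others = [mine,
          Product("peer a", 2013, "dataset", views=15),
          Product("peer b", 2013, "dataset", views=900),
          Product("older",  2012, "dataset", views=5000)]  # excluded: different year
print(round(view_percentile(mine, others)))  # -> 33
```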

Automatic updates

Users can connect their Impactstory profile to third-party services like ORCID, Figshare, and Slideshare so that any time a new product is added to any of those services, it will be automatically imported to their Impactstory profile. This important feature takes the pain out of updating your CV—less time spent hunting down and formatting the citations of all the scholarly products you created over the past year.

Ability to download and reuse data

Users can download and reuse the data we provide in Impactstory profiles, to the extent allowed by the data providers’ terms of service. Users can download in CSV and JSON formats, and they can also embed their Impactstory profile into other websites just by copying and pasting a few lines of code.
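For example, a downloaded JSON export can be summarized and re-saved as CSV with a few lines of Python (a sketch only; the file name and field names below are assumptions, not the documented export schema):

```python
import csv
import json

# Sum up interactions across all products in a downloaded profile export.
# "products", "metrics", "interaction", and "count" are assumed field names.
with open("impactstory_profile.json") as f:
    profile = json.load(f)

totals = {}
for product in profile.get("products", []):
    for metric in product.get("metrics", []):
        key = metric["interaction"]  # e.g. "views", "saves", "citations"
        totals[key] = totals.get(key, 0) + metric["count"]

# Re-export the summary as CSV for reuse elsewhere.
with open("profile_summary.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["interaction", "total"])
    writer.writerows(sorted(totals.items()))
```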

Notifications

Impactstory users can also receive notification emails that alert them when their research products have seen new activity within the past week. These updates include a number of “cards” that highlight something notable about the new metrics for that week: whether a product is in a top percentile relative to other papers published that year, for instance, or whether a PLOS paper has topped 1,000 views or gained new Mendeley readers. Users get a card for each type of new metric one of their products receives.
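The logic behind these cards can be pictured as a simple diff of last week’s counts against this week’s (a sketch with assumed thresholds and wording, not the actual notification code):

```python
# Compare last week's counts with this week's and emit one highlight per
# metric that saw new activity. Thresholds and wording are illustrative only.
def weekly_cards(last_week: dict, this_week: dict) -> list:
    cards = []
    for metric, count in this_week.items():
        previous = last_week.get(metric, 0)
        if count <= previous:
            continue  # no new activity for this metric
        if previous < 1000 <= count:
            cards.append(f"Your product passed 1,000 {metric}!")
        else:
            cards.append(f"+{count - previous} new {metric} this week")
    return cards

print(weekly_cards({"views": 980, "mendeley readers": 12},
                   {"views": 1012, "mendeley readers": 15}))
# ['Your product passed 1,000 views!', '+3 new mendeley readers this week']
```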

Over time, we are building a webapp that will offer scholars a powerful replacement for their online CVs. In providing this “altmetrics CV,” we hope to support broad, grassroots adoption of altmetrics among working researchers. Hearing about open altmetrics is one thing; seeing one’s own altmetrics, with the ability to freely download them, is far more powerful.

Future work

More profile-level summarization

How many citations did all of your papers receive last year? What’s the total number of GitHub forks you’ve gotten in your career? How often were your datasets recommended in the previous year?

We’re aiming to provide compelling author-level summaries for our users via profile-level statistics. We also intend to make these metrics useful to profile viewers, without venturing into the dangerous “one metric to rule them all” territory that has plagued academia for years in the form of journal impact factors on CVs.
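A minimal sketch of such roll-ups, assuming each stored metric event carries a product type, metric name, year, and count (an illustrative schema, not Impactstory’s data model):

```python
# Toy event records standing in for per-product metrics gathered over time.
events = [
    {"product_type": "article",  "metric": "citations",       "year": 2013, "count": 4},
    {"product_type": "software", "metric": "forks",           "year": 2012, "count": 7},
    {"product_type": "dataset",  "metric": "recommendations", "year": 2013, "count": 2},
]

def total(events, metric, year=None, product_type=None):
    """Sum counts for one metric, optionally filtered by year and product type."""
    return sum(e["count"] for e in events
               if e["metric"] == metric
               and (year is None or e["year"] == year)
               and (product_type is None or e["product_type"] == product_type))

print(total(events, "citations", year=2013))                              # citations last year
print(total(events, "forks", product_type="software"))                    # career GitHub forks
print(total(events, "recommendations", product_type="dataset", year=2013))  # dataset recommendations
```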

More complex modelling

Researchers are starting to understand the various “flavors” of impact (what citations mean for a paper’s impact versus what a Mendeley bookmark means; how “forks” on the collaborative coding website GitHub influence software’s impact; and so on). As we explained on the Impactstory blog earlier this year,[25] researchers and altmetrics providers will soon begin to provide:

more network-awareness (who tweeted or cited your paper? how authoritative are they?), more context mining (is your work cited from methods or discussion sections?), more visualization (show me a picture of all my impacts this month), more digestion (are there three or four dimensions that can represent my “scientific personality?”), more composite indices (maybe high Mendeley plus low Facebook is likely to be cited later, but high on both not so much).

Recently, altmetrics researchers have also recognized a need for qualitative research that gets at the motivations behind particular events associated with the use of scholarship (why did this researcher cite this paper? what prompts lay people to post about a study on Facebook?). A notable new company working in this space is SocialCite, which allows readers to indicate whether a citation is appropriate and high-quality, as well as why an article was cited (for evidence, assertions, methods, and so on).

Uncovering the impacts of software

The traditional “coin of the realm” that measures the impact of articles has been citations. What is the coin of the realm for software? Is there more than one coin?

We recently received an NSF EAGER grant to study how automatically-gathered impact metrics can improve the reuse of research software. Over the course of the two-year grant, we’ll improve Impactstory’s ability to track and display the impact of research software. Our webapp will soon include tools to uncover where and how software is downloaded, installed, extended, and used, and present this information in an easy-to-understand dashboard that researchers can share.

We’ll also use quantitative and qualitative approaches to see if this impact data helps promote actual software reuse among researchers.

The long-term goal of the project is big: we want to transform the way the research community values software products. This is in turn just one part of the larger transformation of scholarly communication, from a paper-native system to a web-native one.

An open altmetrics data platform

Finally, an improved Impactstory API will form the hub of an open data infrastructure connecting dozens of diverse data providers (like Mendeley, Twitter, or Dryad) with a constellation of application developers. Applications include impact-aware PDF readers, institutional repository usage widgets, literature search tools, enhanced citation indexes, faculty profile collections, funding databases, institutional and regional impact assessments, expert identification systems, post-publication peer-review platforms, and recommendation engines—in fact, we’ve had requests for data from projects in each of these categories already. As we improve its scalability, our open API will support an ecosystem in which impact data flows like water among these and other diverse applications, with Impactstory as the “village well” supplying a shared, open, always-on stream of impact data.
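To make the idea concrete, here is a sketch of what one of those applications might do with a product record returned by an open altmetrics API; the response shape below is a hypothetical placeholder, not the actual Impactstory API:

```python
import json

# Parse a (hypothetical) API response for one product and list its metrics,
# as a repository widget or recommendation engine might do.
sample_response = """
{
  "id": "doi:10.1234/example",
  "metrics": [
    {"source": "mendeley", "interaction": "readers",   "count": 18},
    {"source": "twitter",  "interaction": "tweets",    "count": 7},
    {"source": "dryad",    "interaction": "downloads", "count": 112}
  ]
}
"""

product = json.loads(sample_response)
for metric in product["metrics"]:
    print(f'{metric["source"]} {metric["interaction"]}: {metric["count"]}')
```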

Our non-profit is dedicated to promoting open science by building the tools that will provide incentives for researchers who practice it. We are also committed to building an open infrastructure for altmetrics, to keep altmetrics data open and verifiable, allowing for innovative services to be built that meet researchers’ needs.


Stacy Konkiel is the Director of Marketing & Research at Impactstory. A former academic librarian, Stacy has written and spoken most often about the potential for altmetrics in academic libraries.

Stacy has been an advocate for Open Scholarship since the beginning of her career, but credits her time at the Public Library of Science (PLOS) with sparking her interest in altmetrics and other revolutions in scientific communication. Before that, she earned dual master’s degrees in Information Science and Library Science at Indiana University (2008). You can connect with Stacy on Twitter at @skonkiel.

Heather Piwowar is a cofounder of Impactstory and a leading researcher in research data availability and data reuse. She wrote one of the first papers measuring the citation benefit of publicly available research data and has studied patterns in data archiving, patterns of data reuse, and the impact of journal data-sharing policies.

Heather has a bachelor’s and master’s degree from MIT in electrical engineering, 10 years of experience as a software engineer, and a Ph.D. in Biomedical Informatics from the University of Pittsburgh. She is a frequent speaker on research data archiving, writes a well-respected research blog, and is active on Twitter (@researchremix).

Jason Priem is a cofounder of Impactstory and a doctoral student in information science (currently on leave of absence) at the University of North Carolina-Chapel Hill. Since coining the term “altmetrics,” he’s remained active in the field, organizing the annual altmetrics workshops, giving invited talks, and publishing peer-reviewed altmetrics research.

Jason has contributed to and created several open-source software projects, including Zotero and Feedvis, and has experience and training in art, design, and information visualization. Sometimes he writes on a blog and tweets.

References

  • Eysenbach, G. (2012). Can tweets predict citations? Metrics of social impact based on Twitter and correlation with traditional metrics of scientific impact. Journal of Medical Internet Research, 13(4). doi:10.2196/jmir.2012
  • Habib, M. (2013). Expectations by researchers [Lightning talk]. NISO Altmetrics Initiative meeting, San Francisco, CA. 9 Oct., 2013. http://www.slideshare.net/BaltimoreNISO/niso-lightning-mchabibv3
  • Haustein, S., & Siebenlist, T. (2011). Applying social bookmarking data to evaluate journal usage. Journal of Informetrics, 5(3), 446–457. Retrieved from http://dx.doi.org/10.1016/j.joi.2011.04.002
  • Laakso, M., & Björk, B.C. (2012). Anatomy of open access publishing: a study of longitudinal development and internal structure. BMC Medicine, 10(1), 124. doi:10.1186/1741-7015-10-124
  • Li, X., Thelwall, M., & Giustini, D. (2011). Validating online reference managers for scholarly impact measurement. Scientometrics, 91(2), 1–11. doi:10.1007/s11192-011-0580-x
  • Morrison, H. (2014). “Elsevier STM publishing profits rise to 39%.” Imaginary Journal of Poetic Economics. 14 March 2014. http://poeticeconomics.blogspot.com/2014/03/elsevier-stm-publishing-profits-rise-to.html
  • Neylon, C., & Wu, S. (2009). Article-Level Metrics and the Evolution of Scientific Impact. PLoS Biol, 7(11).
  • Nielsen, F. (2007). Scientific citations in Wikipedia. First Monday, 12(8). Retrieved from http://arxiv.org/pdf/0705.2106
  • Priem, J., Taraborelli, D., Groth, P., & Neylon, C. (2010). Altmetrics: a manifesto. Retrieved October 26, 2010, from http://altmetrics.org/manifesto/
  • Priem, J., Piwowar, H., & Hemminger, B. (2011). Altmetrics in the wild: An exploratory study of impact metrics based on social media. Presented at the Metrics 2011: Symposium on Informetric and Scientometric Research, New Orleans, LA, USA.
  • Priem, J., & Hemminger, B. M. (2012). Decoupling the scholarly journal. Frontiers in Computational Neuroscience, 6. Retrieved from http://www.frontiersin.org/Computational_Neuroscience/10.3389/fncom.2012.00019/full
  • Taraborelli, D. (2008). Soft peer review: Social software and distributed scientific evaluation. In Proceedings of the 8th International Conference on the Design of Cooperative Systems (COOP ’08). Carry-le-Rouet, France. Retrieved from http://eprints.ucl.ac.uk/8279/
  • Wilbanks, J. (2011). Openness as infrastructure. Journal of Cheminformatics, 3(1), 36. doi:10.1186/1758-2946-3-36

Notes

    1. Priem, Jason, and Bradley M. Hemminger. “Decoupling the Scholarly Journal.” Frontiers in Computational Neuroscience 6 (2012). doi:10.3389/fncom.2012.00019

    2. Laakso, Mikael, and Bo-Christer Björk. 2012. “Delayed Open Access – an Overlooked High-Impact Category of Openly Available Scientific Literature.” Journal of the American Society for Information Science and Technology (preprint). http://hanken.halvi.helsinki.fi/portal/files/1311951/laakso_bj_rk_delay_preprint.pdf

    3. Habib, M. (2013). “Expectations by researchers.” Lightning talk at the NISO Altmetrics Initiative meeting, San Francisco, CA. 9 Oct., 2013. http://www.slideshare.net/BaltimoreNISO/niso-lightning-mchabibv3

    4. Priem, Jason, Heather A. Piwowar, and Bradley M. Hemminger. “Altmetrics in the Wild: An Exploratory Study of Impact Metrics Based on Social Media.” Metrics 2011: Symposium on Informetric and Scientometric Research. New Orleans, LA, USA. http://jasonpriem.org/self-archived/PLoS-altmetrics-sigmetrics11-abstract.pdf

    5. Anderson, Kent. 2013. “UPDATED — 73 Things Publishers Do (2013 Edition).” Scholarly Kitchen Blog. http://scholarlykitchen.sspnet.org/2013/10/22/updated-73-things-publishers-do-2013-edition/

    6. Piwowar, Heather A., and Jason Priem. 2014. “Keeping Metrics Free.” Impactstory Blog. Accessed July 22. http://blog.impactstory.org/24638498595/

    7. Morrison, H. (2014). “Elsevier STM publishing profits rise to 39%.” Imaginary Journal of Poetic Economics. 14 March 2014. http://poeticeconomics.blogspot.com/2014/03/elsevier-stm-publishing-profits-rise-to.html

    8. Priem, Jason. “List: standalone peer review services.” https://docs.google.com/document/d/1HD-BEaVeDdFjjCNFkb0j3pvwe7MrP3PtE-bWHkkdq7Q/edit#heading=h.uhoilqhqulp8

    9. Priem, Jason, Dario Taraborelli, Paul Groth, and Cameron Neylon. 2010. “Alt-Metrics: A Manifesto.” http://altmetrics.org/manifesto/

    10. Eysenbach, Gunther. 2006. “Citation Advantage of Open Access Articles.” PLoS Biology 4 (5) (May): e157. doi:10.1371/journal.pbio.0040157. http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1459247&tool=pmcentrez&rendertype=abstract

    11. Haustein, Stefanie, and Tobias Siebenlist. 2011. “Applying Social Bookmarking Data to Evaluate Journal Usage.” Journal of Informetrics 5 (3) (May): 446–457. http://dx.doi.org/10.1016/j.joi.2011.04.002

    12. Li, Xuemei, Mike Thelwall, and Dean Giustini. 2011. “Validating Online Reference Managers for Scholarly Impact Measurement.” Scientometrics 91 (2) (December 21): 1–11. doi:10.1007/s11192-011-0580-x. http://dl.acm.org/citation.cfm?id=2205928.2205953

    13. Nielsen, FÅ. 2007. “Scientific Citations in Wikipedia.” First Monday 12 (8). http://arxiv.org/pdf/0705.2106

    14. Neylon, Cameron, and Shirley Wu. 2009. “Article-Level Metrics and the Evolution of Scientific Impact.” PLoS Biol 7 (11) (November).

    15. Priem, Jason, and Bradley M. Hemminger. 2012. “Decoupling the Scholarly Journal.” Frontiers in Computational Neuroscience 6. http://www.frontiersin.org/Computational_Neuroscience/10.3389/fncom.2012.00019/full

    16. Taraborelli, Dario. 2008. “Soft Peer Review: Social Software and Distributed Scientific Evaluation.” In Proceedings of the 8th International Conference on the Design of Cooperative Systems (COOP ’08). Carry-le-Rouet, France. http://eprints.ucl.ac.uk/8279/

    17. http://article-level-metrics.plos.org/

    18. Smith, Richard. 2009. “Richard Smith: The beginning of the end for impact factors and journals.” The BMJ Blog. http://blogs.bmj.com/bmj/2009/11/02/richard-smith-the-beginning-of-the-end-for-impact-factors-and-journals/

    19. http://www.altmetric.com

    20. http://www.plumanalytics.com/

    21. More information available at http://50.17.213.175. [Formerly http://www.sciencecard.org/]

    22. http://www.dcc.ac.uk/resources/external/readermeter

    23. http://www.papercritic.com/

    24. Wilbanks, John. 2011. “Openness as infrastructure.” Journal of Cheminformatics, 3(1), 36. doi:10.1186/1758-2946-3-36

    25. Priem, Jason and Heather A. Piwowar. 2014. “Top 5 altmetrics trends to watch in 2014.” Impactstory blog. http://blog.impactstory.org/top-5-altmetrics-trends-to-watch-2014/