The arXiv of the future will not look like the arXiv

The arXiv is the most popular preprint repository in the world. Since its inception in 1991, the arXiv has allowed researchers to freely share publication-ready articles prior to formal peer review. The growth and popularity of the arXiv emerged from new technologies that made document creation and dissemination easy, and from cultural practices in which collaboration and data sharing were dominant. The arXiv occupies a unique place in the history of research communication and of the Web itself; however, it has arguably changed very little since its creation. Here we examine the strengths and weaknesses of the arXiv in an effort to identify possible improvements enabled by technologies that were not previously available. On this basis, we argue that a modern arXiv might in fact not look at all like the arXiv of today.


Introduction
The arXiv, pronounced "archive", is the most popular preprint repository in the world. Started in 1991 by physicist Paul Ginsparg, the arXiv allows researchers to freely share publication-ready articles prior to formal peer review and publication. Today, the arXiv publishes over 10,000 articles each month from high-energy physics, computer science, quantitative biology, statistics, quantitative finance, and other fields (see Fig 1). The early success of the arXiv stems from new technological advances paired with a well-developed culture of collaboration and sharing. Indeed, before the arXiv even existed, physicists were already sharing recently finished manuscripts, first by physical mail and later by email. To understand the success of the arXiv, it helps to understand its history. Below we give a brief account of the technology, services, and cultural norms that predate the arXiv and were integral to its early and continued success.

The history of the arXiv
Prior to the arXiv, "the photocopy machine was a prime component of the distribution system" (Ginsparg, 2011a) and preprints were exchanged only among personal contacts and/or mailing lists (Elizalde, 2017). Institutional repositories, such as the SPIRES-HEP database (Stanford Physics Information REtrieval System-High Energy Physics) at the Stanford Linear Accelerator Center (SLAC) and the Document Server at CERN, acted only as bibliographic services, helping scientists keep track of publication information. But while SPIRES greatly improved the flow of metadata, it was still hard to retrieve the full manuscript. A new typesetting system would soon emerge and change this. TeX, pronounced "tech", was developed by Donald Knuth in the late 1970s as a way for researchers to write and typeset articles programmatically. Soon after the introduction of TeX, Leslie Lamport built a standard set of macros on top of it, called LaTeX, which made it easy for researchers to professionally typeset their own documents. This system made sharing papers easier and cheaper than ever before. Indeed, many, if not most, researchers at the time relied upon secretaries or typists to type their work, which then had to be photocopied and sent via mail to a handful of other researchers. TeX allowed researchers to write their documents in a lightweight format that could be emailed, downloaded, and compiled without the need for physical mail.
Researchers began to exchange emails containing preprints, quickly hitting their strict disk space allocation limits (Ginsparg, 2011b). To address this problem, an automated email server initially called xxx.lanl.gov was set up on August 14th, 1991. This service would allow researchers to automatically request preprints via email as needed. It would soon become one of the world's first web servers and, renamed arXiv in 1998, today still serves as one of the most open and efficient forms of research communication in the world.
The arXiv was a leader in utilizing new technology when it was launched; however, it has arguably changed very little since its inception, despite the wealth of new technologies now available. Here we look at the strengths and weaknesses of the arXiv in an effort to identify possible improvements based on new technologies and tools, and we propose that a modern arXiv might in fact not look at all like the arXiv of today, a transformation that will likely occur with or without the arXiv.

The strengths of the arXiv
The arXiv has, since day one, provided researchers with one of the easiest and most powerful ways to disseminate their research. It is a free way for authors to rapidly share findings directly with the research community, and a free way for the public to access them. The arXiv is home to some of the world's most important work, like the proof of the Poincaré conjecture (Perelman, 2002, 2003a) and the discovery of the Higgs boson (Chatrchyan et al., 2012; Aad et al., 2012). For nearly two decades this free exchange of information was without equal in most other fields, until the very recent launch of numerous arXiv clones in new disciplines (see Figure 1). The ease of use and the utility of the arXiv is a function both of the community it serves (technically advanced researchers with a long-standing tradition of sharing and collaboration) and of the simplicity of the site. Below we highlight key pieces of technology, as well as cultural influences, that contributed to the success of the arXiv. In the next section we then underline how those same pieces may stand in the way of new, and better, practices.

Typesetting with LaTeX
The vast majority of papers on the arXiv are authored in LaTeX. LaTeX allows researchers to easily typeset and share their documents. Such a solution was available to all researchers at the outset; however, it was adopted only by the exact community it served, namely physicists and mathematicians, who needed to write equation-intensive documents. Thus, LaTeX was crucial to the early success of preprints and peer-to-peer sharing. Today it continues to be used by physicists, mathematicians, computer scientists, and others, as it offers the best solution for rendering complex mathematical notation.

Figure 1: Volume of preprints posted in hard sciences (arXiv) and life sciences, from 1991 to 2017. In this time window, the total number of arXiv (life sciences) preprints submitted was 1,263,265 (32,284). The inset shows the recent, rapid growth of preprint submissions in the life sciences (including submissions to "arXiv q-bio", "Nature Precedings", "F1000Research", "PeerJ Preprints", "bioRxiv", "The Winnower", "preprints.org" and "Wellcome Open Research"). The data and the code required to reproduce this figure are available in the web version of this article by clicking on the data and code icons. Unfortunately, these research products could not be included in the PDF version of the article due to the inherent limitations of this format. Data source: arXiv and Pre-Pubmed.
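The appeal of LaTeX to this community is easy to demonstrate: a few lines of plain-text source (this fragment is illustrative, not taken from any particular paper) compile to publication-quality mathematics that would be tedious to produce in a WYSIWYG editor, yet travel as easily as any email:

```latex
\documentclass{article}
\begin{document}
The field equations of general relativity,
\begin{equation}
  R_{\mu\nu} - \tfrac{1}{2} R \, g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu},
\end{equation}
are typeset from plain, emailable source.
\end{document}
```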

A tech-savvy community
The serendipitous arrival of new technology in a community (physics) that both knew how to benefit from it and was willing to take advantage of it helped the arXiv flourish from the very first day. Other fields, like chemistry and biomedicine, while highly collaborative in nature (Fanelli and Larivière, 2016), may have lacked the early knowledge of, and interest in, writing in LaTeX and setting up and running email and web servers, two prerequisites for the foundation of the arXiv.
The weaknesses of the arXiv
The immediate and sustained success of the arXiv since its inception is due to its willingness to utilize new technology (LaTeX, email, web servers) in a community that was naturally tech-savvy, collaborative, and open to sharing. However, the arXiv has failed to improve and rethink itself over time to match the ever-changing landscape of technology and community practices in science. What is the single most important factor that has prevented the arXiv from innovating quickly? We believe it is LaTeX. The same technological advance that allowed the arXiv to flourish is also, remarkably, its most important shortcoming. Indeed, the arXiv's reliance on LaTeX is the source of all the weaknesses listed below.

Limitation to a single community
Most researchers outside of physics, and consequently outside of the arXiv world, write their manuscripts in Microsoft Word or other WYSIWYG editors. Using LaTeX penetration rates in its most popular fields (mathematics, statistics, physics, astronomy, computer science), the total percentage of scholarly articles written in LaTeX can be estimated at around 18% (Pepe, 2016). Not only does LaTeX have a steep learning curve; its interface, language, and modus operandi are foreign to anyone who does not program or who has only ever used WYSIWYG word processors.

A printer-centric "PDF dump"
When you upload a LaTeX file, the arXiv compiles it and creates a PDF document. This is standard procedure: in academia, manuscripts have for decades been exchanged and read in PostScript or PDF format. PDFs are an efficient, portable format for printing manuscripts. But the PDF is not a format fit for sharing, discussing, and reading on the web. PDFs are (mostly) static, two-dimensional, non-actionable objects. It is not a stretch to say that a PDF is merely a digital photograph of a piece of paper.

Low discoverability
The research products hosted by the arXiv are PDFs. A title, abstract, and author list are provided by the authors upon submission as metadata, which is posted alongside the PDF and rendered in HTML to aid article discoverability. While search engines are getting better at text mining PDFs, the chances that any current or future search engine will meaningfully extract and interpret text from a dense two-column paper are low. More importantly, such extraction is a futile exercise in reverse engineering. Why are we locking content in a format that is not machine-readable?

Data
Data sharing has become a fundamental practice across all scholarly disciplines. Simply put, if a published research paper is built on data, the authors have to provide access to the minimal set of resources (data and code) upon which their research is based. But sharing data in the arXiv's "LaTeX to PDF" paradigm is not possible. A pilot to support data deposit alongside papers, run at the arXiv from 2010 to 2013 (Mayernik et al., 2012), failed to gain traction. While the project faced an unexpected cut in government support, we believe that part of its failure can be attributed to the fact that the papers and the data were deposited as separate entities. How do people share data today? They use kludgy strategies. A growing trend in astronomy and physics, for example, is to link the dataset in the published or preprinted paper. This practice allows authors to make their data more visible and get credit for it, as it is linked inside the papers, but recent work shows that such links rot quickly with time (Pepe et al., 2014).
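The fragility of this practice is easy to see: the link is just a string embedded in the manuscript source, with no machine-actionable relationship between paper and dataset. A minimal sketch (the manuscript fragment and URL below are made up for illustration) of how such links would have to be scraped back out of a paper:

```python
import re

# Illustrative manuscript fragment; the URL is a made-up example,
# not a real dataset location.
manuscript = r"""
The reduced spectra are available at
\url{https://example.org/data/survey_dr2.tar.gz} (retrieved 2017).
"""

# Naive pattern for http(s) URLs embedded in LaTeX or plain text.
URL_RE = re.compile(r"https?://[^\s{}\\]+")

def extract_links(text):
    """Return the bare URLs found in a manuscript source string."""
    return URL_RE.findall(text)

links = extract_links(manuscript)
print(links)  # one bare, unverifiable link: exactly the kind that rots
```

Anything recovered this way is a plain string: nothing guarantees the resource still exists, and nothing ties it to the paper's identity or version.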

What the arXiv would look like if we built it today
A useful exercise when attempting to imagine the arXiv of the future is to envision what it would look like if we could rebuild it today. We would like to consider the weaknesses listed above as opportunities rather than challenges, and in doing so, we offer here some ideas for a better arXiv.

Web-native and web-first
There is growing consensus in scholarly communication circles that academic publishing needs to move "beyond the PDF" (see the manifesto of Force 11), and we strongly believe that the paper of the future will be web-native (Goodman et al., 2016). As such, the arXiv of the future will have to enable creation and/or ingestion of papers in HTML format. Moving scholarly papers to HTML is the first step towards paving the way for the scholarly repository of the future. The paper you are currently reading, whether you are reading a PDF or HTML version of it, is web-first. An open, web version can be found here. The arXiv of the future will host web-native manuscripts.

Multi-format and format-neutral
The arXiv relies heavily on LaTeX. The article you are reading was authored on Authorea by three authors using a combination of LaTeX and Rich Text. LaTeX was used only to insert mathematical notation, equations, and tables, not to typeset and format the entire manuscript. LaTeX can be a time-consuming way to typeset manuscripts (Brischoux and Legagneux, 2009); more importantly, it also locks the document into a format which doesn't allow for the flexibility offered by modern technologies (e.g. semantic parsing and embedding into a knowledge network that facilitates discoverability, and hence impact). The arXiv of the future is format-neutral and separates format from content.

Digital object identifiers
A digital object identifier (DOI) is a persistent identifier used in scholarly publishing to identify and link to a piece of work. DOIs are considered mandatory for citation by numerous journals and can be assigned to datasets, preprints, research articles, websites, and other scholarly works. Since the practice of preprinting is quickly on the rise across all disciplines (Berg et al., 2016), and since funding bodies are finally realizing the importance of preprinting (here's a recent example by the NIH), it is crucial that preprints be identified by a reliable standard: the DOI. The article you are reading was written on Authorea and preprinted with a DOI (https://dx.doi.org/10.22541/au.149693987.70506124). The arXiv of the future is a database of preprints identified by DOI.
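Part of what makes the DOI a reliable standard is its simple, machine-actionable structure: a registrant prefix, a suffix, and a central resolver at doi.org. A small sketch, using this article's own preprint DOI:

```python
def split_doi(doi):
    """Split a DOI into its registrant prefix and item suffix."""
    prefix, suffix = doi.split("/", 1)
    return prefix, suffix

def resolver_url(doi):
    """Build the canonical resolver URL for a DOI."""
    return "https://doi.org/" + doi

doi = "10.22541/au.149693987.70506124"  # this article's preprint DOI
prefix, suffix = split_doi(doi)
print(prefix)             # the registrant prefix, 10.22541
print(resolver_url(doi))  # a stable link that outlives any one website
```

Unlike a bare URL pasted into a PDF, the resolver-mediated link can be updated when content moves, which is precisely the property preprints need.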

Built for Open Data and Open Research
The repository of the future is more than a collection of PDFs with text and images. It is a database of papers that integrate data, code, and all the resources needed to reproduce scientific results. The only way to solve the ongoing reproducibility crisis is by making papers data-driven. The paper you are reading has one figure, and we have made the data "behind the figure" available to all readers. If you read the online version of the article, you can click on the Data flag associated with Fig. 1 to visualize, download, and peruse the data presented in the chart, as well as the code (in the form of a Jupyter Notebook) that we wrote to analyze the data and produce the chart. The arXiv of the future will host data and code alongside papers.
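The analysis code that travels with a data-driven paper can be very small. A minimal sketch, using illustrative placeholder counts (not actual arXiv data), of the kind of notebook cell that could regenerate a figure like Fig. 1:

```python
# Illustrative yearly preprint counts; placeholder numbers, not real arXiv data.
counts = {
    2014: 9000,
    2015: 9600,
    2016: 10300,
    2017: 11200,
}

years = sorted(counts)
total = sum(counts[y] for y in years)
growth = counts[years[-1]] / counts[years[0]] - 1

print(f"total submissions {years[0]}-{years[-1]}: {total}")
print(f"growth over the window: {growth:.0%}")
# In the real notebook, a plotting call (e.g. matplotlib) would follow here.
```

Because the numbers live next to the code, a reader can rerun, audit, or extend the analysis instead of trusting a static image.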

Comments and open peer review
The arXiv does not currently allow comments by its readers and authors. The rationale is that the arXiv is not peer-reviewed (peer review happens elsewhere, at the journal level), so a commenting and reviewing system would be hard to maintain and run, and of little use. Yet preprints offer an unprecedented opportunity to, first, open up and, second, increase the quantity of reviews and comments that manuscripts go through. We do not advocate replacing traditional peer review, but complementing it with open review of preprints. We believe that (1) more scholars should participate in the peer review of an article, and (2) peer review should be done in the open, so that the review itself becomes a crucial component of the published (or preprinted) research. This seems not only natural but also necessary, given the ever-increasing number of publications and average number of authors per paper, which renders the current peer-review paradigm unsustainable. The authors of this document welcome public comments and ideas from readers at the online version of this article. The arXiv of the future will allow open comments and reviews in addition to traditional peer review.

Alternative metrics
In scholarship, the only reputable metric currently used to assess the impact of a research paper is citations (or metrics built around citations). The arXiv does not publish information about alternative metrics (alternatives to citations), e.g. how many times a paper has been downloaded, tweeted, or blogged. One important, yet dubious, case against publishing these altmetrics is that they can be easily gamed: if these metrics gain traction and become a reputable system for determining the standing of a researcher, then we are confronted with an easily gamed system. We believe that these metrics provide important value in assessing the impact of research work, in addition to, not as a replacement for, traditional metrics. Importantly, it has been shown that there is a very strong correlation between how many times a paper is downloaded and tweeted and how many times it is subsequently cited (Ginsparg, 2009, 2010; Shuai et al., 2012). The arXiv of the future will be transparent and will publish information about alternative metrics that may determine the true impact of a research paper.

Discoverable, structured, semantic, machine-readable
One final but very important advantage that is tightly linked to having a repository of natively web-based, rather than PDF, papers is discoverability. The entire full text of papers, not just the title and abstract, will be indexed by search engines and scholarly repositories, boosting content visibility. Moreover, web-based articles have a well-defined semantic structure, making them fully machine-readable objects. The arXiv of the future will rethink papers as APIs that access semantically structured content.
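Some of this machine-readable future is already visible at the margins: the arXiv exposes article metadata as Atom feeds through its public API (export.arxiv.org/api/query). A sketch of consuming such structured metadata with Python's standard library; the feed entry below is hand-written for illustration, not fetched from the API, and its identifier and values are made up:

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

# Illustrative Atom entry in the general shape returned by the arXiv API;
# the identifier and field values here are invented.
entry_xml = """
<entry xmlns="http://www.w3.org/2005/Atom">
  <id>http://arxiv.org/abs/0000.00000v1</id>
  <title>An example preprint</title>
  <author><name>A. Researcher</name></author>
</entry>
"""

def parse_entry(xml_text):
    """Extract id, title, and author names from one Atom entry."""
    entry = ET.fromstring(xml_text)
    return {
        "id": entry.findtext(ATOM + "id"),
        "title": entry.findtext(ATOM + "title"),
        "authors": [a.findtext(ATOM + "name")
                    for a in entry.findall(ATOM + "author")],
    }

record = parse_entry(entry_xml)
print(record["title"])  # structured fields, no PDF scraping required
```

The contrast with mining a two-column PDF is the whole point: every field arrives labeled and typed, ready for indexing or embedding in a knowledge network.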

Conclusion
We have looked at the history of the arXiv, identifying a number of possible reasons for its success as the most popular online preprint repository. We argue that one reason the arXiv flourished is that it catered to technically savvy researchers with a long-standing tradition of sharing and collaboration. The simplicity of the site and its LaTeX-centric submission process secured rapid growth within communities that were already used to taking full control of the typesetting process and needed to write equation-intensive documents.
We argued that while the arXiv was quick to adopt technology early on, it has changed very little since its launch. This unwillingness or inability to foster new technology and practices is an impediment to the better research communication practices currently available.
We suggest that the arXiv of the future will be web-native and web-first, multi-format and format-neutral so as to include the whole research community. In order to foster transparency and reproducibility, it will be built for open data and open research, also allowing for commenting and open peer review. The arXiv of the future will be a database of preprints identified by a Digital Object Identifier, with a well-defined semantic structure that will make them fully machine-readable and easily discoverable. It will be transparent and publish all the information about alternative metrics that may determine the true impact of the research it hosts.
We believe that, should the arXiv remain stagnant, it will be eclipsed by other services, just as the arXiv itself eclipsed its predecessors. We encourage researchers to demand more of the platform, and we believe that in the era of the web, sharing research via PDF must inevitably come to an end. Let us embrace new technologies and practices, just as the arXiv did nearly 30 years ago, so that we might create a better way to share research.