“Today we launched a reimplementation of our search system. As part of our broader strategy for arXiv-NG, we are incrementally decoupling components from the classic arXiv codebase and replacing them with more modular services developed in Python. Our goal was to replace the aging Lucene search backend, achieve feature parity with the classic search system, and take the opportunity to give the search interface a face-lift. While the frontend may not look terribly different from the old search interface, we hope that you’ll notice some improvements in functionality. The most important win for us in this milestone is that the new backend lays the groundwork for more dramatic improvements to search, our APIs, and other components targeted for reimplementation in arXiv-NG. Here’s a rundown of some of the things that changed, and where we plan to go from here….”
“In my last post on the lack of accessibility of Gold Open Access for early career researchers (ECRs), I mentioned that in my opinion Green Open Access was a very imperfect solution – in fact, hardly a solution at all. I expand here on why that is the case, and why a focus on green OA presents new challenges for publication practices which compound the – already many – challenges of moving towards greater accessibility of research. Not all OA initiatives are equal. Green Open Access, by far the commonest kind, refers to the depositing of a non-final version of the published manuscript into a research repository – generally either an institutional repository (managed by the university with which the researcher is affiliated), a subject-specific repository (such as arXiv/SocArXiv), an academic networking website such as Academia.edu, ResearchGate, or Mendeley, or a personal website. Various publishers have rules on what version can be posted where and when, with the most common being that accepted manuscripts (after peer review, but before proofreading and typesetting) can be made public in repositories after an embargo period, while the “version of record” – the published version – may not be shared publicly for free. The published article remains accessible only with paid access (with publishers either explicitly authorizing (SAGE) or tacitly tolerating the private sharing of full articles).”
“It is not easy to have a paper published in the Lancet, so Wakefield’s paper presumably underwent a stringent process of peer review. As a result, it received a very strong endorsement from the scientific community. This gave a huge impetus to anti-vaccination campaigners and may well have led to hundreds of preventable deaths. By contrast, the two mathematics preprints were not peer reviewed, but that did not stop the correctness or otherwise of their claims being satisfactorily established.
An obvious objection to that last sentence is that the mathematics preprints were in fact peer-reviewed. They may not have been sent to referees by the editor of a journal, but they certainly were carefully scrutinized by peers of the authors. So to avoid any confusion, let me use the phrase “formal peer review” for the kind that is organized by a journal and “informal peer review” for the less official scrutiny that is carried out whenever an academic reads an article and comes to some sort of judgement on it. My aim here is to question whether we need formal peer review. It goes without saying that peer review in some form is essential, but it is much less obvious that it needs to be organized in the way it usually is today, or even that it needs to be organized at all.
What would the world be like without formal peer review? One can get some idea by looking at what the world is already like for many mathematicians. These days, the arXiv is how we disseminate our work, and the arXiv is how we establish priority. A typical pattern is to post a preprint to the arXiv, wait for feedback from other mathematicians who might be interested, post a revised version of the preprint, and send the revised version to a journal. The time between submitting a paper to a journal and its appearing is often a year or two, so by the time it appears in print, it has already been thoroughly assimilated. Furthermore, looking a paper up on the arXiv is much simpler than grappling with most journal websites, so even after publication it is often the arXiv preprint that is read and not the journal’s formatted version. Thus, in mathematics at least, journals have become almost irrelevant: their main purpose is to provide a stamp of approval, and even then one that gives only an imprecise and unreliable indication of how good a paper actually is….
An alternative system would almost certainly not be perfect, but to insist on perfection, given the imperfections of the current system, is nothing but status quo bias. To guard against this, imagine that an alternative system were fully established and see whether you can mount a convincing argument for switching to what we have now, where all the valuable commentary would be hidden away and we would have to pay large sums of money to read each other’s writings. You would be laughed out of court.”
“In the spirit of empowering the community, I’ve decided to start showcasing some of the cool arXiv-based projects that we’ve found around the internet. If you’ve found an app, service, widget, visualization, or anything else that uses arXiv content in interesting ways, please let us know about it! You can get in touch via the arXiv-API Google group, or at email@example.com.”
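For readers curious what building on that API looks like in practice, here is a minimal Python sketch that constructs a query URL for arXiv’s public Atom API. The endpoint and parameter names (`search_query`, `start`, `max_results`) follow arXiv’s published API; the `build_query_url` helper is my own illustrative wrapper, not part of any official client:

```python
from urllib.parse import urlencode

# Base endpoint of the public arXiv API; responses are Atom XML feeds.
ARXIV_API = "http://export.arxiv.org/api/query"

def build_query_url(search_query: str, start: int = 0, max_results: int = 10) -> str:
    """Build an arXiv API query URL (illustrative helper, not an official name)."""
    params = {
        "search_query": search_query,
        "start": start,           # offset for paging through results
        "max_results": max_results,
    }
    return f"{ARXIV_API}?{urlencode(params)}"

# Example: ask for the first five matches for "electron" across all fields.
url = build_query_url("all:electron", max_results=5)
print(url)
```

Fetching that URL (with `urllib.request` or any HTTP client) returns an Atom feed whose entries can be parsed with `xml.etree.ElementTree` or the third-party `feedparser` library.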
“An Economics section of the scientific repository arXiv is opening this month. arXiv is internationally acknowledged as a pioneering open access preprint repository. It has transformed the scholarly communication infrastructure of multiple fields of physics and plays an increasingly prominent role in mathematics, computer science, quantitative biology, quantitative finance, and statistics. arXiv is an essential component of scientific communication for many researchers worldwide, who use it to rapidly and widely disseminate their findings, establish priority of their discoveries, and seek feedback to help improve their work. It is hosted by the Cornell University Library with additional funding from 220 member libraries and several scientific foundations, including the Simons Foundation.”
“Looking back at arXiv’s 25 years (and forward to Open Repositories, next week!), I read through all the old arXiv news on the arXiv.org website, as well as Paul Ginsparg’s recent “Preprint Déjà Vu: an FAQ,” and put together this timeline (PDF). It’s really interesting to note the start-up of additional subject-area services at other institutions (and their later consolidation to LANL), the addition of new subject areas, the start-up and decommissioning of mirrors, and a lot of other arXiv milestones. It’s an idiosyncratic summary, but it was fun to put together. Enjoy.”
“We harvest content from across platforms like PubMed Central, arXiv, and SciELO, and bring it all together in one place.
One of the main features of ScienceOpen is that we are a research aggregator. We don’t select what we index based on discipline, publisher, or geography, as that just creates another silo. Enough of those exist already. What we need, and what we do, is to bring together research articles from across publishers and other platforms and into one space, where it is all treated in exactly the same way….”
“Since 2010, Cornell’s sustainability planning initiative has aimed to reduce arXiv’s financial burden and dependence on a single institution, instead creating a broad-based, community-supported resource. arXiv’s funding and governance for the current operation (Classic arXiv) is based on a membership program engaging libraries and research laboratories worldwide that represent the repository’s heaviest institutional users. As of February 2017, we have 206 members representing 25 countries. arXiv’s sustainability plan presents a business model for generating revenue, together with a set of governance, editorial, and financial principles. Cornell University Library (CUL), the Simons Foundation, and a global collective of institutional members support arXiv financially. The financial model for 2013–2017 entails three sources of revenue:
CUL provides a cash subsidy of $75,000 per year in support of arXiv’s operational costs. In addition, CUL makes an in-kind contribution of all indirect costs, which currently represents 37% of total operating expenses.
The Simons Foundation contributes $100,000 per year ($50,000 prior to 2016) in recognition of CUL’s stewardship of arXiv. In addition, the Foundation matches $300,000 per year of the funds generated through arXiv membership fees.
Each member institution pledges a five-year funding commitment to support arXiv. Based on institutional usage ranking, the annual fees are set in four tiers from $1,500 to $3,000.
In 2016, Cornell raised approximately $515,000 through membership fees from 201 institutions, and total revenue (including CUL and Simons Foundation direct contributions and online fundraising) was around $1,015,000. We remain grateful for the support from the Simons Foundation, which encouraged long-term community support by lowering arXiv membership fees and making participation affordable to a broad range of institutions. This model aims to ensure that the ultimate responsibility for sustaining arXiv remains with the research communities and institutions that benefit from the service most directly.”
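Those 2016 figures can be sanity-checked with a little arithmetic. Assuming the Simons Foundation match of $300,000 is counted in the stated total, the components listed above leave a small residual attributable to online fundraising; that residual is my inference from the stated totals, not a number given in the text:

```python
# 2016 revenue components as stated in the sustainability plan.
membership_fees = 515_000   # ~201 member institutions
cul_subsidy = 75_000        # Cornell University Library cash subsidy
simons_direct = 100_000     # Simons Foundation direct contribution
simons_match = 300_000      # Simons match on membership-fee revenue

stated_total = 1_015_000    # reported total including online fundraising

subtotal = membership_fees + cul_subsidy + simons_direct + simons_match
# Residual (inferred, not stated): what online fundraising would have to be
# for the listed components to reach the reported total.
online_fundraising = stated_total - subtotal

print(f"subtotal: ${subtotal:,}")                      # subtotal: $990,000
print(f"implied online fundraising: ${online_fundraising:,}")
```

The residual comes out to roughly $25,000, which is consistent with online fundraising being a minor supplement to the three main revenue sources.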