Whereas a comprehensive overview of the history of preprint servers indicates their explosive growth over recent decades, empirical findings also show that, in the almost frictionless market of preprint publishing, concentration and convergence dynamics are at play.
“In this blog post, I will report on some major progress I believe has been made in the push for open citations.
Firstly, the recent announcement by Elsevier, followed by ACS, that they will finally support open citations is pretty earthshaking news, as they were among the biggest holdouts among publishers.
Secondly, I continue to report on the emerging ecosystem of tools that are building upon open citations (from both publisher/Crossref-derived sources and other crawled sources).
Lastly, even if all major publishers pledge to support open citations, we will always have a lot of items that will not be available in Crossref with references either because the items are old, the publishers lack the resources to extract and deposit the references or the items are not given DOIs. …”
Abstract: A preprint is a version of a scientific paper that is publicly distributed before formal peer review. Since the launch of arXiv in 1991, preprints have increasingly been distributed over the Internet rather than as paper copies, allowing open online access that can disseminate original research within days, often at very low operating cost. This work overviews how preprints have evolved and impacted the research community over the past thirty years alongside the growth of the Web. First, we report that the number of preprints has increased exponentially, growing 63-fold in 30 years, although preprints still account for only 4% of research articles. Second, we quantify the benefits that preprints bring to authors: preprints reach an audience 14 months earlier on average and are associated with five times more citations than non-preprint counterparts. Last, to address quality concerns about preprints, we find that 41% of preprints are ultimately published in a peer-reviewed venue, and that these venues are as influential as those of papers without a preprint version. Additionally, we discuss the unprecedented role of preprints in communicating the latest research data during recent public health emergencies. In conclusion, we provide quantitative evidence of the positive impact of preprints on individual researchers and the community. Preprints make scholarly communication more efficient by disseminating scientific discoveries more rapidly and widely with the aid of Web technologies. The measurements we present in this study can help researchers and policymakers make informed decisions about how to effectively use and responsibly embrace a preprint culture.
“In 2020 the Wellcome Open Research (WOR) publishing platform reached a significant milestone when it became the single most used venue for Wellcome-funded researchers to share their research findings.
In this blog post, Robert Kiley, Head of Open Research, Wellcome, and Michael Markie, Publishing Director, F1000, provide an analysis of publishing activity on the WOR platform and preview some of the initiatives we have planned for 2021….
Speed of publication remains one of the platform’s unique selling points. Table 3, below, shows that most articles are published within 26 days of being submitted and receive the first peer review report some 21 days later. Once an article has received two “approved” statuses from reviewers (or one “approved” and two “approved with reservation” statuses) articles are submitted for indexing in PubMed, Scopus and other bibliographic databases….”
“Journal articles downloaded from Sci-Hub, an illegal site of pirated materials, were cited nearly twice as many times as non-downloaded articles, reports a new paper published online in the journal, Scientometrics….
Correa and colleagues could have added either one of these sources of usage data to their model to verify whether the Sci-Hub indicator continued to independently predict future citations. That would have confirmed whether Sci-Hub was a cause of — instead of merely associated with — future citations. Without such a control, the authors may have fumbled both their analysis and conclusion.
Sci-Hub may indeed lead to more article citations, although it is impossible to reach that conclusion from this study….”
“Aries Systems Corporation, a leading technology workflow solutions provider to the scholarly publishing community, and scite, a platform for discovering and evaluating scientific articles via Smart Citations, are pleased to announce their partnership to facilitate the veracity of scientific references….”
“Publish or Perish is a software program that retrieves and analyzes academic citations. It uses a variety of data sources (incl. Google Scholar and Microsoft Academic Search) to obtain the raw citations, then analyzes these and presents the following metrics:
Total number of papers and total number of citations
Average citations per paper, citations per author, papers per author, and citations per year
Hirsch’s h-index and related parameters
The contemporary h-index
Three variations of individual h-indices
The average annual increase in the individual h-index
The age-weighted citation rate
An analysis of the number of authors per paper….”
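Among the metrics listed above, Hirsch's h-index is the one most often computed by hand. As a rough illustration (not Publish or Perish's actual implementation, whose data sources and tie-breaking rules are its own), it can be sketched from a list of per-paper citation counts:

```python
def h_index(citations):
    """Hirsch's h-index: the largest h such that at least h papers
    have h or more citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the threshold
        else:
            break  # all remaining papers have fewer citations
    return h

# Hypothetical example: five papers with these citation counts.
# Four papers have at least 4 citations, so h = 4.
print(h_index([10, 8, 5, 4, 3]))
```

The same ranked-list idea underlies the individual and contemporary variants, which rescale each paper's citation count (by author count or by age) before applying the threshold.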
Abstract: In April 2008, the National Institutes of Health (NIH) implemented the Public Access Policy (PAP), which mandated that the full text of NIH-supported articles be made freely available on PubMed Central – the NIH’s repository of biomedical research. This paper uses 600,000 NIH articles and a matched comparison sample to examine how the PAP impacted researcher access to the biomedical literature and publishing patterns in biomedicine. Though some estimates allow for large citation increases after the PAP, the most credible estimates suggest that the PAP had a relatively modest effect on citations, which is consistent with most researchers having widespread access to the biomedical literature prior to the PAP, leaving little room to increase access. I also find that NIH articles are more likely to be published in traditional subscription-based journals (as opposed to ‘open access’ journals) after the PAP. This indicates that any discrimination the PAP induced, by subscription-based journals against NIH articles, was offset by other factors – possibly the decisions of editors and submission behaviour of authors.
“Software is essential to research and is regularly an element of the work described in scholarly articles. However, these articles often don’t properly cite the software, leading to problems finding and accessing it, which in turn leads to problems with reproducibility, reuse, and proper credit for the software’s developers. In response, the FORCE11 Software Citation Implementation Working Group, comprising scholarly communications researchers and representatives of nineteen major journals, publishers, and scholarly infrastructures (Crossref, DataCite), has proposed a set of customizable guidelines to clearly identify software and credit its developers and maintainers. This follows the earlier development of a set of Software Citation Principles. To realize their full benefit, we are now urging publishers to adapt and adopt these guidelines to implement the principles and to meet their communities’ particular needs….”
Abstract: Predatory journals are Open Access journals of highly questionable scientific quality. Such journals pretend to use peer review for quality assurance and spam academics with requests for submissions in order to collect author payments. In recent years predatory journals have received a lot of negative media attention. While much has been said about the harm that such journals cause to academic publishing in general, an overlooked aspect is how much the articles in such journals are actually read and, in particular, cited; that is, whether they have any significant impact on research in their fields. Other studies have already demonstrated that only some of the articles in predatory journals contain faulty and directly harmful results, while many present mediocre and poorly reported studies. We studied citation statistics over a five-year period in Google Scholar for 250 random articles published in such journals in 2014 and found an average of 2.6 citations per article, with 56% of the articles having no citations at all. For comparison, a random sample of articles published in the approximately 25,000 peer-reviewed journals included in the Scopus index averaged 18.1 citations over the same period, with only 9% receiving no citations. We conclude that articles published in predatory journals have little scientific impact.