Invitation to participate in a new project: Help open journals’ deep backfiles | Everybody’s Libraries

“As I’ve noted here previously, there’s a wealth of serial content published in the 20th century that’s in the public domain, but not yet freely available online, often due to uncertainty about its copyright (and the resulting hesitation to digitize it).  Thanks to IMLS-supported work we did at Penn, we’ve produced a complete inventory of serials from the first half of the 20th century that still have active copyright renewals associated with them. And I’ve noted that there was far more serial material without active copyright, as late as the 1960s or even later.  We’ve also produced a guide to determining whether particular serial content you may be interested in is in the public domain.

Now that we’ve spent a lot of time surveying what is still in copyright, though, it’s worth turning more focused attention to serial content that isn’t in copyright but is still of interest to researchers.  One way we can identify journals whose older issues (sometimes known as their “deep backfiles”) are still of interest to researchers and libraries is to see which ones are included in packages that are sold or licensed to libraries.  Major vendors of online journals publish spreadsheets of their backfile offerings, keyed by ISSN.  And now, thanks to an increasing amount of serial information in Wikidata (including links to our serials knowledge base), it’s possible to systematically construct inventories of serials in these packages that include, or might include, public domain and other openly accessible content….”
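
The excerpt above describes a matching exercise: vendor backfile spreadsheets list journals by ISSN, and Wikidata now records ISSNs for many serials. Below is a minimal sketch of that kind of lookup, assuming only the public Wikidata Query Service and the Wikidata ISSN property (P236); the function name and example ISSNs are illustrative and not taken from the project's actual tooling.

```python
# Illustrative sketch: look up ISSNs taken from a vendor backfile spreadsheet
# against Wikidata via its SPARQL endpoint (ISSN is Wikidata property P236).
import requests

WDQS = "https://query.wikidata.org/sparql"  # public Wikidata Query Service


def serials_for_issns(issns):
    """Return {issn: [(item_uri, label), ...]} for ISSNs that match a Wikidata item."""
    values = " ".join(f'"{issn}"' for issn in issns)
    query = f"""
    SELECT ?issn ?item ?itemLabel WHERE {{
      VALUES ?issn {{ {values} }}
      ?item wdt:P236 ?issn .
      SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
    }}"""
    resp = requests.get(
        WDQS,
        params={"query": query, "format": "json"},
        headers={"User-Agent": "backfile-inventory-sketch/0.1"},
    )
    resp.raise_for_status()
    matches = {}
    for row in resp.json()["results"]["bindings"]:
        matches.setdefault(row["issn"]["value"], []).append(
            (row["item"]["value"], row["itemLabel"]["value"])
        )
    return matches


if __name__ == "__main__":
    # Two well-known ISSNs standing in for a column read from a vendor spreadsheet.
    print(serials_for_issns(["0028-0836", "0036-8075"]))
```

From each matched item one could then follow links (for example, to the serials knowledge base mentioned above) to judge whether the backfile content is in the public domain.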

Comparing quality of reporting between preprints and peer-reviewed articles in the biomedical literature | bioRxiv

Abstract:  Preprint usage is growing rapidly in the life sciences; however, questions remain on the relative quality of preprints when compared to published articles. An objective dimension of quality that is readily measurable is completeness of reporting, as transparency can improve the reader’s ability to independently interpret data and reproduce findings. In this observational study, we compared random samples of articles published in bioRxiv and in PubMed-indexed journals in 2016 using a quality of reporting questionnaire. We found that peer-reviewed articles had, on average, higher quality of reporting than preprints, although this difference was small. We found larger differences favoring PubMed in subjective ratings of how clearly titles and abstracts presented the main findings and how easy it was to locate relevant reporting information. Interestingly, an exploratory analysis showed that preprints with figures and legends embedded within text had reporting scores similar to PubMed articles. These differences cannot be directly attributed to peer review or editorial processes, as manuscripts might already differ before submission due to greater uptake of preprints by particular research communities. Nevertheless, our results show that quality of reporting in preprints in the life sciences is within a similar range as that of peer-reviewed articles, albeit slightly lower on average, supporting the idea that preprints should be considered valid scientific contributions. An ongoing second phase of the project is comparing preprints to their own published versions in order to more directly assess the effects of peer review.

Comparing quality of reporting between preprints and peer-reviewed articles – first results are in! – ASAPbio

“A while ago, we blogged about our crowdsourced project to compare quality of reporting between preprints and peer-reviewed articles. Almost a year later, we are happy to announce that our first results are now published in bioRxiv – in large part thanks to the people who joined us after reading our previous post!

This first part of the project aimed to study whether quality of reporting – measured by an objective questionnaire on the reporting of specific items within the methods and results sections – was different between samples of preprints from bioRxiv and peer-reviewed articles from PubMed. A second part, still ongoing, aims to compare preprints to their own published versions – and if you are interested in participating, keep reading to learn how to join!….”

Kindness, Culture, and Caring: The Open Science Way | HASTAC

“There are lots of ways that the rational, logical, hyper-competitive, winner-take-all, zero-sum, prisoner’s dilemma, nice-guys-finish-last, single-bottom-line, annual-productivity ratchet – or add your adjective here – mindset is just wrong for sustaining the academy and bad for science. For decades now, the same neo-liberal economic schemes that have been used to reshape how governments budget their funds have also made dramatic and disturbing inroads into university budgets and governance. Open science can show how that trend is a race to the bottom for universities. What do you say, we turn around and go another way?…”

[1902.02534] Crowdsourcing open citations with CROCI — An analysis of the current status of open citations, and a proposal

Abstract:  In this paper, we analyse the current availability of open citations data in one particular dataset, namely COCI (the OpenCitations Index of Crossref open DOI-to-DOI citations; this http URL) provided by OpenCitations. The results of these analyses show a persistent gap in the coverage of the currently available open citation data. In order to address this specific issue, we propose a strategy whereby the community (e.g. scholars and publishers) can directly involve themselves in crowdsourcing open citations, by uploading their citation data via the OpenCitations infrastructure into our new index, CROCI, the Crowdsourced Open Citations Index.
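
The coverage analysis described here rests on being able to ask COCI which DOI-to-DOI links it already holds. A minimal sketch of such a check follows, assuming the OpenCitations COCI REST API exposes /references/{doi} and /citations/{doi} under its v1 path; the base URL, function name, and placeholder DOI are assumptions for illustration, not details from the paper.

```python
# Illustrative coverage check: count the open DOI-to-DOI links COCI records
# for a single article, in both directions (outgoing references, incoming citations).
import requests

COCI_API = "https://opencitations.net/index/coci/api/v1"  # assumed base URL


def coci_link_counts(doi):
    """Return (outgoing_references, incoming_citations) recorded in COCI for a DOI."""
    counts = []
    for endpoint in ("references", "citations"):
        resp = requests.get(f"{COCI_API}/{endpoint}/{doi}")
        resp.raise_for_status()
        counts.append(len(resp.json()))  # the API returns one JSON record per citation link
    return tuple(counts)


if __name__ == "__main__":
    refs, cits = coci_link_counts("10.1000/xyz123")  # placeholder DOI
    print(f"COCI records {refs} outgoing references and {cits} incoming citations")
```

A low count for an article known to have many references is the sort of gap the CROCI proposal would let scholars and publishers fill by depositing their own citation data.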