REPEAT (Reproducible Evidence: Practices to Enhance and Achieve Transparency)

“Replication is a cornerstone of the scientific method. Historically, public confidence in the validity of healthcare database research has been low. Drug regulators, patients, clinicians, and payers have been hesitant to trust evidence from databases due to high profile controversies with overturned and conflicting results. This has resulted in underuse of a potentially valuable source of real-world evidence….

Division of Pharmacoepidemiology & Pharmacoeconomics [DoPE]

Brigham & Women’s Hospital and Harvard Medical School.”

Contracting in the Age of Open Access Publications. A Systematic Analysis of Transformative Agreements | Ouvrir la Science

The “socioeconomics of scientific publication” Project, Committee for Open Science

Final report – 17 December 2020 – Contract No. 206-150

Quentin Dufour (CNRS Postdoctoral fellow) David Pontille (CNRS senior researcher) Didier Torny (CNRS senior researcher)

Mines ParisTech, Center for the Sociology of Innovation • PSL University

Supported by the Ministry of Higher Education, Research and Innovation

Summary

This study focuses on one of the contemporary innovations in the economics of academic publishing: so-called transformative agreements, a relatively circumscribed object within the relations between library consortia and academic publishers, and temporally situated between 2015 and 2020. The stated objective of this type of agreement is to organise the transition from the traditional model of journal subscription (often sold as thematic bundles or collections) to open access publishing by reallocating the budgets devoted to subscriptions.

Our sociological analysis constitutes the first systematic study of this object, based on a review of 197 agreements. The corpus thus constituted includes agreements characterised by the co-presence of a subscription component and an open access publication component, however minimal (publication “tokens” offered, discounts on APCs, etc.). Agreements concerned solely with centralised funding for open access publishing were therefore excluded from the analysis, whether with publishers that only offer author-pays journals (PLOS, Frontiers, MDPI, etc.) or with publishers whose catalogue includes open access journals. The oldest agreement in our corpus was signed in 2010 and the most recent in 2020; agreements starting only in 2021, even when announced during the study, were not retained.

Several results emerge from our analysis. First, a great diversity of actors is involved, with 22 countries and 39 publishers, even if some consortia (Netherlands, Sweden, Austria, Germany) and some publishers (CUP, Elsevier, RSC, Springer) signed many more agreements than others. Second, the duration of the agreements, ranging from one to six years, is very unequally distributed: more than half of the agreements (103) were signed for 3 years, and only a small proportion (22 agreements) for 4 years or more. Finally, despite repeated calls for transparency, fewer than half of the agreements (96) had an accessible text at the time of this study, with no recent trend towards greater availability.

Of the 96 available agreements (47 of them signed in 2020), 62 have been analysed in depth. To our knowledge, this is the first analysis on this scale, on material that was not only unpublished but previously subject to confidentiality clauses. Based on a careful reading, the study describes their properties in detail, from the materiality of the document to the financial formulas, including their morphology and all the rights and duties of the parties. We analysed the content of the agreements as a collection, looking for commonalities and variations through an explicit coding of their characteristics. The study also points out some uncertainties, in particular their “transitional” character, which remains strongly debated.

From a morphological point of view, the agreements show great diversity in size (from 7 to 488 pages) and structure. Nevertheless, by definition, each articulates two essential objects: on the one hand, the conditions for reading journal articles, in the form of a subscription, combining concerns of access and security; on the other, the modalities of open access publication, combining the management of a new type of workflow with a whole series of possible options. These options include the scope of the journals covered (hybrid and/or open access), the licences available, the degree of obligation to publish, the eligible authors, and the volume of publishable articles.

One of the most important results of this in-depth analysis is the discovery of an almost complete decoupling, within the agreements themselves, between the subscription object and the publication object. Subscription is systematically configured as a closed, paid-for world, which triggers a series of mechanisms for identifying the legitimate circulation of both content and users; in particular, it insists on prohibitions on reusing or even copying academic articles. Open access publishing, by contrast, is attached to a world governed by free access to content, which leads to concerns about workflow management and accessibility modalities. Moreover, the different elements that make up these contractual objects are not interconnected: on one side, the readers are all members of the subscribing institutions, while on the other, only corresponding authors are concerned; the lists of journals accessible to readers and those reserved for open access publication are usually distinct; and the workflows have totally different…

New Open Access Business Models – What’s Needed to Make Them Work? – The Scholarly Kitchen

“The third CHORUS Forum meeting, held last week, is a relatively new entrant into the scholarly communication meeting calendar. The meeting has proven to be a rare opportunity to bring together publishers, researchers, librarians, and research funders. I helped organize and moderated a session during the Forum, on the theme of “Making the Future of Open Research Work.” You can watch my session, which looked at new models for sustainable and robust open access (OA) publishing, along with the rest of the meeting in the video below.

The session focuses on the operationalization of the move to open access and the details of what it takes to experiment with a new business model. The model the community has the most experience with, the individual author paying an article-processing-charge (APC), works really well for some authors, in some subject areas, in some geographies. But it is not a universal solution to making open access work and it creates new inequities as it resolves others….

Some of the key takeaways for me were found in the commonalities across all of the models. The biggest hurdle that each organization faced in executing its plans was gathering and analyzing author data. As Sara put it, “Data hygiene makes or breaks all of these models.” For PLOS and the ACM, what they’re asking libraries to support is authorship – the model essentially says “this many papers had authors from your institution and what you pay will largely be based on the volume of your output.” But disambiguating author identity, and especially identifying which institutions each represents, remains an enormous problem. While we do have persistent identifiers (PIDs) like ORCID, and the still-under-development ROR, their use is not universal, and we still lack a unifying mechanism to connect the various PIDs into a simple, functional tool to support this type of analysis.

One solution would be requiring authors to accurately identify their host institutions from a controlled vocabulary, but this runs up against most publishers’ desire to streamline the article submission process. There’s a balance to be struck, but probably one that’s going to ask authors to provide more accurate and detailed information….
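The affiliation-disambiguation problem described above is concrete enough to sketch. As a hedged illustration (not anything proposed in the post itself), the snippet below sends a free-text affiliation string to ROR's public affiliation-matching endpoint and keeps the best candidate; the endpoint and response fields are as documented for the ROR API at the time of writing, and the `requests` dependency is assumed.

```python
# Hedged sketch: map a free-text affiliation string to a ROR ID.
# Assumes the public ROR affiliation-matching endpoint
# (https://api.ror.org/organizations?affiliation=...) behaves as documented;
# field names may differ from the live API.
import requests

def match_affiliation(affiliation):
    """Return the best ROR candidate for a raw affiliation string, if any."""
    resp = requests.get(
        "https://api.ror.org/organizations",
        params={"affiliation": affiliation},
        timeout=10,
    )
    resp.raise_for_status()
    candidates = resp.json().get("items", [])
    # ROR flags its preferred match with `chosen`; fall back to the top-ranked hit.
    chosen = [c for c in candidates if c.get("chosen")]
    best = chosen[0] if chosen else (candidates[0] if candidates else None)
    if best is None:
        return None
    org = best["organization"]
    return {"ror_id": org["id"], "name": org["name"], "score": best.get("score")}

if __name__ == "__main__":
    print(match_affiliation("Dept. of Biology, Harvard Medical School, Boston MA"))
```

In a publisher workflow, a match like this would still need author confirmation at submission time, which is exactly the extra step the post suggests authors may increasingly be asked to take.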

[M]oving beyond the APC is essential to the long-term viability of open access, and there remains much experimentation to be done….”

Building a service to support cOAlition S’s Price & Service Transparency Frameworks: an Invitation to Tender | Plan S

“The European Science Foundation (ESF), on behalf of the cOAlition S members, is seeking to contract with a supplier to build a secure, authentication-managed web-based service which will enable:

academic publishers to upload data, in accord with one of the approved cOAlition S Price and Service Transparency Frameworks;
approved users to be able to login to this service and for a given journal, determine what services are provided and at what price;
approved users to be able to select several journals and compare the services offered and prices charged by the different journals selected;
the Journal Checker Tool (JCT) – via an API call – to determine whether a journal has (or has not) provided data in line with one of the approved Price and Service Transparency Frameworks.

Given that some of the data that will be made accessible through this service is considered sensitive, it is imperative that suppliers can build a secure service such that data uploaded by a publisher, and intended by them for approved users only, cannot be accessed by any other publisher.

This service must be functional – in terms of allowing publishers to upload their “Framework Reports” – by 1 December 2021. The service must be accessible to all approved users – including the JCT – by 1 June 2022….”
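To make the JCT integration requirement concrete, the sketch below imagines how such an API call might look. The service was still out to tender when this was written, so the base URL, path, and response field are entirely hypothetical placeholders, not part of any published specification.

```python
# Hypothetical sketch only: the tender asks for an API through which the
# Journal Checker Tool (JCT) can learn whether a journal has supplied data
# under an approved Price & Service Transparency Framework. No endpoint is
# specified in the text; the URL, path, and response shape below are invented
# purely for illustration.
import requests

TRANSPARENCY_API = "https://transparency.example.org/api/journals"  # placeholder

def has_framework_report(issn):
    """Ask the (hypothetical) service whether a journal has filed a Framework Report."""
    resp = requests.get(f"{TRANSPARENCY_API}/{issn}/framework-report", timeout=10)
    if resp.status_code == 404:
        return False
    resp.raise_for_status()
    return bool(resp.json().get("framework_report_available"))
```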

Increasing transparency through open science badges

“Authors who adopt transparent practices for an article in Conservation Biology are now able to select from 3 open science badges: open data, open materials, and preregistration. Badges appear on published articles as visible recognition and highlight these efforts to the research community. There is an emerging body of literature regarding the influences of badges, for example, an increased number of articles with open data (Kidwell et al. 2016) and increased rate of data sharing (Rowhani-Farid et al. 2018). However, in another study, Rowhani-Farid et al. (2020) found that badges did not “noticeably motivate” researchers to share data. Badges, as far as we know, are the only data-sharing incentive that has been tested empirically (Rowhani-Farid et al. 2017).

Rates of data and code sharing are typically low (Herold 2015; Roche et al. 2015; Archmiller et al. 2020; Culina et al. 2020). Since 2016, we have asked authors of contributed papers, reviews, method papers, practice and policy papers, and research notes to tell us whether they “provided complete machine- and human-readable data and computer code in Supporting Information or on a public archive.” Authors of 31% of these articles published in Conservation Biology said they shared their data or code, and all authors provide human-survey instruments in Supporting Information or via a citation or online link (i.e., shared materials)….”

Open Research Transparency

“Currently, innovative ideas are abundant in science, yet we are still short of practical tools to implement these ideas in everyday practice. A tool is practical if it can achieve its aim without requiring much, if any, extra effort from the user. The consideration of user experience, efficiency, and user-friendliness is still weak in the development of scientific tools. In this workshop, three early career researchers will present their innovations that aim to improve scientific practice in an efficient way, and we invite the audience to a discussion to formalise our thinking about the development of new tools….”

Association Science2 (Science for Science)

“Our objectives are:

to promote the dissemination of high-quality research without private intermediaries, primarily through the creation of top-level open access journals with low article-processing charges (€500/article + VAT);
to prioritize standards of excellence and complete transparency in the process of open dissemination of science;
to promote the training of early career scientists from around the world, prioritizing excellence….”

Toward assessing clinical trial publications for reporting transparency – ScienceDirect

“Highlights

• We constructed a corpus of RCT publications annotated with CONSORT checklist items.

• We developed text mining methods to identify methodology-related checklist items.

• A BioBERT-based model performs best in recognizing adequately reported items.

• A phrase-based method performs best in recognizing infrequently reported items.

• The corpus and the text mining methods can be used to address reporting transparency….”
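The highlights above mention a BioBERT-based model for recognising CONSORT checklist items. As a rough, hedged sketch of that kind of setup (not the authors' code), the snippet below loads the public dmis-lab BioBERT checkpoint with an untrained sequence-classification head; the label set is invented for illustration, and the head would need fine-tuning on the annotated RCT corpus before its predictions meant anything.

```python
# Illustrative sketch, not the paper's pipeline: score a methods sentence
# against a few example CONSORT items with a BioBERT encoder. The checkpoint
# is the public dmis-lab BioBERT base model; the classification head is
# randomly initialised here and must be fine-tuned on the annotated corpus.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

CONSORT_ITEMS = ["randomisation_sequence", "allocation_concealment", "blinding"]  # example labels

tokenizer = AutoTokenizer.from_pretrained("dmis-lab/biobert-base-cased-v1.1")
model = AutoModelForSequenceClassification.from_pretrained(
    "dmis-lab/biobert-base-cased-v1.1", num_labels=len(CONSORT_ITEMS)
)

def classify(sentence):
    """Return the most likely CONSORT item for one sentence (after fine-tuning)."""
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return CONSORT_ITEMS[int(logits.argmax(dim=-1))]

print(classify("Participants were randomly assigned using a computer-generated sequence."))
```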

Assessment of transparency indicators across the biomedical literature: How open is open?

Abstract:  Recent concerns about the reproducibility of science have led to several calls for more open and transparent research practices and for the monitoring of potential improvements over time. However, with tens of thousands of new biomedical articles published per week, manually mapping and monitoring changes in transparency is unrealistic. We present an open-source, automated approach to identify 5 indicators of transparency (data sharing, code sharing, conflicts of interest disclosures, funding disclosures, and protocol registration) and apply it across the entire open access biomedical literature of 2.75 million articles on PubMed Central (PMC). Our results indicate remarkable improvements in some (e.g., conflict of interest [COI] disclosures and funding disclosures), but not other (e.g., protocol registration and code sharing) areas of transparency over time, and map transparency across fields of science, countries, journals, and publishers. This work has enabled the creation of a large, integrated, and openly available database to expedite further efforts to monitor, understand, and promote transparency and reproducibility in science.
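To give a flavour of what automated transparency screening involves (the published pipeline is considerably more sophisticated), the sketch below flags the five indicators with simple keyword patterns over an article's full text. The patterns are illustrative assumptions, not the authors' actual rules.

```python
# Minimal sketch of the idea behind automated transparency screening: flag
# the five indicators with keyword patterns. These regexes are illustrative
# placeholders, not the rules used in the published study.
import re

INDICATOR_PATTERNS = {
    "data_sharing": r"data (are|is) available|data availability statement",
    "code_sharing": r"code (is|are) available|github\.com|source code",
    "coi_disclosure": r"conflicts? of interest|competing interests?",
    "funding_disclosure": r"funded by|funding:|this work was supported by",
    "registration": r"clinicaltrials\.gov|prospero|trial registration",
}

def screen(full_text):
    """Return a per-indicator True/False flag for one article's full text."""
    text = full_text.lower()
    return {name: bool(re.search(pattern, text)) for name, pattern in INDICATOR_PATTERNS.items()}

example = "Data are available on request. This work was supported by NIH grant X."
print(screen(example))
```

Run over millions of PMC articles, per-article flags like these can then be aggregated by year, field, country, journal, or publisher, which is the kind of database the abstract describes.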

Guest Post – Putting Publications into Context with the DocMaps Framework for Editorial Metadata – The Scholarly Kitchen

“Trust in academic journal articles is based on similar expectations. Journals carry out editorial processes from peer review to plagiarism checks. But these processes are highly heterogeneous in how, when, and by whom they are undertaken. In many cases, it’s not always readily apparent to the outside observer that they take place at all. And as new innovations in peer review and the open research movement lead to new experiments in how we produce and distribute research products, understanding what events take place is an increasingly important issue for publishers, authors, and readers alike.

With this in mind, the DocMaps project (a joint effort of the Knowledge Futures Group, ASAPbio, and TU Graz, supported by the Howard Hughes Medical Institute), has been working with a Technical Committee to develop a machine-readable, interoperable and extensible framework for capturing valuable context about the processes used to create research products such as journal articles. This framework is being designed to capture as much (or little) contextual data about a document as desired by the publisher: from a minimum assertion that an event took place, to a detailed history of every edit to a document….”
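As a purely hypothetical illustration of the kind of machine-readable context DocMaps aims to capture, from a minimal assertion that an event took place to richer editorial detail, the snippet below builds a DocMap-like record as a plain Python dictionary. The field names and values are invented for this sketch and do not follow the actual DocMaps schema.

```python
# Hypothetical sketch of a machine-readable editorial history in the spirit of
# DocMaps. The structure and field names are illustrative only, not the real
# DocMaps vocabulary; the minimum case is a bare assertion that an editorial
# event (here, peer review) took place.
import json

docmap_like_record = {
    "subject": {"doi": "10.1234/example.5678", "type": "journal-article"},  # placeholder DOI
    "steps": [
        {
            "assertion": "peer-reviewed",
            "actors": [{"role": "reviewer", "anonymous": True}],
            "happened": "2021-05-03",
        },
        {
            "assertion": "plagiarism-checked",
            "actors": [{"role": "publisher"}],
            "happened": "2021-04-20",
        },
    ],
}

print(json.dumps(docmap_like_record, indent=2))
```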