Are Publishers Learning from Their Mistakes? – The Scholarly Kitchen

“At the STM Association Annual Meeting in ‘virtual Frankfurt’ last week, much of the focus was on how scholarly publishers are responding to the COVID crisis. Publishing executives reported how they have accelerated their editorial and peer review processes for COVID submissions, rightly taking pride in the contributions they have made to fighting the pandemic. They also emphasized again and again that they want to be more trusted. This is a formidable challenge in light of some recent failures. To achieve their objectives, publishers need to become more comfortable talking about their mistakes to prove convincingly that they are learning from them….

At the same time, I would encourage publishers to balance their celebrations with self-reflection. Scholarly publishers wish to see themselves as stewards of the scholarly record and of the transition to open science. To do so in a way that is compelling to all stakeholders, they must continuously increase the quality and rigor of their work, probe their processes for weaknesses, and make their work ever more resilient against potential points of failure. …

Today, the scholarly publishing sector looks to reestablish itself as a steward of the scholarly record and a trusted party to lead the transition to open science, and we need it in this type of role more than ever. Being entrusted with this role requires that publishers identify problems honestly and with humility, since trust is earned, or squandered, at a sector-wide level. The sector does not need triumphalism from leaders that enables their organizations to downplay festering problems. And it does not need its boosters to selectively amplify concerns about preprints when publishers should focus on their own shortcomings. The sector needs not only to ask for trust but also to make sure that it is earning it every day.”

Critical Lessons From Last Week’s Retraction of Two COVID-19 Papers | MedPage Today

“According to an investigative report in The Guardian, Sapan Desai had been previously linked to highly ambitious (and dubious) claims. In 2008, he promoted a ‘next generation human augmentation device’ called Neurodynamics Flow, which he said ‘can help you achieve what you never thought was possible,’ claiming that ‘with its sophisticated programming, optimal neural induction points, and tried and true results, Neurodynamics Flow allows you to rise to the peak of human evolution.’

It is important to realize that concerns about the existence and validity of the Surgisphere databases surfaced only after the paper on hydroxychloroquine was published. The earlier NEJM paper on inhibitors of the renin-angiotensin system was never criticized, even though Surgisphere was the primary data and analytical source.

Why? The NEJM paper included data from 8,910 patients treated at 169 hospitals across three continents (Asia, Europe, and North America), a database that may have seemed credible, even though Surgisphere had no track record of publications. In contrast, the Lancet paper cited data from 96,032 patients treated at 671 hospitals across six continents. The authors’ decision to include data from Australia and Africa appears to have been a fatal strategic error, since these data could far more easily be matched against public records. When the data from these two regions failed to make sense, the paper unraveled. Had the authors not overreached and instead confined their analysis to three continents, the Lancet paper might well have survived….

The possibility that fraudulent data would have been accepted — if it had not been for the excessive ambitions of the authors — is distressing beyond words. The implications for medical research are profound….

Many have criticized preprint servers because they allow the dissemination of data and information that have not been peer-reviewed. But can we continue to denigrate papers lacking peer review if the process failed us at this critical time? Some might still argue that peer review was highly effective in the two COVID-19 retractions; it simply occurred following (rather than prior to) publication. However, even the staunchest advocates of journals as gatekeepers must concede that post-publication examination and analysis can occur whether the information is presented in a top-tier journal or on a preprint server….”

The Pandemic Claims New Victims: Prestigious Medical Journals

Two major study retractions in one month have left researchers wondering if the peer review process is broken.

Self-correction of science: a comparative study of negative citations and post-publication peer review

Abstract:  This study investigates whether negative citations in articles and comments posted on post-publication peer review platforms contribute equally to the correction of science. These two types of written evidence of disputes are compared by analyzing their occurrence in relation to articles that have already been retracted or corrected. We identified retracted or corrected articles in a corpus of 72,069 articles coming from the Engineering field, from 3 journals (Science, Tumor Biology, Cancer Research), and from 3 authors with many retractions to their credit (Sarkar, Schön, Voinnet). We used Scite to retrieve contradicting citations and PubPeer to retrieve the number of comments for each article, and then considered them as traces left by scientists to contest published results. Our study shows that contradicting citations are very uncommon and that retracted or corrected articles are not contradicted more often in scholarly articles than those that are neither retracted nor corrected, but they do generate more comments on PubPeer, presumably because contributors can remain anonymous. Moreover, post-publication peer review platforms, although external to the scientific publication process, contribute more to the correction of science than negative citations do. Consequently, post-publication peer review venues, and more specifically the comments found on them, although not contributing to the scientific literature, are a mechanism for correcting science. Lastly, we introduce the idea of strengthening the role of contradicting citations to rehabilitate the clear expression of judgment in scientific papers.
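At its core, the comparison described above is a tally of how often signals of dispute attach to retracted or corrected articles versus the rest of a corpus. A minimal sketch of that kind of tally in Python; every record, field name, and count below is an invented placeholder, not data from the study:

```python
# Toy version of the study's comparison: do retracted or corrected
# articles attract more contradicting citations and PubPeer comments
# than the rest of a corpus? All records here are invented placeholders.
from dataclasses import dataclass

@dataclass
class Article:
    retracted_or_corrected: bool
    contradicting_citations: int  # e.g., as retrieved via Scite
    pubpeer_comments: int         # e.g., as retrieved via PubPeer

def mean_count(articles: list, attr: str) -> float:
    """Average value of `attr` across a group of articles."""
    return sum(getattr(a, attr) for a in articles) / len(articles)

corpus = [
    Article(True, 1, 12), Article(True, 0, 7),
    Article(False, 0, 0), Article(False, 1, 1), Article(False, 0, 0),
]

flagged = [a for a in corpus if a.retracted_or_corrected]
others = [a for a in corpus if not a.retracted_or_corrected]
for attr in ("contradicting_citations", "pubpeer_comments"):
    print(f"{attr}: {mean_count(flagged, attr):.2f} (retracted/corrected) "
          f"vs {mean_count(others, attr):.2f} (others)")
```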

Impact Factor vs Integrity Factor: Which Siren Should Be Our Guide?

“Rather than being focused on the ‘Impact Factor,’ perhaps authors should focus on the other ‘IF,’ the ‘Integrity Factor.’ I propose that we begin calculating the ‘Integrity Factor’ for journals; perhaps this should be the number of retractions in, say, a 5- or 10-year period divided by the number of original research papers published. Authors could then pick a journal based upon its integrity rather than its impact….”
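As arithmetic, the proposal is a straightforward ratio. A minimal sketch in Python following the definition quoted above; the function name and the sample counts are illustrative, not real journal figures:

```python
def integrity_factor(retractions: int, original_papers: int) -> float:
    """Proposed 'Integrity Factor': retractions over a fixed window
    (say, 5 or 10 years) divided by original research papers
    published in the same window. Lower is better."""
    if original_papers == 0:
        raise ValueError("no original research papers in the window")
    return retractions / original_papers

# Illustrative numbers only: 4 retractions against 2,500 original
# research papers over a 10-year window.
print(f"IF(integrity) = {integrity_factor(4, 2500):.4f}")  # 0.0016
```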

Retracted Science and the Retraction Index

Abstract:  Articles may be retracted when their findings are no longer considered trustworthy due to scientific misconduct or error, when they plagiarize previously published work, or when they are found to violate ethical guidelines. Using a novel measure that we call the “retraction index,” we found that the frequency of retraction varies among journals and shows a strong correlation with the journal impact factor. Although retractions are relatively rare, the retraction process is essential for correcting the literature and maintaining trust in the scientific process.
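The abstract does not spell out the formula, so here is a minimal sketch of one plausible reading: retractions scaled per 1,000 published articles over a fixed window, then correlated with impact factor. The scaling constant and every journal figure below are assumptions for illustration, not data from the paper:

```python
from statistics import correlation  # Python 3.10+

def retraction_index(retractions: int, articles: int, per: int = 1000) -> float:
    """One plausible reading of a 'retraction index': retractions per
    `per` articles published by a journal over a fixed time window."""
    return retractions * per / articles

# Invented journal records: (impact factor, retractions, articles published).
journals = [(30.0, 25, 8_000), (10.0, 6, 9_000), (3.0, 2, 12_000), (1.5, 1, 15_000)]

impact_factors = [i for i, _, _ in journals]
indices = [retraction_index(r, a) for _, r, a in journals]

# Pearson correlation between impact factor and retraction index;
# strongly positive for these made-up numbers.
print(f"r = {correlation(impact_factors, indices):.2f}")
```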