Citations and metrics of journals discontinued… | F1000Research

Abstract:  Background: Scopus is a leading bibliometric database. It contains a large part of the articles cited in peer-reviewed publications. The journals included in Scopus are periodically re-evaluated to ensure they meet indexing criteria and some journals might be discontinued for ‘publication concerns’. Previously published articles may remain indexed and can be cited. Their metrics have yet to be studied. This study aimed to evaluate the main features and metrics of journals discontinued from Scopus for publication concerns, before and after their discontinuation, and to determine the extent of predatory journals among the discontinued journals.

Methods: We surveyed the list of discontinued journals from Scopus (July 2019). Data on metrics, citations and indexing were extracted from Scopus or other scientific databases for the journals discontinued for publication concerns.
Results: A total of 317 journals were evaluated. Ninety-three percent of the journals (294/317) declared they published using an Open Access model. The subject areas with the greatest number of discontinued journals were Medicine (52/317; 16%), Agricultural and Biological Sciences (34/317; 11%), and Pharmacology, Toxicology and Pharmaceutics (31/317; 10%). The mean number of citations per year after discontinuation was significantly higher than before (median of difference 16.89 citations, p<0.0001), as was the number of citations per document (median of difference 0.42 citations, p<0.0001). Twenty-two percent (72/317) were included in Cabell’s blacklist. The DOAJ currently includes only 9 of these journals, while 61 had previously been included and were later removed, most for ‘suspected editorial misconduct by the publisher’.
Conclusions: Journals discontinued for ‘publication concerns’ continue to be cited despite discontinuation, and predatory behaviour seemed common. These citations may influence scholars’ metrics, prompting artificial career advancement, bonuses, and promotions. Countermeasures should be taken urgently to ensure the reliability of Scopus metrics for the scientific assessment of scholarly publishing at both the journal and author level.
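
The before/after comparison reported in the Results is a paired, non-parametric contrast. Below is a minimal sketch of that kind of analysis, assuming per-journal citation rates before and after discontinuation; the numbers are invented for illustration and are not the study’s data:

```python
# Paired, non-parametric comparison of per-journal citation rates
# before vs. after discontinuation (hypothetical numbers, not the
# study's data; the abstract reports median differences and p-values
# from this kind of test).
import numpy as np
from scipy.stats import wilcoxon

before = np.array([10.2, 3.1, 25.0, 7.8, 0.9])   # mean citations/year before discontinuation
after  = np.array([28.5, 4.0, 41.2, 19.3, 2.1])  # mean citations/year after discontinuation

diff = after - before
stat, p = wilcoxon(after, before)  # Wilcoxon signed-rank test on the pairs
print(f"median of differences: {np.median(diff):.2f} citations, p = {p:.4g}")
```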

DARPA letter to KEI confirming investigation of Moderna for failure to report government funding in patent applications | Knowledge Ecology International

“On Friday, September 18, 2020, KEI received a letter from the Defense Advanced Research Projects Agency (DARPA) confirming that the agency was investigating Moderna for failure to report government funding in patent applications. The Financial Times and other outlets had previously reported this investigation (see: https://www.keionline.org/moderna), but this letter is the first official notice we have received from DARPA.

The letter from DARPA is signed by D. Peter Donaghue, who is the Division Director for Contracts at DARPA.

The letter is short, and confirms that DARPA is conducting an investigation. I would expect Moderna to report this to shareholders at some point….”

What’s Wrong with Social Science and How to Fix It: Reflections After Reading 2578 Papers | Fantastic Anachronism

[Some recommendations:]

“Ignore citation counts. Given that citations are unrelated to (easily-predictable) replicability, let alone any subtler quality aspects, their use as an evaluative tool should stop immediately. [See the sketch after these recommendations.]
Open data, enforced by the NSF/NIH. There are problems with privacy but I would be tempted to go as far as possible with this. Open data helps detect fraud. And let’s have everyone share their code, too—anything that makes replication/reproduction easier is a step in the right direction.
Financial incentives for universities and journals to police fraud. It’s not easy to structure this well because on the one hand you want to incentivize them to minimize the frauds published, but on the other hand you want to maximize the frauds being caught. Beware Goodhart’s law!
Why not do away with the journal system altogether? The NSF could run its own centralized, open website; grants would require publication there. Journals are objectively not doing their job as gatekeepers of quality or truth, so what even is a journal? A combination of taxonomy and reputation. The former is better solved by a simple tag system, and the latter is actually misleading. Peer review is unpaid work anyway, so it could continue as is. Attach a replication prediction market (with the estimated probability displayed in gargantuan neon-red font right next to the paper title) and you’re golden. Without the crutch of “high-ranked journals” maybe we could move to better ways of evaluating scientific output. No more editors refusing to publish replications. You can’t shift the incentives: academics want to publish in “high-impact” journals, and journals want to selectively publish “high-impact” research. So just make it impossible. Plus, as a bonus side effect, this would finally sink Elsevier….”
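
[The bracketed note above points here: the first recommendation rests on the empirical claim that citation counts carry no signal about replicability. A minimal sketch of how one might test that claim on a sample, with purely hypothetical data (a rank correlation between citation counts and a binary replication outcome):]

```python
# Does citation count predict whether a paper replicated?
# Hypothetical data: per-paper citation counts and a 1/0
# replication outcome (e.g. from a large-scale replication project).
from scipy.stats import spearmanr

citations  = [12, 340, 5, 88, 1500, 23, 410, 7, 260, 95]
replicated = [1,  0,   1, 0,  0,    1,  1,   0, 0,   1]

rho, p = spearmanr(citations, replicated)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
# A rho near zero would support the claim that citations carry
# no signal about replicability.
```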

NIH warns drug and device companies to post missing trial data 

“Hundreds of drug companies, medical device manufacturers, and universities owe the public a decade’s worth of missing data from clinical trials, federal officials warned last week.

New rules issued last week in the wake of a federal court ruling in February instructed clinical trial sponsors to submit missing data for trials conducted between 2007 and 2017 “as soon as possible.” For years, many trials conducted during that span have largely been exempted from reporting their data to ClinicalTrials.gov, a public database, meaning a decade of data about approved drugs and medical devices has never been made public.

The court’s ruling, and the federal government’s decision not to appeal it and instead to urge trial sponsors to submit the missing information, represent a major win for transparency advocates, who for years have fought to recover the decadelong gap in publicly available clinical trial data. …

The court ruling, and the resulting change in federal policy, come after years of reporting that has detailed how federal research agencies routinely fail to enforce their own rules regarding clinical trial transparency — which advocates say is critical for the public’s understanding of a given medicine’s safety and efficacy. …”
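
As a rough illustration of the audit the new rules imply, the sketch below flags trials completed in the 2007–2017 window that never posted results. The file name and column names are assumptions for illustration, not the actual ClinicalTrials.gov schema:

```python
# Flag trials from the 2007-2017 window that have never posted
# results. The input file and its columns (nct_id, completion_date,
# results_posted_date) are hypothetical stand-ins for whatever
# export of ClinicalTrials.gov records is being audited.
import csv
from datetime import date

def parse(d):
    return date.fromisoformat(d) if d else None

missing = []
with open("trials.csv", newline="") as f:
    for row in csv.DictReader(f):
        completed = parse(row["completion_date"])
        posted = parse(row["results_posted_date"])
        if completed and date(2007, 1, 1) <= completed <= date(2017, 12, 31):
            if posted is None:
                missing.append(row["nct_id"])

print(f"{len(missing)} trials completed in 2007-2017 with no posted results")
```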

Wikipedia, The Free Online Medical Encyclopedia Anyone Can Plagiarize: Time to Address Wiki-Plagiarism

Abstract:  Plagiarism and self-plagiarism are widespread in biomedical publications, although journals are increasingly implementing plagiarism detection software as part of their editorial processes. Wikipedia, a free online encyclopedia written by its users, has global public health importance as a source of online health information. However, plagiarism of Wikipedia in peer-reviewed publications has received little attention. Here, I present five cases of PubMed-indexed articles containing Wiki-plagiarism, i.e. copying of Wikipedia content into medical publications without proper citation of the source. The true incidence of this phenomenon remains unknown and requires systematic study. The potential scope and implications of Wiki-plagiarism are discussed.
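
Plagiarism screens of the kind the abstract alludes to often reduce to shared-phrase detection. A toy sketch using word 5-gram overlap, meant only to illustrate the idea, not to stand in for any journal’s actual detection software:

```python
# Toy Wiki-plagiarism screen: fraction of a manuscript passage's
# word 5-grams that also appear in a Wikipedia passage. Real
# plagiarism-detection software is far more sophisticated; this
# just illustrates the shared-phrase heuristic.
def ngrams(text, n=5):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(manuscript, wikipedia, n=5):
    m, w = ngrams(manuscript, n), ngrams(wikipedia, n)
    return len(m & w) / len(m) if m else 0.0

wiki = "the hippocampus is a major component of the brain of humans and other vertebrates"
paper = "as is well known the hippocampus is a major component of the brain of humans"
print(f"shared 5-gram fraction: {overlap(paper, wiki):.2f}")  # high values suggest copying
```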

MetaArXiv Preprints | Publication by association: the Covid-19 pandemic reveals relationships between authors and editors

Abstract:  During the COVID-19 pandemic, the rush to scientific and political judgments on the merits of hydroxychloroquine was fuelled by dubious papers which may have been published because the authors were not independent from the practices of the journals in which they appeared. This example leads us to consider a new type of illegitimate publishing entity, “self-promotion journals” which could be deployed to serve the instrumentalisation of productivity-based metrics, with a ripple effect on decisions about promotion, tenure, and grant funding.
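
The ‘self-promotion journal’ pattern described here is in principle measurable: how often a journal’s papers carry at least one author from its own editorial board. A minimal sketch of that metric, using hypothetical names; the preprint’s actual method and thresholds may differ:

```python
# Share of a journal's papers with at least one author who sits on
# its editorial board (hypothetical data; the preprint's actual
# analysis may differ).
board = {"A. Editor", "B. Boardmember", "C. Chief"}

papers = [
    {"title": "Paper 1", "authors": {"A. Editor", "X. Outsider"}},
    {"title": "Paper 2", "authors": {"Y. Outsider"}},
    {"title": "Paper 3", "authors": {"C. Chief", "B. Boardmember"}},
]

overlap = sum(1 for p in papers if p["authors"] & board) / len(papers)
print(f"{overlap:.0%} of papers have a board member as author")
```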


Double dipping and other bad manners

“So in this context, double dipping is when an article is published open access – that is, an author’s fee has been paid for it to be read for free around the world – but the publisher then charges other users to read that article through a subscription. Now, if that were truly the case, the publisher would be paid twice for the same article.

Bad manners indeed!

Yes, but at Elsevier, we do not double dip. We have two models of compensation for an article: through an open access fee or through a subscription – but we are never paid for the same article twice.

But how do you ensure that? How is that managed?

This is managed through our business accounting. Fully gold open access journals, for example, have no subscription price, and therefore no pricing for those journals is included in any licensing contract. Customers are never charged a subscription fee for gold open access journals.

Ok, that makes sense. But what about hybrid journals that publish both gold open access as well as subscription articles?

Yes, I see how this could be confusing. We manage this by maintaining separate accounting streams. If an author opts to publish open access, the article publishing fee is collected and that article is published as open. Done. Those revenues are kept separate from the revenues of the subscription articles. So when pricing for each subscription journal is determined, revenue from the open access articles does not play into that evaluation. We maintain separate accounting and evaluation processes….”
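
The segregated-streams policy described above can be made concrete with a toy model. Everything below (names, figures, the pricing rule) is a hypothetical illustration of the stated principle, not Elsevier’s actual accounting:

```python
# Toy model of 'no double dipping' in a hybrid journal: APC revenue
# is tracked separately, and the subscription price pool is set only
# from subscription-article costs, ignoring open access revenue.
# All figures and the pricing rule are hypothetical.
from dataclasses import dataclass

@dataclass
class HybridJournal:
    subscription_articles: int
    oa_articles: int
    apc: float              # fee per open access article
    cost_per_article: float
    margin: float           # target margin on subscription content

    def oa_revenue(self) -> float:
        return self.oa_articles * self.apc  # segregated stream

    def subscription_price_pool(self) -> float:
        # Priced from subscription articles only; OA revenue and
        # OA article counts never enter this calculation.
        return self.subscription_articles * self.cost_per_article * (1 + self.margin)

j = HybridJournal(subscription_articles=80, oa_articles=20,
                  apc=2000.0, cost_per_article=1500.0, margin=0.3)
print(f"OA stream: {j.oa_revenue():,.0f} | subscription pool: {j.subscription_price_pool():,.0f}")
```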