Compliance with ethical rules for scientific publishing in biomedical Open Access journals indexed in Journal Citation Reports | proLékaře.cz

Abstract:  This study examined compliance with the criteria of transparency and best practice in scholarly publishing defined by COPE, DOAJ, OASPA and WAME in biomedical Open Access journals indexed in Journal Citation Reports (JCR). 259 Open Access journals were drawn from the JCR database, and their compliance with 14 criteria for transparency and best practice in scholarly publishing was verified against their websites. Journals received one penalty point for each criterion, as defined by COPE, DOAJ, OASPA and WAME, that they failed to meet. The average number of penalty points was 6; 149 journals (57.5%) received ≤ 6 points and 110 journals (42.5%) received ≥ 7 points. Only 4 journals met all criteria and received no penalty points. The criteria most frequently unmet were declaration of a Creative Commons license (164 journals), affiliation of editorial board members (116), unambiguity of article processing charges (115), anti-plagiarism policy (113), and the number of editorial board members from developing countries (99). The research shows that JCR cannot be used as a whitelist of journals that comply with the criteria of transparency and best practice in scholarly publishing.
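The scoring described above is a simple per-criterion tally: one penalty point for each of the 14 criteria a journal fails to meet. A minimal sketch of that tally in Python, using illustrative criterion names and an invented journal record rather than the study's actual checklist:

```python
# One penalty point per unfulfilled transparency criterion, out of 14.
# Criterion names and the example record are illustrative only, not the
# study's actual checklist or data.

CRITERIA = [
    "creative_commons_license_declared",
    "editorial_board_affiliations_listed",
    "article_processing_charges_unambiguous",
    "anti_plagiarism_policy_published",
    # ... the study checks 14 criteria in total
]

def penalty_points(journal: dict) -> int:
    """Count the criteria this journal's website fails to satisfy."""
    return sum(1 for criterion in CRITERIA if not journal.get(criterion, False))

example = {
    "creative_commons_license_declared": False,
    "editorial_board_affiliations_listed": True,
    "article_processing_charges_unambiguous": False,
    "anti_plagiarism_policy_published": True,
}
print(penalty_points(example))  # -> 2 penalty points for this invented record
```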

OSF Preprints | Open science and modified funding lotteries can impede the natural selection of bad science

Abstract:  Assessing scientists using exploitable metrics can lead to the degradation of research methods even without any strategic behavior on the part of individuals, via “the natural selection of bad science.” Institutional incentives to maximize metrics like publication quantity and impact drive this dynamic. Removing these incentives is necessary, but institutional change is slow. However, recent developments suggest possible solutions with more rapid onsets. These include what we call open science improvements, which can reduce publication bias and improve the efficacy of peer review. In addition, there have been increasing calls for funders to move away from prestige- or innovation-based approaches in favor of lotteries. Using computational modeling, we investigated whether such changes are likely to improve the reproducibility of science even in the presence of persistent incentives for publication quantity. We found that modified lotteries, which allocate funding randomly among proposals that pass a threshold for methodological rigor, effectively reduce the rate of false discoveries, particularly when paired with open science improvements that increase the publication of negative results and improve the quality of peer review. In the absence of funding that targets rigor, open science improvements can still reduce false discoveries in the published literature but are less likely to improve the overall culture of research practices that underlie those publications.
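The allocation rule at the heart of the modified lottery is easy to state: fund proposals drawn at random from those that clear a methodological rigor threshold. A minimal sketch under assumed parameters (the threshold value, the 0–1 scoring scale, and the proposal fields are illustrative, not taken from the authors' model):

```python
import random

RIGOR_THRESHOLD = 0.7  # assumed cut-off on a 0-1 rigor score

def modified_lottery(proposals, n_awards, rng=random):
    """Fund n_awards proposals chosen uniformly at random from those above the rigor threshold."""
    eligible = [p for p in proposals if p["rigor"] >= RIGOR_THRESHOLD]
    return rng.sample(eligible, min(n_awards, len(eligible)))

proposals = [
    {"id": "A", "rigor": 0.92},
    {"id": "B", "rigor": 0.55},  # below threshold: never eligible for funding
    {"id": "C", "rigor": 0.74},
    {"id": "D", "rigor": 0.81},
]
print(modified_lottery(proposals, n_awards=2))
```

Prestige- or innovation-based funding would instead rank the eligible pool; the point of the lottery variant is that rigor gates eligibility while chance, not metrics, decides among qualified proposals.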

Over-optimization of academic publishing metrics: observing Goodhart’s Law in action | GigaScience | Oxford Academic

Abstract:  Background

The academic publishing world is changing significantly, with ever-growing numbers of publications each year and shifting publishing patterns. However, the metrics used to measure academic success, such as the number of publications, citation number, and impact factor, have not changed for decades. Moreover, recent studies indicate that these metrics have become targets and follow Goodhart’s Law, according to which, “when a measure becomes a target, it ceases to be a good measure.”

Results

In this study, we analyzed >120 million papers to examine how the academic publishing world has evolved over the last century, with a deeper look into the specific field of biology. Our study shows that the validity of citation-based measures is being compromised and their usefulness is lessening. In particular, the number of publications has ceased to be a good metric as a result of longer author lists, shorter papers, and surging publication numbers. Citation-based metrics, such as citation number and h-index, are likewise affected by the flood of papers, self-citations, and lengthy reference lists. Measures such as a journal’s impact factor have also ceased to be good metrics due to the soaring numbers of papers that are published in top journals, particularly from the same pool of authors. Moreover, by analyzing properties of >2,600 research fields, we observed that citation-based metrics are not beneficial for comparing researchers in different fields, or even in the same department.
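To make one of these metrics concrete, the h-index is the largest h such that an author has at least h papers with at least h citations each; a short illustration with invented citation counts:

```python
def h_index(citations):
    """Return the h-index for a list of per-paper citation counts."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([25, 8, 5, 4, 3, 1]))  # -> 4: four papers with at least 4 citations each
```

Because the value can only grow with more papers and more citations, including self-citations, it inherits the inflationary pressures the study documents.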

Conclusions

Academic publishing has changed considerably; now we need to reconsider how we measure success.

The European University Association and Science Europe Join Efforts to Improve Scholarly Research Assessment Methodologies

“Evaluating research and assessing researchers is fundamental to the research enterprise and core to the activities of research funders and research performing organisations, as well as universities. The European University Association (EUA) and Science Europe are committed to building a strong dialogue between their members, who share the responsibility of developing and implementing more accurate, open, transparent and responsible approaches, that better reflect the evolution of research activity in the digital era.

Today, the outcomes of scholarly research are often measured through methods based on quantitative, albeit approximate, indicators such as the journal impact factor. There is a need to move away from reductionist ways of assessing research, as well as to establish systems that better assess research potential. Universities, research funders and research performing organisations are well-placed to explore new and improved research assessment approaches, while also being indispensable in turning these innovations into systemic reforms….”

Rethinking impact factors: better ways to judge a journal

Global efforts are afoot to create a constructive role for journal metrics in scholarly publishing and to displace the dominance of impact factors in the assessment of research. To this end, a group of bibliometric and evaluation specialists, scientists, publishers, scientific societies and research-analytics providers are working to hammer out a broader suite of journal indicators, and other ways to judge a journal’s qualities. It is a challenging task: our interests vary and often conflict, and change requires a concerted effort across publishing, academia, funding agencies, policymakers and providers of bibliometric data.

Here we call for the essential elements of this change: expansion of indicators to cover all functions of scholarly journals, a set of principles to govern their use and the creation of a governing body to maintain these standards and their relevance….”

How to avoid borrowed plumes in academia

Abstract:  Publications in top journals today have a powerful influence on academic careers although there is much criticism of using journal rankings to evaluate individual articles. We ask why this practice of performance evaluation is still so influential. We suggest this is the case because a majority of authors benefit from the present system due to the extreme skewness of citation distributions. “Performance paradox” effects aggravate the problem. Three extant suggestions for reforming performance management are critically discussed. We advance a new proposal based on the insight that fundamental uncertainty is symptomatic for scholarly work. It suggests focal randomization using a rationally founded and well-orchestrated procedure.

Impact Assessment of Non-Indexed Open Access Journals: A Case Study

Abstract:  This case study assesses the impact of a small, open-access social sciences journal not included in citation tracking indexes by exploring measures of the journal’s influence beyond the established “impact factor” formula. An analysis of Google Scholar data revealed the journal’s global reach and value to researchers. This study enabled the journal’s editors to measure the success of their publication according to its professed scope and mission, and to quantify its impact for prospective contributors. The impact assessment strategies outlined here can be leveraged effectively by academic librarians to provide high-value consultancy for scholar-editors of open access research journals.

DORA 6 years out: A global community 14,000 strong – DORA

DORA turns 6 years old this week. Or, as we like to say, this year DORA reached 14,000—that’s how many people have signed DORA, and they come from more than 100 countries! Each signature represents an individual committed to improving research assessment in their community, in their corner of the world. And 1,300 organizations in more than 75 countries, in signing DORA, have publicly committed to improving their practices in research evaluation and to encouraging positive change in research culture….”

The European University Association and Science Europe Join Efforts to Improve Scholarly Research Assessment Methodologies

“EUA and Science Europe are committed to working together on building a strong dialogue between their members, with a view to:

• support necessary changes for a better balance between qualitative and quantitative research assessment approaches, aiming at evaluating the merits of scholarly research. Furthermore, novel criteria and methods need to be developed towards a fairer and more transparent assessment of research, researchers and research teams, conducive to selecting excellent proposals and researchers.

• recognise the diversity of research outputs and other relevant academic activities and their value in a manner that is appropriate to each research field and that challenges the overreliance on journal-based metrics.

• consider a broad range of criteria to reward and incentivise research quality as the fundamental principle of scholarly research, and ascertain assessment processes and methods that accurately reflect the vast dimensions of research quality and credit all scientific contributions appropriately.

EUA and Science Europe will launch activities to further engage their members in improving and strengthening their research assessment practices. Building on these actions, both associations commit to maintaining a continuous dialogue and explore opportunities for joint actions, with a view to promoting strong synergies between the rewards and incentives structures of research funders and research performing organisations, as well as universities….”