Good Practices – Research Institutes – DORA

“DORA’s ultimate aim is not to accumulate signatures but to promote real change in research assessment. One of the keys to this is the development of robust and time-efficient ways of evaluating research and researchers that do not rely on journal impact factors. We are keen to gather and share existing examples of good practice in research assessment, including approaches to funding and fellowships, hiring and promotion, and awarding prizes, that emphasize research itself and not where it is published.

If you know of exemplary research assessment methods that could provide inspiration and ideas for research institutes, funders, journals, professional societies, or researchers, please contact DORA….”

To fix research assessment, swap slogans for definitions

“Two years ago, the DORA steering committee hired me to survey practices in research assessment and promote the best ones. Other efforts have similar goals. These include the Leiden Manifesto and the HuMetricsHSS Initiative.

My view is that most assessment guidelines permit sliding standards: instead of clearly defined terms, they give us feel-good slogans that lack any fixed meaning. Facing the problem will get us much of the way towards a solution.

Broad language increases room for misinterpretation. ‘High impact’ can be code for where research is published. Or it can mean the effect that research has had on its field, or on society locally or globally — often very different things.

Yet confusion is the least of the problems. Descriptors such as ‘world-class’ and ‘excellent’ allow assessors to vary comparisons depending on whose work they are assessing. Academia cannot be a meritocracy if standards change depending on whom we are evaluating. Unconscious bias associated with factors such as a researcher’s gender, ethnic origin and social background helps to perpetuate the status quo. It was only with double-blind review of research proposals that women finally got fair access to the Hubble Space Telescope. Research suggests that using words such as ‘excellence’ in the criteria for grants, awards and promotion can contribute to hypercompetition, in part through the ‘Matthew effect’, in which recognition and resources flow mainly to those who have already received them….”

Dutch universities and research funders move away from the impact factor – ScienceGuide

“By the end of 2019, all parties involved in this project pledge to have signed DORA. This commitment has to be more than an empty gesture. For example, norms such as ‘four publications to obtain a PhD’ will be abolished, and NWO and ZonMw will no longer inquire about h-index or journal impact factor when academics submit grant applications. Instead of asking for a publication list and CV, they will ask for a more ‘narrative’ approach – inquiring about why this research is important, and why the applicant is the right person to carry it out.

This change will be fast, but not instant. The parties involved acknowledge that change takes time. Considering that the focus on metrics such as impact factors took decades to become part of established practice, unlearning these routines will require a considerable amount of time, energy and perseverance. Correctly identifying diverse forms of talent and ‘good research’ will be a learning experience: “To accelerate the desired cultural change in recognition and rewards, we at NWO and ZonMw will strongly focus on training and instruction for our grant evaluation committees.” …”

Driving Institutional Change for Research Assessment Reform

“Academic institutions and funders assess their scientists’ research outputs to help allocate their limited resources. Research assessments are codified in policies and enacted through practices. Both can be problematic: policies if they do not accurately reflect institutional mission and values; and practices if they do not reflect institutional policies.

Even if new policies and practices are developed and introduced, their adoption often requires significant cultural change and buy-in from all relevant parties – applicants, reviewers and decision makers.

We will discuss how to develop and adopt new research assessment policies and practices through panel discussions, short plenary talks and breakout sessions. We will use the levels of intervention described in the “Changing a Research Culture” pyramid (Nosek, 2019) to organize the breakout sessions….”

Chasing cash cows in a swamp? Perspectives on Plan S from Australia and the USA | Unlocking Research

“Rankings are a natural enemy of openness….

Australian universities are heavily financially reliant on overseas students….

University rankings are extremely important in the recruitment of overseas students….

There is incredible pressure on researchers in Australia to perform. This can take the form of reward, with many universities offering financial incentives for publication in ‘top’ journals….

For example, Griffith University’s Research and Innovation Plan 2017-2020 includes: “Maintain a Nature and Science publication incentive scheme”. Publication in these two journals comprises 20% of the score in the Academic Ranking of World Universities….”

Journal practices (other than OA) promoting Open Science goals | Zenodo

“Journal practices (other than OA) promoting Open Science goals (relevance, reproducibility, efficiency, transparency)

Early, full and reproducible content

preregistration – use preregistrations in the review process
registered reports – apply peer review to preregistration prior to the study and publish results regardless of outcomes
preprint policy – liberally allow preprinting in any archive without license restrictions
data/code availability – foster or require open availability of data and code for reviewers and readers
TDM allowance – allow unrestricted text and data mining (TDM) of full text and metadata for any use
null/negative results – publish regardless of outcome
 

Machine readable ecosystem

data/code citation – promote citation and use standards
persistent IDs – e.g. DOI, ORCID, ROR, Open Funder Registry, grant IDs
licenses (in Crossref) – register (open) licenses in Crossref
contributorship roles – credit all contributors for their part in the work
open citations – make citation information openly available via Crossref
 

Peer review

open peer review – e.g. open reports and open identities
peer review criteria – evaluate methodological rigour and reporting quality only, or also judge expected relevance or impact?
rejection rates – publish rejection rates and reconsider high selectivity
post-publication peer review – publish immediately after a sanity check and let peer review follow?
 

Diversity

author diversity – age, position, gender, geography, ethnicity, colour
reviewer diversity – age, position, gender, geography, ethnicity, colour
editor diversity – age, position, gender, geography, ethnicity, colour

Metrics and DORA

DORA: journal metrics – refrain from promoting
DORA: article metrics – provide a range and use responsibly…”
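
Several of the “machine readable ecosystem” practices quoted above (persistent IDs, licenses registered in Crossref, contributor ORCIDs, open citations) leave visible traces in a work’s Crossref metadata, so a journal’s adoption of them can be spot-checked programmatically. The sketch below is a minimal illustration, not an official DORA or Crossref tool: it assumes the public Crossref REST API at api.crossref.org and a hypothetical placeholder DOI. The fields it reads (license, author ORCID, funder, reference, is-referenced-by-count) are standard Crossref work metadata, but whether they are populated depends on what the publisher has deposited.

```python
# Minimal, illustrative sketch: query the public Crossref REST API for one work
# and report which open-metadata fields the publisher has deposited.
# The DOI passed in is a hypothetical placeholder, not a real example.
import requests


def crossref_openness_report(doi: str, mailto: str = "you@example.org") -> dict:
    """Return a simple report of 'machine readable ecosystem' signals for one DOI."""
    resp = requests.get(
        f"https://api.crossref.org/works/{doi}",
        params={"mailto": mailto},  # identifies the caller, per Crossref etiquette
        timeout=30,
    )
    resp.raise_for_status()
    work = resp.json()["message"]

    return {
        # License URLs registered in Crossref (e.g. a Creative Commons URL)
        "license_registered": bool(work.get("license")),
        # ORCID iDs deposited for at least some authors
        "orcid_present": any("ORCID" in author for author in work.get("author", [])),
        # Funder identifiers (Open Funder Registry) and grant numbers
        "funder_ids_present": bool(work.get("funder")),
        # Reference list exposed via the API, i.e. open citations
        "references_open": bool(work.get("reference")),
        "cited_by_count": work.get("is-referenced-by-count", 0),
    }


if __name__ == "__main__":
    # Placeholder DOI for illustration only.
    print(crossref_openness_report("10.1234/example.doi"))
```

Run over a sample of a journal’s recent DOIs, a check like this gives a rough, publisher-deposited view of progress on the list above; it is a heuristic, not a substitute for reading the journal’s actual policies.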

How a working group began the process of DORA implementation at Imperial College London – DORA

“Even so, it is much easier to sign DORA than to deliver on the commitment that signing entails. And while I would always recommend that universities sign as soon as they are ready to commit, because doing so sends such a positive message to their researchers, they should not put pen to paper without a clear idea of how signing will impact their approach to research assessment, or how they are going to develop any changes with their staff….

Out went phrases such as “contributions to research papers that appear in high-impact journals” to be replaced by “contributions to high quality and impactful research.” The change is subtle but significant – the revised guidance makes it plain that ‘impactful research’ in this context is not a cypher for the JIF; rather it is work “that makes a significant contribution to the field and/or has impact beyond the immediate field of research.” …”

Driving Institutional Change for Research Assessment Reform – DORA

“What is this meeting about?

DORA and the Howard Hughes Medical Institute (HHMI) are convening a diverse group of stakeholders to consider how to improve research assessment policies and practices.
By exploring different approaches to cultural and systems change, we will discuss practical ways to reduce the reliance on proxy measures of quality and impact in hiring, promotion, and funding decisions. To focus on practical steps forward that will improve research assessment practices, we are not going to discuss the well-documented deficiencies of the Journal Impact Factor (JIF) as a measure of quality….”

Cambridge University signs San Francisco Declaration on Research Assessment | University of Cambridge

“The University of Cambridge and Cambridge University Press today announce that they have signed up to the San Francisco Declaration on Research Assessment (DORA), a set of recommendations agreed in 2012 that seek to ensure that the quality and impact of research outputs are “measured accurately and evaluated wisely”. …”