Good Practices – Research Institutes – DORA

“DORA’s ultimate aim is not to accumulate signatures but to promote real change in research assessment.  One of the keys to this is the development of robust and time-efficient ways of evaluating research and researchers that do not rely on journal impact factors. We are keen to gather and share existing examples of good practice in research assessment, including approaches to funding and fellowships, hiring and promotion, and awarding prizes, that emphasize research itself and not where it is published. 

If you know of exemplary research assessment methods that could provide inspiration and ideas for research institutes, funders, journals, professional societies, or researchers, please contact DORA….”

To fix research assessment, swap slogans for definitions

“Two years ago, the DORA steering committee hired me to survey practices in research assessment and promote the best ones. Other efforts have similar goals. These include the Leiden Manifesto and the HuMetricsHSS Initiative.

My view is that most assessment guidelines permit sliding standards: instead of clearly defined terms, they give us feel-good slogans that lack any fixed meaning. Facing the problem will get us much of the way towards a solution.

Broad language increases room for misinterpretation. ‘High impact’ can be code for where research is published. Or it can mean the effect that research has had on its field, or on society locally or globally — often very different things. Yet confusion is the least of the problems. Descriptors such as ‘world-class’ and ‘excellent’ allow assessors to vary comparisons depending on whose work they are assessing. Academia cannot be a meritocracy if standards change depending on whom we are evaluating. Unconscious bias associated with factors such as a researcher’s gender, ethnic origin and social background helps to perpetuate the status quo. It was only with double-blind review of research proposals that women finally got fair access to the Hubble Space Telescope. Research suggests that using words such as ‘excellence’ in the criteria for grants, awards and promotion can contribute to hypercompetition, in part through the ‘Matthew effect’, in which recognition and resources flow mainly to those who have already received them….”

Content marketing boosts open access adoption | Research Information

“But the lack of widespread adoption of OA is often blamed for slowing down research progress. The truth is, accessing the literature is only half of the story. The other half is the obstacle created by the use of specialised language in the literature. That is, the inability to understand the meaning of the research is an even greater hindrance to multidisciplinary collaboration and open science than access is. It is, therefore, important to carefully consider developing value-added content that is designed to make the original OA research accessible to a wider audience….

This approach requires OA publishers to include the option of adding a marketing fee to the APC….”

Room for everyone’s talent: Toward a new balance in the recognition and reward of academics

Dutch public knowledge institutions and funders call for a modernization of the academic system of recognition and rewards, in particular in five key areas: education, research, impact, leadership and (for university medical centres) patient care. Sicco de Knecht writes, for ScienceGuide, that a culture change and national and international cooperation are required to achieve such modernization.

“Many academics feel there is a one-sided emphasis on research performance, frequently leading to the undervaluation of the other key areas such as education, impact, leadership and (for university medical centres) patient care. This puts strain on the ambitions that exist in these areas. The assessment system must be adapted and improved in each of the areas and in the connections between them.”

Do articles in open access journals have more frequent altmetric activity than articles in subscription-based journals? An investigation of the research output of Finnish universities | SpringerLink

Abstract:  Scientific articles available in Open Access (OA) have been found to attract more citations and online attention, to the extent that it has become common to speak about an OA Altmetrics Advantage. This research investigates how the OA Altmetrics Advantage holds for a specific case of research articles, namely the research outputs from universities in Finland. Furthermore, this research examines disciplinary and platform-specific differences in that (dis)advantage. The new methodological approaches developed in this research focus on relative visibility, i.e. how often articles in OA journals receive at least one mention on the investigated online platforms, and relative receptivity, i.e. how frequently articles in OA journals gain mentions in comparison to articles in subscription-based journals. The results show significant disciplinary and platform-specific differences in the OA advantage, with articles in OA journals within, for instance, veterinary sciences, social and economic geography and psychology receiving more citations and attention on social media platforms, while the opposite was found for articles in OA journals within medicine and health sciences. The results strongly support field- and platform-specific considerations when assessing the influence of journal OA status on altmetrics. The new methodological approaches used in this research will serve future comparative research into the OA advantage of scientific articles over time and between countries.
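
To make the paper’s two measures concrete, here is a minimal sketch assuming a simple table of articles with a journal OA flag and per-platform mention counts; the column names and the ratio framing are illustrative assumptions, not the authors’ code.

```python
import pandas as pd

# Illustrative article-level data: one row per article, with the journal's OA
# status and the number of mentions recorded on one platform (e.g. Twitter).
articles = pd.DataFrame({
    "oa_journal": [True, True, True, False, False, False],
    "twitter_mentions": [3, 0, 1, 2, 0, 0],
})

def relative_visibility(df, mentions_col):
    """Share of articles with at least one mention, OA relative to subscription journals."""
    visible = df[mentions_col] > 0
    return visible[df["oa_journal"]].mean() / visible[~df["oa_journal"]].mean()

def relative_receptivity(df, mentions_col):
    """Mean mentions per article, OA relative to subscription journals."""
    oa_mean = df.loc[df["oa_journal"], mentions_col].mean()
    sub_mean = df.loc[~df["oa_journal"], mentions_col].mean()
    return oa_mean / sub_mean

print(relative_visibility(articles, "twitter_mentions"))   # >1 would suggest an OA visibility advantage
print(relative_receptivity(articles, "twitter_mentions"))  # >1 would suggest an OA receptivity advantage
```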

Do Download Reports Reliably Measure Journal Usage? Trusting the Fox to Count Your Hens? | Wood-Doughty | College & Research Libraries

Abstract:  Download rates of academic journals have joined citation counts as commonly used indicators of the value of journal subscriptions. While citations reflect worldwide influence, the value of a journal subscription to a single library is more reliably measured by the rate at which it is downloaded by local users. If reported download rates accurately measure local usage, there is a strong case for using them to compare the cost-effectiveness of journal subscriptions. We examine data for nearly 8,000 journals downloaded at the ten universities in the University of California system during a period of six years. We find that controlling for number of articles, publisher, and year of download, the ratio of downloads to citations differs substantially among academic disciplines. After adding academic disciplines to the control variables, there remain substantial “publisher effects”, with some publishers reporting significantly more downloads than would be predicted by the characteristics of their journals. These cross-publisher differences suggest that the currently available download statistics, which are supplied by publishers, are not sufficiently reliable to allow libraries to make subscription decisions based on price and reported downloads, at least without making an adjustment for publisher effects in download reports.
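
As a rough sketch of the kind of specification the abstract describes, the snippet below regresses log downloads on log citations and log article counts with publisher, discipline and year controls; the data are synthetic and the variable names are assumptions, not the authors’ model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400  # synthetic journals; all values are made up for illustration

journals = pd.DataFrame({
    "publisher":  rng.choice(["Publisher A", "Publisher B", "Publisher C"], n),
    "discipline": rng.choice(["biology", "economics", "engineering"], n),
    "year":       rng.choice([2015, 2016, 2017, 2018], n),
    "articles":   rng.integers(30, 400, n),
    "citations":  rng.integers(50, 5000, n),
})
# Fake download counts that depend on citations and article counts, plus noise.
journals["downloads"] = (
    2.0 * journals["citations"] + 10 * journals["articles"]
    + rng.normal(0, 500, n)
).clip(lower=1)

# After controlling for citations, article count, discipline and year, any
# systematic differences left on the publisher dummies would correspond to the
# "publisher effects" in reported downloads that the authors describe.
model = smf.ols(
    "np.log(downloads) ~ np.log(citations) + np.log(articles)"
    " + C(publisher) + C(discipline) + C(year)",
    data=journals,
).fit()
print(model.params.filter(like="publisher"))
```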

Tracing the path from social attention to scientific impact | Cardiovascular Research | Oxford Academic

“The ‘citation’ has long served as the bedrock out of which the scientific impact of a published work is carved. This, too, is the basis of common metrics such as the impact factor (IF) and h-index. However, these traditional metrics have been criticized over time for being opaque, difficult to compare across fields, easily manipulatable and ultimately confined to assessing impact within academic circles (rather than educational, or in the general public).1 To confuse matters further, different article databases often report different citation counts for papers depending on their level of coverage.2 For scientists, the ‘publish or perish’ attitude in contemporary science has also led to a fundamental shift in behaviour to emphasize increased output. This includes, for example, splitting papers into multiple, smaller publications. Given the rate of scientific output has doubled approximately every 9 years, it has become increasingly difficult to know exactly what is worth paying attention to in the scientific milieu. Similarly, the considerable time for citations to accumulate hinders an immediate assessment of a work’s impact. These factors are among those which provided the impetus behind the development of Altmetrics: an alternative metric capitalizing on the advent of the internet and social media, where over a third of scholars are on Twitter, to allow a glimpse into a scientific work’s reach at an earlier stage than citations would typically allow.3

Altmetric Attention Scores (AAS) are largely quantitative scores calculated based on weighted variables associated with an article, such as the number of referencing blog posts and Wikipedia articles, Twitter mentions and peer-reviews on Faculty of 1000.4 The different score weightings applied to each variable ostensibly reflect their relative social reach, with news posts and Twitter mentions contributing 8 and 1 points towards the overall AAS, respectively. It gets slightly more complex with Twitter, where scores are applied based on the nature of the Tweeter: a publisher’s account should be valued less than an unconnected individual’s account with a sizeable following. Fundamental to the use of AAS is that they are indicators of attention rather than quality. For this reason, Altmetrics can be seen as complementary to traditional bibliometrics. Where the score is particularly useful is in helping to discover papers, evaluate scientific impact in a broader context and as a tool for examining trends in disciplines over time….”
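
As a rough illustration of how such a weighted score is assembled, the sketch below sums mention weights for one article. The 8-point news and 1-point tweet weights come from the passage above; the other weights and the follower-based tweet modifier are placeholders, not Altmetric’s actual algorithm.

```python
# Illustrative weights: news = 8 and tweets = 1 are mentioned in the text;
# the blog and Wikipedia weights below are placeholders, not Altmetric's own.
WEIGHTS = {"news": 8.0, "blog": 5.0, "wikipedia": 3.0, "tweet": 1.0}

def tweet_modifier(is_publisher_account: bool, followers: int) -> float:
    """Toy stand-in for the tweeter adjustment: publisher accounts count for
    less than independent accounts with a sizeable following."""
    if is_publisher_account:
        return 0.25
    return 1.0 if followers >= 1000 else 0.75

def attention_score(mentions) -> float:
    """Sum weighted mentions for one article.

    `mentions` is a list of dicts like
    {"source": "tweet", "is_publisher": False, "followers": 5000},
    a simplified record of one online mention.
    """
    score = 0.0
    for m in mentions:
        weight = WEIGHTS[m["source"]]
        if m["source"] == "tweet":
            weight *= tweet_modifier(m.get("is_publisher", False), m.get("followers", 0))
        score += weight
    return score

example = [
    {"source": "news"},
    {"source": "tweet", "is_publisher": True, "followers": 200_000},
    {"source": "tweet", "is_publisher": False, "followers": 4_000},
]
print(attention_score(example))  # 8 + 0.25 + 1.0 = 9.25
```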

Open data linked to higher citations for journal articles | News | Chemistry World

“Research papers that make their underlying data openly available are significantly more likely to be cited in future work, according to an analysis led by researchers at the Alan Turing Institute in London that has been published as a preprint. The study, which is currently under peer review, examined nearly 532,000 articles in over 350 open access journals published by Public Library of Science (PLoS) and BioMed Central (BMC) between 1997 and 2018, and found those that linked directly to source data sets received 25% more citations on average….”

GYA and cOAlition S form task force on Open Access publishing – Global Young Academy

“cOAlition S and the Global Young Academy are joining forces to develop a Plan S Monitor Task Force. Plan S is a radical and controversial initiative for Open Access publishing that was launched in September 2018. The plan is supported by cOAlition S, an international consortium of research funders. Plan S requires that, from 2021, scientific publications that result from research funded by public grants must be published in compliant Open Access journals or platforms.

The aim of the Plan S Monitor Task Force is to provide robust indicators by which the impact of Plan S on the research and publication ecosystem can be continuously evaluated. The impact of major policy changes such as Plan S is hard to predict, so it is essential to closely follow their effect from the start. For this, the Task Force will develop key indicators that will allow it to monitor the current situation and every phase of the implementation of Plan S. This will enable lessons to be learned, shared and implemented in a timely fashion to enhance the positive effects and reduce any negative effects of Plan S.”

The effect of “open access” on journal impact factors: A causal analysis of medical journals – NASA/ADS

Abstract:  The Journal Impact Factor (JIF) has a significant influence on authors’ decisions about where to submit their research papers. Whether open access (OA) is beneficial to JIFs and the dissemination of academic research results remains an under-examined issue. Using panel data analysis to address this question, the present study analyzed medical journals extracted from databases such as Web of Science and Ulrich’s Periodicals Directory. The results indicate a causal relationship between JIFs and OA, and specifically that: (1) OA enhances JIFs; (2) countries that are less developed in science and technology are more likely to choose OA for publishing papers; (3) authors’ publication language and number of published papers, as well as publication release cycles, all have significant effects on JIFs.
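
A minimal sketch of a two-way fixed-effects panel regression in the spirit of this abstract, estimating the within-journal association between OA status and impact factor; the synthetic data, variable names and dummy-variable estimator are assumptions, not the authors’ exact method.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_journals, years = 100, list(range(2010, 2019))

# Synthetic journal-year panel: each journal has a stable baseline JIF and may
# flip to OA in some year. All numbers are made up for illustration.
rows = []
for j in range(n_journals):
    baseline = rng.uniform(1.0, 6.0)
    flip_year = rng.choice(years) if rng.random() < 0.5 else None
    for y in years:
        oa = int(flip_year is not None and y >= flip_year)
        jif = baseline + 0.3 * oa + rng.normal(0, 0.4)
        rows.append({"journal_id": j, "year": y, "oa": oa, "jif": max(jif, 0.1)})
panel = pd.DataFrame(rows)

# Two-way fixed effects via dummies: journal dummies absorb stable journal
# characteristics, year dummies absorb field-wide trends, so the `oa`
# coefficient reflects within-journal changes in JIF after a journal flips to OA.
model = smf.ols("jif ~ oa + C(journal_id) + C(year)", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["journal_id"]}
)
print(model.params["oa"])
```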