Next-generation metrics: Responsible metrics and evaluation for open science

“Over the past year, the Expert Group has reviewed available metrics, with special attention to altmetrics, and identified frameworks for responsible usage, in the context of the EC’s agenda for open science. This agenda is developing under five action lines: fostering and creating incentives for open science; removing barriers for open science; mainstreaming and further promoting open access policies; developing an open science cloud; and open science as a socio-economic driver.

A multi-stakeholder Open Science Policy Platform has been established, to advise on strategic direction and implementation. In May 2016, the EU Competitiveness Council issued a set of conclusions on the transition towards an open science system. It noted that the remit of the Open Science Policy Platform should include ‘adapting reward and evaluation systems, alternative models for open access publishing and management of research data (including archiving), altmetrics … and other aspects of open science.’

This is the context in which the Expert Group on Altmetrics undertook its work, and will input findings to EC policymakers and to the Open Science Policy Platform.

[…] 

This report builds on the expertise of the group members, complemented by desk research and an extensive literature review. The group also issued a call for evidence in June 2016 to gather the views of stakeholders. Respondents had one month to reply with brief submissions. They were asked to indicate whether they were making an individual or an organisational response, and what role they occupied in the open science agenda. In total, twenty responses to the call for evidence were received, of which nineteen were valid. The list of respondents can be found in Appendix 1.

A summary of the results from the call for evidence was presented at the Science and Technology Indicators (STI) Conference in Valencia (September 15, 2016) and the 3AM Conference in Bucharest (September 29, 2016). Both occasions were used to gather further feedback. The audience at the STI Conference mainly consisted of researchers in scientometrics and bibliometrics, whereas attendees at the 3AM Conference mainly came from research institutes, altmetric providers, and libraries. Feedback was mostly anonymous, given via plenary contributions and a paper-and-pencil exercise during the 3AM Conference.”

Budapest Open Access Initiative | Open Access: Toward the Internet of the Mind

“On February 14, 2002, a small text of fewer than a thousand words quietly appeared on the Web: titled the “Budapest Open Access Initiative” (BOAI), it gave a public face to discussions between sixteen participants that had taken place on December 1 and 2, 2001 in Budapest, at the invitation of the Open Society Foundations (then known as the Open Society Institute)…. Wedding the old – the scientific ethos – with the new – computers and the Internet – elicited a powerful, historically grounded synthesis that gave gravitas to the BOAI. In effect, the Budapest Initiative stated, Open Access was not the hastily cobbled-up conceit of a small, marginal band of scholars and scientists dissatisfied with their communication system; instead, it asserted anew the central position of communication as the foundation of the scientific enterprise. Communication, as William D. Garvey famously posited, is the “essence of science,” and thanks to the Internet, scientific communication could be further conceived as the distributed system of human intelligence….”

What are the personal and professional characteristics that distinguish the researchers who publish in high- and low-impact journals? A multi-national web-based survey. ecancermedicalscience – The open access journal from the European Institute of Oncology and the OECI

Abstract: Purpose: This study identifies the personal and professional profiles of researchers with a greater potential to publish high-impact academic articles.

Method: The study involved an international survey of journal authors using a web-based questionnaire. The survey examined personal characteristics, funding, perceived barriers to research quality, work-life balance, and satisfaction and motivation in relation to career. The processes of manuscript writing and journal publication were measured using an online questionnaire developed for this study. The responses were compared between the two groups of researchers (those publishing in high-impact and those publishing in low-impact journals) using logistic regression models.

Results: A total of 269 questionnaires were analysed. The researchers shared some common perceptions: both groups reported that they were seeking recognition (or to be leaders in their areas) rather than financial remuneration, and both identified time and funding constraints as the main obstacles to their scientific activities. The amount of time spent on research activities, having >5 graduate students under supervision, never using text-editing services prior to publication, and living in a developed, English-speaking country were the independent variables associated with a greater chance of publishing in a high-impact journal. In contrast, using one’s own resources to perform studies decreased the chance of publishing in high-impact journals.

Conclusions: The researchers who publish in high-impact journals have distinct profiles compared with those who publish in low-impact journals. English-language ability, the actual amount of time dedicated to research and scientific writing, and the availability of financial resources are the factors associated with a successful researcher’s profile.
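As a rough illustration of the comparison described in the Method section of this abstract, a logistic regression of that kind can be sketched as follows; the input file, column names, and predictors are hypothetical stand-ins, not the study’s actual dataset or code.

    # Hypothetical sketch (not the study's code): a logistic regression
    # predicting membership in the "publishes in high-impact journals" group
    # from survey variables. File name and columns are assumptions.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    survey = pd.read_csv("survey_responses.csv")  # hypothetical input
    # assumed columns: high_impact (0/1), research_hours_per_week,
    # graduate_students, uses_editing_service (0/1), english_speaking_country (0/1)

    model = smf.logit(
        "high_impact ~ research_hours_per_week + graduate_students"
        " + uses_editing_service + english_speaking_country",
        data=survey,
    ).fit()

    # Exponentiated coefficients are odds ratios: values above 1 indicate a
    # higher chance of belonging to the high-impact group, other things equal.
    print(np.exp(model.params).round(2))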

New study explores disparities between researchers who publish in high- and low-impact journals

“A new study surveying authors from a range of countries investigates the crucial differences between authors who publish in high- and low-impact factor medical journals. This original research shows that the growth of open access hasn’t significantly changed the publishing landscape as regards impact factor….”

A Letter to Thomson Reuters – ASCB

“In April 2013, some of the original signers of DORA [Declaration on Research Assessment] wrote to executives at Thomson Reuters to suggest ways in which it might improve its bibliometric offerings. Suggestions included replacing the flawed and frequently misused two-year Journal Impact Factor (JIF) with separate JIFs for the citable reviews and for the primary research article content of a journal; providing more transparency in Thomson Reuters’ calculation of JIFs; and publishing the median value of citations per citable article in addition to the JIFs. Thomson Reuters acknowledged receipt of the letter and said, “We are carefully reviewing all the points raised and will respond as soon as possible.”…”
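To see why the signers asked for a median alongside the mean-based JIF, a toy example helps; the citation counts below are invented purely for illustration.

    # Toy numbers (invented): the two-year JIF is an arithmetic mean, so a few
    # heavily cited items can dominate it, while the median tells another story.
    from statistics import mean, median

    # Citations received in year Y by each citable item the journal published
    # in years Y-1 and Y-2.
    citations_per_item = [0, 0, 1, 1, 1, 2, 2, 3, 4, 120]

    jif = mean(citations_per_item)    # JIF(Y) = total citations / citable items
    med = median(citations_per_item)  # the figure the DORA signers asked to see

    print(f"JIF (mean):            {jif:.1f}")  # 13.4, driven by one outlier
    print(f"Median citations/item: {med:.1f}")  # 1.5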

Retracted Science and the Retraction Index

Abstract: Articles may be retracted when their findings are no longer considered trustworthy due to scientific misconduct or error, when they plagiarize previously published work, or when they are found to violate ethical guidelines. Using a novel measure that we call the “retraction index,” we found that the frequency of retraction varies among journals and shows a strong correlation with the journal impact factor. Although retractions are relatively rare, the retraction process is essential for correcting the literature and maintaining trust in the scientific process.
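A minimal sketch of how such an index and its relationship with the impact factor might be computed is below; it assumes the index is simply retractions per 1,000 published articles over a fixed window (a reading of the abstract, not a quoted definition), and all journal figures are invented.

    # Hypothetical sketch: a per-journal "retraction index" (assumed here to be
    # retractions per 1,000 articles over a fixed window) and its rank
    # correlation with the journal impact factor. All figures are invented.
    from scipy.stats import spearmanr

    journals = {
        # name: (retractions, articles_published, impact_factor)
        "Journal A": (12, 6000, 30.0),
        "Journal B": (4, 8000, 9.5),
        "Journal C": (1, 5000, 3.2),
        "Journal D": (0, 4000, 1.8),
    }

    retraction_index = {
        name: 1000 * retracted / published
        for name, (retracted, published, _) in journals.items()
    }

    rho, p = spearmanr(
        [retraction_index[name] for name in journals],
        [impact for (_, _, impact) in journals.values()],
    )
    print(retraction_index)
    print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")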

How can academia kick its addiction to the impact factor? – ScienceOpen Blog

“The impact factor is academia’s worst nightmare. So much has been written about its flaws, both in calculation and application, that there is little point in reiterating the same tired points here (see this post by Stephen Curry for a good starting point).”

PLOS ONE: Open Access Meets Discoverability: Citations to Articles Posted to Academia.edu

“Using matching and regression analyses, we measure the difference in citations between articles posted to Academia.edu and other articles from similar journals, controlling for field, impact factor, and other variables. Based on a sample size of 31,216 papers, we find that a paper in a median impact factor journal uploaded to Academia.edu receives 16% more citations after one year than a similar article not available online, 51% more citations after three years, and 69% after five years. We also found that articles also posted to Academia.edu had 58% more citations than articles only posted to other online venues, such as personal and departmental home pages, after five years.”
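A percentage difference like those reported above is often read off a regression on log-citations; the sketch below shows that kind of calculation under an assumed input file and column names, and is not the authors’ matching-plus-regression pipeline.

    # Illustrative sketch (not the authors' code): estimating a percentage
    # citation difference from a regression on log-citations, controlling for
    # journal impact factor and field. File and column names are assumptions.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("articles.csv")  # hypothetical: one row per article
    # assumed columns: citations, posted (1 if on Academia.edu), jif, field

    model = smf.ols("np.log1p(citations) ~ posted + jif + C(field)", data=df).fit()

    # With a log outcome, exp(coefficient) - 1 approximates the percentage
    # difference in citations associated with posting, other things equal.
    pct_diff = np.expm1(model.params["posted"])
    print(f"Estimated citation difference for posted articles: {pct_diff:+.0%}")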

Enhancing the efficacy of the ‘DBT and DST Open Access Policy’

“We need to take serious cognizance of the document titled ‘DBT and DST Open Access Policy’ released jointly by DST and DBT on 12 December 2014. The focus of the document is on ensuring that knowledge created through the use of public funds is available to the public. This document stipulates that papers resulting from funds received from DST or DBT from the fiscal year 2012–13 onwards are required to be deposited in institutional repositories or in designated central repositories (dbt.sciencecentral.in and dst.sciencecentral.in). It stipulates that institutes receiving core funding from DST or DBT must set up institutional repositories. Most of this document discusses modalities, etc. for the repositories, but it makes two interesting statements that we should discuss. One is a view about an outcome of such open access, viz. ‘providing free online access by depositing them in an institutional repository is the most effective way of ensuring that the research it funds can be accessed, read and built upon’. The other statement makes a judgment call on the use of journal impact factors (IF). The document states ‘The DBT and DST affirms the principle that the intrinsic merit of the work, and not the title of the journal in which an author’s work is published, should be considered in making future funding decisions. The DBT and DST do not recommend the use of journal impact factors either as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions’. I shall discuss these two statements in some detail …”

The relationship between journal rejections and their impact factors – ScienceOpen Blog

“Frontiers recently published a fascinating article about the relationship between impact factors (IF) and rejection rates across a range of journals. It was a neat little study designed around the perception, common among publishers, that in order to generate high citation counts for their journals they must be highly selective and publish only the ‘highest quality’ work. Apart from the time and money that can be seen as wasted in rejecting perfectly good research, this apparent relationship has important implications for researchers. They often submit to higher-impact (and therefore apparently more selective) journals in the hope that this confers some sort of prestige on their work, rather than letting the research speak for itself. Given the relatively high likelihood of rejection, submissions then continue down the ‘impact ladder’ until a more receptive venue is finally found. The new data from Frontiers show that this perception is most likely false. From a random sample of 570 journals (indexed in the 2014 Journal Citation Reports; Thomson Reuters, 2015), it seems that journal rejection rates are almost entirely independent of impact factors. Importantly, this implies that researchers can just as easily submit their work to less selective journals and still have the same impact factor assigned to it. This relationship will remain important while the impact factor continues to dominate assessment criteria and how researchers evaluate each other (whether or not the IF is a good candidate for this is another debate) …”
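The check described in the quote boils down to a simple correlation across journals; a sketch under assumed column names (not the Frontiers dataset) could look like this.

    # Minimal sketch: testing whether journal rejection rates track impact
    # factors across a sample of journals. The input file and column names are
    # assumptions, not the Frontiers data.
    import pandas as pd
    from scipy.stats import pearsonr, spearmanr

    journals = pd.read_csv("journal_sample.csv")  # hypothetical: one row per journal
    # assumed columns: rejection_rate (0-1), impact_factor

    r, p_r = pearsonr(journals["rejection_rate"], journals["impact_factor"])
    rho, p_rho = spearmanr(journals["rejection_rate"], journals["impact_factor"])

    print(f"Pearson r    = {r:.2f} (p = {p_r:.3f})")
    print(f"Spearman rho = {rho:.2f} (p = {p_rho:.3f})")
    # Correlations near zero would support the claim that rejection rates are
    # largely independent of impact factors.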