“The report proposes a vision for the future of scholarly communication; it examines the current system, with its strengths and weaknesses, and its main actors. It considers the roles of researchers, research institutions, funders and policymakers, publishers and other service providers, as well as citizens, and puts forward recommendations addressed to each of them. The report places researchers and their needs at the centre of the scholarly communication of the future, and considers knowledge and understanding created by researchers as public goods. Current developments, enabled primarily by technology, have resulted in a broadening of the types of actors involved in scholarly communication and, in some cases, the disaggregation of traditional roles in the system. The report views research evaluation as a keystone of scholarly communication, affecting all actors. Researchers, communities and all organisations, in particular funders, have the possibility of improving the current scholarly communication and publishing system: they should start by bringing changes to the research evaluation system. Collaboration between actors is essential for positive change and to enable innovation in the scholarly communication and publishing system in the future….”
“In 2013, only 35% of respondents used DOIs; by 2018 this had jumped to 73%*. However, when publishers were asked why they did not use DOIs, the five most common words in their responses were: implementing, cost, funding, financial, and paying….
The number of respondents providing article metadata to DOAJ has increased from 55% in 2013 to 84% in 2018. When asked which metadata format they would like to supply to DOAJ, 46% of publishers said they preferred CrossRef, while 8% said JATS. However, 42% of all 2018 respondents said that they didn’t understand what a metadata format was, so there is much work to do here! …
Our respondents said that the top 3 benefits of being indexed in DOAJ in 2018 are:
- Certification that our journal(s) are quality publications
- Increased readership
- Increased scientific impact
In 2013, they were:
- Increased visibility of content
- Certification of the journals
62% of respondents said that they didn’t have to deal with competition from predatory publishers or journals. There was no equivalent question in 2013. …
86% of respondents said that in their countries researchers are evaluated on where they publish rather than what they publish. There was no equivalent question in 2013….”
“Research evaluation has become routine and often relies on metrics. But it is increasingly driven by data and not by expert judgement. As a result, the procedures that were designed to increase the quality of research are now threatening to damage the scientific system. To support researchers and managers, five experts led by Diana Hicks, professor in the School of Public Policy at Georgia Institute of Technology, and Paul Wouters, director of CWTS at Leiden University, have proposed 10 principles for the measurement of research performance: the Leiden Manifesto for Research Metrics, published as a comment in Nature….”
“Q.4 If you had a magic wand and could change one thing in the scholcomm ecosystem, what would it be?
Like many other contributors to this blog series, my first choice would be changing the promotion and tenure process to incentivize faculty to make their work open. Perhaps the best example of this, for me, is the Liège model, where faculty are required to deposit the full text of their works in the institutional repository in order to have them considered for the purposes of internal research evaluation / P&T. If even a few U.S. institutions were able to implement similar policies, I think that belief in the value of institutional OA policies (and the feasibility of Green OA, more generally) would soar as a result.
To vary the conversation a bit, a close second for me would be increased collaboration around big deal cancellations. I’m thinking here about the nationwide cancellations and renegotiations that have taken place in the Netherlands, Germany, and Finland, for instance, where hundreds of universities have banded together to cancel (and later renegotiate) their big deal contracts with Elsevier on the grounds of unsustainable pricing practices, insufficient respect for authors’ rights, and reluctance (if not outright opposition) to advance the cause of open access. In following these developments, I’ve long wished that we could present a similarly united front on these issues here in the U.S., whether at the state, regional, or national level….”
Abstract: Librarians champion the value of openness in scholarship and have been powerful advocates for the sharing of research data. College and university administrators have recently joined in the push for data sharing due to funding mandates. However, the researchers who create and control the data usually determine whether and how data is shared, so it is worthwhile to look at what they are incentivized to do. The current scholarly publishing landscape plus the promotion and tenure process create a “prisoner’s dilemma” for researchers as they decide whether or not to share data, consistent with the observation that researchers in general are eager for others to share data but reluctant to do so themselves. If librarians encourage researchers to share data and promote openness without simultaneously addressing the academic incentive structure, those who are intrinsically motivated to share data will be selected against via the promotion and tenure process. This will cause those who are hostile to sharing to be disproportionately recruited into the senior ranks of academia. To mitigate the risk of this unintended consequence, librarians must advocate for a change in incentives alongside the call for greater openness. Highly cited datasets must be given similar weight to highly cited articles in promotion and tenure decisions in order for researchers to reap the rewards of their sharing. Librarians can help by facilitating data citation to track the impact of datasets and working to persuade higher administration of the value of rewarding data sharing in tenure and promotion.
“Slides from a talk [by Stephen Curry] given to the general assembly of Science Europe in Brussels on 22 Nov 2018. Gives an overview of the problems of over-metricised research evaluation and how this might be tackled, in part through initiatives driven by DORA, and how they are linked with drives such as Plan S to promote open science….”
“Debate is intensifying over Plan S, an initiative backed by 15 research funders to mandate that, by 2020, the research papers they fund be open access as soon as they are published.
The Europe-led statement was launched in September, but details of its implementation haven’t yet been released. And while many open-access supporters have welcomed Plan S, others are now objecting to some of its specifics.
On 5 November, more than 600 researchers, including two Nobel laureates, published an open letter calling the plan “too risky for science”, “unfair”, and “a serious violation of academic freedom” for the scientists affected; more than 950 have now signed.
Letter coordinator Lynn Kamerlin, a biochemist at Uppsala University in Sweden who sits on the boards* of both open-access journals and publications that may be affected by Plan S, talks to Nature about her problems with the plan….”
“October 9, 2018
Revising the research staff evaluation system is essential to promote open science
ISGlobal co-organizes a B·Debate on open science in the Spanish and European context. Experts convened by B·Debate, an initiative of Biocat and Obra Social “la Caixa”, agreed that revising the evaluation system for research staff is essential to promote open science, a movement that seeks to make science more accessible to everyone, as well as effective, reproducible and transparent.
Currently, the evaluation of a researcher’s career often still focuses on the number of publications and the impact factor of the scientific journals where their articles appear. Various international movements, such as the San Francisco Declaration on Research Assessment, have already underlined the importance of revising this system to improve how the quality of results and the impact of research are evaluated. Beyond quantity, research evaluation must also take quality into account….”
“I find this practice already highly questionable. First of all, the formula calculates a statistical mean. However, no article can receive fewer than 0 citations, while there is no upper limit to citations. Most articles, across all journals, receive only very few citations, and only a few may receive a lot. This means we have a ‘skewed distribution’ when we plot how many papers received how many citations. The statistical mean, however, is not a meaningful summary of a skewed distribution. Moreover, basic statistics and probability tell us that if you blindly choose one paper from a journal, it is impossible to predict, or even roughly estimate, its quality from the average citation rate alone. It is further impossible to know the author’s actual contribution to said paper. Thus, we are already stacking three statistical fallacies by applying the JIF to evaluate researchers.
But this is just the beginning! Journals have no interest in the Journal Impact Factor as a tool for science evaluation; their interest is in its advertising effect. As we learn from our guest, Dr. Björn Brembs (professor for neurogenetics at the University of Regensburg), journals negotiate with Clarivate Analytics (formerly part of Thomson Reuters), the private company that produces the numbers. Larger publishers in particular have considerable room to influence both the numerator and the denominator of the calculation in their favor….”
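The skewness argument above can be illustrated with a quick simulation. The sketch below assumes a lognormal citation distribution (a rough illustrative stand-in, not real journal data) and shows how a JIF-style mean sits well above what a typical paper in the journal actually receives:

```python
import random
import statistics

random.seed(0)

# Simulated citation counts for 10,000 papers in one journal, drawn
# from a right-skewed lognormal distribution (an assumption for
# illustration; real citation distributions are similarly skewed).
citations = [round(random.lognormvariate(1.0, 1.2)) for _ in range(10_000)]

jif_like_mean = statistics.mean(citations)    # what a JIF-style average reports
typical = statistics.median(citations)        # what a typical paper receives
share_below_mean = sum(c < jif_like_mean for c in citations) / len(citations)

print(f"mean: {jif_like_mean:.1f}  median: {typical:.1f}")
print(f"papers cited less than the mean: {share_below_mean:.0%}")
```

Because the distribution is skewed, the mean lands above the median and the majority of papers are cited less than the journal-level average, which is exactly why the mean is a misleading proxy for any single paper.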
“A new initiative we are working on, the Maine Intellectual Commons, is exploring this question. One of our University of Maine colleagues, Harlan Onsrud, has recommended re-writing the tenure review criteria to favor open access publications over pay-for-access journals….Prioritizing open access publications is a hard thing to push through a university, however, because of all the bureaucratic hoops you have to negotiate, from the administration to the faculty senate to the unions. So Harlan suggested the short-term goal of simply re-writing the forms on which people submit their tenure applications. The top slots would be filled with open access categories. This would essentially not change the criteria but would make professors think twice when they realize that they do not have anything in these first four slots for open access books or articles.”