Are stakeholders measuring the publishing metrics that matter?: Putting research into context

“Perhaps the most fundamental aspect of compiling and implementing more meaningful research metrics that the NISO panelists discussed is the importance of putting data into context. And, as the speakers noted, there are multiple facets of context to consider, including:

The strengths and limitations of different metrics by discipline/subject matter (e.g., some metrics are better suited to certain types of research)
The intended uses and overall strengths and limitations of particular data points (e.g., altmetrics are “indicators” of impact, not measures of quality, and the JIF was never meant to be used to measure the impact of individual articles or scholars)
The cultural context that a researcher is operating within and the opportunities, challenges, and biases they have experienced
How and where a research output fits within scholars’ other professional contributions (e.g., recognizing how individual research outputs are part of broader bodies of work and also measuring the impacts of scholarly outputs that do not fit within traditional publication-based assessment systems) …”

University Rankings and Governance by Metrics and Algorithms | Zenodo

Abstract: This paper looks closely at how data analytic providers leverage rankings as part of their strategies to extract further rent and assets from the university, beyond their traditional roles as publishers and citation data providers. Multinational publishers such as Elsevier, which has over 2,500 journals in its portfolio, have transitioned to become data analytic firms. Rankings expand their ability to further monetize their existing journal holdings, as there is a strong association between publication in high-impact journals and improvement in rankings. The global academic publishing industry has become highly oligopolistic, and a small handful of legacy multinational firms now publish the majority of the world’s research output (see Larivière et al., 2015; Fyfe et al., 2017; Posada & Chen, 2018). It is therefore crucial that their roles and enormous market power in influencing university rankings be more closely scrutinized. We suggest that, due to a combination of a lack of transparency regarding, for example, Elsevier’s data services and products and their self-positioning as a key intermediary in the commercial rankings business, they have managed to evade the social responsibilities and scrutiny that come with occupying such a critical public function in university evaluation. As the quest for ever-higher rankings often works in conflict with universities’ public missions, it is critical to raise questions about the governance of such private digital platforms and the compatibility between their private interests and the maintenance of universities’ public values.


Forside (Home) – Dansk Open Access-Indikator

From Google’s English:  “The indicator is produced and launched annually by the Danish Agency for Education and Research, which is part of the Ministry of Education and Research. The indicator monitors the implementation of the Danish Open Access strategy 2018-2025 by collecting and analyzing publication data from the Danish universities.

Main menu:

OVERVIEW – National strategic goals and their realization at national and university level.
OA TYPES – Types of Open Access realization at national and local level.
DATA – Data for download as well as documentation at an overview and technical level.
GUIDANCE – Information to support the Danish universities’ implementation of Open Access, such as important dates and guidelines.
FAQ – Frequently Asked Questions….”

Manipulation of bibliometric data by editors of scientific journals

“Such misuse of terms not only justifies the erroneous practice of research bureaucracies evaluating research performance on those terms but also encourages editors of scientific journals and reviewers of research papers to ‘game’ the bibliometric indicators. For instance, if a journal seems to lack an adequate number of citations, the editor of that journal might decide to make it obligatory for its authors to cite papers from the journal in question. I know an Indian journal of fairly reasonable quality in terms of several other criteria but can no longer consider it so, because it forces authors to include unnecessary (that is, plainly false) citations to papers in that journal. Any further assessment of this journal that includes self-citations will lead to a distorted measure of its real status….

An average paper in the natural or applied sciences lists at least 10 references [1]. Some enterprising editors have taken this number to be the minimum for papers submitted to their journals. Such a norm is enforced in many journals from Belarus, and we, authors, are now so used to that norm that we do not even realize the distortions it creates in bibliometric data. Indeed, I often notice that some authors – merely to meet the norm of at least 10 references – cite very old textbooks and Internet resources with URLs that are no longer valid. The average for a good paper may be more than 10 references, and a paper with fewer than 10 references may yet be a good paper (the first paper by Einstein did not have even one reference in its original version!). I believe that it is up to a peer reviewer to judge whether the author has given enough references and whether they are suitable; it is not for a journal’s editor to set any mandatory quota for the number of references….

Some international journals intervene arbitrarily to revise the citations in articles they receive: I submitted a paper with my colleagues to an American journal in 2017, and one of the reviewers demanded that we replace references in Russian with references in English. Two of us responded with a correspondence note titled ‘Don’t dismiss non-English citations’, which we then submitted to Nature: in publishing that note, the editors of Nature removed some references – from the paper [2] that condemned the practice of replacing an author’s references with those more to the editor’s liking – and replaced them with a, perhaps more relevant, reference to a paper that we had not even read at that point! …

Editors of many international journals are now looking not for quality papers but for papers that will not lower the impact factor of their journals….”
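
The distortion the author describes is easy to make concrete: a journal-level citation rate looks quite different once coerced self-citations are excluded. A minimal Python sketch, with all journal names and numbers invented for illustration:

```python
# Toy illustration (made-up numbers): a journal-level citation rate
# computed with and without journal self-citations.

def citation_rate(citations, citable_items, journal, exclude_self=False):
    """Citations per citable item for `journal`.

    `citations` is a list of (citing_journal, cited_journal) pairs.
    """
    counted = [
        (src, dst) for src, dst in citations
        if dst == journal and not (exclude_self and src == journal)
    ]
    return len(counted) / citable_items

# 60 citable items; 120 incoming citations, 40 of them coerced self-citations.
cites = [("Journal X", "Journal X")] * 40 + [("Another Journal", "Journal X")] * 80

print(citation_rate(cites, 60, "Journal X"))                      # 2.00
print(citation_rate(cites, 60, "Journal X", exclude_self=True))   # ~1.33
```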

Open access book usage data – how close is COUNTER to the other kind?

Abstract: In April 2020, the OAPEN Library moved to a new platform, based on DSpace 6. During the same period, IRUS-UK started working on the deployment of Release 5 of the COUNTER Code of Practice (R5). This is, therefore, a good moment to compare two widely used usage metrics – R5 and Google Analytics (GA). This article discusses the download data of close to 11,000 books and chapters from the OAPEN Library, from the period 15 April 2020 to 31 July 2020. When a book or chapter is downloaded, it is logged by GA and at the same time a signal is sent to IRUS-UK. This results in two datasets: the monthly downloads measured in GA and the usage reported by R5, also clustered by month. The number of downloads reported by GA is considerably larger than that reported by R5. The total number of downloads in GA for the period is over 3.6 million. In contrast, the amount reported by R5 is 1.5 million, around 400,000 downloads per month. Contrasting R5 and GA data on a country-by-country basis shows significant differences. GA lists more than five times the number of downloads for several countries, although the totals for other countries are about the same. When looking at individual titles, of the 500 highest ranked titles in GA that are also part of the 1,000 highest ranked titles in R5, only 6% of the titles are relatively close together. The choice of metric service has considerable consequences for what is reported. Thus, drawing conclusions about the results should be done with care. One metric is not better than the other, but we should be open about the choices made. After all, open access book metrics are complicated, and we can only benefit from clarity.
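
A comparison of this kind reduces to joining the two per-title, per-month datasets and inspecting the ratio between the counters. A minimal pandas sketch, assuming hypothetical CSV exports with `title`, `month`, and `downloads` columns (the file names and layout are assumptions, not the actual OAPEN or IRUS-UK formats):

```python
import pandas as pd

# Hypothetical exports: one row per title per month with a download count.
ga = pd.read_csv("ga_downloads.csv")  # columns: title, month, downloads
r5 = pd.read_csv("r5_downloads.csv")  # columns: title, month, downloads

# Join on title and month, then compare the two counters per title.
merged = ga.merge(r5, on=["title", "month"], suffixes=("_ga", "_r5"))
per_title = (
    merged.groupby("title")[["downloads_ga", "downloads_r5"]]
    .sum()
    .assign(ratio=lambda d: d["downloads_ga"] / d["downloads_r5"])
    .sort_values("ratio", ascending=False)
)

print("overall GA/R5 ratio:",
      per_title["downloads_ga"].sum() / per_title["downloads_r5"].sum())
print(per_title.head(20))  # titles where the two metrics diverge most
```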


Developing an objective, decentralised scholarly communication and evaluation system – YouTube

“This is our proposal for how we might create a radically new scholarly publishing system with the potential to disrupt the scholarly publishing industry. The proposed model is: (a) open, (b) objective, (c) crowd-sourced and community-controlled, (d) decentralised, and (e) capable of generating prestige. Submitted articles are openly rated by researchers on multiple dimensions of interest (e.g., novelty, reliability, transparency) and ‘impact prediction algorithms’ are trained on these data to classify articles into journal ‘tiers’.

In time, with growing adoption, the highest impact tiers within such a system could develop sufficient prestige to rival even the most established of legacy journals (e.g., Nature). In return for their support, researchers would be rewarded with prestige, nuanced metrics, reduced fees, faster publication rates, and increased control over their outputs….”
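
At its core, the “impact prediction algorithm” described here is a supervised classifier over crowd-sourced ratings. A minimal scikit-learn sketch with synthetic data standing in for community ratings; the rating dimensions, tier labels, and model choice are assumptions for illustration, not the proposal’s specification:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic community ratings: each article scored 0-10 on three dimensions
# (standing in for novelty, reliability, transparency).
n_articles = 1000
ratings = rng.uniform(0, 10, size=(n_articles, 3))

# Synthetic tier labels (0-3), loosely driven by the mean rating plus noise,
# standing in for whatever ground truth the platform would train against.
tiers = np.digitize(ratings.mean(axis=1) + rng.normal(0, 1, n_articles), [4, 6, 8])

X_train, X_test, y_train, y_test = train_test_split(ratings, tiers, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("held-out tier accuracy:", model.score(X_test, y_test))

# Predict a tier for a newly rated submission.
print("predicted tier:", model.predict([[8.5, 7.0, 9.0]])[0])
```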

Open Access Resources and Evaluation; or: why OA journals might fare badly in terms of conventional usage | Martin Paul Eve | Professor of Literature, Technology and Publishing

“I am frequently asked, by libraries, to provide usage statistics for their institutions at the Open Library of Humanities. I usually resist this, since there are a number of ways in which the metrics are not usually a fair comparison to subscription resources. A few notes on this.

We do not have or require any login information. This means that the only way that we can provide usage information is by using the institutional IP address. This, in turn, means that we can only capture on-site access. This is not the same for journals that have paywalls. They can capture a login, from off-site, and attribute these views to the institution. Therefore, if you compare usage of OA journals vs. paywalled journals, the paywalled journals will likely have higher usage stats, because they include off-site access, which is not possible for OA journals (though Knowledge Unlatched did some interesting work on geo-tracking of off-site access). Further, our authors may deposit copies of their work in institutional repositories or anywhere else – and we encourage this. Again, though, the decentralization makes it very hard to get any meaningful statistical tracking.

Different institutions want us to report on different things. Some want to know “are our academics publishing in OLH journals?” while others want to know “are our academics reading OLH journals?” The reporting requirements for these are different and it seems that OLH is judged differently by different institutional desires.

We run a platform that is composed of several different pieces of journal technology: we have journals at Ubiquity Press; we have journals running on Janeway; and we have journals running on proprietary systems at places like Liverpool University Press. These all run on different reporting systems and require us to interact with different vendors for different usage requests. Reporting in this way requires me to take time out of running other parts of the platform. In short: the labour overhead of this type of reporting is fairly large and adds to the overall costs that we have in running the platform.

There is a privacy issue in tracking our readers. When the US Government has banned the use of the term “climate change”, it seems reasonable to worry that tracking users, by IP address, in logs that could be subpoenaed, could genuinely carry some risk. Indeed, as a library, it feels important to us to protect our readers.

View counts are a terrible proxy for actual reading.

Our mission is to change subscription journals to an OA basis. Libraries have been asked, at each stage, to vote on this. They have done so enthusiastically. We hope that, in doing so, libraries recognise what we are doing and will not just resort to crude rankings of usage in continuing to support us (and, indeed, most do). But I can also see the temptation, in the current budget difficulties, to fall back on usage stats as a ranking of where to invest….”
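
The attribution constraint Eve describes (usage can only be tied to an institution via its registered IP ranges, so only on-site access is counted) is straightforward to illustrate. A minimal sketch using Python’s standard ipaddress module, with made-up ranges and addresses:

```python
import ipaddress

# Hypothetical institutional IP ranges (CIDR blocks) registered with a platform.
INSTITUTIONS = {
    "University A": [ipaddress.ip_network("192.0.2.0/24")],
    "University B": [ipaddress.ip_network("198.51.100.0/25")],
}

def attribute(request_ip):
    """Return the institution whose registered range contains this IP, if any."""
    ip = ipaddress.ip_address(request_ip)
    for name, networks in INSTITUTIONS.items():
        if any(ip in net for net in networks):
            return name
    return None  # off-site access: without a login there is nothing to attribute

# A reader on campus is counted; the same reader at home is invisible.
print(attribute("192.0.2.17"))    # University A
print(attribute("203.0.113.5"))   # None
```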

Guest Post by Jean-Claude Guédon: Scholarly Communication and Scholarly Publishing – OASPA

“In December, I responded to an “Open Post” signed by a diverse group of scholarly publishers: commercial, learned societies, and university presses. Despite differing perspectives and objectives, all the signatories opposed “immediate green OA”. Their unanimity apparently rested on one concept: the “version of record”. 

Invited to contribute something further to this discussion (and I thank OASPA for this opportunity), I propose exploring how scholarly publishing should relate to scholarly communication. Ostensibly aligned, publishing and communication have diverged. Journals and the concept of “version of record” are not only a legacy from print, but their roles have shifted to the point where some processes involved in scholarly publishing are getting in the way of optimal scholarly communication, as the present pandemic amply reveals. Taking full advantage of digital affordances requires moving in different directions. This is an opportunity, not a challenge. Platforms and “record of versions” will eventually supersede journals and their articles, and now is the time to make some fundamental choices….”

Google Scholar, Web of Science, and Scopus: Which is best for me? | Impact of Social Sciences

“Being able to find, assess, and place new research within a field of knowledge is integral to any research project. For social scientists this process is increasingly likely to take place on Google Scholar, closely followed by traditional scholarly databases. In this post, Alberto Martín-Martín, Enrique Orduna-Malea, Mike Thelwall, and Emilio Delgado-López-Cózar analyse the relative coverage of the three main research databases, Google Scholar, Web of Science, and Scopus, finding significant divergences in the social sciences and humanities, and suggest that researchers face a trade-off when using different databases: between more comprehensive but disorderly systems, and orderly but limited systems….”
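
The coverage comparison described in the post comes down to set arithmetic over document identifiers exported from each database. A minimal sketch, with placeholder DOI sets standing in for real exports (the identifiers below are invented):

```python
# Placeholder DOI sets standing in for exports from each database.
google_scholar = {"10.1000/a", "10.1000/b", "10.1000/c", "10.1000/d"}
web_of_science = {"10.1000/a", "10.1000/b"}
scopus = {"10.1000/a", "10.1000/c"}

combined = google_scholar | web_of_science | scopus

def coverage(db, universe):
    """Share of the combined record set that a single database covers."""
    return len(db & universe) / len(universe)

for name, db in [("Google Scholar", google_scholar),
                 ("Web of Science", web_of_science),
                 ("Scopus", scopus)]:
    print(f"{name}: {coverage(db, combined):.0%} of combined records")

# Records found only in Google Scholar: the comprehensive-but-disorderly tail.
print("Google Scholar only:", google_scholar - (web_of_science | scopus))
```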