University Rankings and Governance by Metrics and Algorithms | Zenodo

Abstract:  This paper looks closely at how data analytics providers leverage rankings as part of their strategies to extract further rent and assets from the university, beyond their traditional roles as publishers and citation data providers. Multinational publishers such as Elsevier, which has over 2,500 journals in its portfolio, have transitioned into data analytics firms. Rankings expand their ability to further monetize their existing journal holdings, as there is a strong association between publication in high-impact journals and improvement in rankings. The global academic publishing industry has become highly oligopolistic, and a small handful of legacy multinational firms now publish the majority of the world’s research output (see Larivière et al., 2015; Fyfe et al., 2017; Posada & Chen, 2018). It is therefore crucial that their roles and enormous market power in influencing university rankings be more closely scrutinized. We suggest that, owing to a lack of transparency regarding, for example, Elsevier’s data services and products, combined with their self-positioning as a key intermediary in the commercial rankings business, they have managed to evade the social responsibility and scrutiny that come with occupying such a critical public function in university evaluation. As the quest for ever-higher rankings often conflicts with universities’ public missions, it is critical to raise questions about the governance of such private digital platforms and the compatibility between their private interests and the maintenance of universities’ public values.


Manipulation of bibliometric data by editors of scientific journals

“Such misuse of terms not only justifies the erroneous practice of research bureaucracy of evaluating research performance on those terms but also encourages editors of scientific journals and reviewers of research papers to ‘game’ the bibliometric indicators. For instance, if a journal seems to lack an adequate number of citations, the editor of that journal might decide to make it obligatory for its authors to cite papers from the journal in question. I know an Indian journal of fairly reasonable quality in terms of several other criteria, but I can no longer consider it so because it forces authors to include unnecessary (that is, plain false) citations to papers in that journal. Any further assessment of this journal that includes self-citations will lead to a distorted measure of its real status….

An average paper in the natural or applied sciences lists at least 10 references.1 Some enterprising editors have taken this number to be the minimum for papers submitted to their journals. Such a norm is enforced in many journals from Belarus, and we authors are now so used to that norm that we do not even realize the distortions it creates in bibliometric data. Indeed, I often notice that some authors – merely to meet the norm of at least 10 references – cite very old textbooks and Internet resources with URLs that are no longer valid. The average for a good paper may be more than 10 references, and a paper with fewer than 10 references may yet be a good paper (the first paper by Einstein did not have even one reference in its original version!). I believe that it is up to a peer reviewer to judge whether the author has given enough references and whether they are suitable, and it is not for a journal’s editor to set any mandatory quota for the number of references….

Some international journals intervene arbitrarily to revise the citations in articles they receive: I submitted a paper with my colleagues to an American journal in 2017, and one of the reviewers demanded that we replace references in Russian with references in English. Two of us responded with a correspondence note titled ‘Don’t dismiss non-English citations’ that we then submitted to Nature: in publishing that note, the editors of Nature removed some references – from the paper2 that condemned the practice of replacing an author’s references with those more to the editor’s liking – and replaced them with a, perhaps more relevant, reference to a paper that we had not even read at that point! … 

Editors of many international journals are now looking not for quality papers but for papers that will not lower the impact factor of their journals….”

Open access book usage data – how close is COUNTER to the other kind?

Abstract:  In April 2020, the OAPEN Library moved to a new platform, based on DSpace 6. During the same period, IRUS-UK started working on the deployment of Release 5 of the COUNTER Code of Practice (R5). This is, therefore, a good moment to compare two widely used usage metrics – R5 and Google Analytics (GA). This article discusses the download data of close to 11,000 books and chapters from the OAPEN Library, from the period 15 April 2020 to 31 July 2020. When a book or chapter is downloaded, it is logged by GA and, at the same time, a signal is sent to IRUS-UK. This results in two datasets: the monthly downloads measured in GA and the usage reported by R5, also clustered by month. The number of downloads reported by GA is considerably larger than that reported by R5. The total number of downloads in GA for the period is over 3.6 million. In contrast, the total reported by R5 is 1.5 million, around 400,000 downloads per month. Contrasting R5 and GA data on a country-by-country basis shows significant differences. GA lists more than five times the number of downloads for several countries, although the totals for other countries are about the same. When looking at individual titles, of the 500 highest-ranked titles in GA that are also part of the 1,000 highest-ranked titles in R5, only 6% of the titles are relatively close together. The choice of metric service has considerable consequences for what is reported. Thus, conclusions should be drawn from the results with care. One metric is not better than the other, but we should be open about the choices made. After all, open access book metrics are complicated, and we can only benefit from clarity.
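The country-by-country comparison described above amounts to computing a GA-to-R5 ratio per country. A minimal sketch, using invented monthly figures rather than the actual OAPEN dataset, illustrates the idea:

```python
# Hypothetical monthly download counts per country, as reported by
# Google Analytics (GA) and COUNTER R5. These numbers are illustrative
# only; they are NOT the OAPEN Library data.
ga = {"US": 520_000, "GB": 310_000, "DE": 95_000}
r5 = {"US": 98_000, "GB": 60_000, "DE": 91_000}

for country in ga:
    # Ratio > 5 mirrors the "more than five times" gap the article
    # reports for several countries; a ratio near 1 means the two
    # services roughly agree.
    ratio = ga[country] / r5[country]
    flag = "large gap" if ratio > 5 else "comparable"
    print(f"{country}: GA={ga[country]:,} R5={r5[country]:,} "
          f"ratio={ratio:.2f} ({flag})")
```

With these invented figures, US and GB show a large gap while DE is comparable, matching the pattern the abstract describes.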


Cureus | Scientometric Data and Open Access Publication Policies of Clinical Allergy and Immunology Journals

Abstract:  Introduction

The scientific merit of a paper and its ability to reach broader audiences are essential for scientific impact. Scientific merit is therefore measured by scientometric indexes, and journals increasingly publish papers as open access (OA). In this study, we present scientometric data for journals published in clinical allergy and immunology and compare the scientometric data of journals in terms of their all-OA and hybrid-OA publication policies.


Materials and methods

Data were obtained from Clarivate Analytics InCites, SCImago Journal & Country Rank, and journal websites. A total of 35 journals were evaluated for bibliometric data, journal impact factor (JIF), SCImago journal rank (SJR), Eigenfactor score (ES), and Hirsch index (h-index). Article publishing charges (APCs) were recorded in US dollars (USD).


Results

The most common publication policy was hybrid-OA (n = 20). The median OA publishing APC was 3000 USD. Hybrid-OA journals charged a higher APC than all-OA journals (3570 USD vs. 675 USD, p = 0.0001). Very strong positive correlations were observed between SJR and JIF and between ES and h-index. All the journals in the first quartiles for h-index and ES were hybrid-OA journals.


Conclusion

Based on these results, we recommend using SJR and ES together to evaluate journals in clinical allergy and immunology. Although there is a wide APC gap between all-OA and hybrid-OA journals, all journals within the first quartiles for h-index and ES were hybrid-OA. Our results conflict with the literature, which reports that use of the OA publication model increases citation counts.
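The strong positive rank correlations reported above (e.g., between SJR and JIF) are typically measured with Spearman's rho. The sketch below is a minimal, tie-free implementation applied to invented metric values, not the study's actual data:

```python
# Spearman rank correlation, assuming no tied values (the study's real
# analysis would need tie handling, e.g. scipy.stats.spearmanr).
def ranks(xs):
    """Rank of each value, 1 = smallest. Assumes no ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman's rho via the classic 1 - 6*sum(d^2)/(n(n^2-1)) formula."""
    n = len(x)
    rx, ry = ranks(x), ranks(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Invented SJR-like and JIF-like values for five hypothetical journals.
sjr = [0.5, 1.2, 2.8, 3.1, 4.0]
jif = [1.1, 2.0, 4.5, 5.2, 7.9]
print(f"rho = {spearman(sjr, jif):.2f}")  # monotone data, so rho = 1.00
```

A perfectly monotone relationship yields rho = 1, so "very strong positive correlation" corresponds to rho values approaching 1.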

Web analytics for open access academic journals: justification, planning and implementation | BiD: textos universitaris de biblioteconomia i documentació

Abstract:  An overview is presented of resources and web analytics strategies useful in devising solutions for capturing usage statistics and assessing audiences for open access academic journals. A set of metrics complementary to citations is proposed to help journal editors and managers provide evidence of the performance of the journal as a whole, and of each article in particular, in the web environment. The selected measurements and indicators seek to generate added value for editorial management in order to ensure its sustainability. The proposal is based on three areas: counts of visits and downloads, optimization of the website alongside campaigns to attract visitors, and preparation of a dashboard for strategic evaluation. It is concluded that, by creating web performance measurement plans based on the resources and proposals analysed, journals may be in a better position to plan data-driven web optimization that attracts authors and readers and to offer the accountability that the actors involved in the editorial process need in order to assess their open access business model.



Jisc partners with Unsub to evaluate UK university journal subscriptions | Jisc

“Jisc has announced that it will be using Unsub, an analytics dashboard, to help evaluate journal agreements that UK universities hold with publishers.

The dashboard, created in 2019 by the not-for-profit software company Our Research, can produce forecasts of different journal subscription scenarios, giving Jisc insight into the costs and benefits of subscription packages for each university and across the consortium. …”

The journal coverage of Web of Science, Scopus and Dimensions: A comparative analysis | SpringerLink

Abstract:  Traditionally, Web of Science and Scopus have been the two most widely used databases for bibliometric analyses. However, during the last few years some new scholarly databases, such as Dimensions, have emerged. Several previous studies have compared different databases, either through a direct comparison of article coverage or by comparing citations across the databases. This article presents a comparative analysis of the journal coverage of the three databases (Web of Science, Scopus and Dimensions), with the objective of describing, understanding and visualizing the differences between them. The most recent master journal lists of the three databases are used for the analysis. The results indicate that the databases have significantly different journal coverage, with Web of Science being the most selective and Dimensions the most exhaustive. About 99.11% and 96.61% of the journals indexed in Web of Science are also indexed in Scopus and Dimensions, respectively. Scopus has 96.42% of its indexed journals also covered by Dimensions. The Dimensions database has the most exhaustive journal coverage, with 82.22% more journals than Web of Science and 48.17% more journals than Scopus. This article also analyses the research outputs of 20 selected countries for the 2010–2018 period, as indexed in the three databases, and identifies database-induced variations in research output volume, rank, global share and subject area composition for different countries. Clearly visible variations are found in the research output from different countries across the three databases, along with differential coverage of different subject areas. The analytical study provides an informative and practically useful picture of the journal coverage of the Web of Science, Scopus and Dimensions databases.
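The coverage-overlap percentages above are straightforward set computations over the master journal lists. A minimal sketch, using small made-up journal sets rather than the real lists, illustrates the calculation:

```python
# Toy journal sets standing in for the three master lists.
# These identifiers are invented; only the method mirrors the article.
wos = {"J1", "J2", "J3", "J4"}
scopus = {"J1", "J2", "J3", "J4", "J5", "J6"}
dimensions = {"J1", "J2", "J3", "J5", "J6", "J7", "J8", "J9"}

def pct_covered(a, b):
    """Percentage of journals in `a` that are also indexed in `b`."""
    return 100 * len(a & b) / len(a)

print(f"WoS journals also in Scopus:      {pct_covered(wos, scopus):.2f}%")
print(f"Scopus journals also in Dimensions: {pct_covered(scopus, dimensions):.2f}%")
# Relative size: how many more journals one index holds than another,
# analogous to "Dimensions has 82.22% more journals than Web of Science".
extra = 100 * (len(dimensions) - len(wos)) / len(wos)
print(f"Dimensions holds {extra:.2f}% more journals than WoS")
```

The same two measures – directional coverage (share of one list found in another) and relative list size – reproduce every figure quoted in the abstract once the real master lists are substituted.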


Dotawo 7: Comparative Northern East Sudanic Linguistics – Dotawo Journal

“Furthermore, it appears that the turn toward open access in the scholarly communications landscape is increasingly facilitating the agendas of an oligopoly of for-profit data analytics companies. Perhaps realizing that “they’ve found something that is even more profitable than selling back to us academics the content that we have produced,”5 they venture ever further up the research stream, with every intent to colonize and canalize its entire flow.6 This poses a severe threat to the independence and quality of scholarly inquiry.7

In light of these troubling developments, the expansion of Dotawo from a “diamond” open access journal to a common access journal represents a strong reaffirmation of the call that the late Aaron Swartz succinctly formulated in his “Guerilla Open Access Manifesto”: …

Swartz’s is a call to action that transcends the limitations of the open access movement as construed by the BOAI Declaration by plainly affirming that knowledge is a common good. His call goes beyond open access, because it specifically targets materials that linger on a paper or silicon substrate in academic libraries and digital repositories without being accessible to “fair use.” The deposit of the references from Dotawo contributions in a public library is a first and limited attempt to offer a remedy, heeding the “Code of Best Practices in Fair Use” of the Association of Research Libraries, which approvingly cites the late Supreme Court Justice Brandeis that “the noblest of human productions — knowledge, truths ascertained, conceptions, and ideas — become, after voluntary communication to others, free as the air to common use.”9 This approach also dovetails with the interpretation of “folk law” recently propounded by Kenneth Goldsmith, the founder of the public library UbuWeb….”

KNIME Analytics Platform (DCMI) – Event Registration

“KNIME Analytics Platform is open-source software for working with all kinds of data. It uses visual workflows that are created with an intuitive, drag-and-drop graphical interface, without the need for coding. The webinar will give an introduction to KNIME and focus on data blending and shaping. All data files and KNIME workflows used in the webinar will be made available afterwards, so participants can reproduce the demonstrated steps on their own computers. 1) Attendees will be introduced to the KNIME Analytics Platform. 2) Data cleaning techniques such as data blending and shaping will be demonstrated. 3) Attendees will learn about machine learning models, their role in optimizing workflow performance, and their application to validation metrics. Through the generosity of DCMI, this webinar is being offered complimentary to all registrants who attend the live presentation….”