Conjoint analysis of researchers’ hidden preferences for bibliometrics, altmetrics, and usage metrics – Lemke – Journal of the Association for Information Science and Technology – Wiley Online Library

Abstract: The number of annually published scholarly articles is growing steadily, as is the number of indicators through which the impact of publications is measured. Little is known about how the increasing variety of available metrics affects researchers’ processes of selecting literature to read. We conducted ranking experiments embedded into an online survey with 247 participating researchers, most from the social sciences. Participants completed a series of tasks in which they were asked to rank fictitious publications regarding their expected relevance, based on their scores on six prototypical metrics. By applying logistic regression, cluster analysis, and manual coding of survey answers, we obtained detailed data on how prominent metrics for research impact influence our participants in decisions about which scientific articles to read. Survey answers revealed a combination of qualitative and quantitative characteristics that researchers consult when selecting literature, while regression analysis showed that among quantitative metrics, citation counts tend to be of highest concern, followed by Journal Impact Factors. Our results suggest that many researchers hold a comparatively favorable view of bibliometrics alongside widespread skepticism toward altmetrics. The findings underline the importance of equipping researchers with solid knowledge about specific metrics’ limitations, as they seem to play significant roles in researchers’ everyday relevance assessments.
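To make the method concrete, here is a minimal sketch of how ranking data of this kind can be analysed with logistic regression, run on synthetic data: fictitious publications get scores on six metrics, rankings are reduced to pairwise choices, and the fitted coefficients indicate each metric’s relative influence. The metric names, weights, and the pairwise reduction below are illustrative assumptions, not the study’s actual design or code.

```python
# Minimal sketch: estimating the influence of publication metrics on
# relevance rankings via logistic regression over pairwise comparisons.
# The six metric names are illustrative stand-ins, not necessarily the
# exact metrics used in the study.
from itertools import combinations

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
metrics = ["citations", "jif", "downloads", "tweets", "mendeley", "news"]

# Fictitious publications: standardized scores on six metrics.
X = rng.normal(size=(40, len(metrics)))

# Hypothetical "true" weights a respondent applies when ranking.
true_w = np.array([1.0, 0.6, 0.4, 0.1, 0.3, 0.05])
utility = X @ true_w + rng.normal(scale=0.5, size=40)

# Reduce rankings to pairwise choices: for each pair (i, j), the
# respondent prefers the publication with the higher latent utility.
pairs, prefers_i = [], []
for i, j in combinations(range(40), 2):
    pairs.append(X[i] - X[j])          # feature = score difference
    prefers_i.append(int(utility[i] > utility[j]))

model = LogisticRegression().fit(np.array(pairs), prefers_i)
for name, coef in zip(metrics, model.coef_[0]):
    print(f"{name:10s} weight ≈ {coef:+.2f}")
```

Under this setup the recovered coefficients should roughly track the assumed weights, with citation counts dominating, which mirrors the kind of conclusion the abstract reports.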

 

“It’s hard to explain why this is taking so long” – scilog

When it comes into force at the beginning of 2021, the Open Access initiative “Plan S” is poised to help open up and improve academic publishing. Ulrich Pöschl, a chemist and long-time Open Access advocate, explains why free access to research results is important and how a modern academic publishing system can work.

PBJ ranks higher, enhances diversity and offers free global access – Daniell – 2021 – Plant Biotechnology Journal – Wiley Online Library

“Since I started as the Editor-in-Chief in 2012, submission of manuscripts has almost tripled, despite the transition to an open access journal a few years ago. Despite COVID-19, the number of submissions to PBJ [Plant Biotechnology Journal] continued to increase in 2020….”

A communication strategy based on Twitter improves article citation rate and impact factor of medical journals – ScienceDirect

[Note: even the abstract is OA.]

“Medical journals use Twitter to optimise their visibility in the scientific community. It is by far the most used social media platform for sharing publications, since more than 20% of published articles receive at least one announcement on Twitter (compared to less than 5% of notifications on other social networks) [5]. It was initially described that, within a medical specialty, journals with a Twitter account have a higher impact factor than others and that the number of followers is correlated to the impact factor of the journal [6,7]. Several observational works showed that the announcement of a medical article publication on Twitter was strongly associated with its citation rate in the following years [8–11]. In 2015, among anaesthesia journals, journals with an active and influential Twitter account had a higher journal impact factor and a greater number of article citations than those not embracing social media [12]. A meta-analysis of July 2020 concluded that the presence of an article on social media was probably associated with a higher number of citations [13]. Finally, two randomised studies, published in 2020 and not included in this meta-analysis, also showed that, for a given journal, articles that benefited from exposure on Twitter were 1.5 to 9 times more cited in the year following publication than articles randomised in the “no tweeting” group [14,15].

The majority of these works have only been published very recently, and the strategy of using Twitter to optimise the number of citations is now a challenge for all medical journals. Several retrospective studies have looked at the impact of the use of a social media communication strategy by medical journals. They have shown that the introduction of Twitter to communicate as part of this strategy was associated with a higher number of articles consulted, a higher number of citations and shorter delays in citation after publication [16,17]. Two studies (including one on anaesthesia journals) showed that journals that used a Twitter account to communicate were more likely to increase their impact factor than those that did not [12,18]. Some researchers even suggest that the dissemination of medical information through social media, allowing quick and easy access after the peer-review publication process, may supplant the classical academic medical literature in the future [19]. This evolution has led to the creation of a new type of Editor on several medical journal editorial boards: the social media Editor (sometimes with the creation of a “specialised social media team” to assist him or her) [20]. This medical Editor shares, across a range of social media platforms, new journal articles with the aim of improving dissemination of journal content. Thus, beyond the scientific interest of a given article, which determines its chances of being cited, there is currently a parallel editorial effort consisting of optimising visibility on Twitter to increase the number of citations and improve the impact factor. Some authors are also starting to focus on the best techniques for using Twitter and on the best ways to tweet to optimise communication, for example during a medical congress [21] ….”
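For readers wondering how an effect like “1.5 to 9 times more cited” is typically quantified, here is a hedged sketch using Poisson regression on a hypothetical randomised tweeting experiment; the exponentiated treatment coefficient is the citation rate ratio. All data and parameters below are invented for illustration and do not come from the cited studies.

```python
# Sketch: estimating a citation rate ratio for tweeted vs. non-tweeted
# articles with Poisson regression (hypothetical data, not the studies').
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
tweeted = rng.integers(0, 2, size=n)             # randomised arm (0/1)

# Assume tweeting doubles expected citations in the following year.
lam = np.exp(0.5 + np.log(2.0) * tweeted)
citations = rng.poisson(lam)

X = sm.add_constant(tweeted.astype(float))       # intercept + treatment
res = sm.GLM(citations, X, family=sm.families.Poisson()).fit()
rate_ratio = np.exp(res.params[1])               # exp(coef) = rate ratio
print(f"estimated citation rate ratio: {rate_ratio:.2f}")
```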

 

Data analysis shows Journal Impact Factors in sociology are pretty worthless – Family Inequality

“In these sociology journals, there is so much noise in citation rates within the journals, compared to any stable difference between them, that outside the very top the journal ranking won’t much help you predict how much a given paper will be cited. If you assume a paper published in AJS will be more important than one published in Social Forces, you might be right, but if the odds that you’re wrong are too high, you just shouldn’t assume anything. Let’s look closer….

Using JIF to decide which papers in different sociology journals are likely to be more impactful is a bad idea. Of course, lots of people know JIF is imperfect, but they can’t help themselves when evaluating CVs for hiring or promotion. And when you show them evidence like this, they might say “but what is the alternative?” But as Brito & Rodríguez-Navarro write: “if something were wrong, misleading, and inequitable the lack of an alternative is not a cause for continuing using it.” These error rates are unacceptably high….”
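The post’s core claim, that within-journal noise swamps between-journal differences, is easy to illustrate with a small simulation: even when one journal’s papers are cited more on average, a randomly drawn paper from the “lesser” journal frequently out-cites one from the “better” journal. The log-normal model and its parameters below are illustrative assumptions, not fitted to the post’s data.

```python
# Sketch: how often does a random paper from a "higher-JIF" journal
# actually out-cite one from a "lower-JIF" journal? Citation counts are
# modelled as log-normal with illustrative parameters.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
# Journal A has a higher mean, but both have large within-journal spread.
a = rng.lognormal(mean=3.0, sigma=1.0, size=n)   # e.g. an AJS-like journal
b = rng.lognormal(mean=2.7, sigma=1.0, size=n)   # e.g. a Social Forces-like journal

p_wrong = np.mean(a <= b)
print(f"P(paper from the 'lesser' journal is cited at least as much): {p_wrong:.2f}")
```

Under these illustrative parameters the “wrong” ordering occurs roughly 40% of the time, which is exactly the kind of error rate the post calls unacceptably high.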

High-throughput analysis suggests differences in journal false discovery rate by subject area and impact factor but not open access status

Abstract: Background

A low replication rate has been reported in some scientific areas, motivating the creation of resource-intensive collaborations to estimate the replication rate by repeating individual studies. The substantial resources required by these projects limit the number of studies that can be repeated and, consequently, the generalizability of the findings. We extend the use of a method from Jager and Leek to estimate the false discovery rate for 94 journals over a 5-year period, using p-values from over 30,000 abstracts, enabling the study of how the false discovery rate varies by journal characteristics.

Results

We find that the empirical false discovery rate is higher for cancer versus general medicine journals (p = 9.801E−07, 95% CI: 0.045, 0.097; adjusted mean false discovery rate cancer = 0.264 vs. general medicine = 0.194). We also find that the false discovery rate is negatively associated with log journal impact factor: a two-fold decrease in journal impact factor is associated with an average increase of 0.020 in FDR (p = 2.545E−04). Conversely, we find no statistically significant evidence of a higher false discovery rate, on average, for Open Access versus closed access journals (p = 0.320, 95% CI: −0.015, 0.046; adjusted mean false discovery rate Open Access = 0.241 vs. closed access = 0.225).

Conclusions

Our results identify areas of research that may need additional scrutiny and support to facilitate replicable science. Given our publicly available R code and data, others can complete a broad assessment of the empirical false discovery rate across other subject areas and characteristics of published research.
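Since the paper’s published code is in R, here is a simplified Python sketch of the general idea behind a Jager & Leek-style estimate: reported significant p-values are modelled as a mixture of a Uniform(0, 0.05) component (false discoveries) and a truncated Beta component (true effects), and the fitted mixing weight estimates the false discovery rate. This is an illustrative reimplementation on simulated p-values, not the authors’ swfdr package.

```python
# Simplified sketch of a Jager & Leek-style "science-wise" FDR estimate:
# significant p-values (p < 0.05) modelled as a mixture of Uniform(0, 0.05)
# (nulls) and a truncated Beta(a, 1) (alternatives), fit by maximum likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import beta

rng = np.random.default_rng(3)
alpha = 0.05

# Hypothetical p-values "scraped from abstracts": some nulls, some signals.
p = np.concatenate([
    rng.uniform(0, alpha, size=250),                  # false discoveries
    beta.rvs(0.2, 1.0, size=3000, random_state=rng),  # true effects
])
p = p[p < alpha]  # keep only the "significant" reported p-values

def neg_loglik(theta):
    pi0, a = theta                       # pi0 = FDR, a = beta shape
    f_null = 1.0 / alpha                 # uniform density on (0, alpha)
    # Beta density truncated to (0, alpha):
    f_alt = beta.pdf(p, a, 1.0) / beta.cdf(alpha, a, 1.0)
    mix = pi0 * f_null + (1 - pi0) * f_alt
    return -np.sum(np.log(mix))

res = minimize(neg_loglik, x0=[0.5, 0.5],
               bounds=[(1e-3, 1 - 1e-3), (1e-3, 1.0)])
print(f"estimated false discovery rate ≈ {res.x[0]:.3f}")
```

In this simulation the true FDR among significant results is roughly 0.13 (250 uniform nulls against roughly 1,650 significant alternatives), which the fitted mixing weight should approximate.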

The transformative power of values-enacted scholarship | Humanities and Social Sciences Communications

Abstract:  The current mechanisms by which scholars and their work are evaluated across higher education are unsustainable and, we argue, increasingly corrosive. Relying on a limited set of proxy measures, current systems of evaluation fail to recognize and reward the many dependencies upon which a healthy scholarly ecosystem relies. Drawing on the work of the HuMetricsHSS Initiative, this essay argues that by aligning values with practices, recognizing the vital processes that enrich the work produced, and grounding our indicators of quality in the degree to which we in the academy live up to the values for which we advocate, a values-enacted approach to research production and evaluation has the capacity to reshape the culture of higher education.

 

Contemporary China Centre Blog » The Hidden Language Policy of China’s Research Evaluation Reform

“In February, China’s Ministries of Education and of Science and Technology released two documents that reshaped the research landscape: “Some Suggestions on Standardizing the Use of SCI Paper Indexes” and “Some Measures to Eliminate the Bad Orientation of ‘Papers Only’.” Elaborating the academic reform that President Xi has pursued since 2016, they provide the first detailed steps for dramatically reducing the role of the Science Citation Index (SCI) in evaluating Chinese research….

For twenty years, the SCI—a prestige listing of “high impact” scientific journals—controlled the careers of Chinese researchers. It and various derived indices are commonly used for university rankings and research evaluation (the UK, for example, uses SCI-derived data to allocate funding), but China relied on the SCI to an unusual degree. There, quotas for publishing in SCI journals governed hiring and advancement, pay bonuses, and even graduation from doctoral programs. In using the SCI as a “gold standard,” Chinese administrators sought to increase productivity, enhance national prestige, and benchmark the closure of gaps between China’s research sector and cutting-edge work internationally.

To a significant extent, these goals have been met. China has risen rapidly up international rankings, and Chinese research productivity routinely exceeds the world average (Li & Wang, 2019). Since 2016, China has been the world’s largest producer of published research, accounting for over a third of all global activity (Xie & Freeman, 2018, p. 2). …

So why change a winning formula? The Ministries’ announcements have focused on eliminating perverse incentives created by over-reliance on the SCI, which saw researchers prioritizing quantity over quality, nepotistically inflating citation counts, and falling prey to predatory journals. The Chinese government has, accordingly, allocated tens of millions of dollars to initiatives for improving Chinese journal quality and combating corrupt publishing practices. At the same time, commentators have noted the potential cost savings of de-centering SCI metrics….”

Scrutinising what Open Access Journals Mean for Global Inequalities | SpringerLink

Abstract: In the current article, we tested our hypothesis that high-impact journals tend to have higher Article Processing Charges (APCs) by comparing journals’ IF metrics with the OA publishing fees they charge. Our study engaged with journals both in the Science, Technology, Engineering and Mathematics (STEM) fields and in the Humanities and Social Sciences (HSS), and included Hybrid, Diamond and No-OA journals. The overall findings demonstrate a positive relationship between APCs and high-IF journals for two of the subject areas we examined but not for the third, which could be mediated by the characteristics and market environment of the publishers. We also found significant differences between the analysed research fields in terms of APC policies, as well as differences in the relationship between APCs and the IF across periodicals. The study and analysis conducted reinforce our concerns that Hybrid OA models are likely to perpetuate inequalities in knowledge production.
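As a back-of-the-envelope illustration of the hypothesis described above, one could check for a monotonic association between journal impact factors and APCs with a rank correlation. The sketch below does this on invented data; the study’s actual analysis is richer (multiple fields, OA models, and publisher characteristics).

```python
# Sketch: testing whether higher-JIF journals charge higher APCs with a
# Spearman rank correlation (hypothetical data, not the study's sample).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
n = 120
jif = rng.lognormal(mean=1.0, sigma=0.7, size=n)

# Assume APCs rise with JIF plus noise (illustrative relationship only).
apc = 1500 + 400 * np.log(jif) + rng.normal(scale=500, size=n)

rho, pval = spearmanr(jif, apc)
print(f"Spearman rho = {rho:.2f} (p = {pval:.3g})")
```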