[2011.09079] Do ‘altmetric mentions’ follow Power Laws? Evidence from social media mention data in Altmetric.com

Abstract:  Power laws are a characteristic distribution that is nearly ubiquitous, found in both natural and man-made systems. They tend to emerge in large, connected and self-organizing systems, for example, scholarly publications. Citations to scientific papers have been found to follow a power law, i.e., the number of papers having a certain citation count x is proportional to x raised to some negative power. The distributional character of altmetrics has not yet been studied, as altmetrics are among the newest indicators related to scholarly publications. Here we select a data sample from the altmetrics aggregator Altmetric.com containing records from the platforms Facebook, Twitter, News, Blogs, etc., and the composite variable Alt-score, for the year 2016. The individual and composite data series of ‘mentions’ on the various platforms are fit to a power law distribution, and the parameters and goodness of fit are determined using least squares regression. The log-log plot of the data, ‘mentions’ vs. number of papers, falls on an approximately linear line, suggesting the plausibility of a power law distribution. The fit is not good in all cases because of large fluctuations in the tail. We show that the power law fit can be improved by truncating the data series to eliminate these tail fluctuations. We conclude that altmetric distributions also follow power laws, with a fairly good fit over a wide range of values. More rigorous methods of determination may not be necessary at present.
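The fitting procedure the abstract describes, least-squares regression on log-transformed mention counts, can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' code; the function names and the synthetic data are my own, and the paper's tail-truncation step simply corresponds to slicing the series before calling the fit.

```python
import math

def fit_power_law(x, y):
    """Least-squares fit of log(y) = log(C) + alpha*log(x),
    i.e. y = C * x**alpha. Returns (C, alpha, r_squared)."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    sxx = sum((a - mx) ** 2 for a in lx)
    alpha = sxy / sxx                 # slope on the log-log plot
    log_c = my - alpha * mx           # intercept
    # R^2 in log space as the goodness-of-fit measure
    ss_res = sum((b - (log_c + alpha * a)) ** 2 for a, b in zip(lx, ly))
    ss_tot = sum((b - my) ** 2 for b in ly)
    r2 = 1.0 - ss_res / ss_tot if ss_tot else 1.0
    return math.exp(log_c), alpha, r2

# Synthetic example: an exact power law y = 1000 * x^-2
xs = list(range(1, 21))
ys = [1000 * v ** -2 for v in xs]
c, alpha, r2 = fit_power_law(xs, ys)
```

On real mention data the tail is noisy, so `r2` drops; refitting on a truncated series (e.g. `fit_power_law(xs[:k], ys[:k])`) mirrors the improvement the authors report.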

 

Full article: Pharmaceutical industry-authored preprints: scientific and social media impact

Abstract

Aim: Non–peer-reviewed manuscripts posted as preprints can be cited in peer-reviewed articles, which has both merits and demerits. International Committee of Medical Journal Editors guidelines require authors to declare preprints at the time of manuscript submission. We evaluated trends in pharma-authored research published as preprints, and its scientific and social media impact, by analyzing citation rates and altmetrics.

Research design and methods: We searched EuroPMC, PrePubMed, bioRxiv and MedRxiv for preprints submitted by authors affiliated with the top 50 pharmaceutical companies, from inception to June 15, 2020. Data were extracted and analyzed from the search results. The numbers of citations for the preprint and peer-reviewed versions (if available) were compiled using the Publish or Perish software (version 1.7). Altmetric scores were calculated using the “Altmetric it” online tool. Statistical significance was assessed with the Wilcoxon rank-sum test.

Results: A total of 498 preprints were identified across the bioRxiv (83%), PeerJ (5%), F1000Research (6%), Nature Precedings (3%), Preprints.org (3%), Wellcome Open Research (0.2%) and MedRxiv (0.2%) servers. Roche, Sanofi and Novartis contributed 56% of the retrieved preprints. The median number of citations for the included preprints was 0 (IQR = 1, min-max = 0-45). The median number of citations for the published and unpublished preprints was 0 for both (IQR = 1, min-max = 0-25 and IQR = 1, min-max = 0-45, respectively; P = .091). The median Altmetric score of the preprints was 4 (IQR = 10.5, min-max = 0-160).

Conclusion: Pharma-authored research is increasingly being published as preprints, and it is being cited in other peer-reviewed publications and discussed on social media.

A fairer way to compare researchers at any career stage and in any discipline using open-access citation data

Abstract:  The pursuit of simple yet fair, unbiased, and objective measures of researcher performance has occupied bibliometricians and the research community as a whole for decades. However, despite the diversity of available metrics, most are either complex to calculate or not readily applied in the most common assessment exercises (e.g., grant assessment, job applications). The ubiquity of metrics like the h-index (h papers with at least h citations) and its time-corrected variant, the m-quotient (h-index ÷ number of years publishing), therefore reflects ease of use rather than capacity to differentiate researchers fairly across disciplines, career stages, or genders. We address this problem here by defining an easily calculated index based on publicly available citation data (Google Scholar) that corrects for most biases and allows assessors to compare researchers at any stage of their career and from any discipline on the same scale. Our ε-index violates fewer statistical assumptions than other metrics when comparing groups of researchers, and can easily be modified to remove inherent gender biases in citation data. We demonstrate the utility of the ε-index using a sample of 480 researchers with Google Scholar profiles, stratified evenly into eight disciplines (archaeology, chemistry, ecology, evolution and development, geology, microbiology, ophthalmology, palaeontology), three career stages (early-, mid-, late-career), and two genders. We advocate the use of the ε-index whenever assessors must compare research performance among researchers of different backgrounds, but emphasise that no single index should be used exclusively to rank researcher capability.
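The two baseline metrics the abstract defines, the h-index (h papers with at least h citations) and the m-quotient (h-index divided by years publishing), are simple to compute from a citation list. This is a generic sketch of those two standard definitions only; it does not reproduce the paper's own proposed index, which involves further normalisation.

```python
def h_index(citations):
    """Largest h such that at least h papers have h or more citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:   # this paper still clears the threshold
            h = rank
        else:
            break
    return h

def m_quotient(citations, years_publishing):
    """Time-corrected variant: h-index divided by years publishing."""
    return h_index(citations) / years_publishing
```

For a researcher with citation counts [10, 8, 5, 4, 3] the h-index is 4 (four papers with at least 4 citations), and after 8 years of publishing the m-quotient is 0.5.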

 

Could early tweet counts predict later citation counts? A gender study in Life Sciences and Biomedicine (2014–2016)

Abstract:  This study investigated whether early tweet counts could differentially benefit female and male (first, last) authors in terms of the citation counts later received. The data comprised 47,961 articles in the research area of Life Sciences & Biomedicine from 2014–2016, retrieved from Web of Science’s Medline. For each article, the number of citations received per year was downloaded from WoS, while the number of tweets received per year was obtained from PlumX. Using a hurdle regression model, I compared the number of citations received by female- and male- (first, last) authored papers, and then investigated whether early tweet counts could predict the citation counts those papers later received. In the regression models, I controlled for several important factors that previous research has investigated in relation to citation counts, gender or altmetrics: journal impact (SNIP), number of authors, open access, research funding, topic of the article, international collaboration, lay summary, F1000 score and mega journal. The findings showed that the percentage of papers with male authors in first or last authorship positions was higher than that for female authors. However, female first- and last-authored papers had a small but significant citation advantage of 4.7% and 5.5%, respectively, over male-authored papers. The findings also showed that, whether or not these factors were included in the regression models, early tweet counts had a weak, positive and significant association with later citation counts (3.3%) and with the probability of a paper being cited (21.1%). Regarding gender, when all variables were controlled, female (first, last) authored papers had a small citation advantage of 3.7% and 4.2% over male-authored papers for the same number of tweets.
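A hurdle model, as used above, treats citation data as two separate processes: whether a paper clears the "hurdle" of being cited at all, and, conditional on that, how many citations it accumulates. The sketch below only computes those two components descriptively; an actual hurdle regression would fit a binary model (e.g. logit) for the first part and a zero-truncated count model for the second, with the study's covariates included. The function name and data are illustrative.

```python
def hurdle_components(citations):
    """Split citation counts into the two parts a hurdle model estimates:
    (1) the hurdle: share of papers cited at least once;
    (2) the count part: mean citations among cited papers only."""
    cited = [c for c in citations if c > 0]
    p_cited = len(cited) / len(citations)
    mean_if_cited = sum(cited) / len(cited) if cited else 0.0
    return p_cited, mean_if_cited

# Example: four papers, two uncited
p, m = hurdle_components([0, 0, 1, 3])
```

Separating the two parts matters here because the study reports different tweet-count associations for each: 21.1% for the probability of being cited versus 3.3% for the citation count itself.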

 

WikiCite/2020 Virtual conference – Meta

“A Wikimedia initiative to develop open citations and linked bibliographic data to serve free knowledge. WikiCite is a series of conferences and workshops in support of that goal. The project is based on Wikidata, which celebrates its 8th birthday this year. As part of this year’s online conference, there is a series of sessions looking in depth at the WikiCite facets of Wikidata relating to citations, publications, authors, institutions, archives and related topics….”

Do open access journal articles experience a citation advantage? Results and methodological reflections of an application of multiple measures to an analysis by WoS subject areas

Abstract:  This study is one of the first to use the recently introduced open access (OA) labels in the Web of Science (WoS) metadata to investigate whether OA articles published in journals listed in the Directory of Open Access Journals (DOAJ) experience a citation advantage over subscription journal articles, specifically those for which no self-archived versions are available. Bibliometric data on all articles and reviews indexed in WoS and published from 2013 to 2015 were analysed. In addition to the normalised citation score (NCS), we used two further measures of citation advantage: whether an article was cited at all, and whether an article is among the most frequently cited percentile of articles within its respective subject area (pptop X%). For each WoS subject area, we calculated the strength of the relationship between access status (whether an article was published in an OA journal) and each of these three measures. We found that OA journal articles experience a citation advantage in very few subject areas and, in most of those, the advantage appeared on only a single measure, namely whether the article was cited at all. Our results lead us to conclude that access status accounts for little of the variability in the number of citations an article accumulates. The methodology and calculations used in this study are described in detail, and we believe the lessons we learnt and the recommendations we make will be useful to future researchers interested in the WoS OA labels, and to the field of citation advantage in general.
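Two of the three citation-advantage measures in the abstract can be made concrete with a short sketch: the normalised citation score (an article's citations divided by the mean for its field and period) and the top-percentile indicator (whether the article ranks among the top X% most-cited in its field). These are generic reconstructions of the standard definitions, not the study's actual computation; function names and thresholds are illustrative.

```python
import math

def normalised_citation_score(citations, field_citations):
    """Article citations divided by the mean citation count
    of all articles in its field (and publication period)."""
    field_mean = sum(field_citations) / len(field_citations)
    return citations / field_mean

def in_top_percentile(citations, field_citations, x=10.0):
    """True if the article ranks within the top x% most-cited
    articles of its field (the pptop X% style of indicator)."""
    ranked = sorted(field_citations, reverse=True)
    cutoff = max(1, math.ceil(len(ranked) * x / 100.0))
    return citations >= ranked[cutoff - 1]
```

For a field of 100 articles cited 1 to 100 times, an article with 101 citations has an NCS of 2.0 (field mean 50.5), and the top-10% threshold falls at 91 citations. The study's third measure, whether an article was cited at all, is simply `citations > 0`.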