Abstract: An overview is presented of resources and web analytics strategies useful for devising solutions to capture usage statistics and assess audiences for open access academic journals. A set of metrics complementary to citations is proposed to help journal editors and managers provide evidence of the performance of the journal as a whole, and of each article in particular, in the web environment. The measurements and indicators selected seek to generate added value for editorial management in order to ensure its sustainability. The proposal is based on three areas: counts of visits and downloads, optimization of the website along with campaigns to attract visitors, and preparation of a dashboard for strategic evaluation. It is concluded that, by creating web performance measurement plans based on the resources and proposals analysed, journals may be better positioned to plan data-driven web optimization, attract authors and readers, and offer the accountability that the actors involved in the editorial process need to assess their open access business model.
Abstract: In the era of digitization and Open Access, article-level metrics are increasingly employed to identify influential research works and adjust research management strategies. Tagging individual articles with digital object identifiers exposes them to numerous channels of scholarly communication and allows related activities to be quantified. The aim of this article is to provide an overview of currently available article-level metrics and highlight their advantages and limitations. Article views and downloads, citations, and social media metrics are increasingly employed by publishers to move away from the dominance and inappropriate use of journal metrics. Quantitative article metrics are complementary to one another and often require qualitative expert evaluation. Expert evaluation may help to guard against manipulation through indiscriminate social media activity that artificially boosts altmetrics. Values of article metrics should be interpreted in view of confounders such as differing patterns of citation and social media activity across countries and academic disciplines.
Abstract: Introduction. This study aimed to analyse the current use status of Korean scholarly papers accessible in the repository of the Korea Institute of Science and Technology Information in order to assess the economic validity of the maintenance and operation of the repository.
Method. This study used the modified historical cost method and performed regression analysis on the use of Korean scholarly papers by year and subject area.
Analysis. The development cost of the repository and the use volumes were analysed based on 1,154,549 Korean scholarly papers deposited in the Institute repository.
Results. Approximately 86% of the deposited papers were downloaded at least once and on average, a paper was downloaded over twenty-six times. Regression analysis showed that the ratio of use of currently deposited papers is likely to decrease by 7.6% annually, as new ones are added.
Conclusions. The findings indicate the need to manage currently deposited papers for at least thirteen years into the future and provide empirical proof that the repository has contributed to Korean researchers conducting research and development in the fields of science and technology. The benefit-cost ratio was above nineteen, confirming the economic validity of the repository.
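One way to read the 7.6% figure and the thirteen-year horizon together (an assumption on my part, not stated in the abstract): if the use ratio of deposited papers declines linearly by 7.6 percentage points per year, it reaches zero after roughly 100 / 7.6 ≈ 13 years, which would explain the recommended management period. A minimal back-of-envelope check:

```python
# Hypothetical check, assuming the reported 7.6% is a linear annual
# decline in the use ratio of currently deposited papers.
annual_decline_pct = 7.6          # percentage points lost per year (from the abstract)
years_until_zero = 100 / annual_decline_pct
print(round(years_until_zero, 1))  # 13.2 -- consistent with the thirteen-year horizon
```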
“Over the last two-and-a-half years, we have been working as part of the EU-funded HIRMEOS (High Integration of Research Monographs in the European Open Science Infrastructure) project to create open source software and databases to collectively gather and host usage data from various platforms for multiple publishers. As part of this work, we have been thinking deeply about what the data we collect actually means. Open Access books are read on, and downloaded from, many different platforms – this availability is one of the benefits of making work available Open Access, after all – but each platform has a different way of counting up the number of times a book has been viewed or downloaded.
Some platforms count a group of visits made to a book by the same user within a continuous time frame (known as a session) as one ‘view’ – we measure usage in this way ourselves on our own website – but the length of a session might vary from platform to platform. For example, on our website we use Google Analytics, according to which one session (or ‘view’) lasts until there is thirty minutes of inactivity. But platforms that use COUNTER-compliant figures (the standard that libraries prefer) have a much shorter time-frame for a single session – and such a platform would record more ‘views’ than a platform that uses Google Analytics, even if it was measuring the exact same pattern of use.
Other platforms simply count each time a book is accessed (known as a visit) as one ‘view’. There might be multiple visits by the same user within a short time frame – which our site would count as one session, or one ‘view’ – but which a platform counting visits rather than sessions would record as multiple ‘views’.
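The session-versus-visit distinction described above can be made concrete with a small sketch. This is an illustrative toy, not the HIRMEOS code: it assumes only a list of timestamps for one user's hits on one book page, and shows how the same event log yields different 'view' counts depending on the counting rule and the session window.

```python
from datetime import datetime, timedelta

def count_views_sessions(timestamps, inactivity_minutes=30):
    """Session-based counting (Google Analytics style): a new 'view'
    begins only after a gap longer than inactivity_minutes."""
    if not timestamps:
        return 0
    timestamps = sorted(timestamps)
    gap = timedelta(minutes=inactivity_minutes)
    views = 1
    for prev, cur in zip(timestamps, timestamps[1:]):
        if cur - prev > gap:
            views += 1
    return views

def count_views_visits(timestamps):
    """Visit-based counting: every access is one 'view'."""
    return len(timestamps)

# The same user opens a book page three times within twenty minutes.
hits = [
    datetime(2020, 1, 1, 10, 0),
    datetime(2020, 1, 1, 10, 10),
    datetime(2020, 1, 1, 10, 20),
]

print(count_views_sessions(hits))                        # 1 view under a 30-minute session rule
print(count_views_sessions(hits, inactivity_minutes=5))  # 3 views under a shorter window
print(count_views_visits(hits))                          # 3 views if every visit counts
```

Note how shortening the session window (as a COUNTER-style rule effectively does) pushes the session-based count toward the visit-based count, which is exactly why the same usage pattern produces different figures on different platforms.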
Downloads (which we also used to include in the number of ‘views’) also present problems. For example, many sites only allow chapter downloads (e.g. JSTOR), others only whole book downloads (e.g. OAPEN), and some allow both (e.g. our own website). How do you combine these different types of data? Somebody who wants to read the whole book would need only one download from OAPEN, but as many downloads as there are chapters from JSTOR – thus inflating the number of downloads for a book that has many chapters.
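The chapter-versus-whole-book inflation can also be sketched in a few lines. The policy labels here are illustrative stand-ins for the two platform behaviours described above, not a model of any real site's data:

```python
def downloads_to_read_book(policy: str, n_chapters: int) -> int:
    """Minimum downloads a reader needs to obtain the full text,
    under the two download policies described above."""
    if policy == "whole_book":   # single-file platforms (OAPEN-style)
        return 1
    if policy == "per_chapter":  # chapter-only platforms (JSTOR-style)
        return n_chapters
    raise ValueError(f"unknown policy: {policy}")

print(downloads_to_read_book("whole_book", 15))   # 1
print(downloads_to_read_book("per_chapter", 15))  # 15 -- same reader, 15x the count
```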
So aggregating this data into a single figure for ‘views’ isn’t only comparing apples with oranges – it’s mixing apples, oranges, grapes, kiwi fruit and pears. It’s a fruit salad….”
“Journal articles downloaded from Sci-Hub, an illegal site of pirated materials, were cited nearly twice as many times as non-downloaded articles, reports a new paper published online in the journal Scientometrics….
Correa and colleagues could have added either one of these sources of usage data to their model to verify whether the Sci-Hub indicator continued to independently predict future citations. That would have confirmed whether Sci-Hub was a cause of — instead of merely associated with — future citations. Without such a control, the authors may have fumbled both their analysis and conclusion.
Sci-Hub may indeed lead to more article citations, although it is impossible to reach that conclusion from this study….”
“Open research is fundamentally changing the way that researchers communicate and collaborate to advance the pace and quality of discovery. New and dynamic open research-driven workflows are emerging, thus increasing the findability, accessibility, and reusability of results. Distribution channels are changing too, enabling others — from patients to businesses, to teachers and policy makers — to increasingly benefit from new and critical insights. This in turn has dramatically increased the societal impact of open research. But what remains less clear is the exact nature and scope of this wider impact as well as the societal relevance of the underpinning research….”
“What impact does open research have on society and progressing global societal challenges? The latest results of research carried out between Springer Nature, the Association of Universities in the Netherlands (VSNU) and the Dutch University Libraries and the National Library consortium (UKB) illustrate a substantial advantage for content published via the Gold OA route, where research is immediately and freely accessible.
Since the UN’s Sustainable Development Goals (SDGs) were launched in 2015, researchers, their funders and other collaborative partnerships have sought to explore the impact and contribution of open research on SDG development. However – until now – it has been challenging to map, and therefore identify, emerging trends and best practice for the research and wider community. Through a bibliometric analysis of nearly 360,000 documents published in 2017 and a survey of nearly 6,000 readers on Springer Nature websites, the new white paper, Open for All: Exploring the Reach of Open Access Content to Non-Academic Audiences, shows not only the effects of content being published OA but, more importantly, who that research is reaching.”
“Preprint servers offer a means to disseminate research reports before they undergo peer review and are relatively new to clinical research [1-4]. medRxiv is an independent, not-for-profit preprint server for clinical and health science researchers that was introduced in June 2019 [4]. A central question was whether there would be adoption of a new approach to dissemination of pre–peer-review science. Now, a year after its establishment, we report medRxiv’s submissions, posts, and downloads.”