Open access: Half the time Unpaywall users search for academic journal articles that are legally free to access — Quartz

“Now a new study has found that nearly half of all academic articles that users want to read are already freely available. These studies may or may not have been published in an open-access journal, but there is a legally free version available for a reader to download.

To arrive at this conclusion, researcher Heather Piwowar and her colleagues used data from a web-browser extension they had developed called Unpaywall. When users of the extension land on an academic article, it trawls the web to find whether free versions are available to download from places such as preprint servers or university websites.

In an analysis of 100,000 papers queried through Unpaywall, Piwowar and her colleagues found that as many as 47% of the searched-for studies had a free-to-read version available. The study is yet to be peer-reviewed, but Ludo Waltman of Leiden University told Nature that it is ‘careful and extensive.’”

The State of OA: A large-scale analysis of the prevalence and impact of Open Access articles [PeerJ Preprints]

Despite growing interest in Open Access (OA) to scholarly literature, there is an unmet need for large-scale, up-to-date, and reproducible studies assessing the prevalence and characteristics of OA. We address this need using oaDOI, an open online service that determines OA status for 67 million articles.


The strange world of academic publishing | Ecology Ngātahi

“A couple of days ago I tried to explain to my parents (non-scientists, obviously) how publishing a paper works and why it is so important for us scientists. It was no problem for them to wrap their heads around the publish-or-perish principle. Naturally they wanted to know where they could read these papers, and that’s where the story became a little more complicated and confusing for an outsider. It just doesn’t make sense to them that scientists give their work to publishers for free, and that reviewers and editors, who also put in considerable work hours, don’t see a penny either. The publishing companies, on the other hand, earn huge amounts of money by selling single articles to individuals and, more importantly, journal subscriptions to numerous university and research libraries worldwide. The big publishing houses basically make their profits from selling the free work of scientists back to them through the university libraries, with profit margins of up to 40%. Sounds a bit insane, right?”

Getting serious about open access discovery – Is open access getting too big to ignore? | Musings about librarianship

“With all the intense interest Unpaywall is getting (see coverage in academic outlets like Nature, Science, and the Chronicle of Higher Education, as well as more mainstream tech sites like Techcrunch and Gizmodo), you might be surprised to know that Unpaywall isn’t in fact the first tool that promises to help users unlock paywalls by finding free versions.

Predecessors like the Open Access Button (3k users), Lazy Scholar button (7k users), and Google Scholar button (1.2 million users) all existed before Unpaywall (70k users) and are arguably every bit as capable as Unpaywall, and yet remained niche services for years.”

I like the new Clarivate-Impactstory partnership for several reasons, but…

“I like the new Clarivate-Impactstory partnership for several reasons….However, the Clarivate PR team…inserted this passage into the press release: “Researchers conducting online searches for scholarly articles frequently get unreliable results that can compromise their work. This is typically because the results omit journal articles behind paid-subscription paywalls or because ‘web-scraping’ utilities return versions of articles that are not peer-reviewed or are in violation of copyright laws.” …

It’s true that search results can be unreliable because they omit paywalled articles. But there are a few problems with the rest of the passage….

* The sentence on web-scraping utilities is obscure. Because it mentions articles that are not peer-reviewed, it seems to be an oblique criticism of preprint repositories. But preprint repositories depend on voluntary author deposits, not web scraping. Moreover, finding preprints in a search is a feature for people who know how to use them, not a bug. It doesn’t make the search less reliable. The criticism misses the target. 

* Perhaps the reference to web scraping is an oblique criticism of Sci-Hub. But Sci-Hub focuses on refereed postprints, indeed versions of record, not unrefereed preprints. Moreover, it depends on downloads, even if illicit, not web scraping. The criticism misses the target.

* The final part implies that finding illegal copies of peer-reviewed articles in a search makes the search unreliable. This is false. The writer probably meant to criticize these copies for infringement, but instead criticizes them for unreliability. The criticism misses the target.”

Clarivate Analytics announces landmark partnership with Impactstory to make open access content

“Clarivate Analytics today announced a novel public/private strategic partnership with Impactstory that will remove a critical barrier for researchers: limited open access (OA) to high-quality, trusted peer-reviewed content. Under the terms of the partnership, Clarivate Analytics is providing a grant to Impactstory to build on its oaDOI service, making open access content more easily discoverable, and the research workflow more efficient from discovery through publishing….The oaDOI service is from Impactstory, a nonprofit creating online tools to make science more open and reusable. It currently indexes 90 million articles and delivers open-access full text versions over a free, fast, open API that is currently used by over 700 libraries worldwide and fulfills over 2 million requests daily. Impactstory has also built Unpaywall, a free browser extension that uses oaDOI to find full text whenever researchers come across paywalled articles….”
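The oaDOI service described above exposes its OA lookups over a public REST API. As a rough illustration only, here is a minimal sketch of querying it for a single DOI; the endpoint and field names follow the Unpaywall v2 API (the successor to the original oaDOI endpoint), and the email address is a placeholder you would replace with your own, as the service asks:

```python
# Minimal sketch of an OA-status lookup against the Unpaywall/oaDOI v2 API.
# Assumptions: the v2 endpoint shape and the `is_oa` / `best_oa_location`
# response fields, as documented by Unpaywall; replace the email placeholder.
import json
import urllib.request


def oa_query_url(doi, email):
    # The v2 API takes the DOI in the URL path and an `email` query parameter.
    return f"https://api.unpaywall.org/v2/{doi}?email={email}"


def oa_status(doi, email="you@example.com"):
    """Return (is_oa, url_of_best_free_copy_or_None) for a DOI."""
    with urllib.request.urlopen(oa_query_url(doi, email)) as resp:
        record = json.load(resp)
    # `is_oa` is true when a legally free version was found;
    # `best_oa_location` then describes where that copy lives.
    best = record.get("best_oa_location") or {}
    return record.get("is_oa", False), best.get("url")
```

This is the same kind of lookup the Unpaywall browser extension performs behind the scenes when a reader lands on a paywalled article.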