Meet the Robin Hood of Science | Big Think

“ … On September 5th, 2011, Alexandra Elbakyan, a researcher from Kazakhstan, created Sci-Hub, a website that bypasses journal paywalls, illegally providing access to nearly every scientific paper ever published, immediately, to anyone who wants it. The website works in two stages: first it attempts to download a copy from the LibGen database of pirated content, which opened its doors to academic papers in 2012 and now contains over 48 million scientific papers. The ingenious part of the system is that if LibGen does not already have a copy of the paper, Sci-Hub bypasses the journal paywall in real time by using access keys donated by academics lucky enough to study at institutions with an adequate range of subscriptions. This allows Sci-Hub to route the user straight to the paper through publishers such as JSTOR, Springer, Sage, and Elsevier. After delivering the paper to the user within seconds, Sci-Hub donates a copy of the paper to LibGen for good measure, where it will be stored forever, accessible by everyone and anyone …”

Will your paper be more cited if published in Open Access? | SciELO in Perspective

“Academia.edu is a well-known social network for scholars, established in 2008, which currently reports over 30 million registered users. The platform is used to share research papers, monitor their impact and follow research in a particular area of expertise. Its repository contains more than 8 million full-text articles published in open access (OA) and receives 36 million visitors per month. In April 2015, a study conducted by six Academia.edu employees and the consulting company Polynumeral on the growth in citations received by research publications deposited in its open access repository was distributed to the 20 million users then registered on the website; it stated that the articles deposited there received 83% more citations within five years …”

The relationship between journal rejections and their impact factors – ScienceOpen Blog

“Frontiers recently published a fascinating article about the relationship between the impact factors (IF) and rejection rates of a range of journals. It was a neat little study designed around the perception held by many publishers that in order to generate high citation counts for their journals, they must be highly selective and only publish the ‘highest quality’ work. Apart from the issues involved with what can be seen as wasting time and money in rejecting perfectly good research, this apparent relationship has important implications for researchers. They tend to submit first to higher-impact (and therefore apparently more selective) journals in the hope that this confers some sort of prestige on their work, rather than letting their research speak for itself. Given the relatively high likelihood of rejection, submissions then continue down the ‘impact ladder’ until a more receptive venue is finally found for the research. The new data from Frontiers show that this perception is most likely false. From a random sample of 570 journals (indexed in the 2014 Journal Citation Reports; Thomson Reuters, 2015), it seems that journal rejection rates are almost entirely independent of impact factors. Importantly, this implies that researchers can just as easily submit their work to less selective journals and still have the same impact factor assigned to it. This relationship will remain important while the impact factor continues to dominate assessment criteria and how researchers evaluate each other (whether or not the IF is a good candidate for this is another debate) …”
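As a loose illustration of what “almost entirely independent” means in statistical terms, the small Python sketch below computes a Pearson correlation between journal impact factors and rejection rates. The journal names and numbers are invented purely for illustration and are not taken from the Frontiers dataset; a correlation near zero is what the reported independence would look like numerically.

```python
# Illustrative only: synthetic (impact factor, rejection rate) pairs,
# NOT data from the Frontiers study.
from statistics import mean, stdev

journals = {
    "Journal A": (1.2, 0.60),
    "Journal B": (3.8, 0.62),
    "Journal C": (0.9, 0.45),
    "Journal D": (7.5, 0.58),
    "Journal E": (2.4, 0.72),
}

impact_factors = [v[0] for v in journals.values()]
rejection_rates = [v[1] for v in journals.values()]

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# A weak correlation here corresponds to "rejection rates are almost
# entirely independent of impact factors" in the study's framing.
print(f"Pearson r (IF vs rejection rate): {pearson(impact_factors, rejection_rates):.2f}")
```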

How peer reviewers might hold the key to making science more transparent | Pete Etchells | Science | The Guardian

“ … While the theoretical case for open science is easy to make, practically getting scientists to make those changes is less trivial. Over the past few years, initiatives such as the Transparency and Openness Promotion Guidelines, Open Science Foundation badges, and study preregistration have been developed to encourage scientists to adopt open practices. These drives have been very successful in driving top-down change, by encouraging journals to adopt new policies and practices. But what about bottom-up approaches to the problem of promoting open science? On Wednesday, a new paper published in Royal Society Open Science argued for a new, grassroots approach to this problem: putting the power back into the hands of scientists at the coalface of research by changing the way that we think about the peer review process (full disclosure: both I and fellow Head Quarters blogger Chris Chambers are co-authors on the paper). The Peer Reviewers’ Openness (PRO) Initiative is, at its core, a simple pledge: scientists who sign up to the initiative agree that, from January 1 2017, they will not offer to comprehensively review, or recommend the publication of, any scientific research papers for which the data, materials and analysis code are not publicly available, or for which there is no clear reason as to why these things are not available. To date, over 200 scientists have signed the pledge …”

Radical OA Conference – Introduction: Gary Hall and Janneke Adema : Free Download & Streaming : Internet Archive

“ … One of the outcomes of this conference has been the establishment of the Radical Open Access Network, a network of publishers, theorists, scholars, librarians, technology specialists, activists and others, from different fields and backgrounds, both inside and outside of the university. This horizontal collective endeavours to strengthen alliances between the open access movement and other struggles concerned with the right to access, copy, distribute, sell and (re)use artistic, literary, cultural and academic research works and other materials. It draws on a spirit of ongoing creative experimentation, and a willingness to subject some of our most established scholarly communication and publishing practices, together with the institutions that sustain them (the library, publishing house etc.), to rigorous critique …”

Is Academia.edu Improving Access to Professors’ Research—or Is It Just Profiting From It? – The Atlantic

“Richard Price always had an entrepreneurial bent. He started a cake business in his mum’s kitchen during a summer break from his doctoral program at Oxford, eventually converting it into a sandwich-delivery service after realizing people only ate cake once a week. Then, when one of his philosophy papers took three years to get published, Price channeled his business interests into a new venture aimed at streamlining that academic process. After finishing his DPhil (the English equivalent of a Ph.D.), Price raised venture capital in London and moved to San Francisco to start Academia.edu in 2008 …”

Open and Shut?: The open access movement slips into closed mode

“In October 2003, at a conference held by the Max Planck Society (MPG) and the European Cultural Heritage Online (ECHO) project, a document was drafted that came to be known as the Berlin Declaration on Open Access to Knowledge in the Sciences and Humanities. More than 120 cultural and political organisations from around the world attended and the names of the signatories are openly available here. Today the Berlin Declaration is held to be one of the keystone events of the open access movement — offering as it did a definition of open access, and calling as it did on all researchers to publish their work in accordance with the open principles outlined in the Declaration … There have been annual follow-up conferences to monitor implementation of the Berlin Declaration since 2003, and these have been held in various parts of the world — in March 2005, for instance, I attended Berlin 3, which that year took place in Southampton (and for which I wrote a report). The majority of these conferences, however, have been held in Germany, with the last two seeing a return to Berlin. This year’s event (Berlin 12) was held on December 8th and 9th at the Seminaris CampusHotel Berlin. Of course, open access conferences and gatherings are two a penny today. But given its historical importance, the annual Berlin conference is viewed as a significant event in the OA calendar. It was particularly striking, therefore, that this year (unlike most OA conferences, and so far as I am aware all previous Berlin conferences) Berlin 12 was ‘by invitation only’. Also unlike other open access conferences, there was no live streaming of Berlin 12, and no press passes were available. And although a Twitter hashtag was available for the conference, this generated very little in the way of tweets, with most in any case coming from people who were not actually present at the conference, including a tweet from a Max Planck librarian complaining that no MPG librarians had been invited to the conference. Why it was decided to make Berlin 12 a closed event is not clear. We do however know who gave presentations as the agenda is online, and this indicates that there were 14 presentations, 6 of which were given by German presenters (and 4 of these by Max Planck people). This is a surprising ratio given that the subsequent press release described Berlin 12 as an international conference. There also appears to have been a shortage of women presenters (see here, here, and here). But who were the 90 delegates who attended the conference? That we do not know …”

Annotating the scholarly web : Nature News & Comment

“Would researchers scrawl notes, critiques and comments across online research papers if software made the annotation easy for them? Dan Whaley, founder of the non-profit organization Hypothes.is, certainly thinks so. Whaley’s start-up company has built an open-source software platform for web annotations that allows users to highlight text or to comment on any web page or PDF file. And on 1 December, Hypothes.is announced partnerships with more than 40 publishers, technology firms and scholarly websites, including Wiley, CrossRef, PLOS, Project Jupyter, HighWire and arXiv. Whaley hopes that the partnerships will encourage researchers to start annotating the world’s online scholarship. Scientists could scribble comments on research papers and share them publicly or privately, and educators could use annotation to build interactive classroom lessons, he says. If the idea takes off, some enthusiasts suggest that the ability to annotate research papers online might even change the way that papers are written, peer reviewed and published. Hypothes.is, which was founded in 2011 in San Francisco, California, and is supported by philanthropic grants, has a bold mission: ‘To enable conversations over the world’s knowledge.’ But the concept it implements, online annotation, is as old as the web itself. The idea of permitting readers of web pages to annotate them dates back to 1993; an early version of the Mosaic web browser had this functionality. Yet the feature was ultimately discarded. A few websites today have inserted code that allows annotations to be made on their pages by default, including the blog platform Medium, the scholarly reference-management system F1000 Workspace and the news site Quartz. However, annotations are visible only to users on those sites. Other annotation services, such as A.nnotate or Google Docs, require users to upload documents to cloud-computing servers to make shared annotations and comments on them. Hypothes.is is not the only service that wants to make it easy for users to leave annotations across the entire web. A competing offering is a web annotation service from Genius, a start-up firm that began as a site for annotating rap lyrics. In April, it launched services such as browser plugins to help users to annotate any web page. But unlike Hypothes.is, the Genius code is not open-source, its service doesn’t work on PDFs, and it is not working with the scholarly community. On the scholarly side, the reference-management tool ReadCube makes it possible for users to annotate PDFs of papers viewed on a ReadCube web reader — but that software is proprietary. (ReadCube is owned by Digital Science, a firm operated by the Holtzbrinck Publishing Group, which also has a share in Nature’s publisher.) …”
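For readers curious how this open annotation layer can be queried programmatically, here is a minimal Python sketch that pulls public annotations for a given page through the Hypothes.is search API. It assumes the public endpoint at api.hypothes.is/api/search and its uri/limit parameters; treat the details as assumptions to check against the current Hypothes.is API documentation rather than a definitive client.

```python
# Minimal sketch: fetch public Hypothes.is annotations for one URL.
# Assumes the public search endpoint at https://api.hypothes.is/api/search
# and its `uri`/`limit` parameters (verify against the current API docs).
import requests

def fetch_annotations(page_url, limit=20):
    """Return public annotations attached to `page_url` as a list of dicts."""
    resp = requests.get(
        "https://api.hypothes.is/api/search",
        params={"uri": page_url, "limit": limit},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("rows", [])

if __name__ == "__main__":
    # Hypothetical example page: list each annotator and the start of their note.
    for ann in fetch_annotations("https://arxiv.org/abs/1501.00001"):
        print(ann.get("user"), "-", (ann.get("text") or "")[:80])
```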

How ContentMine at Cambridge will use CrossRef’s API to mine Science | ContentMine

“I’ve described how CrossRef works – now I’ll show how ContentMine will use it for daily mining. ContentMine sets out to mine the whole scientific literature for “100 million facts”. Up till now we’ve been building the technical infrastructure, challenging for our rights, understanding the law, and ordering the kit. We’ve built and deployed a number of prototypes. But we are now ready to start indexing science in earnest. Since ContentMining has been vastly underused, and because publisher actions have often chilled researchers and libraries, we don’t know in detail what people want and how they would tackle it. We think there are many approaches – here are a few …”
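As a rough sketch of what daily mining against CrossRef could look like, the Python snippet below asks the public CrossRef REST API for works registered since a given date. The endpoint (api.crossref.org/works) and the filter/rows parameters are part of the public API, but the polling strategy and field handling here are illustrative assumptions, not ContentMine's actual pipeline.

```python
# Illustrative sketch, not ContentMine's actual pipeline: poll the public
# CrossRef REST API for newly indexed works so they can be queued for mining
# (where licences and rights allow).
import requests

CROSSREF_WORKS = "https://api.crossref.org/works"

def new_works_since(date_iso, rows=20):
    """Return metadata for works CrossRef indexed on or after `date_iso` (YYYY-MM-DD)."""
    resp = requests.get(
        CROSSREF_WORKS,
        params={"filter": f"from-index-date:{date_iso}", "rows": rows},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["message"]["items"]

if __name__ == "__main__":
    for item in new_works_since("2015-12-01"):
        title = (item.get("title") or ["<untitled>"])[0]
        print(item.get("DOI"), "-", title)
```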

Four Reasons Why We Are Converting to Open Science | Rene Bekkers

“Open Science offers four advantages to the scientific community, nonprofit organizations, and the public at large: (1) Access: we make our work more easily accessible for everyone; our research serves public goods, which are served best by open access. (2) Efficiency: we make it easier for others to build on our work, which saves time. (3) Quality: we enable others to check our work, find flaws and improve it. (4) Innovation: ultimately, open science facilitates the production of knowledge. What does the change mean in practice? …”