From Google’s English: “A new way of conceiving scientific research, open science, was born with the computer revolution. In the wake of Open Access (free access to the results of research funded by public money), it accompanies the great ideal of transparency that today pervades all spheres of life in society. This book [by Bernard Rentier] describes its origins, perspectives and objectives, and reveals the obstacles raised by private profit and academic conservatism. …”
“2014 was the year of groundbreaking conversions to open access. The most publicized one was the transition of Nature Communications, which showed that open access is attractive even for the most reputable journals worldwide. The conversion of eight Central European journals was also accomplished this year by De Gruyter Open; it was a significant change for researchers in the region, and hopefully it will prove to be important for the global community. Given these recent developments, I was quite surprised when I came back to the paper by David J. Solomon, Bo-Christer Bjork and Mikael Laakso, “A longitudinal comparison of citation rates and growth among open access journals”, which has already been discussed on this blog.
What surprised me was the graph attached to the text, representing the number of journals that converted to open access each year. The graph is based on data from Scopus and DOAJ. According to these data, the number of journal conversions grew steadily year by year from 1995 to 2000, but then it started to decline, which is hard to explain. Eventually, in 2012, it reached a lower value than in 1997….”
“The internet has dramatically lowered the cost of copying, including illicit copying. When the web was first woven in the 1990s, intellectual-property owners found their property had, involuntarily, been turned into a common. Strong new copyright rules and draconian enforcement seemed to be necessary to tame the rebellious digital commoners and reclaim the level of control that had existed in an analogue world.
These arguments found a receptive audience among policymakers worldwide, and copyright’s scope, duration and penalties were dramatically expanded. Over the past two decades new legal rights have allowed “digital fences” to be used to surround copyrighted works, even if those fences interfered with people’s rights, such as the right to freely use snippets of content (the legal doctrine of “fair dealing,” known as “fair use” in America). Copyright’s restrictions were also misused to curtail competition, block research on cryptography and produce new online monopolies. Again, the “solution” to the tragedy of the commons—property rights—came with hefty costs.
You could consider the growing restrictions around intellectual property as “the second enclosure movement”. The first enclosures were the centuries-long waves of expropriation of English and Scottish common lands, turning them over to a handful of landowners….
Yet just as Hardin’s argument met with pushback from Ostrom and others in the physical context, there has also been powerful intellectual resistance to the second enclosure movement. Most notably, some of the problems of the terrestrial commons do not apply to the intangible versions: it is hard to overfish an idea….
Consider open-source software. It is precisely because the licence guarantees that the commons will remain open, and that each new contribution will be shared under the same terms, that people can commit to using it. Imagine trying to get phone manufacturers to use the Android operating system if Google could take it private at any time….
Furthermore, the proliferation of property rights has its costs. The American legal scholars Michael Heller and Rebecca Eisenberg call it the “anti-commons”: the idea that innovation withers because of too many property rights, patent thickets, exhaustive and exhausting copyright licensing procedures and the like. To take one example, the smartphone in your pocket is covered by between 5,000 and 15,000 patents, and potentially by as many as 250,000 when all related patents are counted. …”
“Never underestimate the power of one determined person. What Carole Cadwalladr has done to Facebook and big data, and Edward Snowden has done to the state security complex, the young Kazakhstani scientist Alexandra Elbakyan has done to the multibillion-dollar industry that traps knowledge behind paywalls. Sci-Hub, her pirate web scraper service, has done more than any government to tackle one of the biggest rip-offs of the modern era: the capture of publicly funded research that should belong to us all. Everyone should be free to learn; knowledge should be disseminated as widely as possible. No one would publicly disagree with these sentiments. Yet governments and universities have allowed the big academic publishers to deny these rights. Academic publishing might sound like an obscure and fusty affair, but it uses one of the most ruthless and profitable business models of any industry.
The model was pioneered by the notorious conman Robert Maxwell. He realised that, because scientists need to be informed about all significant developments in their field, every journal that publishes academic papers can establish a monopoly and charge outrageous fees for the transmission of knowledge. He called his discovery “a perpetual financing machine”. He also realised that he could capture other people’s labour and resources for nothing. Governments funded the research published by his company, Pergamon, while scientists wrote the articles, reviewed them and edited the journals for free. His business model relied on the enclosure of common and public resources. Or, to use the technical term, daylight robbery.
As his other ventures ran into trouble, he sold his company to the Dutch publishing giant Elsevier. Like its major rivals, it has sustained the model to this day, and continues to make spectacular profits. Half the world’s research is published by five companies: Reed Elsevier, Springer, Taylor & Francis, Wiley-Blackwell and the American Chemical Society. Libraries must pay a fortune for their bundled journals, while those outside the university system are asked to pay $20, $30, sometimes $50 to read a single article….”
“We should aim to create an open scientific culture where as much information as possible is moved out of people’s heads and labs, onto the network, and into tools which can help us structure and filter the information. This means everything – data, scientific opinions, questions, ideas, folk knowledge, workflows, and everything else – the works. Information not on the network can’t do any good….”
“At the time, the conversation felt constructive. Vocal customer segments were demanding change, new players were entering the market, and existing players were modifying their stance. From 2005 to 2007 I had many conversations with multiple funding body representatives. I heard on many occasions words to the effect of “We recognize the value of publishing, but we want to improve the dissemination of the research we fund; will you work with us?” Across the table, there was a desire to find a solution.
When I returned to journal publishing a few years back, I thought the whole Open Access debate might have been put to bed. After all, if customers want Open Access options there are plenty. But to be clear, it has not been put to bed, and today the tone is much different. In September, thirteen European funding bodies proposed Plan S, and the Wellcome Trust and The Gates Foundation quickly endorsed it. Though draped in the language of Open Access, Plan S is not about ensuring that the research these funders support can be Open Access (such venues already exist); it is about undermining the commercial viability of subscription journal publishing, and also, it turns out, limiting the commercial viability of Open Access publishing. There are a number of provisions in Plan S that are intended to do real harm to publishers. And Robert-Jan Smits, the European Commission’s Special Envoy on Open Access, who leads Plan S, is unabashed about Plan S’s aspiration to undermine publishers. For example, in a recent Physics Today article he states “There is something very wrong in the [publishing] system, and it has to change big time,” and “for the last 20 years, libraries, universities, and [others] had the possibility to sort this out. But they did not. Now the funders have stepped in, and they now call the shots.” …
Open Access is no longer about Open Access, it is about harming publishers. And that is a shame.”
“My path to OpenStax was a convoluted one. I went to Rice University to study opera performance, but like many students I had a change of heart somewhere along the way. Luckily, I had also been working as a technologist throughout college, doing website design and development for faculty. When I graduated in 2008, at the height of the recession, I was fortunate enough to have one of my supervisors recommend that I look into working at Connexions – the predecessor to OpenStax.
I joined the team as a content manager, thinking that this would be a good interim job where I could learn some new skills while I figured out what I was going to do with my life.
I quickly realized that I was working with brilliant people looking to solve a very interesting problem: how to democratize access to publishing and increase the availability of knowledge….”
“As for the articles themselves, Suber is a witty, intelligent, and compelling advocate for OA. In the first sections, Suber lays a foundation explaining OA and its emergence as a response to the serials pricing crisis and the development of the web. Across multiple articles, Suber lays out his strongest arguments for knowledge as a public good and for OA specifically, describes and refutes the opposing arguments, creates the vocabulary that distinguishes between flavors of OA, and presents evidence that OA can and will work.
Moving on from the early overview chapters, Suber explores some practical applications that are useful for librarians looking to learn more about how OA can be implemented. One of the interesting concepts that he explores is “flipping a journal” (150), a process by which a journal could become OA by replacing the subscription fees imposed on readers with publication fees imposed on accepted authors. The model is not perfect, and Suber devotes considerable space to envisioning the realistic obstacles that would be faced in practice, but he ultimately views the process as a win-win that would allow publishers to explore OA without much risk. Elsewhere, Suber discusses scenarios for creating OA digitization projects, establishing OA for electronic theses and dissertations, and setting an OA policy for a funding agency or university. Suber’s advocacy is pragmatic throughout, arguing that different forms of OA are suited to different contexts and that, although some forms come closer to the ideal than others, it is important to recognize all progress and not let the perfect become the enemy of the good….”
“This compelling blend of theory, policy, and practice is also on display in Suber’s fascinating new book, Knowledge Unbound: Selected Writings on Open Access, 2002-2011, published last year by MIT Press. Anyone with an interest in the rich history and evolving landscape of academic publishing should take note of Suber’s work. Director of both the Harvard Office for Scholarly Communication and the Harvard Open Access Project, Senior Researcher at SPARC and the Berkman Klein Center for Internet & Society, and Research Professor of Philosophy at Earlham College, Suber became a leader in the open access movement during its pivotal decade, a period traced in this collection of forty-four essays….
For Suber, asking vital questions about the future of scholarly publishing naturally requires a critical stance toward “the assumption that the interests of the research community should be subordinated to the business interests of publishers” (p. 21). With forceful clarity, Suber guides the reader away from bended-knee paeans to a tiresome red herring, encouraging serious reflection about what scholarly publishing ought to accomplish in the first place….”