Push Versus Pull | April 2018 | Communications of the ACM

“The best consequence of the proposed Pull Model is access for all. It also introduces a free market mechanism for scholarly publications, whereby publishers must compete for institution submission subscription fees, by establishing themselves to be worthy outlets for dissemination, maintaining their reputation for quality, and preserving the integrity of the peer-review process. Lastly, it encourages institutions and their faculty to work more closely in assessing publication quality. With these ends in mind, the future of publications will continue to change, and the Pull Model, though disruptive to the existing publishing ecosystem, is one step to initiate a discussion on such a transformation.”

Can the automatic posting of preprints increase the pace of medical research? – The Publication Plan

“Preprints — versions of research papers made publicly available prior to formal publication in a peer-reviewed journal — continue to be a topic of much discussion within the medical publications community. As the industry looks at ways to improve and advance the transparent and timely dissemination of research, preprints offer a potential route to achieving these aims. Preprints are already commonly used in fields such as physics, and the launch of the medical publications preprint server medRxiv, expected later this year, is awaited with interest.

Meanwhile, Public Library of Science (PLOS) announced last month that all articles submitted to PLOS journals will now automatically be published on the biology preprint server bioRxiv as preprints, ahead of ‘traditional’ publication in a PLOS journal. Following initial top-line checks by PLOS to ensure adherence to ethical standards and the journal’s scope, articles will be posted to bioRxiv while undergoing peer review at PLOS in parallel.

PLOS and Cold Spring Harbor Laboratory, which operates bioRxiv, hope this collaboration will help advance data dissemination and ultimately increase the speed of research. Other groups have also explored the potential of preprints, including the possibility that preprints could improve online article engagement and that journals could use preprint servers to identify potential articles for publication.”

Seeking Impact and Visibility: Scholarly Communication in Southern Africa

“African scholarly research is relatively invisible globally because even though research production on the continent is growing in absolute terms, it is falling in comparative terms. In addition, traditional metrics of visibility, such as the Impact Factor, fail to make legible all African scholarly production. Many African universities also do not take a strategic approach to scholarly communication to broaden the reach of their scholars’ work. To address this challenge, the Scholarly Communication in Africa Programme (SCAP) was established to help raise the visibility of African scholarship by mapping current research and communication practices in Southern African universities and by recommending and piloting technical and administrative innovations based on open access dissemination principles. To do this, SCAP conducted extensive research in four faculties at the Universities of Botswana, Cape Town, Mauritius and Namibia.”

Scholarly Communication in Africa Project – SALDRU

“The Scholarly Communication in Africa Programme (SCAP) was a three-year initiative aimed at increasing the publication and visibility of African research through harnessing the potential for scholarly communication in the digital age. Jointly led by the Centre for Educational Technology and the Research Office at UCT, the project engaged four African universities [the Universities of Botswana, Cape Town, Mauritius, and Namibia] in action research to better understand the ecosystem of scholarly communication in Africa and address the scholarly communication needs and aspirations at the various participating institutions….”

Blockchain offers a route to a true scholarly commons

“Elsevier’s acquisition of these open services highlights the prospect that companies will come to control the global scientific infrastructure. Even though the openness of the digital services and resources used by researchers is increasingly taken for granted, the traditional web is not a public domain….

But there is another way, in the form of the decentralised web. Here, there are no central repositories; instead, infrastructure is decentralised and information is distributed among countless computers, accessible to all, in what are called peer-to-peer networks.

One example of this technology is BitTorrent. This allows the rapid transmission of large files by dividing them into chunks and allowing clients to load objects piece by piece, while at the same time making the already loaded pieces available to all other clients. Various initiatives, such as decentralised archive transport (DAT) and the interplanetary file system (IPFS), have adapted BitTorrent for information sharing, replacing a centralised infrastructure of servers and clients with a decentralised network. In the decentralised web, trust is created not through trade names or URLs, but through cryptography….

Hosting content is no longer a special role that requires trustworthy institutions, service contracts and plausible business models. Rather, the objects are in the public domain, and their dissemination requires only the open protocols of the decentralised web. This paves the way for new business models, by opening up another area of the internet to the “permissionless innovation” that drove its development.

The benefits of such approaches for making digital objects available for research, teaching and digital cultural heritage are obvious. As well as technical improvements in the transmission and storage of information, the current situation of privileged players controlling access to content would be replaced with a true scholarly commons, distributed between many computer systems….

The impetus for the second step—complete disintermediation, doing away with centralised publishers in the same way that bitcoin renders banks unnecessary—is more likely to come from start-ups than incumbents. There are already decentralised peer-to-peer scholarly publishing platforms, for example Aletheia and Pluto. Decentralised social networking platforms, such as the Akasha Project, are also in development….”
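The mechanism this excerpt describes, splitting objects into pieces and trusting content through cryptography rather than through whoever hosts it, can be sketched in a few lines. The following is a minimal illustration in Python, not the actual DAT or IPFS protocol; the function names, the piece size, and the in-memory “store” standing in for a peer-to-peer network are all assumptions chosen for clarity.

```python
import hashlib

PIECE_SIZE = 256 * 1024  # fixed-size pieces, as in BitTorrent; the size is arbitrary

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def publish(obj: bytes):
    """Split an object into pieces and name everything by content hash."""
    pieces = [obj[i:i + PIECE_SIZE] for i in range(0, len(obj), PIECE_SIZE)]
    manifest = [sha256(p) for p in pieces]        # ordered piece hashes
    root = sha256("".join(manifest).encode())     # one hash commits to the whole object
    store = dict(zip(manifest, pieces))           # any peer can host any of these pieces
    return root, manifest, store

def fetch(root: str, manifest: list[str], store: dict[str, bytes]) -> bytes:
    """Reassemble an object from untrusted peers, trusting only the hashes."""
    if sha256("".join(manifest).encode()) != root:
        raise ValueError("manifest does not match the requested root hash")
    pieces = []
    for h in manifest:
        p = store[h]                              # could come from any computer at all
        if sha256(p) != h:
            raise ValueError("a peer returned a corrupted piece")
        pieces.append(p)
    return b"".join(pieces)

# A reader who knows only the root hash can verify the whole object end to end.
paper = b"example object bytes " * 50_000
root, manifest, store = publish(paper)
assert fetch(root, manifest, store) == paper
```

The point of the root hash is that a citation can name it instead of a URL: whichever machine serves the bytes, the reader can check that they are the right ones.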

What sufficient conditions allow editors to be comfortable with following the example of J. Algebraic Combinatorics? (#20) · Issues · Publishing Reform / discussion · GitLab

Discussion point: “confidence that a new Fair OA-compliant publisher is at least as good technically as the current one”.

The Varieties of Lock-in in Scholarly Communications – The Scholarly Kitchen

“My colleague Roger Schonfeld and I spend a great deal of time talking about lock-in: what it is, who is doing it, who is doing it well — and perhaps most curiously, why so many people and organizations seem to be unaware that they are in a marketing net until it is pulled tight.”

Pluto interviewed with Research Stash – Pluto Network – Medium

“According to the National Science Foundation, 4,000 new papers are published within the scientific community every day, and the number of annual publications has increased from 1 million in 2000 to more than 2 million in 2013. Meanwhile, publication fees have skyrocketed over the past few decades… wasting research resources and leading to ineffective communication.

PLUTO, a nonprofit based in Seoul, Korea, wants to address this issue by creating a decentralized scholarly communication platform that makes scholarly communication reasonable and transparent for the scientific community.

Q. Can you tell us about your founding team members and what inspired you to build Pluto Network?

We’re attaching a separate document describing the founding members. We gathered to develop applications using blockchain technology, as we were fascinated by the emerging technology and the possibilities it would enable. As most of us are graduates of POSTECH, a research-focused science and technology university in South Korea, it wasn’t long before our discussions about applying the technology led us to conclude that we should integrate it with scholarly communication….”

The idea of an open-access evidence rack

“Here’s the idea in three steps.

First, identify the basic propositions in the field or sub-field you want to cover. To start small, identify the basic propositions you want to defend in a given article.

Second, create a separate OA web page for each proposition. For now, don’t worry about the file format or other technicalities. What’s important is that the pages should (1) be easy to update, (2) carry a time-stamp showing when they were last updated, and (3) give each proposition a unique URL. Let’s call them “proposition pages”.

Third, start filling in each page with the evidence in support of its proposition. If some evidence has been published in an article or book, then cite the publication. When the work is online (OA or TA), add a link as well. Whenever you can link directly to evidence, rather than merely to publications describing evidence, do that. For example, some propositions can be supported by linkable data in an open dataset. But because citations and data don’t always speak for themselves, consider adding some annotations to explain how cited pieces of evidence support the given proposition.

Each supporting study or piece of evidence should have an entry to itself. A proposition page should look more like a list than an article. It should look like a list of citations, annotated citations, or bullet points. It should look like a footnote, perhaps a very long footnote, for the good reason that one intended use of a proposition page is to be available for citation and review as a compendious, perpetually updated, public footnote. …”
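The three requirements in the second step, easy to update, time-stamped, and one unique URL per proposition, are concrete enough to sketch. Below is a hypothetical static-page generator in Python; the `Proposition` and `Evidence` structures, the `props/` directory layout, and the HTML are illustrative assumptions, not part of the original proposal.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from pathlib import Path

@dataclass
class Evidence:
    citation: str   # the supporting study, book, or dataset
    url: str = ""   # direct link to the evidence itself, when one exists
    note: str = ""  # annotation: how this evidence supports the proposition

@dataclass
class Proposition:
    slug: str       # requirement (3): the page lives at props/<slug>.html
    statement: str
    evidence: list[Evidence] = field(default_factory=list)

def render(prop: Proposition, out_dir: Path = Path("props")) -> Path:
    """Write one proposition page: a time-stamped, citable list of evidence."""
    items = []
    for ev in prop.evidence:
        link = f' (<a href="{ev.url}">evidence</a>)' if ev.url else ""
        note = f": {ev.note}" if ev.note else ""
        items.append(f"  <li>{ev.citation}{link}{note}</li>")
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d")  # requirement (2)
    html = (
        f"<h1>{prop.statement}</h1>\n"
        f"<p>Last updated: {stamp}</p>\n"
        "<ul>\n" + "\n".join(items) + "\n</ul>\n"
    )
    out_dir.mkdir(exist_ok=True)
    page = out_dir / f"{prop.slug}.html"
    page.write_text(html, encoding="utf-8")  # requirement (1): regenerate to update
    return page

# A purely hypothetical example: one proposition, one annotated citation.
render(Proposition(
    slug="example-proposition",
    statement="An example proposition to be defended.",
    evidence=[Evidence(
        citation="Author, A. (2017). A study supporting the proposition.",
        url="https://example.org/dataset",
        note="Links directly to the underlying data rather than the paper.",
    )],
))
```

Because each page is a flat list rather than an article, adding a new study is a one-entry change followed by a regeneration, which is what keeps the “perpetually updated public footnote” cheap to maintain.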