National Library of Medicine Announces Departure of NCBI Director Dr. David Lipman

“The National Library of Medicine today announced the departure of David J. Lipman, MD, who has served as the Director of the National Center for Biotechnology Information (NCBI) since its creation almost 30 years ago….

NCBI creates and maintains a series of databases relevant to biotechnology and biomedicine, and is a world-renowned and trusted resource for bioinformatics tools and services. Major NCBI databases include GenBank for DNA sequences and PubMed, one of the most heavily used sites in the world for the search and retrieval of biomedical information.

“It’s hard to think of anyone at NIH who has had a greater impact on the way research is conducted around the world than David Lipman,” noted NLM Director Patricia Flatley Brennan, RN, PhD. “Under his visionary leadership, NCBI has greatly improved access to biomedical information and genomic data for scientists, health professionals, and the public worldwide—something we now practically take for granted.”…Dr. Lipman has been an advocate for promoting open access to the world’s biomedical literature and launched PubMed in 1997, followed by the full-text repository, PubMed Central (PMC), in 2000. He was instrumental in implementing the NIH Public Access Policy whereby NIH-funded papers are made publicly available in PMC….”

Predecessors of preprint servers

Abstract: Although there was an early experiment in the 1960s with the central distribution of paper preprints in the biomedical sciences, these sciences have not been early adopters of electronic preprint servers. Some barriers to the development of a ‘preprint culture’ in the biomedical sciences are described. Multiple factors that, from the 1960s, fostered the transition from a paper-based preprint culture in high energy physics to an electronic one are also described. A new revolution in scientific publishing, in which journals come to be regarded as an overlay on electronic preprint databases, will probably overtake some areas of research much more quickly than others.

Preprints are beginning to arrive in biology, will medicine be next? – Open Pharma

“In September 2016, 1564 life science preprints were posted to eight of the largest preprint servers available to life scientists. This is a five-fold increase from September 2011, when only 300 preprints were posted and only three of the platforms examined (if they existed at all at that point) hosted any life science preprints. The increase in submissions has prompted journal policy changes and attitude changes amongst funders and research institutions.

But what does this mean for medicine? Biomedical sciences still constitute only approximately 22% of preprints submitted to bioRxiv, with genetics research accounting for almost half of this figure. Clinical trials in particular are rarely posted, and account for less than 1%. The restrictions placed upon medical researchers by journals have been a leading cause of this. Some prominent medical publishers still abide by conservative policies; for instance, the American Heart Association has stated in correspondence as recently as September 2016 that it will not review preprinted submissions. A similar policy was reported in communications from the American Association for Cancer Research in November 2015.

Concerns also exist surrounding the sharing of medical research before peer review. This is understandable, as poorly conducted research, particularly in medicine, can certainly be damaging. For this reason, pharmaceutical companies, major funders of medical research, have been cautious about using preprint platforms. This is particularly true for clinical trial results, and stems from fears that sharing research publicly ahead of peer review could violate regulations regarding off-label or direct-to-consumer promotion.

The meeting concluded with a review of the benefits of preprints seen so far and encouragement of wider uptake in all fields of the life sciences. It is clear from the figures presented that many significant shortcomings of journal publishing can be ameliorated by hosting preprints upon submission. Engagement with research is facilitated, with 10% of the preprints posted on bioRxiv receiving comments from other users on the site; 90% of submissions to the site are made publicly available, and all that pass the basic editorial check are shared less than 24 hours from submission. Whether or not preprint platforms become widely adopted in biomedical research remains to be seen, but they have great potential if author behaviour and funder attitudes continue on their present trajectory.”

Open source lessons for synthetic biology – O’Reilly Radar

“So, that’s software. How does open source work in biology? Examples lie on a spectrum ranging from “garage” to “academic lab.”

Biohackers, for one, in many ways resemble the original “two nerds in a garage” origins of the computer movement. Biohackers use open source protocols and designs for equipment, such as PCR machines, to set up personal laboratories that would normally be beyond the reach of casual tinkerers. This is assisted by recent attempts to standardize genetic elements, as seen, for example, in the BioBrick movement (which curates various DNA sequences designed to clone together easily into a biological circuit) or the OpenPlant collaborative initiative (which promotes an open source approach to plant synthetic biology). Supported by a surprising number of open, collaborative labs around the world, these groups aim to bring about the same sort of changes as were seen at the start of the PC era.
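As a rough illustration of what “designed to clone together easily” means in practice, here is a minimal Python sketch of idealized BioBrick-style (RFC 10) assembly, in which every part carries the same standard prefix and suffix so any two parts can be joined by the same cut-and-ligate step. The part sequences and function names are hypothetical, and the restriction digest and ligation are reduced to string operations.

```python
# Minimal sketch of idealized BioBrick-style (RFC 10) assembly.
# Sequences below are hypothetical placeholders, not curated parts.

PREFIX = "GAATTCGCGGCCGCTTCTAGAG"  # EcoRI, NotI, XbaI sites
SUFFIX = "TACTAGTAGCGGCCGCTGCAG"   # SpeI, NotI, PstI sites
SCAR = "TACTAGAG"                  # 8 bp scar left by an XbaI/SpeI ligation

def biobrick(insert: str) -> str:
    """Flank a raw DNA sequence with the standard prefix and suffix."""
    return PREFIX + insert + SUFFIX

def assemble(upstream: str, downstream: str) -> str:
    """Join two BioBrick parts into one; the product is itself a BioBrick.

    Real assembly cuts the upstream part with SpeI and the downstream
    part with XbaI, then ligates the compatible overhangs, leaving the
    standard scar; here that is idealized as string surgery.
    """
    up_insert = upstream[len(PREFIX):-len(SUFFIX)]
    down_insert = downstream[len(PREFIX):-len(SUFFIX)]
    return PREFIX + up_insert + SCAR + down_insert + SUFFIX

# Hypothetical promoter and coding sequence, wrapped as parts.
promoter = biobrick("TTGACATATAAT")
cds = biobrick("ATGGCTTAA")

device = assemble(promoter, cds)
assert device.startswith(PREFIX) and device.endswith(SUFFIX)
```

Because the assembled device again starts and ends with the standard prefix and suffix, it can be fed back into assemble with further parts; that closure property is what makes a shared registry of interchangeable parts workable.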

At the other end, we have institutions such as CambiaLabs and the BiOS Initiative, which aim to support open source IP initiatives for biological systems via collaborative licensing agreements. A good example of their work is the TransBacter project, an attempt to perform an end-run around the multitude of patents on Agrobacterium-mediated plant engineering techniques by identifying other gene-transfer vectors — which were then released to the community.

Both of these are attempts to democratize biological research and development, and they tie into a general increase in popular interest in biotechnology — as can be seen in the success of the crowdfunded “Glowing Plants” synthetic biology project….”

Synthetic biology: Cultural divide – Nature News & Comment

“[Andrew] Hessel represents an increasingly impatient and outspoken faction of synthetic biology that believes that the patent-heavy intellectual-property model of biotechnology is hopelessly broken. His plan relies instead on freely available software and biological parts that could be combined in innovative ways to create individualized cancer treatments — without the need for massive upfront investments or a thicket of protective patents. He calls himself a “catalyst for open-source synthetic biology”.

This openness is one vision of synthetic biology’s future. Another is more akin to what happens at big pharmaceutical companies such as Pfizer, Merck and Roche, where revenues from blockbuster drugs fund massive research initiatives behind locked doors. For such businesses, the pursuit of new drugs and other medical advances depends heavily on protecting discoveries through patents and restrictive licensing agreements….”

Reproducible and reusable research: Are journal data sharing policies meeting the mark? [PeerJ Preprints]

Abstract: Background. There is wide agreement in the biomedical research community that research data sharing is a primary ingredient for ensuring that science is more transparent and reproducible. Publishers could play an important role in facilitating and enforcing data sharing; however, many journals have not yet implemented data sharing policies, and the requirements vary widely across journals. This study set out to analyze the pervasiveness and quality of data sharing policies in the biomedical literature.

Methods. The online author’s instructions and editorial policies for 318 biomedical journals were manually reviewed to analyze each journal’s data sharing requirements and characteristics. The data sharing policies were ranked using a rubric to determine whether data sharing was required, recommended, required only for omics data, or not addressed at all. The data sharing method and licensing recommendations were examined, as well as any mention of reproducibility or similar concepts. The data were analyzed for patterns relating to publishing volume, Journal Impact Factor, and the publishing model (open access or subscription) of each journal.

Results. 11.9% of journals analyzed explicitly stated that data sharing was required as a condition of publication. 9.1% of journals required data sharing but did not state that it would affect publication decisions. 23.3% of journals had a statement encouraging authors to share their data but did not require it. There was no mention of data sharing in 31.8% of journals. Impact Factors were significantly higher for journals with the strongest data sharing policies than for journals in all other policy categories. Open access journals were not more likely to require data sharing than subscription journals.

Discussion. Our study confirmed earlier investigations which observed that only a minority of biomedical journals require data sharing, and it found a significant association between higher Impact Factors and a data sharing requirement. Moreover, while 65.7% of the journals in our study that required data sharing addressed the concept of reproducibility, as with earlier investigations we found that most data sharing policies did not provide specific guidance on practices that ensure data is maximally available and reusable.

Let’s speed up science by embracing open access publishing

“Despite this success story, most scientific research today is not published openly — meaning freely, to everyone, without delay from the time of publication. Instead, it lives behind time embargoes and paywalls, costing as much as $35 per article to access. Even when scientific information is free to read, it is subject to copyright restrictions that prevent it from being recast quickly in new ways.”

NOT-OD-17-015: NIH Request for Information (RFI) on Strategies for NIH Data Management, Sharing, and Citation

“This Request for Information (RFI) seeks public comments on data management and sharing strategies and priorities in order to consider: (1) how digital scientific data generated from NIH-funded research should be managed, and to the fullest extent possible, made publicly available; and, (2) how to set standards for citing shared data and software….”