Emory Libraries Blog | Put a badge on it: incentives for data sharing and reproducibility

“How do you encourage researchers to share the data underlying their publications? The journal Psychological Science introduced a digital badge system in 2014 to signify when authors make the data and related materials accompanying their articles openly available. Criteria to earn the Open Data badge include (1) sharing data via a publicly accessible repository with a persistent identifier, such as a DOI, (2) assigning an open license, such as CC-BY or CC0, allowing reuse and credit to the data producer, and (3) providing enough documentation that another researcher could reproduce the reported results (Badges to Acknowledge Open Practices project on the Open Science Framework)….”
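
As an aside, criteria (1) and (2) map onto steps a researcher can script. Below is a minimal sketch of one way to do it, using Zenodo’s deposit REST API to upload a dataset, attach an open license, and mint a DOI. The access token, file name, creator, and license identifier are placeholders and should be checked against Zenodo’s current documentation.

```python
# Minimal sketch: deposit a dataset on Zenodo with a CC0 license and get a DOI.
# ZENODO_TOKEN and data.csv are placeholders; verify field names against the
# current Zenodo deposit API documentation before relying on this.
import requests

API = "https://zenodo.org/api/deposit/depositions"
params = {"access_token": "ZENODO_TOKEN"}

# 1. Create an empty deposition.
deposition = requests.post(API, params=params, json={}).json()

# 2. Upload the data file to the deposition's file bucket.
bucket = deposition["links"]["bucket"]
with open("data.csv", "rb") as fh:
    requests.put(f"{bucket}/data.csv", data=fh, params=params)

# 3. Describe the dataset and choose an open license (criterion 2).
metadata = {
    "metadata": {
        "title": "Example dataset underlying our article",
        "upload_type": "dataset",
        "description": "Data and codebook needed to reproduce the reported results.",
        "creators": [{"name": "Doe, Jane"}],
        "license": "cc-zero",
    }
}
requests.put(f"{API}/{deposition['id']}", params=params, json=metadata)

# 4. Publish; Zenodo mints the persistent identifier (criterion 1).
published = requests.post(f"{API}/{deposition['id']}/actions/publish", params=params).json()
print("DOI:", published.get("doi"))
```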

mixtrak — Open scientific methods: my experience with protocols.io

“In the continuing quest to make my PhD research comply with the ideals of open science, I’m uploading my protocols to protocols.io. This will create a detailed, publicly available, citable methods record (with a DOI!) for my research which aids with transparency, peer review, replication and re-use.”

Alternative Funding Mechanisms for APC-free Open Access Journals: results of the first call | OpenAIRE blog

“In 2016, within the FP7 Post-Grant Open Access Pilot, a sub-project focused on Alternative Funding Mechanisms for APC-free Open Access Journals was launched. Approximately one year later, we would like to share the main results of this line of work with the public – as we believe these findings can be of interest for other initiatives and publishing platforms.”

Code Ocean | Discover & Run Scientific Code

“Our mission is to make the world’s scientific code more reusable, executable and reproducible.

Code Ocean is a cloud-based computational reproducibility platform that provides researchers and developers an easy way to share, discover and run code published in academic journals and conferences.

More and more of today’s research includes software code, statistical analysis and algorithms that are not included in traditional publishing. But they are often essential to reproducing the research results and reusing them in new products or research. This creates a major roadblock for researchers, one that inspired the first steps of Code Ocean as part of the 2014 Runway Startup Postdoc Program at the Jacobs Technion-Cornell Institute. Today, the company employs more than 10 people and officially launched the product in February 2017.

For the first time, researchers, engineers, developers and scientists can upload code and data in 10 programming languages and link working code in a computational environment with the associated article for free. We assign a Digital Object Identifier (DOI) to the algorithm, providing correct attribution and a connection to the published research.

The platform provides open access to the published software code and data, which anyone can view and download for free. But the real treat is that users can execute all published code without installing anything on their personal computer. Everything runs in the cloud on CPUs or GPUs according to the user’s needs. We make it easy to change parameters, modify the code, upload data, run it again, and see how the results change….”
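
To make the “change parameters, run it again” idea concrete, here is a hypothetical example of the kind of small, parameterized script one might publish on such a platform. The ../data and ../results paths follow the capsule convention of separate folders for code, input data, and outputs as I understand it; the file names and the smoothing parameter are illustrative assumptions, not platform requirements.

```python
# Hypothetical analysis script for a Code Ocean-style capsule: read input data,
# apply a tunable smoothing window, and write results where the platform
# collects outputs. Paths and the default window size are illustrative only.
import argparse
import csv
from pathlib import Path


def moving_average(values, window):
    """Simple moving average; the parameter a reader might tweak and re-run."""
    smoothed = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1): i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed


def main():
    parser = argparse.ArgumentParser(description="Toy reproducible analysis")
    parser.add_argument("--window", type=int, default=5, help="smoothing window size")
    args = parser.parse_args()

    # ../data and ../results mirror the convention of keeping code, input data,
    # and outputs in separate folders (an assumption in this sketch).
    data_path = Path("../data/measurements.csv")
    out_path = Path("../results/smoothed.csv")
    out_path.parent.mkdir(parents=True, exist_ok=True)

    with data_path.open() as fh:
        values = [float(row["value"]) for row in csv.DictReader(fh)]

    smoothed = moving_average(values, args.window)

    with out_path.open("w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["index", "smoothed_value"])
        writer.writerows(enumerate(smoothed))


if __name__ == "__main__":
    main()
```

Re-running the same script with a different value, for example --window 10, and comparing the files written to ../results is the kind of parameter tweak the excerpt describes.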

“How automated workflows helped us ingest 600 faculty publications in t” by Shilpa Rele and Jessea Young

Abstract: “Conducting copyright clearance and ingesting appropriate versions of faculty publications can be a labor-intensive and time-consuming process. At Loyola Marymount University (LMU), a medium-sized, private institution, the Digital Library Program (DLP) had been conducting copyright clearance one publication at a time. This meant that it took an enormous amount of time from start to finish to review and process the list of publications on a given faculty member’s CV. In October 2016, the Digital Program Librarian learned about the automated workflow developed by librarians at the University of North Texas and decided to give it a try. At this time, the DLP hired a Library Assistant who then began exploring and experimenting with this automated workflow. The goal of such experimentation was to increase efficiency in our processes so that we could ingest more faculty publications into LMU’s institutional repository.

In this session, we will share information about our workflows and tools used to manage our various processes. […]”
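
The abstract does not spell out the University of North Texas workflow itself, but the batching idea can be illustrated with a hedged sketch: given a list of DOIs drawn from a faculty CV, query the Crossref REST API for each publication’s journal, publisher, and any license links, and collect the answers in a single spreadsheet for a copyright review pass. The file names and the choice of Crossref are illustrative assumptions, not a description of LMU’s or UNT’s actual tooling.

```python
# Illustrative sketch (not the UNT/LMU workflow): batch-fetch publication
# metadata from Crossref to support a copyright-clearance review.
# dois.txt and clearance_review.csv are placeholder file names.
import csv
import requests

FIELDS = ["doi", "title", "journal", "publisher", "licenses"]


def fetch_record(doi):
    """Return title, journal, publisher, and license URLs for one DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
    resp.raise_for_status()
    msg = resp.json()["message"]
    return {
        "doi": doi,
        "title": (msg.get("title") or [""])[0],
        "journal": (msg.get("container-title") or [""])[0],
        "publisher": msg.get("publisher", ""),
        "licenses": "; ".join(lic.get("URL", "") for lic in msg.get("license", [])),
    }


with open("dois.txt") as fh:
    dois = [line.strip() for line in fh if line.strip()]

with open("clearance_review.csv", "w", newline="") as out:
    writer = csv.DictWriter(out, fieldnames=FIELDS)
    writer.writeheader()
    for doi in dois:
        try:
            writer.writerow(fetch_record(doi))
        except requests.RequestException:
            # Flag failed lookups so a human can review them manually.
            writer.writerow({"doi": doi, "title": "LOOKUP FAILED",
                             "journal": "", "publisher": "", "licenses": ""})
```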

Mackenzie DataStream: How an open access platform for sharing water data was built and how it is evolving to meet community needs

“This presentation focuses on an effort to manage and make the western scientific results of these programs widely available. It tells the story of how Mackenzie DataStream, an open access platform, was developed and how it is evolving to meet community needs.”