A reputation economy: how individual reward considerations trump systemic arguments for open access to data : Palgrave Communications

Abstract: Open access to research data has been described as a driver of innovation and a potential cure for the reproducibility crisis in many academic fields. Against this backdrop, policy makers are increasingly advocating for making research data and supporting material openly available online. Despite its potential to further scientific progress, widespread data sharing in small science is still an ideal practised in moderation. In this article, we explore the question of what drives open access to research data using a survey among 1564 mainly German researchers across all disciplines. We show that, regardless of their disciplinary background, researchers recognize the benefits of open access to research data for both their own research and scientific progress as a whole. Nonetheless, most researchers share their data only selectively. We show that individual reward considerations conflict with widespread data sharing. Based on our results, we present policy implications that are in line with both individual reward considerations and scientific progress.

Data Access & Research Transparency (DA-RT)

“Working together, researchers, journal editors, publishers, and professional associations have made important progress on matters of data sharing and research transparency. Our hope is that these continuing conversations will increase the legitimacy, credibility, and openness of intellectually diverse research communities….DA-RT has no formal organization chart. It began in 2010, as an Ad Hoc Committee of the American Political Science Association. As of today, it has no formalized membership. DA-RT is more accurately conceived as an idea rather than an institution. If you want to be part of it, you are….”

Open Data: The Researcher Perspective

“A year ago, in April 2016, Leiden University’s Centre for Science and Technology Studies (CWTS) and Elsevier embarked on a project to investigate open data practices at the workbench in academic research. Knowledge knows no borders, so to understand open data practices comprehensively the project has been framed from the outset as a global study. That said, both the European Union and the Dutch government have formulated the transformation of the scientific system into an open innovation system as a formal policy goal. At the time we started the project, the Amsterdam Call for Action on Open Science had just been published under the Dutch presidency of the Council of the European Union. However, how are policy initiatives for open science related to the day-to-day practices of researchers and scholars? With this report, we aim to contribute to bridging the gap between policy on the one hand, and daily research practices from a global perspective on the other hand. As we show, open data practices are less developed than anticipated, with the exception of fields where data practices are integrated in the research design from the very beginning. While policy has high expectations about open science and open data, the motive force comes not from the policy aims, but from changing practice at the grassroots level. This requires that we confront the harsh reality that the rewards for researchers and scholars to make data available are few, and the complexity in doing so is high….”

SCHOLIX

“The Scholix initiative is a high-level interoperability framework for exchanging information about the links between scholarly literature and data. It aims to build an open information ecosystem to understand systematically what data underpins literature and what literature references data. The DLI Service is the first exemplar aggregation and query service fed by the Scholix open information ecosystem. The Scholix framework, together with the DLI aggregation, is designed to enable other third-party services (domain-specific aggregations, integrations with other global services, discovery tools, impact assessments, etc.).

Scholix is an evolving lightweight set of Guidelines to increase interoperability rather than a normative standard….”
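
To make this concrete, here is a minimal sketch (Python, for illustration only) of what a single literature-data link might look like when expressed along Scholix lines: a source object, a target object, the relationship between them, and the provider of the link. The field names and DOIs below are illustrative approximations, not the normative schema; scholix.org and the DLI Service remain the authoritative references.

    # Illustrative only: field names approximate the Scholix link-information
    # model (source, target, relationship, provider); the DOIs are example
    # identifiers, not real records.
    scholix_link = {
        "LinkPublicationDate": "2017-04-01",
        "LinkProvider": [{"Name": "DLI Service"}],
        "RelationshipType": {"Name": "References"},
        "Source": {
            "Identifier": {"ID": "10.1000/example-article", "IDScheme": "doi"},
            "Type": {"Name": "literature"},
        },
        "Target": {
            "Identifier": {"ID": "10.5072/example-dataset", "IDScheme": "doi"},
            "Type": {"Name": "dataset"},
        },
    }

    def describe(link):
        """Print a one-line summary of a literature-data link."""
        src = link["Source"]["Identifier"]["ID"]
        tgt = link["Target"]["Identifier"]["ID"]
        rel = link["RelationshipType"]["Name"]
        print(f"{src} --{rel}--> {tgt}")

    describe(scholix_link)  # 10.1000/example-article --References--> 10.5072/example-dataset

The point of such a lightweight exchange format is that any aggregator or discovery tool can consume links from many providers without everyone first agreeing on a single database or a heavyweight normative standard.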

Reimagining the Digital Monograph Design Thinking to Build New Tools for Researchers

“Digital scholarly book files should be open and flexible. This is as much a design question as it is a business question for publishers and libraries. The working group returned several times to the importance of scholarly book files being available in nonproprietary formats that allow for a variety of uses and re-uses…. Another pointed out that the backlist corpus of scholarly books in the humanities and social sciences is an invaluable resource for text-mining, but the ability to carry out that research at scale means that the underlying text of the books has to be easy to extract. “It’s so important to be able to ‘scrape’ the text,” one participant said, using a common term for gathering machine-readable characters from a human-readable artifact (for example, a scanned page image)….Whether a wider group of publishers and technology vendors will feel that they can enable these more expansive uses of a book file without upending the sustainability of the scholarly publishing system is a larger question than this project sought to answer….Our working group also pointed to other challenges for the future of the monograph that have little to do with its visual representation in a user interface: for example, what might be a viable long-term business model for monographs, and whether a greater share of the publishing of monographs in a free-to-read, open-access model can be made sustainable….As interest continues to grow in extending the open-access publishing model from journals to scholarly books, publishers and librarians are working to understand better the upfront costs that must be covered in order to operate a self-sustaining open-access monograph publishing program—costs that have been complicated to pin down because the production of any given scholarly book depends on partial allocations of staff time from many different staff members at a press, and different presses have different cost bases, as well….”
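
As a small illustration of why nonproprietary formats matter for the text-mining use case described above, the sketch below (Python, standard library only; the file path is a placeholder) pulls the plain text out of an EPUB, which is simply a zip archive of XHTML files. A locked or proprietary book file would make even this basic "scraping" step difficult.

    # Minimal sketch: extract visible text from an EPUB for text mining.
    # An EPUB is a zip of XHTML files, so no proprietary tooling is needed.
    import zipfile
    from html.parser import HTMLParser

    class _TextCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.chunks = []

        def handle_data(self, data):
            self.chunks.append(data)

    def epub_to_text(path: str) -> str:
        """Concatenate the text content of every XHTML file inside an EPUB."""
        collector = _TextCollector()
        with zipfile.ZipFile(path) as book:
            for name in book.namelist():
                if name.endswith((".xhtml", ".html", ".htm")):
                    collector.feed(book.read(name).decode("utf-8", errors="ignore"))
        return " ".join(collector.chunks)

    # print(epub_to_text("monograph.epub")[:500])  # placeholder file name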

10 ways to support preprints (besides posting one) | ASAPbio

“Preprinting in biology is gaining steam, but the process is still far from normal: the upload rate to all preprint servers is about 1% that of PubMed. The most obvious way for individual scientists to help turn the tide is, of course, to preprint their own work. But given that it now takes longer to accumulate data for a paper, this opportunity might not come up as often as we’d like.

So, what else can we do to promote the productive use of preprints in biology?”

Open Access Must-Reads for Spring 2017 – Copyright Clearance Center

“Together with the Association of Learned and Professional Society Publishers (ALPSP), Copyright Clearance Center is excited to introduce the inaugural post of our “Open Access Must-Reads” series – a thoughtfully curated selection of important articles from the past few months that expound upon “can’t miss” developments in the world of Open Access.”

Challenges and opportunities: Open Educational Resources (OERs) at McGill University

“‘Challenges and Opportunities: Open Educational Resources (OERs) at McGill University,’ recommends:

  1. The SSMU and McGill University should engage in further data collection and information gathering on OERs and affordable course content at McGill.
    a. This should be done in order to better understand where OERs may have the most impact for students and educators (e.g. what faculty or specific courses could be initial OER candidates).
  2. The SSMU and other student associations on campus should engage in greater student advocacy efforts towards OERs. This would include educating the McGill community on the concerns of course material accessibility, what OERs are, and how they can be utilized on campus.
  3. Increase the amount of institutional support for OERs on campus through:
    1. Partnerships with the Library and Teaching & Learning Services
    2. Adoption of OER policies by the University and/or individual departments/faculties
    3. Increasing on-campus incentives to adopt/create OERs, including but not limited to financial incentives, recognition awards, and/or time off for faculty interested in employing/developing OERs”

Experiences in integrated data and research object publishing using GigaDB | SpringerLink

“In the era of computation and data-driven research, traditional methods of disseminating research are no longer fit for purpose. New approaches for disseminating data, methods and results are required to maximize knowledge discovery. The ‘long tail’ of small, unstructured datasets is well catered for by a number of general-purpose repositories, but there has been less support for ‘big data’. Outlined here are our experiences in attempting to tackle the gaps in publishing large-scale, computationally intensive research. GigaScience is an open-access, open-data journal aiming to revolutionize large-scale biological data dissemination, organization and re-use. Through use of the data handling infrastructure of the genomics centre BGI, GigaScience links standard manuscript publication with an integrated database (GigaDB) that hosts all associated data, and provides additional data analysis tools and computing resources. Furthermore, the supporting workflows and methods are also integrated to make published articles more transparent and open. GigaDB has released many new and previously unpublished datasets and data types, including urgently needed data to tackle infectious disease outbreaks, cancer and the growing food crisis. Other ‘executable’ research objects, such as workflows, virtual machines and software from several GigaScience articles have been archived and shared in reproducible, transparent and usable formats. With data citation producing evidence of, and credit for, its use in the wider research community, GigaScience demonstrates a move towards more executable publications. Here, data analyses can be reproduced and built upon by users without coding backgrounds or heavy computational infrastructure in a more democratized manner.”
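
One practical building block behind “data citation producing evidence of, and credit for” re-use is that datasets in repositories such as GigaDB carry DOIs that machines can resolve to citation metadata. The sketch below is a minimal example (assuming the third-party requests library; the DOI shown is a placeholder in GigaDB’s 10.5524 prefix, to be replaced with the dataset of interest) that uses standard DOI content negotiation to retrieve CSL JSON metadata, which a tool could then format into a formal data citation.

    # Minimal sketch: resolve a dataset DOI to citation metadata via DOI
    # content negotiation. The DOI below is a placeholder; substitute a real
    # dataset DOI before running.
    import requests

    def fetch_citation_metadata(doi: str) -> dict:
        """Return CSL-JSON metadata for a DOI, usable for building a data citation."""
        resp = requests.get(
            f"https://doi.org/{doi}",
            headers={"Accept": "application/vnd.citationstyles.csl+json"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()

    if __name__ == "__main__":
        meta = fetch_citation_metadata("10.5524/100001")  # placeholder GigaDB-style DOI
        print(meta.get("title"), meta.get("publisher"), meta.get("issued"))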