FAIR metrics and certification, rewards and recognition, skills and training: FAIRsFAIR contribution to the EOSC Strategic Research and Innovation Agenda | FAIRsFAIR

“FAIRsFAIR is a key contributor to the ongoing development of global standards for FAIR data and repository certification and to the policies and practices that will turn the EOSC programme into a functioning infrastructure. The project strongly endorses all of the guiding principles already identified as relevant to implementing the EOSC vision, with a special emphasis on the importance of FAIR-by-design tools. The guiding principles are a multi-stakeholder approach; data as open as possible and as closed as necessary; implementation of a Web of FAIR data and related services for science; federation of existing research infrastructures; and the need for machine-run algorithms transparent to researchers….”

OCLC-LIBER Open Science Discussion on the FAIR Principles – Hanging Together

“What is the ideal future vision of an open science ecosystem supporting FAIR data? What are the challenges in getting there? These were the topics of the second installment of the OCLC/LIBER discussion series on open science, which brought together an international group of participants with a shared interest in the FAIR principles. The discussion series, which runs from September 24 through November 5, illuminates key topics related to the LIBER Open Science Roadmap. Both the discussion series and the Roadmap have the mutual goal of informing research libraries as they envision their roles in an open science landscape.

The first discussion in the series addressed the topic of scholarly publishing; a summary of the discussion highlights can be found here. In the second discussion, the focus was FAIR research data. FAIR is a set of broadly articulated principles describing the foundations of “good data management”, aimed at those who produce, publish, and/or steward research data sets, and serving as a set of guideposts for leveraging the full value of research data in support of scholarly inquiry. FAIR research data – that is, data that is findable, accessible, interoperable, and reusable – is seen as an important component of a broader open science ecosystem….”

Guest Post – On Clarifying the Goals of a Peer Review Taxonomy – The Scholarly Kitchen

“During a period of reevaluation and change, the description of peer review processes should include both existing and emerging models. Designing or reforming a peer review process involves managing multiple goals such as timeliness, effort, equity and inclusion to achieve increased reliability in assessments of novelty, importance of results, rigor and other factors for targeted components of the scientific process. There is no ideal form of peer review in the abstract — whether particular tradeoffs are justified depends on the broader context in which that review is carried out. The proposed taxonomy focuses on the most established forms, which may reinforce barriers to innovation and create blind spots for policy-makers and researchers.

In order to meet today’s pressing need for timely policy decisions that are informed by scientific communication, what a particular system of peer review promises, how it is designed, and how it operates must be understandable and trustworthy. This requires clearly describing the scope of the vetting process; the range of content to which the process is applied; what reviewers and authors are permitted to know about each other and how they are allowed to communicate; how acceptance decisions are made; how reviewers are selected; and basic statistics about submissions, acceptance, desk rejections, and review dispositions. And, it requires making this information transparently available in a FAIR way.”
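The disclosure elements listed in the excerpt above lend themselves to a machine-readable descriptor. A minimal sketch of what such a descriptor might look like, assuming a simple JSON serialization; all field names and values below are hypothetical illustrations, not part of any actual taxonomy:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class PeerReviewPolicy:
    """Hypothetical machine-readable summary of one venue's review process."""
    scope: str                # what the vetting process covers
    content_types: list       # range of content the process is applied to
    anonymity: str            # what reviewers/authors may know about each other
    reviewer_selection: str   # how reviewers are selected
    stats: dict = field(default_factory=dict)  # submissions, acceptances, desk rejections

# Example values are invented for illustration only.
policy = PeerReviewPolicy(
    scope="methodological rigor and novelty",
    content_types=["research articles", "registered reports"],
    anonymity="double-anonymous",
    reviewer_selection="editor-invited, minimum two reviewers",
    stats={"submissions": 1200, "accepted": 240, "desk_rejected": 400},
)

# Serializing to JSON is one way to make the policy findable and
# reusable by machines, in the FAIR spirit the post describes.
print(json.dumps(asdict(policy), indent=2))
```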

Decentralized Assessment of FAIR datasets

“The pilot is based on DEIP’s own deep-tech innovation, the Decentralized Assessment System (DAS). DAS is a peer review system that uses an incentive model with reputation rewards and produces a quantifiable metric about the quality and reliability of any data set(s) being assessed. DAS is designed specifically for assessment of assets in expertise-intensive areas, such as scientific research. DAS introduces a comprehensive and robust assessment model:

it sources the consensus about the quality of data sets among the domain experts through continuous two-level peer-review;
it ensures fair rewards for contributions and curation efforts;
it formalizes the result of assessment into explicit metrics/indicators useful for non-experts….”
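The model above can be illustrated with a toy calculation. This is not DEIP’s actual DAS algorithm, only a minimal sketch of how a reputation-weighted consensus metric and a reward for accurate reviewers might be computed; all function names, scores, and weights are hypothetical:

```python
def consensus_score(reviews):
    """Weighted mean of expert scores; reviews is a list of
    (score_0_to_10, reviewer_reputation) tuples."""
    total_weight = sum(rep for _, rep in reviews)
    if total_weight == 0:
        return None
    return sum(score * rep for score, rep in reviews) / total_weight

def reputation_reward(reputation, score, consensus, max_delta=1.0):
    """Illustrative incentive rule: reviewers gain reputation when their
    score lands near the consensus, and lose it when far from it."""
    deviation = abs(score - consensus) / 10.0        # normalized to [0, 1]
    return reputation + max_delta * (1.0 - 2.0 * deviation)

# Three hypothetical reviewers with reputations 5, 3, and 1.
reviews = [(8.0, 5.0), (7.0, 3.0), (3.0, 1.0)]
quality = consensus_score(reviews)   # (40 + 21 + 3) / 9 ≈ 7.11
```

The weighting means a low-reputation outlier (the 3.0 score above) pulls the consensus down only slightly, which is one plausible reading of “sourcing consensus among domain experts.”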

Charting a path to a more open future. . . together – Hanging Together

“Last week, representatives from OCLC Research and LIBER (the Association of European Research Libraries) presented a webinar to kick off the OCLC-LIBER Open Science Discussion Series. This discussion series, which takes place from 24 September through 5 November 2020, is based upon the LIBER Open Science Roadmap, and will help guide research libraries in envisioning the support infrastructure for Open Science (OS) and their role at local, national, and global levels.

OCLC and LIBER had initially planned a collaborative in-person workshop to take place at the OCLC Library Futures Conference (EMEARC 2020) on March 3 in Vienna. But with COVID rapidly advancing globally at that time, the event was cancelled, and we took some time to plan a larger series of webinars and discussions. 

There are a couple of key goals for our collaboration. First of all, our organizations want to jointly offer a forum for discussion and exploration, and to collectively stimulate the exchange of ideas. But secondly, we want this activity to also inform us as we seek to identify research questions that OCLC and LIBER can collaboratively address to advance Open Science. 

The LIBER Open Science Roadmap provides an excellent, well. . . roadmap. . . for this effort. The report calls upon libraries to “advocate for Open Science locally and internationally, to support Open Science through tools and services and to expand the impact of their work through collaboration and partnerships.” …”

Open Science: Promises and Performance | Qualtrics Survey Solutions

“Although many scientists and organisations endorse this notion, progress has been slow. Some of my research explores the barriers that have impeded progress and makes recommendations to encourage future success. This survey forms part of that work and addresses a variety of issues, including attitudes towards data storage and access, the role of journals in open science, and associated ethical issues.

Those interested in scientific progress are invited to take part, and participation should take less than 10 minutes. Responses will be anonymous and participants can withdraw at any time.

The findings from the survey will be submitted to an open access journal and made available as an open access preprint. The raw data will be lodged with the Open Science Foundation …”

Towards FAIR protocols and workflows: the OpenPREDICT use case [PeerJ]

Abstract: It is essential for the advancement of science that researchers share, reuse and reproduce each other’s workflows and protocols. The FAIR principles are a set of guidelines that aim to maximize the value and usefulness of research data, and emphasize the importance of making digital objects findable and reusable by others. The question of how to apply these principles not just to data but also to the workflows and protocols that consume and produce them is still under debate and poses a number of challenges. In this paper we describe a two-fold approach of simultaneously applying the FAIR principles to scientific workflows as well as the involved data. We apply and evaluate our approach on the case of the PREDICT workflow, a highly cited drug repurposing workflow. This includes FAIRification of the involved datasets, as well as applying semantic technologies to represent and store data about the detailed versions of the general protocol, of the concrete workflow instructions, and of their execution traces. We propose a semantic model to address these specific requirements, which we evaluated by answering competency questions. This semantic model consists of classes and relations from a number of existing ontologies, including Workflow4ever, PROV, EDAM, and BPMN. This then allowed us to formulate and answer new kinds of competency questions. Our evaluation shows the high degree to which our FAIRified OpenPREDICT workflow now adheres to the FAIR principles and the practicality and usefulness of being able to answer our new competency questions.
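The abstract's idea of recording execution traces with PROV terms and then answering competency questions can be sketched in a few lines. This is not the paper's actual RDF model; it is a stdlib-only illustration using plain tuples as triples, and every identifier (`ex:run1`, the file names) is hypothetical. The PROV property names (`prov:used`, `prov:wasGeneratedBy`) are from the W3C PROV vocabulary:

```python
# A workflow execution trace as a set of PROV-style (subject, predicate,
# object) triples. In a real system these would live in an RDF store.
PROV_USED = "prov:used"
PROV_GENERATED_BY = "prov:wasGeneratedBy"
RDF_TYPE = "rdf:type"

trace = {
    ("ex:run1", RDF_TYPE, "prov:Activity"),
    ("ex:run1", PROV_USED, "ex:drug-features.csv"),
    ("ex:run1", PROV_USED, "ex:disease-features.csv"),
    ("ex:predictions.csv", PROV_GENERATED_BY, "ex:run1"),
}

def datasets_used(triples, activity):
    """Competency question: which datasets did this execution consume?"""
    return sorted(o for s, p, o in triples if s == activity and p == PROV_USED)

print(datasets_used(trace, "ex:run1"))
```

In the paper's setting the same question would presumably be posed as a SPARQL query over the FAIRified workflow metadata; the point here is only that an explicit provenance trace makes such questions mechanically answerable.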



Abstract: Research Infrastructures (RIs) play a key role in enabling and developing research in all scientific domains and represent an increasingly large share of research investment. Most RIs are funded, managed and operated at a national or federal level, and provide services mostly to national research communities. This policy report presents a generic framework for improving the use and operation of national RIs. It includes two guiding models, one for portfolio management and one for user-base optimisation. These guiding models lay out the key principles of an effective national RI portfolio management system and identify the factors that should be considered by RI managers with regards to optimising the user-base of national RIs. Both guiding models take into consideration the diversity of national systems and RI operation approaches.

This report also contains a series of more generic policy recommendations and suggested actions for RI portfolio managers and RI managers.

[From the body of the report:]

As described in Section 8.1.2, data-driven RIs often do not have complex access mechanisms in place, as they mostly provide open access. Such access often means reducing the number of steps needed by a user to gain access to data. This can have knock-on implications for the ability of RIs to accurately monitor user access: for instance, the removal of login portals that were previously used to provide data access statistics….

Requiring users to submit Data Management Plans (DMPs) prior to the provision of access to an RI may encourage users to consider compliance with FAIR (Findable, Accessible, Interoperable, Reusable) data principles whilst planning their project (Wilkinson et al., 2016[12]). The alignment of requirements for Data Management Plans (Science Europe, 2018[13]) used for RI access provision and those used more generally in academic research should be considered to facilitate their adoption by researchers….

The two opposing extremes, described above, of either FAIR / open access or very limited data access provision, highlight the diversity in approaches of national RIs towards data access, and the lack of clear policy guidance….

It is important that RIs have open and transparent data policies in line with the FAIR principles to broaden their user base. Collaborating with other RIs to federate repositories and harmonize meta-data may be an important step in standardising open and transparent data policies across the RI community. …

There are a wide variety of pricing policies, both between and also within individual RIs, and the need for some flexibility is recognised. RIs should ensure that their pricing policies for all access modes are clear and cost-transparent, and that merit-based academic usage is provided openly and ‘free-from-costs’, wherever possible. …

For Mendeley Data winner, sharing FAIR data helps researchers learn from each other

“Vanessa, a Research Fellow in the Department of Translational Research and New Technologies in Medicine and Surgery at the University of Pisa, is a recent winner of the Mendeley Data FAIRest Datasets Award. The award recognizes researchers or research groups that make their research data available for additional research and do so in a way that exemplifies the FAIR Data Principles – Findable, Accessible, Interoperable, Reusable….”

On the Importance of Data Transparency: Patterns

“We all need to shoulder this burden. Researchers have the responsibility to ensure that their conclusions are backed up by their data and that their data are in a state where they are easy for those with domain expertise to understand. Peer reviewers must ask questions about the data in their reviews and dig a little deeper into the databases, if something doesn’t seem right. Journal editors need to ensure that these checks on data are carried out before publication and that the policies on data accessibility statements are adhered to.

The lack of transparency around data and the resulting retraction of peer-reviewed papers show that we cannot afford to ignore everyday issues regarding accessibility of data any longer. The good news is that these issues are well understood and policies are already in place to deal with them. What we need to do now is find ways of making it easy and quick to abide by those policies, and that will require time, investment, and a willingness to engage from the entire research community.

The road to scientific transparency is long, but we’re already on our way.”