Towards FAIR protocols and workflows: the OpenPREDICT use case [PeerJ]

Abstract: It is essential for the advancement of science that researchers share, reuse and reproduce each other’s workflows and protocols. The FAIR principles are a set of guidelines that aim to maximize the value and usefulness of research data, and emphasize the importance of making digital objects findable and reusable by others. The question of how to apply these principles not just to data but also to the workflows and protocols that consume and produce them is still under debate and poses a number of challenges. In this paper we describe a two-fold approach of simultaneously applying the FAIR principles to scientific workflows as well as the involved data. We apply and evaluate our approach on the case of the PREDICT workflow, a highly cited drug repurposing workflow. This includes FAIRification of the involved datasets, as well as applying semantic technologies to represent and store data about the detailed versions of the general protocol, of the concrete workflow instructions, and of their execution traces. We propose a semantic model to address these specific requirements, which we evaluated by answering competency questions. This semantic model consists of classes and relations from a number of existing ontologies, including Workflow4ever, PROV, EDAM, and BPMN. This then allowed us to formulate and answer new kinds of competency questions. Our evaluation shows the high degree to which our FAIRified OpenPREDICT workflow now adheres to the FAIR principles, and the practicality and usefulness of being able to answer our new competency questions.
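To make the idea of a "competency question" over a workflow's provenance trace concrete, here is a minimal sketch: a tiny in-memory triple store with PROV-style relations, queried by pattern matching. The identifiers (`ex:run1`, `ex:openpredict_workflow`, etc.) are hypothetical and the paper's actual model is far richer; this only illustrates the query style.

```python
# Hypothetical PROV-style execution trace for a workflow run.
# (Identifiers are illustrative, not from the paper's actual model.)
triples = [
    ("ex:run1", "rdf:type", "prov:Activity"),
    ("ex:run1", "prov:used", "ex:drug_indications_dataset"),
    ("ex:run1", "prov:wasAssociatedWith", "ex:openpredict_workflow"),
    ("ex:predictions", "prov:wasGeneratedBy", "ex:run1"),
]

def query(s=None, p=None, o=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Competency question: which datasets did run1 use?
used = [o for (_, _, o) in query("ex:run1", "prov:used")]
print(used)
```

In practice such questions would be posed as SPARQL queries against an RDF store, but the pattern-matching shape is the same.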


Google AI Blog: An NLU-Powered Tool to Explore COVID-19 Scientific Literature

“Due to the COVID-19 pandemic, scientists and researchers around the world are publishing an immense amount of new research in order to understand and combat the disease. While the volume of research is very encouraging, it can be difficult for scientists and researchers to keep up with the rapid pace of new publications. Traditional search engines can be excellent resources for finding real-time information on general COVID-19 questions like “How many COVID-19 cases are there in the United States?”, but can struggle with understanding the meaning behind research-driven queries. Furthermore, searching through the existing corpus of COVID-19 scientific literature with traditional keyword-based approaches can make it difficult to pinpoint relevant evidence for complex queries.

To help address this problem, we are launching the COVID-19 Research Explorer, a semantic search interface on top of the COVID-19 Open Research Dataset (CORD-19), which includes more than 50,000 journal articles and preprints. We have designed the tool with the goal of helping scientists and researchers efficiently pore through articles for answers or evidence to COVID-19-related questions….”

LINCS – Linked Infrastructure for Networked Cultural Scholarship

“Human brains work through a vast web of interconnections, but the web that researchers increasingly use to understand human culture and history has few meaningful links. Linked Infrastructure for Networked Cultural Scholarship (LINCS) will create the conditions to think differently, with machines, about human culture in Canada….

The LINCS infrastructure project will convert large datasets into an organized, interconnected, machine-processable set of resources for Canadian cultural research….

LINCS aims to provide context for the cultural material that currently floats around online, interlink it, ground it in its sources, and help to make the World Wide Web a trusted resource for scholarly knowledge production….

With a team of technical and domain experts, LINCS will allow Canadian scholars and partner institutions to play a significant role in developing the Semantic Web.”

What is MEI?

“The Music Encoding Initiative (MEI) is a 21st-century, community-driven, open-source effort to define guidelines for encoding musical documents in a machine-readable structure.

It brings together specialists from various music research communities, including technologists, librarians, historians, and theorists in a common effort to discuss and define best practices for representing a broad range of musical documents and structures. The results of these discussions are then formalized into the MEI schema, a core set of rules for recording physical and intellectual characteristics of music notation documents expressed as an eXtensible Markup Language (XML) schema. This schema is developed and maintained by the MEI Technical Team….”

WikiJournal Preprints/Aggregation of scholarly publications and extracted knowledge on COVID19 and epidemics – Wikiversity

“This project aims to use modern tools, especially Wikidata (and Wikipedia), R, Java, textmining, with semantic tools to create a modern integrated resource of all current published information on viruses and their epidemics. It relies on collaboration and gifts of labour and knowledge.

The world faces (and will continue to face) viral epidemics which arise suddenly and where scientific/medical knowledge is a critical resource. Despite over 100 billion USD spent on medical research worldwide, much knowledge is behind publisher paywalls and only available to rich universities. Moreover, it is usually badly published and dispersed, without coherent knowledge tools. This particularly disadvantages the Global South….”

Linked Research on the Decentralised Web

Abstract:  This thesis is about research communication in the context of the Web. I analyse literature which reveals how researchers are making use of Web technologies for knowledge dissemination, as well as how individuals are disempowered by the centralisation of certain systems, such as academic publishing platforms and social media. I share my findings on the feasibility of a decentralised and interoperable information space where researchers can control their identifiers whilst fulfilling the core functions of scientific communication: registration, awareness, certification, and archiving.

The contemporary research communication paradigm operates under a diverse set of sociotechnical constraints, which influence how units of research information and personal data are created and exchanged. Economic forces and non-interoperable system designs mean that researcher identifiers and research contributions are largely shaped and controlled by third-party entities; participation requires the use of proprietary systems.

From a technical standpoint, this thesis takes a deep look at semantic structure of research artifacts, and how they can be stored, linked and shared in a way that is controlled by individual researchers, or delegated to trusted parties. Further, I find that the ecosystem was lacking a technical Web standard able to fulfill the awareness function of research communication. Thus, I contribute a new communication protocol, Linked Data Notifications (published as a W3C Recommendation) which enables decentralised notifications on the Web, and provide implementations pertinent to the academic publishing use case. So far we have seen decentralised notifications applied in research dissemination or collaboration scenarios, as well as for archival activities and scientific experiments.
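The Linked Data Notifications protocol mentioned above is simple at its core: a sender discovers a resource's inbox (advertised via the `http://www.w3.org/ns/ldp#inbox` relation, as the W3C Recommendation defines) and POSTs a JSON-LD notification to it. The sketch below shows the sender side under those assumptions; the Link header value and all URLs are hypothetical.

```python
import json
import re

def parse_inbox(link_header):
    """Extract an ldp:inbox target from an HTTP Link header value.
    (Simplified parsing for illustration; real Link headers can be
    more complex.)"""
    m = re.search(r'<([^>]+)>;\s*rel="http://www\.w3\.org/ns/ldp#inbox"',
                  link_header)
    return m.group(1) if m else None

# Step 1: inbox discovery from a (hypothetical) Link header.
inbox = parse_inbox('<https://example.org/inbox/>; '
                    'rel="http://www.w3.org/ns/ldp#inbox"')

# Step 2: build a JSON-LD notification, e.g. announcing a review.
notification = json.dumps({
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Announce",
    "actor": "https://example.org/profile#me",
    "object": "https://example.org/articles/review-1",
})

# An actual sender would now POST `notification` to `inbox` with
# Content-Type: application/ld+json. Omitted here to stay network-free.
print(inbox)
```

Because discovery and delivery are decoupled, any conforming receiver can accept notifications from any sender, which is what makes the awareness function decentralisable.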

Another core contribution of this work is a Web standards-based implementation of a client-side tool, dokieli, for decentralised article publishing, annotations and social interactions. dokieli can be used to fulfill the scholarly functions of registration, awareness, certification, and archiving, all in a decentralised manner, returning control of research contributions and discourse to individual researchers.

The overarching conclusion of the thesis is that Web technologies can be used to create a fully functioning ecosystem for research communication. Using the framework of Web architecture, and loosely coupling the four functions, an accessible and inclusive ecosystem can be realised whereby users are able to use and switch between interoperable applications without interfering with existing data.

Technical solutions alone do not suffice of course, so this thesis also takes into account the need for a change in the traditional mode of thinking amongst scholars, and presents the Linked Research initiative as an ongoing effort toward researcher autonomy in a social system, and universal access to human- and machine-readable information. Outcomes of this outreach work so far include an increase in the number of individuals self-hosting their research artifacts, workshops publishing accessible proceedings on the Web, in-the-wild experiments with open and public peer-review, and semantic graphs of contributions to conference proceedings and journals (the Linked Open Research Cloud).

Some of the future challenges include: addressing the social implications of decentralised Web publishing, as well as the design of ethically grounded interoperable mechanisms; cultivating privacy aware information spaces; personal or community-controlled on-demand archiving services; and further design of decentralised applications that are aware of the core functions of scientific communication.

The OpenAIRE Research Graph – OpenAIRE Blog

“The backdrop: Open Science is gradually becoming the modus operandi in research practices, affecting the way researchers collaborate and publish, discover, and access scientific knowledge. Scientists are increasingly publishing research results beyond the article, to share all scientific products (metadata and files) generated during an experiment, such as datasets, software, and experiments. They publish in scholarly communication data sources (e.g. institutional repositories, data archives, software repositories), rely where possible on persistent identifiers (e.g. DOI, ORCID, PDBs), specify semantic links to other research products (e.g. supplementedBy, citedBy, versionOf), and possibly to projects and/or related funders. By following such practices, scientists are implicitly constructing the Global Open Science Graph, where by “graph” we mean a collection of objects interlinked by semantic relationships.”

Born-digital, open source, media-rich scholarly publishing that’s as easy as blogging.

“Scalar is a free, open source authoring and publishing platform that’s designed to make it easy for authors to write long-form, born-digital scholarship online. Scalar enables users to assemble media from multiple sources and juxtapose them with their own writing in a variety of ways, with minimal technical expertise required.

More fundamentally, Scalar is a semantic web authoring tool that brings a considered balance between standardization and structural flexibility to all kinds of material. It includes a built-in reading interface as well as an API that enables Scalar content to be used to drive custom-designed applications. If you’re dealing with small to moderate amounts of structured content and need a lightweight platform that encourages improvisation with your data model, Scalar may be the right solution for you.

Scalar also gives authors tools to structure essay- and book-length works in ways that take advantage of the unique capabilities of digital writing, including nested, recursive, and non-linear formats. The platform also supports collaborative authoring and reader commentary. The ANVC’s partner presses and archives are now beginning to implement Scalar into their research and publishing workflows, and several projects leveraging the platform have been published already.

Scalar is a project of the Alliance for Networking Visual Culture (ANVC) in association with Vectors and IML, and with the support of the Andrew W. Mellon Foundation and the National Endowment for the Humanities….”