Pubfair – A Framework for Sustainable, Distributed, Open Science Publishing Services

“This white paper provides the rationale and describes the high-level architecture for an innovative publishing framework that positions publishing functionalities on top of the content managed by a distributed network of repositories. The framework is inspired by the vision and use cases outlined in the COAR Next Generation Repositories work, first published in November 2017 and further articulated in a funding proposal developed by a number of European partners.

By publishing this on CommentPress, we are seeking community feedback about the Pubfair framework in order to refine the functionalities and architecture, as well as to gauge community interest….

The idea of Pubfair is not to create another new system that competes with many others, but rather to leverage, improve and add value to existing institutional and funder investments in research infrastructures (in particular open repositories and open journal platforms). Pubfair positions repositories (and the content managed by repositories) as the foundation for a distributed, globally networked infrastructure for scholarly communication. It moves our thinking beyond the artificial distinction between green and gold open access by combining the strengths of open repositories with easy-to-use review and publishing tools for a multitude of research outputs….”
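
Pubfair itself is a framework proposal rather than code, but the distributed repository network it layers services on already exposes standard machine interfaces. As a purely illustrative sketch (not part of the Pubfair design), the following Python fragment harvests Dublin Core records from a repository's OAI-PMH endpoint, the kind of repository-managed content a publishing overlay could aggregate. The endpoint URL is a placeholder.

```python
# Illustrative only: pull (identifier, title) pairs from one OAI-PMH
# ListRecords response. Most open repositories expose this interface,
# which is what makes a repository-based publishing layer plausible.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

def harvest(base_url: str, metadata_prefix: str = "oai_dc"):
    """Yield (identifier, title) pairs from a single ListRecords page."""
    query = urllib.parse.urlencode(
        {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    )
    with urllib.request.urlopen(f"{base_url}?{query}") as resp:
        tree = ET.parse(resp)
    for record in tree.iter(f"{OAI}record"):
        identifier = record.findtext(f"{OAI}header/{OAI}identifier")
        title = record.findtext(f".//{DC}title")  # first dc:title, if any
        yield identifier, title

if __name__ == "__main__":
    # Placeholder endpoint; any OAI-PMH-compliant repository would do.
    for oai_id, title in harvest("https://repository.example.org/oai"):
        print(oai_id, "->", title)
```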

Workflow systems turn raw data into scientific knowledge

“Finn is head of the sequence-families team at the European Bioinformatics Institute (EBI) in Hinxton, UK; Meyer is a computer scientist at Argonne National Laboratory in Lemont, Illinois. Both run facilities that let researchers perform a computationally intensive process called metagenomic analysis, which allows microbial communities to be reconstructed from shards of DNA. It would be helpful, they realized, if they could try each other’s code. The problem was that their analytical ‘pipelines’ — the carefully choreographed computational steps required to turn raw data into scientific knowledge — were written in different languages. Meyer’s team was using an in-house system called AWE, whereas Finn was working with nearly 9,500 lines of Python code.

“It was a horrible Python code base,” says Finn — complicated, and difficult to maintain. “Bits had been bolted on in an ad hoc fashion over seven years by at least four different developers.” And it was “heavily tied to the compute infrastructure”, he says, meaning it was written for specific computational resources and a particular way of organizing files, and thus essentially unusable outside the EBI. Because the EBI wasn’t using AWE, the reverse was also true. Then Finn and Meyer learnt about the Common Workflow Language (CWL).

CWL is a way of describing analytical pipelines and computational tools — one of more than 250 systems now available, including such popular options as Snakemake, Nextflow and Galaxy. Although they speak different languages and support different features, these systems have a common aim: to make computational methods reproducible, portable, maintainable and shareable. CWL is essentially an exchange language that researchers can use to share pipelines, whichever system they work in. For Finn, that language brought sanity to his code base, reducing it by around 73%. Importantly, it has made it easier to test, execute and share new methods, and to run them on the cloud….”
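
To make that concrete, the sketch below shows roughly the smallest possible CWL tool description and one way to run it, assuming the reference runner cwltool is installed (pip install cwltool). The tool simply wraps the shell command echo, after the canonical first example in the CWL user guide.

```python
# A minimal CWL CommandLineTool, written out and executed from Python.
# Assumes `cwltool` (the CWL reference runner) is on the PATH.
import pathlib
import subprocess

ECHO_TOOL = """\
cwlVersion: v1.2
class: CommandLineTool
baseCommand: echo
inputs:
  message:
    type: string
    inputBinding:
      position: 1
outputs: []
"""

tool_path = pathlib.Path("echo.cwl")
tool_path.write_text(ECHO_TOOL)

# cwltool maps declared inputs to command-line options, so the tool's
# `message` input becomes `--message`.
subprocess.run(["cwltool", str(tool_path), "--message", "Hello, CWL"], check=True)
```

Because the description declares its inputs explicitly instead of hard-coding paths and resources, the same file runs under any CWL-aware engine, which is precisely what decouples a pipeline from one site's compute infrastructure.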

Crossing the Borders: Re-Use of Smart Learning Objects in Advanced Content Access Systems

Abstract: Researchers in many disciplines are developing novel interactive smart learning objects like exercises and visualizations. Meanwhile, Learning Management Systems (LMS) and eTextbook systems are also becoming more sophisticated in their ability to use standard protocols to make use of third-party smart learning objects. But at this time, educational tool developers do not always make best use of the interoperability standards and need exemplars to guide and motivate their development efforts. In this paper we present a case study in which two large educational ecosystems use the Learning Tools Interoperability (LTI) standard to allow cross-sharing of their educational materials. At the end of our development process, Virginia Tech’s OpenDSA eTextbook system became able to import materials from Aalto University’s ACOS smart learning content server, such as Python programming exercises and Parsons problems. Meanwhile, University of Pittsburgh’s Mastery Grids (which already uses the ACOS exercises) was made to support CodeWorkout programming exercises (a system already used within OpenDSA). Thus, four major projects in CS Education became interoperable.
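
For readers unfamiliar with LTI's plumbing: an LTI 1.1 launch (the abstract does not name the LTI version, so 1.1 is assumed here) is an OAuth 1.0-signed form POST from the consumer, such as an LMS or eTextbook system, to the tool provider hosting the exercise. The following Python sketch shows the signing step; the launch URL, consumer key, and secret are made-up placeholders.

```python
# Illustrative sketch of OAuth 1.0 HMAC-SHA1 signing for an LTI 1.1
# launch. Placeholder URL/key/secret; not any project's actual code.
import base64
import hashlib
import hmac
import time
import uuid
from urllib.parse import quote

def sign_lti_launch(url: str, params: dict, key: str, secret: str) -> dict:
    """Return the launch params with oauth_* fields and signature added."""
    oauth = {
        "oauth_consumer_key": key,
        "oauth_nonce": uuid.uuid4().hex,
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_timestamp": str(int(time.time())),
        "oauth_version": "1.0",
    }
    all_params = {**params, **oauth}
    # Percent-encode (RFC 3986), sort, and join the parameter pairs.
    pairs = sorted(
        (quote(k, safe=""), quote(v, safe="")) for k, v in all_params.items()
    )
    param_str = "&".join(f"{k}={v}" for k, v in pairs)
    base = "&".join(["POST", quote(url, safe=""), quote(param_str, safe="")])
    signing_key = f"{quote(secret, safe='')}&".encode()  # no token secret in LTI
    digest = hmac.new(signing_key, base.encode(), hashlib.sha1).digest()
    all_params["oauth_signature"] = base64.b64encode(digest).decode()
    return all_params

# Hypothetical launch of one exercise on a tool provider:
launch = sign_lti_launch(
    "https://tool.example.org/lti/launch",  # placeholder URL
    {
        "lti_message_type": "basic-lti-launch-request",
        "lti_version": "LTI-1p0",
        "resource_link_id": "exercise-42",
    },
    key="consumer-key",
    secret="consumer-secret",
)
```

The tool provider recomputes the same signature from the POSTed parameters and the shared secret; if they match, it trusts the consumer's claims about the user and the exercise being launched.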

African Principles for Open Access in Scholarly Communication – AfricArXiv

“1) Academic Research and knowledge from and about Africa should be freely available to all who wish to access, use or reuse it while at the same time being protected from misuse and misappropriation.

2) African scientists and scientists working on African topics and/or territory will make their research achievements, including underlying datasets, available in a digital Open Access repository or journal, with an explicit Open Access license applied.

3) African research output should be made available in the principal common language of the global science community as well as in one or more local African languages – at least in summary.

4) It is important to take indigenous and traditional knowledge, in its various forms, into consideration in these discussions.

5) It is necessary to respect the diverse dynamics of knowledge generation and circulation by discipline and geographical area.

6) It is necessary to recognise, respect and acknowledge the regional diversity of African scientific journals, institutional repositories and academic systems.

7) African Open Access policies and initiatives promote Open Scholarship, Open Source and Open Standards for interoperability purposes.

8) Multi-stakeholder mechanisms for collaboration and cooperation should be established to ensure equal participation across the African continent.

9) Economic investment in Open Access is consistent with its benefit to societies on the African continent – therefore institutions and governments in Africa provide the enabling environment, infrastructure and capacity building required to support Open Access.

10) African Open Access stakeholders and actors keep up close dialogues with representatives from all world regions, namely Europe, the Americas, Asia, and Oceania….”

Sage Bionetworks Executive Urges Adoption of Standards to Create ‘Open Science’ | GenomeWeb

“Since All of Us is collecting samples and health data from 1 million people at healthcare facilities all over the country, the only way this information dissemination will work is that NIH and its partners are standardizing the results according to the Observational Medical Outcomes Partnership (OMOP) Common Data Model. All of Us also is normalizing phenotypic information on the Substitutable Medical Apps, Reusable Technology (SMART) on FHIR framework, based on the Fast Healthcare Interoperability Resources (FHIR) standard….

In a keynote address to open the annual Bio-IT World Conference & Expo here yesterday, John Wilbanks, chief commons officer at Sage Bionetworks, was clear about his preference for those standards to promote interoperability.  

“Choose OMOP or SMART on FHIR and don’t choose anything else,” he said. The openness of standards and of data itself is key, according to Wilbanks, a longtime advocate of open data….

Sometimes that is because scientists tend to strip out many of the insights before they report results, but often it is because researchers do not have, or will not make, the time to annotate their data in a way that would make their findings more useful to others.

“Until, in my opinion, we figure out how to get machine learning and [artificial intelligence] to do that annotation for us, it’s going to be really hard to have data get as reusable as open-source software is,” Wilbanks said. “But we will eventually get there.” …”
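
As a concrete illustration of the standardization Wilbanks is urging: once data sits behind a SMART on FHIR interface, any conformant client reads it the same way. The sketch below is a minimal, hypothetical Python example of fetching a Patient resource over FHIR's REST API; the server URL, patient id, and token are placeholders, and a real SMART app would obtain its token through the SMART OAuth2 launch flow.

```python
# Hypothetical read of one FHIR resource from a SMART on FHIR server.
# Base URL, patient id, and bearer token are placeholders.
import requests

FHIR_BASE = "https://fhir.example.org/r4"  # placeholder FHIR R4 server
ACCESS_TOKEN = "..."  # obtained via the SMART on FHIR OAuth2 flow

resp = requests.get(
    f"{FHIR_BASE}/Patient/123",
    headers={
        "Accept": "application/fhir+json",
        "Authorization": f"Bearer {ACCESS_TOKEN}",
    },
    timeout=10,
)
resp.raise_for_status()
patient = resp.json()  # a standard FHIR Patient resource

# Because the resource shape is standardized, these fields mean the
# same thing on every conformant server.
name = patient.get("name", [{}])[0]
print(name.get("family"), name.get("given"))
```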