Green digitization: Botanical collections data answer real-world questions | EurekAlert! Science News

“Special issue of Applications in Plant Sciences explores new developments and applications of digital plant data

Even as botany has moved firmly into the era of “big data,” some of the most valuable botanical information remains inaccessible for computational analysis, locked in physical form in the orderly stacks of herbaria and museums. Herbarium specimens are plant samples collected from the field that are dried and stored with labels describing species, date and location of collection, along with various other information including habitat descriptions. The detailed historical record these specimens keep of species occurrence, morphology, and even DNA provides an unparalleled data source to address a variety of morphological, ecological, phenological, and taxonomic questions. Now efforts are underway to digitize these data and make them easily accessible for analysis. Two symposia were convened to discuss the possibilities and promise of digitizing these data: at the Botanical Society of America’s 2017 annual meeting in Fort Worth, Texas, and again at the XIX International Botanical Congress in Shenzhen, China. The proceedings of those symposia have been published as a special issue of Applications in Plant Sciences; the articles discuss a range of methods and remaining challenges for extracting data from botanical collections, as well as applications for collections data once digitized. Many of the authors contributing to the issue are involved in iDigBio (Integrated Digitized Biocollections), a new “national coordinating center for the facilitation and mobilization of biodiversity specimen data,” as described by Dr. Gil Nelson, a botanist at Florida State University and coeditor of this issue….”

An open data law for climate resilience and disaster risk reduction | PreventionWeb.net

“This document aims to clarify the key elements of open data and to serve as a proposal to institute and strictly implement a policy for climate change and disaster risk reduction-related data and information, based on its articulated and internationally accepted definition, in the Philippines. The document describes the different considerations for the Philippines in its decision to fully adopt, support and promote a policy for open data for DRR. Defining the standards in an open data law will mandate compliance with the key elements of open data, which include: availability of data in digital format, downloadable via the internet in bulk for ease of use; amenability to intermixing with other datasets through an interoperable format structure and machine-readability of digital files; freedom to use, reuse and redistribute, even on a commercial basis; and a ‘no conditions’ rule on the use of open data, except for appropriate citation for due credit.”

https://www.scribd.com/document/374847472/An-Open-Data-Law-for-Climate-Resilience-and-Disaster-Risk-Reduction

Siyavula

“Our mission is to create and enable engaging, integrated, high-quality learning experiences in Mathematics and the Sciences; to have a long-lasting, enriching impact on learners and teachers in South Africa and globally; to constantly seek out and build the most relevant, effective technology whilst remaining rooted in the science of learning and instruction; and to engage and motivate young minds, helping them to master and develop the skills our future….

We believe in openness, a key principle in our philosophy that every learner and teacher should have access to high quality educational resources as a basis for long-term growth and development….”

Extracting research evidence from publications | EMBL-EBI Train online

“Bioinformaticians are routinely handling big data, including DNA, RNA, and protein sequence information. It’s time to treat biomedical literature as a dataset and extract valuable facts hidden in the millions of scientific papers. This webinar demonstrates how to access text-mined literature evidence using the Europe PMC Annotations API. We highlight several use cases, including linking diseases with potential treatment targets, or identifying which protein structures are cited along with a gene mutation.

This webinar took place on 5 March 2018 and is for wet-lab researchers and bioinformaticians who want to access scientific literature and data programmatically. Some prior knowledge of programmatic access and common programming languages is recommended.

The webinar covers: available data (annotation types and sources) (1:50); API operations, parameters, and web service outputs (8:08); use case examples (16:56); and how to get help (24:16).

You can download the slides from this webinar here. You can learn more about Europe PMC in our Europe PMC: Quick tour and our previous webinar Europe PMC, programmatically.

For documentation, help and support visit the Europe PMC help pages or download the developer-friendly web service guide. For web service related questions you can get in touch via the Google group or contact the help desk at helpdesk [at] europepmc.org.”
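As a companion to the webinar excerpt above, here is a minimal sketch of querying the Europe PMC Annotations API programmatically in Python. The annotationsByArticleIds endpoint and the articleIds, type, and format parameters follow the Europe PMC developer documentation; the specific article ID and annotation type below are placeholders rather than values taken from the webinar.

```python
import requests

# Europe PMC Annotations API endpoint for fetching annotations by article ID
# (see the Europe PMC developer documentation for the full parameter list).
BASE_URL = "https://www.ebi.ac.uk/europepmc/annotations_api/annotationsByArticleIds"


def fetch_annotations(article_id, annotation_type):
    """Fetch text-mined annotations of one type for a single article.

    article_id uses the "SOURCE:ID" convention, e.g. "MED:<PubMed ID>".
    """
    params = {
        "articleIds": article_id,
        "type": annotation_type,  # e.g. "Diseases" or "Gene Mutations"
        "format": "JSON",
    }
    response = requests.get(BASE_URL, params=params, timeout=30)
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    # Placeholder PubMed ID; replace with an article of interest.
    results = fetch_annotations("MED:28585529", "Diseases")
    for article in results:
        for ann in article.get("annotations", []):
            # "exact" is the text span the text-miner matched; "tags" carry
            # linked identifiers (e.g. an ontology URI) when available.
            tags = ann.get("tags", [])
            uri = tags[0].get("uri", "") if tags else ""
            print(ann.get("exact", ""), uri)
```

The same pattern extends to the use cases mentioned in the webinar, for example filtering by annotation type to link disease mentions with potential treatment targets across a list of article IDs.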

Can the automatic posting of preprints increase the pace of medical research? – The Publication Plan for everyone interested in medical writing, the development of medical publications, and publication planning

“Preprints — versions of research papers made publicly available prior to formal publication in a peer reviewed journal — continue to be a topic of much discussion within the medical publications community. As the industry looks at ways to improve and advance the transparent and timely dissemination of research, preprints offer a potential route to achieving these aims. Preprints are already commonly used in fields such as physics, and the launch of the medical preprint server medRxiv, expected later this year, is awaited with interest.

Meanwhile, Public Library of Science (PLOS) announced last month that all articles submitted to PLOS journals will now automatically be published on the biology preprint server bioRxiv as preprints, ahead of ‘traditional’ publication in a PLOS journal. Following initial top-line checks by PLOS, to ensure adherence to things like ethical standards and the journal’s scope, articles will be posted to bioRxiv while undergoing peer review at PLOS in parallel.

PLOS and Cold Spring Harbor Laboratory, which operates bioRxiv, hope this collaboration will help advance data dissemination and ultimately increase the speed of research. The potential of preprints has also been explored by other groups, including the possibility for preprints to improve online article engagement and for journals to use preprint servers to identify potential articles for publication.”

Converting the Literature of a Scientific Field to Open Access Through Global Collaboration: the Experience of SCOAP3 in Particle Physics[v1] | Preprints

Kohls, A.; Mele, S. Converting the Literature of a Scientific Field to Open Access Through Global Collaboration: the Experience of SCOAP3 in Particle Physics. Preprints 2018

Abstract: “Gigantic particle accelerators, incredibly complex detectors, an antimatter factory and the discovery of the Higgs boson – this is part of what makes CERN famous. Only a few know that CERN also hosts the world’s largest Open Access initiative: SCOAP3. The Sponsoring Consortium for Open Access Publishing in Particle Physics (SCOAP3) started operation in 2014 and has since supported the publication of 19,000 Open Access articles in the field of particle physics, at no direct cost or burden to individual authors worldwide. SCOAP3 is made possible by a 3,000-institute-strong partnership, in which libraries redirect funds previously used for subscriptions to ‘flip’ articles to ‘gold Open Access’. With its recent expansion, the initiative now covers about 90% of the journal literature of the field. This article describes the economic principles of SCOAP3, the collaborative approach of the partnership, and finally summarizes financial results after four years of successful operation.”

Why OpenStreetMap is in Serious Trouble — Emacsen’s Blog

“…The first problem that I feel plagues OSM is that the OpenStreetMap Foundation views the mission of the project as providing the world with a geographic database, but not geographic services. OSM gives people the tools to create their own map rather than offering them a simple, out-of-the-box solution. Providing the ability for individuals and organizations to make their own map may work well for some, but it discourages small and medium-sized organizations from using OSM and thus engaging with the project. And even if they do use our data, their engagement is through a third party, rather than directly with us….”