Accelerating Standards for 3D Data to Improve Long-Term Usability – Association of Research Libraries

“3D data means different things to different people. Most are probably familiar with highly processed outputs, like the previous examples, which often lack documentation describing how the data has been created and processed. In fact, depending on the creation method, the creator may not even have access to the processing information due to the use of proprietary tools. However, even when 3D data is well documented through the best efforts of a creator, data steward, or repository, the data’s description is generally bespoke, and the terms used are ambiguous. This gives 3D data a steep slope to climb to achieve findability, accessibility, interoperability, and reusability (FAIR-ness).

The use of 3D technologies has grown exponentially in the last 10 years. As a result, research libraries have invested significantly in infrastructure, services, and people to support research, teaching, and modeling applications of 3D technologies and data. Research libraries have begun creating and capturing 3D data using a variety of methods and formats, establishing 3D immersion labs, opening 3D printing shops within their library spaces, and adding 3D data to their repositories. As use of these tools and services has become more widespread, appropriate stewardship of the digital data is critical for ongoing accessibility, but such stewardship is not yet widely established or agreed upon. Enter the Community Standards for 3D Data Preservation (CS3DP) initiative.

Organized by colleagues at Washington University in St. Louis, the University of Michigan, and Iowa State University, CS3DP aims to be an open, radically inclusive, and collaborative community invested in creating standards. Composed of working groups from national and international participants, the CS3DP community has increased awareness and accelerated the creation and adoption of best practices, metadata standards, and policies for the stewardship of 3D data….”
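
To make the documentation gap concrete, here is a minimal sketch of the kind of structured provenance record the excerpt describes as usually missing or bespoke. Every field name is an illustrative assumption, not drawn from CS3DP or any other published standard:

```python
import json

# Hypothetical provenance record for a 3D capture; all field names are
# illustrative and not taken from CS3DP or any published standard.
record = {
    "title": "Ceramic vessel, surface model",
    "creation_method": "structured-light scanning",
    "capture_device": "ExampleScanner X1",  # placeholder device name
    "processing_history": [
        {"step": "mesh reconstruction", "software": "ExampleTool 2.1"},
        {
            "step": "decimation",
            "software": "ExampleTool 2.1",
            "parameters": {"target_faces": 500000},
        },
    ],
    "output_format": "OBJ",
    "units": "millimetres",
    "license": "CC BY 4.0",
}

# Even this much structure speaks to the FAIR concerns in the excerpt:
# a later user can see how the model was created and processed.
print(json.dumps(record, indent=2))
```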

OA Switchboard Reporting Made Easy

“We took a step back to speak with some of our partners about the rationale behind the need for a standardised, structured and validated data format, delivering real-time, situational authoritative data from the source. We’re grateful they agreed to share their views and experiences. These partners are Stacey Burke (American Physiological Society), Colleen Campbell (Max Planck Digital Library, ESAC, OA2020), Todd Carpenter (NISO), Helen Dobson (Jisc), Matthew Goddard (Iowa State University), Marten Stavenga (John Benjamins Publishing Company) and Ivo Verbeek (Elitex).

We asked our interviewees to answer five questions:

What is the underlying need for ‘reporting’?

What is the minimum set of metadata required to achieve that goal?

What sources (systems) capture and manage these (meta)data? Is it possible to extract the data?

What makes a ‘standard’? What’s the benefit of a ‘standard’? How to get there?

How does the OA Switchboard make reporting ‘easy’? How does it work, end-to-end and real-time? …”
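
As a rough sketch of what a "standardised, structured and validated" message could look like in practice, the snippet below defines a minimal publication report and checks it for required fields. The field names and the required set are assumptions made for illustration; the OA Switchboard defines its own JSON message schemas:

```python
# Sketch of a structured, validated publication-report message. The field
# names and the required set are illustrative assumptions; the OA
# Switchboard defines its own JSON message schemas.
REQUIRED_FIELDS = {"doi", "title", "authors", "institutions", "funders", "license"}

def missing_fields(message: dict) -> list:
    """Return any required fields absent from the message."""
    return sorted(REQUIRED_FIELDS - message.keys())

message = {
    "doi": "10.1234/example.5678",
    "title": "An example article",
    "authors": [{"name": "A. Author", "orcid": "0000-0000-0000-0000"}],
    "institutions": [{"name": "Example University", "ror": "https://ror.org/00example"}],
    "funders": [{"name": "Example Funder", "award": "EX-1234"}],
    "license": "CC BY 4.0",
}

gaps = missing_fields(message)
print("report is valid" if not gaps else f"missing: {gaps}")
```

Validation of this kind is what lets a recipient trust a report without manual checking, which is the point of a shared standard.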

Crossref expects rapid growth in use of unique grant identifiers – Research Professional News

“A representative of Crossref has said that the not-for-profit scholarly communications organisation is expecting a rapid expansion in the number of research grants that are allocated unique identifiers to allow anyone to easily search for resulting papers or data.

Speaking at the annual conference of the European Association of Research Managers and Administrators on 15 April, Rachael Lammey, head of special programmes at Crossref, said the organisation had already labelled just under 17,000 grants with unique codes known as digital object identifiers….”
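
For context, grant DOIs are what make funded outputs queryable. The sketch below searches Crossref's public REST API for works whose metadata credits a particular award; the filter names (award.number, award.funder) follow Crossref's documented works API, while the award number and funder DOI here are placeholders:

```python
import requests

# Sketch: find works whose Crossref metadata credits a particular award.
# The award number and funder DOI below are placeholders.
params = {
    "filter": "award.number:R01-EXAMPLE,award.funder:10.13039/100000002",
    "rows": 5,
}
resp = requests.get("https://api.crossref.org/works", params=params, timeout=30)
resp.raise_for_status()
for work in resp.json()["message"]["items"]:
    print(work.get("DOI"), (work.get("title") or ["(no title)"])[0])
```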

A Brave New PID: DMP-IDs

“Despite the challenges over the last year, we are pleased to share some exciting news about launching the brave new PID: DMP IDs. Two years ago we set out a plan in collaboration with the University of California Curation Center and the DMPTool to bring DMP IDs to life. The work was part of the NSF EAGER grant DMP Roadmap: Making Data Management Plans Actionable, and it allowed us to explore the potential of machine-actionable DMPs as a means to transform the DMP into a critical component of networked research data management.

The plan was to develop a persistent identifier (PID) for Data Management Plans (DMPs). We already have PIDs for many entities, such as articles and datasets (DOIs), people (such as ORCID iDs), and places (such as ROR IDs). We knew that it would be important for DataCite to support the community in establishing a unique persistent identifier for DMPs. Until now, we had no PID for the document that “describes data that will be acquired or produced during research; how the data will be managed, described, and stored, what standards you will use, and how data will be handled and protected during and after the completion of the project”. There was no such thing as a DMP ID, and today that changes….”
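
As a sketch of what minting a DMP ID might involve, the snippet below builds a DataCite-style registration payload. The endpoint and JSON:API envelope follow DataCite's public REST API documentation, and "OutputManagementPlan" is the resource type DataCite introduced for DMPs in Metadata Schema 4.4; the prefix, repository credentials, and landing URL are placeholders:

```python
import requests

# Sketch of minting a DMP ID through DataCite's REST API. The prefix,
# repository credentials, and landing URL below are placeholders.
payload = {
    "data": {
        "type": "dois",
        "attributes": {
            "prefix": "10.5072",  # DataCite's reserved test prefix
            "creators": [{"name": "Example, Researcher"}],
            "titles": [{"title": "Data management plan for an example project"}],
            "publisher": "Example University",
            "publicationYear": 2021,
            "types": {
                # "OutputManagementPlan" is the resourceTypeGeneral that
                # DataCite introduced for DMPs in Metadata Schema 4.4.
                "resourceTypeGeneral": "OutputManagementPlan",
                "resourceType": "Data Management Plan",
            },
            "url": "https://dmphub.example.org/dmps/123",  # placeholder
        },
    }
}

resp = requests.post(
    "https://api.datacite.org/dois",
    json=payload,
    auth=("EXAMPLE.REPO", "password"),  # placeholder credentials
    timeout=30,
)
print(resp.status_code)
```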

OA Switchboard Update

“Dealing with Open Access publications, where there are multiple authors, affiliated institutions, research funders, business models, policies, mandates, requirements and agreements involved, is complex and administratively burdensome. Funders, institutions and publishers are faced with a myriad of systems, portals and processes when dealing with Open Access publication-level arrangements. This hampers the transition to Open Access, the realisation of policies and agreements, and progress in developing new business models. From a researcher perspective, this landscape is at best confusing and at worst impenetrable. (see visual 1)

These challenges are in no way unique to the open access publishing landscape and are in fact relatively common in marketplaces that have an increasingly complex web of interactions between buyers and sellers (see OA Switchboard introductory blog, 2019). The introduction of a central intermediary is often the easiest way to reduce complexity on all sides, but a payment component doesn’t come without challenges and risks. Even if one leaves out the payment component, there is still complexity around information and data exchange in our ecosystem. Other industries tackled similar problems successfully long ago. (Think of SWIFT, the global financial messaging service: it developed a common language across banks worldwide and has served its community for over 40 years.) The inspiration for the OA Switchboard has also come from examples of community-governed scholarly infrastructure (such as Crossref), which have successfully brought together a large and diverse community of stakeholders to address complex challenges….”
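
The complexity-reduction argument for a central intermediary is easy to quantify: with direct exchange, every publisher-institution pair needs its own integration, while through a hub each participant needs only one connection. The counts below are purely illustrative:

```python
# Illustrative counts only: pairwise integrations versus a central hub.
publishers, institutions = 50, 200
point_to_point = publishers * institutions  # every pair needs its own link
via_hub = publishers + institutions         # one connection per participant
print(f"direct integrations: {point_to_point}, via a hub: {via_hub}")
```

Here 50 publishers and 200 institutions would need 10,000 point-to-point integrations but only 250 connections through a hub, which is the same economy SWIFT exploits for banks.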

LYRASIS and Michigan Publishing Advance Community-owned Publishing Ecosystem for eBook Distribution and Reading with Open-source System Integration

“LYRASIS and Michigan Publishing announce the successful integration of the Fulcrum platform with Library Simplified/SimplyE and The Readium Foundation’s Thorium Desktop Reader. 

This initiative brings together three open source reading and content delivery platforms, utilizing entirely open standards and technologies. By working together, the partners are improving discovery and access for ebooks and supporting the sustainability and scalability of two community-led social enterprises….”
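
One of the open standards in this space is OPDS, the Atom-based catalog format that Library Simplified/SimplyE uses for discovery. Assuming the integrated platforms expose such a feed (the catalog URL below is a placeholder), listing its titles takes only a few lines:

```python
import urllib.request
import xml.etree.ElementTree as ET

# Sketch: list titles from an OPDS 1.x catalog feed (an Atom dialect).
# The catalog URL is a placeholder.
ATOM = "{http://www.w3.org/2005/Atom}"
with urllib.request.urlopen("https://catalog.example.org/opds") as resp:
    root = ET.parse(resp).getroot()
for entry in root.iter(f"{ATOM}entry"):
    print(entry.findtext(f"{ATOM}title", default="(untitled)"))
```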

The broken promise that undermines human genome research

“Data sharing was a core principle that led to the success of the Human Genome Project 20 years ago. Now scientists are struggling to keep information free….

So in 1996, the HGP [Human Genome Project] researchers got together to lay out what became known as the Bermuda Principles, with all parties agreeing to make the human genome sequences available in public databases, ideally within 24 hours — no delays, no exceptions.

Fast-forward two decades, and the field is bursting with genomic data, thanks to improved technology both for sequencing whole genomes and for genotyping them by sequencing a few million select spots to quickly capture the variation within. These efforts have produced genetic readouts for tens of millions of individuals, and they sit in data repositories around the globe. The principles laid out during the HGP, and later adopted by journals and funding agencies, meant that anyone should be able to access the data created for published genome studies and use them to power new discoveries….

The explosion of data led governments, funding agencies, research institutes and private research consortia to develop their own custom-built databases for handling the complex and sometimes sensitive data sets. And the patchwork of repositories, with various rules for access and no standard data formatting, has led to a “Tower of Babel” situation, says Haussler….”