“Despite the challenges over the last year, we are pleased to share some exciting news about launching a brave new PID: the DMP ID. Two years ago we set out a plan in collaboration with the University of California Curation Center and the DMPTool to bring DMP IDs to life. The work was part of the NSF EAGER grant DMP Roadmap: Making Data Management Plans Actionable and allowed us to explore the potential of machine-actionable DMPs as a means to transform the DMP into a critical component of networked research data management.
The plan was to develop a persistent identifier (PID) for Data Management Plans (DMPs). We already have PIDs for many entities, such as articles and datasets (DOIs), people (ORCID iDs), and places (ROR IDs). We knew that it would be important for DataCite to support the community in establishing a unique persistent identifier for DMPs. Until now, we had no PID for the document that “describes data that will be acquired or produced during research; how the data will be managed, described, and stored, what standards you will use, and how data will be handled and protected during and after the completion of the project”. There was no such thing as a DMP ID, and today that changes….”
“Most are familiar with registering Digital Object Identifiers (DOIs), a type of Persistent Identifier (PID), to create lasting records for online research outputs. Registering DOIs for journal articles and other scholarly content and adding DOI links to references when possible is one of the best steps publishers can take to support research linking and discovery. But publishers shouldn’t stop at creating DOIs for articles. There are many other PIDs to consider adding to article-level metadata to support research discovery, assessment, and reuse. Additional PIDs can also expand the potential reach of content outputs when included in metadata registered with discovery services like Crossref.
During the NISO Plus session “Linked Data and the Future of Information Sharing,” Christian Herzog, CEO of Dimensions, and Shelley Stall, Senior Director of Data Leadership at the American Geophysical Union, discussed emerging PIDs that link research outputs not only to the content they reference but also to the scholars, institutions, and funders associated with them. Among the PIDs they said all publishers should consider adding to their metadata are:
ORCID identifiers for authors and their history of research contributions
Institutional IDs such as those developed by GRID, which is the seed data set for the community-led ROR open research organization identifier registry
Grant IDs and funder IDs, such as those in The Open Funder Registry…”
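To make the idea concrete, the PIDs listed above all attach to a single article record. Below is a minimal sketch in Python of what article-level metadata carrying these PIDs might look like; the field names, the specific identifiers, and the `collect_pids` helper are illustrative assumptions, not a real Crossref or DataCite schema.

```python
# Hypothetical article-level metadata record carrying multiple PIDs:
# a DOI for the article, an ORCID iD for the author, a ROR ID for the
# affiliation, and an Open Funder Registry DOI for the funder.
# All names and identifiers here are examples for illustration only.

article_metadata = {
    "doi": "https://doi.org/10.1234/example.2021.001",  # article DOI (made up)
    "authors": [
        {
            "name": "Jane Researcher",
            "orcid": "https://orcid.org/0000-0002-1825-0097",  # example ORCID iD
            "affiliation": {
                "name": "Example University",
                "ror": "https://ror.org/00x0x0x00",  # hypothetical ROR ID
            },
        }
    ],
    "funding": [
        {
            "funder_name": "Example Science Foundation",
            # Funder identifiers in the Open Funder Registry are DOIs
            # under the 10.13039 prefix; this one is illustrative.
            "funder_id": "https://doi.org/10.13039/100000001",
            "award_number": "ABC-123456",
        }
    ],
}


def collect_pids(record):
    """Gather every PID in the record, e.g. for link checking or reporting."""
    pids = [record["doi"]]
    for author in record["authors"]:
        pids.append(author["orcid"])
        pids.append(author["affiliation"]["ror"])
    for grant in record["funding"]:
        pids.append(grant["funder_id"])
    return pids


print(collect_pids(article_metadata))
```

Registering metadata in this shape with a discovery service means each entity in the record, not just the article, becomes a resolvable, linkable node.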
“Research Resource Identifiers (#RRID) are ID numbers assigned to help researchers cite key resources (antibodies, model organisms and software projects) in the biomedical literature to improve transparency of research methods….”
“Dealing with Open Access publications, where there are multiple authors, affiliated institutions, research funders, business models, policies, mandates, requirements and agreements involved, is complex and administratively burdensome. Funders, institutions and publishers are faced with a myriad of systems, portals and processes when dealing with Open Access publication-level arrangements. This hampers the transition to Open Access, the realisation of policies and agreements, and progress in developing new business models. From a researcher perspective, this landscape is at best confusing and at worst impenetrable. (see visual 1)
These challenges are in no way unique to the open access publishing landscape and are in fact relatively common in marketplaces that have an increasingly complex web of interactions between buyers and sellers (see OA Switchboard introductory blog, 2019). The introduction of a central intermediary is often the easiest way to reduce complexity on all sides, but a payment component doesn’t come without challenges and risks. Even if one leaves out the payment component, there is still complexity around information and data exchange in our ecosystem. Other industries tackled similar problems successfully long ago. (Think of SWIFT, the global financial messaging service: it has developed a common language across banks worldwide, serving its community for over 40 years now.) The inspiration for the OA Switchboard has also come from examples of community-governed scholarly infrastructure (such as Crossref), which have successfully brought together a large and diverse community of stakeholders to address complex challenges…”
“LYRASIS and Michigan Publishing announce the successful integration of the Fulcrum platform with Library Simplified/SimplyE and The Readium Foundation’s Thorium Desktop Reader.
This initiative brings together three open source reading and content delivery platforms, utilizing entirely open standards and technologies. By working together, the partners are improving discovery and access for ebooks and supporting the sustainability and scalability of two community-led social enterprises. …”
“Data sharing was a core principle that led to the success of the Human Genome Project 20 years ago. Now scientists are struggling to keep information free….
So in 1996, the HGP [Human Genome Project] researchers got together to lay out what became known as the Bermuda Principles, with all parties agreeing to make the human genome sequences available in public databases, ideally within 24 hours — no delays, no exceptions.
Fast-forward two decades, and the field is bursting with genomic data, thanks to improved technology both for sequencing whole genomes and for genotyping them by sequencing a few million select spots to quickly capture the variation within. These efforts have produced genetic readouts for tens of millions of individuals, and they sit in data repositories around the globe. The principles laid out during the HGP, and later adopted by journals and funding agencies, meant that anyone should be able to access the data created for published genome studies and use them to power new discoveries….
The explosion of data led governments, funding agencies, research institutes and private research consortia to develop their own custom-built databases for handling the complex and sometimes sensitive data sets. And the patchwork of repositories, with various rules for access and no standard data formatting, has led to a “Tower of Babel” situation, says Haussler….”
“We are pleased to announce the next OASPA webinar, which will explore the question of open metadata with regard to books. What are the relationships, challenges, and opportunities in developing open book metadata and open access in terms of labor, quality, persistence, standardization, accessibility, and discoverability?”