Abstract: In 2019 we became increasingly aware of authors at Imperial College London choosing to publish grey literature by hosting PDFs or full text on local websites. Recognising the need to improve the institutional open access repository as a venue of choice for publishing or co-publishing grey literature, we developed a publishing model built on identifiers (DOIs and ORCIDs) and metrics (indexing, citations and Altmetric coverage). Some of these incentives already existed in the repository but had not previously been explicitly communicated as benefits, whilst others required technical infrastructure development and scholarly communications education for authors. As of September 2020, a 206% increase in deposits of one type of grey literature has been observed over the previous full year, including Imperial’s influential COVID-19 reports.
“There are more than 25 million Open Access versions of otherwise “paywalled” scientific articles; however, they are often not easy to find.
Open Access Helper for iOS & macOS is designed to help you get easy access to these documents, with a lot of help from some amazing APIs….
Open Access Helper is designed to make finding the best Open Access location easy. Whenever my app comes across a DOI, it will query the APIs of unpaywall.org & core.ac.uk to see if an Open Access copy is available elsewhere.
The App is free and Open Source and I have no intention to change that….”
“The National Information Standards Organization (NISO) announced today that their Voting Members have approved a new work item to update the 2008 Recommended Practice, NISO RP-8-2008, Journal Article Versions (JAV): Recommendations of the NISO/ALPSP JAV Technical Working Group. A NISO Working Group is being set up, and work is expected to begin in early 2021.
Publication practices have changed rapidly since the publication of the original recommendations. For example, preprints have become much more important as a publication type in many disciplines, and publishers are increasingly experimenting with new ways to publish, update, and keep research alive. All of these versions of an article are important and citable, making the concept of a single ‘version of record’ less relevant. These additional processes to support public availability make the consistent assignment of DOIs to one or more versions challenging.
The NISO JAV working group will define a set of terms for each of the different versions of content that are published, as well as a recommendation for whether separate DOIs should be assigned to them. They will address questions such as: Should there be a single DOI for an article, regardless of version? Different DOIs for each version? How are the identifiers connected and used? How do we define a version? As with all NISO output, the group’s draft recommendations will be shared for public comment before publication….”
“Part of reimagining the research ecosystem is recognising that the publication is not the only important output of research. What about the datasets underpinning the publications, the software and other outputs? What about getting credit for producing and sharing those outputs? A research data management policy was in its early stages of development at the CRG and my time with publications had come to an end. I said goodbye to ISA with a heavy heart and travelled deeper into the depths of the open infrastructure supporting data sharing.
I have now been working with DataCite for 2 years. I am the support manager, which means I spend time helping librarians with metadata problems and reporting bugs, as well as organizing meetings and writing documentation. I work with the community engagement team, the data community being the backbone of our organization.
DataCite is a DOI registration agency. I didn’t know much about DOIs when I started working for them; now maybe I know too much. A DOI (Digital Object Identifier) is a type of persistent identifier (PID). They are used to uniquely identify “stuff”, for example: publications (Crossref DOIs), datasets (DataCite DOIs), researchers (ORCIDs) and research institutions (ROR). We sometimes joke, in a very nerdy way, about other types of things that could have identifiers. There is already a move to have them for conferences, samples and instruments. DOIs are always accompanied by metadata. Some basic examples of metadata would be the “title”, “publisher” and “date” of the content being shared.
We work primarily with repositories (some well known generalist repositories are Zenodo and Figshare) to assign DOIs to research outputs. Assigning a DOI and the accompanying metadata means that the research outputs in these repositories can be discovered, cited and tracked. It makes data FAIR (Findable, Accessible, Interoperable, Reusable). There is no doubt that data sharing and citation are an essential part of moving towards a better research ecosystem. But getting this to happen takes time and effort. It involves changing practices – like actually citing the underlying data in research articles – and much more work lies ahead. DataCite works primarily with nonprofit organizations, but partnerships with for-profits open up new possibilities. There is no good and evil in this church. We must strive for openness, trust and transparency, there is no time to waste….”
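The DOI-plus-metadata pairing described above is observable in practice: the doi.org resolver supports HTTP content negotiation, returning machine-readable metadata instead of redirecting to a landing page. A minimal sketch, assuming the standard CSL JSON media type; the sample record fields mirror that format.

```python
# Fetch a DOI's metadata from the resolver via content negotiation,
# rather than scraping its landing page.
import urllib.request

def metadata_request(doi: str) -> urllib.request.Request:
    """Ask https://doi.org for CSL JSON metadata for a DOI."""
    return urllib.request.Request(
        f"https://doi.org/{doi}",
        headers={"Accept": "application/vnd.citationstyles.csl+json"},
    )

def basic_fields(record: dict) -> dict:
    """Extract the kind of basic metadata mentioned above:
    title, publisher, and date."""
    return {
        "title": record.get("title"),
        "publisher": record.get("publisher"),
        "date": (record.get("issued") or {}).get("date-parts"),
    }
```

Passing `metadata_request(...)` to `urllib.request.urlopen` and feeding the parsed JSON to `basic_fields` yields the title/publisher/date triple, whether the DOI was registered with Crossref or DataCite.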
“A stakeholder group was therefore formed earlier this year, with representatives from all disciplines and sectors — funders, HEIs, infrastructure providers, libraries, publishers, researchers, research managers, and more. At an initial meeting of this group in April, participants discussed the five persistent identifiers (PIDs) that have been deemed high priority for improving access to UK research. These are ORCID iDs for people, Crossref and DataCite DOIs for outputs, Crossref grant DOIs, ROR identifiers for organisations, and RAiDs for projects. This was followed by five focus group meetings during May and June, each focused on one of the priority PIDs….”
“In a bid to boost the reach and reuse of scientific results, a group of scholarly publishers has pledged to make abstracts of research papers free to read in a cross-disciplinary repository.
Most abstracts are already available on journal websites or on scholarly databases such as PubMed, even if the papers themselves are behind paywalls. But this patchwork limits the reach and visibility of global research, says Ludo Waltman, deputy director of the Centre for Science and Technology Studies at Leiden University in the Netherlands, and coordinator of the Initiative for Open Abstracts (I4OA).
Publishers involved in I4OA have agreed to submit their article summaries to Crossref, an agency that registers scholarly papers’ unique digital object identifiers (DOIs). Crossref will make the abstracts available in a common format. So far, 52 publishers have signed up to the initiative, including the American Association for the Advancement of Science and the US National Academy of Sciences….”
“Today DataCite is proud to announce the launch of DataCite Commons, available at https://commons.datacite.org. DataCite Commons is a discovery service that enables simple searches while giving users a comprehensive overview of connections between entities in the research landscape. This means that DataCite members registering DOIs with us will have easier access to information about the use of their DOIs and can discover and track connections between their DOIs and other entities. DataCite Commons was developed as part of the EC-funded project FREYA and will form the basis of new DataCite services….
We integrate with both the ORCID and ROR (Research Organization Registry) APIs to enable a search for (10 million) people and (100,000) organizations and to show the associated content. For funding, we take advantage of the inclusion of Crossref Funder IDs in ROR metadata. We combine these connections, showing a funder, research organization, or researcher not only their content but also the citations and views and downloads if available, together with aggregate statistics such as numbers by year or content type….”
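The organization side of that integration can be sketched against the public ROR API (`https://api.ror.org/organizations?query=…`). The field names below follow ROR's documented record schema, including the `FundRef` external IDs that carry Crossref Funder IDs; the sample record in the test is illustrative.

```python
# Look up an organization in ROR and pick out the fields DataCite
# Commons would use to connect it to content and funding.
import urllib.parse

def ror_query_url(name: str) -> str:
    """Build a ROR organization search URL for a name string."""
    return "https://api.ror.org/organizations?query=" + urllib.parse.quote(name)

def summarize(org: dict) -> dict:
    """Return the ROR ID, display name, and any Crossref Funder IDs
    recorded in the organization's external identifiers."""
    return {
        "id": org.get("id"),
        "name": org.get("name"),
        "funder_ids": (org.get("external_ids", {})
                          .get("FundRef", {})
                          .get("all", [])),
    }
```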
“The Digital Object Identifier, or DOI, is a persistent link, and specifically, it is an identifier allowing data to be traced from production to publication. By citing DOIs in all their publications, users guarantee the traceability of all the details of their experiment. This includes the request for beamtime, the experimental parameters and conditions, the instrumentation used, the data obtained, the analysis of this data, and the names of the research team members.”
Abstract: Wikipedia’s contents are based on reliable and published sources. To date, little is known about what sources Wikipedia relies on, in part because extracting citations and identifying cited sources is challenging. To close this gap, we release Wikipedia Citations, a comprehensive dataset of citations extracted from Wikipedia. A total of 29.3M citations were extracted from 6.1M English Wikipedia articles as of May 2020, and classified as being to books, journal articles or Web contents. We were thus able to extract 4.0M citations to scholarly publications with known identifiers — including DOI, PMC, PMID, and ISBN — and further labeled an extra 261K citations with DOIs from Crossref. As a result, we find that 6.7% of Wikipedia articles cite at least one journal article with an associated DOI. Scientific articles cited from Wikipedia correspond to 3.5% of all articles with a DOI currently indexed in the Web of Science. We release all our code to allow the community to extend upon our work and update the dataset in the future.
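The classification step the abstract describes can be sketched as a rule over which identifiers a citation carries. The field names below mirror common citation-template parameters and are assumptions for illustration, not the paper's actual schema.

```python
# Classify an extracted Wikipedia citation as a journal article,
# book, or web content based on its identifiers.
def classify(citation: dict) -> str:
    """Scholarly identifiers (DOI/PMID/PMC) imply a journal article,
    an ISBN implies a book, and anything else falls back to web
    content. Empty identifier values are ignored."""
    ids = {k.lower() for k, v in citation.items() if v}
    if ids & {"doi", "pmid", "pmc"}:
        return "journal-article"
    if "isbn" in ids:
        return "book"
    return "web-content"
```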
“The Repository Dashboard is a free service for our data providers. The Repository Dashboard has been created in an effort to improve the quality and transparency of the harvesting process of open access content and to create a two-way collaboration between the CORE project and our data providers.
The Repository Dashboard provides an online interface offering valuable technical information and statistics to content providers. It is the tool you need to check that your repository is configured correctly to provide maximum visibility to your research outputs. Additional features include identifier enrichments, such as detecting missing DOIs for repository records. The tool also offers REF 2021 Open Access compliance monitoring functionality to UK HEIs, and a RIOXX metadata quality checker….”