A few thoughts on OA Monitoring and CRISs (I) | euroCRIS | Pablo de Castro

“In the wake of the AT2OA workshop on Open Access monitoring to be imminently held in Vienna, the post looks into recent attempts to coordinate the various national-level initiatives taking place in the area and suggests some possible prerequisites for this international endeavour to succeed. It also argues that successful OA monitoring in the pioneering countries should pave the way for others to eventually follow for their own progress-assessment needs.

A European Council statement issued in May 2016 aimed to achieve full Open Access to research outputs by 2020. This was hailed at the time as a major step forward in the push to widen access to the results of publicly funded research. Nearly two years later there is a generalised awareness of the difficulty of reaching this political goal across the EU by the proposed deadline. This should however not stop the efforts to achieve further progress and to improve the way Open Access is being implemented – the 100% Open Access objective is clearly achievable in specific countries, which will then to some extent provide a best-practice approach.

One of the areas where more work needs to be done is the actual monitoring of progress in Open Access implementation. This has been on the cards for some time now, since national roadmaps with specific milestones and deadlines for reaching 100% Open Access started to be produced well before the European Council meeting itself was held. These national-level discussions have resulted in a number of initiatives to monitor Open Access that are being implemented in different countries. The Knowledge Exchange, which brings together stakeholders such as Jisc in the UK, the DFG in Germany, SURF in the Netherlands, DEFF in Denmark and CSC in Finland, has taken a particularly relevant role over the past couple of years in ensuring that the various national-level approaches to Open Access monitoring have the opportunity to discuss progress with each other at a number of workshops….”

Why OpenStreetMap is in Serious Trouble — Emacsen’s Blog

“…The first problem that I feel plagues OSM is that the OpenStreetMap Foundation views the mission of the project as providing the world with a geographic database, but not geographic services. OSM gives people the tools to create their own map rather than offering them a simple, out-of-the-box solution. Providing the ability for individuals and organizations to make their own map may work well for some, but it discourages small and medium-sized organizations from using OSM and thus engaging with the project. And even if they do use our data, their engagement is through a third party, rather than directly with us….”

David Kernohan: Open data is about more than a licence | Wonkhe | Comment

“The release of the “Higher Education Student Statistics: UK, 2016/2017” (Statistical First Release 247) by HESA was accompanied around the sector by a series of sudden sharp intakes of breath in institutional data offices. It represents a brave and bold move into new ways of presenting and sharing data, and shows off a new format that will delight some and disappoint others. In this article I look at what has changed, and why.

The dash for designation. In applying for Designated Data Body status in England, HESA has made a move towards offering “open data”, suggesting that “From 2021 all of our publications will be available in open data format, allowing additional access to the information we enrich.” The Open Data Institute defines open data as “data that anyone can access, use or share,” which sounds like a pretty good thing. In many cases, though, open data has simply meant data that is available under an open (usually Creative Commons) licence – good to have legal clarity, but not at all the same as providing easily usable data. HESA should be lauded for making this move for SFR248, but it is only a starting point….”

Data aggregators: a solution to open data issues – Open Knowledge International Blog

“Open Knowledge International’s report on the state of open data identifies the main problems affecting open government data initiatives. These are: the very low discoverability of open data sources, which were rightly described as being “hard or impossible to find”; the lack of interoperability of open data sources, which are often very difficult to use; and the lack of a standardised open licence, representing a legal obstacle to data sharing. These problems harm the very essence of the open data movement, which advocates data that is easy to find, free to access and free to reuse.

In this post, we will argue that data aggregators are a potential solution to the problems mentioned above. Data aggregators are online platforms that store data of various kinds in one central location, to be used for different purposes. We will argue that data aggregators are, to date, one of the most powerful and useful tools for handling open data and resolving the issues affecting it.

We will provide evidence for this argument by observing how the FAIR principles – Findability, Accessibility, Interoperability and Reusability – are put into practice by four different data aggregators built in Indonesia, the Czech Republic, the US and the EU. …”
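The post does not detail the four aggregators' APIs, so the following is an illustrative sketch only, assuming the US aggregator is the CKAN-based catalogue behind catalog.data.gov. CKAN's standard package_search endpoint shows the FAIR principles in miniature: keyword search (Findability), plain HTTP/JSON retrieval (Accessibility), a uniform record schema (Interoperability) and explicit licence metadata (Reusability).

```python
# Illustrative sketch: querying a CKAN-style aggregator to see the FAIR
# principles in practice. Assumes the US aggregator is catalog.data.gov,
# which runs CKAN; any CKAN instance exposes the same endpoint.
import json
import urllib.parse
import urllib.request

BASE = "https://catalog.data.gov/api/3/action/package_search"

def search_datasets(query: str, rows: int = 5) -> list[dict]:
    """Return dataset records matching `query` from the aggregator."""
    url = f"{BASE}?{urllib.parse.urlencode({'q': query, 'rows': rows})}"
    with urllib.request.urlopen(url) as resp:
        payload = json.load(resp)
    if not payload.get("success"):
        raise RuntimeError("CKAN API reported failure")
    return payload["result"]["results"]

for ds in search_datasets("air quality"):
    # Each record carries licence and format metadata, so the data can
    # be assessed for reuse without contacting the publisher.
    formats = {r.get("format", "?") for r in ds.get("resources", [])}
    print(ds["title"], "|", ds.get("license_id") or "no licence", "|", formats)
```

Because every dataset in the catalogue answers the same query interface with the same schema, the discoverability and interoperability problems the report identifies are addressed at the platform level rather than dataset by dataset.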


Open and Shut?: Realising the BOAI vision: Peter Suber’s Advice

Peter Suber’s current high-priority recommendations for advancing open access.

Global Persistent Identifiers for grants, awards, and facilities – Crossref

“Most funders already have local, internal grant identifiers. But there are over 15K funders currently listed in the aforementioned Open Funder Registry. The problem is that each funder has its own identifier scheme and (sometimes) API. It is very difficult for third parties to integrate with so many different systems. Open, global, persistent and machine-actionable identifiers are key to scaling these activities.

We already have a sophisticated open, global, interoperable infrastructure of persistent identifier systems for some key elements of scholarly communications. We have persistent identifiers for researchers and contributors (ORCID iDs), for data and software (DataCite DOIs), for journal articles, preprints, conference proceedings, peer reviews, monographs and standards (Crossref DOIs), and for Funders (Open Funder Registry IDs).

And there are similar systems under active development for research organizations, conferences, projects and resources reported in the biomedical literature (e.g. antibodies, model organisms). At a minimum, open, persistent identifiers address the inherent difficulty of disambiguating entities based on textual strings (structured or otherwise). This precision, in turn, allows automated cross-walking of linked identifiers through APIs and metadata, which enables advanced applications….”
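As a hedged illustration of that cross-walking (not from the Crossref post itself): the public Crossref REST API serves work metadata at api.crossref.org/works/{doi}, and where a publisher has deposited funding information, each funder entry carries an Open Funder Registry DOI alongside the free-text funder name. The DOI used below is Crossref's long-standing test record, chosen only as a placeholder.

```python
# Minimal sketch of PID-based cross-walking: resolve a Crossref DOI and
# read the Open Funder Registry IDs deposited with it.
import json
import urllib.request

def funders_for(doi: str) -> list[dict]:
    """Return the funder entries recorded in Crossref metadata for `doi`."""
    url = f"https://api.crossref.org/works/{doi}"
    with urllib.request.urlopen(url) as resp:
        work = json.load(resp)["message"]
    # The `funder` key is only present when funding metadata was deposited.
    return [
        {
            "name": f.get("name"),        # free-text string (ambiguous)
            "funder_id": f.get("DOI"),    # 10.13039/... Funder Registry DOI
            "awards": f.get("award", []), # funder-local grant numbers
        }
        for f in work.get("funder", [])
    ]

# Placeholder: Crossref's test DOI; many records, this one likely included,
# simply return an empty list because no funding metadata was deposited.
print(funders_for("10.5555/12345678"))
```

The registry DOI is exactly the disambiguation the post describes: variant strings such as “European Commission” and “EC” collapse onto a single 10.13039/… identifier that downstream systems can link on without string matching.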
