Free Journal Network – Nurturing an ecosystem of high quality, open access, scholar-controlled journals with no author-facing charges

“The purpose of this site is to promote scholarly journals run according to the Fair Open Access model (roughly, journals that are controlled by the scholarly community, and have no financial barriers to readers and authors – see the Fair Open Access Principles for full details). Such journals have a long history. Many are of high procedural quality, but are less well known than commercial journals of similar or lower quality.

One main aim of this site is to help such journals to coordinate their efforts to accelerate the creation of a journal ecosystem that will out-compete the commercially controlled journals. Such efforts are complementary to the work of discipline-based organizations such as LingOA, MathOA and PsyOA, and the overarching FOAA, that focus primarily on converting commercially controlled subscription journals to Fair Open Access….”

Harvard researchers to help develop cloud-based NIH Data Commons platform – Harvard Gazette

“The National Institutes of Health has announced that Harvard co-Principal Investigators Dr. Mercè Crosas and Dr. Timothy Clark are NIH Data Commons Pilot Phase Awardees.

The awards are part of the National Institutes of Health’s new Data Commons program, which will be implemented in a 4-year pilot phase to explore the feasibility and best practices for making digital objects, including very large-scale genomics resources, available and computable through collaborative platforms. This will be done on public clouds, virtual spaces where service providers make resources, such as applications and storage, available over the internet. The goal of the NIH Data Commons Pilot Phase is to accelerate biomedical discoveries by making biomedical research data Findable, Accessible, Interoperable, and Reusable (FAIR) for more researchers….”

Data aggregators: a solution to open data issues – Open Knowledge International Blog

“Open Knowledge International’s report on the state of open data identifies the main problems affecting open government data initiatives. These are: the very low discoverability of open data sources, which were rightly described as “hard or impossible to find”; the lack of interoperability of open data sources, which are often very difficult to utilise; and the lack of a standardised open license, representing a legal obstacle to data sharing. These problems harm the very essence of the open data movement, which advocates for data that is easy to find, free to access, and free to reuse.

In this post, we will argue that data aggregators are a potential solution to the problems mentioned above. Data aggregators are online platforms which store data of various kinds at one central location, to be used for different purposes. We will argue that data aggregators are, to date, one of the most powerful and useful tools for handling open data and resolving the issues affecting it.

We will provide evidence in favour of this argument by observing how the FAIR principles, namely Findability, Accessibility, Interoperability and Reusability, are put into practice by four different data aggregators engineered in Indonesia, the Czech Republic, the US and the EU. …”
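
A concrete way to see how a central aggregator improves findability and accessibility is through its catalogue API. The sketch below is a minimal, hypothetical example of querying a CKAN-style Action API, which many open government data portals expose; the portal URL and search term are placeholders and do not refer to the four aggregators discussed in the post.

```python
# Minimal sketch: searching a CKAN-style open data aggregator.
# The base URL and query are placeholders, not endpoints named in the post;
# any CKAN-backed portal exposes the same Action API.
import requests

BASE_URL = "https://demo.ckan.org"  # placeholder portal


def search_datasets(query, rows=5):
    """Return basic metadata for datasets matching `query`."""
    resp = requests.get(
        f"{BASE_URL}/api/3/action/package_search",
        params={"q": query, "rows": rows},
        timeout=30,
    )
    resp.raise_for_status()
    datasets = resp.json()["result"]["results"]
    return [
        {
            "title": d.get("title"),
            # Reusability: is there a standardised, machine-readable licence?
            "license": d.get("license_id"),
            # Interoperability: which formats are the distributions available in?
            "formats": sorted({r.get("format") or "unknown" for r in d.get("resources", [])}),
        }
        for d in datasets
    ]


if __name__ == "__main__":
    for dataset in search_datasets("public transport"):
        print(dataset)
```

The licence and format fields returned here are exactly the weak points the post identifies: a standardised licence signals reusability, and open, machine-readable formats signal interoperability.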

Implementing FAIR Data Principles: The Role of Libraries – LIBER

The FAIR Data Principles are a set of guiding principles to make data findable, accessible, interoperable and reusable (Wilkinson et al., 2016). These principles provide guidance for scientific data management and stewardship and are relevant to all stakeholders in the current digital ecosystem. They directly address data producers and data publishers to promote maximum use of research data. Research libraries can use the FAIR Data Principles as a framework for fostering and extending research data services.
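
As a simplified illustration of how the four principles translate into concrete metadata practice, the sketch below builds a minimal dataset description in schema.org/Dataset JSON-LD: a persistent identifier and rich metadata support findability, a stable HTTPS URL supports accessibility, an open machine-readable format and a shared vocabulary support interoperability, and an explicit licence and provenance support reusability. All identifiers and URLs are invented placeholders, not records from any of the services mentioned here.

```python
# Minimal sketch of a FAIR-oriented dataset description, serialised as
# schema.org/Dataset JSON-LD. All identifiers and URLs are invented placeholders.
import json

record = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    # Findable: a globally unique, persistent identifier plus rich metadata.
    "identifier": "https://doi.org/10.xxxx/example-dataset",  # placeholder DOI
    "name": "Example survey of journal subscription prices",
    "description": "Illustrative record only; not a real dataset.",
    "keywords": ["open data", "FAIR", "example"],
    # Accessible: retrievable over a standard protocol (HTTPS) at a stable URL.
    "url": "https://repository.example.org/datasets/example",
    # Interoperable: open, machine-readable distribution format and a shared vocabulary.
    "distribution": {
        "@type": "DataDownload",
        "encodingFormat": "text/csv",
        "contentUrl": "https://repository.example.org/datasets/example/data.csv",
    },
    # Reusable: explicit licence and provenance.
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "creator": {"@type": "Organization", "name": "Example University Library"},
    "dateModified": "2018-06-01",
}

print(json.dumps(record, indent=2))
```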

FAIRDOM

“Do you want to get the most impact from your research? Are you tired of searching through old computer files to find the methods and data that link together? Do you want to showcase your research from your best publications?

FAIRDOM helps you to be in control of collecting, managing, storing, and publishing your data, models, and operating procedures. 

Join the hundreds of researchers who have improved research management practices in their labs and for themselves, using our software and expertise. …”

FAIR Principles

“This page elaborates on the individual FAIR Principles, the rationale behind them and the reason they are worded the way they are. This is also a living document. The Principles are not intended to be static and have not been “ratified”. The principles may change, based on community input and discussion of suggestions among the FAIR Principles Stewardship group. …”

Big Deal journal bundles: price information from New Zealand | Filling a much-needed gap

“In 2014 Timothy Gowers and others used Freedom of Information laws to extract the relevant price information from UK universities. See here for more detailed information. Earlier, less extensive work in the USA (2009) had also been done by Ted Bergstrom and others. Inspired by this, I tried the same thing in New Zealand, covering 7 of the 8 universities so far, representing around 8,400 academic/research staff and 130,000 students (Lincoln University, very much smaller than the others, was omitted owing to an oversight). Whereas Gowers was able to obtain the requested information within a few weeks, it has taken me 3.5 years. In both countries universities originally refused to release the information. However, in the UK there is an automatic right of review of such decisions, undertaken by an academic. In NZ, no such right exists….”