“The Environmental Protection Agency (EPA) appears to have put a deeply controversial plan limiting the use of scientific data in policymaking on hold for the time being. The move follows significant outcry from experts and the agency’s own staff….
On its face, that push for transparency might resonate with some — but experts have repeatedly emphasized that confidential data is private for a reason. Making it public could violate patient privacy or industry confidentiality, in many instances breaking the law and potentially allowing for distortions of the information. Limiting the data government officials can use, meanwhile, could hinder efforts to protect both human health and the environment….”
Abstract: Open access libraries operate in a continuum between two distinct organisation models: online retailers versus ‘traditional’ libraries. Online retailers such as Amazon.com are successful in recommending additional items that match the specific needs of their customers. The success rate of the recommendation depends on knowledge of the individual customer: more knowledge about persons leads to better suggestions. Thus, to optimally profit from the retailers’ offerings, the client must be prepared to share personal information, leading to the question of privacy.
In contrast, protection of privacy is a core value for libraries. The question is how open access libraries can offer comparable services while retaining the readers’ privacy. A possible solution can be found in analysing the preferences of groups of like-minded people: communities. According to Lynch (2002), digital libraries are bad at identifying or predicting the communities that will use their collections. It is however our intention to explore the possibility of uncovering sets of documents with a meaningful connection for groups of readers – the communities. The solution depends on examining patterns of usage, instead of storing information about individual readers.
This paper will investigate the possibility of uncovering the preferences of user groups within an open access digital library using social network analysis techniques.
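The abstract's approach – clustering documents by co-usage rather than profiling individual readers – can be sketched with standard network-analysis tooling. The following is a minimal illustration, not code from the paper: it assumes the `networkx` library and uses entirely hypothetical, anonymised session data (only which documents were viewed together is kept, never who viewed them).

```python
# Illustrative sketch: finding document "communities" from anonymised
# usage patterns, in the spirit of the abstract above.
# Assumes: networkx (third-party); session data is hypothetical.
import itertools
import networkx as nx
from networkx.algorithms import community

# Each session records only the set of documents viewed together;
# no reader identities are stored.
sessions = [
    {"doc_a", "doc_b", "doc_c"},
    {"doc_a", "doc_b"},
    {"doc_d", "doc_e"},
    {"doc_d", "doc_e", "doc_f"},
    {"doc_c", "doc_a"},
]

# Co-usage graph: documents are nodes; an edge's weight counts how
# often two documents appear in the same session.
G = nx.Graph()
for session in sessions:
    for u, v in itertools.combinations(sorted(session), 2):
        weight = G.get_edge_data(u, v, default={"weight": 0})["weight"]
        G.add_edge(u, v, weight=weight + 1)

# Modularity-based clustering groups documents with a meaningful
# connection -- a privacy-preserving proxy for reader communities.
clusters = [
    set(c)
    for c in community.greedy_modularity_communities(G, weight="weight")
]
```

On this toy data the clustering separates the two usage clusters (`doc_a`/`doc_b`/`doc_c` and `doc_d`/`doc_e`/`doc_f`); the point is that the grouping emerges from aggregate co-access counts alone.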
[Less about OA than convenient access to non-OA sources.]
“Resource Access for the 21st Century (RA21) is a joint STM – NISO initiative aimed at optimizing protocols across key stakeholder groups, with a goal of facilitating a seamless user experience for consumers of scientific communication. In addition, this comprehensive initiative is working to solve long-standing, complex, and broadly distributed challenges in the areas of network security and user privacy. Community conversations and consensus building to engage all stakeholders are currently underway in order to explore potential alternatives to IP-authentication, and to build momentum toward testing alternatives among researcher, customer, vendor, and publisher partners.”
[Less about OA than convenient access to non-OA sources.]
“Publishers, libraries, and consumers have all come to the understanding that authorizing access to content based on IP address no longer works in today’s distributed world. The RA21 project hopes to resolve some of the fundamental issues that create barriers to moving to federated identity in place of IP address authentication by looking at some of the products and services available in the identity discovery space today, and determining best practice for future implementations going forward.”
“In 2005, [George Church] launched the Personal Genome Project (PGP), which collects data on a person’s DNA, environmental background, and relevant health and disease information from consenting participants. The premise of the PGP is grounded in open science, meaning that all this data is publicly available to researchers, who then study the relationship between specific DNA sequences and various displayed traits, like having an especially good memory.
This openness is the hallmark of the PGP, described on their website as “a vision and coalition of projects across the world dedicated to creating public genome, health, and trait data.” The PGP seeks to share data for the “greater good” in ways that have been previously “hampered by traditional research practices.” In other words, by being set up as an open-access project that allows individuals to freely share their data with researchers, no single researcher can “control” access to the data. By inviting participants to openly share their own personal data, this project allows individuals to directly impact scientific progress….”
“Changes to European legislation mandating public access to information about company owners are likely to stir up trouble when the time comes for privacy-obsessed Germany to transpose the rules into national law….”
Klaus Graf argues against the HBZ conclusion that the GDPR required it to block access to the historically important archives of the HBZ mailing lists.
“GDPR has a dual objective, protecting the data subject and, at the same time, increasing the free and lawful flow of data. By adhering to the GDPR principles, the research community is able to ensure maximum protection of personal data while maximizing the potential of opening research to the world.”
“A “PA” (Protected Access) notation may be added to open data badges if sensitive, personal data are available only from an approved third party repository that manages access to data to qualified researchers through a documented process. To be eligible for an open data badge with such a notation, the repository must publicly describe the steps necessary to obtain the data and detailed data documentation (e.g. variable names and allowed values) must be made available publicly. This notation is not available to researchers who state that they will make “data available upon request” and is not available if requests for data sharing are evaluated on any criteria beyond considerations for compliance with proper handling of sensitive data. For example, this notation is not available if limitations are placed on the permitted use of the data, such as for data that are only made available for the purposes of replicating previously published results or for which there is substantive review of analytical results. Review of results to avoid disclosure of confidential information is permissible….”
“Frankl is a blockchain platform and tokenised economy to promote, facilitate, and incentivise the practice of open science. The initial focus of Frankl is cognitive assessment – an area of our expertise, and a research domain that faces particular challenges that are amenable to blockchain solutions.
In Phase I, Frankl will develop app-based cognitive assessments that streamline test administration and improve accessibility for children and adults with physical or cognitive disabilities. Apps will interface with blockchain-based data storage, facilitating data sharing for clinical and research purposes while maintaining privacy of individuals via encryption. Access to the Frankl suite of apps will be via micropayments in Frankl tokens.
In Phase II, Frankl will release the source code for the apps, enabling researchers, clinicians, and independent app developers to build their own cognitive assessment apps on the Frankl platform. In this way, Frankl will create a marketplace to incentivise (via Frankl tokens) the development of new and better cognitive assessments, simultaneously promoting open science and disrupting the forecast (by 2021) USD 8 billion global market for cognitive assessment and training.
This whitepaper outlines the technical specifications for the Frankl platform, the practical path to its creation, and exemplar applications including our first use case – a cognitive assessment specifically designed for autistic children. We provide details of the Frankl token economy and participation, and sketch out our long term vision for the development of Frankl as an interface whereby blockchain technologies facilitate the widespread adoption of open science practices. …”