“I just updated the home page for my book, Open Access (MIT Press, 2012), with more than 80 links to relevant tag libraries from the Open Access Tracking Project (OATP)….”
“David Lewis has recently proposed that libraries devote 2.5% of their total budgets to support the common infrastructure needed to create the open scholarly commons….In the early stages of exploring this idea, we want to come to some level of agreement about what would in fact count as such an investment, and then build a registry that would allow libraries to record their investments in this area, track their investments over time, and compare their investments with like institutions. The registry would also serve as a guide for those looking for ideas for how to make the best investments for their institution, providing a listing of all ‘approved’ ways to invest in open, and as a place for those seeking investment to be discovered. As a first step towards building such a thing, we are crowdsourcing the creation of the inventory of ways to invest….”
Abstract: Risk analysis and risk governance face a decline in social trust at both the scientific and policy levels. The involvement of society in the process has been proposed as an approach to increasing trust and engagement by making better use of available data and knowledge. In this session, EFSA explored the challenges in building trust and engagement and the latest thinking and methodologies for increasing openness that can help the organisation to move beyond traditional dialogue and towards a more sustainable stakeholder and society interaction. The discussion centred on the needs of EFSA and of target audiences throughout the process, from risk assessment initiation through societal decision-making and communication. The main focus of the session was on methodologies and approaches that would enable EFSA to increase its scientific rigour and build trust from additional inputs gained by opening up its risk assessments at the level of data gathering, data analysis, expertise and innovation. This will require an approach that moves beyond traditional risk assessment practices that rely on a long chain of static information and knowledge such as scientific articles, reviews, expert groups and committees.
Abstract: Since its foundation, EFSA and the Member States have made significant progress in the area of data collection for risk assessment and monitoring. In partnership with competent authorities and research organisations in the Member States, EFSA has become a central hub of the European data on food consumption, chemical occurrence and foodborne outbreaks. Beyond EFSA’s use of these data and sharing of contaminants and food consumption data with the World Health Organization and the Food and Agriculture Organization to support international risk assessment, they remain largely unexploited. In addition, for some of its risk assessments, EFSA also relies on published information, as well as on scientific studies sponsored and submitted by industry. The environment in which the Authority operates has significantly evolved since its foundation. The growth of digital technology has granted scientists and consumers alike faster and more efficient access to data and information. The open data movement, which has entered the sphere of the European Union institutions, is unleashing the potential for reuse of data. In parallel, the work of EFSA is increasingly subject to demands for more openness and transparency across its spectrum of stakeholders. EFSA aims to enhance the quality and transparency of its outputs by giving access to data and promoting the development of collaborative platforms in Europe and internationally. EFSA also plans to work with data providers and organisations funding research to adopt open data concepts and standards; gaining better access to, and making better use of, data from a wider evidence base. 
During the breakout session on ‘Open Risk Assessment: Data’ at the EFSA 2nd Scientific Conference ‘Shaping the Future of Food Safety, Together’ (Milan, Italy, 14–16 October 2015), opportunities and challenges associated with open data, data interoperability and data quality were discussed by sharing experiences from various sectors within and outside EFSA’s remit. This paper provides an overview of the presentations and discussions during the breakout session.
“Open Humans is a program of the nonprofit Open Humans Foundation and has been funded by the Robert Wood Johnson Foundation and the Knight Foundation. Our 2015 launch was written up in Forbes, Newsweek, Scientific American, and more.
You decide when to share. You have valuable data, and you’ll decide when to share it. The data you provide will be private by default. You can choose which projects to share with. You can also opt to make some (or all) of your data public, so anyone can access and research it!
Studies, projects, and more. Browse our activities list to see the many potential data sources you can add, and interesting projects you can join.
Be a part of research. We’ll recognize your contributions with badges on your profile page, invite you to talk to other community members in our online forums, and periodically post new activities, study updates, and relevant interviews in our newsletters and on our blog….”
“Academia has teamed up with Encyclopedia Britannica to offer access to all of Britannica’s content to Academia Premium users.
Academia is also inviting its members to contribute as authors on Britannica’s Publisher Partner Program. We’ve joined dozens of institutions including UC Berkeley, Northwestern University, the University of Melbourne and others in support of the initiative, which aims to expand Britannica’s free, open access content.”
“Research experiences today are limited to a privileged few at select universities. Providing open access to research experiences would enable global upward mobility and increased diversity in the scientific workforce. How can we coordinate a crowd of diverse volunteers on open-ended research? How could a PI have enough visibility into each person’s contributions to recommend them for further study? We present Crowd Research, a crowdsourcing technique that coordinates open-ended research through an iterative cycle of open contribution, synchronous collaboration, and peer assessment. To aid upward mobility and recognize contributions in publications, we introduce a decentralized credit system: participants allocate credits to each other, which a graph centrality algorithm translates into a collectively-created author order. Over 1,500 people from 62 countries have participated, 74% from institutions with low access to research. Over two years and three projects, this crowd has produced articles at top-tier Computer Science venues, and participants have gone on to leading graduate programs.”
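The credit mechanism the abstract describes — participants allocate credits to one another, and a graph centrality algorithm turns the credit graph into an author order — can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' actual implementation: the participant names and allocations are made up, and the centrality used here is a simple power-iteration (PageRank-style) ranking over the credit graph.

```python
# Sketch of a decentralized credit system: each participant allocates
# credits to collaborators, and eigenvector-style centrality (computed
# by power iteration) converts the credit graph into an author order.
# Participants and allocation amounts below are hypothetical.

def author_order(credits, iterations=100):
    """credits: {giver: {receiver: amount}} -> names ranked by centrality."""
    names = sorted({n for g in credits for n in (g, *credits[g])})
    rank = {n: 1.0 / len(names) for n in names}
    for _ in range(iterations):
        new = {n: 0.0 for n in names}
        for giver, allocation in credits.items():
            total = sum(allocation.values())
            for receiver, amount in allocation.items():
                # Credit handed out is weighted by the giver's own standing,
                # so recognition from well-recognized contributors counts more.
                new[receiver] += rank[giver] * amount / total
        norm = sum(new.values()) or 1.0
        rank = {n: value / norm for n, value in new.items()}
    return sorted(names, key=lambda n: rank[n], reverse=True)

allocations = {
    "ana": {"bo": 3, "cy": 1},
    "bo":  {"ana": 2, "cy": 2},
    "cy":  {"bo": 4},
}
print(author_order(allocations))  # names ranked by collectively assigned credit
```

The key design choice this illustrates is that no single person fixes the author order: the ranking emerges from everyone's pairwise allocations, which is what makes the system decentralized.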
“There’s a vast trove of science out there locked inside the PDF format. From preprints to peer-reviewed literature and historical research, millions of scientific manuscripts today can only be found in a print-era format that is effectively inaccessible to the web of interconnected online services and APIs that are increasingly becoming the digital scaffold of today’s research infrastructure….Extracting key information from PDF files isn’t trivial. …It would therefore certainly be useful to be able to extract all key data from manuscript PDFs and store it in a more accessible, more reusable format such as XML (of the publishing industry standard JATS variety or otherwise). This would allow for the flexible conversion of the original manuscript into different forms, from mobile-friendly layouts to enhanced views like eLife’s side-by-side view (through eLife Lens). It will also make the research mineable and API-accessible to any number of tools, services and applications. From advanced search tools to the contextual presentation of semantic tags based on users’ interests, and from cross-domain mash-ups showing correlations between different papers to novel applications like ScienceFair, a move away from PDF and toward a more open and flexible format like XML would unlock a multitude of use cases for the discovery and reuse of existing research….We are embarking on a project to build on these existing open-source tools, and to improve the accuracy of the XML output. One aim of the project is to combine some of the existing tools in a modular PDF-to-XML conversion pipeline that achieves a better overall conversion result compared to using individual tools on their own. 
In addition, we are experimenting with a different approach to the problem: using computer vision to identify key components of the scientific manuscript in PDF format….To this end, we will be collaborating with other publishers to collate a broad corpus of valid PDF/XML pairs to help train and test our neural networks….”
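The output side of the pipeline described above — taking extracted manuscript data and emitting JATS-flavoured XML — can be sketched with the Python standard library alone. This is a simplified illustration under stated assumptions: a real pipeline would first extract the metadata from the PDF with a dedicated tool (e.g. GROBID or pdfminer), and the element names below follow JATS conventions but do not produce a schema-valid JATS document.

```python
# Sketch of the final stage of a PDF-to-XML pipeline: wrapping
# already-extracted manuscript metadata in a minimal JATS-like tree.
# Element names follow JATS conventions (article-meta, contrib-group,
# article-title) but this is an illustration, not valid JATS.
import xml.etree.ElementTree as ET

def to_jats(title, authors, abstract):
    """authors: list of (surname, given_names) pairs."""
    article = ET.Element("article")
    front = ET.SubElement(article, "front")
    meta = ET.SubElement(front, "article-meta")
    title_group = ET.SubElement(meta, "title-group")
    ET.SubElement(title_group, "article-title").text = title
    contrib_group = ET.SubElement(meta, "contrib-group")
    for surname, given in authors:
        contrib = ET.SubElement(contrib_group, "contrib")
        name = ET.SubElement(contrib, "name")
        ET.SubElement(name, "surname").text = surname
        ET.SubElement(name, "given-names").text = given
    abstract_el = ET.SubElement(meta, "abstract")
    ET.SubElement(abstract_el, "p").text = abstract
    return ET.tostring(article, encoding="unicode")

doc = to_jats("An Example Manuscript", [("Doe", "Jane")], "A short abstract.")
print(doc)
```

Once the content is in structured XML like this, the downstream uses the post describes — mobile-friendly layouts, side-by-side views, text mining, API access — become straightforward transformations rather than PDF scraping.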
The home page of the Open Access Tracking Project. “OATP uses social tagging to capture new developments on open access to research. The OATP mission is (1) to provide a real-time alert service for OA-related news and comment, and (2) to organize knowledge of the field by tag or subtopic. The project publishes a comprehensive primary feed of new OA developments, and hundreds of smaller secondary feeds on OA subtopics, one for each project tag.”