EUREKA by ScienceMatters

“EUREKA is a scientific review and rating platform fuelled by the EUREKA token. Blockchain has the capacity to open science and make research findings immutable, transparent and decentralised. EUREKA revolutionises the scientific publishing and reviewing process by making it more efficient and fair using the EUREKA token to compensate all parties involved. Scientific discoveries can now be openly rated and rewarded based on the quality of the research….

Scientific observations are timestamped, hashed and recorded on the Ethereum blockchain. This gives the author or inventor immediate ownership rights, and ensures scientists’ and researchers’ discoveries are tamper-proof….
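
The anchoring step EUREKA describes (hash the manuscript, record the digest on Ethereum) can be sketched in a few lines of Python. The RPC endpoint, the unlocked account, and the zero-value self-send pattern below are illustrative assumptions, not EUREKA's actual contract; hashlib and web3.py's send_transaction are real APIs.

    import hashlib
    from web3 import Web3

    def sha256_fingerprint(path):
        # Only the digest goes on-chain; the manuscript itself stays private.
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    # Hypothetical endpoint and account; EUREKA's real contract is not shown here.
    w3 = Web3(Web3.HTTPProvider("https://mainnet.infura.io/v3/<project-id>"))
    account = w3.eth.accounts[0]

    tx_hash = w3.eth.send_transaction({
        "from": account,
        "to": account,  # zero-value self-send; the digest rides in the data field
        "value": 0,
        "data": "0x" + sha256_fingerprint("manuscript.pdf"),
    })
    # The mined block's timestamp is the tamper-proof "when"; the digest is the "what".
    print("anchored:", tx_hash.hex())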

Scientific observations undergo crowdsourced, peer-to-peer reviews which are transmitted and recorded on the EUREKA platform. The EUREKA platform will make use of crowdsourced wisdom and reviewers to get fast, accurate evaluations of the work, instead of being restricted to one or two reviewers, as is common practice….

The EUREKA platform’s crowdsourced scoring of scientific work will provide researchers as well as publishers with a new metric that can be used to evaluate submissions more swiftly. Preprints or observations with ratings and reviews will be archived through the EUREKA decentralised and distributed system. In cases where the author wants to publish in a traditional journal, the scores can be transferred to the journals. The test scores are also available to funders, universities and prize or awards committees….”
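
How such a crowdsourced score might be aggregated is easy to sketch. The reputation-weighted mean below is one plausible scheme; the weighting and the confidence term are assumptions, not EUREKA's published algorithm.

    from math import sqrt

    def aggregate_score(reviews):
        """reviews: list of (rating 0-10, reviewer_reputation > 0) tuples."""
        total_weight = sum(rep for _, rep in reviews)
        mean = sum(rating * rep for rating, rep in reviews) / total_weight
        # Confidence grows with the number of independent reviewers --
        # the advantage over the traditional one-or-two-referee model.
        confidence = 1.0 - 1.0 / sqrt(len(reviews) + 1)
        return round(mean, 2), round(confidence, 2)

    print(aggregate_score([(8, 1.0), (7, 2.5), (9, 0.8)]))  # -> (7.6, 0.5)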

Home – Blockchain for Science

“To bring science towards reproducible results, autonomous and free data handling and incentivisation of true innovation.

To guide the social, technical, cultural, political, economic and legal impacts of the blockchain (r)evolution to science.

To support scientific communication and education.

To free science from any kind of censorship, central point of failure or other potential dead ends.

To allow technical innovation without centralisation.

To provide insights to current stakeholders, science workers, research funders and institutions, and to constructively involve them in the transition process so as to prevent unnecessary friction costs among stakeholders (agnostic of the deep societal entanglement of science and research).

To provide alternative systems that may amend instead of forcefully replace existing systems.

To evolve science’s artificial competitions and markets so as to incentivise outliers and not the mean.

To provide novel concepts with less overhead (and less creation of empty ‘conduct’).

To make them unpredictable for system gamers.

To evolve business models, analyse their impact on the scientific culture and provide roadmaps to autonomous and continuous scientific endeavors and their applicability.

To replace our marketing-heavy economy with an innovation-heavy economy. To solve the innovator’s dilemma.

To make the most efficient and sustainable use of our unique resources on earth, to accommodate an increasing number of humans and finally to make humanity multi-planetary, to prevent extinction of our species and to extend the ultimate frontier to become independent of our earth.

All this with a minimal and potentially self-sustaining structure.”

Project AIUR by Iris.ai: Democratize Science through blockchain-enabled disintermediation

“There are a number of problems in the world of science today hampering global progress. In an almost monopolized industry with terrible incentive misalignments, a radical change is needed. The only way to change this is with a grassroots movement – of researchers and scientists, librarians, scientific societies, R&D departments, universities, students, and innovators – coming together. We need to remove the powerful intermediaries, create new incentive structures, build commonly owned tools to validate all research and build a common Validated Repository of human knowledge. A combination of blockchain and artificial intelligence provides the technology framework, but as with all research, the scientist herself needs to be at the center. That is what we are proposing with Project Aiur, and we hope you will join us….

The outlined core software tool of the community will be the Knowledge Validation Engine (KVE). It will be a fully-fledged technical platform able to pinpoint:

- the building blocks of a scientific text;
- what the reader needs to know to be able to understand the text;
- what the text’s factual sources are; and
- what the reproducibility level of the different building blocks is.

The platform will take a scientific document in the form of a scientific paper or technical report as an input, and it will provide an analytical report presenting:

- the knowledge architecture of the document;
- the hypotheses tree supporting the presented document’s hypothesis;
- the support level found for each of the hypotheses on the hypotheses tree; and
- their respective reproducibility.

All of this will be based on the knowledge database of scientific documents accessible to the system at any given point in time (knowledge in an Open Access environment). …”
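
To make the shape of that analytical report concrete, here is one way its output could be represented in Python. Every class and field name below is a hypothetical stand-in for the four bullet points above, not Iris.ai's actual schema.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Hypothesis:
        statement: str
        support_level: float      # share of corroborating sources found in the corpus
        reproducibility: float    # estimated from linked replication reports
        children: List["Hypothesis"] = field(default_factory=list)  # sub-hypotheses

    @dataclass
    class AnalyticalReport:
        document_id: str
        building_blocks: List[str]   # the document's knowledge architecture
        factual_sources: List[str]   # citations resolved against the open-access corpus
        hypothesis_tree: Hypothesis  # root hypothesis with its supporting sub-tree

    report = AnalyticalReport(
        document_id="doi:10.xxxx/example",
        building_blocks=["background", "method", "result"],
        factual_sources=["doi:10.yyyy/source"],
        hypothesis_tree=Hypothesis("main claim", 0.7, 0.4,
                                   [Hypothesis("sub-claim", 0.9, 0.8)]),
    )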

Open Science Comes To Policy Analysis – CEGA – Medium

“This post is co-authored by Fernando Hoces de la Guardia, BITSS postdoctoral scholar, along with Sean Grant (Associate Behavioral and Social Scientist at RAND) and CEGA Faculty Director Ted Miguel. It is cross-posted with the BITSS Blog.

The Royal Society’s motto, “Take nobody’s word for it,” reflects a key principle of scientific inquiry: as researchers, we aspire to discuss ideas in the open, to examine our analyses critically, to learn from our mistakes, and to constantly improve. This type of thinking shouldn’t guide only the creation of rigorous evidence; rather, it should extend to the work of policy analysts whose findings may affect very large numbers of people. At the end of the day, a commitment to scientific rigor in public policy analysis is the only durable response to potential attacks on credibility. We, the three authors of this blog (Fernando Hoces de la Guardia, Sean Grant, and Ted Miguel), recently published a working paper suggesting a parallel between the reproducibility crisis in social science and observed threats to the credibility of public policy analysis. Researchers and policy analysts both perform empirical analyses; have a large amount of undisclosed flexibility when collecting, analyzing, and reporting data; and may face strong incentives to obtain “desired” results (for example, p-values of <0.05 in research, or large negative/positive effects in policy analysis)….”
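
The “undisclosed flexibility” point is easy to demonstrate quantitatively. The toy simulation below (an illustration, not taken from the working paper) shows that when an analyst with no true effect gets to report the best-looking of five outcome measures, the nominal 5% false-positive rate roughly quadruples.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    N_SIMS, N_OUTCOMES, N = 10_000, 5, 30

    false_positives = 0
    for _ in range(N_SIMS):
        # Two groups with NO true difference, measured on five outcomes.
        a = rng.normal(size=(N_OUTCOMES, N))
        b = rng.normal(size=(N_OUTCOMES, N))
        # The "flexible" analyst reports whichever outcome looks best.
        best_p = min(stats.ttest_ind(a[i], b[i]).pvalue for i in range(N_OUTCOMES))
        if best_p < 0.05:
            false_positives += 1

    print(false_positives / N_SIMS)  # ~0.23, not the nominal 0.05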

What Happens When Science Just Disappears? | WIRED

“KAY DICKERSIN KNEW she was leaping to the front lines of scholarly publication when she joined The Online Journal of Current Clinical Trials. Scientific print-publishing was—and still is—slow and cumbersome, and reading its results sometimes required researchers to go to the library. But as associate editor at this electronic peer-reviewed journal—one of the very first, launched in the summer of 1992—Dickersin was poised to help bring scientists into the new digital age. Dickersin, an epidemiologist, acted as an associate editor, helping researchers publish their work.

But the OJCCT was a bit ahead of its time. The journal was sold in 1994 to a publisher that eventually became part of Taylor & Francis, and which stopped the e-presses just a couple of years later. And after that happened, its papers—reports, reviews, and meta-analyses of clinical trials—all disappeared. Dickersin wasn’t just sad to lose her editing gig: She was dismayed that the scientific community was losing those archives. “One of my important studies was in there,” she says, “and no one could get it.”

Couldn’t, that is, until Dickersin decided to go spelunking for science. For more than a decade, Dickersin’s paper was missing along with about 80 others. Sometimes, the ex-editors would try to find out who had the rights to the articles, whether they could just take copies and put them on their own website. “We don’t want to do that,” they’d always conclude. “We don’t want to get in trouble.”

Finally, Dickersin went to the librarians at Johns Hopkins University, where she is a professor, for help—and that’s how she found Portico. Portico is like a Wayback Machine for scholarly publications. The digital preservation service ingests, meta-tags, preserves, manages, and updates content for publishers and libraries, and then provides access to those archives. The company soon signed on to the project and got permission from Taylor & Francis to make the future archives open-access….”

Massive Open Online Course (MOOC) on Data Management for Biomedical Big Data

“The Countway Library of Medicine at the Harvard Medical School has received funding from the NIH Big Data to Knowledge (BD2K) Initiative for Research Education to develop a Massive Open Online Course (MOOC) on Data Management for Biomedical Big Data. This MOOC is built upon and expands the training materials of the New England Collaborative Data Management Curriculum (NECDMC), developed by several libraries in the New England region.

The Best Practices for Biomedical Research Data Management course is hosted by the Canvas Network and provides training to librarians, biomedical researchers, undergraduate and graduate biomedical students, and other individuals interested in best practices for the discoverability, access, integrity, reuse value, privacy, security, and long-term preservation of biomedical research data. The course is free and self-paced….”

Best Practices for Biomedical Research Data Management – Canvas Network

“Biomedical research today is not only rigorous, innovative and insightful; it also has to be organized and reproducible. With more capacity to create and store data comes the challenge of making data discoverable, understandable, and reusable. Many funding agencies and journal publishers are requiring publication of relevant data to promote open science and reproducibility of research.

In order to meet these requirements and evolving trends, researchers and information professionals will need the data management and curation knowledge and skills to support the access, reuse and preservation of data.

This course is designed to address present and future data management needs….”

Estimating the reproducibility of psychological science | Science

The co-authors of this work call themselves the Open Science Collaboration.

“No single indicator sufficiently describes replication success, and the five indicators examined here are not the only ways to evaluate reproducibility. Nonetheless, collectively these results offer a clear conclusion: A large portion of replications produced weaker evidence for the original findings despite using materials provided by the original authors, review in advance for methodological fidelity, and high statistical power to detect the original effect sizes. Moreover, correlational evidence is consistent with the conclusion that variation in the strength of initial evidence (such as original P value) was more predictive of replication success than variation in the characteristics of the teams conducting the research (such as experience and expertise). The latter factors certainly can influence replication success, but they did not appear to do so here.
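
That correlational claim is simple to probe once the project's data are in hand. The sketch below uses invented placeholder numbers, not the Open Science Collaboration's actual dataset, purely to show the shape of the check.

    from scipy import stats

    # Placeholder values only; the real data are in the OSC's public repository.
    original_p = [0.001, 0.004, 0.02, 0.04, 0.049]
    replicated = [1, 1, 1, 0, 0]   # 1 = replication found a significant effect

    rho, p = stats.spearmanr(original_p, replicated)
    print(rho)  # negative rho: smaller original p-values predict replication success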

Reproducibility is not well understood because the incentives for individual scientists prioritize novelty over replication. Innovation is the engine of discovery and is vital for a productive, effective scientific enterprise. However, innovative ideas become old news fast. Journal reviewers and editors may dismiss a new test of a published idea as unoriginal. The claim that “we already know this” belies the uncertainty of scientific evidence. Innovation points out paths that are possible; replication points out paths that are likely; progress relies on both. Replication can increase certainty when findings are reproduced and promote innovation when they are not. This project provides accumulating evidence for many findings in psychological research and suggests that there is still more work to do to verify whether we know what we think we know….”