“Journals in the developing world face challenges in becoming known and respected in the international research landscape. Siân Harris describes Journal Publishing Practices and Standards, established and managed by African Journals Online and INASP.”
“A “PA” (Protected Access) notation may be added to open data badges if sensitive, personal data are available only from an approved third-party repository that manages access to data for qualified researchers through a documented process. To be eligible for an open data badge with such a notation, the repository must publicly describe the steps necessary to obtain the data, and detailed data documentation (e.g. variable names and allowed values) must be made available publicly. This notation is not available to researchers who state that they will make “data available upon request” and is not available if requests for data sharing are evaluated on any criteria beyond considerations for compliance with proper handling of sensitive data. For example, this notation is not available if limitations are placed on the permitted use of the data, such as for data that are only made available for the purposes of replicating previously published results or for which there is substantive review of analytical results. Review of results to avoid disclosure of confidential information is permissible….”
“This post is co-authored by Fernando Hoces de la Guardia, BITSS postdoctoral scholar, along with Sean Grant (Associate Behavioral and Social Scientist at RAND) and CEGA Faculty Director Ted Miguel. It is cross-posted with the BITSS Blog.
The Royal Society’s motto, “Take nobody’s word for it,” reflects a key principle of scientific inquiry: as researchers, we aspire to discuss ideas in the open, to examine our analyses critically, to learn from our mistakes, and to constantly improve. This type of thinking shouldn’t guide only the creation of rigorous evidence – rather, it should extend to the work of policy analysts whose findings may affect very large numbers of people. At the end of the day, a commitment to scientific rigor in public policy analysis is the only durable response to potential attacks on credibility. We, the three authors of this blog – Fernando Hoces de la Guardia, Sean Grant, and Ted Miguel – recently published a working paper suggesting a parallel between the reproducibility crisis in social science and observed threats to the credibility of public policy analysis. Researchers and policy analysts both perform empirical analyses; have a large amount of undisclosed flexibility when collecting, analyzing, and reporting data; and may face strong incentives to obtain “desired” results (for example, p-values of <0.05 in research, or large negative/positive effects in policy analysis)….”
“I describe here a new project – called Appraise – that is both a model and experimental platform for what peer review can and should look like in a world without journals….
The rise of preprints gives us the perfect opportunity to create a new system that takes full advantage of the Internet to more rapidly, effectively and fairly engage the scientific community in assessing the validity, audience and impact of published works….
APPRAISE (A Post-Publication Review and Assessment In Science Experiment)…
It is perhaps easiest to think of Appraise as an editorial board without a journal (and we hope to be a model for how existing editorial boards can transition away from journals). Like journal editorial boards, they will curate the scientific literature through the critical process of peer review. However, members of Appraise will not be reviewing papers submitted to a journal and deciding whether they should be published. Rather, Appraise reviewers are working in service of members of the scientific community, selecting papers they think warrant scrutiny and attention, and reviewing them to help others find, understand and assess published papers….
In the spirit of openness we encourage Appraise members to identify themselves, but recognize that the ability to speak freely sometimes requires anonymity. Appraise will allow members to post reviews anonymously provided that there are no conflicts of interest and the reviewer does not use anonymity as a shield for inappropriate behavior. Whether reviewers are publicly identified or not, Appraise will never tolerate personal attacks of any kind.
We are launching Appraise with a small group of scientists. This is for purely practical purposes – to develop our systems and practices without the challenges of managing a large, open community. But the goal is to as quickly as possible open the platform up to everyone.”
“Brian Nosek, PhD, co-founder and director of the Center for Open Science, welcomed the new standards. “Achieving the ideals of transparency in science requires knowing what one needs to be transparent about,” he said. “These updated standards will improve readers’ understanding of what happened in the research. This will improve both the accuracy of interpretation of the existing evidence, and the ability to replicate and extend the findings to improve understanding.” APA has partnered with the Center for Open Science to advance open science practices in psychological research through open science badges on articles, a data repository for APA published articles and designating the COS’ PsyArXiv as the preferred preprint server for APA titles….”
“How do you encourage researchers to share the data underlying their publications? The journal Psychological Science introduced a digital badge system in 2014 to signify when authors make the data and related materials accompanying their articles openly available. Criteria to earn the Open Data badge include (1) sharing data via a publicly accessible repository with a persistent identifier, such as a DOI, (2) assigning an open license, such as CC-BY or CC0, allowing reuse and credit to the data producer, and (3) providing enough documentation that another researcher could reproduce the reported results (Badges to Acknowledge Open Practices project on the Open Science Framework)….”
“There is no central authority determining the validity of scientific claims. Accumulation of scientific knowledge proceeds via open communication with the community. Sharing evidence for scientific claims facilitates critique, extension, and application. Despite the importance of open communication for scientific progress, present norms do not provide strong incentives for individual researchers to share data, materials, or their research process. Journals can provide such incentives by acknowledging open practices with badges in publications….”
“What are Open Science Badges?
Badges to acknowledge open science practices are incentives for researchers to share data or materials, or to preregister.
Badges signal to the reader that the content has been made available and certify its accessibility in a persistent location….
Badges seem silly. Do they work?
Yes. Implementing these badges dramatically increases the rate of data sharing (Kidwell et al., 2016).
A recent systematic review identified this badging program as the only evidence-based incentive program that is effective at increasing rates of data sharing (Rowhani-Farid et al., 2017).
View a list of journals and organizations that have adopted badges here….”
“CREDIT is a cloud-enabled SaaS tool for data management that gives authors the opportunity to register their Additional Research Outputs (AROs) – the RAW, REPEAT, and NULL/NEGATIVE entities generated at various stages of the research workflow – to ensure their reusability and to gain credit, thereby enriching research articles and supporting reproducible science. The CREDIT framework and interface are developed on FAIR data principles….These badges appear dynamically, which creates the possibility that metrics generated as readers engage with the data would be fed back to the main published article in real time (accessible via the badge – enhancing discoverability and also giving credit to authors). And in the near future we also have plans to roll out badges that can be embedded in PDF articles….”
“Open Humans is a program of the nonprofit Open Humans Foundation and has been funded by the Robert Wood Johnson Foundation and the Knight Foundation. Our 2015 launch was written up in Forbes, Newsweek, Scientific American, and more.
You decide when to share. You have valuable data, and you’ll decide when to share it. The data you provide will be private by default. You can choose which projects to share with. You can also opt to make some (or all) of your data public, so anyone can access and research it!
Studies, projects, and more. Browse our activities list to see the many potential data sources you can add, and interesting projects you can join.
Be a part of research. We’ll recognize your contributions with badges on your profile page, invite you to talk to other community members in our online forums, and periodically post new activities, study updates, and relevant interviews in our newsletters and on our blog….”