Abstract: In 2019, the Governing Council of the Society for Research in Child Development (SRCD) adopted a Policy on Scientific Integrity, Transparency, and Openness (SRCD, 2019a) and accompanying Author Guidelines on Scientific Integrity and Openness in Child Development (SRCD, 2019b). In this issue, a companion article (Gennetian, Tamis-LeMonda, & Frank) discusses the opportunities to realize SRCD’s vision for a science of child development that is open, transparent, robust, and impactful. In this article, we discuss some of the challenges associated with realizing SRCD’s vision. In identifying these challenges—protecting participants and researchers from harm, respecting diversity, and balancing the benefits of change with the costs—we also offer constructive solutions.
“The Editors are pleased to announce that Health Psychology has adopted the Transparency and Openness Promotion (TOP) Guidelines (Center for Open Science, 2021). We and the other core American Psychological Association (APA) journals are implementing these guidelines at the direction of the APA Publications and Communications Board. Their decision was made with the support of the APA Council of Editors and the APA Open Science and Methodology Committee.
The TOP Guidelines were originally published in Science (Nosek et al., 2015) to encourage journals to incentivize open research practices. They are being implemented by a wide range of scientific publications, including some of the leading behavioral and medical research journals….”
“As the science reform movement has gathered momentum to change research culture and behavior relating to openness, rigor, and reproducibility, so has the critical analysis of the reform efforts. This symposium includes five perspectives examining distinct aspects of the reform movement to illuminate and challenge underlying assumptions about the value and impact of changing practices, to identify potential unintended or counterproductive consequences, and to provide a meta perspective of metascience and open science. It’s meta, all the way up.
Each presenter will provide a 15-minute perspective followed by a concluding discussion among the panelists and a time to address audience questions. Visit cos.io/meta-meta to view session abstracts and speaker info.”
“PsychOpen CAMA enables accessing meta-analytic datasets, reproducing meta-analyses and dynamically updating evidence from new primary studies collaboratively….
A CAMA (Community Augmented Meta Analysis) is an open repository for meta-analytic data that provides meta-analytic analysis tools….
PsychOpen CAMA enables easy access and automated reproducibility of meta-analyses in psychology and related fields. This has several benefits for the research community:
Evidence can be kept updated by adding new studies published after the meta-analysis.
Researchers with special research questions can use subsets of the data or rerun meta-analyses using different moderators.
Flexible analyses with the datasets enable the application of new statistical procedures or different graphical displays.
The cumulated evidence in the CAMA can be used to get a quick overview of existing research gaps. This may suggest which study designs or moderators would be especially interesting for future studies, so that limited research resources are used in a way that strengthens the evidence.
Given existing meta-analytic evidence, the necessary sample size of future studies to detect an effect of a reasonable size can be estimated. Moreover, the effect of possible future studies on the results of the existing meta-analytic evidence can be simulated.
PsychOpen CAMA offers tutorials to better understand the reasoning behind meta-analyses and to learn the basic steps of conducting a meta-analysis to empower other researchers to contribute to our project for the benefit of the research community….”
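The workflow the excerpt describes—pooling effect sizes, re-pooling when a new primary study is added, and estimating the sample size a future study would need—can be sketched in a few lines. This is a minimal illustration, not PsychOpen CAMA’s actual code; it uses the DerSimonian–Laird random-effects estimator and a standard normal-approximation power formula, and all study data below are invented for illustration.

```python
import math

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooled effect and its variance."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q and the between-study variance tau^2
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # Re-weight each study by 1 / (within-study variance + tau^2)
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    return pooled, 1.0 / sum(w_star)

# Hypothetical meta-analytic dataset: standardized mean differences (d)
# and their sampling variances.
effects = [0.30, 0.45, 0.12, 0.50]
variances = [0.02, 0.03, 0.015, 0.04]
pooled, pooled_var = random_effects_pool(effects, variances)

# "Evidence can be kept updated": simply re-pool after a new primary study.
effects.append(0.25)
variances.append(0.025)
updated, _ = random_effects_pool(effects, variances)

# "Necessary sample size of future studies": per-group n for a two-sample
# test to detect the pooled effect with 80% power at two-sided alpha = .05,
# via the normal approximation n = 2 * (z_alpha + z_beta)^2 / d^2.
z_alpha, z_beta = 1.96, 0.84
n_per_group = math.ceil(2 * (z_alpha + z_beta) ** 2 / pooled ** 2)
print(f"pooled d = {pooled:.3f}, updated d = {updated:.3f}, "
      f"n per group = {n_per_group}")
```

Simulating the impact of a possible future study, as the excerpt also mentions, amounts to the same move as the update step: append a hypothetical effect and variance, re-pool, and compare the result to the current estimate.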
“Recent years have been times of turmoil for psychological science. Depending on whom you ask, the field underwent a “replication crisis” (Shrout and Rodgers 2018) or a “credibility revolution” (Vazire 2018) that might even climax in “psychology’s renaissance” (Nelson, Simmons, and Simonsohn 2018). This article asks what social scientists can learn from this story. Our take-home message is that although differences in research practices make it difficult to prescribe cures across disciplines, much still can be learned from interdisciplinary exchange. We provide nine lessons but first summarize psychology’s experience and what sets it apart from neighboring disciplines….”
“Changes are afoot in the way the scientific community is approaching the practice and reporting of research. Spurred by concerns about the fundamental reliability (i.e., replicability), or rather lack thereof, of contemporary psychological science (e.g., Open Science Collaboration, 2015), as well as how we go about our business (e.g., Gelman & Loken, 2014), several recommendations have been furthered for increasing the rigor of the published research through openness and transparency. The Journal has long prized and published the type of research with features, like large sample sizes (Fraley & Vazire, 2014), that has fared well by replicability standards (Soto, 2019). The type of work traditionally published here, often relying on longitudinal samples, large public datasets (e.g., Midlife in the United States Study), or complex data collection designs (e.g., ambulatory assessment and behavioral coding) did not seem to fit neatly into the template of the emerging transparency practices. However, as thinking in the open science movement has progressed and matured, we have decided to full-throatedly endorse these practices and join the growing chorus of voices that are encouraging and rewarding more transparent work in psychological science. We believe this can be achieved while maintaining the “big tent” spirit of personality research at the Journal with a broad scope in content, methods, and analytical tools that has made it so special and successful all of these years. Moving forward, we will be rigorously implementing a number of procedures for openness and transparency consistent with the Transparency and Openness Promotion (TOP) Guidelines.
The TOP Guidelines are organized into eight standards, each of which can be implemented at three levels of stringency (Nosek et al., 2015). In what follows, we outline the initial TOP Standards Levels adopted by the Journal and the associated rationale. Generally, we have adopted Level 2 standards, as we believe these strike a desirable balance between compelling a high degree of openness and transparency while not being overly onerous and a deterrent for authors interested in the Journal as an outlet for their work….”
“Our BABCP journals have for some time been supportive of open science in its various forms. We are now taking the next steps towards this in terms of our policies and practices. For some things we are transitioning to the changes (but would encourage our contributors to embrace these as early as possible), and in others we are implementing things straight away. This is part of the global shift to open practices in science, and has many benefits and few, if any, drawbacks. See for example http://www.unesco.org/new/en/communication-and-information/portals-and-platforms/goap/open-science-movement/
One of the main drivers for open science has been the recent ‘reproducibility crisis’, which crystallised long-standing concerns about a range of biases within and across research publication. Open science and research transparency will provide the means to reduce the impact of such biases, and can reasonably be considered to be a paradigm change. There are benefits beyond dealing with problems, however.
McKiernan et al. (2016) for example suggest that ‘open research is associated with increases in citations, media attention, potential collaborators, job opportunities and funding opportunities’. This is, of course, from a researcher-focused perspective. The BABCP and the Journal Editors take the view that open and transparent research practices will have the greatest long-term impact on service users both directly and indirectly through more accurate reporting and interpretation of research and its applications by CBT practitioners. So what are the practical changes we are implementing in partnership with our publisher, Cambridge University Press?…”
Abstract: Replication, an important, uncommon, and misunderstood practice, is making a comeback in psychology. Achieving replicability is a necessary but not sufficient condition for making research progress. If findings are not replicable, then prediction and theory development are stifled. If findings are replicable, then interrogation of their meaning and validity can advance knowledge. Assessing replicability can be productive for generating and testing hypotheses by actively confronting current understanding to identify weaknesses and spur innovation. For psychology, the 2010s might be characterized as a decade of active confrontation. Systematic and multi-site replication projects assessed current understanding and observed surprising failures to replicate many published findings. Replication efforts also highlighted sociocultural challenges, such as disincentives to conduct replications, framing of replication as personal attack rather than healthy scientific practice, and headwinds for replication contributing to self-correction. Nevertheless, innovation in doing and understanding replication, and its cousins, reproducibility and robustness, have positioned psychology to improve research practices and accelerate progress.
“Given that preprints are here to stay, the field should be devoting resources to getting them certified more quickly as having received some amount of expert scrutiny. This is particularly important, of course, for preprints making claims relevant to the response to the pandemic.
In many cases, one component of this certification is already happening very quickly. More publicly-available peer review is happening today than ever before – just not at our journals. While academic journals typically call on half a handful of hand-picked, often reluctant referees, social media is not as limiting, and lively expert discussions are flourishing at forums like Twitter, Pubpeer, and the commenting facility of preprint servers.
So far, most journals have simply ignored this. As a result, science is now happening on two independent tracks, one slow, and one fast. The fast track is chaotic and unruly, while the slow track is bureaucratic and secretive – at most journals the experts’ comments never become available to readers, and the resulting evaluation by the editor of the strengths and weaknesses of the manuscript is never communicated to readers….
Will we need to reinvent the scientific journal wheel, or will legacy journals catch up with the modern world, by both taking advantage of and adding value to the peer review that is happening on the fast track?”