Only this introductory sentence is OA: “Announcement of hefty article processing charges by prestige titles challenges goals of open access mandate.”
“From its inception, the open access movement has held that publishing costs should be controlled by research institutions and funded by redirecting resources freed by canceling journal subscriptions. In reality, things have proved more complex. Although “transformative agreements” that cover both publishing and reading have rapidly increased the percentage of articles published in open access at some institutions, the details of these agreements are generally kept secret, so their scope is difficult to compare.
Nevertheless, it is clear that making most articles open access for a fee, if the charges do not realistically reflect actual costs, will cause university library budgets to explode (Harvard estimates this increase at 71%) and create large disparities in researchers’ ability to publish. Indeed, this could create a vicious circle whereby well-funded researchers publish more, gain more visibility and recognition, and, as a result, secure more funding.
If Plan S does not explicitly monitor and enforce, within the terms of its open-publication requirement, a hard ceiling on publication costs, these perverse effects of exploding budgets will be inevitable. This is where the challenge of communicating public research now lies….”
From Google’s English: “Open access is enjoying increasing success. Since 2017, the majority of new articles in all academic disciplines, especially in science, have been published in open access. In 2020, at the request of UNESCO, most publishers removed toll barriers from articles on the COVID-19 pandemic in order to quickly understand the characteristics of the SARS-CoV-2 virus and accelerate the development of vaccines and treatments. In this regard, the COVID-19 pandemic will have made many understand the usefulness, even the absolute necessity, of instant and open communication in the face of a large-scale collective challenge….
Another danger is that, by its binding nature (which is also its strength and its chance of succeeding), Plan S offers traditional publishers a tempting opportunity to demand excessively high publication fees (APCs, article processing charges), not to cover costs, but to compensate for the shortfall from canceled subscriptions….
Coalition S seeks to exert downward pressure on publishing prices by demanding transparency. When a grant recipient’s research is published, Plan S requires publishers to disclose their rates to funders, including the cost of services such as screening, organizing peer review, copy-editing, and proofreading. The coalition is committed to sharing this information openly with authors and institutions, in the hope of ensuring some level of price control….
Some authors are also hesitant because of the requirement that they publish in prestigious, high-impact journals to obtain tenure, promotion, or the means to carry out their work. In addition, they may fall victim to the misconception that journals offering only open access articles lack rigor.
In addition, paying to publish in journals which benefit from the prestige of their publishing house creates a flagrant inequality between researchers according to the financial means at their disposal….”
“In this extensive report, published with support from the Ford Foundation in 2016, writer and investor Nadia Eghbal explores the lack of institutional support for public code. She unpacks how the system currently functions and the unique challenges it faces, and provides recommendations for how to address the problem.
As Eghbal outlines, digital infrastructure should be treated as a necessary public good. Free public source code makes it exponentially cheaper and easier for companies to build software, and makes technology more accessible across the globe. However, there is a common misconception that the labor for open source projects is well-funded. In reality, it is largely created and maintained by volunteers who do it to build their reputations, out of a sense of obligation, or simply as a labor of love. The decentralized, non-hierarchical nature of the public coding community makes it difficult to secure pay for coders, yet the work that emerges from it is the foundation for a digital capitalist economy. Increasingly, developers are using shared code without contributing to its maintenance, leaving this infrastructure strained and vulnerable to security breaches….
Eghbal emphasizes that because open source thrives on human rather than financial resources, money alone won’t fix the problem. A nuanced understanding of open source culture, and an approach of stewardship rather than control over digital infrastructure are required. She recommends that efforts to fund and support digital infrastructure embrace decentralization, work with existing software communities, and provide long-term, proactive and holistic support. Increasing awareness of the challenges of sustaining digital infrastructure, making it easier for institutions to contribute time and money, expanding and diversifying the pool of open source contributors, and developing best practices and policies across infrastructure projects will all go a long way in building a healthy and sustainable ecosystem.”
Abstract: In this paper I will outline a worry that citizen science can promote a kind of transparency that is harmful. I argue for the value of secrecy in citizen science. My argument will consist of analysis of a particular community (herpers), a particular citizen science platform (iNaturalist, drawing contrasts with other platforms), and my own travels in citizen science. I aim to avoid a simple distinction between science versus non-science, and instead analyze herping as a rich practice [MacIntyre, 2007]. Herping exemplifies citizen science as functioning simultaneously within and outside the sphere of science. I show that herpers have developed communal systems of transmitting and protecting knowledge. Ethical concerns about secrecy are inherently linked to these systems of knowledge. My over-arching aim is to urge caution in the drive to transparency, as the concepts of transparency and secrecy merit close scrutiny. The concerns I raise are complementary to those suggested by previous philosophical work, and (I argue) resist straightforward solutions.
“There has been significant concern expressed in the repository community about the requirements contained in Data Repository Selection: Criteria that Matter, which sets out a number of criteria for the identification and selection of data repositories that publishers will use to guide authors on where to deposit their data.
COAR agrees that it is important to encourage and support the adoption of best practices in repositories. And there are a number of initiatives looking at requirements for repositories, based on different objectives such as the FAIR Principles, CoreTrustSeal, the TRUST Principles, and the CARE Principles of Indigenous Data Governance. Recently COAR brought together many of these requirements – assessed and validated them with a range of repository types and across regions – resulting in the publication of the COAR Community Framework for Best Practices in Repositories.
However, there is a risk that if repository requirements are set very high or applied strictly, then only a few well-resourced repositories will be able to fully comply. The criteria set out in Data Repository Selection: Criteria that Matter are not currently supported by most domain or generalist data repositories, in particular the dataset-level requirements. If implemented by publishers, this will have a very detrimental effect on the open science ecosystem by concentrating repository services within a few organizations, further exacerbating inequalities in access to services. Additionally, it will introduce bias against some researchers, for example, researchers who prefer to share their data locally; researchers in the global south; or researchers who want to share their data in a relevant domain repository, so it can be visible to their peers and integrated with other similar datasets….”
Abstract: In this paper I argue that open educational resources (OER), such as open textbooks, are an appropriate and worthwhile response to consider as colleges and universities shift to digital modes of teaching and learning. However, without scrutiny, such efforts may reflect or reinforce structural inequities. Thus, OER can be a mixed blessing, expanding inclusion and equity in some areas, but furthering inequities in others.
As part of the “shifting to digital” special issue, this paper is in response to Hilton (2016). I argue that open educational resources (OER), such as open textbooks, can expand equity and inclusion, but without scrutiny, they may reflect or reinforce, and thus expand, structural inequities.
OER are defined as “teaching, learning, and research resources that reside in the public domain or have been released under an intellectual property license that permits their free use and repurposing” (Hewlett 2017). Hilton (2016) synthesized the existing literature to examine outcomes associated with instances in which OER replaced commercial textbooks. He reported two major findings. First, students generally performed better when using OER compared to commercial textbooks; the use of OER was not associated with decreases in learning. Second, OER were generally perceived by faculty and students to be as good as, if not better than, traditional textbooks. While this research faces some limitations acknowledged by the author, much research since then has continued to affirm the original findings (e.g., Clinton and Khan 2019; Hilton 2020).
“During the COVID-19 pandemic, and perhaps thereafter, investigators may continue to want their findings released and shared as rapidly as possible. But such speed to widespread public dissemination, rather than sharing within a community of specialists most likely to understand the complexities of the science and the concerns for public health, or without rigorous editorial evaluation and peer review before publication, does not come without consequences and potential for harm.29,30 For many investigators, preprints may be considered an initial step along the scientific dissemination and publication pathway, just as abstract, poster, and video research presentations at in-person and virtual scientific meetings have a role in the early sharing and discussion of studies among specialist communities before publication in a journal. While manuscripts previously posted as preprints may be improved following formal submission to a journal and undergoing editorial evaluation, peer review, revision, and editing, others may not be suitable for formal publication because of methodologic flaws, biases, and important limitations. Authors should share preprints during the process of manuscript submission to journals, just as they do with study protocols and registration reports, to aid journal editors in evaluating the quality of the reporting of the study and its prioritization for publication. Preprints and preprint servers are here to stay, but perhaps in the immediate future a more selective use of these sites may be warranted, with clinical investigators exercising caution when the focus of a study is on drugs, vaccines, or medical devices and the results of a study may directly affect treatment of patients.”