Open and affordable textbooks: approaches to OER pedagogy | Emerald Insight

Abstract:  Purpose

This paper provides an overview of the open and affordable textbooks (OAT) program and its outreach strategies, and discusses approaches that faculty awardees have taken to designing their courses. It addresses several issues, including the effectiveness of open educational resources (OER), the process of creating OER, and how faculty and instructors have updated their courses and adjusted their pedagogy.

 

Design/methodology/approach

This paper describes five cases in which faculty adopted open pedagogy: a general chemistry course, a psychiatry clerkship, a microbiology lab, a medical Spanish course and a radiology elective in a medical school.

 

Findings

The use of open pedagogy promotes two things: up-to-date resources and practical experience. Since the creation of the Rutgers OAT program, faculty and instructors have been rethinking how they teach their courses. Students enjoy the content more, and faculty appreciate the increase in engagement. As the program continues to grow, the creativity fostered by open pedagogy improves education for everyone involved.

 

Originality/value

The paper offers a general overview of an effective open and affordable textbook program at a public research university. It demonstrates the effectiveness of the program while also offering examples of novel course materials for interested librarians and faculty. It expands the conversation from merely finding resources to creating them, and shows how doing so improves education.

What’s Wrong with Social Science and How to Fix It: Reflections After Reading 2578 Papers | Fantastic Anachronism

[Some recommendations:]

Ignore citation counts. Given that citations are unrelated to (easily-predictable) replicability, let alone any subtler quality aspects, their use as an evaluative tool should stop immediately.
Open data, enforced by the NSF/NIH. There are problems with privacy but I would be tempted to go as far as possible with this. Open data helps detect fraud. And let’s have everyone share their code, too—anything that makes replication/reproduction easier is a step in the right direction.
Financial incentives for universities and journals to police fraud. It’s not easy to structure this well because on the one hand you want to incentivize them to minimize the frauds published, but on the other hand you want to maximize the frauds being caught. Beware Goodhart’s law!
Why not do away with the journal system altogether? The NSF could run its own centralized, open website; grants would require publication there. Journals are objectively not doing their job as gatekeepers of quality or truth, so what even is a journal? A combination of taxonomy and reputation. The former is better solved by a simple tag system, and the latter is actually misleading. Peer review is unpaid work anyway, it could continue as is. Attach a replication prediction market (with the estimated probability displayed in gargantuan neon-red font right next to the paper title) and you’re golden. Without the crutch of “high ranked journals” maybe we could move to better ways of evaluating scientific output. No more editors refusing to publish replications. You can’t shift the incentives: academics want to publish in “high-impact” journals, and journals want to selectively publish “high-impact” research. So just make it impossible. Plus as a bonus side-effect this would finally sink Elsevier….

Viral Science: Masks, Speed Bumps, and Guard Rails: Patterns

“With the world fixated on COVID-19, the WHO has warned that the pandemic response has also been accompanied by an infodemic: an overabundance of information, ranging from demonstrably false to accurate. Alas, the infodemic phenomenon has extended to articles in scientific journals, including prestigious medical outlets such as The Lancet and NEJM. The rapid reviews and publication speed for COVID-19 papers have surprised many, including practicing physicians, for whom the guidance is intended….

The Allen Institute for AI (AI2) and Semantic Scholar launched the COVID-19 Open Research Dataset (CORD-19), a growing corpus of papers (currently 130,000 abstracts plus full-text papers being used by multiple research groups) that are related to past and present coronaviruses.

Using this data, AI2, working with the University of Washington, released a tool called SciSight, an AI-powered graph visualization tool enabling quick and intuitive exploration of associations between biomedical entities such as proteins, genes, cells, drugs, diseases, and patient characteristics as well as between different research groups working in the field. It helps foster collaborations and discovery as well as reduce redundancy….

The research community and scientific publishers working together need to develop and make accessible open-source software tools to permit the dual-track submission discussed above. Repositories such as Github are a start….”
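The association graph that a tool like SciSight visualizes can be illustrated at toy scale. The sketch below is purely illustrative and assumes nothing about SciSight's actual pipeline: the entity list, the sample abstracts, and the naive substring matching are all stand-ins (a real system would use a trained NER model over the CORD-19 corpus). It shows the simplest version of the idea, counting how often pairs of biomedical entities co-occur in the same abstract to form weighted graph edges.

```python
from collections import Counter
from itertools import combinations

# Illustrative stand-ins: in practice, entities would come from an NER model
# and abstracts from the CORD-19 metadata.
ENTITIES = {"ace2", "spike protein", "remdesivir", "il-6"}

abstracts = [
    "ACE2 binds the spike protein during cell entry.",
    "Remdesivir trials measured IL-6 alongside ACE2 expression.",
    "The spike protein interacts with ACE2 receptors.",
]

def cooccurrence(texts, entities):
    """Count how often each pair of entities appears in the same abstract."""
    pairs = Counter()
    for text in texts:
        lowered = text.lower()
        found = sorted(e for e in entities if e in lowered)
        # Each unordered pair found together contributes one edge weight.
        for a, b in combinations(found, 2):
            pairs[(a, b)] += 1
    return pairs

edges = cooccurrence(abstracts, ENTITIES)
# ("ace2", "spike protein") co-occurs in two abstracts, so that edge has weight 2.
```

At corpus scale, the same counts become the edge weights of the entity graph, with low-weight edges pruned before visualization.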

How reliable and useful is Cabell’s Blacklist? A data-driven analysis

“In scholarly publishing, blacklists aim to register fraudulent or deceptive journals and publishers, also known as “predatory”, to minimise the spread of unreliable research and the growth of fake publishing outlets. However, blacklisting remains a very controversial activity for several reasons: there is no consensus regarding the criteria used to determine fraudulent journals, the criteria used may not always be transparent or relevant, and blacklists are rarely updated regularly. Cabell’s paywalled blacklist service attempts to overcome some of these issues in reviewing fraudulent journals on the basis of transparent criteria and in providing allegedly up-to-date information at the journal entry level. We tested Cabell’s blacklist to analyse whether or not it could be adopted as a reliable tool by stakeholders in scholarly communication, including our own academic library. To do so, we used a copy of Walt Crawford’s Gray Open Access dataset (2012-2016) to assess the coverage of Cabell’s blacklist and get insights on their methodology. Out of the 10,123 journals that we tested, 4,681 are included in Cabell’s blacklist. Of the journals included in the blacklist, 3,229 are empty journals, i.e. journals in which no single article has ever been published. Other collected data points to questionable weighing and reviewing methods and shows a lack of rigour in how Cabell applies its own procedures: some journals are blacklisted on the basis of 1 to 3 criteria – some of which are very questionable, identical criteria are recorded multiple times in individual journal entries, discrepancies exist between reviewing dates and the criteria version used and recorded by Cabell, reviewing dates are missing, and we observed two journals blacklisted twice with a different number of violations. Based on these observations, we conclude with recommendations and suggestions that could help improve Cabell’s blacklist service.”
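The headline figures in the abstract translate directly into coverage ratios. A minimal sketch using only the numbers quoted above (no other data from the study is assumed):

```python
# Figures reported in the study: Walt Crawford's Gray OA dataset (2012-2016)
# checked against Cabell's blacklist.
tested = 10_123       # journals tested
blacklisted = 4_681   # of those, listed in Cabell's blacklist
empty = 3_229         # blacklisted journals that never published an article

coverage = blacklisted / tested       # share of tested journals blacklisted
empty_share = empty / blacklisted     # share of blacklisted journals that are empty

print(f"blacklist coverage: {coverage:.1%}")
print(f"empty among blacklisted: {empty_share:.1%}")
# About 46% of the tested gray-OA journals appear in the blacklist,
# and roughly 69% of those blacklisted entries are empty journals.
```

The large empty-journal share is what motivates the authors' question of whether listing journals with zero published articles tells stakeholders anything actionable.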

Problems With Open Access Publishing in Radiology : American Journal of Roentgenology : Ahead of Print (AJR)

Abstract:  OBJECTIVE. Open access publishing has grown exponentially and can be a means of increasing availability of scientific knowledge to readers who cannot afford to pay for access. This article discusses problems that can occur with open access and offers suggestions for ameliorating the problems facing radiology research because of poor-quality journals.

CONCLUSION. Open access literature has loosed an avalanche of information into the radiology world, much of which has not been validated by careful peer review. To maintain academic integrity and serve our colleagues and patients, radiologists need to guard against shoddy science published in deceptive journals.

Innovating editorial practices: academic publishers at work | Research Integrity and Peer Review | Full Text

“Journals independent of large commercial publishers tend to have less hierarchically structured processes, report more flexibility to implement innovations, and to a greater extent decouple commercial and editorial perspectives….

This is in line with the dominant view on the changing status of publishers, which requires the publishers to take considerable action. Part of this action, driven by the shift towards more open access publishing, consists of a changing focus on ‘what the community wants’. While previously librarians would be the main source and spokesmen of ‘what the community wants’, publishers are quickly shifting their attention towards needs and desires of researchers, either in their role as authors or as reviewers, as exemplified in initiatives such as Publons, the use of the Journal Impact Factor to advertise to authors, publisher-facilitated preprint servers, or the appearance of mega-journals. This aligns with the publishers’ business needs: in the subscription-model, librarians were involved in deciding which subscriptions to buy, but in the open access model researchers themselves are more directly involved in deciding where to publish and hence where to spend money on publishing.

A stronger focus on transparency constitutes another trend among publishers that is fueled by external changes in the publishing landscape. By being more transparent about publishing work, for instance by showing how many reviewers had to be invited, publishers can demonstrate the effort that goes into the review process, thereby showing their added value: “We need to do a better job in showing how we have added value. Being open about review and the system is a way of doing this” (senior manager)….”

Balancing Scientific Rigor With Urgency in the Coronavirus Disease 2019 Pandemic | Open Forum Infectious Diseases | Oxford Academic

“In the midst of the worst pandemic in a century, the medical community must contend with an unprecedented deluge of scientific information. Coronavirus disease 2019 (COVID-19) has stretched the capacity of journals to ensure rapid dissemination of studies to inform the response to the pandemic while maintaining quality standards. At the same time, the ecosystem of knowledge dissemination is changing, with the rise of nonpeer-reviewed pathways, including the use of preprint servers and the apparent trend of publication by press release. We argue that peer-reviewed journals are more critical than ever, and that it is imperative that journals not abandon principles of scientific rigor in favor of urgency….”

On the Importance of Data Transparency: Patterns

“We all need to shoulder this burden. Researchers have the responsibility to ensure that their conclusions are backed up by their data and that their data are in a state where it is easy for those with domain expertise to understand. Peer reviewers must ask questions about the data in their reviews and dig a little deeper into the databases, if something doesn’t seem right. Journal editors need to ensure that these checks on data are carried out before publication and that the policies on data accessibility statements are adhered to.

The lack of transparency around data and the resulting retraction of peer-reviewed papers show that we cannot afford to ignore everyday issues regarding accessibility of data any longer. The good news is that these issues are well understood and policies are already in place to deal with them. What we need to do now is find ways of making it easy and quick to abide by those policies, and that will require time, investment, and a willingness to engage from the entire research community.

The road to scientific transparency is long, but we’re already on our way.”

Science at Warp Speed: Medical Research, Publication, and Translation During the COVID-19 Pandemic | SpringerLink

“In response to the COVID-19 pandemic, there has been a rapid growth in research focused on developing vaccines and therapies. In this context, the need for speed is taken for granted, and the scientific process has adapted to accommodate this. On the surface, attempts to speed up the research enterprise appear to be a good thing. It is, however, important to consider what, if anything, might be lost when biomedical innovation is sped up. In this article we use the case of a study recently retracted from the Lancet to illustrate the potential risks and harms associated with speeding up science. We then argue that, with appropriate governance mechanisms in place (and adequately resourced), it should be quite possible to both speed up science and remain attentive to scientific quality and integrity….”