An Open Letter to MDPI publishing | Dan Brockington

“Dear MDPI,
Your journal publications have grown dramatically, and quite extraordinarily. But there are sceptics who suggest that this reflects low standards and distorting financial incentives. I was one of them. To prove my views I explored trends in publications of 140 journals for which data were available from 2015 onwards. But doing so proved me wrong; I could not sustain my sceptical view. Instead I think I have a better understanding of why researchers are so keen to publish with you. But my exploration also makes plain challenges that you will face in the future, that are caused by your remarkable success. In this letter I describe your growth, the lack of substance to sceptics’ criticism and the challenges which your success creates. I also describe potential solutions to them. Here is the Word version of it….”

Motivations, understandings, and experiences of open-access mega-journal authors: Results of a large-scale survey – Wakeling – 2019 – Journal of the Association for Information Science and Technology – Wiley Online Library

Abstract:  Open-access mega-journals (OAMJs) are characterized by their large scale, wide scope, open-access (OA) business model, and “soundness-only” peer review. The last of these controversially discounts the novelty, significance, and relevance of submitted articles and assesses only their “soundness.” This article reports the results of an international survey of authors (n = 11,883), comparing the responses of OAMJ authors with those of other OA and subscription journals, and drawing comparisons between different OAMJs. Strikingly, OAMJ authors showed a low understanding of soundness-only peer review: two-thirds believed OAMJs took into account novelty, significance, and relevance, although there were marked geographical variations. Author satisfaction with OAMJs, however, was high, with more than 80% of OAMJ authors saying they would publish again in the same journal, although there were variations by title, and levels were slightly lower than subscription journals (over 90%). Their reasons for choosing to publish in OAMJs included a wide variety of factors, not significantly different from reasons given by authors of other journals, with the most important including the quality of the journal and quality of peer review. About half of OAMJ articles had been submitted elsewhere before submission to the OAMJ, with some evidence of a “cascade” of articles between journals from the same publisher.

The Challenges of Sharing Data in an Era of Politicized Science | Medical Journals and Publishing | JAMA | JAMA Network

“Virtually every time JAMA publishes an article on the effects of pollution or climate change on health, the journal immediately receives demands from critics to retract the article for various reasons. Some individuals and groups simply do not believe that pollution or climate change affects human health….

Although the sharing of data may have numerous benefits, it also comes with substantial challenges, particularly in highly contentious and politicized areas, such as the effects of climate change and pollution on health, in which the public dialogue appears to be based on as much fiction as fact. The sharing of data, whether mandated by funders, including foundations and government, or volunteered by scientists who believe in the principle of data transparency, is a complicated issue in the evolving world of science, analysis, skepticism, and communication. Above all, the scientific process—including original research and reanalysis of shared data—must prevail, and the inherent search for evidence, facts, and truth must not be compromised by special interests, coercive influences, or politicized perspectives. There are no simple answers, just words of caution and concern.”

Room for everyone’s talent: Toward a new balance in the recognition and reward of academics

Dutch public knowledge institutions and funders call for a modernization of the academic system of recognition and rewards, in particular in five key areas: education, research, impact, leadership and (for university medical centres) patient care. Sicco de Knecht writes for ScienceGuide that a culture change and national and international cooperation are required to achieve such modernization.

“Many academics feel there is a one-sided emphasis on research performance, frequently leading to the undervaluation of the other key areas such as education, impact, leadership and (for university medical centres) patient care. This puts strain on the ambitions that exist in these areas. The assessment system must be adapted and improved in each of the areas and in the connections between them.”

Attitudes of North American Academics toward Open Access Scholarly Journals

Abstract:  In this study, the authors examine attitudes of researchers toward open access (OA) scholarly journals. Using two-step cluster analysis to explore survey data from faculty, graduate students, and postdoctoral researchers at large North American research institutions, two different cluster types emerge: Those with a positive attitude toward OA and a desire to reach the nonscholarly audience groups who would most benefit from OA (“pro-OA”), and those with a more negative, skeptical attitude and less interest in reaching nonscholarly readers (“non-OA”). The article explores these cluster identities in terms of position type, subject discipline, and productivity, as well as implications for policy and practice.

UNESCO Recommendation on Open Educational Resources (OER)

“[T]he UNESCO OER Recommendation has five objectives: (i) Building capacity of stakeholders to create, access, use, adapt and redistribute OER; (ii) Developing supportive policy; (iii) Encouraging inclusive and equitable quality OER; (iv) Nurturing the creation of sustainability models for OER; and (v) Facilitating international cooperation….”

A look at prediction markets | Research Information

“Assessing the quality of research is difficult. Jisc and the University of Bristol are partnering to develop a tool that may help institutions improve this process.  

To attract government funding for their crucial research, UK universities are largely reliant on good ratings from the Research Excellence Framework (REF) – a process of expert review designed to assess the quality of research outputs. REF scores determine how much government funding will be allocated to their research projects. For instance, research that is world-leading in terms of originality, significance and rigour will be scored higher than research that is only recognised nationally.
 
Considerable time is spent by universities trying to figure out which research outputs will be rated highest (4*) on quality and impact. The recognised ‘gold standard’ for this process is close reading by a few internal academics, but this is time-consuming, onerous, and subject to the relatively limited perspective of just a few people.  …

Prediction markets capture the ‘wisdom of crowds’ by asking large numbers of people to bet on outcomes of future events – in this case how impactful a research project will be in the next REF assessment. It works a bit like the stock market, except that, instead of buying and selling shares in companies, participants buy and sell virtual shares online that will pay out if a particular event occurs – for instance, if a paper receives a 3* or above REF rating.  …”