“Science drives innovation. Unfortunately, scientific knowledge is locked behind closed doors. The current system requires academics to publish in high-impact journals. This is an inefficient way to share knowledge: it is slow, bureaucratic, and requires academics to give away their copyright. Above all, it is very expensive. Each university has to pay 2-7 million euros per year in public money to access research that it funded in the first place. It is a 32-billion market, controlled by five publishers whose profit margins are higher than Google’s. Money that could have been spent on research.
Imagjn open knowledge, where everybody has access to all scientific papers without artificial barriers such as paywalls. To do that, we have to change how we judge scientific impact. We should no longer focus on where someone publishes. Instead, we should focus on what someone publishes. Therefore, we want to move from a Journal Impact Factor to an Open Impact Factor, controlled and owned by academics. We are developing a platform that simplifies writing, citing, reviewing and publishing scientific papers, making knowledge freely available to anyone….”
“At least one in three research-intensive universities in North America examined by a study leaned on the journal impact factor of periodicals that academics had published in when making decisions on promotion and tenure, but the true proportion may be much higher.
The study, believed to be the first to examine the use of the journal impact factor in academic performance reviews, warns that there is an “undue reliance” on the controversial metric….
Among the documents from 57 research-intensive institutions considered by the study, 23 (40 per cent) referred to journal impact factors, with 19 of these mentions (83 per cent of the subtotal) being supportive. Only three of the mentions expressed caution about use of journal impact factors.
Of the documents that did refer to journal impact factors, 14 associated the metric with research quality, while eight tied it to impact and a further five referred to prestige or reputation.
The overall results, including large numbers of universities that offer few doctoral degrees, found that 23 per cent of review, promotion and tenure policies mentioned the journal impact factor, with 87 per cent of these mentions being supportive….”
“Almost half of research-intensive universities consider journal impact factors when deciding whom to promote, a survey of North American institutions has found.
About 40% of institutes with a strong focus on research mention impact factors in documents used in the review, promotion and tenure process, according to the analysis, which examined more than 800 documents across 129 institutions in the United States and Canada.
Fewer than one-quarter of the institutions mentioned impact factor or a closely related term such as “high impact journal” in their documents. But this proportion rose to 40% for the 57 research-intensive universities included in the survey. By contrast, just 18% of universities that focused on master’s degrees mentioned journal impact factors (see ‘High impact’).
In more than 80% of the mentions at research-heavy universities, the language in the documents encouraged the use of the impact factor in academic evaluations. Only 13% of mentions at these institutions came with any cautionary words about the metric. The language also tended to imply that high impact factors were associated with better research: 61% of the mentions portrayed the impact factor as a measure of the quality of research, for example, and 35% stated that it reflected the impact, importance or significance of the work….”
“What might happen if the provost of a highly visible research university that had recently reconfirmed its public-facing mission gathered the entire campus together – deans, department chairs and faculty – in rethinking the university’s promotion and tenure standards from top to bottom? What might become possible if that provost were to say that our definitions of “excellence” in research, teaching and service must have that public-facing mission at their heart? What might be possible if that public mission really became Job One?
The provost paused. Then he gave his answer: “Any institution that did that would immediately lose competitiveness within its cohort.” …
The pursuit of prestige is not the problem in and of itself, and excellence is, of course, something to strive for. In fact, friendly competition can push us all to do better. But excellence and prestige and the competitiveness that fuels their pursuit are too often based in marketing – indeed, in the logic of the market – rather than in the actual purposes of higher education. It’s a diversion from the on-the-ground work of producing and sharing knowledge that can result in misplaced investments and misaligned priorities….”
Abstract: The Journal Impact Factor (JIF) was originally designed to aid libraries in deciding which journals to index and purchase for their collections. Over the past few decades, however, it has become a relied upon metric used to evaluate research articles based on journal rank. Surveyed faculty often report feeling pressure to publish in journals with high JIFs and mention reliance on the JIF as one problem with current academic evaluation systems. While faculty reports are useful, information is lacking on how often and in what ways the JIF is currently used for review, promotion, and tenure (RPT). We therefore collected and analyzed RPT documents from a representative sample of 129 universities from the United States and Canada and 381 of their academic units. We found that 40% of doctoral, research-intensive (R-type) institutions and 18% of master’s, or comprehensive (M-type) institutions explicitly mentioned the JIF, or closely related terms, in their RPT documents. Undergraduate, or baccalaureate (B-type) institutions did not mention it at all. A detailed reading of these documents suggests that institutions may also be using a variety of terms to indirectly refer to the JIF. Our qualitative analysis shows that 87% of the institutions that mentioned the JIF supported the metric’s use in at least one of their RPT documents, while 13% of institutions expressed caution about the JIF’s use in evaluations. None of the RPT documents we analyzed heavily criticized the JIF or prohibited its use in evaluations. Of the institutions that mentioned the JIF, 63% associated it with quality, 40% with impact, importance, or significance, and 20% with prestige, reputation, or status. In sum, our results show that the use of the JIF is encouraged in RPT evaluations, especially at research-intensive universities, and indicates there is work to be done to improve evaluation processes to avoid the potential misuse of metrics like the JIF.
“Mr Chi said that when he joined Elsevier 14 years ago, he “saw that we were publishing a lot because that was a way to make more money, but that wasn’t really serving the long-term benefit of our company or the community of researchers”. Acting on that insight, he said, the publishing giant decided to publish fewer papers but of higher quality.
As a consequence, Elsevier “lost several percentage points of market share in those 14 years” but “gained about 25 per cent in the FWCI [field-weighted citation index], which means that we really raised the quality of the papers we publish”.
While he stressed that he was “not at all against the goal of the open access model”, Mr Chi said that Elsevier’s emphasis on high-grade work in effect opened a publishing niche and “left others to fill the vacuum”, which they did by publishing “without the quality control” and sometimes “without peer review”. …”
“Pressure increases on publishers to move more quickly to open access, but this leaves many questions unanswered
For the past decade, libraries have battled declining university budgets and increasing serials expenditures. With each Big Deal package renewal or cancellation, librarians and publishers have asked themselves: Did I make the best deal? Did I make the right deal? Recent developments in open access (OA) promise to bring major reform to academic publishing and, with that, new challenges and opportunities to the way that librarians and publishers choose to deal….”
“The Court finds that a permanent injunction against Defendants is appropriate under the circumstances to enjoin them from engaging in similar misleading and deceptive activities. Here, Defendants did not participate in an isolated, discrete incident of deceptive publishing, but rather sustained and continuous conduct over the course of years. An injunction is therefore necessary to prevent future misconduct and protect the public interest….
Where, as here, consumers suffer broad economic injury resulting from a defendant’s violations of the FTC Act, equity requires monetary relief in the full amount lost by consumers….Accordingly, the Court finds Defendants jointly and severally liable for restitution in the amount of $50,130,811.00….”
“Heard of “Plan S”? You will. Plan S arose from the work of an international group called Coalition S. Their aim is to have all published research available open access immediately on publication. The coalition has some powerful membership organizations, mainly across Europe but in some other countries too. Coverage is not yet universal, and some key organizations have not signed up. However, the coalition has one powerful financial backer in the Bill & Melinda Gates Foundation and, given the widespread—and sometimes misplaced—enthusiasm for open access, this is likely to gather momentum. On the face of it “Plan S” seems entirely laudable and altruistic, however, it raises a number of issues for both researchers and publishers….”
“Initially PLOS ONE was a “club” of radicals who could afford to experiment with a new publishing model. This resulted in a higher-than-expected initial JIF and a massive influx of new authors, who were attracted to this (now) “proven” publishing model. Consequently, article processing times expanded (congestion), the initial sense of community became harder to maintain, and the influx of articles ultimately reduced the JIF, leading to the flight of authors who were just seeking access to the prestige of the journal. The journal then shifted from a community (if not properly a knowledge club, as the disciplines were too disparate) to a social network market, which it could not sustain.
Scientific Reports follows a similar trajectory, but for different reasons. Initial submissions were not driven by a desire to be radical or progressive, as the concept of a mega-journal was already proven. Rather, Scientific Reports launched as a social network market, providing access to the prestige of the Nature brand. This model in turn became unsustainable, as the journal developed its own reputation and niche, which had been carefully planned through the naming (which does not include the name “Nature”) to avoid any dilution of the existing Nature brand.
What does this mean for Open Access and for initiatives like Plan S? Note that the club-theoretic model is ambivalent about how payments are made. We see similar patterns of growth and decline for subscription and APC journals alike. However, the model is arguably better configured to understand how to create knowledge-value efficiently, because it asks how a community can be created and sustained, and how open access to membership can both stimulate and dilute knowledge-making itself. In our next post, we will discuss the implications of our model for planning a transition to full open access.”