Megajournals, Quality Standards and Selectivity: Gaussian Facts of Life

SUMMARY: It is obvious that broad-spectrum, low-selectivity, pay-to-publish mega-journals — whether Open Access or not — can help meet many researchers’ need to publish today, but they are certainly not the only way, the best way, or the most economical way to provide Open Access to their articles.

Like height, weight, and just about every other biological trait (including every field of human performance), scholarly/scientific quality is normally distributed (the “bell” curve): most of it is clustered around the average, tapering toward the increasingly substandard in the lower tail of the bell curve and toward increasing excellence in the upper tail.

For some forms of human performance — e.g., driving or doctoring — we are satisfied with a pass/fail license cut-off.

For others, such as sports or musical performance, we prefer finer-grained levels, with a hierarchy of increasingly exacting — hence selective — performance standards.

But, as a matter of necessity with a finite (though growing) population and a bell curve with tapered tails, the proportion (and hence the number) of candidates and works that can meet higher and higher performance standards gets smaller and smaller.

Not only is it impossible for everyone and everything to be in the top 10% or the top 1% or the top 0.1%, but because the bell curve’s tail is tapered (it is a bell, not a pyramid), the proportion that can meet higher and higher standards shrinks faster than linearly.
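The faster-than-linear tapering described above can be made concrete with a short Python sketch, using the standard normal distribution as an illustrative stand-in for the quality distribution (an assumption for illustration, not a claim about any actual dataset):

```python
import math

def tail_fraction(z: float) -> float:
    """Fraction of a standard normal population lying above z standard deviations."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# Each additional standard deviation of "standard" cuts the eligible
# pool by a growing factor -- the tail thins faster than a straight line.
for z in (1, 2, 3):
    print(f"above {z} SD: {tail_fraction(z):.4%}")  # roughly 16%, 2.3%, 0.13%
```

Note that moving the cut-off from 1 to 2 standard deviations shrinks the eligible pool by a factor of about 7, while moving from 2 to 3 shrinks it by a factor of about 17: the shrinkage itself accelerates, which is the Gaussian point being made here.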

Scholars and scientists’ purpose in publishing in peer-reviewed journals — indeed, the purpose of the “publish-or-perish” principle itself — had always been two-fold: (1) to disseminate findings to potential users (i.e., mostly other scholars and scientists) and (2) to meet and mark a hierarchy of quality levels with each individual journal’s name and its track-record for the rigor of its peer review standards (so users at different levels can decide what to read and trust and so quality can be assessed and rewarded by employers and funders).

In principle (though not yet in practice), journals are no longer needed for the first of these purposes, only the second — but for that, they need to continue to be selective, ensuring that the hierarchy of quality standards continues to be met and marked.

It is obvious that broad-spectrum, low-selectivity, pay-to-publish mega-journals — whether OA or not — can help meet many researchers’ need to publish today, but they are certainly not the only way, the best way, or the most economical way to provide OA for their articles:

Harnad, S. (2010) No-Fault Peer Review Charges: The Price of Selectivity Need Not Be Access Denied or Delayed. D-Lib Magazine 16 (7/8).

ABSTRACT: Plans by universities and research funders to pay the costs of Open Access Publishing (“Gold OA”) are premature. Funds are short; 80% of journals (including virtually all the top journals) are still subscription-based, tying up the potential funds to pay for Gold OA; the asking price for Gold OA is still high; and there is concern that paying to publish may inflate acceptance rates and lower quality standards. What is needed now is for universities and funders to mandate OA self-archiving (of authors’ final peer-reviewed drafts, immediately upon acceptance for publication) (“Green OA”). That will provide immediate OA; and if and when universal Green OA should go on to make subscriptions unsustainable (because users are satisfied with just the Green OA versions) that will in turn induce journals to cut costs (print edition, online edition, access-provision, archiving), downsize to just providing the service of peer review, and convert to the Gold OA cost-recovery model; meanwhile, the subscription cancellations will have released the funds to pay these residual service costs. The natural way to charge for the service of peer review then will be on a “no-fault basis,” with the author’s institution or funder paying for each round of refereeing, regardless of outcome (acceptance, revision/re-refereeing, or rejection). This will minimize cost while protecting against inflated acceptance rates and decline in quality standards.

Peer review itself, however, will, like homeostasis, always “defend” a level, whether that level is methodological soundness alone, methodological soundness and originality, methodological soundness, originality and importance, or what have you. The more exacting the standard, the fewer the papers that will be able to meet it. Perhaps the most important function of peer review is not the “marking” of a paper’s having met the standard, but helping the paper to reach the standard, through referee feedback, adjudicated by an editor, sometimes involving several rounds of revision and re-refereeing.

Since peer review is an active, dynamic process of correction and improvement, it is not like the passive assignment of a letter grade (A, B, C, D) to a finished work. Rather, an author picks a journal that “defends” a target grade (A or B or C or D), submits the paper to that journal for refereeing, and then tries to improve the paper so as to meet the referees’ recommendations (if any) by revising it.

There are, in other words, A, B, C and D journals, the A+ and A journals being the highest-standard and most selective ones, and hence the least numerous in terms of both titles and articles, for the Gaussian reasons described above.

A mega-journal, in contrast, is equivalent to one generic pass/fail grade (often in the hope that the “self-corrective” nature of science and scholarship will eventually take care of any further improvement and sorting that might be needed — after publication, through “open peer review”).

Maybe one day scholarly publication will move toward a model like that — or maybe it won’t (because users require more immediate quality markers, and/or because the post-publication marking is too uncertain and unreliable).

But what’s needed today is open access to the peer-reviewed literature, published in A, B, C and D journals, such as it is, not to a pass/fail subset of it.

Hence pass/fail mega-journals are a potential supplement to the status quo, but not a substitute for it.

Stevan Harnad