The Only Substitute for Metrics is Better Metrics

Comment on: Mryglod, Olesya, Ralph Kenna, Yurij Holovatch and Bertrand Berche (2014) Predicting the results of the REF using departmental h-index: A look at biology, chemistry, physics, and sociology. LSE Impact Blog 12(6)


“The man who is ready to prove that metaphysical knowledge is wholly impossible… is a brother metaphysician with a rival theory.” — Bradley, F. H. (1893) Appearance and Reality

The topic of using metrics for research performance assessment in the UK has a rather long history, beginning with the work of Charles Oppenheim.

The solution is neither to abjure metrics nor to pick and stick to one unvalidated metric, whether it’s the journal impact factor or the h-index.

The solution is to jointly test and validate, field by field, a battery of multiple, diverse metrics (citations, downloads, links, tweets, tags, endogamy/exogamy, hubs/authorities, latency/longevity, co-citations, co-authorships, etc.) against a face-valid criterion (such as peer rankings).
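
For concreteness, here is a minimal sketch (in Python, with synthetic placeholder data) of what one step of such a validation exercise could look like for a single field: regress the face-valid criterion (peer rankings) on the battery of candidate metrics and see which metrics carry predictive weight, and how much variance they jointly explain. The metric names in the comments are illustrative assumptions, not a fixed battery.

```python
# Minimal sketch: validate a battery of metrics against peer rankings.
# Synthetic data stand in for real per-department values in one field.
import numpy as np

rng = np.random.default_rng(0)
n = 50                                    # departments in one field (assumed)
metrics = rng.normal(size=(n, 4))         # e.g. citations, downloads, h-index, links
peer_rank = metrics @ np.array([0.6, 0.3, 0.1, 0.0]) + rng.normal(scale=0.5, size=n)

# Ordinary least squares: how well does the battery jointly predict the
# peer rankings, and which metrics carry the weight?
X = np.column_stack([np.ones(n), metrics])
beta, *_ = np.linalg.lstsq(X, peer_rank, rcond=None)
pred = X @ beta
r2 = 1 - ((peer_rank - pred) ** 2).sum() / ((peer_rank - peer_rank.mean()) ** 2).sum()
print(f"estimated metric weights: {beta[1:].round(2)}, R^2 = {r2:.2f}")
```

Repeated field by field, this yields a separate, validated weighting of the battery for each discipline, rather than one unvalidated metric applied across the board.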



See also: “On Metrics and Metaphysics” (2008)

Oppenheim, C. (1995). The correlation between citation counts and the 1992 Research Assessment Exercise ratings for British library and information science university departments. Journal of Documentation, 51(1), 18-27.

Oppenheim, C. (1996). Do citations count? Citation indexing and the Research Assessment Exercise (RAE). Serials: The Journal for the Serials Community, 9(2), 155-161.

Oppenheim, C. (1997). The correlation between citation counts and the 1992 Research Assessment Exercise ratings for British research in genetics, anatomy and archaeology. Journal of Documentation, 53(5), 477-487.

Oppenheim, C. (2007). Using the h-index to rank influential British researchers in information science and librarianship. Journal of the American Society for Information Science and Technology, 58(2), 297-301.

Harnad, S. (2001) Research access, impact and assessment. Times Higher Education Supplement 1487: p. 16.

Harnad, S. (2003) Measuring and Maximising UK Research Impact. Times Higher Education Supplement. Friday, June 6 2003

Harnad, S., Carr, L., Brody, T. & Oppenheim, C. (2003) Mandated online RAE CVs Linked to University Eprint Archives: Improving the UK Research Assessment Exercise whilst making it cheaper and easier. Ariadne 35.

Hitchcock, S., Woukeu, A., Brody, T., Carr, L., Hall, W. and Harnad, S. (2003) Evaluating Citebase, an open access Web-based citation-ranked search and impact discovery service. Technical Report, ECS, University of Southampton.

Harnad, S. (2004) Enrich Impact Measures Through Open Access Analysis. British Medical Journal (BMJ) 329.

Harnad, S. (2006) Online, Continuous, Metrics-Based Research Assessment. Technical Report, ECS, University of Southampton.  

Brody, T., Harnad, S. and Carr, L. (2006) Earlier Web Usage Statistics as Predictors of Later Citation Impact. Journal of the American Society for Information Science and Technology (JASIST) 57(8) pp. 1060-1072.

Brody, T., Carr, L., Harnad, S. and Swan, A. (2007) Time to Convert to Metrics. Research Fortnight 17-18.

Brody, T., Carr, L., Gingras, Y., Hajjem, C., Harnad, S. and Swan, A. (2007) Incentivizing the Open Access Research Web: Publication-Archiving, Data-Archiving and Scientometrics. CTWatch Quarterly 3(3).

Harnad, S. (2008) Validating Research Performance Metrics Against Peer Rankings. Ethics in Science and Environmental Politics 8(11). doi:10.3354/esep00088. (Special issue: The Use and Misuse of Bibliometric Indices in Evaluating Scholarly Performance.)

Harnad, S. (2008) Self-Archiving, Metrics and Mandates. Science Editor 31(2) 57-59

Harnad, S., Carr, L. and Gingras, Y. (2008) Maximizing Research Progress Through Open Access Mandates and Metrics. Liinc em Revista 4(2).

Harnad, S. (2009) Open Access Scientometrics and the UK Research Assessment Exercise. Scientometrics 79(1). Also in: Proceedings of the 11th Annual Meeting of the International Society for Scientometrics and Informetrics 11(1), pp. 27-33, Madrid, Spain. Torres-Salinas, D. and Moed, H. F., Eds. (2007).

Harnad, S. (2009) Multiple metrics required to measure research performance. Nature (Correspondence) 457 (785) (12 February 2009)

Harnad, S., Carr, L., Swan, A., Sale, A. & Bosc, H. (2009) Maximizing and Measuring Research Impact Through University and Research-Funder Open-Access Self-Archiving Mandates. Wissenschaftsmanagement 15(4) 36-41.

Progressive vs Treadwater Fields

There are many reasons why grumbling about attempts to replicate is unlikely in the physical or even the biological sciences, but the main one is that in most other sciences research is cumulative:

Experimental and observational findings that are worth knowing are those on which further experiments and observations can be built, for an ever fuller and deeper causal understanding of the system under study, whether the solar system or the digestive system. If the finding is erroneous, the attempts to build on it collapse. Cumulative replication is built into the trajectory of research itself — for those findings that are worth knowing.

In contrast, if no one bothers to build anything on it, chances are that a finding was not worth knowing (and so it matters little whether it would replicate or not, if tested again).

Why is it otherwise in many areas of Psychology? Why do the outcomes of so many one-shot, hit-and-run studies keep being reported in textbooks?

Because so much of Psychology is not cumulative explanatory research at all. It is helter-skelter statistical outcomes that manage to do two things: (1) meet a criterion for statistical significance (i.e., a low probability of having occurred by chance) and (2) lend themselves to an attention-catching interpretation.

No wonder that their authors grumble when replicators spoil the illusion.

Yes, open access, open commentary and crowd-sourcing are needed in all fields, for many reasons, but for one reason more in hit-and-run fields.

Spurning the Better to Keep Burning for the Best

Björn Brembs (as interviewed by Richard Poynder) is not satisfied with “read access” (free online access: Gratis OA): he wants “read/write access” (free online access plus re-use rights: Libre OA).

The problem is that we are nowhere near having even the read-access that Björn is not satisfied with.

So his dissatisfaction is not only with something we do not yet have, but with something that is also an essential component and prerequisite for read/write access. Björn wants more, now, when we don’t even have less.

And alas Björn does not give even a hint of a hint of a practical plan for getting read/write access instead of “just” the read access we don’t yet have.

All he proposes is that a consortium of rich universities should cancel journals and take over.

Before even asking what on earth those universities would/should/could do, there is the question of how their users would get access to all those cancelled journals (otherwise this “access” would be even less than less!). Björn’s reply — doubly alas — uses the name of my eprint-request Button in vain:

The eprint-request Button is only legal, and only works, because authors are providing access to individual eprint requestors for their own articles. If the less-rich universities that were not part of this brave take-over consortium of journal-cancellers were to begin providing automatic Button-access to all those extra-institutional users, their institutional license costs (subscriptions) would sky-rocket: publishers set Big-Deal license fees on the basis of the size of each institution’s total usership, which, on Björn’s scheme, would now include all the users of all the cancelling institutions.

So back to the work-bench on that one.

Björn seems to think that OA is just a technical matter, since all the technical wherewithal is already in place, or nearly so. But in fact the technology for Green Gratis (“read-only”) OA has been in place for over 20 years, and we are still nowhere near having it. (We may, optimistically, be somewhere between 20% and 30%, though certainly not even at the 50% that Science-Metrix has recently touted as the “tipping point” for OA — because much of that is post-embargo, hence Delayed Access (DA), not OA.)

Björn also seems to have proud plans for post-publication “peer review” (which is rather like finding out whether the water you just drank was drinkable on the basis of some crowd-sourcing after you drank it).

Post-publication crowd-sourcing is a useful supplement to peer review, but certainly not a substitute for it.

All I can do is repeat what I’ve had to say so many times across the past 20 years, as each new generation first comes in contact with the access problem and proposes its prima facie solutions (none of which are new: they have all been proposed so many times that they — and their fatal flaws — have each already had their own FAQs for over a decade). The watchword here, again, is that the primary purpose of the Open Access movement is to free the peer-reviewed literature from access-tolls — not to free it from peer review. And before you throw out the peer review system, make sure you have a tried, tested, scalable and sustainable system with which to replace it, one that demonstrably yields at least the same quality (and hence usability) as the existing system does.

Till then, focus on freeing access to the peer-reviewed literature such as it is.

And that’s read-access, which is much easier to provide than read-write access. None of the Green (no-embargo) publishers are read-write Green: just read-Green. Insisting on read-write would be an excellent way to get them to adopt and extend embargoes, just as the foolish Finch preference for Gold did (and just as Rick Anderson‘s absurd proposal to cancel Green (no-embargo) journals would do).

And, to repeat: after 20 years, we are still nowhere near 100% read-Green, largely because of phobias about publisher embargoes on read-Green. Björn is urging us to insist on even more than read-Green. Another instance of letting the (out-of-reach) Best get in the way of the (within-reach) Better. And that, despite the fact that it is virtually certain that once we have 100% read-Green, the other things we seek — read-write, Fair-Gold, copyright reform, publishing reform, perhaps even peer review reform — will all follow, as surely as day follows night.

But not if we contribute to slowing our passage to the Better (which there is already a tried and tested means of reaching, via institutional and funder mandates) by rejecting or delaying the Better in the name of holding out for a direct sprint to the Best (which no one has a tried and tested means of reaching, other than to throw even more money at publishers for Fool’s Gold). Björn’s speculation that universities should cancel journals, rely on interlibrary loan, and scrap peer-review for post-hoc crowd-sourcing is certainly not a tried and tested means!

As to journal ranking and citation impact factors: They are not the problem. No one is preventing the use of article- and author-based citation counts in evaluating articles and authors. And although the correlation between journal impact factors and journal quality and importance is not that big, it’s nevertheless positive and significant. So there’s nothing wrong with libraries using journal impact factors as one of a battery of many factors (including user surveys, usage metrics, institutional fields of interest, budget constraints, etc.) in deciding which journals to keep or cancel. Nor is there anything wrong with research performance evaluation committees using journal impact factors as one of a battery of many factors (alongside article metrics, author metrics, download counts, publication counts, funding, doctoral students, prizes, honours, and peer evaluations) in assessing and rewarding research progress.

The problem is neither journal impact factors nor peer review: The only thing standing between the global research community and 100% OA (read-Green) is keystrokes. Effective institutional and funder mandates can and will ensure that those keystrokes are done. Publisher embargoes cannot stop them: With immediate-deposit mandates, 100% of articles (final, refereed drafts) are deposited in the author’s institutional repository immediately upon acceptance for publication. At least 60% of them can be made immediately OA, because at least 60% of journals don’t embargo (read-Green) OA; access to the other 40% of deposits can be made Restricted Access, and it is there that the eprint-request Button can provide Almost-OA with one extra keystroke from the would-be user to request it and one extra keystroke from the author to fulfill the request.
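
To make the deposit logic concrete, here is a toy sketch of the workflow just described. The Deposit record, the embargo flag and the function names are hypothetical illustrations (real repositories implement this in their own software); the point is only that the whole decision reduces to a few keystrokes’ worth of logic.

```python
# Toy sketch of the immediate-deposit + eprint-request-Button workflow.
# All names here are hypothetical illustrations, not a real repository API.
from dataclasses import dataclass

@dataclass
class Deposit:
    title: str
    embargoed: bool               # does the journal embargo read-Green OA?
    access: str = "closed"

def deposit_on_acceptance(d: Deposit) -> Deposit:
    # Deposit is immediate for every article, embargo or no embargo.
    d.access = "restricted" if d.embargoed else "open"   # ~40% vs ~60%
    return d

def fulfil_eprint_request(d: Deposit, author_approves: bool) -> bool:
    # Almost-OA: one keystroke from the requester, one from the author.
    return d.access == "open" or author_approves

paper = deposit_on_acceptance(Deposit("Some article", embargoed=True))
print(paper.access, fulfil_eprint_request(paper, author_approves=True))
```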

That done, globally, we can leave it to nature (and human nature) to ensure that the “Best” (100% immediate OA, subscription collapse, conversion to Fair Gold, all the re-use rights users need, and even peer-review reform) will soon follow.

But not as long as we continue spurning the Better and just burning for the Best.

Stevan Harnad

Paid-Gold OA, Free-Gold OA & Journal Quality Standards

Peter Suber has pointed out that “About 50% of articles published in peer-reviewed OA journals are published in fee-based journals” (as reported by Laakso & Björk 2012).

Laakso & Bjork also report that “[12% of] articles published during 2011 and indexed in the most comprehensive article-level index of scholarly articles (Scopus) are available OA through journal publishers… immediately…”.

That’s 12% immediate Gold OA for the (already selective) Scopus sample. The percentage is still smaller for the more selective Thomson-Reuters/ISI sample.

I think it cannot be left out of the reckoning about paid-Gold OA vs. free-Gold OA that: 

(#1) most articles are not published as Gold OA at all today (neither paid-Gold nor free-Gold)

(#2) the articles of the quality that users need and want most are much less likely to be published as Gold OA (whether paid-Gold or free-Gold) today, and, most important,

(#3) the Gold OA articles of the quality that users need and want most today are less likely to be the free-Gold ones than the paid-Gold ones (even though the junk journals on Jeffrey Beall’s “predatory” Gold OA journal list are all paid-Gold).

#2 and #3 are hypotheses, but I think they can be tested objectively.

A test for #2 would be to compare the download and citation counts (not the journal impact factors) for Gold OA (including hybrid Gold) articles vs non-Gold subscription journal articles (excluding the ones that have been made Green OA) within the same subject (and language!) area.

A test for #3 would be to compare the download and citation counts (not the journal impact factors) for paid-Gold (including hybrid Gold) vs free-gold articles within the same subject (and language!) area.
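
For illustration, here is a minimal sketch of the test for #3, assuming a hypothetical per-article input file with subject area, Gold type (paid vs. free) and citation counts; the test for #2 would be analogous, with Gold vs. non-Gold as the grouping variable. Since citation counts are heavily skewed, a rank-based test is safer than comparing means.

```python
# Minimal sketch of hypothesis #3: compare citation counts for paid-Gold
# vs free-Gold articles within one subject area. The input file and its
# column names are hypothetical placeholders.
import csv
from scipy.stats import mannwhitneyu

paid_gold, free_gold = [], []
with open("articles.csv", newline="") as f:
    for row in csv.DictReader(f):
        if row["subject"] != "biology":      # hold subject (and language) fixed
            continue
        if row["gold_type"] == "paid":
            paid_gold.append(int(row["citations"]))
        elif row["gold_type"] == "free":
            free_gold.append(int(row["citations"]))

# Rank-based two-sided test on the two citation-count distributions.
stat, p = mannwhitneyu(paid_gold, free_gold, alternative="two-sided")
print(f"n(paid)={len(paid_gold)}, n(free)={len(free_gold)}, U={stat:.1f}, p={p:.4f}")
```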

I mention this because I think just comparing the number of paid-Gold vs. free-Gold journals without taking quality into account could be misleading.

Comparing Carrots and Lettuce

These are comments on Stephen Curry’s “The inexorable rise of open access scientific publishing”.


Our (Gargouri, Lariviere, Gingras, Carr & Harnad) estimate (for publication years 2005-2010, measured in 2011, based on articles published in the c. 12,000 journals indexed by Thomson-Reuters ISI) is 35% total OA in the UK (10 percentage points above the worldwide total OA average of 25%): This is the sum of both Green and Gold OA.

Our sample yields a Gold OA estimate much lower than Laakso & Björk‘s. Our estimate of about 25% OA worldwide is composed of 22.5% Green plus 2.5% Gold. And the growth rate of neither Gold nor (unmandated) Green is exponential.

There are a number of reasons why neither “carrots vs. lettuce” nor “UK vs. non-UK produce” nor the L&B estimates vs. the Gargouri et al estimates can be compared or combined in a straightforward way.

Please take the following as coming from a fervent supporter of OA, not an ill-wisher, but one who has been disappointed across the long years by far too many failures to seize the day — amidst surges of “tipping-point” euphoria — to be ready once again to tout triumph.

First, note that the hubbub is yet again about Gold OA (publishing), even though all estimates agree that there is far less of Gold OA than there is of Green OA (self-archiving), and even though it is Green OA that can be fast-forwarded to 100%: all it takes is effective Green OA mandates (I will return to this point at the end).

So Stephen Curry asks why there is a discrepancy between our (Gargouri et al) estimates of Gold OA in the UK and worldwide (c. <5%) and the estimates of Laakso & Björk (17%). Here are some of the multiple reasons (several of them already pointed out by Richard van Noorden in his comments too):

1. Thomson-Reuters ISI Subset: Our estimates are based solely on articles in the Thomson-Reuters ISI database of c. 12,000 journals. This database is more selective than the Scopus database on which L&B’s sample is based. The more selective journals have higher quality standards and are hence the ones that both authors and users prefer.

(Without getting into the controversy about journal citation impact factors, another recent L&B study has shown that the higher the journal’s impact factor, the less likely the journal is to be Gold OA. But let me add that this is now likely to change, because of the perverse effects of the Finch Report and the RCUK OA Policy: thanks to the UK’s announced readiness to divert UK research funds to double-pay subscription journal publishers for hybrid Gold OA, most journals, including the top journals, will soon be offering hybrid Gold OA — a very pricey way to add the UK’s 6% of worldwide research output to the worldwide Gold OA total. The very same effect could be achieved free of extra cost if RCUK instead adopted a compliance-verification mechanism for its existing Green OA mandates.)

2. Embargoed “Gold OA”: L&B included in their Gold OA estimates “OA” that was embargoed for a year. That’s not OA, and certainly should not be credited to the total OA for any given year — whence it is absent — but to the next year. By that time, the Green OA embargoes of most journals have already expired. So, again, any OA purchased in this pricey way — instead of for a few extra cost-free keystrokes by the author, for Green — is more of a head-shaker than occasion for heady triumph.

3. 1% Annual Growth: The 1% annual growth of Gold OA is not much headway either, if you plot the growth curves and project the date at which they would reach 100%! (The more heady Gold OA growth percentages are not Gold OA growth as a percentage of all articles published, but Gold OA growth as a percentage of the preceding year’s Gold OA articles; see the sketch below.)
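
A back-of-the-envelope sketch, using this post’s own rough figures (about 2.5% Gold share, growing by about 1 percentage point per year), makes the difference between the two measures plain:

```python
# Contrast the two growth measures: percentage points of all articles
# vs growth relative to the previous year's Gold output. Figures are
# illustrative, taken from the rough estimates quoted in this post.
gold_share = 0.025         # Gold OA as a share of all articles published
annual_gain = 0.01         # growth of about 1 percentage point per year

years_to_100 = (1.0 - gold_share) / annual_gain
print(f"At +1 point/year, 100% Gold OA is ~{years_to_100:.0f} years away.")

relative_growth = annual_gain / gold_share
print(f"Yet that same +1 point is {relative_growth:.0%} growth on last year's Gold articles.")
```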

4. Green Achromatopsia: The relevant data for comparing Gold OA — both its proportion and its growth rate — with Green come from a source L&B do not study, namely, institutions with (effective) Green OA mandates. Here the proportions within two years of mandate adoption (60%+) and the subsequent growth rate toward 100% eclipse not only the worldwide Gold OA proportions and growth rate, but also the larger but still unimpressive worldwide Green OA proportions and growth rate for unmandated Green OA (which is still mostly all there is).

5. Mandate Effectiveness: Note also that RCUK’s prior Green OA mandate was not an effective one (because it had no compliance-verification mechanism), even though it may have raised UK OA (35%) 10 percentage points above the global average (25%).

Stephen Curry: “A cheaper green route is also available, whereby the author usually deposits an unformatted version of the paper in a university repository without incurring a publisher’s charge, but it remains to be seen if this will be adopted in practice. Universities and research institutions are only now beginning to work out how to implement the new policy (recently clarified by the RCUK).”

Well, actually RCUK has had Green OA mandates for over a half-decade now. But RCUK has failed to draw the obvious conclusion from its pioneering experiment — which is that the RCUK mandates require an effective compliance-verification mechanism (of the kind that the effective university mandates have — indeed, the universities themselves need to be recruited as the compliance-verifiers).

Instead, taking their cue from the Finch Report — which in turn took its cue from the publisher lobby — RCUK is doing a U-turn from its existing Green OA mandate, and electing to double-pay publishers for Gold instead.

A much more constructive strategy would be for RCUK to build on its belated grudging concession (that although Gold is RCUK’s preference, RCUK fundees may still choose Green) by adopting an effective Green OA compliance verification mechanism. That (rather than the obsession with how to spend “block grants” for Gold) is what the fundees’ institutions should be recruited to do for RCUK.

6. Discipline Differences: The main difference between the Gargouri, Lariviere, Gingras, Carr & Harnad estimates of average percent Gold in the ISI sample (2.5%) and the Laakso & Björk estimates (10.3% for 2010) probably arises because L&B’s sample included all ISI articles per year for 12 years (2000-2011), whereas ours was a sample of 1300 articles per year, per discipline, separately, for each of 14 disciplines, for 6 years (2005-2010: a total of about 100,000 articles).

7. Biomedicine Preponderance? Our sample was much smaller than L&B’s because L&B were just counting total Gold articles, using DOAJ, whereas we were sending out a robot to look for Green OA versions on the Web for each of the 100,000 articles in our sample. It may be this equal sampling across disciplines that leads to our lower estimates of Gold: L&B’s higher estimate may reflect the fact that certain disciplines are both more Gold and publish more articles (in our sample, Biomed was 7.9% Gold). Note that both studies agree on the annual growth rate of Gold (about 1%).

8. Growth Spurts? Our projection does not assume a linear year-to-year growth rate (1%); it detects it. There have so far been no detectable annual growth spurts (of either Gold or Green). (I agree, however, that Finch/RCUK could herald one forthcoming annual spurt of 6% Gold — the UK’s share of world research output — but that would be a rather pricey (and, I suspect, unscalable and unsustainable) one-off growth spurt.)

9. RCUK Compliance Verification Mechanism for Green OA Deposits: I certainly hope Stephen Curry is right that I am overstating the ambiguity of the RCUK policy!

But I was not at all reassured at the LSHTM meeting on Open Access by Ben Ryan’s rather vague remarks about monitoring RCUK mandate compliance, especially compliance with Green. After all, that (and not the failure to prefer and fund Gold) was the main weakness of the prior RCUK OA mandate.

Stevan Harnad