Open-access scientific publishing is gaining ground

At the beginning of April, Research Councils UK, a conduit through which the government transmits taxpayers’ money to academic researchers, changed the rules on how the results of studies it pays for are made public. From now on they will have to be published in journals that make them available free—preferably immediately, but certainly within a year.
In February the White House Office of Science and Technology Policy told federal agencies to make similar plans. A week before that, a bill which would require free access to government-financed research after six months had begun to wend its way through Congress. The European Union is moving in the same direction. So are charities. And SCOAP3, a consortium of particle-physics laboratories, libraries and funding agencies, is pressing all 12 of the field’s leading journals to make the 7,000 articles they publish each year free to read. For scientific publishers, it seems, the party may soon be over.


It has, they would have to admit, been a good bash. The current enterprise—selling the results of other people’s work, submitted free of charge and vetted for nothing by third parties in a process called peer review—has been immensely profitable. Elsevier, a Dutch firm that is the world’s biggest journal publisher, had a margin last year of 38% on revenues of £2.1 billion ($3.2 billion). Springer, a German firm that is the second-biggest journal publisher, made 36% on sales of €875m ($1.1 billion) in 2011 (the most recent year for which figures are available). Such firms are now, though, faced with competitors set up explicitly to cover only their costs. Some rely on charity, but many have a proper business model: academics pay a fee to be published. So, on the principle of “if you can’t beat ’em, join ’em”, commercial publishers, too, are setting up open-access subsidiaries.
Open for business
The biggest is BioMed Central, part of Springer. It was founded in 2000 and in February it published its 150,000th paper and also launched its 250th periodical, catchily entitled the Journal of Venomous Animals and Toxins Including Tropical Diseases. Days later Nature Publishing Group (NPG), which owns Nature and 81 other journals, and which itself belongs to the Georg von Holtzbrinck Publishing Group, another German firm, bought a majority stake in Frontiers, a Swiss open-access platform with 30 titles in 14 scientific fields. In combination, NPG and Frontiers publish 46 open-access journals and 7,300 free papers a year.
In the past year Elsevier has more than doubled the number of open-access journals it publishes, to 39. And even in those that usually charge readers (such as Cell and the Lancet), paying a publication fee makes a paper available free immediately.
Outsell, a Californian consultancy, estimates that open-access journals generated $172m in 2012. That was just 2.8% of the total revenue journals brought their publishers (some $6 billion a year), but it was up by 34% from 2011 and is expected to reach $336m in 2015. The number of open-access papers is forecast to grow from 194,000 (out of a total of 1.7m publications) to 352,000 in the same period.
Open-access publishers are also looking at new ways of doing business. Frontiers, for example, does not try to judge a paper’s significance during peer review, only its accuracy—an approach also adopted by the Public Library of Science (PLoS), a non-commercial organisation based in San Francisco that was one of the pioneers of open-access publishing. It thus accepts 80-90% of submissions.
Instead, a Frontiers paper’s merit is gauged after publication, using measures like the number of downloads. Frontiers also doubles as a social network for researchers to share news, job offers and information about conferences and events. This network currently has around 70,000 members.
PeerJ, founded last year, makes an even more dramatic departure from tradition. Rather than being charged publication fees, authors pay a one-off membership fee, which ranges from $99 to $298, depending on how many papers they want to publish each year. All co-authors must be members. The firm also deals neatly with the question of peer review. Members must review at least one paper a year.
Non-commercial open-access publishers, though, are fighting back. The Wellcome Trust (a British medical charity), the Max Planck Society (which runs a lot of German research institutes) and the Howard Hughes Medical Institute (an American charity) have set up eLife, a peer-reviewed journal that does not charge publication fees. And in January Jean-Pierre Demailly, of the University of Grenoble, in France, and a handful of fellow mathematicians launched the Episciences Project. This aims to show that researchers themselves can turn out refereed papers cheaply, bypassing traditional purveyors.
Episciences will piggyback on ArXiv, an online repository beloved of physicists and mathematicians—who often post work there as “preprints” before submitting it to journals. ArXiv is hosted by Cornell University at a cost of $830,000 a year. Tacking on an “epijournal”, so that refereed papers would sit alongside the preprints, should not add much to that.
Matthew Cockerill, BioMed Central’s boss, though, points out that Episciences’s publishing model may have its drawbacks. Academics who bypass publishers become publishers themselves. And that will be harder to do as the operation grows.
Who pays for lunch?
Other aspects of open-access publishing also draw polite scepticism from incumbents. The promiscuous approach of Frontiers and PLoS, for example, is at odds with the rejection by publications like Nature and its American counterpart, Science, of over 90% of submitted manuscripts. It is this selectivity that gives these journals their prestige. At the moment, publication in Nature, Science and a handful of similar journals is like a sprinkling of fairy dust. Everyone knows how tough it is to get in, so papers that do so are assumed to be special. This will be hard for open-access publications to emulate.
The rejected papers all have to be scrutinised, though—and even though peer review is free, this involves staff time and other costs. According to Nature, the cost per published paper is $40,000. If Nature is to stay in business in anything like its current form, someone will have to pay that.
Whether anyone will want to remains to be seen. Budgets are tight, and pressure for access to be open is growing. Intangible blessings of the sort bestowed by prestigious journals can vanish rapidly. Where the game will end is anybody’s guess.

For the Sake of Inquiry and Knowledge — The Inevitability of Open Access

It’s difficult to have a measured conversation about open access — the term widely used to refer to unrestricted online access to articles published in scholarly journals. People who believe that free and unrestricted access to peer-reviewed journal articles will undermine the viability of scholarly journal publishing disagree sharply with those who believe that only open access can expedite research advances and ensure the availability of that same scholarly literature. Arguments for and against open access tend to focus on implementation details, ignoring the powerful motivations underlying the phenomenon.

The open-access movement cannot be appreciated without an understanding of the complex and interdependent system that produces, evaluates, and distributes scholarly research results. For the past 60 years, five stakeholder communities have contributed to the system that enables the production of peer-reviewed research literature. In the simplest terms: funding agencies and foundations provide funds to conduct research; universities and other research organizations host the intellects who conduct the research, maintain the research facilities, and educate and train future researchers; authors, with no expectation of monetary compensation, write research articles describing their research findings; publishers accept contributed research papers on condition of copyright transfer, facilitate the editorial process, and manage the production and distribution processes needed for disseminating the articles; and libraries use institutional funds to purchase, organize, and preserve this publisher output and make it available for current and future research and teaching.
In a system this interdependent, destabilization at any one point perturbs critically important relationships. The advent of the Internet and digital formats was just such a disruption. Initially greeted with enthusiasm on all sides, the transition to digital formats and network distribution channels did not play out as all the stakeholders anticipated or would have liked. As publishers introduced restrictive contractual business models, raised prices (often disproportionally), experimented with digital rights management, and advocated for federal legislation favorable to their own business interests, other stakeholders became concerned about balance in the system and began to look for alternatives.
Authors in this system write to have impact, not for royalties. A distribution system that controls and constrains access to articles is anathema to researchers who seek wide influence rather than remuneration. Alternative options, which could fulfill the promise of the Internet as a tool for open and compatible digital publishing, gained early support in discussions. In 2002, the Declaration of the Budapest Open Access Initiative1 was the first formal call to action, followed the next year by both the Bethesda Statement on Open Access Publishing2 and the Berlin Declaration on Open Access to Knowledge in the Sciences and Humanities.3 The central concept of each of these calls to action was simple: peer-reviewed research articles, donated for publication by authors with no expectation of compensation, should be available online, free, and with the smallest possible number of usage restrictions.
A vision of open access to research results is not new. In July 1945, writing in the Atlantic Monthly, Vannevar Bush, then director of the U.S. Office of Scientific Research and Development, described just such an environment in his essay “As We May Think.” A staunch advocate of federal support for research in the physical and medical sciences, Bush challenged his fellow scientists and engineers to turn their postwar attention to the task of “making more accessible [the] bewildering store of knowledge.” Bush’s firm belief, which is still shared by academic authors, was that “a record if it is to be useful to science must be continuously extended, it must be stored, and above all it must be consulted.”
The extent to which access to knowledge is constrained and controlled by publishers’ business models is at the heart of the discontent researchers have with the current journal-publishing system. Peter Suber, a leading advocate of open access, articulates the view from the academy as follows: The “problem is that we donate time, labor, and public money to create new knowledge and then hand control over the results to businesses that believe, correctly or incorrectly, that their revenue and survival depend on limiting access to that knowledge.”4 Today, as in 1945, barriers to access to current and past knowledge are viewed by researchers as profoundly at odds with the advancement of knowledge.
Yet producing high-quality peer-reviewed articles has a cost. The fact that faculty members and researchers donate to publishers the ownership of their research articles — as well as their time and effort as reviewers — does not mean that there are no expenses associated with the production of high-quality publications. For all its known flaws, no one wants to destroy peer-reviewed publication. But the nonpublisher stakeholders in the scholarly communication system can no longer support the prices and access constraints desired by traditional publishers.
Discontent with the system extends well beyond authors. Government agencies have good reason to want the research they fund with taxpayer money to be broadly accessible and rapidly built upon; indeed, some would argue that public funders have an ethical imperative to demand open access. Charitable foundations similarly want to share the fruits of their investments in research and, like governments, need to be able to assess the impact and effectiveness of their funding. Recent policy decisions by Research Councils UK and the European Union5 demonstrate a broad and compelling international interest in increasing access to publicly funded research results.
Over the past decade, researchers, research institutions, and funding entities have been experimenting with channels of scholarly communication that serve as alternatives to traditional publishing. Many academic disciplines now utilize large open-access databases (such as arXiv and SSRN, the Social Science Research Network) to share research articles in the pre–peer-review stage. Hundreds of academic institutions and funding agencies now host open repositories of post–peer-reviewed articles that have been authored by grantees or members of their communities. Search engines, which are increasingly popular avenues to scholarly content, facilitate discovery and document use.
These and other experiments and alternatives to traditional publishing are leading the way to a digital, Internet-based, more open publishing system for peer-reviewed journals. The Directory of Open Access Journals (www.doaj.org) lists more than 8,000 open-access journals, many of which are highly regarded according to conventional metrics of excellence. Emerging business models include publication fees paid by authors once an article has been accepted for publication, direct support from research grants, and contributions from research institutions willing to fund publication systems for more openly accessible articles.
Research culture is far from monolithic. Systems that underpin scholarly communication will migrate to open access by fits and starts as discipline-appropriate options emerge. Meanwhile, experiments will be run, start-ups will flourish or perish, and new communication tools will emerge, because, as the Bethesda Open Access Statement puts it, “an old tradition and a new technology have converged to make possible an unprecedented public good. The old tradition is the willingness of scientists and scholars to publish the fruits of their research in scholarly journals without payment, for the sake of inquiry and knowledge. The new technology is the internet. The public good they make possible is the world-wide electronic distribution of the peer-reviewed journal literature and completely free and unrestricted access to it by all scientists, scholars, teachers, students, and other curious minds.”
There is no doubt that the public interests vested in funding agencies, universities, libraries, and authors, together with the power and reach of the Internet, have created a compelling and necessary momentum for open access. It won’t be easy, and it won’t be inexpensive, but it is only a matter of time.

Big changes to Open Access mandates

This year has seen huge changes made in the arguments for and against Open Access. The Wellcome Trust, one of the biggest funders of biomedical research in the UK, announced it would be enforcing its mandate for Gold Open Access (where research is freely available from the publisher; the model used by iMedPub) more strictly. The Finch Report, commissioned by the British government’s Science Minister, recommended a move to Gold Open Access (OA), although there were some concerns over how this will be funded. Finally, the UK Research Councils introduced stricter conditions on making research available either through author-pays OA or by depositing in online repositories (known as Green OA).
This is all very UK-centric, but the same trends are emerging throughout the world, with petitions reaching the White House and European funders mandating some form of Open Access by 2016. The UK’s Department for International Development has announced it will fund OA for the research it funds internationally, with the International Development Secretary, Andrew Mitchell, pointing out, “the most groundbreaking research is of no use to anyone if it sits on a shelf gathering dust”.
iMedPub’s policy is to ensure that all publications in all journals comply with these Open Access mandates, through deposition in PubMed Central, free online access without restriction, and the freedom for anyone to use and reuse the published data, subject to correct attribution, thanks to Creative Commons licensing. Your Journal Development Editor can provide more information on how this applies to your journal and authors.

Open access versus subscription journals: a comparison of scientific impact

Abstract

Background

In the past few years there has been an ongoing debate as to whether the proliferation of open access (OA) publishing would damage the peer review system and put the quality of scientific journal publishing at risk. Our aim was to inform this debate by comparing the scientific impact of OA journals with subscription journals, controlling for journal age, the country of the publisher, discipline and (for OA publishers) their business model.

Methods

The 2-year impact factors (the average number of citations to the articles in a journal) were used as a proxy for scientific impact. The Directory of Open Access Journals (DOAJ) was used to identify OA journals as well as their business model. Journal age and discipline were obtained from the Ulrich’s periodicals directory. Comparisons were performed on the journal level as well as on the article level where the results were weighted by the number of articles published in a journal. A total of 610 OA journals were compared with 7,609 subscription journals using Web of Science citation data while an overlapping set of 1,327 OA journals were compared with 11,124 subscription journals using Scopus data.

Results

Overall, average citation rates, both unweighted and weighted for the number of articles per journal, were about 30% higher for subscription journals. However, after controlling for discipline (medicine and health versus other), age of the journal (three time periods) and the location of the publisher (four largest publishing countries versus other countries) the differences largely disappeared in most subcategories except for journals that had been launched prior to 1996. OA journals that fund publishing with article processing charges (APCs) are on average cited more than other OA journals. In medicine and health, OA journals founded in the last 10 years are receiving about as many citations as subscription journals launched during the same period.

Conclusions

Our results indicate that OA journals indexed in Web of Science and/or Scopus are approaching the same scientific impact and quality as subscription journals, particularly in biomedicine and for journals funded by article processing charges.



Authors: Bo-Christer Björk1* and David Solomon2

1 Hanken School of Economics, Helsinki, Finland
2 College of Human Medicine, Michigan State University, East Lansing, MI, USA


BMC Medicine 2012, 10:73 doi:10.1186/1741-7015-10-73

The electronic version of this article is the complete one and can be found online at: http://www.biomedcentral.com/1741-7015/10/73

Background

Emergence and growth of open access

Over the last 20 years the publishing of scientific peer-reviewed journal articles has gone through a revolution triggered by the technical possibilities offered by the internet. Firstly, electronic publishing has become the dominant distribution channel for scholarly journals. Secondly, the low cost of setting up new electronic journals has enabled both scholars and publishers to experiment with new business models, where anybody with internet access can read the articles (‘open access’ or OA) and the required resources to operate journals are collected by other means than charging readers. Similarly, increased availability can be achieved by scientists uploading the prepublication versions of their articles published in subscription journals to OA web repositories such as PubMed Central. The majority of publishers now allow some form of archiving in their copyright agreements with authors, sometimes requiring an embargo period. Major research funders such as the National Institutes of Health (NIH) and the Wellcome Trust have started requiring OA publishing from their grantees either in open access journals (gold OA) or repositories (green OA). A recent study showed that 20.4% of articles published in 2008 were freely available on the web, in 8.5% of the cases directly in journals and in 11.9% in the form of archived copies in some type of repository [1].
In the latter half of the 1990s, when journals created by individual scientists dominated OA publishing, these journals were not considered by most academics to be a serious alternative to subscription publishing. There were doubts about both the sustainability of the journals and the quality of their peer review. These journals were usually not indexed in the Web of Science, and initially they lacked the prestige that academics need from publishing. Quite often their topics were related to the internet and its possibilities, as exemplified by the Journal of Medical Internet Research, which in 15 years has managed to become a leading journal in its field.
A second wave of OA journals consisted of established subscription journals, mainly owned by societies, whose publishers decided to make the electronic versions of their journals freely accessible. Such journals are particularly important in certain regions of the world, for example Latin America and Japan, where portals such as SciELO and J-STAGE host hundreds of journals at no cost to the publishers. One of the earliest journals to make its electronic version OA was the BMJ, which since 1998 has made its research articles freely available.
The third wave of OA journals was started by two new publishers, BioMedCentral and Public Library of Science (PLoS). They pioneered the use of article processing charges (APCs) as the central means of financing professional publishing of OA journals. Since 2000 the importance of the APC business model for funding OA publishing has grown rapidly. BioMedCentral was purchased in 2008 by Springer and over the last couple of years almost all leading subscription publishers have started full open access journals funded by APCs. The leading scientific OA journals using the APC model tend to charge between US$2,000 and US$3,000 for publishing but overall the average APC was US$900 in 2010 across all journals charging APCs listed in the Directory of Open Access Journals [2]. In many fields the payment of such charges is a substantial barrier to submissions. In a broad survey of authors who had published in scholarly journals, 39% of respondents who hadn’t published in OA journals mentioned problems in funding article-processing fees as a reason [3].
Subscription publishers have also tried an OA option called hybrid journals where authors can pay fees (typically in the range of US$3,000) to have the electronic versions of their articles OA as part of what is otherwise a subscription journal. The uptake for hybrid journals in general has been very limited at about 1% to 2% for the major publishers [4].

Does OA threaten to undermine scientific peer review?

The starting point for this study is the set of claims, made often by publishers and publishers’ organizations, that the proliferation of OA would set in motion changes in the publishing system which would seriously undermine the current peer review system and hence the quality of scientific publishing. Suber has written an excellent overview of this discussion [5]. Lobbying using this argument has in particular been directed against government mandates for OA, such as the one implemented by the NIH for its grantees. It is claimed that the resulting increase in posting of manuscript copies to OA repositories would lead to wide-scale cancellation of subscriptions, putting traditional publishers, both commercial and society-owned, in jeopardy and in the long run resulting in an erosion of scientific quality control. This scenario is based on the assumption that OA publishers would take over an increasing part of the publishing industry and would not provide the same level of rigorous peer review as traditional subscription publishers, which would result in a decline in the quality of scholarly publishing. The NIH have documented that their mandate has not in fact caused any harm to publishers [6].
The critique has in particular been focused on OA publishers that charge authors APCs. Superficially such publishers would seem to be inclined to accept substandard articles since their income is linearly dependent on the number of papers they publish. There have in fact been reports of some APC-funded OA publishers with extremely low quality standards [7]. Reports of such cases in the professional press such as the recent article ‘Open access attracts swindlers and idealists’ [8] in the Finnish Medical Journal, a journal read by the majority of practicing physicians in Finland, can by the choice of title alone contribute to a negative image of OA publishing. The founding of the Open Access Scholarly Publishers Association, which in particular strives to establish quality standards for OA journals, was in part a reaction by reputable OA publishers to the appearance of such publishers on the market.
One of the questions in the above-mentioned survey of scholarly authors [3] dealt with the ‘myths’ about open access, including the quality issue. On a Likert scale, researchers in general tended to disagree with the statements ‘Open access undermines the system of peer review’ and ‘Open access publishing leads to an increase in the publication of poor quality research’ (results reported in Figure 4; [3]). It thus seems that a majority of scholars, or at least those who completed this very widely disseminated survey, did not share this negative perception of the quality of OA publishing.

Aim of this study

Scientific quality is a difficult concept to quantify. In general terms, very rigorous peer review procedures should raise the quality of journals by screening out low-quality articles and improving manuscripts via the reviewers’ comments. In this respect one could assume that the novel peer review procedures used by certain OA journals such as PLoS ONE should lower the quality. However, such journals essentially leave it to the readers to affirm the quality through metrics such as the number of citations per article. In practice the only proxy for quality that is generally accepted and widely available across journals is citation statistics. In the choice of title for this article we have hence consciously avoided the term scientific ‘quality’ and chosen to use ‘impact’ instead, which is closely related to citations, as in the impact factor used in Journal Citation Reports.
It has now been 20 years since the emergence of the first OA journals and 10 years since the launch of the first major OA journals funded by APCs. The number of peer-reviewed articles published in OA journals was already around 190,000 in 2009 and growing at a rate of 30% per annum [9]. Roughly half of these articles are published in journals charging APCs [2]. Enough time has also passed so that the qualitatively better OA journals, and in particular journals that have been OA from their inception, are now being indexed by major citation indexes such as the Web of Science and Scopus. In the last few years academic search engines such as Google Scholar have also emerged, but the data generated by these automated searches are too unstructured to be used for a study of the citation counts of large numbers of articles or full journals. In contrast, both the Journal Citation Reports (JCR) and Scopus, via the data available on the SCImago portal, provide aggregated data in the form of impact factors, which can be used for comparing OA and subscription journals.
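To make the measure concrete: a JCR-style 2-year impact factor for year Y divides the citations received in Y by articles published in the two preceding years by the number of citable items published in those two years. The sketch below is illustrative only, with made-up counts rather than data from any real journal or from this study:

```python
def two_year_impact_factor(citations_received, items_published, year):
    """JCR-style 2-year impact factor for `year`.

    citations_received: dict mapping publication year -> citations
        received in `year` by items published in that publication year.
    items_published: dict mapping publication year -> number of
        citable items published in that year.
    """
    cited = citations_received[year - 1] + citations_received[year - 2]
    citable = items_published[year - 1] + items_published[year - 2]
    return cited / citable


# Hypothetical journal: its 2012 impact factor.
citations = {2010: 150, 2011: 90}   # citations received during 2012
items = {2010: 60, 2011: 40}        # citable items published
result = two_year_impact_factor(citations, items, 2012)  # (150 + 90) / (60 + 40) = 2.4
```

Real-world computation differs mainly in which document types are counted as "citable" in the denominator, a detail that varies between JCR and Scopus-based measures.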
This provides empirical data enabling us to ask meaningful questions such as: ‘How frequently are articles published in OA journals cited compared to articles in non-OA journals?’ Although citation level cannot directly be equated with scientific quality, it is widely accepted as a proxy for quality in the academic world, and it is the only practical way of getting comprehensive quantitative data concerning the impact of journals and the articles they contain. The aim of this study was thus to compare OA and subscription journals in terms of the average number of citations received at both the journal and article level.
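The journal-level versus article-level distinction can be illustrated with a small sketch (hypothetical figures, not the study's data): the journal-level mean treats every journal equally, while the article-level mean weights each journal's citation rate by how many articles it publishes, so large journals dominate the result.

```python
def average_impact(journals, weighted=False):
    """Average the impact factors of a set of journals.

    journals: list of (impact_factor, articles_published) tuples.
    weighted=False -> journal-level mean: each journal counts once.
    weighted=True  -> article-level mean: each journal weighted by
                      the number of articles it published.
    """
    if weighted:
        total_articles = sum(n for _, n in journals)
        return sum(f * n for f, n in journals) / total_articles
    return sum(f for f, _ in journals) / len(journals)


# Hypothetical set: one large high-impact journal, two small ones.
sample = [(4.0, 1000), (1.0, 100), (1.0, 100)]

journal_level = average_impact(sample)                  # (4 + 1 + 1) / 3 = 2.0
article_level = average_impact(sample, weighted=True)   # 4200 / 1200 = 3.5
```

With these made-up numbers the large journal pulls the article-weighted mean well above the journal-level mean, which is why reporting both views, as this study does, can give quite different pictures of the same journal set.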

Earlier studies

Over the past 10 years there have been numerous studies reporting that scientific articles that are freely available on the internet are cited more frequently than articles available only to subscribers (for overviews see Swan [10] and Wagner [11]). Most of these studies have been conducted by comparing articles in subscription journals where some authors have made their articles freely available in archives. Gargouri et al. [12] found a clear citation advantage of the same size both for articles where the author’s institution mandated OA and for articles archived voluntarily. They also found that the citation advantage was proportionally larger for highly cited articles. Some authors claim that when eliminating factors such as authors selecting their better work for OA dissemination, the advantage, at least concerning citations in Web of Science journals, is low or even non-existent. Evans and Reimer, using extensive Web of Science data, report an overall global effect of 8% more citations, but with a clearly higher level of around 20% for developing countries [13]. Davis, in a randomized trial involving 36 mainly US-based journals, found no citation effect but a positive effect on downloads [14]. His study was, however, limited to high-impact journals with wide subscription bases.
Assuming that there is some level of citation advantage, articles published in full OA journals would receive an additional citation advantage, beyond their intrinsic quality, from their availability. In practice it would, however, be very difficult to separate out the effects of these two underlying factors. A share of the articles in subscription journals (approximately 15%) also benefits from increased citations due to the existence of freely available archival copies, as noted for instance by Gargouri et al. [12]. If there were a consensus on the size of the citation advantage of being freely available, it would be possible to correct for this effect. Since the estimates of this factor vary so much across studies, we are hesitant to attempt such a correction.
However, we don’t necessarily need to explicitly take this factor into account when assessing the quality level of the global OA journal corpus. If articles in them on average get as many citations as articles in subscription journals, then their overall scientific impact (as measured by getting cited) is also equal. OA is just one of several factors influencing the citation levels of particular journals, others being the prestige of the journals, the interest of the topics of the articles, the quality of the layout for easy reading, timeliness of publication and so on.
Journals that were launched as OA from relatively new publishers such as PLoS or BMC have disadvantages in other respects. They lack the established reputation of publishers that have been in business for decades. The reputation of these journals is also hindered by a large, though shrinking, number of researchers who believe that electronic-only OA journals are somehow inferior to their more established subscription counterparts. In this study we will therefore make no attempt to look separately at the citation effect of OA, due to the complexity of the issue and the lack of a reliable estimate of the effect.
There are a few previous studies that have tried to determine the overall quality of OA journal publishing as compared to traditional subscription publishing. McVeigh studied the characteristics of the 239 OA journals included in the 2003 Journal Citation Reports [15]. Her report contains illustrative figures showing the positions of these journals in the ranking distribution within their respective scientific disciplines. Overall, OA journals were represented more heavily among the lower-ranking journals, but 14 OA journals were nevertheless in the top 10% of their disciplines. She also notes that 22,095 articles were published in these OA journals in 2003. In considering the results of this early study it is important to bear in mind the highly skewed regional and age distributions of the journals in question. Only 43% of the OA journals were published in North America or Western Europe, and the vast majority were old established journals that had recently decided to make their electronic content openly available.
Giglia [16] set out to duplicate the McVeigh study to the extent possible. Giglia was able to rely solely on the DOAJ index for information about which journals were OA, and identified 385 titles to study, using the 2008 JCR as the starting point. Giglia studied the distribution of titles across percentiles of rank within their disciplines using the same breakdown as McVeigh. All in all, the results were not much different from the earlier study: 38% of the 355 OA journals in the Science Citation Index and 54% of the 30 OA journals in the Social Science Citation Index were in the top half of the JCR rankings.
Miguel et al. [17] focused on how well represented gold and green OA journals are in citation indexes. They combined DOAJ data with data from the Scopus citation database, which covers more journals than the JCR, and could also use the average citation counts from the SCImago database. The results highlighted that OA journals have achieved a share of around 15% of all Scopus-indexed journals for Asia and Africa and a remarkable 73% for Latin America. Of particular interest for this study is that some of the figures in the article show the average number of citations per document in a 2-year window (calculated over journals) for particular journal categories. The overall average number of citations was around 0.8 for OA journals, 1.6 for subscription journals allowing green posting, and 0.8 for subscription journals not allowing green posting. They found highly differentiated average citation levels for nine broad disciplines, and very clear differences in citation levels between regions, with North American and European OA journals performing at a much higher level than journals from other parts of the world. In both the disciplinary and the regional breakdowns the non-OA journals followed the same patterns, so that the performance of OA journals relative to non-OA journals was fairly stable.

Methods

The data for this study were obtained from four databases: Ulrichsweb, Journal Citation Reports 2010 (JCR), SCImago Journal & Country Rank (SCImago), and the Directory of Open Access Journals (DOAJ). SCImago and DOAJ are openly available and provide their data in an easily downloadable format. Both our institutions subscribe to the electronic versions of Ulrichsweb and JCR, and we used our institutional access to these databases to obtain the information needed.
Ulrichsweb is a database of detailed information on more than 300,000 periodicals of all types. The JCR is the 2010 version of a database concerning the articles published and the citations received by the peer-reviewed journals indexed in the Web of Science citation index, a database of selected high-quality scholarly journals maintained by Thomson Reuters. This study largely focuses on the average number of citations received by a journal over the most recent 2-year period, commonly called an impact factor. SCImago provides open access to similar citation metrics for journals included in the Scopus citation database maintained by Elsevier. Scopus is similar to Web of Science but provides data on a larger number of journals. The DOAJ is a database of open access journals that provides basic information about the journals as well as immediate unrestricted access to full-text articles for some of them. Of these services, Web of Science, whose citation data are provided through the JCR, has the strictest inclusion criteria, followed by Scopus. DOAJ accepts all journals that fulfill certain criteria concerning open accessibility and peer review, whereas Ulrichsweb is open for any journal to self-report its data.
A limitation of this method is that journals not indexed in Web of Science or Scopus cannot be included, since there is no way to obtain their citation data systematically. Google Scholar could be used to study citations to individual journals in that index, but the process is extremely labor intensive and cannot be performed for large numbers of journals.
Studies have shown a high degree of correlation between the citation metrics of JCR and Scopus, although their absolute values differ. For instance, Pislyakov [18] studied the citedness of 20 leading economics journals using data from both JCR and Scopus and found that the correlation between the impact factors in the two indexes was 0.93 (Pearson). Sicilia et al. [19] also found a strong correlation between the two measures for computer science journals. Hence either one provides a good measure of the level of citations.
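The Pearson correlation reported in such comparisons can be computed from any paired set of journal metrics. The sketch below uses invented impact-factor values, not the data from Pislyakov's study:

```python
import math

def pearson(xs, ys):
    """Pearson product-moment correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical 2-year impact factors for the same five journals in JCR and Scopus:
# absolute values differ, but the two indexes rank the journals similarly.
jcr    = [2.1, 0.8, 3.5, 1.2, 0.4]
scopus = [1.9, 0.9, 3.1, 1.4, 0.5]
print(round(pearson(jcr, scopus), 3))
```

A high correlation here means either index orders journals in nearly the same way, which is why the authors treat the two as interchangeable measures of citation level.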
We used this mix of sources because we needed a number of data items for our analysis that could not be obtained from just one database. Ulrichsweb was used to obtain the start year for each journal as well as the up to five discipline categories in which it was classified. It was also used to identify the country of origin of the publisher. Being listed in the DOAJ was used as an indicator of whether a journal was open access and to determine if a journal charged APCs. The JCR was used to obtain the 2-year impact factor for each journal as well as the number of articles published in it in the most recent year available in the report, 2010. SCImago was used to obtain the 2-year citation count divided by number of articles published for Scopus indexed journals (in essence similar to the JCR impact factor) and the number of articles published in 2011.
To create a merged data set for analysis we started with the Ulrichsweb database, first narrowing the database to only journals that were: abstracted or indexed, currently active, academic/scholarly, refereed, and formatted as online and/or in print.
We selected all journals within those limits that were listed in the following discipline categories (based on the discipline coding used by Ulrichsweb): arts and literature; biological science; business and economics; chemistry; earth, space and environmental sciences; education; mathematics; medicine and health; physics; social sciences; technology and engineering. While there were other disciplines categorized in Ulrichsweb, these in our view captured the major scholarly disciplines. Many journals were listed under multiple disciplines. We recorded each discipline listed for each journal. The maximum for any journal was five. The data were retrieved in January 2012.
We then merged data from the other three databases to the journals identified in Ulrichsweb using either the International Standard Serial Number (ISSN) or the Electronic International Standard Serial Number (EISSN) as the identifier. There were 23,660 journals identified in Ulrichsweb meeting the criteria within the 11 disciplines of which 12,451 (52.6%) were in the SCImago database as of January 2012, 8,256 (35.0%) were in the JCR 2010 and 2,530 (10.7%) were in the DOAJ as retrieved from their web site in August 2011.
Citation metrics of OA and subscription journals were analyzed in two ways. First, they were analyzed with journals as the unit of analysis, the level at which the data were retrieved from the four databases. Second, we estimated the citation metrics of the articles published, by weighting the journal-level citation metrics by the number of articles published in each journal per year, using article counts provided by the JCR and SCImago databases. This gives each journal more or less weight according to the number of articles it publishes. We feel this adds a new and important dimension to the analysis compared to earlier studies.
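The article-weighted metric described above amounts to a weighted mean: each journal's impact factor is multiplied by its article count, the products are summed, and the sum is divided by the total number of articles. A minimal sketch with invented numbers:

```python
def article_weighted_impact(journals):
    """Weighted mean impact factor, weighting each journal by its article count."""
    total_articles = sum(n for _, n in journals)
    return sum(impact * n for impact, n in journals) / total_articles

# (impact factor, articles per year) for three hypothetical journals
journals = [(4.0, 100), (1.0, 300), (2.0, 100)]

print(article_weighted_impact(journals))            # article-weighted mean
print(sum(f for f, _ in journals) / len(journals))  # plain per-journal mean
```

Note how a single high-volume, low-impact journal pulls the article-weighted mean well below the per-journal mean; that sensitivity to publication volume is exactly the dimension this weighting adds.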
In the data collection and analysis process we found some problems with the SCImago data. The site allows downloading the basic article numbers and citation data for all journals as one Microsoft Excel file with the most current year's data. The data on impact factors and numbers of articles were for 2011, but the article and citation counts appear not to be complete for the full year, so that both the article numbers and the impact factors are too low. This could easily be checked for individual journals, and it turned out that the impact factors for 2010 and preceding years were in most cases almost double the 2011 figures. A comparison with the journal-level analysis in Miguel et al. [17] pointed in the same direction. Unfortunately it was not possible to extract the older data for the over 12,000 journals in the study, so we were limited to using the incomplete 2011 data.
We nevertheless feel that the analysis using Scopus data provides a useful triangulation with the JCR analysis. Provided that the undercounting for 2011 is systematic across all journals, with no differentiation between OA and subscription journals, the citation levels of OA and subscription journals relative to each other should remain the same, even though the absolute levels are lower. When the numbers were compared with the JCR-based figures, the proportions between OA and subscription citation rates were approximately the same in both sets, supporting the conclusions we later illustrate mainly with the JCR results.
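The robustness argument above can be illustrated numerically. The impact values and the undercounting factor below are invented for illustration only:

```python
# If the incomplete 2011 Scopus data undercounts every journal by the same
# factor, the OA-to-subscription ratio is preserved even though the
# absolute citation levels drop.
oa_impact, sub_impact = 1.2, 1.8   # hypothetical "true" average impacts
undercount = 0.5                   # hypothetical systematic scaling in 2011 data

ratio_true = oa_impact / sub_impact
ratio_2011 = (oa_impact * undercount) / (sub_impact * undercount)

print(round(ratio_true, 6), round(ratio_2011, 6))
```

The scaling factor cancels in the ratio, which is why the comparison survives the incomplete counts; it would fail only if the undercounting differed between OA and subscription journals.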

Results

The results were calculated using 2-year average citations (impact factors) from the JCR and Scopus (via SCImago), both by journal and weighted by the number of articles in each journal as described above. OA and subscription journals were compared by the time period in which they were launched (pre-1996, 1996 to 2001, and 2002 to 2011), by country of publication grouped into the four largest publishing countries (USA, UK, the Netherlands, and Germany) versus other countries, by scientific discipline (medicine and health versus other), and by business model (OA funded by APCs, OA not funded by APCs, and subscription).
Table 1 provides a comparison of the impact factors for OA and subscription journals based on journals in the JCR and Scopus databases. OA journals had impact factors that were approximately 76% and 67% as high as those of subscription journals in JCR and Scopus respectively when analyzed by journal, and 73% and 62% when weighted by articles published. Due to our concerns, outlined in the Methods section, about the Scopus data from the SCImago Journal and Country Rank site, only JCR figures are presented and discussed below.
Table 1. The 2-year citation averages for open access versus subscription journals, calculated using Web of Science or Scopus data
Figure 1 shows the average JCR impact factor for OA and subscription journals weighted by the number of articles as a function of the time period the journal was launched and location of the publisher. The left side of the figure includes the journals from the four countries where most of the major society and commercial publishers are located. The publishers in these four countries account for approximately 70% of the journals in our sample. The right side of the figure includes journals publishing in the rest of the world.
Figure 1. Citation averages as a function of the journal start year for two regions. The figures are based on Web of Science and weighted by journal article volumes.
There are large differences in impact factors between the two regions, with the 'big four' countries on average having journals with significantly higher impact factors. Somewhat surprisingly, in this region more recently launched journals tended to have higher impact scores than the older, more established journals. This was true for both subscription and OA journals. In addition, the difference in impact between OA and subscription journals narrows with time.
The pattern for journals from the rest of the world is quite different. While the overall number of journals published is much lower, the number of OA journals is actually quite high in the pre-1996 group, where OA journals have a clearly lower impact. This group largely consists of old established print journals that at some stage opened up their electronic versions. In the middle time period OA journals outperformed subscription journals, and in the youngest group they were on a par with subscription journals.

Effects of the discipline of the journals

Several studies have shown that gold open access journals have had a larger uptake in the biomedical fields [1,15], where authors usually have fewer problems financing APCs and where many research funders also require some form of OA for the results. Figure 2 shows the average JCR impact factor of OA and subscription journals weighted by the number of articles as a function of discipline. The journals were split into two groups. The first included journals with the Ulrichsweb discipline category 'Medicine and Health'; all other disciplines were combined into the second group.
Figure 2. Citation averages as a function of the journal start year for Medicine and Health versus all other disciplines. The figures are based on Web of Science and weighted by journal article volumes.
In medicine and health, the large difference in impact between OA and subscription journals seen in older journals essentially disappears among journals launched after 2001. This probably reflects the emergence of high-quality professional OA publishers, such as PLoS and BioMed Central, that rely on APCs for funding. For the other disciplines, OA articles had considerably lower impact scores in journals launched before 1996 and after 2001, but the average impact of OA articles in journals launched between 1996 and 2001 was essentially equal to that of articles in subscription journals launched in the same period. A review of the raw data showed that the high average impact of OA articles in this period was due to a handful of relatively high-impact, high-volume OA journals published by BioMed Central that had been classified as biological rather than medical journals.

Effects of the revenue model of OA journals

Figure 3 compares subscription journals, OA journals funded by APCs, and OA journals that do not charge APCs as a function of journal age. As noted above, the early OA journals were funded through volunteer effort and small subsidies, largely from universities. Beginning with BioMed Central and PLoS in about 2001, a growing number of professional publishers have begun publishing OA journals, funding their operations by charging publication fees.
Figure 3. Citation averages for open access journals using article processing charges (APCs) versus those that are free for authors to publish in, compared to impact factors for subscription journals.
The impact of OA journals that are not funded by APCs is more or less the same irrespective of journal age, at about 1.25. The oldest age category consists mainly of print journals that have made their electronic versions freely available. The average impact of APC-funded OA journals increased markedly in the period 1996 to 2001 and, to a lesser extent, in 2002 to 2011, nearly reaching the level of subscription journals at about 3.2. We expect that the 89 APC-funded journals launched before 1996 largely comprise subscription journals that converted to the APC model of OA publishing. A number of these journals are published by Hindawi, which did in fact transition from a subscription publisher to an OA publisher funded by APCs [20]. The others are published by a variety of publishers, universities, societies, and other organizations from around the world.

Discussion

The distribution of OA journals over time periods and regions differs markedly from the corresponding distribution of subscription journals. OA journals are much more numerous in categories that have low overall impact factors, which may explain some of the difference in average impact between OA and subscription journals. Almost half (302) of all OA journals found in the JCR were started before 1996 and published in the 'other countries' region. While over 75% of the subscription journals in the JCR were also launched before 1996, nearly 70% of subscription journals come from publishers in the four major publishing countries. As can be seen in Figure 1, across all age categories and for both OA and subscription journals, those published outside the four major publishing countries have substantially lower impact factors. While correlation is not necessarily causation, the location of the publisher appears to account for much of the difference in average impact between OA and subscription journals.
The vast majority of journals founded before 1996 that are listed in the JCR started as paper-based subscription journals. Those listed as OA must at some stage have made their electronic versions open access. Many of these are journals published by scientific societies and universities but at least in one case (Hindawi) a publisher converted their whole portfolio from subscription to OA.
Both in the leading publishing countries and in the rest of the world, older established journals that have made their electronic versions openly available have lower impact scores than their subscription counterparts. This is understandable, since the large commercial publishers and the leading society publishers have usually refrained from opening up their e-content, BMJ being a notable exception. But for the newer journals, particularly in medicine and health, our results show that OA journals perform at about the same level as subscription journals, in fact receiving more citations in some subcategories.
For almost 15 years the quality of OA journals has been debated and questioned. In the early days of electronic journals, when hardly any startup OA journals were operated by reputable professional publishers, it was easy to understand the reluctance of scientists to submit their best manuscripts to OA journals, and of research funders and university promotion and tenure committees to accept publishing in OA journals as on a par with publishing in traditional subscription journals. After the launch of professionally run, high-quality biomedical OA journals beginning in about 2000, the situation changed. Today the funding mechanism of a journal is irrelevant in considering its quality. There are large numbers of both subscription and OA journals that are high quality and widely cited.
The development and increasing acceptance of the APC funding model for OA scholarly journals has spawned a group of publishers with questionable peer review practices that seem focused on making short-term profits by having low or non-existent quality standards. Unfortunately this has created some bad publicity for OA publishing. As this study demonstrates, this does not change the broad picture. Gold OA publishing has increased at a rate of 30% per year over the past decade [9] and in the last couple of years many major subscription publishers have started adding pure OA journals to their portfolios.
We believe our study of the quality of the OA journals indexed in either Web of Science or Scopus is the most comprehensive to date. The results indicate that the level of citations for older OA journals that began as subscription journals and have made their electronic versions openly available is clearly lower than for the corresponding subscription journals. At the same time, newly founded full OA journals compete on almost equal terms with subscription journals founded in the same period. OA articles published in medicine and health by publishers in the four largest publishing countries attract equal numbers of citations compared to subscription journals in these fields. Based on the evidence from earlier studies, it is likely that part of the citations to OA articles are due to the increased readership that follows from open availability, but there is no way we can isolate the effect of this factor in our calculations, nor would this factor alone account for the increasing respect researchers are showing for these journals through their citations.
Criticism of OA journals has focused on journals funding their operations with APCs, on the grounds that this revenue model leads journals to lower their review standards in order to maximize profits. While there is clearly a substratum of journals reflecting this phenomenon, there is also a growing number of high-quality APC-funded journals from reputable publishers that are on a par with their subscription counterparts.

Conclusions

In summary, gold OA publishing is rapidly increasing its share of the overall volume of peer-reviewed journal publishing, and there is no reason for authors not to choose to publish in OA journals just because of the ‘OA’ label, as long as they carefully check the quality standards of the journal they consider.

Competing interests

There are no competing financial interests. Both authors founded OA journals in the 1990s and are emeritus editors-in-chief. B-CB is a current and DS a former board member of the Open Access Scholarly Publishers Association.

Authors’ contributions

B-CB initiated the study and wrote most of the background sections of the article. DS collected the data from the different sources and made the calculations. Both authors participated equally in the analysis of the results and the drawing of conclusions.

Authors’ information

B-CB is professor of Information Systems Science at the Hanken School of Economics, Helsinki, Finland. DS is Professor of Medicine at the College of Human Medicine, Michigan State University, USA.

References

  1. Björk B-C, Welling P, Laakso M, Majlender P, Hedlund T, Guðnason G: Open access to the scientific journal literature: situation 2009. PLoS ONE 2010, 5:e11273.
  2. Solomon DJ, Björk B-C: A study of open access journals using article processing charges. J Am Soc Inf Sci Technol, in press.
  3. Dallmeier-Tiessen S, Darby R, Goerner B, Hyppoelae J, Igo-Kemenes P, Kahn D, Lambert S, Lengenfelder A, Leonard C, Mele S, Nowicka M, Polydoratou P, Ross D, Ruiz-Perez S, Schimmer R, Swaisland M, van der Stelt W: Highlights from the SOAP project survey. What scientists think about open access publishing. [http://arxiv.org/abs/1101.5260v2]
  4. Dallmeier-Tiessen S, Goerner B, Darby R, Hyppoelae J, Igo-Kemenes P, Kahn D, Lambert S, Lengenfelder A, Leonard C, Mele S, Polydoratou P, Ross D, Ruiz-Perez S, Schimmer R, Swaisland M, van der Stelt W: Open access publishing – models and attributes. SOAP project report, Max Planck Society digital library; 2010. [http://edoc.mpg.de/478647]
  5. Suber P: Will open access undermine peer review? SPARC Open Access Newsletter, issue 113; 2009. [http://www.earlham.edu/~peters/fos/newsletter/09-02-07.htm]
  6. NIH: NIH Public Access Policy. [http://publicaccess.nih.gov/public_access_policy_implications_2012.pdf]
  7. Gilbert N: Editor will quit over hoax paper: computer-generated manuscript accepted for publication in open-access journal. Nature News 2009. [http://www.nature.com/news/2009/090615/full/news.2009.571.html]
  8. Järvi U: Open access attracts swindlers and idealists [in Finnish]. Finn Med J 2012, 67:666-667.
  9. Laakso M, Welling P, Bukvova H, Nyman L, Björk B-C, Hedlund T: The development of open access journal publishing from 1993 to 2009. PLoS ONE 2011, 6:e20961.
  10. Swan A: The open access citation advantage: studies and results to date. Technical report, School of Electronics & Computer Science, University of Southampton; 2010. [http://openaccess.eprints.org/index.php?/archives/716-Alma-Swan-Review-of-Studies-on-Open-Access-Impact-Advantage.html]
  11. Wagner A: Open access citation advantage: an annotated bibliography. Iss Sci Technol Librarian 2010, 60. [http://www.istl.org/10-winter/article2.html]
  12. Gargouri Y, Hajjem C, Larivière V, Gingras Y, Carr L, Brody T, Harnad S: Self-selected or mandated, open access increases citation impact for higher quality research. PLoS ONE 2010, 5:e13636.
  13. Evans J, Reimer J: Open access and global participation in science. Science 2009, 323:1025.
  14. Davis P: Open access, readership, citations: a randomized controlled trial of scientific journal publishing. FASEB J 2011, 25:2129-2134.
  15. McVeigh M: Open access journals in the ISI citation databases: analysis of impact factors and citation patterns. Citation study from Thomson Scientific; 2004. [http://science.thomsonreuters.com/m/pdfs/openaccesscitations2.pdf]
  16. Giglia E: The impact factor of open access journals: data and trends. In Proceedings of the 14th International Conference on Electronic Publishing (ELPUB 2010), 16-18 June 2010, Helsinki, Finland. Edited by Hedlund T, Tonta Y. Hanken School of Economics; 2010:17-39. [http://elpub.scix.net/cgi-bin/works/Show?102_elpub2010]
  17. Miguel S, Chinchilla-Rodríguez Z, de Moya-Anegón F: Open access and Scopus: a new approach to scientific visibility from the standpoint of access. J Am Soc Inf Sci Technol 2011, 62:1130-1145.
  18. Pislyakov V: Comparing two “thermometers”: impact factors of 20 leading economic journals according to Journal Citation Reports and Scopus. Scientometrics 2009, 79:541-550.
  19. Sicilia M-A, Sánchez-Alonso S, García-Barriocanal E: Comparing impact factors from two different citation databases: the case of computer science. J Informetrics 2011, 5:698-704.
  20. Peters P: Going all the way: how Hindawi became an open access publisher. Learn Pub 2007, 20:191-195.

The OA Interviews: Jeffrey Beall, University of Colorado Denver

In 2004 the scholarly publisher Elsevier made a written submission to the UK House of Commons Science & Technology Committee. Elsevier asserted that the traditional model used to publish research papers — where readers, and institutions like libraries, pay the costs of producing scholarly journals through subscriptions — “ensures high quality, independent peer review and prevents commercial interests from influencing decisions to publish.”
Elsevier added that moving to the Open Access (OA) publishing model — where authors, or their sponsoring institutions, paid to publish research papers by means of an article-processing charge (APC) — would remove “this critical control measure” from scholarly publishing.
The problem with adopting the gold OA model, explained Elsevier, is that publishers’ revenues would then be driven entirely by the number of articles published. As such, OA publishers would be “under continual pressure to increase output, potentially at the expense of quality.”
This is no longer a viewpoint that Elsevier promulgates. Speaking to me earlier this year, for instance, Elsevier’s Director of Universal Access Alicia Wise said, “Today open access journals do generally contain high-quality peer reviewed content, but in 2004 this was unfortunately not always the case.”
She added, “Good work in this area by the Open Access Scholarly Publishers Association (OASPA) has helped to establish quality standards for open access publications. For several years now Elsevier has taken a positive test-and-learn approach to open access and believes that open access publishing can be both of a high quality and sustainable.”
Prescient
While many OA publishers today are undeniably as committed to the production of high-quality papers as subscription publishers ever were, Elsevier’s 2004 warning was nevertheless prescient.
No one knows this better than Jeffrey Beall, a metadata librarian at the University of Colorado Denver. Beall maintains a list of what he calls “predatory publishers”. That is, publishers who, as Beall puts it, “unprofessionally exploit the gold open-access model for their own profit.” Amongst other things, this can mean that papers are subjected to little or no peer review before they are published.
Currently, Beall’s blog list of “predatory publishers” lists over 100 separate companies, and 38 independent journals. And the list is growing by 3 to 4 new publishers each week.
Beall’s opening salvo against predatory publishers came in 2009, when he published a review of the OA publisher Bentham Open for The Charleston Advisor. Since then, he has written further articles on the topic (e.g. here), and has been featured twice in The Chronicle of Higher Education (here and here).
His work on predatory publishers has caused Beall to become seriously concerned about the risks attached to gold OA. And he is surprised at how little attention these risks get from the research community. As he puts it, “I am dismayed that most discussions of gold open access fail to include the quality problems I have documented. Too many OA commenters look only at the theory and ignore the practice. We must ‘maintain the integrity of the academic record’, and I am doubtful that gold open access is the best long-term way to accomplish that.”
When presented with evidence of predatory publishing, OA advocates often respond by saying that most OA journals do not actually charge a processing fee. 

But as commercial subscription publishers increasingly enter the OA market it would be naïve to think that the number of journals that charge APCs will not grow exponentially in the coming years.

Whether this will lead to an overall increase in quality remains to be seen. It must be hoped that as more and more traditional journals embrace OA, so quality levels will rise, and predatory publishers will begin to be squeezed out. 

However, if Beall’s growing list is anything to go by, the omens are not currently very good. Moreover, if it turns out that there is indeed an inherent flaw in the gold OA model — as Elsevier once claimed — then the research community would appear to have a long-term problem.


The interview begins …

RP: You are a metadata librarian: what does your job involve?
JB: As a faculty librarian, my work is divided up into three components: librarianship, research, and service. My librarianship work involves creating and maintaining library metadata in my library’s discovery systems, including the online catalogue, the discovery layer, and the institutional repository, and related duties.
My research component is thirty per cent of my job, and I am devoting it to my research in scholarly communication. The service component chiefly involves committee work.
RP: How and when did you become interested in predatory open access publishing?
JB: I became interested in predatory publishers in 2008 when I began to receive spam email solicitations from new, online, third-world publishers.
RP: What is the purpose of the list of predatory OA publishers you keep, and how many publishers does it currently include?
JB: The lists are part of my blog. I write the blog to help myself develop my ideas and to share what I am learning about scholarly open-access publishing. The lists are a means of sharing information about publishers I have judged to be questionable or predatory.
There are actually two lists, one of independent journals that do not publish under the aegis of a publisher, and one of publishers. There are 38 independent journals and 111 publishers currently on the list.
RP: When you say independent journals do you mean journals published by researchers themselves?
JB: No, I mean journals that exist independently on the Internet and are not part of a publisher’s fleet of journals. An example is the Global Journal of Medicine and Public Health.

Criteria

RP: Do you have any sense of how fast the phenomenon of predatory publishing is growing?
JB: Yes, the attention my blog has received has inspired academics and others to forward me spam emails they have received and to pass on information they have about new, questionable publishers. In the last couple of months, I have been adding three or four per week. A new predatory publisher appears almost weekly in India, the location of most of my recent listings.
RP: Is predatory publishing in your view a phenomenon that originates primarily in the developing world?
JB: Yes, and in this I include publishers in the U.S., Canada, Australia, and the U.K. that are run by people from developing countries. They typically set up shop in developed countries and then market their services (vanity scholarly publishing) to the unwary worldwide, especially to those in their home countries.
RP: How do you define a predatory publisher?
JB: Predatory publishers are those that unprofessionally exploit the gold open-access model for their own profit.
RP: Presumably this implies publishers that charge a fee to publish scholarly papers (not all gold OA journals charge a fee)?
JB: By definition, gold open-access publishers levy an article processing charge (APC).
RP: How do you select publishers to include in your list? What criteria do you use?
JB: As I mentioned, most of the additions to the list result from tips from scientists and other scholars. I have composed and use a criteria document, currently in draft form, that I am preparing for publication on my blog.
Most importantly, I use established criteria, specifically those published by the Committee on Publication Ethics (COPE), the Open Access Scholarly Publishers Association (OASPA), and the International Association of Scientific, Technical & Medical Publishers (STM). There is one statement in COPE’s code of conduct that nicely encapsulates all the criteria into one: “Maintain the integrity of the academic record”.

OASPA

RP: Can you say what specific things you look for when assessing a potentially predatory publisher: for instance, do you look for evidence of spamming, poor or no peer review, the absence of information on ownership and/or location of the publisher, lack of an editor-in-chief, or editorial board, or what? What are the tell-tale signs of a predatory publisher?
JB: Yes, broadly I look for deception and lack of transparency. These two characteristics can manifest themselves in many ways, including those you list. One thing (among many) that I look for is publishers that refer to themselves as a “center”, “institute”, “network”, etc. For example, the Institute of Advanced Scientific Research is not really an institute; it’s a predatory publisher. This is deception. If you look at their contact address on Google Maps, it’s an apartment.
RP: Is there such a thing as a subscription-based predatory publisher?
JB: No, not according to my definition of predatory publisher.
RP: You mentioned OASPA. OASPA has been accused of doing too little to stem the tide of questionable OA publishers. Would you agree? Could it be doing more? If so, what? On the other hand, might OASPA be the wrong organisation to attempt to control these activities? What is and should be OASPA’s role (if any) vis-à-vis predatory publishing?
JB: Only one or two of the publishers on my list are OASPA members. Therefore, there’s little the organization can do to control the predatory publishers. In fact, most of the publishers on my list lack affiliation with any professional association, and they fail to follow many established publishing standards. It’s not really my role to tell OASPA what it should be doing.
RP: One of OASPA’s founding members, Hindawi, was at one time on your watchlist, but subsequently you removed it. However, your current list of predatory publishers still includes the International Scholarly Research Network (ISRN). ISRN is one of Hindawi’s brands. What do we learn from this?
JB: If you’re a publisher, don’t call yourself a network when you’re not a network.
RP: When and why do you remove a publisher from your list?
JB: I have removed publishers from my list for two reasons. First, if the publisher’s website disappears, I remove it from the list. This has happened only once or twice, and I removed them from the list without saving the names. Second, I remove a publisher from the list when I receive convincing comments from colleagues disagreeing with my having added it to the list.

Legal threats

RP: Have you ever removed a publisher from your list as a result of receiving a legal threat? Have you ever received any legal threats in connection with your list?
JB: My answer to the first question is no. Regarding the second question, yes, I have received two legal threats.
RP: I am struck that at least one of the publishers that you have removed from your list — Dove Press — was formerly a member of OASPA. Dove has been the subject of some controversy, and is no longer a member of OASPA. Why did you remove Dove from your list of questionable publishers?
JB: I removed it based on comments that JQJohnson left on my old blog. He is Director, Scholarly Communications and Instructional Support, at the University of Oregon and someone whose opinion I respect. I took his comments as a form of “peer review” and decided to accept his suggestion to remove Dove Press from the list.
RP: People have said to me that you tend to “shoot from the hip” when listing publishers as predatory, sometimes making your decision on too little information. Would you agree? Have you ever regretted putting a publisher on your list?
JB: In most cases, the decision to place a given publisher on my list is an easy one because the publisher is so clearly corrupt and predatory. Thus, a decisive and resolute action is appropriate, and no, I don’t agree, for I believe I make the decisions with sufficient information.
I now regret having the watchlist on my earlier blog. The feedback I received indicated that the watchlist painted a negative picture of the publishers on that list given the context in which the list appeared (juxtaposed with a list of predatory publishers). I acted on the feedback and now no longer have a public watchlist, though I do maintain one privately.

Conflict of interest?

RP: Others have suggested that you might have a conflict of interest, pointing out, for instance, that you are on the editorial board of a subscription journal. Should such claims be taken seriously? Why? Why not?
JB: Two people have said that. One is Scott Albers, an attorney from Great Falls, Montana and author of the article “The Golden Mean, The Arab Spring And a 10-Step Analysis of American Economic History”, a paper published in the Middle East Studies Online Journal. He asked me for advice as he was submitting the same article to a second publisher. I told him the publisher was essentially a vanity press, and he became offended and then contrived the conflict of interest story. The second is Ken Masters, the editor of Internet Scientific Publications’ The Internet Journal of Medical Education. Masters is an assistant professor at Oman’s Sultan Qaboos University, and he took it personally when I put Internet Scientific Publications, a publisher run out of a spare bedroom in Sugar Land, Texas, on my list.
The truth is there is no conflict of interest. I have no financial stake in Taylor & Francis, the publisher of the journal on whose editorial board I serve. In point of fact, my service on the editorial board has enabled me to learn a lot about the scholarly publishing process and scholarly publishing in general. Masters has been trying to bait people on email lists, including LIBLICENSE, with the conflict of interest story, but he has been ignored.
RP: The implication in the above claim, I assume, is that you are anti-OA. How would you describe your position vis-à-vis OA: advocate, sceptic, opponent?
JB: I am not “anti” anything. I am in favour of the best model for scholarly communication, whatever it turns out to be. If that is gold OA, then so be it.
I review science books for Library Journal. Occasionally, I’ll give a book a negative review. That doesn’t mean I’m anti-science. My list is essentially a collective review of gold open-access publishers. It’s a re-invention of what librarians call “readers’ advisory”.
RP: Whatever your position vis-à-vis OA, do you think the author-pays publishing model is inherently flawed so far as scholarly publishing is concerned?
JB: It’s too early to tell, so I don’t have a final opinion on this yet. On the one hand, the evidence I see every day argues that the model is indeed flawed. On the other hand, we need to ask which model is best for the future of scholarly communication. It’s too early to eliminate a potentially successful and sustainable model.

Abused the system

RP: I assume most researchers publish in the journals of predatory publishers without realising that they are dealing with a predatory publisher — and clearly a list like yours can play a useful role in helping them avoid doing so. On the other hand, I have had researchers say to me that they have knowingly paid to appear in a predatory publisher’s journal, explaining that they did so because they were having difficulties being published in a more reputable journal, or simply needed to get a paper published quickly for tenure or promotion purposes. I do not know how common the practice is, but does it not suggest that the research community is conspiring in the growth of predatory publishers, and, therefore, that the phenomenon is likely only to grow?
JB: I don’t think there’s a conspiracy, but I do think that some individuals have unprofessionally abused the system for their own benefit. But that’s why we have tenure and promotion committees. It is the committees’ job to vet the research of their tenure candidates. Tenure and promotion committees must now bring greater scrutiny to candidates’ published works than they did in the past, given the presence and abuse of scholarly vanity presses and the disappearance of the validation function that traditional publishers have so effectively provided.
RP: In the UK recently the Finch Report recommended that all publicly funded research should be made freely available on an OA basis, and by means of gold OA. This, it said, would require UK universities to pay an additional £50-60 million a year in order to disseminate the research they produce. If other countries follow suit, and if the author-pays model does indeed turn out to be inherently flawed, we can presumably expect the research community to find itself in trouble at some point can we not?
JB: Yes, and I am dismayed that most discussions of gold open access fail to include the quality problems I have documented. Too many OA commenters look only at the theory and ignore the practice. We must “maintain the integrity of the academic record”, and I am doubtful that gold open access is the best long-term way to accomplish that.
RP: What future plans do you have for your work on predatory publishers? Will you be adding new features to your blog, for instance?
JB: One weakness of my list is that it is binary: a publisher is either on the list or it isn’t. I would like to classify the publishers more granularly in terms of their quality, an upgrade that would differentiate between the borderline ones and the really bad ones. I am also in the middle of a research project about library catalogues and their inclusion of predatory journals, and I hope to carry out additional research on open-access publishing.

Briefing paper on Open Access Business Models for research funders and universities

This briefing paper offers insight into various open access business models, from institutional to subject repositories, and from open access journals to research data and monographs. The overview shows that there is considerable variety in business models within a common framework of public funding. Open access through institutional repositories requires funding from particular institutions to set up and maintain a repository, while subject repositories often require contributions from a number of institutions or funding agencies to maintain a repository hosted at one institution. Open access through publication in open access journals generally requires a mix of funding sources to meet the cost of publishing. Public or charitable research funding bodies may contribute part of the cost of publishing in an open access journal, but institutions also meet part of the cost, particularly when the author does not have a research grant from a research funding body.

To some extent the benefits follow the funding: institutions and their staff are the primary beneficiaries of institutional repositories, while national research funding agencies may be the primary beneficiaries of open access publication of the research they fund. All open access business models, however, also allow benefits to flow to communities that have not been part of the funding infrastructure.

The briefing paper ‘Open Access Business Models for research funders and universities’ was commissioned by Knowledge Exchange and was written by Fred Friend.

The briefing paper is available for download here.

New Berlin Declaration

The Berlin Declaration on the Future of the Digital Press was launched by the Periodical Press on 16 March 2011 in Berlin. Unlike its namesake (the Berlin Declaration on Open Access), it does not call for OA but for a less restrictive publishing environment in which publishers are free to manage their own business models.
The declaration has five conditions:
  • Maintenance of existing press freedom: a call to minimise restrictions on advertising as well as freedom of expression.
  • Freedom to experiment and manage innovative business models: a call for parity in negotiations with digital players (not mentioned but clearly aimed at Amazon, Apple and Google).
  • A strong copyright protection: including tighter control over allowable reuse of content.
  • Reduced VAT rates for digital as well as print publications: asking for zero rates on both digital and print.
  • Fair competition and transparency in the digital world: asking for legislation to prevent locked-in technologies that restrict mobile platforms for digital works.
The declaration makes no mention of open access and is aimed at the trade, rather than the research publishing area. The initiating partners are the European Federation of Magazine Publishers and the Association of German Magazine Publishers.

Book "Open access in Southern European countries"

This book is promoted by FECYT.
The Spanish Foundation for Science and Technology (FECYT) is a public foundation under the Spanish Ministry of Science and Innovation. Its mission is to strengthen the value chain of knowledge by fostering science and innovation, integrating them, and bringing them closer to society, in response to the needs and expectations of the Spanish science, technology and enterprise system. The Foundation aims to be recognised by Spanish society as a key reference in the dissemination, information and measurement of science and innovation, and to contribute to the development of a knowledge-based economy.

Access full text book

Video recordings of the 2nd Conference on Open Access Scholarly Publishing

  1. Open Access Publishing: Retaining the core, stimulating progress
  2. Open, free, or hybrid? Open access at the BMJ Group
  3. Establishing an Institutional OA Publishing Fund: The UC Berkeley Experience
  4. BioMed Central’s Membership Schemes
  5. PLoS Institutional Membership Program