Promoting reproducibility by emphasizing reporting: PLOS ONE’s approach

As we celebrate PLOS ONE’s ten-year anniversary, we continue our commitment to uphold rigorous standards for publications across all scientific disciplines. This is not a small goal. Several years ago, the staff editors at

Online Platforms for Recruiting and Motivating Reviewers

Authors and publishers have easily understandable motivations for participating in scholarly publishing, but the motivation for reviewers is less clear. This post highlights the need to recognize and reward reviewers, and describes how online platforms can help achieve this objective while also serving as a source for recruiting reviewers and a record of review activity. A description and comparison of the main online platforms available today are also provided.

The academic publishing process is driven by four main actors: authors, editors, publishers and reviewers, each of whom plays a vital role in ensuring that high standards are maintained throughout the process of preparing the article, reviewing it and finally publishing it. Each actor needs some motivation that drives their participation and the quality of their contribution to the publishing process. I would like to summarize what I think are the main motivations of each party in the review process. Authors are driven by their wish to make public the results of their investigations. Beyond that, the production of high-quality scientific content is a highly valued merit in academia and research. Researchers whose curricula vitae boast a long list of high-quality publications are well respected and have easier access to funding.
When it comes to editors, becoming a member of the editorial board of a scientific journal is in itself considered a merit. Editors normally serve in an “altruistic” mode, without expecting financial reward. They view being an editor as a means by which they can give back to the scientific and academic community. However, some editors are perhaps not as altruistic as one may think, since they also gain recognition from the role, which enhances their reputations and therefore their access to funding. In addition, it is noteworthy that some publishers do provide some sort of compensation to editors for their work, which can be an additional motivation.
Scientific publishers mainly operate under two models of publication: 1. the traditional model, in which the full text of articles is accessible only to subscribers (individual or institutional); 2. the open access model, in which publishers charge authors a fee for publishing articles and the full text is available to all readers. One way or another, major publishers manage to generate large amounts of revenue from the publishing process; the scientific publishing industry alone generates billions of dollars every year (1-4). Besides this, there is also a large group of non-profit and association/institutional publishers who make very little (if any) financial gain from their journals, but publish them as part of their mission to serve members and academia. Thus, the motivation of this last type of non-profit journal is radically different from that of publishers operating as traditional for-profit companies.
While the motivations of three of the four actors in the publishing process can be clearly identified, the reason why reviewers participate is not so clear. There is no “material” reward for reviewers. Rather, it is scientific altruism or commitment to the scientific model that motivates them to work. Reviewers are encouraged by the belief that they play an important role in ensuring that good-quality research reaches the community. The fact that reviewers are themselves also authors makes them more aware of the importance of good reviews. In recent decades the number of scientific journals and the number of published articles have multiplied, with a growth rate of approximately 3%-10% per year depending on the research area (5-8), resulting in a true “explosion” of manuscripts submitted to publishers. As journals receive more and more manuscripts and the number of journals continues to grow, reviewers become saturated with multiple requests and invitations. Thus, “reviewer fatigue” is easy to understand, although many other factors may influence a reviewer’s decision to decline invitations to review manuscripts (9). As a consequence, editors often cannot find appropriate reviewers for manuscripts; this may delay the various phases of the review process, and authors often have to wait months until their manuscripts are reviewed.
Getting more reviewers, and making them more committed to providing good review reports on time, is the main reason why it is necessary to increase reviewers’ motivation. And indeed it seems fair to reward reviewers for their work in a sector that generates significant profits. Many voices worldwide have insisted on this need again and again (10-15). Some journals/publishers are experimenting with direct payment of reviewers, although this is an exception. In any case, several arguments can be made against direct monetary compensation, in particular that paying reviewers would break the independence between editors/publishers and reviewers, which is one of the pillars of the academic publishing process. Most publishers acknowledge reviewers in front-matter summary pages, in lists of reviewers, or in letters upon request. Some others, such as Frontiers, make public the names of the reviewers (and of the editor in charge) in a footnote of every published article. Others, such as Elsevier, are launching their own recognition platforms, providing reviewers with a personalized profile page where their reviewing history is documented and from which they can download certificates. Authors and editors can also evaluate the quality of completed reviews, providing feedback that may improve the quality of the review process. Nature, for example, recognizes reviewers with payment in kind: reviewers receive free journal access, tools and services, or vouchers for research supplies (16).
In recent years, independent communities have developed online platforms offering review services for the scientific community. These platforms demonstrate that it is possible to create an independent system in which reviewers get recognition and reward for the effort they put into ensuring that quality research reaches the scientific community. One of their main features is that they are “third-party companies”, independent of publishers. This limits bias, because editors and publishers cannot influence reviewers: even when they have a role in the workflow, these platforms are designed to prevent direct communication among the different actors.
Basically, these platforms provide authors and publishers with appropriate reviews, and give reviewers extra motivation, making them more willing to review manuscripts and to complete the task in shorter periods (10, 11). They reward reviewers using two major strategies: 1. credit, through certificates or other elements that reviewers can add to their curricula vitae; and 2. other benefits, such as monetary rewards or the right to have their own manuscripts reviewed.
In this update, we compare the main features of five of these platforms: Rubriq, Peereviewers, Publons, Peerage of Science and Academic Karma (Table 1).

| | Rubriq | Peereviewers | Publons | Peerage of Science | Academic Karma |
| --- | --- | --- | --- | --- | --- |
| Service/s | Clients choose: review of contents + statistics, or review of contents + suggestion of suitable journals | Database of reviewers | Record of reviewers, journals and reviews | Reviews and publishing offers | Exchange of services |
| Review protocol | Closed. All manuscripts go through the same protocol (Scorecard) | Open. Clients can customize the review protocol | — | Open (Peerage Essay) | Open. Clients can customize the review protocol |
| Fee (valid in 2015) | Several options depending on the services, from $500 to $650 (3 reviewers included) | $100 per reviewer | — | — | — |
| Type of acknowledgment to reviewers | Monetary ($100) | Monetary ($50), certificate | Online record | Online record, ability to submit own articles for review | Online record, ability to submit own articles for review |

Table 1. Comparison of third-party platforms offering reviewer services

To start with, we compare Rubriq (17) and Peereviewers (18). Both operate similarly, but some points distinguish them (Table 1). In both cases, reviewers must register on the platform (registration is restricted to academics and researchers with given expertise) and declare their expert profile, so that they can be invited to review manuscripts that match that profile. Reviewers selected to review receive an email containing a summary of the manuscript and instructions on how to complete the process. If the reviewer agrees, he/she gets access to the full text and the review form. When the review is finished, a report is sent to the client and the reviewer is rewarded. The identity of the reviewer is “anonymised” to the clients.
Another platform offering rewards to reviewers is Publons (19). Publons has a different objective: it does not offer any service to authors or publishers, but keeps a record of reviewers, journals and reviews. It maintains a list of journals and creates an account for each reviewer. All reviews conducted by a reviewer are listed in the reviewer’s account after verification, next to the title of the journal to which each review belongs. Reviewers can claim the reviews they have made in several ways, including online forms or email. These data generate statistics that place each reviewer in the corresponding activity percentile relative to all registered reviewers. The profile of each reviewer is public, so reviewers can use the website as evidence of their activity.
Peerage of Science offers a tripartite system in which authors, reviewers and editors each have a role (20) (Table 1). Authors submit manuscripts to Peerage of Science before submitting to any journal. Once a manuscript is submitted, any qualified peer reviewer can choose to review it. The peer-review process is visible concurrently to all participating editors, with automated event tracking. If authors receive publishing offers from editors, they may accept one of them, or accept none and use their reviews in non-participating journals. A positive aspect of Peerage of Science is that peer reviewers are themselves peer reviewed: reviewers are invited to evaluate the reviews submitted by other reviewers. This extra twist contributes to increasing the quality of peer review. From the reviewer’s point of view, Peerage of Science offers credit for curricular purposes only, as an externally verifiable measure of the reviewer’s expertise in his or her scientific field.
An innovative approach comes from Academic Karma (21). Academic Karma is both the name of a currency and a platform for peer review. Instead of exchanging money, authors and reviewers exchange karma: reviewers earn 50 karma per reviewed manuscript, and the authors of the manuscript collectively spend 50 karma per reviewer (Table 1). Reviewers may then spend their karma to pay for reviews when authoring their own manuscripts. Editors are also involved, since they receive the reviewer’s report at the same time as the authors.
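The karma accounting described above can be sketched as a simple double-entry ledger. This is only an illustration of the stated rule (50 karma per review, debited collectively from the authors); the function and account names are invented and are not Academic Karma’s actual software or API.

```python
# Sketch of the karma-exchange rule described in the text (illustrative only).
REVIEW_PRICE = 50  # karma earned per completed review

def settle_review(balances, reviewer, authors):
    """Credit the reviewer and debit the manuscript's authors, split evenly."""
    balances[reviewer] = balances.get(reviewer, 0) + REVIEW_PRICE
    share = REVIEW_PRICE / len(authors)  # authors collectively pay 50 karma
    for author in authors:
        balances[author] = balances.get(author, 0) - share
    return balances

balances = {}
settle_review(balances, "reviewer_a", ["author_x", "author_y"])
# reviewer_a is credited 50 karma; author_x and author_y are debited 25 each
```

Earned karma then becomes the currency the reviewer spends when submitting his or her own manuscripts, which is what closes the exchange loop.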
An important point is how reviewers’ identities and expertise are verified and how the attribution of merit can be recorded and tracked. The Working Group on Peer Review Service (created to develop a data model and citation standard for peer-review activity that can support both existing and new review models) stresses the need for standardized citation structures for reviews, which would enable the inclusion of peer-review activity in personal recognition and evaluation, as well as the ability to refer to reviews as part of the scholarly literature (6). In this regard, all the platforms described here are using, or starting to use, ORCID identifiers for both authors and reviewers, and DOIs as identifiers for published reviews (22). ORCID itself also offers the option of adding reviews to ORCID profiles: researchers with a profile on these platforms can link it to their ORCID iD so that the reviews they have recorded on the platform are added to their ORCID page (23). In turn, these identifiers will ease future research on peer review and will probably allow us to measure the impact of these platforms on the academic publishing process.
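The kind of identifier-based review record discussed here can be illustrated with a minimal sketch. The field names below are invented for illustration and are not the Working Group’s official schema; the ORCID iD shown is ORCID’s own documented example identifier, and 10.5555 is the reserved DOI test prefix.

```python
# Illustrative (not an official standard): a minimal peer-review activity
# record tying a review to the reviewer's ORCID iD and giving the review
# itself a citable DOI, as the citation standard envisages.
review_record = {
    "reviewer_orcid": "0000-0002-1825-0097",   # ORCID's documented example iD
    "review_doi": "10.5555/example.review.1",  # 10.5555 is the DOI test prefix
    "journal": "Example Journal of Science",
    "review_date": "2015-10-20",
    "verified": True,                          # platform has verified the review
}

def review_citation(rec):
    """Format the record as a short, citable string for a CV or ORCID page."""
    return (f"Review {rec['review_doi']} for {rec['journal']} "
            f"by ORCID {rec['reviewer_orcid']} ({rec['review_date']})")
```

A record of this shape is what lets a platform push verified review activity to a researcher’s ORCID profile and lets others cite the review like any other scholarly object.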
In conclusion, motivating and rewarding reviewers is a need that can be addressed both by publishers and by third-party organizations. Online platforms are good tools for giving reviewers credit and conveying monetary rewards, while also offering a way of recording review activity.

References and Notes
1. The Wellcome Trust (2003) Economic analysis of scientific research publishing: A report commissioned by the Wellcome Trust, revised ed. Available: http://www.wellcome.ac.uk/stellent/groups/corporatesite/@policy_communications/documents/web_document/wtd003182.pdf. Accessed 10 July 2015.
2. The Wellcome Trust. Costs and business models in scientific research publishing: A report commissioned by the Wellcome Trust. Available: http://www.wellcome.ac.uk/stellent/groups/corporatesite/@policy_communications/documents/web_document/wtd003184.pdf.
3. The National Academies (US) Committee on Electronic Scientific, Technical, and Medical Journal Publishing. Electronic Scientific, Technical, and Medical Journal Publishing and Its Implications: Report of a Symposium. Available: http://www.ncbi.nlm.nih.gov/books/NBK215820/.
4. Ware M, Mabe M (2015) An overview of scientific and scholarly journal publishing. International Association of Scientific, Technical and Medical Publishers. Available: http://www.stm-assoc.org/2015_02_20_STM_Report_2015.pdf. Accessed 20 October 2015.
5. Walker R, Rocha da Silva P (2015) Emerging trends in peer review—a survey. Frontiers in Neuroscience 9:169.
6. Paglione LD, Lawrence RN (2015) Data exchange standards to support and acknowledge peer-review activity. Learned Publishing 28(4):309-316.
7. Van Noorden R (2014) Global scientific output doubles every nine years. Nature News Blog, 7 May 2014. Available: http://blogs.nature.com/news/2014/05/global-scientific-output-doubles-every-nine-years.html.
8. The Wellcome Trust (2015) Scholarly Communication and Peer Review: The Current Landscape and Future Trends. Available: http://www.wellcome.ac.uk/stellent/groups/corporatesite/%40policy_communications/documents/web_document/wtp059003.pdf. Accessed 12 November 2015.
9. Breuning M, Backstrom J, Brannon J, Gross BI, Widmeier M (2015) Reviewer fatigue? Why scholars decline to review their peers’ work. PS: Political Science & Politics 48(4):595-600. http://dx.doi.org/10.1017/S1049096515000827.
10. Björk B, Hedlund T (2015) Emerging new methods of peer review in scholarly journals. Learned Publishing 28(2):85-91.
11. Thomson Reuters (2010) Increasing the Quality and Timeliness of Scholarly Peer Review: A report for scholarly publishers. Available: http://scholarone.com/media/pdf/peerreviewwhitepaper.pdf.
12. Taylor & Francis (2015) Peer review in 2015: A global view. Available: http://authorservices.taylorandfrancis.com/peer-review-in-2015/. Accessed 20 October 2015.
13. Meadows A (2015) Recognition for peer review and editing in Australia – and beyond? Blog post, Exchanges, 7 January 2015. Available: http://exchanges.wiley.com/blog/2015/01/07/recognition-for-peer-review-and-editing-in-australia-and-beyond/. Accessed 20 October 2015.
14. Trounson A. Journals should credit editors, says ARC. The Australian. Available: http://www.theaustralian.com.au/higher-education/journals-should-credit-editors-says-arc/story-e6frgcjx-1227201178857. Accessed 20 October 2015.
15. Alberts B, Hanson B, Kelner KL (2008) Reviewing peer review. Science 321(5885):15. http://dx.doi.org/10.1126/science.1162115.
16. Review rewards (2014) Nature 514(7522):274. http://dx.doi.org/10.1038/514274a.
17. http://www.rubriq.com/
18. http://www.peereviewers.com/
19. http://www.publons.com
20. https://www.peerageofscience.org
21. http://academickarma.org/
22. Gasparyan AY, Akazhanov NA, Voronov AA, Kitas GD (2014) Systematic and open identification of researchers and authors: focus on open researcher and contributor ID. J Korean Med Sci 29(11):1453-1456. doi: 10.3346/jkms.2014.29.11.1453.
   

PLOS Recommended Data Repositories

In line with our updated Data Policy, we are pleased to announce a PLOS Data Repository Recommendation Guide. To support authors in selecting data repositories, PLOS has identified a set of established repositories which are recognized and trusted within their respective communities. To …

The post PLOS Recommended Data Repositories appeared first on EveryONE.

Peer-Review and Quality Control

Many physicists say, and some may even believe, that peer review does not add much to their work, that they would do fine with just unrefereed preprints, and that they only continue to submit to peer-reviewed journals because they need to satisfy their promotion/evaluation committees.

And some of them may even be right. Certainly the giants in the field don’t benefit from peer review. They have no peers, and for them peer review just leads to regression toward the mean.

But that criterion does not scale to the whole field, nor to other fields, and peer review continues to be needed to maintain quality standards. That’s just the nature of human endeavor.

And the quality vetting and tagging is needed before you risk investing the time into reading, using and trying to build on work, not after. (That’s why it’s getting so hard to find referees, why they’re taking so long, and why they’re often not doing a conscientious enough job, especially for journals whose quality standards are at or below the mean.)

Open Access means freeing peer-reviewed research from access tolls, not freeing it from peer review…

Harnad, S. (1998/2000/2004) The invisible hand of peer review. Nature [online] (5 Nov. 1998); Exploit Interactive 5 (2000); and in Shatz, B. (2004) (ed.) Peer Review: A Critical Inquiry. Rowman & Littlefield. Pp. 235-242. http://cogprints.org/1646/

Harnad, S. (2009) The PostGutenberg Open Access Journal. In: Cope, B. & Phillips, A. (Eds.) The Future of the Academic Journal. Chandos. http://eprints.soton.ac.uk/265617/

Harnad, S. (2010) No-Fault Peer Review Charges: The Price of Selectivity Need Not Be Access Denied or Delayed. D-Lib Magazine 16 (7/8). http://eprints.ecs.soton.ac.uk/21348/

Harnad, S. (2014) Crowd-Sourced Peer Review: Substitute or supplement for the current outdated system? LSE Impact Blog 8/21 August 21 2014 http://blogs.lse.ac.uk/impactofsocialsciences/2014/08/21/crowd-sourced-peer-review-substitute-or-supplement/

Journal publishing prices need to go down, not up

Today’s transitional period for peer-reviewed journal publishing — when both the price of subscribing to conventional journals and the price of publishing in open-access journals (“Gold OA”) is grossly inflated by obsolete costs and services — is hardly the time to inflate costs still further by paying peer reviewers.

Institutions and funders need to mandate the open-access self-archiving of all published articles first (“Green OA”). This will make subscriptions unsustainable, forcing journals to downsize to providing only peer review, leaving access-provision and archiving to the distributed global network of institutional repositories. The price per submitted paper of managing peer review, since peers review, and have always reviewed, for free, is low, fair, affordable and sustainable, on a no-fault basis (irrespective of whether the paper is accepted or rejected: accepted authors should not have to subsidize the cost of rejected papers).

Let’s get there first, before contemplating whether we really want to raise that cost yet again, this time by paying peers.

Harnad, S. (2010) No-Fault Peer Review Charges: The Price of Selectivity Need Not Be Access Denied or Delayed. D-Lib Magazine 16 (7/8).

Harnad, S (2014) The only way to make inflated journal subscriptions unsustainable: Mandate Green Open Access. LSE Impact of Social Sciences Blog 4/28

Crowd-Sourced Peer Review: Substitute or Supplement?


Harnad, S. (2014) Crowd-Sourced Peer Review: Substitute or supplement for the current outdated system? LSE Impact Blog 8/21


If, as rumoured, Google builds a platform for depositing unrefereed research papers for “peer-reviewing” via crowd-sourcing, can this create a substitute for classical peer review or will it merely supplement classical peer review with crowd-sourcing?

In classical peer review, an expert (presumably qualified, and definitely answerable), an “action editor,” chooses experts (presumably qualified, and definitely answerable), “referees,” to evaluate a submitted research paper in terms of correctness, quality, reliability, validity, originality, importance and relevance in order to determine whether it meets the standards of a journal with an established track-record for correctness, reliability, originality, quality, novelty, importance and relevance in a certain field.

In each field there is usually a well-known hierarchy of journals, hence a hierarchy of peer-review standards, from the most rigorous and selective journals at the top all the way down to what is sometimes close to a vanity press at the bottom. Researchers use the journals’ public track-records for quality standards as a hierarchical filter for deciding in which papers to invest their limited reading time, and in which findings to risk investing their even more limited and precious research time to try to use and build upon.

Authors’ papers are (privately) answerable to the peer-reviewers, the peer-reviewers are (privately) answerable to the editor, and the editor is publicly answerable to users and authors via the journal’s name and track-record.

Both private and public answerability are fundamental to classical peer review. So is their timing. For the sake of their reputations, many (though not all) authors don’t want to make their papers public before they have been vetted and certified for quality by qualified experts. And many (though not all) users do not have the time to read unvetted, uncertified papers, let alone to risk trying to build on unvalidated findings. Nor are researchers eager to appoint themselves to peer-review arbitrary papers in their fields, especially when the author is not answerable to anyone for following the freely given crowd-sourced advice (and there is no more assurance that the advice is expert advice rather than idle or ignorant advice than there is that a paper is worth taking the time to read and review).

The problem with classical peer review today is that there is so much research being produced that there are not enough experts with enough time to peer-review it all. So there are huge publication lags (because of delays in finding qualified, willing referees, and getting them to submit their reviews in time) and the quality of peer-review is uneven at the top of the journal hierarchy and minimal lower down, because referees do not take the time to review rigorously.

The solution would be obvious if each unrefereed, submitted paper had a reliable tag marking its quality level: then the scarce expertise and time for rigorous peer review could be reserved for, say, the top 10% or 30%, and the rest of the vetting could be left to crowd-sourcing. But the trouble is that papers do not come with a priori quality tags: peer review determines the tag.

The benchmark today is hence the quality hierarchy of the current, classically peer-reviewed research literature. And the question is whether crowd-sourced peer review could match, exceed, or even come close enough to this benchmark to continue to guide researchers on what is worth reading and safe to trust and use at least as well as they are being guided by classical peer review today.

And of course no one knows whether crowd-sourced peer-review, even if it could work, would be scaleable or sustainable.

The key questions are hence:

1. Would all (most? many?) authors be willing to post their unrefereed papers publicly (and in place of submitting them to journals!)?

2. Would all (most? many?) of the posted papers attract referees? Competent experts?

3. Who/what decides whether the refereeing is competent, and whether the author has adequately complied? (Relying on a Wikipedia-style cadre of 2nd-order crowd-sourcers who gain authority recursively in proportion to how much 1st-order crowd-sourcing they have done, rather than on the basis of expertise, sounds like a way to generate Wikipedia quality, but not peer-reviewed quality.)

4. If any of this actually happens on any scale, will it be sustainable?

5. Would this make the landscape (unrefereed preprints, referee comments, revised postprints) as navigable and useful as classical peer review, or not?

My own prediction (based on nearly a quarter century of umpiring both classical peer review and open peer commentary) is that crowd-sourcing will provide an excellent supplement to classical peer review but not a substitute for it. Radical implementations will simply end up re-inventing classical peer review, but on a much faster and more efficient PostGutenberg platform. We will not realize this, however, until all of the peer-reviewed literature has first been made open access. And for that it is not sufficient for Google merely to provide a platform for authors to post their unrefereed papers, because most authors don’t even post their refereed papers in their institutional repositories until it is mandated by their institutions and funders.

Harnad, S. (1998/2000/2004) The invisible hand of peer review. Nature [online] (1998); Exploit Interactive 5 (2000); and in Shatz, B. (2004) (ed.) Peer Review: A Critical Inquiry. Rowman & Littlefield. Pp. 235-242.

Harnad, S., Carr, L., Brody, T. & Oppenheim, C. (2003) Mandated online RAE CVs Linked to University Eprint Archives: Improving the UK Research Assessment Exercise whilst making it cheaper and easier. Ariadne 35.

Harnad, S. (2010) No-Fault Peer Review Charges: The Price of Selectivity Need Not Be Access Denied or Delayed. D-Lib Magazine 16 (7/8). 

Harnad, S. (2011) Open Access to Research: Changing Researcher Behavior Through University and Funder Mandates. JEDEM Journal of Democracy and Open Government 3 (1): 33-41. 

Harnad, Stevan (2013) The Postgutenberg Open Access Journal. In, Cope, B and Phillips, A (eds.) The Future of the Academic Journal (2nd edition). Chandos.

“Low T” and Prescription Testosterone: Public Viewing of the Science Does Matter



Meta-Analyses of Genetic Association Studies – PLOS ONE’s Approach

Meta-analysis can be a powerful way to reveal otherwise hidden or unclear associations, when done with care. In line with recent trends in biomedical literature (1), PLOS ONE has seen a consistent increase in submissions reporting meta-analyses of genetic association studies over the last few years. These submissions report analyses of potential associations between candidate gene variants (usually single nucleotide polymorphisms, or SNPs) and specific disease risks and outcomes in human populations, based on a search of the literature to identify published reports studying the association and statistical analyses that synthesize the results of the identified studies.

However, researchers in the community, among them members of our editorial board, have raised concerns about some of these meta-analyses, including the risk of false positives due to publication bias, incomplete searches of the literature, redundancy, and insufficient assessment of the power and quality of the included studies. As noted a decade ago, “Meta-analysis is not a replacement for adequately powered genetic association studies” (2). Many of these studies focus on a single gene variant, and many do not include data from relevant genome-wide association studies (GWAS), some of which have failed to replicate previously reported associations between candidate genes and diseases.

Many meta-analyses of genetic association studies remain clinically relevant, especially those addressing rare conditions for which GWAS data are not available, and well-conducted meta-analyses can provide useful and valid clinical evidence. Nevertheless, we strongly feel that meta-analyses of genetic association studies considered by PLOS ONE must have a clearly explained rationale, and that authors must report their studies according to high standards.

In order to address these concerns and after consultation with PLOS ONE editorial board members, we are introducing a new process to handle meta-analyses of genetic association studies. Authors will now be asked to provide the following information:

  1. The rationale for conducting the meta-analysis;
  2. The contribution that the meta-analysis makes to knowledge in light of previously published related reports, including other meta-analyses and systematic reviews;
  3. Whether GWASs relevant to the meta-analysis have been published and whether these were included in the analysis;
  4. Full methodological details for the meta-analysis, including completion of a checklist that has been developed with reference to several published guidelines (3, 4, 5) and in consultation with members of the PLOS ONE editorial board.

The information supplied by the authors will be evaluated by the in-house editorial team as part of the checks undertaken on new submissions. Meta-analyses replicating studies in the literature without adequate justification will be rejected. For those manuscripts that proceed to review, PLOS ONE Academic Editors will be consulted on the adequacy of the methodological aspects of the study and the quality of the reporting in the manuscript.

This process underscores our commitment to maintaining high standards of quality and reporting in publications at PLOS ONE. We are grateful for the input we have received from our editorial board that led to this new process, and wish to thank the PLOS ONE Academic Editors who provided advice and guidance.

If you have any questions or feedback, or if you are an author who would like additional information about our requirements for meta-analyses of genetic association studies, please contact us at plosone@plos.org.

Posted on behalf of the in-house editors at PLOS ONE:

Associate Editors Gina Alvino, Meghan Byrne, Christna Chap, Michelle Dohm, Matt Hodgkinson, Alejandra Clark and Nicola Stead; Senior Editors Eric Martens and Iratxe Puebla; and Editorial Director Damian Pattinson

  1. Ioannidis JPA, Chang CQ, Lam TK, Schully SD, Khoury MJ (2013) The Geometric Increase in Meta-Analyses from China in the Genomic Era. PLOS ONE 8(6): e65602. doi:10.1371/journal.pone.0065602
  2. Munafò MR, Flint J (2004) Meta-analysis of genetic association studies. Trends Genet 20(9):439-444. doi:10.1016/j.tig.2004.06.014
  3. Sagoo GS, Little J, Higgins JPT (2009) Systematic Reviews of Genetic Association Studies. PLOS Med 6(3): e1000028. doi:10.1371/journal.pmed.1000028
  4. Minelli C, Thompson JR, Abrams KR, Thakkinstian A, Attia J: The quality of meta-analyses of genetic association studies: a review with recommendations. Am J Epidemiol. 2009 Dec 1;170(11):1333-43. doi: 10.1093/aje/kwp350
  5. Little J, Higgins JP, Ioannidis JP, Moher D, Gagnon F, et al. (2009) STrengthening the REporting of Genetic Association Studies (STREGA)- An Extension of the STROBE Statement. PLOS Med 6(2): e1000022. doi:10.1371/journal.pmed.1000022
