“If you work with journals in the biomedical or life sciences, getting the articles you publish added to PubMed to make them more discoverable is likely one of your top goals. But, you may be wondering how to go about it.
We caught up with PubMed Central (PMC) Program Manager Kathryn Funk to get answers to some of the most common questions that we hear from journal publishers about PubMed and the related literature databases at the National Library of Medicine (NLM), MEDLINE and PMC. Read on to learn more about how the PubMed database works and how to apply to have a journal included in MEDLINE or PMC in order to make its articles searchable via PubMed….”
Abstract: Objective: Dissemination of research findings is central to research integrity and to promoting discussion of new knowledge and its potential for translation into practice and policy. We investigated the frequency and format of dissemination to trial participants and patient groups. Design: Survey of authors of clinical trials indexed in PubMed in 2014–2015. Results: The questionnaire was emailed to 19 321 authors; 3127 responses were received (16%). Of these 3127 trials, 2690 had human participants and 1818 enrolled individual patients. Among the 1818, 498 authors (27%) reported having disseminated results to participants, 238 (13%) planned to do so, 600 (33%) did not plan to, 176 (10%) were unsure and 306 (17%) indicated ‘other’ or did not answer. Of the 498 authors who had disseminated, 198 (40%) shared academic reports, 252 (51%) shared lay reports, 111 (22%) shared both and 164 (33%) provided individualised study results. Of the 1818 trials, 577 authors (32%) shared or planned to share results with patients outside their trial by direct contact with charities/patient groups, 401 (22%) via patient communities, 845 (46%) via presentations at conferences with patient representation, 494 (27%) via mainstream media and 708 (39%) by online lay summaries. Relatively few of the 1818 authors reported that institutional bodies had suggested dissemination: 314 (17%) said funders suggested disseminating to trial participants and 252 (14%) to patient groups; 333 (18%) said ethical review boards suggested disseminating to trial participants and 148 (8%) to patient groups. Authors described many barriers to dissemination. Conclusion: Fewer than half the respondents had disseminated to participants (or planned to), and only half of those who had disseminated shared lay reports. Motivation to disseminate results to participants appears to arise within research teams rather than being incentivised by institutional bodies. Multiple factors need to be considered, and various steps taken, to facilitate wide dissemination of research to participants.
Abstract: Rigorous evidence identification is essential for systematic reviews and meta-analyses (evidence syntheses), because the selection of relevant studies determines a review’s outcome, validity, and explanatory power. Yet the search systems allowing access to this evidence provide varying levels of precision, recall, and reproducibility, and also demand different levels of effort. To date, it remains unclear which search systems are most appropriate for evidence synthesis and why. Advice on which search engines and bibliographic databases to choose for systematic searches is limited and lacks systematic, empirical performance assessments.
This study investigates and compares the systematic search qualities of 28 widely used academic search systems, including Google Scholar, PubMed and Web of Science. A novel, query-based method tests how well users are able to interact with each system and retrieve records. The study is the first to show the extent to which search systems can effectively and efficiently perform (Boolean) searches with regard to precision, recall and reproducibility. We found substantial differences in the performance of search systems, meaning that their usability in systematic searches varies. Indeed, only half of the search systems analysed, and only a few Open Access databases, can be recommended for evidence syntheses without substantial caveats. In particular, our findings demonstrate why Google Scholar is inappropriate as a principal search system.
We call for database owners to recognise the requirements of evidence synthesis, and for academic journals to re-assess quality requirements for systematic reviews. Our findings aim to support researchers in conducting better searches for better evidence synthesis.
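As a reminder of how the metrics in the abstract above are defined, precision and recall for a single Boolean search can be computed from the set of records a system retrieves and the set a gold standard deems relevant. A minimal sketch, using made-up record IDs rather than data from the study:

```python
# Precision/recall for one search, computed from record-ID sets.
# All IDs below are illustrative placeholders, not records from the study.
retrieved = {"pmid1", "pmid2", "pmid3", "pmid4"}   # records the search system returned
relevant = {"pmid2", "pmid3", "pmid5"}             # records the gold standard deems relevant

hits = retrieved & relevant                        # relevant records actually retrieved
precision = len(hits) / len(retrieved)             # share of retrieved records that are relevant
recall = len(hits) / len(relevant)                 # share of relevant records that were retrieved

print(precision, recall)
```

A system tuned for high recall (few relevant records missed) often sacrifices precision, which is one reason the study evaluates both together with reproducibility.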
“PubMed, the National Library of Medicine’s repository of millions of abstracts and citations, has long been one of the most highly regarded sources for searching biomedical literature. For some members of the scientific community, the presence of predatory journals (publications that tend to churn out low-quality content and engage in unethical publishing practices) has been a pressing concern….
In 2017, Manca, Franca Deriu, a professor of physiology at the University of Sassari, and their colleagues conducted two studies that pinpointed more than 200 predatory journals across the disciplines of neuroscience, neurology, and rehabilitation, and discovered that several of those also appeared on PubMed….
According to Manca, content from predatory publishers likely seeps into PubMed via PMC, where he and his colleagues have been able to find papers from several predatory journals….”
Objective: PubMed’s provision of MEDLINE and other National Library of Medicine (NLM) resources has made it one of the most widely accessible biomedical resources globally. The growth of PubMed Central (PMC) and public access mandates have affected PubMed’s composition. The authors tested recent claims that content in PMC is of low quality and affects PubMed’s reliability, while exploring PubMed’s role in the current scholarly communications landscape.
Methods: The percentage of MEDLINE-indexed records was assessed in PubMed and various subsets of records from PMC. Data were retrieved via the National Center for Biotechnology Information (NCBI) interface, and follow-up interviews with a PMC external reviewer and staff at NLM were conducted.
Results: Almost all PubMed content (91%) is indexed in MEDLINE; however, since the launch of PMC, the percentage of PubMed records indexed in MEDLINE has slowly decreased. This trend is the result of an increase in PMC content from journals that are not indexed in MEDLINE and not a result of author manuscripts submitted to PMC in compliance with public access policies. Author manuscripts in PMC continue to be published in MEDLINE-indexed journals at a high rate (85%). The interviewees clarified the difference between the sources, with MEDLINE serving as a highly selective index of journals in biomedical literature and PMC serving as an open archive of quality biomedical and life sciences literature and a repository of funded research.
Conclusion: The differing scopes of PMC and MEDLINE will likely continue to affect their overlap; however, quality control exists in the maintenance and facilitation of both resources, and funding from major grantors is a major component of quality assurance in PMC….”
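Percentages like the 91% figure in the study above can be derived from record counts exposed by NCBI's E-utilities, whose `esearch` endpoint supports the documented `medline[sb]` subset filter. The sketch below only builds the query URLs and computes a ratio from placeholder counts; the query string for "all of PubMed" and the counts themselves are illustrative stand-ins, not values from the study:

```python
# Sketch: estimating the MEDLINE-indexed share of PubMed via E-utilities.
# medline[sb] is the documented MEDLINE subset filter; the total-records
# query and the counts below are illustrative placeholders.
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def esearch_url(term: str) -> str:
    # rettype=count asks esearch for just the number of matching records
    return EUTILS + "?" + urlencode({"db": "pubmed", "term": term, "rettype": "count"})

total_url = esearch_url("all[sb]")       # assumed query for every PubMed record
medline_url = esearch_url("medline[sb]") # records indexed for MEDLINE

# Placeholder counts standing in for the two API responses:
total_count, medline_count = 28_000_000, 25_480_000
pct_medline = 100 * medline_count / total_count
print(f"{pct_medline:.0f}% of PubMed records are MEDLINE-indexed")
```

A real run would fetch the two URLs and read the counts from the XML responses before computing the ratio.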
PubMed Commons has been a valuable experiment in supporting discussion of published scientific literature. The service was first introduced as a pilot project in the fall of 2013 and was reviewed in 2015. Despite low levels of use at that time, NIH decided to extend the effort for another year or two in hopes that participation would increase. Unfortunately, usage has remained minimal, with comments submitted on only 6,000 of the 28 million articles indexed in PubMed. While many worthwhile comments were made through the service during its 4 years of operation, NIH has decided that the low level of participation does not warrant continued investment in the project, particularly given the availability of other commenting venues.
“iCite is a tool to access a dashboard of bibliometrics for papers associated with a portfolio. Users upload the PubMed IDs of articles of interest (from SPIRES or PubMed), optionally grouping them for comparison. iCite then displays the number of articles, articles per year, citations per year, and Relative Citation Ratio (a field-normalized metric that shows the citation impact of one or more articles relative to the average NIH-funded paper). A range of years can be selected, as well as article type (all, or only research articles), and individual articles can be toggled on and off. Users can download a report table with the article-level detail for later use or further visualization. Read about how the Relative Citation Ratio (RCR) is calculated at PLOS Biology….”
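Beyond the dashboard, iCite exposes a public web API from which metrics such as the RCR can be pulled programmatically. A minimal sketch (the endpoint path and response shape are assumptions based on the public API, and the response below is a made-up stand-in, not real data):

```python
# Sketch: fetching Relative Citation Ratios for a set of PMIDs from the
# iCite web API. The PMIDs and the JSON response are illustrative only.
import json

PMIDS = [23456789, 24567890]  # hypothetical PubMed IDs
url = "https://icite.od.nih.gov/api/pubs?pmids=" + ",".join(map(str, PMIDS))

# Stand-in for the JSON a real request to `url` might return:
sample_response = json.loads("""
{"data": [
  {"pmid": 23456789, "relative_citation_ratio": 1.8},
  {"pmid": 24567890, "relative_citation_ratio": 0.6}
]}
""")

rcrs = {rec["pmid"]: rec["relative_citation_ratio"] for rec in sample_response["data"]}
mean_rcr = sum(rcrs.values()) / len(rcrs)
print(f"mean RCR across portfolio = {mean_rcr:.2f}")
```

Because RCR is field-normalized against the average NIH-funded paper, a portfolio mean above 1.0 indicates above-average citation impact for its fields.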