Abstract: We analyzed how often and in what ways the Journal Impact Factor (JIF) is currently used in review, promotion, and tenure (RPT) documents of a representative sample of universities from the United States and Canada. 40% of research-intensive institutions and 18% of master’s institutions mentioned the JIF or closely related terms. Of the institutions that mentioned the JIF, 87% supported its use in at least one of their RPT documents, 13% expressed caution about its use, and none heavily criticized it or prohibited its use. Furthermore, 63% of institutions that mentioned the JIF associated the metric with quality, 40% with impact, importance, or significance, and 20% with prestige, reputation, or status. We conclude that use of the JIF is encouraged in RPT evaluations, especially at research-intensive universities, and that there is work to be done to avoid the potential misuse of metrics like the JIF.
Abstract: There is a dearth of research on the perceptions of faculty members in educational leadership regarding open access publications. This gap may stem from a lack of funding for educational leadership research, financial obstacles, tenure demands, or reputational concerns. It may also be that there are simply fewer established open access publishers with reputable impact factors to encourage publication by members of the field. The current study seeks to answer the following question: “What are the perceptions of educational leadership faculty members in UCEA about open access publishing?” The results are based on responses from 180 faculty members in the field of educational leadership.
Abstract: Using an online survey of academics at 55 randomly selected institutions across the US and Canada, we explore priorities for publishing decisions and their perceived importance within review, promotion, and tenure (RPT). We find that respondents most value journal readership, while they believe their peers most value prestige and related metrics such as impact factor when submitting their work for publication. Respondents indicated that total number of publications, number of publications per year, and journal name recognition were the most valued factors in RPT. Older and tenured respondents (those most likely to serve on RPT committees) were less likely to value journal prestige and metrics for publishing, while untenured respondents were more likely to value these factors. These results suggest a disconnect between what academics value and what they think their peers value, and between how tenured and untenured faculty weigh journal prestige and metrics in publishing and RPT.
Abstract: This article presents results from a survey of faculty in North American Library and Information Studies (LIS) schools about their attitudes towards and experience with open-access publishing. As a follow-up to a similar survey conducted in 2013, the article also outlines the differences in beliefs about and engagement with open access that have occurred between 2013 and 2018. Although faculty in LIS schools are proponents of free access to research, journal publication choices remain informed by traditional considerations such as prestige and impact factor. Engagement with open access has increased significantly, while perceptions of open access have remained relatively stable between 2013 and 2018. Nonetheless, those faculty who have published in an open-access journal or are more knowledgeable about open access tend to be more convinced about the quality of open-access publications and less apprehensive about open-access publishing than those who have no publishing experience with open-access journals or who are less knowledgeable about various open-access modalities. Willingness to comply with gold open-access mandates has increased significantly since 2013.
Abstract: Preprints in the life sciences are gaining popularity, but release of a preprint still precedes only a fraction of peer-reviewed publications. Quantitative evidence on the relationship between preprints and article-level metrics of peer-reviewed research remains limited. We examined whether having a preprint on bioRxiv.org was associated with the Altmetric Attention Score and number of citations of the corresponding peer-reviewed article. We integrated data from PubMed, CrossRef, Altmetric, and Rxivist (a collection of bioRxiv metadata). For each of 26 journals (comprising a total of 46,451 articles and 3,817 preprints), we used log-linear regression, adjusted for publication date and scientific subfield, to estimate fold-changes of Attention Score and citations between articles with and without a preprint. We also performed meta-regression of the fold-changes on journal-level characteristics. By random effects meta-analysis across journals, releasing a preprint was associated with a 1.53-fold higher (Attention Score + 1) (95% CI 1.42 to 1.65) and 1.31-fold more (citations + 1) (95% CI 1.24 to 1.38) for the peer-reviewed article. Journals with larger fold-changes of Attention Score tended to have lower impact factors and lower percentages of articles released as preprints. In contrast, a journal’s fold-change of citations was not associated with impact factor, percentage of articles released as preprints, or access model. The findings from this observational study can help researchers and publishers make informed decisions about how to incorporate preprints into their work.
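The fold-change idea in this abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' pipeline: it uses made-up Poisson citation counts (not bioRxiv data) and a two-group comparison on the log(x + 1) scale, where the fold-change is the exponentiated difference of group means; the real study additionally adjusted for publication date and subfield.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical citation counts for one journal:
# articles whose authors released a preprint vs. those who did not.
with_preprint = rng.poisson(13, 200)
without_preprint = rng.poisson(10, 200)

# Log-linear comparison on log(citations + 1): the fold-change of
# (citations + 1) is exp of the difference in group means of the logs.
fold_change = np.exp(
    np.log(with_preprint + 1).mean() - np.log(without_preprint + 1).mean()
)
print(f"fold-change of (citations + 1): {fold_change:.2f}")
```

With these simulated rates the fold-change comes out a little above 1, analogous in spirit to the 1.31-fold citation difference the study reports.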
Abstract: This paper proposes the creation of a dashboard consisting of five metrics that could be used to replace the journal impact factor. It should be especially useful in settings, such as promotion and tenure committees, where the evaluators do not share the authors’ subject expertise and are working under time constraints.
Abstract: The journal impact factor (IF) is the leading method of scholarly assessment in today’s research world. An important question is whether it remains a constructive method. For a given journal, the IF is the number of citations in a year to publications from the previous 2 years, divided by the total number of citable publications in those years (the citation window). Although this simplicity is an advantage of the method, complications arise when answers to questions such as ‘What is included in the citation window?’ or ‘What makes a good journal impact factor?’ are ambiguous. In this review, we discuss whether the IF should still be considered the gold standard of scholarly assessment in view of the many recent changes and the emergence of new publication models, and we outline its advantages and disadvantages. The advantages of the IF include promoting the author while giving readers a sense of a journal’s reach. Its disadvantages include reflecting the journal’s quality more than the author’s work, the fact that it cannot be compared across research disciplines, and the difficulties it faces in the world of open access. Alternatives to the IF have recently emerged, such as the SCImago Journal & Country Rank, the Source Normalized Impact per Paper and the Eigenfactor Score, among others. However, all alternatives proposed thus far have limitations of their own. In conclusion, although the IF has its drawbacks, until better alternatives are proposed it remains one of the most effective methods for assessing scholarly activity.
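The two-year calculation this abstract describes reduces to a single ratio. A minimal sketch, using hypothetical numbers rather than figures for any real journal:

```python
def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """Two-year journal impact factor for year Y:
    citations received in year Y to items published in Y-1 and Y-2,
    divided by the number of citable items published in Y-1 and Y-2."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 1,200 citations in 2023 to its 2021-2022 output,
# which comprised 400 citable items.
print(impact_factor(1200, 400))  # → 3.0
```

The ambiguity the abstract points to lives entirely in the inputs: which item types count as "citable" in the denominator, and which citations count in the numerator.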
“Recently though, there have been more and more attempts to change that system and find a new way of measuring scholarly achievements other than via the impact factor. But to change the status quo, what exactly needs to change and how can this be achieved? These are just three of the many issues that were discussed during a Panel Discussion on Wednesday afternoon of the 68th Lindau Nobel Laureate Meeting….”
Objectives Academic and not-for-profit research funders are increasingly requiring that the research they fund must be published open access, with some insisting on publishing with a Creative Commons Attribution (CC BY) licence to allow the broadest possible use. We aimed to clarify the open access variants provided by leading medical journals and record the availability of the CC BY licence for commercially funded research.
Methods We identified medical journals with a 2015 impact factor of ≥15.0 on 24 May 2017, then excluded from the analysis journals that only publish review articles. Between 29 June 2017 and 26 July 2017, we collected information about each journal’s open access policies from their websites and/or by email contact. We contacted the journals by email again between 6 December 2017 and 2 January 2018 to confirm our findings.
Results Thirty-five medical journals publishing original research from 13 publishers were included in the analysis. All 35 journals offered some form of open access allowing articles to be free-to-read, either immediately on publication or after a delay of up to 12 months. Of these journals, 21 (60%) provided immediate open access with a CC BY licence under certain circumstances (eg, to specific research funders). Of these 21, 20 only offered a CC BY licence to authors funded by non-commercial organisations and one offered this option to any funder who required it.
Conclusions Most leading medical journals do not offer authors reporting commercially funded research an open access licence that allows unrestricted sharing and adaptation of the published material. The journals’ policies are therefore not aligned with open access declarations and guidelines. Commercial research funders lag behind academic funders in the development of mandatory open access policies, and it is time for them to work with publishers to advance the dissemination of the research they fund.
Abstract: This study examined compliance with the criteria of transparency and best practice in scholarly publishing defined by COPE, DOAJ, OASPA and WAME among biomedical Open Access journals indexed in Journal Citation Reports (JCR). 259 Open Access journals were drawn from the JCR database, and their compliance with 14 criteria for transparency and best practice in scholarly publishing was verified on the basis of their websites. Journals received a penalty point for each criterion defined by COPE, DOAJ, OASPA and WAME that they failed to meet. The average number of penalty points was 6, with 149 (57.5%) journals receiving ≤ 6 points and 110 (42.5%) journals ≥ 7 points. Only 4 journals met all criteria and received no penalty points. Most of the journals did not comply with the criteria concerning declaration of a Creative Commons license (164 journals), affiliation of editorial board members (116), unambiguity of article processing charges (115), anti-plagiarism policy (113) and the number of editorial board members from developing countries (99). The research shows that JCR cannot be used as a whitelist of journals that comply with the criteria of transparency and best practice in scholarly publishing.