The field of oncology is among the most productive in medicine and includes some of the highest-impact journals. Yet the impact of open access (OA) publishing remains understudied in oncology. In this study, we examine the open-access status of oncology journals and how that status affects journal indices.
We collected data on the included journals from the Scopus Source List on 1 November 2018 and filtered the list for oncology journals covering the years 2011 to 2017. Journals covered by Scopus are indicated as OA if they are listed in the Directory of Open Access Journals (DOAJ) and/or the Directory of Open Access Scholarly Resources (ROAD).
By 2017 there were 318 oncology journals, compared with 260 in 2011, an increase of about 22.3%, and the percentage of OA journals rose from 19.6% to 23.9%. Although non-OA journals had significantly higher scholarly output (P=0.001), percent cited and source normalized impact per paper (SNIP) were higher for OA journals.
Publishing in oncology OA journals will yield more impact, in terms of citations, and will reach a broader audience.
Abstract: We analyzed how often and in what ways the Journal Impact Factor (JIF) is currently used in review, promotion, and tenure (RPT) documents of a representative sample of universities from the United States and Canada. 40% of research-intensive institutions and 18% of master’s institutions mentioned the JIF, or closely related terms. Of the institutions that mentioned the JIF, 87% supported its use in at least one of their RPT documents, 13% expressed caution about its use, and none heavily criticized it or prohibited its use. Furthermore, 63% of institutions that mentioned the JIF associated the metric with quality, 40% with impact, importance, or significance, and 20% with prestige, reputation, or status. We conclude that use of the JIF is encouraged in RPT evaluations, especially at research-intensive universities, and that there is work to be done to avoid the potential misuse of metrics like the JIF.
Abstract: A common motivation for increasing open access to research findings and data is the potential to create economic benefits—but evidence is patchy and diverse. This study systematically reviewed the evidence on what kinds of economic impacts (positive and negative) open science can have, how these come about, and how benefits could be maximized. Use of open science outputs often leaves no obvious trace, so most evidence of impacts is based on interviews, surveys, inference based on existing costs, and modelling approaches. There is indicative evidence that open access to findings/data can lead to savings in access costs, labour costs and transaction costs. There are examples of open science enabling new products, services, companies, research and collaborations. Modelling studies suggest higher returns to R&D if open access permits greater accessibility and efficiency of use of findings. Barriers include lack of skills capacity in search, interpretation and text mining, and lack of clarity around where benefits accrue. There are also contextual considerations around who benefits most from open science (e.g., sectors, small vs. larger companies, types of dataset). Recommendations captured in the review include more research, monitoring and evaluation (including developing metrics), promoting benefits, capacity building and making outputs more audience-friendly.
“A key political driver of open access and open science policies has been the potential economic benefits that they could deliver to public and private knowledge users. However, the empirical evidence for these claims is rarely substantiated. In this post, Michael Fell discusses how open research can lead to economic benefits and suggests that if these benefits are to be more widely realised, future open research policies should focus on developing research discovery, translation and the capacity for research utilisation outside of the academy….”
“For decades, the syllabus has been the roadmap to college classes, listing homework, assignments, and most crucially, texts for students to read and reference. But while a syllabus might be able to teach students what they’re in for during the semester, academics have lacked a tool to analyze large masses of syllabi to better understand what teachers are teaching in different disciplines. That means there isn’t as much empirical data about the content being taught at universities.
The Open Syllabus Project aims to fix this problem. Researchers at the American Assembly, a nonprofit housed within Columbia University, have collected an archive of more than six million syllabi from college courses all over the world that could help teachers to create new syllabi and researchers to garner a cross-cultural understanding of higher education.
The project first launched three years ago, but this new update has six times as many syllabi, along with search tools and visualizations designed to map out how academia works right now….”
Abstract: This paper proposes the creation of a dashboard consisting of five metrics that could be used to replace the journal impact factor. It should be especially useful in circumstances, like promotion and tenure committees, where the evaluators do not share the authors' subject expertise and where they are working under time constraints.
Abstract: The impact of published research is sometimes measured by the number of citations an individual article accumulates. However, the time from publication to citation can be extensive. Years may pass before authors are able to measure the impact of their publication. Social media provides individuals and organizations a powerful medium with which to share information. The power of social media is sometimes harnessed to share scholarly works, especially journal article citations and quotes. A non-traditional bibliometric is required to understand the impact social media has on disseminating scholarly works/research. The International Journal of Mental Health Nursing (IJMHN) appointed a social media editor as of 1 January 2017 to implement a strategy to increase the impact and reach of the journal’s articles. To measure the impact of the IJMHN social media strategy, quantitative data for the eighteen months prior to the social media editor start date, and the eighteen months after that date (i.e., from 1 July 2015 to 30 June 2018) were acquired and analysed. Quantitative evidence demonstrates the effectiveness of one journal’s social media strategy in increasing the reach and readership of the articles it publishes. This information may be of interest to those considering where to publish their research, those wanting to amplify the reach of their research, those who fund research, and journal editors and boards.
Abstract: The journal impact factor (IF) is the leading method of scholarly assessment in today’s research world. An important question is whether or not this is still a constructive method. For a specific journal, the IF is the number of citations in a given year to publications from the previous 2 years, divided by the total number of citable publications in those years (the citation window). Although this simplicity works to the advantage of this method, complications arise when answers to questions such as ‘What is included in the citation window?’ or ‘What makes a good journal impact factor?’ contain ambiguity. In this review, we discuss whether or not the IF should still be considered the gold standard of scholarly assessment in view of the many recent changes and the emergence of new publication models. We will outline its advantages and disadvantages. The advantages of the IF include promoting the author while giving readers a visualization of the magnitude of review. On the other hand, its disadvantages include reflecting the journal’s quality more than the author’s work, the fact that it cannot be compared across different research disciplines, and the struggles it faces in the world of open access. Recently, alternatives to the IF have been emerging, such as the SCImago Journal & Country Rank, the Source Normalized Impact per Paper and the Eigenfactor Score, among others. However, all alternatives proposed thus far are associated with their own limitations as well. In conclusion, although the IF has its drawbacks, until better alternative methods are proposed, it remains one of the most effective methods for assessing scholarly activity.
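The two-year calculation described in the abstract above can be sketched in a few lines. This is a minimal illustration, not an official implementation; the journal figures used are hypothetical.

```python
def impact_factor(citations_to_prior_two_years: int,
                  citable_items_prior_two_years: int) -> float:
    """Two-year journal impact factor for year Y.

    citations_to_prior_two_years: citations received in year Y to items
        published in years Y-1 and Y-2 (the citation window).
    citable_items_prior_two_years: number of citable items the journal
        published in years Y-1 and Y-2.
    """
    return citations_to_prior_two_years / citable_items_prior_two_years


# Hypothetical journal: 500 citations in 2018 to its 2016-2017 articles,
# of which 250 were citable items.
jif_2018 = impact_factor(500, 250)
print(jif_2018)  # 2.0
```

Note that the ambiguity flagged in the abstract lives entirely in the denominator: which document types count as "citable items" is an editorial judgment, and small changes to that count can shift the resulting figure substantially.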
Abstract: Herein, we discuss a novel way to knit current life sciences publishing structures together under the scope of a single life science journal that would countermand many of the issues faced in current publishing paradigms. Such issues include, but are not limited to, publication fees, subscription fees, impact factor, and publishing in more “glamorous” journals for career health. We envision a process flow involving (i) a single, overall, life sciences journal, (ii) divided into sections headed by learned societies, (iii) to whom all scientific papers are submitted for peer review, and (iv) all accepted scientific literature would be published open access and without author publication fees. With such a structure, journal fees, the merit system of science, and unethical aspects of open access would be reformed for the better. Importantly, such a journal could leverage existing online platforms; that is to say, it is conceptually feasible. We conclude that wholly inclusive publishing paradigms can be possible. A single, open access, online, life sciences journal could solve the myriad problems associated with current publishing paradigms and would be feasible to implement.