Abstract: Purpose: This study identifies the personal and professional profiles of researchers with a greater potential to publish high-impact academic articles.
Method: The study involved an international survey of journal authors, conducted using a web-based questionnaire developed for this study. The survey examined personal characteristics, funding, perceived barriers to research quality, work-life balance, career satisfaction and motivation, and the processes of manuscript writing and journal publication. Responses were compared between the two groups of researchers (those publishing in high-impact versus low-impact journals) using logistic regression models.
Results: A total of 269 questionnaires were analysed. The researchers shared some common perceptions: both groups reported seeking recognition (or leadership in their areas) rather than financial remuneration, and both identified time and funding constraints as the main obstacles to their scientific activities. The amount of time spent on research activities, supervising more than five graduate students, never using text-editing services prior to publication, and living in a developed, English-speaking country were independently associated with a greater chance of publishing in a high-impact journal. In contrast, using one’s own resources to fund studies decreased the chance of publishing in high-impact journals.
Conclusions: Researchers who publish in high-impact journals have distinct profiles compared with those who publish in low-impact journals. English-language ability, the actual amount of time dedicated to research and scientific writing, and the availability of financial resources are the factors associated with a successful researcher’s profile.
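The group comparison described in the Methods above can be sketched as a logistic regression relating survey predictors to the odds of publishing in a high-impact journal. This is a minimal, hypothetical illustration: the predictors echo the abstract's independent variables, but the data, coefficients, and fitting routine are invented for demonstration, not the study's actual dataset or analysis code.

```python
import math
import random

random.seed(0)
n = 269  # number of analysed questionnaires reported in the Results

# Invented predictors echoing the abstract's independent variables:
# [weekly research hours, >5 students supervised (0/1),
#  developed English-speaking country (0/1), self-funded studies (0/1)]
X = [[random.gauss(20, 8),
      random.randint(0, 1),
      random.randint(0, 1),
      random.randint(0, 1)] for _ in range(n)]
y = [random.randint(0, 1) for _ in range(n)]  # 1 = high-impact group

def fit_logistic(X, y, lr=0.001, epochs=500):
    """Fit coefficients by plain stochastic gradient ascent
    on the logistic log-likelihood (intercept in w[0])."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))  # predicted probability
            grad = yi - p                   # gradient of log-likelihood
            w[0] += lr * grad
            for j, xj in enumerate(xi):
                w[j + 1] += lr * grad * xj
    return w

w = fit_logistic(X, y)
# Exponentiated coefficients give odds ratios per predictor,
# the usual effect measure reported for such models.
odds_ratios = [math.exp(wj) for wj in w[1:]]
```

An odds ratio above 1 for a predictor (e.g. research hours) would indicate increased odds of belonging to the high-impact group, matching how the Results are phrased.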
“A new study surveying authors from a range of countries investigates the crucial differences between authors who publish in high- and low-impact factor medical journals. This original research shows that the growth of open access hasn’t significantly changed the publishing landscape as regards impact factor….”
“In April 2013, some of the original signers of DORA [Declaration on Research Assessment] wrote to executives at Thomson Reuters to suggest ways in which it might improve its bibliometric offerings. Suggestions included replacing the flawed and frequently misused two-year Journal Impact Factor (JIF) with separate JIFs for the citable reviews and for the primary research article content of a journal; providing more transparency in Thomson Reuters’ calculation of JIFs; and publishing the median value of citations per citable article in addition to the JIFs. Thomson Reuters acknowledged receipt of the letter and said, “We are carefully reviewing all the points raised and will respond as soon as possible.”…”
Abstract: Articles may be retracted when their findings are no longer considered trustworthy due to scientific misconduct or error, when they plagiarize previously published work, or when they are found to violate ethical guidelines. Using a novel measure that we call the “retraction index,” we found that the frequency of retraction varies among journals and shows a strong correlation with the journal impact factor. Although retractions are relatively rare, the retraction process is essential for correcting the literature and maintaining trust in the scientific process.
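A "retraction index" of the kind described in the abstract above could be computed as follows. The abstract does not spell out the formula, so this sketch assumes a retractions-per-1,000-published-articles definition; the journal figures are invented.

```python
def retraction_index(retractions: int, articles_published: int) -> float:
    """Assumed definition: retractions per 1,000 articles
    published by the journal over the same period."""
    return 1000.0 * retractions / articles_published

# Invented example: 8 retractions among 20,000 published articles
index = retraction_index(8, 20000)
print(round(index, 2))  # 0.4
```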
“The impact factor is academia’s worst nightmare. So much has been written about its flaws, both in calculation and application, that there is little point in reiterating the same tired points here (see here by Stephen Curry for a good starting point).”
“Using matching and regression analyses, we measure the difference in citations between articles posted to Academia.edu and other articles from similar journals, controlling for field, impact factor, and other variables. Based on a sample size of 31,216 papers, we find that a paper in a median impact factor journal uploaded to Academia.edu receives 16% more citations after one year than a similar article not available online, 51% more citations after three years, and 69% after five years. We also found that articles also posted to Academia.edu had 58% more citations than articles only posted to other online venues, such as personal and departmental home pages, after five years.”
“We need to take serious cognizance of the document titled ‘DBT and DST Open Access Policy’ released jointly by DST and DBT on 12 December 2014. The focus of the document is on ensuring that knowledge created through the use of public funds is available to the public. This document stipulates that papers resulting from funds received from DST or DBT from the fiscal year 2012–13 onwards are required to be deposited in institutional repositories or in designated central repositories (dbt.sciencecentral.in and dst.sciencecentral.in). It stipulates that institutes receiving core funding from DST or DBT must set up institutional repositories. Most of this document discusses modalities, etc. for the repositories, but it makes two interesting statements that we should discuss. One is a view about an outcome of such open access, viz. ‘providing free online access by depositing them in an institutional repository is the most effective way of ensuring that the research it funds can be accessed, read and built upon’. The other statement makes a judgment call on the use of journal impact factors (IF). The document states ‘The DBT and DST affirms the principle that the intrinsic merit of the work, and not the title of the journal in which an author’s work is published, should be considered in making future funding decisions. The DBT and DST do not recommend the use of journal impact factors either as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions’. I shall discuss these two statements in some detail …”
“Frontiers recently published a fascinating article about the relationship between the impact factors (IF) and rejection rates from a range of journals. It was a neat little study designed around the perception that many publishers have that in order to generate high citation counts for their journals, they must be highly selective and only publish the ‘highest quality’ work. Apart from issues involved with what can be seen as wasting time and money in rejecting perfectly good research, this apparent relationship has important implications for researchers. They will tend to often submit to higher impact (and therefore apparently more selective) journals in the hope that this confers some sort of prestige on their work, rather than letting their research speak for itself. Upon the relatively high likelihood of rejection, submissions will then continue down the ‘impact ladder’ until a more receptive venue is finally obtained for their research. The new data from Frontiers shows that this perception is most likely false. From a random sample of 570 journals (indexed in the 2014 Journal Citation Reports; Thomson Reuters, 2015), it seems that journal rejection rates are almost entirely independent of impact factors. Importantly, this implies that researchers can just as easily submit their work to less selective journals and still have the same impact factor assigned to it. This relationship will remain important while the impact factor continues to dominate assessment criteria and how researchers evaluate each other (whether or not the IF is a good candidate for this is another debate) …”
“Trickery by editors to boost their journal impact factor means that the widely used metric ‘has now lost most of its credibility’, according to Research Policy journal … In the past two decades, the reliance on impact factors when deciding which academics are promoted or granted tenure has grown. One of the most widely used impact factors is calculated by Thomson Reuters by dividing the average number of citations given to articles in a journal by the total number of papers. Normally the figure is calculated for articles published over the previous two years. ‘Editors’ JIF-boosting stratagems – Which are appropriate and which not?’, by Ben Martin, a professor of science and technology policy studies at the University of Sussex, lists a number of potentially suspect ways journals manipulate this figure …”
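The two-year JIF calculation mentioned in the excerpt above amounts to simple arithmetic: citations received in a given year to items published in the preceding two years, divided by the number of citable items published in those two years. The figures below are invented for illustration, not real journal data.

```python
# Worked illustration of a two-year Journal Impact Factor for 2015.
# All numbers are hypothetical.
citations_2015_to_2013_items = 310
citations_2015_to_2014_items = 450
citable_items_2013 = 180
citable_items_2014 = 200

jif_2015 = (citations_2015_to_2013_items + citations_2015_to_2014_items) / (
    citable_items_2013 + citable_items_2014
)
print(round(jif_2015, 2))  # 2.0
```

The JIF-boosting stratagems the excerpt refers to typically target this ratio from both ends: inflating the numerator (e.g. coercive self-citation) or shrinking the denominator (reclassifying articles as non-citable front matter).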
[Abstract] The present study aims to establish the extent to which high-quality open access journals are available as an outlet for publication, by examining their distribution across scientific disciplines, including the distribution of journals without article processing charges. The study is based on a systematic comparison between the journals included in the DOAJ and the journals indexed in the Journal Citation Reports (JCR) Science edition 2013, released by Thomson Reuters. The impact factors of Open Access (OA) journals were lower than those of other journals by a small but statistically significant amount. Open access journals are present in the upper quartile (by impact factor) of 85 of the 176 (48.8%) categories examined. Only 16 categories (9%) contained no OA journal with an impact factor.