“None of this is to deny that if you have strong primary research to report it is better to push it out in journals wherever feasible. But book chapters can have valuable exploratory, discursive, synoptic and review roles. And they can carry new findings too, especially in start-up fields and with good editors and editing. The old problems from the early digital phase, when for a while chapter texts became literally unfindable, and authors passively left things to publishers to promote their work, no longer apply with much of their previous force. However conservative your editors and publishers may be, you can get your chapter noticed, read and cited in the communities that matter to you.”
Abstract: With the rise of Wikipedia as a first-stop source for scientific information, it is important to understand whether Wikipedia draws upon the research that scientists value most. Here we identify the 250 most heavily used journals in each of 26 research fields (4,721 journals, 19.4M articles) indexed by the Scopus database, and test whether topic, academic status, and accessibility make articles from these journals more or less likely to be referenced on Wikipedia. We find that a journal’s academic status (impact factor) and accessibility (open access policy) both strongly increase the probability of it being referenced on Wikipedia. Controlling for field and impact factor, the odds that an open access journal is referenced on the English Wikipedia are 47% higher than those of paywall journals. These findings provide evidence that a major consequence of open access policies is to significantly amplify the diffusion of science, through an intermediary like Wikipedia, to a broad audience.
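As a quick aside on the arithmetic: a "47% higher odds" figure like the one quoted is an odds ratio of about 1.47, the quantity a logistic regression reports for a binary predictor. A minimal sketch of how such a ratio is formed from a 2×2 table, using hypothetical counts rather than the study's actual data:

```python
# Sketch of an odds-ratio calculation. The counts below are hypothetical,
# chosen only to illustrate what "47% higher odds" means; they are NOT
# the counts from the study quoted above.

def odds_ratio(ref_oa, unref_oa, ref_pw, unref_pw):
    """Odds ratio of being referenced on Wikipedia: open access vs. paywall."""
    odds_oa = ref_oa / unref_oa  # odds of referencing among open access journals
    odds_pw = ref_pw / unref_pw  # odds of referencing among paywall journals
    return odds_oa / odds_pw

# An odds ratio of ~1.47 corresponds to 47% higher odds for open access.
or_ = odds_ratio(ref_oa=147, unref_oa=100, ref_pw=100, unref_pw=100)
print(round(or_, 2))  # prints 1.47
```

In the actual study the ratio comes from a logistic regression with field and impact factor as covariates, but the interpretation of the resulting coefficient is the same as in this bare-table version.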
Sharing research data provides benefit to the general scientific community, but the benefit is less obvious for the investigator who makes his or her data available.
We examined the citation history of 85 cancer microarray clinical trial publications with respect to the availability of their data. The 48% of trials with publicly available microarray data received 85% of the aggregate citations. In a linear regression controlling for journal impact factor, date of publication, and author country of origin, publicly available data was significantly (p = 0.006) associated with a 69% increase in citations.
This correlation between publicly available data and increased literature impact may further motivate investigators to share their detailed research data.
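A percentage boost like the 69% reported above is the kind of number that falls out of regressing log-transformed citation counts on a data-availability indicator: for a coefficient beta on the indicator, exp(beta) − 1 is the estimated proportional increase. A minimal sketch of that mechanic, using made-up citation counts rather than the study's data:

```python
import math

# Sketch of the regression style described above: regress log(citations) on a
# binary "data publicly available" indicator; exp(beta) - 1 is the estimated
# percentage increase. All numbers here are hypothetical, not the study's data.

def ols_slope(x, y):
    """Closed-form OLS slope for a single predictor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

shared = [1, 1, 0, 0]                                 # 1 = microarray data shared
log_cites = [math.log(c) for c in (40, 45, 24, 27)]   # hypothetical citation counts

beta = ols_slope(shared, log_cites)
print(f"estimated citation boost: {math.exp(beta) - 1:.0%}")  # prints 67%
```

The real analysis adds covariates (impact factor, publication date, country of origin), but the back-transformation from a log-scale coefficient to a percentage increase works the same way.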
“QDR selects, ingests, curates, archives, manages, durably preserves, and provides access to digital data used in qualitative and multi-method social inquiry. The repository develops and publicizes common standards and methodologically informed practices for these activities, as well as for the reusing and citing of qualitative data. Four beliefs underpin the repository’s mission: data that can be shared and reused should be; evidence-based claims should be made transparently; teaching is enriched by the use of well-documented data; and rigorous social science requires common understandings of its research methods….”
“The Alfred P. Sloan Foundation has made a 2-year, $747K award to the California Digital Library, DataCite and DataONE to support collection of usage and citation metrics for data objects. Building on pilot work, this award will result in the launch of a new service that will collate and expose data-level metrics.
The impact of research has traditionally been measured by citations to journal publications: journal articles are the currency of scholarly research. However, scholarly research is made up of a much larger and richer set of outputs beyond traditional publications, including research data. In order to track and report the reach of research data, methods for collecting metrics on complex research data are needed. In this way, data can receive the same credit and recognition that is assigned to journal articles.
‘Recognition of data as valuable output from the research process is increasing and this project will greatly enhance awareness around the value of data and enable researchers to gain credit for the creation and publication of data’ – Ed Pentz, Crossref.
This project will work with the community to create a clear set of guidelines on how to define data usage. In addition, the project will develop a central hub for the collection of data-level metrics. These metrics will include data views, downloads, citations, saves, and social media mentions, and will be exposed through customized user interfaces deployed at partner organizations. Working in an open source environment, and including extensive user experience testing and community engagement, the products of this project will be available to data repositories, libraries and other organizations to deploy within their own environment, serving their communities of data authors.”
Abstract: “Conducting copyright clearance and ingesting appropriate versions of faculty publications can be a labor-intensive and time-consuming process. At Loyola Marymount University (LMU), a medium-sized, private institution, the Digital Library Program (DLP) had been conducting copyright clearance one publication at a time. This meant that it took an enormous amount of time from start to finish to review and process the list of publications on a given faculty member’s CV. In October 2016, the Digital Program Librarian learned about the automated workflow developed by librarians at University of North Texas and decided to give it a try. At this time, the DLP hired a Library Assistant who then began exploring and experimenting with this automated workflow. The goal of such experimentation was to increase efficiency in our processes to ingest more faculty publications in LMU’s institutional repository.
In this session, we will share information about our workflows and tools used to manage our various processes. […]”
“In short, when women political scientists make their work freely available online, their research is cited at similar rates to men’s work. This is a very positive finding given the current gender imbalance found in many aspects of the discipline. (Side note: many scholars, regardless of gender, fail to self-archive due to lack of know-how; Carling has written a very helpful primer on the subject. See also Atchison and Bull.)
A final caveat is necessary. These results should be interpreted with caution. First, the finding that OA can help to negate the gender citation advantage is surprising in light of previous research on gendered citation effects (GCE). This must be investigated further to determine whether it is an artefact of the data, whether the pattern holds when other data are used, and whether the pattern holds once self-archiving becomes more commonplace in political science. Second, as with any single-discipline study, the results may lack generalisability. There is considerable evidence that GCE varies by discipline, so it will be important to explore the GCE-OA interaction both within and across disciplines.”
There is fresh momentum in the scholarly publishing world to open up data on the citations that link research publications.
Six organizations today announced the establishment of the Initiative for Open Citations (I4OC): OpenCitations, the Wikimedia Foundation, PLOS, eLife, DataCite, and the Centre for Culture and Technology at Curtin University.
“The Initiative for Open Citations (I4OC) aims to allow anyone to access science papers’ reference lists and to build analytical services on top of that raw data. Started last year by the Wikimedia Foundation in San Francisco, California and five other partner organizations, I4OC announced at its official launch on 6 April that 29 organizations, including some of the world’s largest scientific publishers, have now agreed to openly release citation data.”
Abstract: “This is the first in-depth study on the coverage of Microsoft Academic (MA). The coverage of a verified publication list of a university was analyzed on the level of individual publications in MA, Scopus, and Web of Science (WoS). Citation counts were analyzed and issues related to data retrieval and data quality were examined. A Perl script was written to retrieve metadata from MA. We find that MA covers journal articles, working papers, and conference items to a substantial extent. MA surpasses Scopus and WoS clearly with respect to book-related document types and conference items but falls slightly behind Scopus with regard to journal articles. MA shows the same biases as Scopus and WoS with regard to the coverage of the social sciences and humanities, non-English publications, and open-access publications. Rank correlations of citation counts are high between MA and the benchmark databases. We find that the publication year is correct for 89.5% of all publications and the number of authors for 95.1% of the journal articles. Given the fast and ongoing development of MA, we conclude that MA is on the verge of becoming a bibliometric superpower. However, comprehensive studies on the quality of MA data are still lacking.”