“MicroPublishing in this context means the publication of short, single-experiment, peer-reviewed OA articles, with DOIs and metadata to make them citable and discoverable. Typically this might be supplementary or ancillary material that might once have been grouped into a major research program report, delaying it and making it too dense or bulky. Or it might be work on reagents that has genuine scientific interest but which, as an incidental finding, only clutters the main report. And MicroPublishing might be a first chance for a postgrad, or even a student doing lab support work, to get their name onto a collaborative publication for the first time. And in all of this work of adding small pieces to the jigsaw and making sure they did not get lost or overlooked – curation is clearly at the heart of these efforts – I heard nothing described in terms of workflows or process that would not have been identical in a commercial environment. And that is important. There is a great deal of bogus hype around “publishing expertise”. If you are clever enough to be a Professor of Genomics, then mastering publishing does not seem to be a huge intellectual challenge. And the digitally networked world has democratised all processes like publishing. We can all be publishers now – and we all are! …
And we should be attentive not just because of the competitive element. I have a 30-year record of saying that the competitor to the information provider in a digital network is the user doing it for himself, and I am not altering that view now. But we really need to pay attention because this is where and how innovation takes place. This is where and how needs are discovered. If granularity, discoverability and speed to market are the critical issues here, then those are the issues that we must attend to, instead of packing articles with greater amounts of supplemental material, holding articles in peer review until they are “complete” or using citations to game journal impact factors. Above all, we have to remember that scholarly communication is communication by and for scholars. They are, and will keep, re-inventing it all the time. Rather than propagandising the virtues of “traditional publishing”, commercial publishers should be forming relationships that help change take place cost-effectively and at scale.”
“Daniel Garisto published a good backgrounder late last year on the history of preprints, and the beginnings of their adoption in biomedicine. And I wrote a post about the pros and cons of preprints in biomed back in 2016. I don’t think anyone, though, had “the worst pandemic in 100 years will massively expand the use of preprints overnight” on their bingo cards. But we already have at least one preprint and at least one journal article about it! Ironically, it was a journal publication about preprints that appeared first.
It was by Maimuna Majumder and Kenneth Mandl, in March, analyzing media and other interest in preprints versus journal articles about the reproduction number for the new coronavirus. They concluded that because of the speed of release of preprints, they were driving the discourse, not journal articles. Decision-making can be informed quickly, they point out, but it can go badly wrong, too, as when a preprint had to be retracted after an outcry, because it “erroneously claimed that COVID-19 contained HIV ‘insertions’”….”
“German universities have uploaded the results of 76 clinical trials over the past six months. Universities have uploaded twice as many results in the past six months as during the preceding six years combined….”
“Chemistry is now starting to embrace preprints, with more and more researchers in chemical and materials sciences posting their manuscripts online prior to peer review. Preprints can speed up the dissemination of scientific results and lead to more informal exchanges between researchers, hopefully accelerating the pace of research as a whole….
Several bibliometric studies have shown that preprints also increase the visibility of the work being done [5] by combining two distinct advantages: they are open access, and they appear online earlier than the final peer-reviewed publication. This typically translates into more views and higher impact than non-preprinted articles in the same field [6,7]: namely, preprinted articles typically have better online metrics, attention scores and numbers of citations [8]. …”
“Over the past five years, Scopus has invested in increasing the discoverability of open access content. In 2018, Scopus partnered with CrossRef to retrieve open access information for ~2M records in Scopus. This currently includes ‘Gold’ OA, either in OA journals or hybrid journals. Users can now view open access articles, which were previously indicated only at the journal level (more information here). We then quickly moved on to partner with ImpactStory, which means that Scopus users can now search over 7 million peer-reviewed articles tagged as OA in Scopus (more info here)….”
“PLOS ONE and Scientific Reports have been very successful journals. Any publisher would be thankful to have them in their portfolio. Nonetheless, their unstable performance should also serve as a warning. In the year of their steepest decline, each journal shrank by about 7,000 articles, which can translate to a loss of more than $10m year-on-year. That will reflect poorly on the balance sheet of any publisher.
The takeaways for publishers are simple:
Do not get carried away; the revenue of megajournals can be inconsistent, so avoid overselling their success to investors and avoid reckless investments
Invest heavily in marketing; if the journal is shedding 10% of citability every year, marketing should try to plug this hole as well as possible
Build around their success; launch affiliated, higher impact journals that will absorb some of the eventual content loss
Do not put all your eggs in one basket; pursue a less risky, broad portfolio approach rather than a smaller, focused megajournal approach….”
“When Albert-László Barabási, a computational scientist at Northeastern University in Boston, Massachusetts, submitted a paper to the preprint server bioRxiv last month, he received an unexpected response. The biomedical repository would no longer accept manuscripts making predictions about treatments for COVID-19 solely on the basis of computational work. The bioRxiv team suggested that Barabási submit the study to a journal for rapid peer review, instead of posting it as a preprint.
Publication norms are changing rapidly for science related to the coronavirus pandemic, as scientists worldwide conduct research at breakneck speeds to tackle the crisis. Preprint servers — where scientists post manuscripts before peer review — have been flooded with studies. The two most popular for coronavirus research, bioRxiv and medRxiv, have posted nearly 3,000 studies on the topic (see ‘Preprint surge’). The servers’ merits are clear: results can be disseminated quickly, potentially informing policy and speeding up research that could lead to the development of vaccines and treatments. But their popularity is also putting a spotlight on the limited scrutiny that these studies receive. Without peer review, it’s hard to check the quality of the work, and sharing poor science could be harmful, especially when research can have immediate effects on medical practice. That has led platforms, including bioRxiv and medRxiv, to enhance their usual screening procedures….”
“If you had just one word to sum up what’s happening in the world of open data right now, it should be progress.
On 15 January the International Association of Scientific, Technical and Medical Publishers launched ‘STM 2020 Research Data Year’, an industry-wide initiative to expand the number of journals depositing data links as well as grow the volume of citations to datasets.
Then, two weeks later, eight university networks – representing more than 160 research-intensive universities worldwide – signed the Sorbonne Declaration on research data rights, which sets out the needs and benefits of having research data open, by default, wherever possible….”
“Hypothesis just reached its 10 millionth annotation. Half of those have happened in the last year.
This milestone is the achievement of a community: all the scientists, scholars, journalists, authors, publishers, fact-checkers, technologists and, now more than ever, teachers and students who have used and valued collaborative annotation over the years. Thank you all for reaching this momentous number with us, especially during this challenging time….”
“Here, we measure in near real time the number of publications on COVID-19 and Sars-CoV-2 and the share of Open Access publications. We generally focus on those Open Access publications that can be found in peer-reviewed journals, so-called golden Open Access, and those that can be found in repositories (green Open Access). To this end, we use Scopus, one of the most important citation databases for peer-reviewed journals; The Lens, a “free & open patent and scholarly search” platform; and bioRxiv, the most important preprint server for the life sciences, as well as medRxiv for health science, as sources for our dashboard (see below).
What does this tracking offer and why is it important? The COVID-19 crisis is without doubt a special situation for research in general – and science in particular. Results on the virus must be published quickly (speed) and be accessible to all (in other words, “open”). Not only to enable academic collaboration of scientists around the globe, but also to form a basis for informed political decision-making. On our dashboard, the speed of publishing is demonstrated by the number of preprint publications, which are usually not peer-reviewed and therefore available faster. Openness is shown by the overall number of open access publications. …”