Abstract: The United States (‘US’) extended most copyright terms by 20 years in 1998, and has since exported that extension via ‘free trade’ agreements to countries including Australia and Canada. A key justification for the longer term was the claim that exclusive rights are necessary to encourage publishers to invest in making older works available — and that, unless such rights were granted, they would go underused. This study empirically tests this ‘underuse hypothesis’ by investigating the relative availability of ebooks to public libraries across Australia, New Zealand, the US and Canada. We find that books are actually less available where they are under copyright than where they are in the public domain, and that commercial publishers seem undeterred from investing in works even where others are competing to supply the same titles. We also find that exclusive rights do not appear to trigger investment in works that have low commercial demand, with books from 59% of the ‘culturally valuable’ authors we sampled unavailable in any jurisdiction, regardless of copyright status. This provides new evidence of how even the shortest copyright terms can outlast works’ commercial value, even where cultural value remains. Further, we find that works are priced much higher where they are under copyright than where they are in the public domain, and these differences typically far exceed what would be paid to authors or their heirs. Thus, one effect of extending copyrights from life + 50 to life + 70 is that libraries are obliged to pay higher prices in exchange for worse access.
This is the first published study to test the underuse hypothesis outside the US, and the first to analyse comparative availability of identical works across jurisdictions where their copyright status differs. It adds to the evidence that the underuse hypothesis is not borne out by real world practice. Nonetheless, countries are still being obliged to enact extended terms as a cost of trade access. We argue that such nations should explore alternative ways of dividing up those rights to better achieve copyright’s fundamental aims of rewarding authors and promoting widespread access to knowledge and culture.
“Rebecca Giblin (previously) writes, “We’ve just dropped a new study we’ve been working on for a year. You know how it keeps being claimed that we need longer copyrights because nobody will invest in making works available if they’re in the public domain? Heald and some others have done some great work debunking that in the US context, but now we’ve finally tested this hypothesis in other countries by looking at the relative availability of ebooks to libraries. It’s also the first time anyone has been able to compare availability of identical works (by significant authors) across jurisdictions. The books we sampled were all in the public domain in Canada and NZ, all under copyright in Australia, and a mix in the US (courtesy of its historical renewal system).”
“So what’d we find? That Canada and NZ (public domain) have access to more books and at cheaper prices than Australia (copyright) and the US (mixed). Also that publishers don’t seem to have any problem competing with each other on the same popular titles. And, sadly but not surprisingly: 59% of our sampled ‘culturally significant’ authors had no books available to libraries in any country regardless of copyright status. That’s because even the shortest terms wildly outlast most books’ commercial life (even where they still have cultural value). …”
“Scholarly publishing is a world of maddening inefficiencies. It’s also an unavoidable part of scientific discussion, and it remains one of the only features of academic life that offers some semblance of a meritocratic measure of a scholar’s contributions to the field. “Publish or perish,” as the adage goes, and publishing means dealing with publishers.
Yet every step of the typical academic publication process is fraught with practices that would quickly drive away the customer base of almost any other industry….”
Abstract: Data sharing, i.e. depositing data in repositories accessible to the research community, is not spreading as rapidly across the life science research community as hoped or expected. I consider the sociological and cultural context of research, lay out why the community should instead move to data publishing (with a focus on neuroscience data), and outline practical steps that can be taken to realize this goal.
“Presenting evidence from the Harbingers Study, a three-year longitudinal study of Early Career Researchers (ECRs), David Nicholas assesses the extent to which the new wave of researchers are driving changes in scholarly practices. Finding that innovative practices are often constrained by institutional structures and precarious employment, he suggests that the pace of change in these areas is always going to be slower than might be expected. …”
Abstract: Open access to research data is one of the key themes of current science development concepts and relevant R&D strategies, at least in Europe. A systemic change in the modus operandi of science and research should lead to so-called Open Science. The presented paper questions the extent to which the Open Science concept is reflected in the strategies of Czech universities. The paper first describes the basic idea of Open Access to Research Data, including the principles of “FAIR data” as one of its key assumptions. After a brief characterization of the Czech university sector, the results of an empirical analysis of the inclusion of the Open Access to Research Data concept in the current strategic plans of Czech universities are presented. The paper concludes with an evaluation of these results, which reveals an underestimation of the Open Science concept in the current strategic plans of Czech universities.
“Scientific progress is anchored in the way science is communicated to other scientists. Research papers are published through an antiquated system: scientific journals. This system, enforced by the scientific journals’ lobby, enormously slows down the progress of our society. This article analyzes the limitations of the current scientific publishing system, focusing on journals’ interests, their consequences on science and possible solutions to overcome the problem….”
“Ecologist Thomas Crowther knew that scientists had already collected a vast amount of field data on forests worldwide. But almost all of those data were sequestered in researchers’ notebooks or personal computers, making them unavailable to the wider scientific community. In 2012, Crowther, then a postdoctoral researcher at Yale University in New Haven, Connecticut, began to e-mail and cold-call researchers to request their data. He started to assemble an inventory, now hosted by the Global Forest Biodiversity Initiative, an international research collaboration, that contains data on more than 1 million locations. Data are stored in CSV files (plain-text files that contain a list of data) on servers at Crowther’s present laboratory at the Swiss Federal Institute of Technology in Zurich and on those of a collaborator at Purdue University in West Lafayette, Indiana; he hopes to outsource database storage to a third-party organization with expertise in archiving and access.
After years of courting and cajoling, Crowther has persuaded about half of the data owners to make their data public. The other half, he laments, say that they support open data in principle, but have specific reasons for keeping their data sets private. Mainly, he explains, they want to use their data to conduct and publish their own studies.
Crowther’s database challenges reflect the current state of science: partly open, partly closed, and with unclear and inconsistent policies and expectations on data sharing that are still in flux….”
“There is a good reason why lawyers need to get retainers: unlike a house, the products they provide can’t be repossessed, and it’s difficult to know how much they are worth. Lawyers aren’t selling a win, they are selling information, advice, and services which aim to optimize the outcome based on the situation. Improved information is worth a great deal to clients in aggregate (I have done some research into the value of improved information for parties to litigation here), but it’s difficult to know how much particular information will be worth in a particular situation before it is acquired.
This leads to difficulties in how to price legal services, because lawyers want to make a good return on their labour and expertise, and clients tend to want to pay less than their services may be worth because they are informational in nature. Given this dynamic, it makes sense that lawyers would look for a mechanism like the billable hour to quantify the value of the work they deliver.
In turn, the work publishers and libraries do is also difficult to price. Large commercial legal publishers are well known for not disclosing the prices associated with their sales contracts and a large part of this is because they are trying to harvest as much value as possible from each customer. Different areas of practice are more or less profitable and different people value particular research tools differently, which means that they have different willingness to pay.
By charging everyone different rates, the publishers aim to get the maximum value customers are willing to pay….”
“I know of a case where an unemployed researcher saw his postdoctoral research paper blocked in limbo by Taylor & Francis after acceptance, with the demand that the author either pay the hefty APC of $2500 or formally withdraw the manuscript. All he was offered was a minor discount. Eventually, that ex-postdoc’s former employer conceded to his pleas and agreed to pay the APC. Only that it hasn’t happened yet, and the accepted, proofread paper has been stuck for over half a year in the Taylor & Francis black box, unpaid and unpublished. You can call it blackmail if you like. [this story has been corrected, I initially wrote the author was in luck and the university did pay.] …”