“Our new AI for Cultural Heritage program will use artificial intelligence to work with nonprofits, universities and governments around the world to help preserve the languages we speak, the places we live and the artifacts we treasure. It will build on recent work we’ve pursued using various aspects of AI in each of these areas, such as:
Work in New York, where we have collaborated with The Metropolitan Museum of Art and MIT to explore ways in which AI can make The Met’s Open Access collection accessible, discoverable and useful to the 3.9 billion internet-connected people worldwide.
Work in Paris at the Musée des Plans-Reliefs, where we have partnered with two French companies, HoloForge Interactive and Iconem, to create an entirely new museum experience with mixed reality and AI that paid homage to Mont-Saint-Michel, a French cultural icon off the coast of Normandy.
And in southwestern Mexico, where we’re engaged as part of our ongoing efforts to preserve languages around the world to capture and translate Yucatec Maya and Querétaro Otomí using AI to make them more accessible to people around the world….”
“Heritage institutions are places in which works of art, historical records, and other objects of cultural or scientific interest are sheltered and made accessible to the public. The equivalent in the digital world is already taking shape, through the digitization and sharing of digital-born or digitized objects on online platforms. In this article we shed light on how the issue of structured data about heritage institutions is being tackled by Wikipedia and its sister project Wikidata, through their “Sum of All GLAM” project.
Access to these objects, and information about them, is provided and mediated both through platforms maintained by the heritage sector itself and through more general-purpose platforms, which often serve as a first point of entry for the wider public. These platforms include Google, Facebook, YouTube, and Wikipedia, which also happen to be among the most visited websites on the Web. In this emerging data and platform ecosystem, Wikipedia and related Wikimedia projects play a special role as they are community-driven, non-profit endeavours. Moreover, these projects are working hard to make data and information available in a free, connected and structured manner, for anybody to re-use.
There are various layers of information about heritage institutions, ranging from descriptions of institutions themselves and descriptions of their collections, to descriptions of individual items. There may be digital representations of these items, and in some cases even searchable content within the items. Figure 1 illustrates how the top four layers of data and information are currently addressed in Wikipedia, with Wikidata and Wikimedia Commons increasingly focussing on providing structured and linked data alongside the unstructured or semi-structured encyclopaedic information contained in Wikipedia articles….”
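The structured layers described above can be explored programmatically: Wikidata publishes its data through a public SPARQL endpoint at query.wikidata.org. As a minimal sketch, the snippet below only builds an illustrative query string for listing heritage institutions; P31 (“instance of”), P279 (“subclass of”) and Q33506 (“museum”) are real Wikidata identifiers, but the query shape and function name are this example’s own, not something from the article.

```python
# Sketch: construct a SPARQL query for the Wikidata Query Service
# (https://query.wikidata.org/sparql) listing heritage institutions.
# P31 = "instance of", P279 = "subclass of", Q33506 = "museum" are real
# Wikidata identifiers; the query itself is illustrative only.

def glam_query(class_qid: str = "Q33506", limit: int = 10) -> str:
    """Return a SPARQL query for instances (and subclass instances) of a class."""
    return f"""
    SELECT ?institution ?institutionLabel WHERE {{
      ?institution wdt:P31/wdt:P279* wd:{class_qid} .
      SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
    }}
    LIMIT {limit}
    """

print(glam_query())
```

Sending this query (URL-encoded, with `format=json`) to the endpoint would return the matching institutions with their English labels.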
“On June 19, the reading rooms of the Archive of the Russian Academy of Sciences (ARAN) reopened after months of inactivity. The archive’s director, Alexander Rabotkevich, told Meduza about the reopening.
“I am pleased to inform you that, beginning today, the reading rooms of the RAN archives in Moscow and St. Petersburg are open once again,” he said. “Our employees’ salaries have been paid in full, and all debts […] whose payment was necessary for the organization’s accounts to operate have been paid.” According to Rabotkevich, the debts that brought the archive’s operations to a stop amounted to 4.3 million rubles ($68,000). A subsidy from Russia’s Education Ministry helped the archive pay up….”
“With the new Europeana Newspapers collection, Europeana Collections gives access to hundreds of newspaper titles and millions of newspaper pages, spanning four centuries and 20 countries from across Europe….”
“A single point of access. A gateway to America’s cultural riches. Available to everyone. This was the founding vision of the Digital Public Library of America: an open, distributed national digital library to educate, inform and empower everyone. Today, we are doubling down on that vision with a new strategic plan to guide our work in the coming years.
Our mission remains constant: to provide equitable access to knowledge for all. We will advance this mission by expanding the cultural heritage aggregation network that has been our hallmark achievement, growing our collaborative ebooks solutions for libraries, and heightening our role as a library convener and innovator.
DPLA’s strategy is guided by three beliefs: that we are stronger when we work collaboratively; that everyone—particularly those historically marginalized from projects like ours—is included; and that digital technology can be a positive force for unleashing knowledge and enabling creativity. …”
“The Digital Public Library of America (DPLA) is pleased to announce a new $622,000 grant from The Andrew W. Mellon Foundation to strengthen and expand its national cultural heritage network and platform. DPLA’s cultural heritage aggregation program has been its signature achievement since launching in 2013, making over 34 million items—photographs, maps, news footage, oral histories, manuscript documents, artwork, and more—from 4,000 libraries, museums, and archives across the country freely discoverable to all.
The new two-year grant from the Mellon Foundation will enable DPLA to support the current and future activities and priorities of its national network and continue to make their materials available to everyone. DPLA will work with its partners to develop new services and tools to support the needs of the diverse institutions in our cultural heritage network; build new partnerships to ensure that every institution in the country has a pathway to contribute materials to DPLA; promote the use of DPLA’s rich collections by learners of all stripes; and continue to work with our members to advance our shared goals of increasing access to our nation’s digital collections. …”
“On 17 May 2019 the Directive on Copyright in the Digital Single Market was published in the Official Journal of the European Union. Member States have until 7 June 2021 to implement the new rules into national law. In this explainer, Paul Keller, Policy Advisor to the Europeana Foundation, breaks down the changes these new rules bring to Europe’s cultural heritage institutions. …
Article 14 of the directive clarifies a fundamental principle of EU copyright law. The article makes it clear that “when the term of protection of a work of visual art has expired, any material resulting from an act of reproduction of that work is not subject to copyright or related rights, unless the material resulting from that act of reproduction is original”. In other words, the directive establishes that museums and other cultural heritage institutions can no longer claim copyright over (digital) reproductions of public domain works in their collections. In doing so the article settles an issue that has sparked quite some controversy in the cultural heritage sector in the past few years and aligns the EU copyright rules with the principles expressed in Europeana’s Public Domain Charter….
Finally, the DSM directive introduces not one but two new Text and Data Mining exceptions (Articles 3 & 4) that will need to be implemented by all Member States. The first exception (Article 3) allows “research organisations and cultural heritage institutions” to make extractions and reproductions of copyright protected works to which they have lawful access “in order to carry out, for the purposes of scientific research, Text and Data Mining”. Under this exception cultural heritage institutions can text and data mine all works that they have in their collections (or to which they have lawful access via other means), as long as this happens for the purpose of scientific research.
The second exception (Article 4) is not limited to Text and Data Mining for the purpose of scientific research. Instead it allows anyone (including cultural heritage institutions) to make reproductions or extractions of works to which they have lawful access for Text and Data Mining regardless of the underlying purpose. …”
“There has been a growing interest from libraries and other cultural heritage organizations in Wikidata. Of the many potential uses for Wikidata, one emerging area of focus has been using Wikidata as a hub for institutional identifiers. Many organizations maintain unique identifiers for people, subjects, works, etc. If these IDs are all added to Wikidata then you could seamlessly access data from dozens of sources if you know the Wikidata ID. If we return to the author example from above you can see the Wikidata page for Virginia Woolf has ninety external links to various organizations. Many of these are national libraries, museums, and other cultural heritage institutions including the Library of Congress.
The Library of Congress maintains many authority files that are widely used. Two of the largest are the Name Authority File (NAF) and Library of Congress Subject Headings (LCSH). The Network Development and MARC Standards Office maintains the Linked Open Data version of these files at the site id.loc.gov. For example, authority data for Virginia Woolf is located at //id.loc.gov/authorities/names/n79041870. This data ensures that items being cataloged are all referencing the same person. One of the goals of linked data is to make sure you link out to others’ data. With id.loc.gov we maintain links to many other institutions’ authority files, including the French and German national libraries, other government services such as the Department of Agriculture, and other cultural institutions like the Getty Museum. You’ll notice these links on the page, and they are also present in the machine-readable data. With the potential of Wikidata being a hub of identifiers, we wanted to also include links in our authority records out to Wikidata….
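The hub-of-identifiers idea is concrete in Wikidata’s entity JSON: the Library of Congress authority ID is stored as a claim on property P244, from which the id.loc.gov URL follows directly. A minimal sketch, using a hand-trimmed fragment that mirrors the shape of the JSON returned by Wikidata’s `wbgetentities` API rather than a live call (Q6880 and n79041870 are the real identifiers for Virginia Woolf; the helper function is this example’s own):

```python
# Extract the Library of Congress authority ID (property P244) from a
# Wikidata entity and build the corresponding id.loc.gov URL.
# The dict below is a hand-trimmed fragment mirroring Wikidata's entity
# JSON shape; Q6880 / n79041870 are the real IDs for Virginia Woolf.

entity = {
    "id": "Q6880",
    "claims": {
        "P244": [
            {"mainsnak": {"datavalue": {"value": "n79041870"}}}
        ]
    },
}

def loc_authority_url(entity):
    """Return the id.loc.gov names URL for an entity's first P244 claim, or None."""
    for claim in entity.get("claims", {}).get("P244", []):
        lccn = claim["mainsnak"]["datavalue"]["value"]
        return f"https://id.loc.gov/authorities/names/{lccn}"
    return None

print(loc_authority_url(entity))
# → https://id.loc.gov/authorities/names/n79041870
```

The same pattern works for any of the ninety-odd external identifier properties on an entity like Woolf’s: each is just another property key under `claims`.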
Using records from the Library of Congress Prints & Photographs Division I built an interface that combines Library of Congress collection items with Wikidata information. This tool demonstrates the possibilities in connecting these two knowledge systems….”
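The excerpt doesn’t show the tool’s code, but the core join it describes is easy to sketch: match Library of Congress item records to Wikidata entities on a shared authority identifier (the NAF ID, which Wikidata stores as P244), then merge the two descriptions. Everything below is an invented placeholder, not real API output:

```python
# Hypothetical sketch: join Library of Congress item records with Wikidata
# entities on a shared NAF authority ID. All records are invented
# placeholders illustrating the join, not real collection data.

loc_items = [
    {"title": "Portrait photograph", "creator_naf": "n79041870"},
    {"title": "Manuscript page",     "creator_naf": "n00000000"},
]

wikidata_entities = [
    {"qid": "Q6880", "label": "Virginia Woolf", "naf": "n79041870"},
]

def enrich(items, entities):
    """Attach Wikidata QIDs and labels to LC items sharing a NAF ID."""
    by_naf = {e["naf"]: e for e in entities}  # index entities by NAF ID
    enriched = []
    for item in items:
        match = by_naf.get(item["creator_naf"])
        enriched.append({
            **item,
            "wikidata_qid": match["qid"] if match else None,
            "wikidata_label": match["label"] if match else None,
        })
    return enriched

for row in enrich(loc_items, wikidata_entities):
    print(row)
```

A dictionary keyed on the shared identifier keeps the join linear in the number of records, which matters once the item list grows to collection scale.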
“Can India lead a global revolution in access to knowledge? In this talk, Carl Malamud will discuss early efforts in India to take small initial steps toward changing how we access information. He will discuss public interest litigation in the Hon’ble High Court of Delhi with two co-petitioners to make all Indian standards available.
In Bengaluru, the Indian Academy of Sciences has embarked on an ambitious program to digitize scientific literature, a program which will soon expand to other kinds of institutions in Chennai, Mangalore, and other locations, driven by a volunteer group known as the Servants of Knowledge. And in Delhi, 750 terabytes of disk are spinning at JNU and IIT Delhi, the beginnings of a research facility for big data and text mining as well as a distribution depot for moving content throughout India. Carl will explain how these components are part of his vision for what might become a Public Library of India, making available the vast treasures of knowledge of India to all….”