The need for free and open data in Earth observation activities – SpaceNews

“The evolving quality and quantity of Earth observation data enables an increasingly profound understanding of the climate crisis, enhancing the efficacy of mitigation strategies as well as the management of risk and of natural or human-made disasters. Access to satellite imagery offers a unique and game-changing advantage compared to data collected in situ: the capacity to build data sets with decades’ worth of observations while providing constant, up-to-date, and reliable information.

The environmental emergency, while having severe global effects, will not affect all states equally. Poorer, less developed countries are expected to face severe challenges directly related to climate change, and will experience the large majority of climate-induced human mobility, be it internally displaced people or climate migrants. Open Data policies promoting free and open access to Earth observation data and information are an important tool for guaranteeing access to satellite imagery for those states that do not yet possess the capabilities for independent access to space. This is especially true for data related to the causes and effects of climate emergencies, such as the Essential Climate Variables identified by the Global Climate Observing System. Open Data principles would not only greatly enhance the mitigation strategies of less-developed countries but also significantly further their risk and disaster management….”

WordPress Saves Creative Commons Search Engine From Shutting Down

“Creative Commons Search is joining WordPress.org, which will help keep the search engine of free-to-use images running for the foreseeable future.

Matt Mullenweg, CEO of WordPress parent company Automattic, says he decided to bring CC Search on board after hearing it was in danger of shutting down….”

How 3-D Scanning Is Reinventing Paleoanthropology – Scientific American

“My principal job on site is to reconstruct fossils, and so I was tasked with putting together the DNH 155 skull. It took around a week to fully remove the skull fragments and all the sediment gluing the pieces together from their original resting place within the Drimolen Main Quarry. As each of the roughly 300 fragments was painstakingly removed, it was digitized with an Artec Space Spider, a professional handheld 3-D scanner. The scanner shoots patterns of light that distort based on the geometry of the object they hit and bounce back to the scanner—like a bat using sonar, but in this case, light rather than sound is what’s bouncing back and forth. This technology was used to create high-resolution digital records of the location of each piece of the cranium within the sediment in case any pieces unexpectedly dislodged….

The first phase of reconstruction was completed by manually putting the pieces together. But even after manual reconstruction, there were some elements of the cranium that couldn’t be placed, because the contact point was too small or a tiny part of the edges had been lost. In these cases, the Artec software was used to digitally situate the parts in relation to one another. Specifically, the face of DNH 155 cannot safely be attached to the rest of the cranium, so this fusion was achieved digitally. Although the pieces could have been glued, joining them in this fashion would have been risky and would likely have caused permanent damage to the fossil. The published reconstruction of the DNH 155 cranium would not have been possible without 3-D technology, the absence of which would have been a huge blow to the ability of other researchers to assess the fossil in the future….

Reconstruction was only one part of the research program designed to reveal the secrets of this rare skull. Many of the researchers who work on fossils from South Africa are unable to travel to Johannesburg to work on the originals. This is especially true for researchers who are not based at wealthy institutions, and for cash-strapped students in general. It is for this reason that the Drimolen team have invested significant capital to digitize the DNH 155 cranium and most of the Drimolen fossil assemblage. As a Ph.D. student myself, I am particularly interested in the potential for high-quality 3-D scanners such as the Space Spider to democratize research by allowing free and easy access to research-quality data. While permissions and access to such data are controlled by the University of the Witwatersrand (in the case of the Drimolen fossils), it is our ultimate intention to share our data with researchers, particularly early-career researchers, who are pursuing a topic related to the South African hominin fossils….”
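The digital fitting described in the excerpt is, in general terms, a rigid mesh-registration problem. As a hedged sketch of that technique (not the Artec software’s actual pipeline, which the article does not detail), aligning two scanned fragments with the open-source trimesh library’s ICP-based registration might look like this; the file names are hypothetical placeholders.

```python
# A minimal sketch of rigid mesh registration between two fragment scans.
# File names are hypothetical; this is a generic illustration, not the
# Drimolen team's actual workflow.
import trimesh
from trimesh.registration import mesh_other

# Load two scanned fragments (e.g., exported from the scanner as PLY files).
face = trimesh.load("fragment_face.ply")
cranium = trimesh.load("fragment_cranium.ply")

# Estimate the rigid transform that best aligns the face onto the cranium.
# mesh_other runs an ICP-style registration and returns a 4x4 matrix plus
# the final cost (mean squared distance between the surfaces).
matrix, cost = mesh_other(face, cranium, samples=500)

# Move the face into the cranium's coordinate frame and export the pair
# as a single scene for inspection.
face.apply_transform(matrix)
trimesh.Scene([face, cranium]).export("dnh155_digital_fit.glb")
print(f"registration cost: {cost:.6f}")
```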

A Giant Medieval Puzzle – Library Matters

““Fragmentology” is a new approach to the visual gathering of such dispersed fragments in order to reassemble the pieces of a codex. A digital platform is now available for applying collective energy to fitting the pieces of the puzzle back together again, which has enormous potential for research. Fragmentarium is the name of a partnership of institutions gathered to develop the technologies needed to build “a common laboratory for fragments” and conduct research. It promises to yield digital versions built from the original fragments, drawn from various holdings. This process will enable provenance research and the study of the circulation of manuscripts, and generate connections among researchers and curators. Thus a leaf holding comparable visual cues may be further investigated as originating from the same or similar source. …”
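Platforms of this kind typically publish their images through the IIIF standard, which is what makes fragments held by different institutions comparable on one screen (an assumption here, though IIIF is the usual basis for cross-institutional manuscript interoperability). As a hedged sketch, reading a IIIF Presentation (v2) manifest and listing each canvas’s image URLs might look like this; the manifest URL is a hypothetical placeholder.

```python
# A minimal sketch of reading a IIIF Presentation (v2) manifest and
# listing the image URL behind each canvas. The manifest URL below is a
# hypothetical placeholder, not a real Fragmentarium identifier.
import requests

MANIFEST_URL = "https://example.org/iiif/fragment-123/manifest.json"

manifest = requests.get(MANIFEST_URL, timeout=30).json()

# IIIF v2 manifests nest canvases under sequences; each canvas carries
# one or more image annotations pointing at the actual image resource.
for sequence in manifest.get("sequences", []):
    for canvas in sequence.get("canvases", []):
        label = canvas.get("label", "untitled")
        for image in canvas.get("images", []):
            resource = image.get("resource", {})
            print(label, resource.get("@id"))
```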

“Just Make the Data Available”: Exploring Manuscripts with OPenn | Penn Libraries

“Once upon a time, examining pages from one of the Medieval manuscripts held by Penn Libraries’ Kislak Center for Special Collections, Rare Books, and Manuscripts would always require someone to make an appointment with a curator, travel to Philadelphia, and visit the Charles K. MacDonald Reading Room. While the experience of viewing a rare book or manuscript in person is still of vital importance to researchers, this was not a trip that just anyone could make, even before the COVID-19 pandemic restricted all our movements. Since the late 1990s, Penn Libraries has helped researchers surmount this obstacle through a wide variety of digitization efforts, including projects like Penn in Hand and Print at Penn. Today, one way to explore the Libraries’ digitized manuscripts is through OPenn, a website hosting high-resolution archival images of manuscripts along with descriptive information about each one. Launched in 2015, OPenn now holds just over 10,000 documents and more than 1 million individual images from over fifty institutions, including the African Episcopal Church of St. Thomas, Columbia University, the Rosenbach, and the British Library, all freely available to download, use, and share. …”
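Because OPenn serves its images and metadata as plain files over HTTP, “just make the data available” translates directly into scriptable access. A hedged sketch of fetching a single page image might look like this; the document path below is a hypothetical placeholder (real paths can be found in OPenn’s browsable directories).

```python
# A minimal sketch of downloading an archival image from OPenn over
# plain HTTP. The document path is a hypothetical placeholder; this is
# a generic illustration of the "plain files over HTTP" model.
import requests

BASE = "https://openn.library.upenn.edu"
IMAGE_PATH = "/Data/0001/some_manuscript/data/web/0001_0000.jpg"  # hypothetical

response = requests.get(BASE + IMAGE_PATH, timeout=60)
response.raise_for_status()

# Save the image locally for study or reuse.
with open("page_image.jpg", "wb") as handle:
    handle.write(response.content)
print(f"saved {len(response.content)} bytes")
```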

The Louvre Has Digitized 482,000 Works — Wander The Museum Online, For Free : NPR

“One of the world’s most massive museums has announced an encompassing digitization of its vast collection.

“The Louvre is dusting off its treasures, even the least-known,” said Jean-Luc Martinez, President-Director of the Musée du Louvre, in a statement on Friday. “For the first time, anyone can access the entire collection of works from a computer or smartphone for free, whether they are on display in the museum, on loan, even long-term, or in storage.” 

Some of this is hyperbole. The entire collection is so huge, no one even knows how big it is. The Louvre’s official release estimates about 482,000 works have been digitized in its collections database, representing about three quarters of the entire archive. (The museum’s recently revamped homepage is designed for more casual visitors, especially those on cellphones, with translations in Spanish, English and Chinese.) …”

Louvre site des collections

“The database for the Louvre’s collections consists of entries for more than 480,000 works of art that are part of the national collections and registered in the inventories of the museum’s eight curatorial departments (Near Eastern Antiquities; Egyptian Antiquities; Greek, Etruscan and Roman Antiquities; Islamic Art; Paintings; Medieval, Renaissance and Modern Sculpture; Prints and Drawings; Medieval, Renaissance and Modern Decorative Arts), those of the History of the Louvre department, or the inventories of the Musée National Eugène-Delacroix, administratively attached to the Louvre since 2004.

The Collections database also includes so-called ‘MNR’ works (Musées Nationaux Récupération, or National Museums Recovery), recovered after WWII, retrieved by the Office des Biens et Intérêts Privés and pending return to the legitimate owners. A list of all MNR works conserved at the Musée du Louvre is available in a dedicated album and may also be consulted in the French Ministry of Culture’s Rose Valland database. 

Lastly, the Louvre Collections database includes information on works on long-term loan from other French or foreign institutions such as the Bibliothèque Nationale de France, the Musée des Arts Décoratifs, the Petit Palais, the Fonds National d’Art Contemporain, the British Museum and the archaeological museum of Heraklion. …”

Louvre museum makes its entire collection available online

“As part of a major revamp of its online presence, the world’s most-visited museum has created a new database of 482,000 items at collections.louvre.fr with more than three-quarters already labelled with information and pictures.

It comes after a year of pandemic-related shutdowns that has seen an explosion in visits to its main website, louvre.fr, which has also been given a major makeover….

The new database includes not only items on public display in the museum but also those in storage, including at its new state-of-the-art facility at Lievin in northern France….”

Data preparation for artificial intelligence in medical imaging: A comprehensive guide to open-access platforms and tools – Physica Medica: European Journal of Medical Physics

“Highlights

Image pre-processing tools are critical to develop and assess AI solutions.
Open access tools and data are widely available for medical image preparation.
AI needs Big Data to develop and fine-tune a model properly.
Big Data needs AI to fully interpret the decision making process….”
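As a hedged illustration of the kind of preparation step the paper surveys (a generic example, not the authors’ specific pipeline), getting a CT slice ready for model training commonly involves applying the DICOM rescale parameters, windowing the intensities, and normalizing:

```python
# A hedged illustration of common medical-image preparation steps:
# apply the DICOM rescale slope/intercept, window the intensities, and
# normalize to [0, 1]. The file name is a hypothetical placeholder.
import numpy as np
import pydicom

ds = pydicom.dcmread("slice_0001.dcm")  # hypothetical file

# Convert stored pixel values to Hounsfield units (for CT).
hu = ds.pixel_array.astype(np.float32)
slope = float(getattr(ds, "RescaleSlope", 1.0))
intercept = float(getattr(ds, "RescaleIntercept", 0.0))
hu = hu * slope + intercept

# Apply a soft-tissue window (center 40 HU, width 400 HU), a common choice.
center, width = 40.0, 400.0
lo, hi = center - width / 2, center + width / 2
windowed = np.clip(hu, lo, hi)

# Normalize to [0, 1] so the array is ready to feed to a neural network.
normalized = (windowed - lo) / (hi - lo)
print(normalized.shape, normalized.min(), normalized.max())
```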


Images of the arXiv: Reconfiguring large scientific image datasets | Published in Journal of Cultural Analytics

Abstract: In an ongoing research project on the ascendancy of statistical visual forms, we have been concerned with the transformations wrought by such images and their organisation as datasets in ‘re-drawing’ knowledge about empirical phenomena. Historians and science studies researchers have long established the generative rather than simply illustrative role of images and figures within scientific practice. More recently, the deployment and generation of images by scientific research and its communication via publication has been impacted by the tools, techniques, and practices of working with large (image) datasets. Against this background, we built a dataset of 10 million-plus images drawn from all preprint articles deposited in the open access repository arXiv from 1991 (its inception) until the end of 2018. In this article, we suggest ways – including algorithms drawn from machine learning that facilitate visually ‘slicing’ through the image data and metadata – for exploring large datasets of statistical scientific images. By treating all forms of visual material found in scientific publications – whether diagrams, photographs, or instrument data – as bare images, we developed methods for tracking their movements across a range of scientific research. We suggest that such methods allow us different entry points into large scientific image datasets and that they initiate a new set of questions about how scientific representation might be operating at more-than-human scale.
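One common way to ‘slice’ visually through a dataset of this scale (a hedged sketch of the general technique, not the authors’ actual method) is to embed each image with a pretrained network and project the embeddings down to two dimensions for browsing; the image directory is a hypothetical placeholder.

```python
# A hedged sketch of slicing through a large image dataset: embed images
# with a pretrained CNN, then project the embeddings to 2-D. This
# illustrates the general technique, not the paper's exact method.
import glob

import numpy as np
import torch
from PIL import Image
from sklearn.decomposition import PCA
from torchvision import models, transforms

# Standard ImageNet preprocessing for the pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Use a ResNet-18 with its classification head removed as a feature extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

embeddings = []
paths = sorted(glob.glob("arxiv_images/*.png"))  # hypothetical directory
with torch.no_grad():
    for path in paths:
        image = Image.open(path).convert("RGB")
        features = backbone(preprocess(image).unsqueeze(0))
        embeddings.append(features.squeeze(0).numpy())

# Project the 512-d embeddings to 2-D coordinates for visual browsing.
coords = PCA(n_components=2).fit_transform(np.array(embeddings))
print(coords.shape)  # (num_images, 2)
```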