The post Sharing is Caring: Varied Diets in Dinosaurs Promoted Coexistence appeared first on EveryONE.
It’s been said that the eyes are the windows to the soul. They allow us to communicate feelings across a room, direct the attention of others, and express emotion better than words ever could. The importance of eye contact in non-human species is well known—we’ve all heard that you shouldn’t stare a bear or an angry dog in the eyes—but we don’t know a whole lot about how gaze is used between individuals of the same species. Japanese researchers took on this topic in a recent PLOS ONE article, focusing specifically on how eye contact and communication are affected by eye visibility and facial patterning around the eyes of canids.
Their research observed 25 canid species, comparing variations in facial pattern and coloring to observations about their social behavior and evolutionary history. They found that canines may use facial markers to either highlight or de-emphasize their eyes. Species with more distinguishable eyes tended to live and hunt in groups, where gaze-communication facilitates the teamwork that is necessary to bring down large prey and stay safe. Those with camouflaged eyes were more likely to live alone or in pairs, where communication with other members of their species may not be needed in the same way.
Using photos of each species, the authors analyzed the contrast between five areas of the canine face: pupil, iris, eyelid margin, coat around the eyes, and facial area including the eyes, as shown in the figure above. They measured contrast assuming red-green colorblindness of the observer (fun fact: canids cannot see the full spectrum of color). Species were then grouped according to the visibility of their eyes, described in the figure below:
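As a rough illustration of this kind of measurement (a sketch of the general idea, not the authors' actual pipeline), one could simulate red-green colorblindness by collapsing the red and green channels into one, then compute the Michelson contrast between the mean brightness of two facial regions. The weightings and pixel values below are hypothetical:

```python
# Toy sketch: luminance contrast between two facial regions under
# simulated red-green colorblindness (dichromacy). Not from the paper.

def dichromat_luminance(rgb):
    """Approximate perceived brightness for a red-green dichromat by
    averaging R and G into one channel (the rough assumption here),
    then weighting against blue."""
    r, g, b = rgb
    rg = (r + g) / 2           # red and green are indistinguishable
    return 0.8 * rg + 0.2 * b  # hypothetical weighting; blue contributes less

def michelson_contrast(region_a, region_b):
    """Michelson contrast between the mean luminances of two regions,
    each given as a list of (R, G, B) pixels in 0-255."""
    la = sum(dichromat_luminance(p) for p in region_a) / len(region_a)
    lb = sum(dichromat_luminance(p) for p in region_b) / len(region_b)
    if la + lb == 0:
        return 0.0
    return abs(la - lb) / (la + lb)

# Example: a dark iris region vs. a pale coat region around the eye.
iris = [(40, 30, 20), (45, 35, 25)]
coat = [(200, 190, 170), (210, 200, 180)]
print(round(michelson_contrast(iris, coat), 3))  # close to 1 = high contrast
```

A high value means the eye stands out sharply against the surrounding coat, as in the easily visible Group A faces discussed below.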
The authors found the strongest correlation between eye visibility and living and hunting behavior. More species in Group A, like gray wolves, live and hunt in packs, whereas more species in Groups B and C, like the fennec fox and bush dog, live and hunt alone or in pairs. Species in Group A also spend significantly more time in “gazing postures,” with their sight and body directed at another animal, an action that accentuates their focused attention to other members of the group. The genetic similarity between species was not as useful in explaining these differences, with A-type faces found in 8 of 10 wolf-like species, and in 3 of 10 red fox-like species. The authors suggest that A-type markings developed independently once these groups had evolutionarily split.
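To get a feel for comparisons like the 8-of-10 versus 3-of-10 split above, here is a sketch of a two-sided Fisher's exact test implemented from scratch. The counts come from the text, but this is purely an illustration of the kind of test one might run, not a statistic reported in the paper:

```python
# Illustrative only: does the distribution of A-type faces differ between
# wolf-like (8 of 10) and red fox-like (3 of 10) species?
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Exact p-value for the 2x2 table [[a, b], [c, d]] with fixed margins."""
    row1, row2, col1, n = a + b, c + d, a + c, a + b + c + d
    total = comb(n, col1)
    def prob(x):  # hypergeometric probability of x in the top-left cell
        return comb(row1, x) * comb(row2, col1 - x) / total
    p_obs = prob(a)
    lo = max(0, col1 - row2)
    hi = min(row1, col1)
    # Sum over all tables at least as probable-or-extreme as the observed one.
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1 + 1e-9))

p = fisher_exact_two_sided(8, 2, 3, 7)
print(round(p, 4))  # ~0.0698: suggestive, but samples of 10 are small
```

With only ten species per group the test is underpowered, which is one reason the authors lean on multiple lines of evidence rather than any single comparison.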
Lighter iris coloring is thought to be an adaptation to ultraviolet light in many species, similar to variations in human skin pigmentation. To determine whether this adaptation could explain the variation seen in canid iris color, the researchers compared the eye coloring of three wolf subspecies from Group A originating from arctic, temperate, and subtropical regions, to see if any differences in their lighter coloring could be attributed to geographical origin. They found that iris color did not vary significantly between the subspecies, suggesting that it may have developed to facilitate communication and not as an adaptation to specific geographical locations.
When the authors reviewed social behaviors, they found a number of social species with B- and C-type faces, the groups normally found alone or in pairs. These species are known to use acoustic or other visual signals, like a howl or the flash of a white tail, to communicate with their comrades. This allows them to skirt one possible disadvantage of gaze-communication: when prey can also identify and follow a gaze, and realize they’ve been targeted.
Gaze communication may be an important tool for other canids, including our own companions, domestic dogs. Previous studies have shown that domestic dogs are more likely to make direct eye contact with humans than wolves raised in the same setting. This could mean that after thousands of years of cohabitation, dogs see us in socially useful ways that wolves never will. Luckily for us, that means we get to see this.
Citation: Ueda S, Kumagai G, Otaki Y, Yamaguchi S, Kohshima S (2014) A Comparison of Facial Color Pattern and Gazing Behavior in Canid Species Suggests Gaze Communication in Gray Wolves (Canis lupus). PLoS ONE 9(6): e98217. doi:10.1371/journal.pone.0098217
Images 2 and 3: Figures 1 and 2 from the article
We’ve all heard the story: dim-witted Neanderthals couldn’t quite keep up with our intelligent modern human ancestors, leading to their eventual downfall and disappearance from the world we know now. One popular hypothesis even claims they needed more brain space for their eyes, leaving less for higher cognition. The authors of a recent PLOS ONE paper dig into the ideas behind this perception, taking a closer look at eleven common hypotheses for the demise of the Neanderthals and comparing each to the latest research in this field to convince us that Neanderthals weren’t the simpletons we’ve made them out to be.
The authors tackled ideas like the Neanderthal’s capacity for language and innovative ability, both often described as possible weaknesses leading to their decline. Analyzing the published research on each topic, they found that archaeologists often used their finds to “build scenarios” that agreed with the running theories of human superiority, and that some long-held truths have now been challenged by recent discoveries and ongoing research at the same excavation sites.
As one example, researchers who found shell beads and pieces of ochre and manganese in South Africa—used as pigments—claimed them as evidence of the use of structured language in anatomically modern humans. While we can only guess when linking items like these to the presence of language, new findings at Neanderthal sites indicate that they also decorated objects with paints and created personal ornaments using feathers and claws. Whatever the anatomically modern humans were doing in South Africa, Neanderthals were also doing in Europe around the same time, negating the claim that this ability may have provided the anatomically modern humans with better survival prospects once they arrived in Europe.
Another set of South African artifacts led the archaeological community to believe that anatomically modern humans were capable of rapidly improving on their own technology, keeping them ahead of their Neanderthal contemporaries. Two generations of tools, created during the Stillbay and Howiesons Poort periods, were originally believed to have evolved in phases shorter than 10,000 years—a drop in the bucket compared to the Neanderthals’ use of certain tools, unchanged, for 200,000 years. However, new findings suggest that the Stillbay and Howiesons Poort periods lasted much longer than previously thought, meaning that the anatomically modern humans may not have been the great visionaries we had assumed. Additionally, while Neanderthals were not thought capable of crafting the adhesives used by anatomically modern humans to assemble weapons and tools, it is now known that they did, purifying plant resin through an intricate distillation process.
We’re all living proof that anatomically modern humans survived in the end. Perhaps in an effort to flatter our predecessors, we have been holding on to dated hypotheses and ignoring recent evidence showing that Neanderthals were capable of a lot more (and perhaps the anatomically modern humans of a lot less) skill-wise than previously believed. Genetic studies continue to support the idea that anatomically modern humans and Neanderthals interbred and show that the genome of modern humans with Asian or European ancestry contains nearly 2% Neanderthal genes, a substantial quantity considering 40,000 years and 2000 generations have passed since they ceased to exist. These genes may have helped modern humans adjust to life outside of Africa, possibly aiding in the development of our immune system and variation in skin color. Researchers believe that the concentration of Neanderthal genes in modern humans was once much higher, but genetic patterns in modern humans show that hybrid Neanderthal-Human males may have been sterile, leaving no opportunity for their genes to be passed to the next generation.
So, while they may not walk among us today, we have Neanderthals to thank for some major adaptations that allowed us to thrive and spread across the planet. Too bad they’re not here to see the wonderful things we were able to accomplish with their help.
Citation: Villa P, Roebroeks W (2014) Neandertal Demise: An Archaeological Analysis of the Modern Human Superiority Complex. PLoS ONE 9(4): e96424. doi:10.1371/journal.pone.0096424
Image 1: Neandertaler im Museum from Wikimedia Commons
The post Modern Humans: Were We Really Better than Neanderthals, or Did We Just Get Lucky? appeared first on EveryONE.
Yeast—including more than 1500 species that make up 1% of all known fungi—plays an important role in the existence of many of our favorite foods. With a job in everything from cheese making to alcohol production to cocoa preparation, humans could not produce such diverse food products without this microscopic, unicellular sous-chef. While we have long been aware of our dependence on yeast, new research in PLOS ONE suggests that some strains of yeast would not be the same without us, either.
Studies have previously shown how our historical use of yeast has affected the evolution of one of the most commonly used species, Saccharomyces cerevisiae, creating different strains that are used for different purposes (bread, wine, and so on). To further investigate our influence on yeast, researchers from the University of Bordeaux, France, took a look at a different yeast species of recent commercial interest, Torulaspora delbrueckii. In mapping the T. delbrueckii family tree, the authors not only show that human intervention played a major role in shaping this species, but also provide valuable information for further improving this yeast as a tool for food production.
The authors collected 110 strains of T. delbrueckii from global sources of wine grapes, baked goods, dairy products, and fermented beverages. Possible microsatellites, short stretches of DNA in which a simple sequence repeats many times, were found in one strain’s DNA and used to create tools that would identify similar sequences in the other strains. They used the results to pinpoint eight different microsatellite markers that were shared by some strains but not others to measure genetic variation in the T. delbrueckii family. The composition of each strain was measured using microchip electrophoresis, a process in which DNA fragments migrate through a gel containing an electric field, which helps researchers separate the fragments according to size. As each strain’s microsatellite markers were identified, the information was added to a dendrogram (a funny-looking tree graph, shown below) to illustrate the level of similarity between strains. The researchers also estimated the time it took different strains to evolve by comparing the average rate of mutation and reproduction time for T. delbrueckii to the level of genetic difference between each strain.
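The clustering step behind a dendrogram can be sketched in miniature. Strains are profiled by their allele sizes at each microsatellite locus, pairwise distance is the fraction of loci at which two strains differ, and the closest groups are merged one pair at a time. The strain names and allele sizes below are hypothetical, and real analyses use dedicated genetic-distance software rather than code like this:

```python
# Minimal single-linkage clustering over hypothetical microsatellite profiles.

def distance(profile_a, profile_b):
    """Fraction of microsatellite loci at which two strains differ."""
    diffs = sum(1 for x, y in zip(profile_a, profile_b) if x != y)
    return diffs / len(profile_a)

def single_linkage(profiles):
    """Greedy agglomerative clustering; returns the merge order,
    which is the structure a dendrogram draws."""
    clusters = {name: [name] for name in profiles}
    merges = []
    while len(clusters) > 1:
        best = None
        for a in clusters:
            for b in clusters:
                if a < b:
                    # Distance between clusters = closest pair of members.
                    d = min(distance(profiles[x], profiles[y])
                            for x in clusters[a] for y in clusters[b])
                    if best is None or d < best[0]:
                        best = (d, a, b)
        d, a, b = best
        merges.append((a, b, round(d, 2)))
        clusters[a + "+" + b] = clusters.pop(a) + clusters.pop(b)
    return merges

# Hypothetical allele sizes at 4 loci for 4 strains (illustrative only).
profiles = {
    "wine_EU":   (180, 200, 151, 96),
    "wine_CA":   (180, 200, 155, 96),
    "bread":     (174, 188, 151, 90),
    "nature_AM": (168, 188, 143, 88),
}
for a, b, d in single_linkage(profiles):
    print(a, "+", b, "at distance", d)
```

In this toy example the two wine strains merge first at the smallest distance, mirroring how strains of shared origin cluster together in the paper's dendrogram.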
The dendrogram shows four clear clusters of yeast strains heavily linked to each sample’s origin. Two groups contain most of the strains isolated from Nature, but can be distinguished from each other by those collected on the American continents (nature Americas group) and those collected in Europe, Asia, and Africa (nature Old World group). The other two clusters include strains collected from food and drink samples, but cannot be discriminated by geographic location. The grape/wine group contains 27 strains isolated from grape habitats in the major wine-producing regions of the world: Europe, California, Australia, New Zealand, and South America. The bioprocess group contains geographically diverse strains collected from other areas of food processing—such as bread products, spoiled food, and fermented beverages—and includes a subgroup of strains used specifically for dairy products. Further analysis of the variation between strains confirmed that, while the clusters don’t perfectly segregate the strains according to human usage, and geographic origin of the sample played some role in diversity, a large part of the population’s structure is explained by the material source of the strain.
Divergence times calculated for the different groups further emphasize the connection between human adoption of T. delbrueckii yeast and the continued evolution of this species. The grape/wine cluster of strains diverged from the Old World group approximately 1900 years ago, aligning with the expansion of the Roman Empire, and the spread of Vitis vinifera, or the common grape, alongside. The bioprocesses group diverged much earlier, an estimated four millennia ago (around the Neolithic era), showing that yeast was used for food production long before it was domesticated for wine making.
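The molecular-clock logic behind these divergence estimates boils down to simple arithmetic: accumulated genetic difference divided by the per-generation mutation rate gives generations since the split, and generation time converts that to years. The rates below are placeholders chosen only to reproduce the order of magnitude of the 1,900-year figure, not values from the paper:

```python
# Back-of-the-envelope divergence-time sketch with placeholder numbers.

def divergence_years(pairwise_divergence, mutation_rate_per_generation,
                     generations_per_year):
    """Years since two lineages split, assuming a constant molecular clock."""
    generations = pairwise_divergence / mutation_rate_per_generation
    return generations / generations_per_year

# Hypothetical inputs: 1.9e-4 substitutions per site of divergence, a
# mutation rate of 1e-7 per site per generation, 1 generation per year.
years = divergence_years(1.9e-4, 1e-7, 1.0)
print(round(years))  # ~1900 years, the order of the wine-group estimate
```

Real estimates are far more involved (mutation rates are uncertain and reproduction is irregular), which is why such dates carry wide error margins.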
While T. delbrueckii has often been overlooked by winemakers in favor of the more common S. cerevisiae, it has recently been gaining traction for its ability to reduce levels of volatile compounds that negatively affect wine’s flavor and scent. It has also been shown to have a high freezing tolerance when used as a leavening agent, making it of great interest to companies attempting to successfully freeze and transport dough. Though attempts to develop improved strains of this yeast for commercial use have already begun, we previously lacked an understanding of its life-cycle and reproductive habits. In creating this T. delbrueckii family tree, the authors also gained a deeper understanding of the species’ existence, which may help with further development for technological use.
Yeast has weaseled its way into our hearts via our stomachs, and it seems that, in return, we have fully worked our way into its identity. With a bit of teamwork, and perhaps a splash of genetic tweaking, we can continue this fruitful relationship and pursue new opportunities in Epicureanism. I think we would all drink to that!
Reference: Albertin W, Chasseriaud L, Comte G, Panfili A, Delcamp A, et al. (2014) Winemaking and Bioprocesses Strongly Shaped the Genetic Diversity of the Ubiquitous Yeast Torulaspora delbrueckii. PLoS ONE 9(4): e94246. doi:10.1371/journal.pone.0094246
Image 1: Figure 1 from article
Image 2: Figure 3 from article
The post For Yeast’s Sake: The Benefits of Eating Cheese, Chocolate, and Wine appeared first on EveryONE.
We’re all familiar with the shell game, though many may not recognize it by that name. Popular with street swindlers and sports fans at halftime, it has been annoying those unable to locate which cup the ball is under—most people—as long as anyone can remember. But, what if you were a small ruminant playing a modified version of the game, in which cups were not moved? Seems easier, right? Turns out it may depend on what kind of ruminant you are.
The authors of a recent PLOS ONE article updated this age-old game to find out whether goats and sheep, given information upfront about which cup the food was (or was not) under, could find the food again once the cups were replaced. The ability to connect an experience with an imagined result (“If I see that the cup is empty, the other must have the food!”) is called inferential reasoning, and has been tested in dogs, primates, and birds. Different levels of inferential reasoning, especially between two closely related species, may be linked to differences in the way they search for and find food. A deeper understanding of the different feeding behaviors found in ruminants could shed light on the evolutionary pressures that shape these behaviors and help us understand how domesticated breeds have modified their behavior over time to better fit a farming environment.
In this study, researchers ran two rounds of testing with slightly different protocols to evaluate how the ruminants’ behavior would change in response to varied amounts of information. They created an experimental set with the animal separated by a series of bars from the researcher and a table (see image above) that held two cups, one with a piece of food hidden underneath. In the first test, researchers lifted one of the following: both cups (full information), only the baited cup (direct information), only the empty cup (indirect information), or neither cup (no information).
The animal then chose a cup and was rewarded with food if found. The video below shows a sheep correctly identifying the baited cup after receiving direct information on the food’s location.
To minimize the possibility that animals would simply select the cup most recently moved by the researcher, the authors further modified the game for the second round of testing. A set of internal cups was added under each external cup, some transparent and some opaque, allowing the researchers to lift both external cups simultaneously while still controlling how much information each animal received.
Animals participating in the test were again presented with two cups. Both external cups were removed simultaneously to reveal the inner set, which were one of the following: both transparent (full information), only the baited cup transparent (direct information), only the empty cup transparent (indirect information), or both opaque (no information).
In the same way as in the first experiment, animals were given a chance to select a cup and were rewarded with food if found.
When reviewing the results of all the sessions, the authors found that goats were more successful than sheep at identifying the baited cup when given full or direct information, although sheep also chose the baited cup more often under these circumstances. The second experiment showed that goats were also more successful at finding food when provided with indirect information. In both experiments, sheep did not guess better than chance when shown the empty contents of the unbaited cup or no cups at all, and goats did not guess better than chance when neither of the cups was lifted.
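The "better than chance" comparisons above can be illustrated with an exact binomial test: with two cups, a guessing animal is right half the time, so the question is how surprising a given number of correct choices would be under pure luck. The trial counts below are hypothetical, and the authors' actual statistics may differ:

```python
# Sketch: exact binomial test against 50% chance in a two-cup task.
from math import comb

def binomial_p_at_least(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance of doing at least
    this well by guessing alone."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical session: an animal picks the baited cup 16 times in 20 trials.
print(round(binomial_p_at_least(16, 20), 4))  # well below 0.05
```

A small p-value here would mean the animal is using the information it was shown rather than guessing, which is exactly what "better than chance" is shorthand for.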
Though several factors varied between species populations used in this experiment—like the time of day experiments were completed, number of animals tested, or history of testing experience in goats—the authors saw no sign of improved performance due to learning in goats, and no differences in motivation as tests continued.
The researchers speculate that the different foraging behaviors used by goats and sheep allowed goats to get ahead in these experiments. Goats are dietary browsers, meaning they are more selective in what they eat and prefer low-fiber foods, like plant stems and leaves. Sheep, as dietary grazers, feed primarily on high fiber foods like grass, and are not at all particular about what they consume. If goats have evolved to be picky, selective eaters, then it makes sense that they might have a better memory for where they saw food in the past, and the insight to avoid an empty cup when they saw one. Future studies on this topic could provide insight on decision making and risk sensitivity in these animals, and give us a glimpse into the minds of a species that can actually beat the odds in the shell game.
Citation: Nawroth C, von Borell E, Langbein J (2014) Exclusion Performance in Dwarf Goats (Capra aegagrus hircus) and Sheep (Ovis orientalis aries). PLoS ONE 9(4): e93534. doi:10.1371/journal.pone.0093534
The post Beating the Odds: Are Goats Better Gamers than Sheep? appeared first on EveryONE.
Crazed squirrels: we’ve all seen them. Some dashing toward you only to stop short long enough to glare with beady eyes before fleeing, others dive-bombing the dirt, coming up with their heads waving back and forth. They’re the butt of many a joke on college campuses, providing endless amusement with their antics. Some UC Berkeley students even think that the resident campus squirrels may have gobbled up substances left over from the wilder moments of Berkeley’s past, leaving them permanently crazed. However, according to a recently published PLOS ONE article from UC Berkeley, these squirrels’ seemingly odd behavior may actually have a purpose. We’ve long known that scatter-hoarders will store food they find to prepare for periods when it’s less abundant, but there is little information on the hoarding process. Turns out these squirrels might actually have a refined evaluation method based on economic variables like food availability and season. To eat now, or cache for later?
Researchers interacted with 23 fox squirrels, a species well-habituated to humans, in two sessions during the summer and fall of 2010 on the Berkeley campus, evaluating food collection behavior during both lean (summer) and bountiful (fall) seasons. The authors engaged the squirrels with calls and gestures to attract their attention, and the first squirrel to approach was the focus of that round of testing.
Each squirrel was given a series of 15 nuts, either peanuts or hazelnuts, in one of two sequences. Some were offered five peanuts, followed by five hazelnuts, then five more peanuts (PHP). Others were given five hazelnuts, five peanuts, then five hazelnuts (HPH). The purpose of this variation was to evaluate how squirrels would respond to offers of nuts with different nutritional and “economic” values at different times. Hazelnuts are, on average, larger than peanuts, and their hard shell prevents spoiling when stored long term, but peanuts tend to have more calories and protein per nut. Researchers videotaped and coded each encounter to calculate variables, like the number of head flicks per nut, time spent pawing a nut, and time spent traveling or caching nuts. See the video below for a visual example of these behaviors.
The results showed that season and nut type significantly affected the squirrels’ responses, and that a squirrel’s evaluation of a nut could forecast its course of action. Predictably, the fall trial showed squirrels quickly caching most of their nuts, likely taking advantage of the season’s abundance. Squirrels ate more nuts in the summer, though they still cached the majority of hazelnuts (76%, vs. 99% cached in the fall), likely due to their longer “shelf life.”
The squirrels who head-flicked at least one time in response to a nut cached it nearly 70% of the time, while those who spent more time pawing the nut tended to eat it (perhaps searching for the perfect point of entry?). The time spent caching and likelihood of head flicking were clearly linked to the type of nut received and to the trial number, with time spent evaluating a nut decreasing as the trials continued for a squirrel. The authors suggest that the changes in food assessment strategies in response to resource availability provide an example of flexible economic decision making in a nonhuman species.
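A figure like the head-flick result above might be computed from coded video records along these lines. The trial records here are invented for illustration, not taken from the study's data:

```python
# Illustrative tabulation of coded encounters: did nuts that drew at least
# one head flick get cached more often than nuts that drew none?

# Each record: (head_flicks, action), where action is "cache" or "eat".
trials = [
    (2, "cache"), (1, "cache"), (0, "eat"), (3, "cache"), (0, "eat"),
    (1, "cache"), (0, "cache"), (2, "cache"), (0, "eat"), (1, "eat"),
]

def cache_rate(records, flicked):
    """Proportion of nuts cached, among flicked or unflicked nuts."""
    subset = [action for flicks, action in records if (flicks > 0) == flicked]
    return sum(1 for action in subset if action == "cache") / len(subset)

print(round(cache_rate(trials, flicked=True), 2))   # flicked nuts
print(round(cache_rate(trials, flicked=False), 2))  # unflicked nuts
```

In these made-up records, flicked nuts are cached far more often than unflicked ones, the same qualitative pattern the authors report.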
So, now that squirrels are possibly making economically prudent decisions when evaluating nuts, I guess we have to give them a break when we see them running around like crazy on campus. Doesn’t mean we’ll stop laughing.
Citation: Delgado MM, Nicholas M, Petrie DJ, Jacobs LF (2014) Fox Squirrels Match Food Assessment and Cache Effort to Value and Scarcity. PLoS ONE 9(3):e92892. doi:10.1371/journal.pone.0092892
From vertical gardens to succulent gardens to community veggie gardens like the San Francisco garden pictured above, city dwellers all around us have started embracing their (hopefully) green thumbs. For urbanites in particular, community gardening provides us with much needed “outside time” with likeminded individuals, with the added gift of hyper-local produce available throughout the growing season. These benefits have led to increases in residential and community garden participation in major cities across the US.
While many people are jumping on the garden-fresh bandwagon to reap the obvious, verdant benefits, it is important to consider the potential side effects that come alongside urban farming. Urban soil is not only closer to possible sources of pollution, like traffic and industrial areas, but could also contain residual chemicals from past land use. Residential land previously occupied by industrial buildings has been found to contain dangerous levels of toxins like lead, which can poison residents and contaminate food grown on-site. But it doesn’t take a former factory to contaminate your backyard. Soil can absorb and hold toxins left over from something as small as a previous homeowner’s dumping of cleaning water down the drain or off the back porch.
Researchers from Baltimore published an article in PLOS ONE earlier this month assessing Baltimore community gardeners’ knowledge of soil contamination risks and exploring what steps can be taken to mitigate the dangers of urban pollution in urban gardens.
The authors, hailing from Johns Hopkins, University of Maryland, and the Community Greening Resource Network, conducted interviews with Baltimore’s community garden members and found that, unfortunately, the gardeners generally seem to have low levels of concern about potential contaminants in their soil. Those working in established community gardens were least concerned, as they often assumed that any issues with soil contamination had been addressed in the early days of the garden’s use.
Participants listed lead as the most concerning pollutant—likely due to city interventions concerning lead poisoning—with 66% of surveyed gardeners mentioning it as something that would concern them if found in their soil. The study results also indicate that gardeners are more worried about the presence of pesticides and other added chemicals than most other residual chemicals in the soil. Soil quality and fertility even took greater precedence for some gardeners than the presence of contaminants.
By interviewing Baltimore officials knowledgeable about community gardening practices and soil contamination issues, the researchers determined key steps in assuring the safety of gardening sites. Above all, officials suggested the creation of a central source of information related to soil contamination concerns. Similar projects relating to regulation and urban agriculture are already underway in places like Los Angeles, though these resources aim to help residents navigate the maze of confusing legislation related to urban agriculture, and focus less on providing information on how to evaluate the safety of specific plots of land.
The authors suggest other important ways to determine the safety of a garden site, including learning about the site’s past uses and testing the soil for lingering chemicals, both of which might not seem necessary to those untrained in urban planning or chemical analysis. They also recommend that officials in urban areas provide services that will encourage use of these tools and help gardeners find and interpret the results of soil testing or historical research.
In the meantime, the authors suggest limiting exposure to potentially contaminated land. For instance, we should minimize contact with dirt from garden sites by washing our hands and taking off shoes before entering any indoor spaces. Many interviewed gardeners have tried to mitigate this problem by using raised beds, which they believe eliminates concern about contaminants in homegrown vegetables. However, the researchers caution that raised beds should not be seen as a fix-all. They do not prevent contamination from soil around the beds, which can still be ingested or tracked into the home, and surrounding pollutants have been known to blow into beds or seep into the soil from treated wood used to build the structures.
Urban community gardening is a trend that is here to stay, and we have it to thank for fresher local produce, greener surroundings, a greater sense of community, and for the physical, and sometimes therapeutic, activity it provides. The potential dangers associated with gardening in urban areas probably do not outweigh the benefits, as long as gardeners remain diligent and become better informed. Though their study focused on a limited group, the authors’ findings draw attention to the fact that many gardeners currently are not. So, next time you’re digging into a grassy patch in your backyard with visions of veggies or working in your local community garden, take a minute to think about what you know about your area, discuss past developments with longtime residents, and above all, clean up afterward.
More information on soil testing and good gardening practices can be found on this site from the EPA.
Citation: Kim BF, Poulsen MN, Margulies JD, Dix KL, Palmer AM, et al. (2014) Urban Community Gardeners’ Knowledge and Perceptions of Soil Contaminant Risks. PLoS ONE 9(2): e87913. doi:10.1371/journal.pone.0087913
The post It’s Not Easy Being Green: Assessing the Challenges of Urban Community Gardening appeared first on EveryONE.
Whether you are trapped inside because of it, or mourning the lack of it, water is on everyone’s mind right now. Too much snow in the Midwest and Northeast has been ruining travel plans, while too little snow is limiting Californians’ annual ski trips. No one wants to drive three hours only to find a rocky hillside where their favorite slope used to be.
It’s hard to deny that abnormal things are happening with the weather right now. Recently, Governor Jerry Brown officially declared a state of emergency in California due to the drought and suggested that citizens cut water usage by 20%. With no relief in sight, it is important not only to regulate our current water use, but also to reevaluate our local programs and policies that will affect water usage in the future. So, how do we go about making these decisions without being able to predict what’s next? A recently published PLOS ONE article may offer an answer in the form of a model that allows us to estimate how potential future climate scenarios could affect our water supply.
Researchers from UC Berkeley and the Stockholm Environmental Institute’s (SEI) office in Davis, CA built a hydrology simulation model of the Tuolumne and Merced River basins, both located in California’s Central Valley (pictured above). Their focus was on modeling the sensitivity of California’s water supply to possible increases in temperature. When building the model, the authors chose to incorporate historical water data, current water use regulations, and geographical information to estimate seasonal water availability across the Central Valley and the San Francisco Bay Area. They then ran various water availability scenarios through the model to predict how the region could be affected by rising temperatures.
Using estimated temperature increases of 2°C, 4°C, and 6°C, the model predicted earlier snowmelts, leading to a peak water flow earlier in the year than in previous years. The model also forecasted a decreased river flow due to increased evapotranspiration, the loss of water to the atmosphere that is driven by temperature, humidity, and wind speed. The water supply was also estimated to drop incrementally with each temperature increase, though the drop is somewhat cushioned by the availability of water stored in California’s reservoirs.
The authors used an existing model as an initial structure, and built upon it to include information on local land surface characteristics, evapotranspiration, precipitation, and runoff potential. Surrounding water districts were modeled as nodes and assigned a priority according to California’s established infrastructure and legislation. Using this information, the authors state that the tool is equipped to estimate monthly water allocation to agricultural and urban areas and compare it to historical averages for the same areas.
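The node-and-priority idea can be sketched in a heavily simplified form. This is only an assumption about the general approach of priority-ordered allocation, not the authors' model, and the district names, demands, and priorities are hypothetical:

```python
# Minimal sketch of priority-based water allocation among district nodes.

def allocate(supply, districts):
    """districts: list of (name, demand, priority); a lower priority number
    is served first, mirroring senior water rights. Returns the amount of
    water each district receives from the available supply."""
    allocations = {}
    remaining = supply
    for name, demand, _ in sorted(districts, key=lambda d: d[2]):
        given = min(demand, remaining)  # serve demand until supply runs out
        allocations[name] = given
        remaining -= given
    return allocations

districts = [
    ("urban_sf", 30.0, 1),    # hypothetical senior rights holder
    ("ag_central", 60.0, 2),
    ("ag_junior", 25.0, 3),
]
print(allocate(100.0, districts))  # junior node absorbs the shortfall
```

Under scarcity, the lowest-priority node absorbs the shortfall first, which is why modeling who holds senior rights matters as much as modeling how much water there is.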
Though the model is broad, the authors present it as a case study that provides estimates of longer-term water availability for the Central Valley and Bay Area, and they encourage other regions to modify its design to meet the needs of their unique locales. Those of us looking for more specific predictions can also use the tool to create models with additional information and refined approximations, allowing flexibility for future changes in land use and policy. For now, we may have a good long-term view of our changing water supply and a vital tool as we race to keep up with our ever-changing world.
Citation: Kiparsky M, Joyce B, Purkey D, Young C (2014) Potential Impacts of Climate Warming on Water Supply Reliability in the Tuolumne and Merced River Basins, California. PLoS ONE 9(1): e84946. doi:10.1371/journal.pone.0084946
Image 2 Credit: Figure 1 pone.0084946
Image 3 Credit: Figure 2 pone.0084946
The post All Dried Up? Modeling the Effects of Climate Change in California’s River Basins appeared first on EveryONE.
While few question the importance of maternal care in humans, scientists do question the influence of a mother’s behavior in other species. Researchers from the Max Planck Institute for Ornithology have now published an article in PLOS ONE showing exactly how important a mother’s guidance can be to our friend the western lowland gorilla. After monitoring the spread of two specific behaviors in captive groups of gorillas, the authors suggest that gorilla mothers play a vital role in social learning and the transmission of behaviors between generations.
The authors videotaped gorilla behavior for 4-6 hours per day over the course of eight weeks in 2000 and 2010 at Howletts Wild Animal Parks. Throughout their sessions, they watched for two specific behaviors shown by different individuals: the “Puff-Blowing” technique, used during mealtimes to separate oats from chaff, and the “Throw-Kiss-Display,” one male gorilla’s coy way of drawing visitors’ attention to him. Check out the live-action versions in the videos below.
During the initial observational period in 2000, the “Puff-Blowing” technique was used by three adult females, while the “Throw-Kiss-Display” was implemented by a single silverback male, Kouillou, and no other members of the group.
By the time the researchers returned in 2010, the “Puff-Blowing” technique was practiced by 15 individuals, while the “Throw-Kiss-Display” had been dropped entirely, even by the original practitioner.
When the researchers analyzed the data, they found that the spread of the observed “Puff-Blowing” technique to new gorillas could be tracked through mother-child relationships. Of the original three mothers’ 13 offspring, all but three used the technique. Furthermore, this behavior was never seen in the offspring of mothers who did not perform the technique.
Based on their observations, the authors suggest that the actions of the gorilla mother play a major role in the transmission of behaviors. In other words, baby gorilla see, baby gorilla do. While the authors mention that “Puff-Blowing” may be more likely to be passed down because it’s useful at mealtime—unlike the “Throw-Kiss-Display”—they argue that the path of transmission (mother to offspring) is significant. The authors also note that genetic factors may affect the occurrence of these behaviors: not all offspring of the “Puff-Blowing” mothers adopted the action, suggesting that other forces may be at play.
Lesson learned: Even gorillas need their mommies.
For more evidence of the importance of mothers in the animal kingdom, check out this paper on migration patterns in humpback whales.
Citation: Luef EM, Pika S (2013) Gorilla Mothers Also Matter! New Insights on Social Transmission in Gorillas (Gorilla gorilla gorilla) in Captivity. PLoS ONE 8(11): e79600. doi:10.1371/journal.pone.0079600
Image Credit: USINFO Photo Gallery
Videos: S1 and S2 from the paper
The toothpick—an often unnoticed tool for post-meal rituals and appetizer stability—has played a greater role in our ancestors’ health and comfort than many would imagine. In a world before dentists, Neanderthals and modern humans took oral hygiene into their own hands using the only tools they had readily available: little bits of nature they found surrounding them.
Researchers from Spain have presented evidence in PLOS ONE that Neanderthals used small sticks or blades of grass not only to remove fragments of food from between their teeth, but also to lessen the pain caused by periodontal disease, a form of gum disease. While multiple human and Neanderthal remains have been found showing evidence of toothpick use, the authors propose that the combined evidence of toothpick use and gum disease suggests that Neanderthals were perhaps practicing an early form of dental care.
The specimen in the image above, an adult’s upper jaw with three teeth left intact, was found at the Cova Foradà cave site in Valencia, Spain, amidst animal remains and tools dated to the Mousterian era (300,000-30,000 years ago). The adult teeth, believed to belong to an individual between 35 and 45 years old, show heavy wear on the top surface and exposed roots, the result of a lifetime’s consumption of fibrous and abrasive foods like meats and grains. There are no signs of cavities in the remaining teeth, though decayed bone and the porous surface texture of the left side of the jaw indicate the presence of gum disease.
Two distinct grooves are present on the sides of the existing premolar and molar above the crown of the tooth. These grooves were caused by consistent dragging of a tool across the side of the tooth, and their position above the gum line points to heavy dental wear and disease: the gums had receded, leaving the base of the tooth unprotected. With the roots exposed, it would be easy for leftover debris from meals to get stuck and put pressure on already inflamed gums. A foreign object pushed between the teeth would remove any particles lodged in this sensitive area. Without the extra burden of invasive food detritus, force on the gums would be reduced and irritation and pain would decrease.
Over the last few years, studies have shown that Neanderthals were capable of complex behaviors and emotions, may have used a sophisticated language, were more often right-handed than left, and were generally not the inferior cousins some thought they were. As this study shows, they practiced dental care as well. So, the next time you’re heading out of your favorite restaurant, take a good look at that bowl of toothpicks at the front: the contents could very well be evidence of one of our oldest habits.
Citation: Lozano M, Subirà ME, Aparicio J, Lorenzo C, Gómez-Merino G (2013) Toothpicking and Periodontal Disease in a Neanderthal Specimen from Cova Foradà Site (Valencia, Spain). PLOS ONE 8(10): e76852. doi:10.1371/journal.pone.0076852
Image Credit: Image from Figure 2 of the manuscript