Music, Language, and the Brain: Are You Experienced?

Have you ever thought about everything that goes into playing music or speaking two languages? Musicians, for example, need to listen to themselves and others as they play, use this sensory information to call up learned actions, decide what is and isn’t important for the moment at hand, continuously integrate these decisions into their playing, and sync up with the players around them. Likewise, someone who is bilingual must decide, based on context, which language to use, and, since both languages will be fairly automatic, suppress one while recalling and speaking the other, all while continuously modifying their behavior based on their interactions with another listener/speaker. All of this must happen quickly enough for the conversation or song to flow and sound natural and coherent. It sounds exhausting, yet it all happens in milliseconds!

Playing music or speaking two languages are challenging experiences and complex tasks for our brains. Past research has shown that learning to play music or speak a second language can improve brain function, but it is not known exactly how this happens. Psychology researchers in a recent PLOS ONE article examined how being either a musician or a bilingual changed the way the brain functions. Although we sometimes think of music as a universal language, their results indicate that the two experiences enhance brain function in different ways.

heat map

One way to test changes in brain function is by using Event Related Potentials (ERPs). ERPs are electrical signals (brain waves) our brains give off immediately after receiving a stimulus from the outside world. They occur in fairly predictable patterns with slight variations depending on the individual brain. These variations, visualized in the figure above with the darkest red and blue areas showing the most intense electrical signals, can clue researchers into how brain function differs between individuals and groups, in this case musicians and bilinguals.

The ERP experiment performed here consisted of a go/nogo task that is frequently used to study brain activity when it is actively suppressing a specific behavior, also called inhibition. In this study, the authors asked research participants to sit in front of a computer while simple shapes appeared on screen, and they were to press a key when the shape was white—the most common-colored shape in the task—but not when purple, the least frequent color in the task. In other words, they responded to some stimuli (go) and inhibited their response to others (nogo). This is a similar task to playing music or speaking a second language because the brain has to identify relevant external sensory information, call on a set of learned rules about that information, and make a choice about what action to take.


The authors combined and compared correct responses to each stimulus type in control (non-musician, non-bilingual), musician, and bilingual groups. The figure above compares the brainwaves of the different groups over time using stimulus-related brainwave components called N2, P2, and LP. As can be seen above, these peaks and valleys differed significantly between the groups in the nogo instances. The N2 wave is associated with the brain’s initial recognition of the meaning or significance of a stimulus and was strongest in the bilingual group. The P2, on the other hand, is associated with the early stages of putting a stimulus into a meaningful context as it relates to an associated behavior, and was strongest in the musician group. Finally, the authors note the LP wave, which showed a prolonged monitoring response in the bilingual group; they believe this may mean bilinguals take more time to make sure their initial reaction is correct.

In other words, given a task that involved identifying a specific target and then responding or not responding based on learned rules, these results suggest that musicians’ brains may be better at quickly assigning context and an appropriate response to information, because they have extensive practice turning visual and auditory stimuli into motor responses. Bilinguals, on the other hand, show a strong activation response to stimuli along with prolonged regulation of competing behaviors, likely because of their experience suppressing the less relevant language in any given situation. So although both musicianship and bilingualism improve brain function relative to controls, they improve different aspects of it. As games and activities for “brain training” grow in popularity, the researchers hope this work will help in testing their effectiveness.

Citation: Moreno S, Wodniecka Z, Tays W, Alain C, Bialystok E (2014) Inhibitory Control in Bilinguals and Musicians: Event Related Potential (ERP) Evidence for Experience-Specific Effects. PLoS ONE 9(4): e94169. doi:10.1371/journal.pone.0094169 

Images are Figures 1 and 2 from the article.

The post Music, Language, and the Brain: Are You Experienced? appeared first on EveryONE.

Signs of Change: Regional and Generational Variants in British Sign Language

British Sign Language chart


As societies change, so too do their languages. In the English-speaking world, we often make note of changes in language by recognizing the rise of new words, like “selfie,” and the repurposing of familiar words, such as “because.” It may not be a surprise, then, to learn that this “evolution” isn’t limited to the spoken word: sign languages can also change over time. In a recent PLOS ONE study, scientists examined regional variations within British Sign Language (BSL), and found evidence that the language is evolving and moving away from regional variation.

To assist in this undertaking, the authors used data collected and recorded for the British Sign Language Corpus Project. About 250 participants took part in the project, recruited from eight regions in the UK. In addition to hailing from different parts of the country, participants came from various social, familial, and educational backgrounds.

When the first deaf schools were established in the UK, beginning in 1760, there was little standardization of signing conventions. Consequently, pupils at different schools were sometimes taught different signs to convey the same concepts or words. The authors posit that this lack of standardization may be the basis for today’s regionalism in BSL.

The participants were given visual stimuli, such as colors or numbers, and asked to provide the corresponding sign, the one they would normally use in conversation. The researchers also recorded participants engaging in unscripted conversations, a more formal interview, and the delivery of a personal narrative, all of which were incorporated into the authors’ study and analyzed.

Example of the stimuli shown to participants.


In their analysis, researchers focused on four concepts: UK place names, numbers, colors, and countries. The participants’ responses to the visual stimuli were compared with their recorded conversations to control for confounding variables, such as unforeseen social pressure to sign in a particular way. The responses were also coded as either “traditional” or “non-traditional” according to regional signing conventions.

Results indicated that age may play a role in whether a participant uses traditional or non-traditional signs. This was particularly true for country signs: about half the responses given by younger participants were non-traditional. In addition, some participants—young and old—explained that they changed the country sign they used as they grew older. The researchers posit that this may be due to changing definitions of political correctness, with older, more traditional signs now perceived as politically incorrect.

The authors found that age may also play an important role in participants’ use of color and number signs. As with country signs, younger participants were more likely to use non-traditional signs, and older participants more likely to use traditional ones. The researchers noted that younger participants using signs non-traditional to their region seemed to be adopting conventions from southern parts of the country, such as London, or from multiple regions. In other cases, younger participants responded by signing the first letter of the word, such as ‘p’ for purple. The authors attribute this generational shift to the participants’ increased exposure to different signing conventions, ushered in by technological developments such as the Internet, and to increased opportunities for travel.

Changing social norms, technologies, and opportunities—these are no strangers to us by now. As the world changes, so too do the ways in which we communicate, verbally and physically.


Citation: Stamp R, Schembri A, Fenlon J, Rentelis R, Woll B, et al. (2014) Lexical Variation and Change in British Sign Language. PLoS ONE 9(4): e94053. doi:10.1371/journal.pone.0094053

Image 1: British Sign Language chart by Cowplopmorris, Wikimedia Commons

Image 2: Figure 3 from article

The post Signs of Change: Regional and Generational Variants in British Sign Language appeared first on EveryONE.

Modern Humans: Were We Really Better than Neanderthals, or Did We Just Get Lucky?


We’ve all heard the story: dim-witted Neanderthals couldn’t quite keep up with our intelligent modern human ancestors, leading to their eventual downfall and disappearance from the world we know now. Apparently they needed more brain space for their eyes. The authors of a recent PLOS ONE paper are digging into the ideas behind this perception, and take a closer look at eleven common hypotheses for the demise of the Neanderthals, comparing each to the latest research in this field to convince us that Neanderthals weren’t the simpletons we’ve made them out to be.

The authors tackled ideas like the Neanderthal’s capacity for language and innovative ability, both often described as possible weaknesses leading to their decline. Analyzing the published research on each topic, they found that archaeologists often used their finds to “build scenarios” that agreed with the running theories of human superiority, and that some long-held truths have now been challenged by recent discoveries and ongoing research at the same excavation sites.

As one example, researchers who found shell beads and pieces of ochre and manganese in South Africa—­used as pigments—claimed them as evidence of the use of structured language in anatomically modern humans. While we can only guess when linking items like these to the presence of language, new findings at Neanderthal sites indicate that they also decorated objects with paints and created personal ornaments using feathers and claws. Whatever the anatomically modern humans were doing in South Africa, Neanderthals were also doing in Europe around the same time, negating the claim that this ability may have provided the anatomically modern humans with better survival prospects once they arrived in Europe.

Another set of South African artifacts led the archaeological community to believe that anatomically modern humans were capable of rapidly improving on their own technology, keeping them ahead of their Neanderthal contemporaries. Two generations of tools, created during the Stillbay and Howiesons Poort periods, were originally believed to have evolved in phases shorter than 10,000 years—a drop in the bucket compared to the Neanderthals’ use of certain tools, unchanged, for 200,000 years. However, new findings suggest that the Stillbay and Howiesons Poort periods lasted much longer than previously thought, meaning that the anatomically modern humans may not have been the great visionaries we had assumed. Additionally, while Neanderthals were not thought capable of crafting the adhesives used by anatomically modern humans to assemble weapons and tools, it is now known that they did, purifying plant resin through an intricate distillation process.

We’re all living proof that anatomically modern humans survived in the end. Perhaps in an effort to flatter our predecessors, we have been holding on to dated hypotheses and ignoring recent evidence showing that Neanderthals were capable of a lot more (and perhaps the anatomically modern humans of a lot less) skill-wise than previously believed. Genetic studies continue to support the idea that anatomically modern humans and Neanderthals interbred and show that the genome of modern humans with Asian or European ancestry contains nearly 2% Neanderthal genes, a substantial quantity considering 40,000 years and 2000 generations have passed since they ceased to exist. These genes may have helped modern humans adjust to life outside of Africa, possibly aiding in the development of our immune system and variation in skin color. Researchers believe that the concentration of Neanderthal genes in modern humans was once much higher, but genetic patterns in modern humans show that hybrid Neanderthal-Human males may have been sterile, leaving no opportunity for their genes to be passed to the next generation.

So, while they may not walk among us today, we have Neanderthals to thank for some major adaptations that allowed us to thrive and spread across the planet. Too bad they’re not here to see the wonderful things we were able to accomplish with their help.

Related links:

Picked Clean: Neanderthals’ Use of Toothpicks to Fight Toothache

Contextualizing the Hobbits

Sharing was Caring for Ancient Humans and Their Prehistoric Pups

Citation: Villa P, Roebroeks W (2014) Neandertal Demise: An Archaeological Analysis of the Modern Human Superiority Complex. PLoS ONE 9(4): e96424. doi:10.1371/journal.pone.0096424

Image 1: Neandertaler im Museum from Wikimedia Commons

The post Modern Humans: Were We Really Better than Neanderthals, or Did We Just Get Lucky? appeared first on EveryONE.

Linking Isolated Languages: Linguistic Relationships of the Carabayo

Amazon Header

Like PLOS ONE, the English language is rapidly taking over the world (we kid). In 2010, English clocked in at over 360 million native speakers, and it is the third-most-commonly used native language, right behind Mandarin Chinese and Spanish. While these languages spread, however, other indigenous languages decline at an accelerated pace. A fraction of these enigmatic languages belong to uncontacted indigenous groups of the Amazonian rainforest, groups of people in South America who have little to no interaction with societies beyond their own. Many of these groups choose to remain uncontacted by the rest of the world. Because of their isolation, not much is known about these languages beyond their existence.

The researchers of a recent PLOS ONE paper investigated one such language, that of the Carabayo people who live in the Colombian Amazon rainforest. Working with the relatively scarce historical data that exists for the Carabayo language—only 50 words have been recorded over time—the authors identified similarities between Carabayo and Yurí and Tikuna, two known languages of South America that constitute the current language family, Ticuna-Yurí. Based on the correspondences, the authors posit a possible genealogical connection between these languages.

Few resources were available to the authors in this endeavor. They analyzed historical wordlists collected during the last encounter with the Carabayo people in 1969—the only linguistic data available from this group—against wordlists for the Yurí language. In addition, they sought the expertise of a native speaker of Tikuna, a linguist trained in Tikuna’s many dialects. Using these resources, the authors broke down the Carabayo words into their foundational forms, starting with consonants and vowels. They then compared them to similarly deconstructed words in Yurí and Tikuna.

The examination involved the evaluation of similarities in the basic building blocks of these words: the number of times a specific sound (or phoneme) appeared; the composition and patterns of the smallest grammatical units of a word (a morpheme); and the meanings attached to these words. When patterns appeared between Carabayo and either Yurí or Tikuna, the authors considered whether or not the languages’ similarities constituted stronger correspondences. They also paid attention to the ways in which these words would have been used by the Carabayo when the lists were originally made many years ago.

The Yurí language was first recorded in the 19th century, but it is thought to have since become extinct. From these lists, five words stood out: in Carabayo, ao ‘father’, hono ‘boy’, hako ‘well!’, and a complex form containing both the Yurí word for ‘warm’, noré, and the Yurí word t?au, which corresponds in English to ‘I’ or ‘my’. Given the evidence, the authors contend that the strongest link between Carabayo and Yurí is found in the correspondence of t?au. The study of other languages has indicated that first-person pronouns are particularly resistant to “borrowing”, the absorption of one language’s vocabulary into another. The authors therefore surmise that it is unlikely either language absorbed t?au from the other, and that the two instead share a genealogical link.

Similarly, the comparison of Carabayo words to words of the living Tikuna language yielded a high number of matches, including Carabayo gudda ‘wait’ and gu ‘yes’. The matches notably exhibit sound correspondences involving Carabayo g (or k) and the loss of n in certain environments. Table 7 from the article shows the full results (click to enlarge):

Carabayo-Tikuna correspondences



Although it is possible that the Carabayo language was simply never documented before 1969, the results of the researchers’ evaluation led them to conclude that Carabayo more likely belongs to the Ticuna-Yurí language family. Placing Carabayo within that family changes the family’s structure: the Tikuna language, once considered the sole surviving member of Ticuna-Yurí, might now have a sibling, and the identity of a barely known language has become that much more defined.

For the authors, this research is a complicated endeavor. The desire to advance our knowledge and understanding of these precious languages must be balanced with the desires of the uncontacted indigenous groups, some of whom voluntarily choose to remain in isolation. As the authors themselves express, the continued study of these uncontacted languages seeks to engender an awareness in the larger community of the people who speak these languages, and to reiterate their right to be left to live their lives as they wish—in isolation.

Citation: Seifart F, Echeverri JA (2014) Evidence for the Identification of Carabayo, the Language of an Uncontacted People of the Colombian Amazon, as Belonging to the Tikuna-Yurí Linguistic Family. PLoS ONE 9(4): e94814. doi:10.1371/journal.pone.0094814

Image 1: Sunset on the Amazon by Pedro Szekely

Image 2: Table 7 from the article

The post Linking Isolated Languages: Linguistic Relationships of the Carabayo appeared first on EveryONE.

Awkward Silences: Technical Delays Can Diminish Feelings of Unity and Belonging


Smooth social interaction is fundamental to a sense of togetherness. We’ve all experienced disrupted conversations—some caused by human awkwardness and others by breakdowns in technology. The content of our interactions does influence our connection to each other, but the form and process of communication also play a role.  Technical delays that occur below our conscious detection can still make us feel like we don’t quite click with the person we are trying to communicate with. The authors of a recently published PLOS ONE article, funded by a Google Research Award, investigated how delays introduced into technologically mediated conversations affected participants’ sense of solidarity with each other, defined as unity, belongingness, and shared reality.

For this research, conducted at the University of Groningen in the Netherlands, participants in three sets of experiments sat in cubicles with headsets connected to computers (conditions many of us with desk jobs can relate to) and were asked to talk about holidays for five minutes with an assigned partner. Some conversations ran uninterrupted; others were manipulated by introducing a one-second auditory delay. Some pairs knew about the delay and others did not. Afterward, the conversationalists completed a questionnaire about their sense of unity, belonging, understanding, and agreement with their partners.


Researchers found that participants whose conversations were interrupted expressed significantly diminished feelings of unity and belonging, and awareness of the technical problems had no apparent effect on perceived solidarity. Even acquaintances reported feeling a disconnect, though to a lesser degree than participants who did not know each other. Nor did technology get a free pass on the delayed signal: even when participants could attribute the disruption to a technical problem, they still felt less unity and belongingness with their partner, and those with an interrupted connection also expressed less satisfaction with the technology. Points may have been lost for both relationships and telecommunications.

In a world where our interactions are increasingly mediated by computers and mobile phones with less than perfect signals, the authors suggest that this research provides insight into how our daily interactions may be affected. The method of communication we choose may influence our personal and business relationships, especially among strangers. The authors also posit that technology meant to improve long-distance communication by imitating face-to-face interaction may not measure up to expectations unless it can be delivered without interruptions or delays. Perhaps this is something to consider during your next awkward phone call or video conference—though your awareness of technology as a possible barrier ultimately may not make a difference in how you perceive the person on the other end of the line.

Citation: Koudenburg N, Postmes T, Gordijn EH (2013) Conversational Flow Promotes Solidarity. PLoS ONE 8(11): e78363. doi:10.1371/journal.pone.0078363

Images: First image by Villemard is in the public domain. Second image is Supplementary Figure 1 from the article.

A way with words: Data mining uncloaks authors’ stylistic flair


As any writer or wordsmith knows, searching for the right word can be a painful struggle. Here’s comforting news: word choice may be the key to understanding your stylistic flair.

New research in the field of text mining suggests that distinct writing styles are discernible by word selection and frequency. Even the use of common words, such as “you” and “say,” can help distinguish one writer from another. To learn more about style, the authors of a recent PLOS ONE paper turned to the famed lord of language, William Shakespeare.

The researchers assembled a pool of 168 plays written during the 16th and 17th centuries. After accounting for duplicates, 55,055 unique words were identified and then cross-referenced against the work of four writers from that time period: William Shakespeare, Ben Jonson, Thomas Middleton, and John Fletcher. The researchers counted how often these writers used words from the pool and ranked words by their frequency. Lists of twenty of the most-used and least-used words were then compiled for each writer and considered “markers” of their individual styles.
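As a rough illustration of that counting-and-ranking step (a toy sketch, not the paper’s actual code or data; the author names and texts below are stand-ins), the marker extraction could look like this in Python:

```python
from collections import Counter

def marker_words(texts_by_author, top_n=20):
    """Rank a shared word pool by each author's usage and return the
    most- and least-used words as style "markers"."""
    # Build the shared pool from all texts (the study drew 55,055
    # unique words from 168 period plays).
    pool = set()
    for text in texts_by_author.values():
        pool.update(text.lower().split())

    markers = {}
    for author, text in texts_by_author.items():
        counts = Counter(text.lower().split())
        # Alphabetical pre-sort makes ties deterministic; then rank
        # every pool word by this author's frequency (0 if unused).
        ranked = sorted(sorted(pool), key=lambda w: counts[w], reverse=True)
        markers[author] = {"most_used": ranked[:top_n],
                           "least_used": ranked[-top_n:]}
    return markers
```

A real analysis would also normalize for text length and handle period spelling variation, both of which this sketch ignores.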

Fletcher, for one, frequently used the word “ye” in his plays, so a relatively high frequency of “ye” would be a strong marker of Fletcher’s particular writing style. Similarly, Middleton often used “that” in the demonstrative sense, and Jonson favored the word “or.” Shakespeare himself used “thou” the most frequently, and the word “all” the least.

In addition to looking at individual word use, the researchers analyzed specific works where the writer’s style changed significantly, such as Middleton’s political satire “A Game at Chess,” which was notably different from his other works. They also compared word choice between writers. Their findings indicate that Shakespeare’s style, unlike that of his contemporaries, was marked more by his underuse of words than by his overuse. Take, for example, Shakespeare’s use of “ye.” Unlike Fletcher, who used this word liberally, “ye” is one of Shakespeare’s least frequently used words.

Such analyses, the researchers suggest, may help with authorship controversies and disputes, but they can also address other concerns. In a post in The Conversation, the authors of this paper suggest that the mathematical method used to identify words as markers of style may also be helpful to identify biomarkers in medical research. In fact, the research team currently uses these methods to study cancer and the selection of therapeutic combinations, multiple sclerosis, and Alzheimer’s disease.


Citation: Marsden J, Budden D, Craig H, Moscato P (2013) Language Individuation and Marker Words: Shakespeare and His Maxwell’s Demon. PLoS ONE 8(6): e66813. doi:10.1371/journal.pone.0066813

Image: First Folio – Folger Shakespeare Library – DSC09660, Wikimedia Commons

Sleep May Solve Grammar Gremlins

Beinecke Library

Do you know when to use who versus whom? Affect versus effect? If you’re stumped, first crack open your textbook, but then make sure to get a good night’s sleep – it could help! According to newly published research, sleep plays an important part in learning grammar, and perhaps other complex rules as well.

In their study the researchers used an invented grammar to develop sets of letter sequences. They also assessed each sequence for its “associative chunk strength,” or memorable letter clusters. Sequences with lots of these “chunks” could be easy to memorize, which the authors differentiate from learning, or rule acquisition. Participants were then shown these sequences and asked to recreate them from memory. They were not told that the letter sequences were constructed according to a set of grammatical rules.
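The post doesn’t reproduce the study’s invented grammar, but artificial grammars of this kind are typically small finite-state machines. A hypothetical Reber-style sketch (illustrative only, not the study’s grammar), with a crude stand-in for “associative chunk strength,” could look like:

```python
import random
from collections import Counter

# Illustrative finite-state grammar (NOT the one used in the study).
# Each state maps to (letter, next_state) transitions; any walk from
# state 0 to the accepting state 5 yields a "grammatical" sequence.
GRAMMAR = {
    0: [("T", 1), ("P", 2)],
    1: [("S", 1), ("X", 3)],
    2: [("T", 2), ("V", 4)],
    3: [("X", 2), ("S", 5)],
    4: [("P", 3), ("V", 5)],
    5: [],  # accepting state: no outgoing transitions
}

def generate(rng=random):
    """Random walk through the grammar, emitting one letter per step."""
    state, letters = 0, []
    while GRAMMAR[state]:
        letter, state = rng.choice(GRAMMAR[state])
        letters.append(letter)
    return "".join(letters)

def is_grammatical(seq):
    """True if seq can be produced by some walk ending in state 5."""
    states = {0}
    for letter in seq:
        states = {nxt for s in states
                  for (ltr, nxt) in GRAMMAR[s] if ltr == letter}
        if not states:
            return False
    return 5 in states

def chunk_strength(seq, training, n=2):
    """Mean training-set frequency of seq's letter n-grams -- a rough
    proxy for 'associative chunk strength'."""
    freq = Counter()
    for t in training:
        freq.update(t[i:i + n] for i in range(len(t) - n + 1))
    chunks = [seq[i:i + n] for i in range(len(seq) - n + 1)]
    return sum(freq[c] for c in chunks) / len(chunks)
```

Sequences that can only be judged grammatical by rule knowledge, rather than by memorized fragments, are exactly those with low chunk-strength scores, which is why the authors measured it.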

The participants then waited 15 minutes, 12 hours, or 24 hours before being tested on whether they had retained or learned the rules. Participants in the 12-hour group who started in the evening, as well as those in the 24-hour group, slept between experimental phases. When testing began, participants were told that grammatical rules were in use and were asked to judge whether letter sequences were grammatical.

Participants who slept between stages, i.e., those in the 12-hour and 24-hour groups, performed significantly better than those who did not sleep before the test. Specifically, they were better able to discern grammatical from non-grammatical letter sequences, and the advantage held even for sequences with fewer memorable letter chunks, pointing to genuine rule learning rather than memorization. The results also indicate that the length of the waiting period itself, whether minutes or hours, did not significantly affect the participants’ performance.

Students, the next time you think you can forgo a good night’s sleep, think again! Sleep may just help you learn those tricky grammatical rules.

Citation: Nieuwenhuis ILC, Folia V, Forkstam C, Jensen O, Petersson KM (2013) Sleep Promotes the Extraction of Grammatical Rules. PLoS ONE 8(6): e65046. doi:10.1371/journal.pone.0065046

Image: Childrens talk, English & Latin by Beinecke Library.