Most of us take it for granted, but a new brain map shows how complicated speaking actually is. The work, published in the journal Nature, details the intricate coordination of neural networks required to make speech happen, from the nerves that control the jaws, lips and tongue to those that manipulate the larynx. The map lays the groundwork for potential brain-computer interfaces that could some day help people who are completely paralyzed to speak again.
“We have discovered the basic mechanisms of how the brain controls the complex constellation of vocal tract movements required to produce speech,” says lead author Dr. Edward Chang, chief of epilepsy and pain neurosurgery at the University of California San Francisco (UCSF) Medical Center.
Working with three patients who were undergoing surgery for epilepsy, Chang and his colleagues implanted arrays of electrodes on the surface of the brain, in a region known to be involved in speech — the ventral sensorimotor cortex. The patients needed to have electrodes placed to locate the source of their uncontrollable seizures and agreed to participate in the experiment during the course of their treatment.
Previous work on epilepsy patients, starting in the 1930s, established that this brain region was important for speech, but electrically stimulating small areas did little more than produce twitches of the mouth and tongue, suggesting that speech production required a more intricate interplay among a wider set of nerve networks.
To tease out some of those connections, the UCSF researchers recorded the brain activity that occurred as the participants pronounced short syllables like “ba” or “ga” or “da.” “By looking simultaneously at many electrodes over [all of the regions involved in the] speech motor cortex, we found that speech sounds generated complex patterns of brain activity that changed over time,” Chang explains.
The scientists had expected that each distinct speech sound would have a corresponding brain region devoted to producing it, but that wasn’t the case. “The realization that the neural activity was being played out through the entire region led us to completely re-think what was going on,” he says. “It appears that even simple speech sounds, like ba or da, require precise neural coordination. It’s kind of like individual musicians working together to create a symphony.”
Fortunately, however, while these patterns are wide-ranging in the brain, they were mostly the same in all of the patients, meaning that it may be possible to “translate” signals from one brain to another and develop a computer program that would work to generate speech for those who are paralyzed, though it would need to be tuned for particular individuals. “The map layout of the vocal tract on the brain was largely the same,” Chang says. “But there were definitely individual differences that were observable at a more detailed level. The big picture blueprint for translating results across people probably works well, but the finer details, which are also important, are likely different.”
This means that it might also be possible to figure out what someone is going to say simply by measuring and studying their brain activity, an intriguing possibility that may help patients whose vocal abilities are hampered or paralyzed, but whose brain functions are normal. “Our findings suggest that this could in fact be done from these types of brain recordings,” Chang says. The study participants were all native English speakers, and the researchers plan next to study people who speak other languages to see whether the signaling patterns vary from those of English sounds.
“Our results demonstrate that the ‘neural code’ for speech production is not based around distinct brain areas for individual speech sounds, but instead on patterns of neural activity across adjacent brain regions that represent the speech articulators (lips, tongue, jaw, and larynx) that are used to produce that sound,” says Dr. Kristopher Bouchard, also of UCSF, who collaborated with Chang on the research.
The research also explains why tongue twisters are so difficult and why some verbal mistakes, like saying “she shells” for “seashells,” are more common than others. “We found that how consonants and vowels are generated from the speech cortex is very distinct,” Bouchard says. “In some types of ‘slips of the tongue’, this may explain why it is more common to substitute consonants with one another, and the same for vowels, but very rarely across these categories.”
Understanding the complexity of what speech looks like in the brain could be an important step in generating better treatments for speech disorders as well.