Researchers at Massachusetts General Hospital (MGH) have conducted a novel study that uses cutting-edge brain-recording techniques to show how neurons in the human brain cooperate to enable humans to plan words for speech and then say them aloud.
Together, these discoveries offer a detailed map of how speech sounds, such as vowels and consonants, are encoded in the brain well before they are uttered, and how they are strung together during language production.
The discovery, published in Nature, opens a window onto the neurons in the brain that enable language production. This knowledge may lead to a better understanding, and better treatment, of speech and language disorders.
Senior author Ziv Williams, MD, an associate professor of neurosurgery at Massachusetts General Hospital and Harvard Medical School, notes that “although speaking usually seems easy, our brains perform many complex cognitive steps in the production of natural speech—including coming up with the words we want to say, planning the articulatory movements and producing our intended vocalizations.”
“Our brains perform these feats surprisingly fast—about three words per second in natural speech—with remarkably few errors. Yet how we precisely achieve this feat has remained a mystery.”
Using a cutting-edge technology called Neuropixels probes to record the activity of individual neurons in the prefrontal cortex, a frontal region of the human brain, Williams and colleagues found cells involved in language production that may underlie the ability to speak. They also discovered that speaking and listening are controlled by separate neural networks in the brain.
Williams, who collaborated with Sydney Cash, MD, Ph.D., an MGH and Harvard Medical School professor of neurology who also helped direct the study, to create these recording methods, adds, “The use of Neuropixels probes in humans was first pioneered at MGH. These probes are remarkable—they are smaller than the width of a human hair, yet they also have hundreds of channels that are capable of simultaneously recording the activity of dozens or even hundreds of individual neurons.”
“Use of these probes can therefore offer unprecedented new insights into how neurons in humans collectively act and how they work together to produce complex human behaviors such as language,” Williams says.
The study demonstrated how the brain’s neurons encode some of the most fundamental components used in constructing spoken words, from phonemes, the basic units of speech sound, to syllables, the more complex strings into which phonemes are combined.
For instance, the word dog requires the consonant sound “da,” which is produced by placing the tongue against the hard palate just behind the teeth.
Through single-neuron recordings, the researchers discovered that some neurons fire even before this phoneme is uttered. Other neurons reflected more intricate aspects of word construction, such as the precise assembly of phonemes.