Mapping how our brains process sound holds clues for helping some people retain — or even regain — speech. A collaborative team of researchers from UT and Dell Children’s is pointing the way.
A 12-year-old patient, Anna, is at Dell Children’s Medical Center of Central Texas awaiting surgery to treat epilepsy, which she has had since birth. Her seizures are growing more frequent and more severe, and though epilepsy is among the most common neurological conditions, Anna’s case is rare: It isn’t responding to medication.
Ahead of Anna’s surgery, a team carefully wires electrodes directly into her brain to monitor its activity during the week or two she’ll be in the hospital — tracking how and where her brain “lights up” as she listens to her mom talking or watches her favorite Pixar movie.
That resulting brain activity is the focus of Liberty Hamilton, Ph.D., assistant professor in the Department of Neurology at Dell Medical School at The University of Texas at Austin and in its Moody College of Communication’s Department of Speech, Language and Hearing Sciences. Her team’s National Institutes of Health-funded work to map the way brains process sound has far-reaching implications for surgeries like Anna’s, as well as for people who experience brain injury, and much more.
“I’ve always been fascinated by how people are able to communicate and how we’re able to turn sounds into meaningful language, how we’re able to learn new languages, or communicate through non-language like music,” Hamilton says. “From a medical standpoint, knowing how the brain processes sound and language — especially in developing brains, which are understudied — can help us to understand what’s going on in people who have a brain injury or something like epilepsy that might affect language networks.”
Hamilton collaborates widely with clinicians and researchers at Dell Med, Dell Children’s and UT, including on an interdisciplinary study aiming to protect crucial brain functions, such as speech and language, during surgery.
Beyond Epilepsy
In addition to helping young epilepsy patients, Hamilton’s brain mapping work can inform other research that depends on knowing how the brain processes sound and language, such as studies of patients with ALS, a motor neuron disease, who have lost the ability to speak.
Jun Wang, Ph.D., also holds joint appointments in the departments of Neurology and Speech, Language and Hearing Sciences. His work with brain-computer interfaces aims to help ALS patients speak again — essentially, helping the brain to “speak” through an external device.
“Using brain-computer interfaces for speech — to actually communicate — is relatively new,” Wang says. “Until about five years ago, the idea of a computer being able to decode content from our brains was on par with science fiction.”
But Wang’s team is working with Sandia National Labs to create a next-generation, helmet-sized device that will let ALS patients use speech-related brain-computer interfaces at home or at work.
Hamilton’s brain mapping work is key here: Her data showed that Wang’s team needed only nine sensors, not the 200 it had originally planned for.
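The article doesn’t describe how the sensor count was narrowed down, but the underlying idea — ranking candidate recording sites by how much each contributes to decoding, then keeping only the top few — can be sketched in a few lines. The following is a minimal, hypothetical illustration, not the Hamilton or Wang pipeline: the simulated data, the per-channel logistic-regression score, and the channel counts are all assumptions made for the example.

```python
# Hypothetical sketch of data-driven sensor selection: score each candidate
# channel by how well it alone decodes a speech-related label, then keep the
# top k. All data here is simulated; shapes and scores are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels = 500, 200                   # assume 200 candidate sensor sites
X = rng.standard_normal((n_trials, n_channels))   # per-channel activity features
y = rng.integers(0, 2, size=n_trials)             # e.g., a binary speech label

# Cross-validated decoding accuracy using each channel on its own.
scores = np.array([
    cross_val_score(LogisticRegression(), X[:, [ch]], y, cv=5).mean()
    for ch in range(n_channels)
])

# Keep the k most informative channels (k = 9, echoing the article's figure).
k = 9
best_channels = np.argsort(scores)[::-1][:k]
print("selected sensor sites:", sorted(best_channels.tolist()))
```

In practice the scoring would run on real neural recordings and task labels, and channels are often evaluated jointly rather than one at a time; the one-at-a-time ranking above is simply the most compact version of the idea.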
“There definitely are clinical studies for which areas of the brain are active during speech and language, but we’re really working toward a level of fine-grained detail that pinpoints which areas are for pitch, which areas are for phonemes, which areas are for the meaning of words,” Hamilton says. “With that level of precision, and with hundreds of thousands of patients who experience these neurological conditions every day, the opportunities for impact are pretty endless.”