OSU Professor Makes Strides in Hearing Aid Research
An Ohio State professor has made strides in hearing aid research in an effort to solve a difficult auditory phenomenon known as “the cocktail party problem.”
The cocktail party problem arises when many sounds compete in a room at once and a hearing aid can’t isolate the individual voice a listener is trying to follow.
“The mechanisms for hearing loss are a lot more complicated compared to nearsightedness or needing reading glasses,” DeLiang Wang, a professor of computer science and engineering, said.
To filter out the background noise, Wang and his team used a technique called deep learning, in which layered artificial neural networks learn from examples how to perform humanlike tasks.
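To illustrate the general idea rather than Wang’s actual system, the sketch below shows a tiny neural network in Python that learns to estimate, for each frequency band of a sound’s spectrogram, how much of the energy belongs to speech rather than noise; that estimate can then be used to suppress the noisy parts before the audio reaches the listener. The network size, features and stand-in training data here are illustrative assumptions.

```python
# Minimal sketch of deep-learning noise reduction (illustrative, not the OSU system):
# a small network predicts a speech/noise "mask" for each spectrogram frame.
import torch
import torch.nn as nn

class MaskEstimator(nn.Module):
    """Predicts, for each frequency bin of a spectrogram frame, how much of the
    energy belongs to speech (a value between 0 and 1)."""
    def __init__(self, n_freq_bins=257):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_freq_bins, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, n_freq_bins), nn.Sigmoid(),  # mask values in [0, 1]
        )

    def forward(self, noisy_frames):
        return self.net(noisy_frames)

def train_step(model, optimizer, noisy_mag, ideal_mask):
    """One training step: the target mask is computed from clean speech and noise
    that were mixed together, so the network learns to recover it from the
    noisy mixture alone."""
    optimizer.zero_grad()
    predicted_mask = model(noisy_mag)
    loss = nn.functional.mse_loss(predicted_mask, ideal_mask)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    # Stand-in data: a batch of 32 noisy spectrogram frames and target masks.
    model = MaskEstimator()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    noisy_mag = torch.rand(32, 257)
    ideal_mask = torch.rand(32, 257)
    print("loss:", train_step(model, optimizer, noisy_mag, ideal_mask))
    # At listening time, the predicted mask is multiplied with the noisy
    # spectrogram to suppress noise-dominated regions before re-synthesis.
```

In practice, a mask-estimating network like this is trained on many mixtures of clean speech and noise so that it can generalize to noisy rooms it has never heard.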
“Deep learning has all sorts of applications,” Wang said. “But there’s a very significant benefit to people with hearing aids.”
The application is considered a breakthrough in hearing technology, and Ohio State was the first in the world to apply the technique in the field, where it now leads new research. Wang called it a new solution to an old challenge.
According to the National Institute on Deafness and Other Communication Disorders, more than 15 percent of Americans report some degree of hearing loss, but only about 1 in 5 people who could benefit from a hearing aid actually use one.
But to long-time hearing-impaired listeners, the cocktail party phenomenon represents more of a social issue than an actual danger.
Eric Healy, a professor of speech and hearing science who directs the Speech Psychoacoustics Lab, has been involved with hearing science for more than 20 years and has collaborated with Wang since 2012.
Healy, who primarily works with the human subjects in their research, said the work addresses hearing aid users’ “No. 1 complaint.”
“They all say the same thing: Their hearing aids work fine when at home, but anytime they go anywhere noisy the hearing aids just don’t work at all,” Healy said.
Another aim of their research is to close the gap between the listening experience of people who wear hearing aids and those who don’t.
By filtering out background noise instead of simply amplifying every sound the way regular aids do, the approach improved listeners’ ability to understand speech by almost 100 percent, according to Wang’s cover article in IEEE Spectrum magazine.
While the improved hearing aid is not yet available, the same deep-learning technology is already used in self-driving cars, headphones and smartphones. New digital hearing aids also have more processing power than their predecessors.
“There’s little question that this is the future of noise reduction,” Healy said.
Article originally appeared on The Lantern.