Alumni profile: Nick Lesica, Ph.D. '05

Building better hearing aids through deep learning

Nick Lesica, Ph.D. '05

Nick Lesica, Ph.D. '05, discovered biomedical engineering towards the end of his undergraduate studies in electrical engineering and computer science at MIT. He’d never considered the human body, especially the brain, as a potential focus for his engineering education, but as he began looking at Ph.D. programs, he found himself increasingly drawn to bioengineering. When he met Garrett Stanley, then an Assistant Professor of Bioengineering at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), he found a lab whose research would permanently reorient his career.

“He’d just completed a study where he’d been able to reconstruct what a cat was looking at from observation of its brain activity,” Lesica said. “Because my experience up to that point wasn’t really related to neuroscience, to be able to see the kind of frameworks I’d become familiar with through my engineering studies being put to use in that application was a big moment. That really drew me to him and that sort of work, and I’ve been doing pretty similar things for the last 25 years.”

Today, Lesica studies how the brain encodes external sensory input. He’s a Professor of Neuroengineering in the Ear Institute at University College London (UCL), where his lab studies the neurophysiology of hearing and hearing loss. He’s also co-founder of a biotech start-up, Perceptual Technologies, which is using that research to develop deep learning algorithms that could lead to more advanced and effective hearing aids.

“What’s going on in the ear and brain after hearing loss is extremely complicated, but what a hearing aid is doing is extremely simple,” Lesica said. “It’s more or less just turning up the volume. If you think about the simplicity of that solution and the complexity of the problem, it’s not really shocking that there’s a mismatch there. What’s new is the fact that we can now maybe try to close that gap.”

What makes hearing and hearing loss complicated is the nonlinearity of the cochlea, the spiral-shaped part of the inner ear directly involved in hearing. The way air vibrations are transformed into signals sent to the brain is much more complicated than what happens with light in the eye. For example, when the cochlea detects two frequencies, it generates a third on its own and encodes all three in the signal it sends to the brain. This is why a common hearing test involves playing two tones into the ear and using a microphone to record which frequencies the cochlea produces in response. After hearing loss, that third tone is gone.

“The ear, when it’s healthy, has the ability to reshape sounds, and it does this in a highly nonlinear way,” Lesica said. “To contrast with what a current hearing aid does, if you just put two tones into it, all it does is turn up the volume. It’s certainly not going to create that third tone, so it really isn’t doing anything to restore the essential nonlinearities in the cochlea that are lost. A hearing aid that is doing what needs to be done would actually create the third tone. No one actually cares about tones, of course, but the same nonlinearity that creates the third tone is what lets us separate speech from noise, or enjoy music.”
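The effect Lesica describes can be illustrated with a toy calculation. The sketch below is only a minimal illustration, not a model of real cochlear mechanics: two pure tones at hypothetical frequencies f1 = 1,000 Hz and f2 = 1,200 Hz are passed through a simple linear gain and through a memoryless cubic nonlinearity, and only the nonlinear path produces energy at 2·f1 − f2 = 800 Hz, the "third tone" a healthy cochlea generates and a volume-only hearing aid does not.

```python
import numpy as np

# Toy illustration (not real cochlear mechanics): two pure tones passed through
# a memoryless cubic nonlinearity produce a new distortion tone at 2*f1 - f2,
# while a simple linear gain ("just turning up the volume") does not.
fs = 16000                    # sample rate in Hz (assumed)
t = np.arange(fs) / fs        # one second of signal
f1, f2 = 1000.0, 1200.0       # example input frequencies
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

linear = 2.0 * x              # volume-only processing
nonlinear = x + 0.3 * x**3    # crude stand-in for cochlear nonlinearity

def level_at(signal, freq_hz):
    """Spectral magnitude at a given frequency (1 Hz resolution here)."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    return spectrum[int(round(freq_hz))]

dp = 2 * f1 - f2              # 800 Hz, the cubic distortion product
print(f"level at {dp:.0f} Hz, linear gain:  {level_at(linear, dp):.4f}")
print(f"level at {dp:.0f} Hz, nonlinearity: {level_at(nonlinear, dp):.4f}")
```

Running this prints essentially zero for the linear gain and a clearly nonzero value for the nonlinear path, mirroring the clinical test described above: after hearing loss, the measured third tone disappears.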

Deep learning models can be designed to simulate the auditory system, offering a way to characterize nonlinear relationships between datasets, such as those between sound and the neural activity it elicits. The algorithms being developed at Perceptual Technologies could be a way to build hearing aids that provide the nonlinearity that hearing loss has taken away.

“Until recently, we didn’t really have any tools, either experimental or computational, that we could use to try to bridge that gap,” Lesica said. “Around 5-10 years ago, two things happened simultaneously: Our ability to observe brain activity became much greater, and deep learning came online. Now we do experiments where we record lots and lots of neural activity in response to lots and lots of sounds, feed those data sets to deep learning, and then develop essentially perfect replicas of the auditory system.”
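A heavily simplified sketch of that pipeline is below. The architecture, data shapes, and loss are illustrative assumptions, not the actual Perceptual Technologies model: a small convolutional network is fit to map sound spectrograms to simultaneously recorded neural activity, so that the trained network serves as a stand-in "replica" of the auditory system.

```python
import torch
import torch.nn as nn

# Minimal sketch of fitting a "replica" of the auditory system: a network that
# maps sound (as spectrograms) to recorded neural activity. All shapes and
# layer choices here are assumptions for illustration.

class AuditoryReplica(nn.Module):
    def __init__(self, n_freq_bins=64, n_neurons=512):
        super().__init__()
        # Convolutions over time capture short-term temporal structure in the
        # sound; the nonlinear readout maps it to per-neuron firing rates.
        self.encoder = nn.Sequential(
            nn.Conv1d(n_freq_bins, 128, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=9, padding=4),
            nn.ReLU(),
        )
        self.readout = nn.Conv1d(128, n_neurons, kernel_size=1)

    def forward(self, spectrogram):           # (batch, freq_bins, time)
        return torch.relu(self.readout(self.encoder(spectrogram)))

# Toy stand-ins for the real data: spectrograms of presented sounds and the
# simultaneously recorded neural activity (e.g., spike counts per time bin).
sounds = torch.randn(32, 64, 200)
neural_activity = torch.rand(32, 512, 200)

model = AuditoryReplica()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):                       # training loop sketch
    predicted = model(sounds)
    loss = nn.functional.poisson_nll_loss(predicted, neural_activity,
                                          log_input=False)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```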

By externally replicating the healthy cochlea’s inherent nonlinearity, hearing aids that use Lesica’s algorithms would transform incoming sound so that the signal sent to the brain mimics what it was before hearing loss occurred. Potential benefits include making it easier to distinguish individual voices in a loud environment such as a restaurant.
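Continuing the previous sketch (and reusing its imports, the hypothetical AuditoryReplica class, and the sounds tensor), one way to phrase that correction problem is to learn a front-end transformation of the sound such that an "impaired" replica driven by the transformed sound reproduces the activity a "healthy" replica produces for the original sound. Again, this is only an illustration of the idea, not Lesica’s actual method.

```python
# Illustrative only: "healthy" and "impaired" stand for replicas fit to
# recordings from healthy and hearing-impaired ears, respectively.
healthy = AuditoryReplica()
impaired = AuditoryReplica()
for p in impaired.parameters():
    p.requires_grad_(False)        # only the corrector is trained

corrector = nn.Sequential(         # the candidate "hearing aid" front end
    nn.Conv1d(64, 64, kernel_size=9, padding=4),
    nn.ReLU(),
    nn.Conv1d(64, 64, kernel_size=9, padding=4),
)
optimizer = torch.optim.Adam(corrector.parameters(), lr=1e-3)

for step in range(100):
    target = healthy(sounds).detach()          # activity the brain should get
    predicted = impaired(corrector(sounds))    # activity it gets with the aid
    loss = nn.functional.mse_loss(predicted, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```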

“Current hearing aids pretty much butcher music, and one of the complaints of hearing aid users is that music sounds distorted or harsh and they can’t enjoy it the way they used to,” Lesica said.

After receiving his Ph.D. in engineering sciences, Lesica spent four years as a postdoctoral fellow at Ludwig Maximilian University of Munich before joining UCL in 2010. He co-founded Perceptual Technologies in 2020 with Andreas Fragner, a former physicist and deep tech entrepreneur.

“In partnership with UCL, we submitted a bid to the government for about a million dollars to do the technology de-risking that we thought would be necessary in order to then get private investment to do product development,” he said. “We’re now about halfway through the three-year cycle of technology de-risking. We’re doing all our work in the lab, and we’re at the point where we’re ready to do some validation, then do the next round of funding for product development.”

Lesica has always stayed in academia because of its potential for impact free from some of the constraints of the commercial world. But since becoming a co-founder, he’s found that entrepreneurship has enhanced his lab experience, not inhibited it.

“It’s actually made me a vastly better scientist,” he said. “When you put yourself on the hook of ensuring that your science is leading you towards a particular, tangible, real-world outcome, it becomes much harder to convince yourself that what you’re doing is successful just because you’ve found something you believe is interesting or published a paper in a high-impact journal. When we’re doing things in the lab, I ask if it’s actually getting us closer to a better hearing aid. If not, it’s not good enough, or it’s the wrong direction to pursue.”

Press Contact

Matt Goisman | mgoisman@g.harvard.edu