Could this be the answer to our Holy Grail? I have seen a few other articles on AI in aids (e.g., Starkey’s press release of a year ago), but what intrigued me about this timely article is that it’s the EXACT reason I’m going in to see my aud-guy lately!

I’ve found - as so many others have, too - that distinguishing SPEECH in noisy places is my biggest challenge. In smaller groups and quieter settings - no prob! But put a dozen people in a crowded room, or go out to a noisy pub in town and YE GODS! The ambient noise competing with ordinary conversations - even at a screaming pitch! - is simply overwhelming for me to zero in on and comprehend.

Part of my brain is in anxiety mode - TOO overloaded with noise from all over! The other part is trying desperately to just face people right next to me and read lips if need be to figure out what they’re saying. So my hubs stumbled across this article today and sent it my way, which I’m now putting up here.

Summary: <<A new piece of technology coming out of Columbia University School of Engineering and Applied Science could make things even better, however — courtesy of a hearing aid that is designed to read brain activity to determine which voice a hearing aid user is most interested in listening to and then focusing in on it. The resulting “cognitive hearing aid” could be transformative in settings like crowded rooms in which multiple people are speaking at the same time.>>
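The core trick here - often called auditory attention decoding - can be sketched in a toy way: reconstruct a rough speech envelope from the listener’s brain signals, then check which talker’s envelope it correlates with best and amplify that one. Below is a minimal illustration with made-up numbers standing in for real EEG and audio; none of the signal names or the `attended_speaker` function come from the article, they’re just my sketch of the idea:

```python
import random
import statistics

def correlation(x, y):
    """Pearson correlation between two equal-length sequences."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def attended_speaker(decoded_env, candidate_envs):
    """Index of the talker whose envelope best matches the EEG-decoded one."""
    corrs = [correlation(decoded_env, env) for env in candidate_envs]
    return corrs.index(max(corrs))

rng = random.Random(42)
# Toy amplitude envelopes for two competing talkers.
talker_a = [abs(rng.gauss(0, 1)) for _ in range(500)]
talker_b = [abs(rng.gauss(0, 1)) for _ in range(500)]
# Pretend the listener attends to talker A: the decoded envelope tracks A plus neural "noise".
decoded = [a + rng.gauss(0, 0.5) for a in talker_a]

print(attended_speaker(decoded, [talker_a, talker_b]))  # 0 -> amplify talker A
```

The real system obviously has to do this continuously and from noisy scalp or implanted recordings, which is the hard part - but the selection step really is a "pick the best-matching talker" decision like this.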

I realize this type of HA is probably a few years out, but ironically, I’ve just been in to see my aud-guy to boost the speech frequency gain on my Phonak Marvels in an attempt to help me hear speech better. Yes, the setting now boosts all other noises in the same frequencies, but at least it helps me comprehend speech in noisy places much better!

Professor Nima Mesgarani at Columbia U shares his goal: <<“Working at the intersection of brain science and engineering, I saw a unique opportunity to combine the latest advances from both fields, to create a solution for decoding the attention of a listener to a specific speaker in a crowded scene which can be used to amplify that speaker relative to others.”>>

Let’s face it: <<… up until now, no hearing aid on the market has addressed this specific problem. While the latest hearing aids feature technology designed to suppress background noise, these hearing aids have no way of knowing which voices a wearer wants to listen to, and which are the distractors.>>

I’m SUPER excited about this development, and I can honestly see the day when folks with normal hearing would use such a device at parties, conferences and large gatherings, cuz let’s face it: we ALL have problems hearing speech in noisy places. I’m impaired with the additional challenge of just plain trying to HEAR, but if AI could be harnessed to help me distinguish speech, I bet it could be fine-tuned to improve the quality of just about any frequency.

^^^ Do you know what I just told my husband? “If I could have my aids focus on SPEECH only and NO other noise, I’d be happy!” Of course that was said in a moment of frustration at my aids simply not doing enough to boost speech.

I do like the sounds of nature outside, but I ABHOR the city sounds of sirens, roaring traffic, buses, loud subways, et al.

And at the end of the day, what keeps our brains sharp as razors is HEARING speech, COMPREHENDING speech and ANSWERING speech if we actually hear the question or comment correctly.

To each their own: I drive, I ride motorcycles, and I also do a lot of walking, so I have to be able to hear it all. For me it has taken years to retrain my brain to accept the sounds. Ever since my last fitting adjustments just over a month ago, I have been amazed at how my brain has adapted to new sounds and learned how to hear better in noise. Every Sunday morning at church I notice the difference in the way things sound; every time I go out to eat I notice that I am hearing speech in noise better. Every time I get in the car and drive I notice that I am hearing better, and while I still hear road noise, it does not bother me. I can be in the car with the radio on and still carry on a conversation with my wife or others, even in the back seat.

So here is my take on this: we do not need machines to take over what our brains can and should be doing. And this is coming from someone who made his living in computers and software development.

I have to say I agree with cvkemp. I know that speech is important, but to me so are music, rain, ocean waves, my cat meowing for attention, and cars coming up behind me on my bicycle. I personally would not want to miss all the other sounds to sacrifice them for better speech.

You say your problem is in crowded places with a dozen people, or noisy places like pubs. That is normal and even full hearing people have trouble hearing friends’ conversations in that type of environment. I go out karaokeing about once a month or so and the place my friends and I go to is obviously loud, even my friends with perfect hearing need to raise their voices to hear each other. It isn’t a fault of the aids, it is a product of that type of environment, just a lot going on.

They aren’t going to make a one-trick-pony HA that only amplifies and filters speech. Give me a break.

AI is way, way, way over-hyped.

AI today is bandied about like “electricity” and “magnetism” were in the 19th century. Yes, it’s a technological advancement. Yes, it’s going to give us incremental improvements in technological artifacts. But it’s not magic.

AI was one of my concentrations at university; I was involved in the early development of neural networks, and today my company is developing machine-learning algorithms for signal-processing pattern recognition of physiological data.

It’s improving things but it’s not a panacea or a revolutionary breakthrough. Just as “electricity” never gave us Frankenstein-style reanimation of the dead and “magnetism” never gave us time travel, AI isn’t going to give us either killer robots or magical hearing aids.

What you’ll see is an improvement in speech-in-noise programs, and otherwise the HA’s will be identical to the previous model.

And as a parting thought: notwithstanding what I said before about AI not being magic, God forbid the military ever put an AI computer in control of the nuclear arsenal. Doesn’t anyone read or watch science fiction??

You say your problem is in crowded places with a dozen people, or noisy places like pubs. That is normal and even full hearing people have trouble hearing friends’ conversations in that type of environment.

That is true! My hubs will tell me afterwards that he also struggled to hear at loud places. But the key difference is that most people with hearing in a pretty normal range are able to reply pretty fast to conversation and questions being bantered about. They may be shouting at each other over a din, and leaning forward, straining to hear the person talking, but by golly they respond without any repetition of the comment or question.

THAT is what I’d like MORE of after my aud-guy tweaks Speech in Noise and Speech in LOUD Noise programs. When I start distinguishing speech better, I think I’ll get better at comprehension. It’s a brain/ear thing, and I need more of a workout in that area.

It is a nuke exploding in the upper atmosphere that emits an electromagnetic pulse that destroys electrical systems, computers, and almost all forms of communication. And there is really no defense against it other than preventing it from exploding.

Hey jorge - this is a great article and UTTERLY hits the nail on the eardrum for me! What I found pertinent to my situation:
<<… people with normal hearing may only require an SNR of 2-3 dB to correctly perceive words in noise, whereas people with a mild-to-moderate hearing loss may need an 8 dB SNR to achieve the same success. In other words, if you have a patient with an SNR requirement of 8 dB, and you aid them with technology that provides a 2-4 dB improvement, they still will not be able to understand speech in noise.>>

Now consider my own audiogram! I bet I’d need an SNR boost of up to 10 dB to better understand WHAT’S being said as opposed to hearing a babble of speech!
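The arithmetic in that quote is just addition: the SNR the room gives you, plus the aid’s improvement, compared against what you need. A tiny sketch - the 0 dB “noisy pub” figure is my assumption, the required-SNR numbers come from the quote:

```python
def understands_speech(required_snr_db, room_snr_db, aid_improvement_db):
    """True if the aided SNR meets the listener's requirement."""
    return room_snr_db + aid_improvement_db >= required_snr_db

# Normal hearing (~3 dB required) in a hypothetical 0 dB pub, aid adds 4 dB:
print(understands_speech(3, 0, 4))   # True
# Mild-to-moderate loss per the quote (~8 dB required), same aid:
print(understands_speech(8, 0, 4))   # False
# A 10 dB requirement, like I'm guessing for myself:
print(understands_speech(10, 0, 4))  # False
```

Which is exactly the quote’s point: a 2-4 dB improvement simply doesn’t close an 8 dB (let alone 10 dB) gap.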

My aud-guy just boosted the speech frequencies’ SNR for me by 4 dB for Speech in Noise the other day. Now I will have a better idea of what to ask for when I return next week to have my Speech in LOUD Noise boosted really significantly (while the program has already diminished all other frequencies to just about a murmur!).

YES! I would keep this 4th program on my aids even if I only use it half a dozen times a year - it is that key to me to hear in super loud places. I don’t search out these places, but normal hearing folks I socialize with seem to LOVE the VIBE of super noisy places, LOL!

Early this year, something was mentioned about a proactive assistant that would use deep neural networks or machine thinking (someone may remember the 2019 CES award). What we will get is a different story; my estimation is we will see this on the next generation of hearing aids.

my estimation is we will see this on the next generation of hearing aids.

Fingers crossed on that one! For now, I feel very blessed to be able to tweak the speech frequencies on my Marvels - boosting the SNR here.

The article you’d linked to reminded me how frustrated I was using the Oticon Opn aids 2 yrs ago. Their whole product strategy appears to be targeting our DNN connections - actually training our brains to distinguish speech better and better over time.

I wore those dang aids for 9 mos and was still DUMB AS A STICK discerning speech in any kind of noise. Like my brain is just a block of cement. I’m very healthy, never had a stroke, granted am deaf as a doorknob, but could simply never train my brain to get any better distinguishing what folks were saying in any kind of noise.

That’s why the manual changes made on my Phonak Marvel Speech in Noise/LOUD Noise are so critical for me - and remain my crutch for better hearing.

ask your tech to set up a memory setting in your aids that is full time directional. All of our complex algorithms and fancy buzz words about noise management are still a far second to directional microphones when it comes to picking a voice out of many voices.

Another thing that I harp on all the time, and that many - even very experienced audiologists and hearing aid dispensers - don’t understand, is the overuse of compression, which most digital hearing aids apply right out of the box. Raising the knee points, if possible, and reducing compression ratios from the top down make a huge difference when it comes to hearing in noise.

Great thread and thanks to all!
Eric - can you elaborate a little on what you mean by “raising the knee points, if possible, and reducing compression ratios from the top”?
I’ve tried to get my head around why the compression technique doesn’t seem to be the cure-all for high-frequency loss I expected!
Thanks

WDRC, also known as wide dynamic range compression, is the prevalent manner by which hearing aids control sound level.

A linear hearing aid uses peak clipping, with no compression - meaning that when sound gets to a certain level, it is just hacked off. Think of Gandalf: “You shall not pass!”

Digital aids have “knee points,” which is the level where compression kicks in. Say that point is 60 dB. Then the compression ratio dictates how much reduction of sound the device will perform: if it is a 3:1 ratio, which isn’t uncommon in contemporary digital aids, it takes 3 dB in to get 1 dB into your ear.

So if the ambient level of the environment is above 60 dB, the hearing aid is compressing out those sounds you need to hear for speech clarity. Not to mention that the higher the compression, the more noise there is in the circuit at idle (noise floor), and the louder the noise gets, the more distortion is induced.

At a 50 dB ambient noise level in quiet conversation, the circuit may be at 0.3% or 0.4% total harmonic distortion (THD). With a 60 dB knee point and 3:1 compression, at a 75 dB ambient noise level the THD can rise to 2% or more, so basically the signal quality is getting destroyed.

Many instruments today do not have variable knee points, but by pulling up the “loud line” on the graph in the fitting software, the compression can be reduced to where the hearing aid is almost linear, which produces a better-quality sound signal in noise.
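Eric’s knee-point/ratio arithmetic can be written down in a few lines. This is just a sketch of a single-knee input/output curve with the make-up gain left out; the 60 dB knee and 3:1 ratio are his example numbers, and the 1.2:1 case shows the “almost linear” setting he describes:

```python
def wdrc_output_level(input_db, knee_db=60.0, ratio=3.0):
    """Output level for a simple one-knee WDRC curve (make-up gain omitted).

    Below the knee the aid is linear; above it, every `ratio` dB of
    additional input yields only 1 dB more output.
    """
    if input_db <= knee_db:
        return input_db
    return knee_db + (input_db - knee_db) / ratio

print(wdrc_output_level(50))             # 50.0 (below the knee: linear)
print(wdrc_output_level(75))             # 65.0 (15 dB over the knee squeezed to 5 dB)
print(wdrc_output_level(75, ratio=1.2))  # 72.5 (gentler ratio: nearly linear)
```

You can see why a 75 dB pub hits so differently under 3:1 versus 1.2:1 - the loud speech components above the knee get squeezed far more aggressively in the first case.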

Thanks Eric - sophomore sound physics was a while ago! I actually thought compression referred to compressing the frequency bands closer together to enable hearing frequencies outside your range of amplification!
So, if you raise the knee points and lower the ratios, I can see where loud speech in a loud background would be easier to discern - say, understanding the lyrics at a rock concert - but wouldn’t that simply overwhelm softer speech in a noisy environment, say speech from across the table at a noisy bar?
Maybe what I need is a pointer towards a basic primer for speech and hearing physics?

It all depends on the instrument, how many compression channels it has, and the tech running the software.

I have a surround sound system in my office that I use to simulate restaurant/pub/bar environmental noise and play around with the settings in real time.

It usually ends up working best with a bit more top-down compression in the lower channels, and less in the high-frequency channels: maybe 1.5-1.7:1 top and bottom up to about 2k, and above 2k, 1.2-1.3:1 on the top side and 1.4-1.5:1 on the bottom.
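For what it’s worth, those per-channel numbers could be jotted down as a little lookup table. This is purely illustrative - the midpoints of the ranges above, not any real fitting-software format, and `ratios_for` is just my made-up helper name:

```python
# Hypothetical per-channel compression scheme echoing the settings above:
# (upper_edge_hz, top_ratio, bottom_ratio)
CHANNEL_RATIOS = [
    (2000, 1.6, 1.6),    # "1.5-1.7:1 top and bottom, up to about 2k"
    (8000, 1.25, 1.45),  # above 2k: "1.2-1.3:1 top side, 1.4-1.5:1 bottom"
]

def ratios_for(freq_hz):
    """Look up the (top, bottom) compression ratios for a given frequency."""
    for upper_edge, top, bottom in CHANNEL_RATIOS:
        if freq_hz <= upper_edge:
            return top, bottom
    return 1.0, 1.0  # linear outside the defined bands

print(ratios_for(1000))  # (1.6, 1.6)
print(ratios_for(4000))  # (1.25, 1.45)
```

The design point being made is the gradient itself: gentler (closer to linear) compression up high where speech cues live, a touch more control down low where the room rumble is.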