
We're building hearing aids that work in noisy places (AudioFocus[1], YC S19). We use novel machine learning and microphone array design to help patients hear in loud restaurants, weddings, & family gatherings better than any other AI hearing aid.
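
To give a rough sense of what the array side does, here is a minimal delay-and-sum beamformer sketch in NumPy. To be clear, this is a textbook technique shown for illustration only, not our actual pipeline, and the geometry and parameter names are made up:

    import numpy as np

    # Textbook delay-and-sum beamformer (illustrative only, not our production code).
    # mic_signals:   (n_mics, n_samples) time-aligned recordings from the array
    # mic_positions: (n_mics, 3) microphone coordinates in metres
    # look_dir:      unit vector pointing toward the target talker (far-field assumption)
    def delay_and_sum(mic_signals, mic_positions, look_dir, fs, c=343.0):
        n_mics, n_samples = mic_signals.shape
        delays = mic_positions @ look_dir / c              # per-mic arrival offsets, in seconds
        freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)     # FFT bin frequencies, in Hz
        spectra = np.fft.rfft(mic_signals, axis=1)
        # Phase-shift each channel so sound from look_dir adds coherently,
        # while sound arriving from other directions partially cancels.
        steering = np.exp(2j * np.pi * freqs[None, :] * delays[:, None])
        aligned = spectra * steering
        return np.fft.irfft(aligned.mean(axis=0), n=n_samples)

With the handful of closely spaced mics that fit behind an ear, a fixed beamformer like this buys only a few dB of noise suppression on its own, which is why the machine learning side matters so much.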

It's a big deal because untreated hearing loss is associated with social isolation & depression, and while 37M people in the United States have hearing loss, only 8M use hearing aids. Difficulty hearing in noisy places is the biggest reason for the lack of adoption.

We just got our behind-the-ear (BTE) hardware prototype running and already have several excited patients. Listen to an audio recording from it here[2]. We're currently working on a pilot study with a professor in San Francisco.

If you, or someone you know, is interested in participating in the pilot study, let me know. And if you know interested investors, I'm happy to chat with them. I can be reached at shariq@audiofocus.io

[1] www.audiofocus.io

[2] https://www.youtube.com/watch?v=orU5Wx6_RfA&t=24s



Love this idea. I hoped my AirPods with Live Listen would do the same, but was disappointed that they sounded similar to your benchmark example, or worse.

I wonder if you all can use another layer in your ML stack to "fill out" the voices once you've isolated them. Your example leaves voices sounding very thin/hollow and even a bit garbled.
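
Concretely, I was imagining a small refinement stage that takes the magnitude spectrogram of the already-isolated voice and predicts a cleaned-up magnitude, reusing the original phase. Very rough PyTorch sketch, with the architecture, sizes, and names all made up:

    import torch
    import torch.nn as nn

    # Hypothetical second-stage "refiner" for an already-isolated voice.
    # Architecture, sizes, and names are illustrative, not any real system.
    class VoiceRefiner(nn.Module):
        def __init__(self, n_freq=257, hidden=256):
            super().__init__()
            self.rnn = nn.GRU(n_freq, hidden, num_layers=2, batch_first=True)
            self.head = nn.Sequential(nn.Linear(hidden, n_freq), nn.Softplus())

        def forward(self, mag):                  # mag: (batch, frames, n_freq)
            h, _ = self.rnn(mag)
            return self.head(h)                  # refined magnitudes, same shape

    def refine(stft):                            # complex STFT: (batch, frames, n_freq)
        model = VoiceRefiner()                   # in reality this would be a trained checkpoint
        refined_mag = model(stft.abs())
        return torch.polar(refined_mag, torch.angle(stft))   # reattach the original phase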


Do you have a Coral Edge TPU in your device?



