Hacker News

It's sort of a shame the industry is in the state it's in, because I personally know a couple of folks who have disabilities that make something like an Echo truly a godsend.

There are a lot of things that get much, much easier from an accessibility standpoint if someone can operate a device by voice, instead of having to maneuver next to a switch, panel, or button.

Getting next to the button can be much more difficult if it means getting into a wheelchair, fitting your prosthetic, or having to find help because it's positioned too high or in an area that's too narrow.

Additionally, you can say a single phrase and turn off all lights in your dwelling, regardless of room. Minor win for able-bodied folks who might take only a minute or two to walk the house, big win for someone less capable.

---

I'm technically capable enough to have removed Amazon/Google anyway, using Home Assistant in combination with WIS (https://github.com/toverainc/willow-inference-server) and Willow autocorrect (https://github.com/toverainc/willow-autocorrect).

That gets me most of what I cared about from an Echo without anything ever leaving my LAN. But the setup is a real PITA.

There is basically zero middleground in this industry between "Amazon/Google have a permanent microphone that listens to everything you do" and "You have to manually flash ESP32s and configure servers".



I had a parallel thought when a loved one was in the hospital. I noticed the nurses were so busy that small requests like an extra blanket were often forgotten entirely. How incredible would it be to have a voice system that would build a task list and capture other notable bits of information that the hospital staff could review on an integrated device (like a phone)?

Sadly, I don't see this ever happening due to the insane number of regulations you'd have to satisfy and the major players in this space only doing it to collect PII.
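To make the idea concrete, here's a minimal sketch of what the task-list core of such a system might look like, leaving the speech-to-text part aside. All names here (`TaskBoard`, `capture`, `pending`) are hypothetical, not any real hospital system's API:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class Request:
    """A single patient request, transcribed from voice."""
    room: str
    text: str
    created: datetime = field(default_factory=datetime.now)
    done: bool = False

class TaskBoard:
    """Collects transcribed requests for staff to review on an integrated device."""

    def __init__(self) -> None:
        self._requests: List[Request] = []

    def capture(self, room: str, transcript: str) -> Request:
        # In a real system, `transcript` would come from a speech-to-text engine.
        req = Request(room=room, text=transcript.strip())
        self._requests.append(req)
        return req

    def pending(self) -> List[Request]:
        # What the nurse's phone would show: everything not yet handled.
        return [r for r in self._requests if not r.done]

    def complete(self, req: Request) -> None:
        req.done = True

board = TaskBoard()
blanket = board.capture("214B", "Could I get an extra blanket?")
board.capture("214B", "What time is my scan tomorrow?")
board.complete(blanket)
print([r.text for r in board.pending()])  # → ['What time is my scan tomorrow?']
```

The hard parts this sketch skips (reliable far-field speech recognition, patient identity, and the HIPAA-style compliance burden mentioned above) are exactly why nobody ships it.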


I find hospitals to be somewhat clueless when it comes to simple things like what you’re describing.

When a relative of mine got hospitalized for a laryngectomy, post op she obviously couldn’t talk since her voice box was gone.

She was either expected to learn to speak with her oesophagus (doable, but many people fail and it takes months) or … remain mute.

That’s when I introduced her to the wonders of text-to-speech. Soon after, she was the star of the department, which had never considered this!


Good use case for Siri on an iPad in kiosk mode: a big button for each task for visibility at a distance, pressed by the nurse when done, with a notification sent to loved ones.


> zero middleground in this industry.. manually flash ESP32s and configure servers

Market opportunity for local "home electronics contractor" network.


Makes sense. As often happens, it's the most vulnerable who end up being taken advantage of.


Which mic/speaker do you use with WIS?


https://www.adafruit.com/product/5290 (Original - I don't believe it's made anymore)

and

https://www.adafruit.com/product/5835 (Updated version of the above)

---

Neither is particularly awesome as far as devices go, but they're cheap enough for what they are, and they're the targeted hardware for Willow (https://heywillow.io/hardware/).

You won't be playing music with them, and the audio pickup isn't as good as an Echo's. But they work, and my family and folks who enter my home don't get recorded automatically by Google/Amazon.

It's very, very similar hardware to what Home Assistant is trying to use (ESP32 in a box with some mics).



