
Computers use binary logic and circuits, but with neural networks you can, and pretty much always do, use floating point numbers. That's very analog.

There are around 2^62 doubles between 0 and 1 (each of the ~1022 exponent ranges below 1.0 holds 2^52 values). Surely that's enough to represent any analog signal you might care about.
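A quick way to check this count, as a sketch: for non-negative IEEE 754 doubles, the raw 64-bit pattern interpreted as an unsigned integer increases monotonically with the value, so the number of doubles in [0.0, 1.0) is just the bit pattern of 1.0.

```python
import struct

def bits(x: float) -> int:
    # Reinterpret a double's bytes as an unsigned 64-bit integer.
    return struct.unpack("<Q", struct.pack("<d", x))[0]

# Monotone bit patterns for non-negative doubles mean the count of
# representable values in [0.0, 1.0) equals bits(1.0) - bits(0.0).
count = bits(1.0) - bits(0.0)
print(count)               # 4607182418800017408
print(count.bit_length())  # 62 -> just under 2^62
```

That's about 4.6 × 10^18 distinct values, far more than the ~2^52 a single exponent range contributes.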


