
It's better than that. As an analog computer, both the inputs and outputs are _continuous_. So it's possible to get down to very small deltas that are only limited by the internal precision of the system itself, and the precision of measuring those inputs and outputs.

At the same time, precision is dictated by machining tolerances for the instruments in the calculation chain, as well as any mechanical forces in play at the time. Even temperature changes alter part dimensions, which introduces error. And then there's the accumulation of error across a deep enough mechanical "pipeline".
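That accumulation is easy to see in a toy model. Here's a minimal sketch (all names and the per-stage tolerance are hypothetical, not taken from any real instrument): each mechanical stage ideally passes its input through unchanged, but perturbs it by up to some machining tolerance, and the deviations compound as the chain gets deeper.

```python
import random

random.seed(42)

TOLERANCE = 0.001  # hypothetical per-stage machining error (±)

def stage(x):
    """One mechanical stage: ideally the identity, but each pass
    perturbs the value by up to ±TOLERANCE."""
    return x + random.uniform(-TOLERANCE, TOLERANCE)

def pipeline(x, depth):
    """Chain `depth` stages together, accumulating error."""
    for _ in range(depth):
        x = stage(x)
    return x

true_value = 1.0
for depth in (1, 10, 100):
    result = pipeline(true_value, depth)
    print(f"depth {depth:3d}: error {abs(result - true_value):.6f}")
```

With independent random perturbations the error tends to grow like the square root of the depth (a random walk), but the worst case is linear: after `depth` stages the error can be as large as `depth * TOLERANCE`.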

What really gets me is the tradeoff between analog and digital computers. Digital systems don't suffer precision errors from misshapen parts, but they trade that for quantization (digitization) error instead.


