Hacker News

I like this comment because I think it highlights the exact difference between AI optimists and AI cynics.

I think you'll find that AGI cynics do not agree at all that "engineering a 10x/100x version" of what we have and making it attempt "AGI algorithms 24/7 in an evolutionary setting" is a "safe ticket" to AGI.



I wouldn’t say I’m a cynic; I’d just ask how one could possibly know what a "safe ticket" is in this space. The logic you described is basically simple extrapolation, like in the xkcd wedding-dress comic. There’s no guarantee it gets you anywhere in finite time.



