Hacker News

In a way, they have not been proven correct so far (partly because of their actions, but not only because of them).


Pretty much exclusively due to their actions, plus some particularities of that specific technology which don't seem to apply to AI.


If you include, for example, US presidents (i.e., actual decision makers) in the concerned group, then fair enough. But it wasn't just concerned scientists and the public.


Uhh correct. Unsurprisingly though, many of the people with the deepest insight and farthest foresight were the people closest to the science. Many more were philosophers and political theorists, or “ivory tower know-nothings.”


Maybe. There were also scientists actively working on various issues of deterrence, including how to prevail and fight if things were to happen, and there were quite a few different schools of thought during the Cold War (the political science of deterrence was quite different from the physical science of weapons, too).

But the difference from AI is that nuclear weapons were then shown to exist. If the lowest critical mass had turned out to be a trillion tons, the initial worries would have been unfounded.


None of whom were people saying "there's no risk here just upside baby!"


People were on totally opposing sides of how to deal with the risk, not dissimilar to now (with the difference that the existential risk was/is actual, not hypothetical).


Sure, there are also some (allegedly credible) people opening their AI-optimist diatribes with statements of positive confidence like:

“Fortunately, I am here to bring the good news: AI will not destroy the world”

My issue is not with people who say “yes this is a serious question and we should navigate it thoughtfully.” My issue is with people who simply assert that we will get to a good outcome as an article of faith.


I just don't see the point in wasting too much effort on a hypothetical risk when there are actual risks (incl. those from AI). Granted, the hypothetical existential risk is far easier to discuss than actual existential risks are to deal with.

There is an endless list of hypothetical existential risks one could think of, so that is a direction to nowhere.


All risks are hypothetical

Many items on the endless list of hypothetical x-risks don't have big-picture forces acting on them in quite the same way, e.g. a roughly infinite economic upside from getting within a hair's breadth of realizing the risk.


No, some risks are known to exist, others just might exist. If you walk across a busy street without looking, there is a risk of being run over - nothing hypothetical about that risk. In contrast, I might fear the force of gravity suddenly disappearing, but that isn't an actual risk as far as we understand our reality.

Not sure where infinite economic upside comes from, how does that work?



