Not at all. But I think one's feelings on global warming & nukes can be influenced by previous exposure to eschatology. I was raised in American evangelicalism, which puts a heavy emphasis on end-of-the-world stuff. I left the church behind long ago, but the heavy diet of Revelation, etc. has left me with a nihilism I can't shake: that whatever humanity does is doomed to fail.
Of course, that isn't necessarily true. I know there's always a chance we somehow muddle through. Even a chance that we one day fix things. But, emotionally, I can't shake that feeling of inevitable apocalypse.
Weirdly enough, I feel completely neutral on AI. No doomerism on that subject. Maybe that comes from being old enough not to worry about how it's going to shake out.
It's obvious to me that humans will eventually go extinct. So what? That doesn't mean we should stop caring about humanity. People know they themselves are going to die and that doesn't stop them from caring about things or make them call themselves nihilists...
I don't think we're dealing with "concerned" citizens in this thread, but with people who presuppose the end result with religious certainty.
It's ok to be concerned about the direction AI will take society, but trying to project any change (including global warming or nuclear weapons) too far into the future will put you at extremes. We've seen this over and over throughout history. So far, we're still here. That isn't because we weren't concerned, but because we dealt with the problems in front of us a day at a time.
The people who, from Trinity (or before), were worried about global annihilation and scrambled to build systems to prevent it were correct. The people saying “it’s just another weapon” were incorrect.
It's kind of infuriating to see people put global thermonuclear conflict or a sudden change in atmospheric conditions (something that has caused 4 of the 5 biggest mass extinctions in the entire history of the planet) on the same pedestal as a really computationally intense text generator.
My worries about AI are more about the societal impact it will have. Yes, it's a fancy sentence generator; the problem is that you already have greedy bastards talking about replacing millions of people with that fancy sentence generator.
I truly think it's going to lead to a massive increase in economic inequality, and not in favor of the masses but in favor of the psychopathic C-suite like Altman and his ilk.
I'm personally least worried about short-term unemployment resulting from AI progress. Structural unemployment, and the poverty that follows, happens when a region loses an industry that was effectively its single employer and the people affected don't have the means to move elsewhere or change careers.
AI is going to replace jobs that can be done remotely from anywhere in the world. The people affected will (for the first time in history!) not mostly be the poorest and disenfranchised parts of society.
Therefore, as long as political power remains with countries' populations, the labor market transition will mostly be fine. The "maintaining political power in populations" part is what worries me personally. AI enables mass surveillance and personalized propaganda. Let's see how we deal with those when they appear, which will be sudden by history's standards... The printing press (the Thirty Years' War, witch-hunts) and radio (Hitler, the Rwandan genocide) might be slow, small innovations compared to what might be to come.
I don't think existing media channels will continue to be an effective way to disseminate information. The noise destroys the usefulness of it. I think people will stop coming to platforms for news and entertainment as they begin to distrust them.
The surveillance prospect, however, is frightening.
I think people aren't thinking about these things in the aggregate enough. In the long term, this does a lot of damage to existing communication infrastructure. Productivity alone isn't necessarily a virtue.
I've recently switched to a dumb phone. Why keep an internet browsing device in my pocket if the internet's largest players are designing services that will turn a lot of its output into noise?
I don't know if I'll stick with the change, but so far I'm having fun with the experience.
The Israel/Gaza war is a large factor - I don't know what to believe when I read about it online. I can be slower and more careful about what I read and consume from my desktop, from trusted sources. I'm insulated from viral images sent hastily to me via social media, from thumbnails of Twitter threads by people who don't care whether they're right or wrong, from texts containing links with juicy headlines that I have no hope of critically examining while briefly checking my phone in traffic.
This is all infinitely worse in a world where content can be generated by multi-modal LLMs.
I have no way to know if any of the horrific images/videos I've already seen through the outlets I've identified were real or AI generated. I'll never know, but it's too important to leave to chance. For that reason I'm trying something new to set myself up for success. I'm still informed, but my information intake is deliberately slowed. I think that others may follow in time, in various ways.
It’s kind of infuriating to see people put trench warfare or mustard gas on the same pedestal as a tiny reaction that couldn’t even light a lightbulb.
There are different sets of concerns for the current crop of "really computationally intense text generators" on one hand, and for the overall trajectory of AI and the field's governance track record on the other.
...you do realize that a year or two into the earliest investigations into nuclear reactions, what you would have measured was less energy emission than a match being lit, right?
The question is, "Can you create a chain reaction that grows?", and the answer is unclear right now with AI, but it's hard to say with any confidence that the answer is "no". Most experts five years ago would have confidently declared that passing the Turing test was decades to centuries away, if it ever happened, but it turned out to just require beefing up an architecture that was already around and spending some serious cash. I have similarly low faith that the experts today have a good sense that e.g. you can't train an LLM to do meaningful LLM research. Once that's possible, the sky is the limit, and there's really no predicting what these systems could or could not do.
It seems like a very flawed line of reasoning to compare very early days nuclear science to an AI system that has already scaled up substantially.
Regarding computing technology, I think the positive feedback you're describing happened with chip design and VLSI, e.g. better computers help design the next generation of chips or help lead to materials breakthroughs. I'm willing to believe LLMs have a macro effect on knowledge work similar to the way search engines did, but as you said, it remains to be seen whether the models can feed back into their own development. From what I can tell, GPU speed and efficiency, along with better data sets, are the most important inputs for these things. Maybe synthetic data works out, who knows.
The people who thought Trinity was “scaled up” were also wrong.
The only reason we stopped making larger nuclear weapons is that they were way, way, way beyond useful for anything. There's no reason to believe an upper bound on intelligence exists in the physical universe (especially given how tiny and energy-efficient the human brain is, we're definitely nowhere near it), and there's no reason to believe an upper bound exists on the usefulness of marginally more intelligence. Especially when you're competing for resources with other nearly-as-intelligent superintelligences.
The problem is we have stretched the stigmatized concept of "cult" to cover more or less any belief system we disagree with. Everyone has a belief system and, in my mind, is part of a kind of cult. The more anyone denies this about themselves, the more cultlike (in the pejorative sense) their behavior tends to be.
Are nuclear weapons and their effects only hypothesized to exist? You could still create cults around them, for example, by claiming nuclear war is imminent or needed or some other end-of-times view.
There were people who were concerned about global annihilation from pretty much the moment the atom was first split. Those people were correct in their concerns and they were correct to act on those concerns.
If you put, for example, US presidents (i.e., actual decision makers) in the concerned group, then fair enough. But it wasn't just concerned scientists and the public.
Uhh correct. Unsurprisingly though, many of the people with the deepest insight and farthest foresight were the people closest to the science. Many more were philosophers and political theorists, or “ivory tower know-nothings.”
Maybe. There were also scientists working actively on various issues of deterrence, including how to fight and prevail if things were to happen - and there were quite a few different schools of thought during the Cold War (the political science of deterrence was quite different from the physical science of the weapons, too).
But the difference to AI is that nuclear weapons were then shown to exist. If the lowest critical mass had turned out to be a trillion tons, the initial worries would have been unfounded.
People were on totally opposing sides on how to deal with the risk, not dissimilar to now (with difference that the existential risk was/is actual, not hypothetical).
Sure, there are also some (allegedly credible) people opening their AI-optimist diatribes with statements of positive confidence like:
“Fortunately, I am here to bring the good news: AI will not destroy the world”
My issue is not with people who say “yes this is a serious question and we should navigate it thoughtfully.” My issue is with people who simply assert that we will get to a good outcome as an article of faith.
I just don't see the point in wasting too much effort on a hypothetical risk when there are actual risks (incl. those from AI). Granted, the hypothetical existential risk is far easier to discuss etc. than to deal with actual existential risks.
There is an endless list of hypothetical existential risks one could think of, so that is a direction to nowhere.
Many items on the endless list of hypothetical x-risks don't have big-picture forces acting on them in quite the same way, e.g. a roughly infinite economic upside to getting within a hair's breadth of realizing the risk.
No, some risks are known to exist, others just might exist. If you walk across a busy street without looking, there is a risk of being run over - nothing hypothetical about that risk. In contrast, I might fear the force of gravity suddenly disappearing, but that isn't an actual risk as far as we understand our reality.
Not sure where infinite economic upside comes from, how does that work?