> What happens when there’s software you think should exist, and you no longer need to hire a bunch of people at $150k-$250k per year to build it?
What happens when 200 out-of-work former software engineers take a look at your software and use LLMs to quickly build their own version, each undercutting everyone else's prices in a race to the bottom?
I think what I’m saying is that there’s a lot of software that doesn’t get built at all because the cost of serving a particular niche market is still too high, and that AI may put some of those markets within reach.
So, those software engineers may be able to move sideways instead of competing to build the same software.
> Basically Yudkowsky invented AI doom and everyone learned it from him. He wrote an entire book on this topic called If Anyone Builds It, Everyone Dies. (You could argue Vinge invented it but I don't know if he intended it seriously.)
Nick Bostrom (who wrote the paper this thread is about) published "Superintelligence: Paths, Dangers, Strategies" back in 2014, over 10 years before "If Anyone Builds It, Everyone Dies" was released, and the possibility of AI doom was a major factor in that book.
I'm sure people talked about "AI doom" even before then, but a lot of the concerns people have about AI alignment (and the reasons why AI might kill us all, not because it's evil, but because not killing us is a lower priority than other tasks it may want to accomplish) come from "Superintelligence". Google "The Paperclip Maximizer" to get the gist of his scenario.
"Superintelligence" just flew a bit more under the public zeigeist radar than "If Anyone Builds It, Everyone Dies" did because back when it was published the idea that we would see anything remotely like AGI in our lifetimes seemed very remote, whereas now it is a bit less so.
Though tbh I'm far more worried about the societal impacts of large scale job displacement across so many professional industries at the same time.
I think it is likely to be very, very ugly for society in the near term. Not because the problems are unsolvable, but because everyone is choosing to ignore the threat of them.
And I realize a lot of people will handwave my concerns away with stories of Luddites and Jevons paradox, but we've never had a tidal wave this big hit all at once, and I think the scale (combined with the speed of change) fundamentally changes things this time.
I stopped worrying. Western societies have about 30 to 40% of the people doing knowledge work, which contributes to the economy that employs the other 60%.
If that 40% is automated away in one go, there's no economy as we know it anymore. Either the disruption acts like a negative void coefficient and moderates itself into something sustainable, or the whole thing blows up.
Haven't read that book, but agree that if anyone thinks the workers are likely to capture the value of this productivity shift, they haven't been paying attention to reality.
Though at the same time I also think a lot of the CEO-types (at least in the pure software world) who believe they are going to capture the value of this productivity shift are in for a rude awakening, because if AI doesn't stall out, it's only a matter of time between when their engineers become replaceable and when their company doesn't need to exist at all anymore.
On top of this, they're going to have mandatory bed position assignments. Just like you currently can't choose which desk you're going to sit at, and have to put up with the most annoying person on the team as your deskmate, in the near future you're going to have to cuddle with him/her at night too, whether you like it or not, and regardless of his/her bad hygiene, just because your manager decided to stick you two together.
> in the near future you're going to have to cuddle with him/her at night too, whether you like it or not
A solid solution to reduce heating costs. Maybe one can go a step further and remove the bed though, a large mattress (or let's say rubber mat) should be enough.
This has already happened. During the industrial revolution, sharing beds in shifts was common. You just rediscovered the reason why worker protection laws set maximum working times and forbid employers from making demands outside of working hours.
Don't worry, the employers will make sure these worker protection laws are all rescinded, so we can go back to workers having to share beds. The workers are happily voting for this, because they believe regulations are bad.
A commonly cited use case of LLMs is scheduling travel, so being able to pretend it's somebody somewhere else is for sure important to incentivize going somewhere!
> It gives me hope that Trump will replace the top generals and a few layers down with yes-men who will spend the military budget on coke and then the US will be less of a threat to the rest of the world.
I realize this is kind of a joke, but...
The US will continue to have the most powerful military in history for a very long time, even with a highly incompetent top layer. It will just have fewer people with the wisdom and power to push back on the president's worst impulses.
Unfortunately(?) there's not enough coke in the world to put much of a dent in our current military spending (which they hope to increase even further, to 1.5 trillion dollars in 2027). And if the price of coke ever did become a problem, well, the US now believes it reserves the right to the entire western hemisphere, which includes Colombia...
On a more serious note there is also likely to be a rapid burst of nuclear proliferation across the globe as everyone else adjusts to this new reality sans the traditional post-WW II world order.
On the current Trump path the world is going to get far more dangerous and chaotic, not less.
What's really fun is that conventional weapons can protect you from a crazy aggressor if you're strong enough, but nuclear weapons may not. They only act as a deterrent, so they require your enemy to believe you'll use them, believe that they can't destroy all of them before you use them, and understand the horrible consequences of retaliation.
I get the impression that Trump is pretty negative on nuclear weapons and I don't think he'd do something that could provoke nuclear retaliation. But I doubt he'll be our last mad king. I think the odds are pretty high of at least a small nuclear war within my lifetime. Even if the US keeps it together, proliferation means much higher odds of some idiot leader somewhere pressing the Button.
But I'm pretty certain that outcome also wouldn't be a net positive for the rest of the world in the short or medium term. Very little of the rest of the world is insulated from the US economy.
Open, public non-academic prediction markets basically exist to be manipulated by people with insider knowledge.
Filter out all the noise of people making random-ass guesses about what will happen in the future and focus on people making big bets late in the game. That's your important "prediction".
See: Anonymous person who made $400,000 betting on Maduro being out of office, etc.
I'd be surprised if there weren't already people running HFT-like setups to look for these anomalously large late stage trades to piggyback their own bets on the insider information.
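Nobody publishes an "insider" flag of course, but the heuristic above is mechanical enough to sketch. Here's a minimal Python illustration (the Trade fields, thresholds, and function name are all made up for the example): flag trades that are both anomalously large relative to a market's trade-size history and placed near resolution.

    from dataclasses import dataclass
    from statistics import mean, stdev

    @dataclass
    class Trade:
        market_id: str
        size_usd: float   # notional size of the bet
        t: float          # trade time as a fraction of the market's lifetime (0.0-1.0)

    def flag_late_whales(trades, late_cutoff=0.9, z_threshold=3.0, min_history=20):
        # The "big bets late in the game" heuristic: a trade is suspicious if
        # it lands near resolution AND is a size outlier for that market.
        history = {}   # market_id -> trade sizes seen so far
        flagged = []
        for tr in trades:
            sizes = history.setdefault(tr.market_id, [])
            if len(sizes) >= min_history:
                mu, sigma = mean(sizes), stdev(sizes)
                z = (tr.size_usd - mu) / sigma if sigma > 0 else 0.0
                if tr.t >= late_cutoff and z >= z_threshold:
                    flagged.append((tr, z))
            sizes.append(tr.size_usd)
        return flagged

    # A lone $400k bet at 95% of a market's life stands out against
    # a history of small, varied trades:
    trades = [Trade("maduro-out", 40.0 + (i % 30), i / 100) for i in range(60)]
    trades.append(Trade("maduro-out", 400_000.0, 0.95))
    print(flag_late_whales(trades))

In practice you'd want order-book depth and price impact too, but even a crude filter like this would have lit up on that $400k Maduro bet.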
Off on a tangent here but I'd love for anyone to seriously explain how they believe the "AI race" is economically winnable in any meaningful way.
Like, what is the believed inflection point that changes us from the current situation (where all of the state-of-the-art models are roughly equal if you squint, and the open models are only like one release cycle behind) to one where someone achieves a clear advantage that won't be reproduced by everyone else in the "race" virtually immediately?
I _think_ the idea is that the first one to hit self improving AGI will, in a short period of time, pull _so_ far ahead that competition will quickly die out, no longer having any chance to compete economically.
At the same time, it'd give the country controlling it so much economic, political and military power that it becomes impossible to challenge.
I find that all to be a bit of a stretch, but I think that's roughly what people talking about "the AI race" have in mind.
They ultimately want to own everyone's business processes, is my guess. You can only jack up the subscription prices on coding models and chatbots by so much, as everyone has already noted... but if OpenAI runs your "smart" CRM and ERP flows, they can really tighten the screws.
If you have the greatest coding agent under your thumb, eventually you orient it toward eating everything else instead of letting everybody else use your agent to build software and make money. Go forward ten years and it's highly likely that GPT, Gemini, and maybe Claude will have consumed a very large amount of the software ecosystem. Why should MS Office exist at all as a separate piece of software? The various pieces of Office will be trivial for the GPT (etc.) of ten years out to fully recreate and maintain internally for OpenAI. There's no scenario where they don't do what platforms always do: eat the ecosystem, anything they can. If a platform can consume a thing that touches it, it will.
Office? Dead. Box? Dead. Dropbox? Dead. And so on. They'll move on anything that touches users (from productivity software to storage). You're not going to pay $20-$30 for GPT and then pay for Dropbox too; OpenAI will just do an Amazon Prime maneuver and stack more onto what you get to try to kill everyone else.
Google of course has a huge lead on this move already with their various prominent apps.
Dropbox is actually a great example of why this isn't likely to happen. Deeper-pocketed competition with tons of cloud storage and the ability to build easy upload workflows (including directly into software with a massive install base) exists, and showed an active interest in competing with them. Still doing OK.
Office's moat is much bigger (and its competition is already free). "New vibe-coded features every week" isn't an obvious reason for Office users to switch away from the platform their financial models and all their clients rely on to a new upstart software suite.
> Off on a tangent here but I'd love for anyone to seriously explain how they believe the "AI race" is economically winnable in any meaningful way.
Because the first company to have a fully functioning AGI will most likely be the most valuable in the world. So it is worth all the effort to be the first.
> Because the first company to have a fully functioning AGI will most likely be the most valuable in the world.
This may be what they are going for, but there are two effectively religious beliefs with this line of thinking, IMO.
The first is that LLMs lead to AGI.
The second is that, even if the first turned out to be true, they wouldn't all stumble into AGI at the same time. Given how relatively lockstep all of the models have been for the past couple of years, everyone arriving together seems far more likely to me than any single company having a breakthrough the others don't immediately reproduce.