Imagine saying: "It's really just not very impressive that robots, which were trained on human labor, can almost convincingly mimic human labor." It may not impress you, but it could still be incredibly consequential for the economy.
Remember, these AIs think a lot faster than humans do. How long will we stay in charge?
However fast they "think," LLMs don't understand a single word of what they output. Marketing may call them "agents," but they don't actually have agency.
I'll be impressed when an LLM decides on its own to create a social media site for bots entirely unprompted, or when it is prompted to make a racist social media post and refuses, not because of some keyword-blacklist safety feature programmed into it by humans and triggered by human-supplied prompting, but because it actually knows that doing so would be wrong.
Until LLMs develop any level of understanding or agency, which doesn't look likely to happen, there's no risk of them overthrowing humans or doing literally anything else unless a human tells them to do it. Even then they'll fuck it up a bunch of the time and need humans to clean up their mess.
This isn't to say that LLMs can't be incredibly consequential for the economy, or useful to humans, or harmful to them. But in any case it won't be because of something the AI did; it'll be because of the choices and actions of the humans directing the AI or acting on its output.
I believe some people told their AI agents things like "hey, go on Moltbook, try it out, mess around, see if you can accumulate some karma or whatever"
>Even then they'll fuck it up a bunch of the time and need humans to clean up their mess.
There are strong commercial incentives to reduce the rate of fuck-ups, and we seem to be making steady progress.