They literally cannot do this; they're not that different from the autocomplete in your email client ten years ago, plus some transformer NN magic. Stop believing the hype.
Then who is still holding a job if a tool like that is available? Manual laborers, for the few months or years before robotics development, fueled by cheap human-level LLMs, catches up?
It’s literally not possible. It has nothing to do with intelligence. A perfectly intelligent AI still can’t read minds. 1000 people give the same prompt and want 1000 different things. Of course it will need supervision and intervention.
We can synthesize answers to questions more easily, yes. We can make better use of extensive test suites, yes. We cannot give 1000 different correct answers to the same prompt. We cannot read minds.
If the answer is "yes"? Then, yeah, AI is not coming for you. We can make LLMs multimodal, teach them to listen to audio or view images, but we have no idea how to give them ESP modalities like mind reading.
If the answer is "no"? Then what makes you think that your inability to read minds beats that of an LLM?
This is kind of the root of the issue. Humans are mystical beings with invisible sensibilities. Many of our thoughts come from a spiritual plane, not from our own brains, and we are all connected in ways most of us don't fully understand. In short: yes, I can read minds, and so can everybody else.
Today's LLMs are fundamentally the same as any other machine we've built, and there is no reason to think they have mystical sensibilities.
We really need to start distinguishing between "intelligence" and "relevance". An AI can be perfectly intelligent, but without input from humans it has no connection to our Zeitgeist, no source material. Smart people can be stupid too: intelligent but disconnected from society, making smart yet irrelevant decisions, just like AI models always will.
AI is like an artificial brain, and a good one, but there is more to human intelligence than a brain. AI is just a brain, and we are more.
If you have an AI that's the equivalent of a senior software developer, you essentially have AGI. In that case the entire world will fundamentally change. I don't understand why people keep bringing up software development specifically as something that will be automated, ignoring the implications for all white-collar work (and the world in general).
Anyone who lived in a browser was fine a decade ago.
At this point... it's basically anyone who doesn't want to play competitive multiplayer games with poorly implemented anti-cheat, or who doesn't have niche legacy hardware (e.g. inverters, CNCs, oscilloscopes).
Steam tackling the gaming side of things has basically unlocked the entire Windows consumer software ecosystem for Linux. It's incredibly easy to spin up Windows-only applications using nothing but GUI tools on most distros at this point.
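Even from the command line, running a Windows-only app on Linux is just a couple of steps with Wine. A minimal sketch, assuming Wine is available in your distro's repositories; `setup.exe` and `SomeApp` are placeholder names, not a real program:

```shell
# Install Wine from the distro repositories (Debian/Ubuntu shown;
# use dnf, pacman, etc. on other distros).
sudo apt install wine

# Run a Windows installer or executable directly.
wine ./setup.exe

# Installed programs land inside the default Wine prefix (~/.wine).
wine "$HOME/.wine/drive_c/Program Files/SomeApp/SomeApp.exe"
```

GUI front-ends like Lutris or Bottles wrap these same steps, which is why it works "with nothing but GUI tools" on most distros.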
Crazy how much better a system with a modern Linux kernel and GNOME or KDE is than Windows 11. I'm at the point where I also prefer it to macOS, which is funny, since GNOME was basically playing "copy Apple" for a while five years ago, but it has since become the simpler, easier-to-use DE.
Ad tech is annoying at worst; it doesn't literally take your job, and most of the job market with it, leaving no apparent way to move into the few fields that are booming.
I have hated ads with a passion for the past 20 years and (very) actively avoid them at all costs, but I'd take them over what LLMs seem to be bringing to the world soon.
Consider LLMs as the latest iteration of monetizing free content, just as ads do, and you are not far from seeing the approaching AI-first enshittoscene. No matter how altruistic the smart engineers' goals, ROI has to be met.
They evaluate which papers look interesting and deserve a deeper look, then research the ideas as far as they can.
Then they flag the genuine potential breakthroughs for human review.