The idea is that cheap, readily available, and upgradeable intelligence is going to massively increase our purchasing power, expanding what everyone can buy for the same cost.
If artificial doctors cost cents per hour, then you can see how that changes our behavior and standard of living.
But from the other direction, a wage decrease is incoming at the same time, driven by increased competition. What happens when these two forces clash? Will cheap labour allow us to buy anything for pennies, or will it just make us unable to earn a single penny?
In my view, labour will fundamentally shift, with great pain and personal tragedies, to the areas that AI cannot replace (because no one wants to watch robots play chess): sports, entertainment and showmanship, handcrafted goods, the arts, the attention-based economy, self-advertisement, digital prostitution in a very broad sense.
However, before we get there, there will be a great deal of strife and turmoil that could plunge the world into dark ages, for a while at least. It is unlikely that our somewhat politically rigid society will adapt without a great deal of pain. Additionally, I am not sure a hypothetical future attention-based society would be a utopia. You might have to mount cameras in your house so other people can watch you at all times, just to have any money at all. We will probably forever need to sell something to someone, and I am unsettled by the question of what we can sell if we cannot sell our hard work.
Someone who sees the road ahead should be making preparations at the government level for this shock, but it will come too fast, and with people at the steering wheel who don't exactly care.
LLMs cannot possess consciousness for three reasons: they execute as a sequence of Transformer blocks with extremely limited information exchange, these blocks are simple feed-forward networks with no recurrent connections, and the computer hardware follows a modular design.
Shardlow & Przybyła, "Deanthropomorphising NLP: Can a Language Model Be Conscious?" (PLOS One, 2024)
Nature: "There is no such thing as conscious artificial intelligence" (2025)
They argue that the association between consciousness and LLMs is deeply flawed, and that mathematical algorithms implemented on graphics cards cannot become conscious because they lack a complex biological substrate. They also introduce the useful concept of "semantic pareidolia" - we pattern-match consciousness onto things that merely talk convincingly.
They make a strong argument and I think they are correct. But really, these are two different things, as I said originally.
You are making a strawman about sentience when I was talking about the economic impact of abundant intelligence. I should just ignore it, but I was curious, and yet you have nothing valuable to say aside from common misconceptions conflating the two. Thanks for trolling, I guess.
I get that I was being poetically vague (but not subtle), but it's really silly to imagine an argument from me and then, when you're told that wasn't what I was alluding to, tell me I'm trolling.
We could also literally have Star Trek. Think of all the scientific discoveries we could make if we had armies of scientists the size of our labor force.
But we will have to (painfully) shed our current hierarchies before that comes to pass.
Maybe so but humans have this strange primal need to hoard resources.
Probably a remnant from prehistoric times when it was a matter of life and death.
Will we ever be able to overcome this basic instinct that made capitalism such an unstoppable force? Will this ancient PTSD ever be cured?
I find the insinuation that mental illness is a fundamental part of the human experience to be deeply revolting. There is no excuse for hoarders and rapists.
Man, if only there were a single episode of Star Trek that covered this exact topic and resolved that no, actually, slavery wasn't any different for artificial life.
> The idea is that cheap, readily available, and upgradeable intelligence is going to massively increase our purchasing power, expanding what everyone can buy for the same cost.
Seriously? You really don’t see who wins from this and who doesn’t?
> If artificial doctors cost cents per hour, then you can see how that changes our behavior and standard of living.
Yes, hundreds of thousands lose their jobs and a couple of neurosurgeons become multimillionaires.
Okay, I see from the rest of the comment that we agree on where this goes.
Just a year ago, Elon Musk was gleefully destroying the US government agency that provides food and medicine for many of the poorest, most desperate people on earth. He was literally tweeting about missing out on great parties to put USAID into the "wood chipper".
The tech overlords don't even want to spend a minuscule percentage of the federal budget helping starving people, even when it benefits the US. They are not going to give us a post-scarcity society.
It's very addictive indeed. After I subscribed to Claude, I've been in a sort of hypomanic state where I just want to do stuff constantly. It essentially cured my ADHD. My ability to execute things and bring ideas to fruition has skyrocketed. It feels good, but I'm genuinely afraid I'll crash and burn once they rug-pull the subscriptions.
And I'm being very cautious: I'm not vibecoding entire startups from scratch, and I'm manually reviewing and editing everything the AI outputs. I still got completely hooked on building things with Claude.
Nobody actually cares "what it takes to do it"; that's not our problem. You're not entitled to know even a single bit of information about us without our consent. Try innovating a way to do it without spying on people.
> I usually find it easier to take their branch, do all of that work myself (attributing authorship to commits whenever appropriate), push it to the master branch and close the PR than puppeteering someone halfway across the globe through GitHub comments into doing all of that for me.
My most negative experiences with free and open source contributions have been exactly like that. The one maintainer who engaged with me until my patches got into master gave me the best experience I've had contributing to open source software to this day.
Pretty sad that people see engaging with other developers as "puppeteering someone halfway across the globe through GitHub comments"...
Ordinarily, I would agree. I don't do that with my other GitHub repositories nor at work.
But ghidra-delinker-extension's domain is unusually exacting. Mistakes or shortcuts there cause undefined behavior of the exotic kind that can be exceedingly difficult to troubleshoot. The only way I've found to keep things from imploding is to hold the code to quality standards that a drive-by contributor (who usually only cares about a particular ISA/object-file combination on a given platform) can't be expected to match.
To be clear, I don't go silent when I receive a PR for this project. In those cases, I do engage with the contributor and give high-level feedback. But my spare time is finite and so is the other person's willingness to jump through all of my hoops.
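For what it's worth, the workflow in the quoted comment maps onto plain git: take the contributor's branch, redo the work yourself, and use `--author` so the contributor is still credited. A minimal sketch, with hypothetical names and emails:

```shell
# Sketch: commit reworked changes as the maintainer while attributing
# authorship to the original contributor (all names here are made up).
set -e
tmp=$(mktemp -d)
git init -q "$tmp/repo"
cd "$tmp/repo"
git config user.name  "Maintainer"
git config user.email "maintainer@example.com"

# ...rework the contributor's patch by hand...
echo 'reworked fix' > fix.txt
git add fix.txt

# Commit as yourself, but record the contributor as the author:
git commit -q -m "Rework contributor's fix" \
  --author="Contributor <contributor@example.com>"

# Git keeps both roles: author (them) and committer (you).
git log -1 --format='author: %an, committer: %cn'
# prints: author: Contributor, committer: Maintainer
```

Hosting platforms that attribute commits by author email would then still show the contributor's name on the change after it lands on master.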
Seriously doubt any country on Earth is going to attack Russia and risk global thermonuclear annihilation over anything other than a direct attack on their own lands.
I think if Russia drops a nuke on Ukraine then even China will desert them.
India for sure will stop trading with Russia, lest it be seen to condone such insanity (India has a nuclear-armed rival next door; it will not want Pakistan getting any ideas).
I think this is the only reason Russia did not nuke Ukraine.
That's the wrong question. Nobody cares how the elites in the government feel. They exist to serve us. That is the only reason they have any power at all.
The right question is: how can we make it mathematically impossible for the government to oppress us in any way, regardless of how much they seethe and rage about it? Their happiness does not matter. In fact their anger is probably a good sign that the technology is working as intended. The angrier they get, the freer you are.
The angrier they get, the higher the chance that they make your technical solution illegal. What, you're using technology that might endanger children? All the concerned parents are suddenly your enemies, democratically speaking. What, your technology can be used for money laundering, and you're using it anyway? You can now anonymously pay only darknet vendors and other shady businesses. Have your anonymity, but be cut off from the rest of "good" society.
Given the original motivation for inventing crypto, I am surprised it wasn't outlawed a long time ago. No government likes to be overthrown...
It's just the usual politico-technological arms race. Governments make laws, people make technology that works around the laws in such a way that the government can do nothing about it.
Governments must continuously increase their tyranny in order to maintain the exact same level of control they used to have before. There are two possible outcomes: a free and uncontrollable population emancipated by ubiquitous subversive technology, or a totalitarian government so oppressive that even your concerned parents feel the weight of its boot on their faces.
It's my sincere hope that we'll discover the true limits of the government's tyranny in the process. The harsh truth is people need to accept the existence of some amount of crime if they want to live with basic human dignity. It's just like how the banking industry accepts some degree of fraud as a business expense. They could stamp it out, but the security requirements would add so much friction to everyday transactions nobody would buy anything.