The calculator analogy comes up very often, and it’s a good one because it also illustrates where AI diverges.
The other analogy is taking a forklift to the gym. Sure you lift weights, but you don’t really do any exercise to develop your own muscles.
AI automates a significant chunk of the exercises. So you are left with people who didn’t build any mental muscles.
This would be bad enough, but it’s worse because AI disproportionately benefits experts who have built mental reflexes/taste and can judge/verify output with minimal information.
We’ve had economies where rich people effectively lived in one economy and everyone else lived in another. Class mobility was poor.
Take the current K-shaped economy, where the majority of retail spending comes from the rich, not from the majority of people.
Have humans ever had a grand plan for society and succeeded in carrying it out such that the desired goal was reached? I'm not talking about something like the Marshall Plan, which was a reaction to previous failed attempts and was created during an urgent need for such action, but a plan where a group of people figured out the best course of action, despite humanity existing in a different paradigm, and then enacted that plan and saw it come to fruition as they predicted?
I never got to a full plan, but over the decades I had many creative ideas that try to be good but might not be. The interesting pattern is that few people are curious enough to think (or help think) about it. The best most can do is find flaws or state that it will never happen.
Never is a long time and grand plans need to be executed for their real flaws to appear. The kind of flaws that are never what we thought in advance, usually much worse.
We do need to think about it until the end of humanity. We've built countless societies/civilizations and none of them survived the test of time. It's our ultimate puzzle.
There is probably [say] someone at MS who knows how an OS should work but replacing parts in a running machine isn't easy. Burning everything to the ground isn't ideal either but it does make building more attractive.
The point I was making was more that trying to proactively shape society for some goal will always miss something critical and fail in a spectacular way. Look at Communism or the neocons of Bush Jr era with the Iraq invasion. It sounds smart on paper but when you execute it then it falls apart with tremendous human cost, and the people who are doing it refuse to acknowledge it until they are physically removed from the levers of power.
Now that I think about it though, it has more to do with the inflexibility of the plan than with having a plan at all. If you are working off of an ideological commitment, rather than setting an end goal with a fuzzy time frame and a loose path to get there, that's when you land in trouble.
Adaptation is good, but it can turn reactionary, if not populist. You also want to limit experimentation so you can measure results, like changing a few lines of code versus many and seeing how performance shifts.
There was a recent Stanford study which showed that AI enthusiasts and experts on one hand, and normies on the other, had very different sentiments when it came to AI.
I think most people are going to say they don't want it. I mean, why would anyone want a tool that can screw up their bank account? What benefit does it gain them?
There are lots of cases of great, highly useful LLM tools, but the moment they scale up you get slammed by the risks that stick out all along the long tail of outcomes.
I agree, in general we are going to find that ultimately most employee end users don't want it. Assuming it actually makes you more productive. I mean, who the hell wants to be 10X more productive without a commensurate 10X compensation increase? You're just giving away that value to your employer.
On the other hand, entrepreneurs and managers are going to want it for their employees (and force it on them) for the above reason.
I want. If I get 10X more productive, I can unilaterally increase my compensation 10X by doing my stuff in 1 unit of time instead of 10 it took, and splitting the remaining 9 units of time into, say, 4 units of time doing more work, securing my position and setting myself up for promotion, and 5 units of time doing whatever the fuck I want. Not all compensation shows up in a bank account - working less, or under less stress, are also valuable.
Of course, such a situation is only temporary - if I can suddenly be 10X as productive, then so can everyone else, and then the baseline shifts so 10X is the new 1X.
You want it, but then you closed by explaining exactly why you shouldn't want it. Plus, the new baseline isn't neutral (as in, everyone is the same again). If humans can now do 10x the work as before, the employer doesn't need the same number of humans to carry out its work. So the new baseline is actually "let's keep 1 employee and fire the other 9", unless the business can find a way to suddenly expand 10x so that it needs 10x as much work done.
> So the new baseline is actually "let's keep 1 employee and fire the other 9", unless the business can find a way to suddenly expand 10x so that it needs 10x as much work done.
If they have any surplus of money (or loans) they'll try, so those 9 employees may end up becoming team leads or middle management, trying to start new initiatives to get the 10x expansion (and 100x improvement).
The market isn't anywhere near efficient enough to directly translate productivity improvements into labor reductions. Thankfully, because everything that's nice and hopeful and human lives within the market inefficiency; a fully efficient market would be a hell worse than any writer or preacher ever imagined.
lol that has nothing to do with market efficiency.
I’ve seen a number of your posts where you talk about topics you clearly are not all that well versed in, with such confidence when you’re plain wrong.
Of course it does have to do with market efficiency, of which the inertia and surplus within companies (especially large ones) is a part.
> I’ve seen a number of your posts where you talk about topics you clearly are not all that well versed in, with such confidence when you’re plain wrong.
I'm sure it's true. However, since you brought it up, can you be more specific and name three?
Yes, but in the long run, the market expects growth and innovation, not just doing the same thing with fewer workers. Especially when every other company can just buy the exact same advantage for the same price.
Your first paragraph is so short sighted that its message didn't even make it beyond the next one. It's a race to the bottom and your "doing whatever the fuck I want" will obviously never materialize.
The typical work week today is 40 hours. Just like it was 80 years ago. The typical worker is dramatically more productive than 80 years ago yet "doing whatever the fuck I want" time has not increased. Why would it? Employers don't need to pay such that 20 hour work weeks give you the same income. Because everybody around you is ok with working 40 hours.
This won't be different with AI, no matter if the overall effect is 1.1x or 10x or 100x productivity. Because it's not a technological problem but a sociological one.
Good point. My rant assumed that “10x productivity” meant 10x output in 1x time, rather than 1x output in 0.1x time. Only one of those is actually objectionable.
> I mean, who the hell wants to be 10X more productive without a commensurate 10X compensation increase? You're just giving away that value to your employer.
Those are productivity increases that got our standard of living to where it is. Fewer people doing the same amount of work has, historically speaking, freed people from their current job, allowing them to work on something else.
It's like that analogy of the horse: they used to be farm animals. Now fewer of them are 'employed', but the jobs are much nicer. I'm not sure the same will be true for us this time around, though, as the new jobs being created have increasingly been highly skilled, which means the majority can't apply.
If everyone becomes 10x more productive, it won’t mean the company’s cash flow 10x’s. Where value is loose there is competition, so in theory everyone should win. Unless nobody else can compete to capture that loose 10x value, in which case congratulations, you are now a unicorn.
Of course in reality in the short term what happens is companies lay off people to increase margins. Times will be tough for workers, and equity keeps gravitating towards those who already had it.
>Assuming it actually makes you more productive. I mean, who the hell wants to be 10X more productive without a commensurate 10X compensation increase?
Given sane working arrangements, or at minimum the presence of remote work, it would be a bit shortsighted not to want to get your work done in a tenth of the time. At the very least, you're competing for a promotion against less effective people, all while having more time for yourself. If not, you're building a labor-market skillset in an efficient way so you can hop to a better employer.
> I think most people are going to say they don't want it. I mean, why would anyone want a tool that can screw up their bank account? What benefit does it gain them?
I'm not so sure. Matter of marketing and social pressure, big time.
Consider this: "Always-on pervasive google/fb/... login? I think most people are going to say they don't want it. I mean, why would anyone want a tool that would track their every move on the internet?" That could easily have been a statement 20 years ago. And look where we are.
> My current expectation is that the Cowork/Codex set of "professional agents" for non-technical users will be one of the most important and fastest growing product categories of all time, so far.
I disagree. There is a major gap between awesome tech and market uptake.
At this point, the question is whether LLMs are going to be more useful than Excel. AI enthusiasts are 100% sure they already are, but on the ground, non-technical users' sentiment does not reflect that view.
All the interviews and real-life interactions I have seen indicate that only a narrow band of non-technical experts gain durable benefits from AI.
GenAI is incredible for project starts. A relative with zero coding experience went from mockup to MVP webapp in 3 days, for something he just had an idea about.
GenAI is NOT great for what comes after a non-technical MVP. That webapp had enough issues that, used at scale, it would guarantee litigation.
Mileage varies entirely on whether the person building the tool has sufficient domain expertise to navigate the forest they find themselves in.
Experts constantly decide trade-offs which novices don’t even realize matter. Something as innocuous as the placement of light switches when you enter a room can end up being inconvenient.
No - as a society we cannot say that it’s a “vast net” positive. The externalities that harm the commons are not accounted for.
We (or lobbyists) resist having carbon costs included in the prices we pay at the pump.
Edit: More transportation is good; I am not throwing the baby out with the bathwater, just that our accounting for costs makes things look better than they are.
From my experience, LLM performance in these areas is being massively oversold. I have repeatedly tried using Claude to modify a range of models typical of investment banking / private equity / sellside research contexts, and the results have been generally disastrous. On multiple occasions, the xlsx would no longer open.
Just my experience: it’s not a solution but rather a productivity tool. I mostly use it for tasks I could do myself but that would take 20-30 min to dial in; Claude can do them in 2-3 min. (E.g., in a data table, add a new column that checks column a: if the value is a, do x; if it’s b, do y; if it’s c, do z; then combine that with the word after the hyphen in column b. Or: create a new sheet in the same format as sheet one but that calculates the difference between columns a and b, for sheets 1-12, in a summary.)
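For a sense of scale, here is a rough pandas sketch of what those two kinds of transformations look like. The file, sheet, and column names (and the a/b/c mapping) are made up for illustration; the real logic would come from the actual workbook.

    import pandas as pd

    # Task 1 (hypothetical names): conditional new column based on column A,
    # combined with the word after the hyphen in column B.
    df = pd.read_excel("workbook.xlsx", sheet_name="Sheet1")
    mapping = {"a": "x", "b": "y", "c": "z"}  # assumed rules for column A
    suffix = df["B"].astype(str).str.split("-").str[-1].str.strip()
    df["New"] = df["A"].map(mapping).fillna("") + " " + suffix

    # Task 2 (hypothetical names): summary sheet with the difference between
    # columns A and B across sheets 1-12.
    sheets = pd.read_excel("workbook.xlsx",
                           sheet_name=[f"Sheet{i}" for i in range(1, 13)])
    summary = pd.DataFrame({name: s["A"] - s["B"] for name, s in sheets.items()})

    with pd.ExcelWriter("workbook_out.xlsx") as writer:
        df.to_excel(writer, sheet_name="Sheet1", index=False)
        summary.to_excel(writer, sheet_name="Summary", index=False)

Writing that by hand is the 20-30 min of dialing in; checking the edge cases (missing hyphens, values outside the mapping) is still on you.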
I don’t get good results when I just have Claude build things on its own - but for these types of specific productivity tasks I can save a couple of hours here and there.