Servo is slowly but steadily getting there. The thing with Servo is that it's highly modularized and some of its components are widely used by the larger Rust ecosystem, even if the whole browser engine isn't. So there's a multi-pronged vested interest in developing it.
Moreover, Servo aims to be embeddable (there are some working examples already), which is where other non-Chrome/Chromium browsers are failing (and Firefox too).
Thanks to this it has a much better chance at wider adoption and at actually spawning multiple browsers.
> The thing with Servo is that it's highly modularized and some of its components are widely used by the larger Rust ecosystem, even if the whole browser engine isn't.
Alas not nearly as modularized as it could be. I think it's mainly just Stylo and WebRender (the components that got pulled into Firefox), and html5ever (the HTML parser) that are externally consumable.
Text and layout support are two things that could easily be ecosystem modules but aren't, seemingly (from my perspective) because the ambition to be modular has been lost.
I've seen recent talk about a swappable JS engine, so I'm unsure about the ambition being lost.
I'm eyeing Blitz too (actually tried to use it in one of my projects but the deps fucked me up).
Servo's history is much more complicated, and it was originally planned to be used for the HoloLens before the layoffs. Comparing trajectories doesn't make sense; they had completely different goals and directions.
What are you talking about? It doesn't have a "browser"; it has a testing shell. For a time there was an actual attempt with the Verso experiment, but it got shelved just recently.
Servo is working on being embeddable at the same time as Rust GUI toolkits are maturing. Once it gets embedding stabilized, that will be the time for full-blown browser development.
> It doesn't have a "browser", it has a testing shell.
So, yes, it is still prehistoric.
> Once it gets embedding stabilized, that will be the time for full-blown browser development.
Servo development began in 2012. [0] Fourteen years later, we get a v0.0.1.
At this point, Ladybird will likely reach 1.0 faster than Servo could, and the latter is not even remotely close to being usable even in 14 years of waiting.
> At this point, Ladybird will likely reach 1.0 faster than Servo could, and the latter is not even remotely close to being usable even in 14 years of waiting.
This is disingenuous. Servo is written in Rust, a language which pretty much grew together with it, along with all the components surrounding it.
C++ is how old, please remind me?
What if LLMs are optimizing the average office worker's productivity but the work itself simply has no discernible economic value? This is argued at length in Graeber's Bullshit Jobs essay and book.
This is an underrated take. If you make someone 3x faster at producing a report nobody reads, you've improved nothing. The real gains from AI show up when it changes what work gets done, not just how fast existing work happens. Most companies are still in the "do the same stuff but with AI" phase.
And if you make someone 3x faster at producing a report that 100 people have to read, but it now takes 10% longer to read and understand, you’ve lost overall value.
This is one of my major concerns about people trying to use these tools for 'efficiency'. The only plausible value in somebody writing a huge report and somebody else reading it is information transfer. LLMs are notoriously bad at this. The noise-to-signal ratio is unacceptably high, and you will be worse off reading the summary than if you skimmed the first and last pages. In fact, you will be worse off than if you did nothing at all.
Using AI to output noise and learn nothing at breakneck speeds is worse than simply looking out the window, because you now have a false sense of security about your understanding of the material.
Relatedly, I think people get the sense that 'getting better at prompting' is purely a one-way issue of training the robot to give better outputs. But you are also training yourself to only ask the sorts of questions that it can answer well. Those questions that it will no longer occur to you to ask (not just of the robot, but of yourself) might be the most pertinent ones!
Yep. The other way it can have no net impact is if it saves thousands of hours of report drafting and reading but misses the one salient fact buried in the observations that could actually save the company money. Whilst completely nailing the fluff.
> LLMs are notoriously bad at this. The noise-to-signal ratio is unacceptably high
I could go either way on the future of this, but if you take the argument that we're still early days, this may not hold. They're notoriously bad at this so far.
We could still be in the PC DOS 3.X era in this timeline. Wait until we hit the Windows 3.1, or 95 equivalent. Personally, I have seen shocking improvements in the past 3 months with the latest models.
Personally, I strongly doubt it. Since the nature of LLMs does not allow them real semantic understanding of content or context, I believe it is inherently a tool unsuited for this task. As far as I can tell, it's a limitation of the technology itself, not of the amount of power behind it.
Either way, being able to generate or compress loads of text very quickly with no understanding of the contents simply is not the bottleneck of information transfer between human beings.
Yeah, definitely more skeptical for communication pipelines.
But for coding, the latest models are able to read my codebase for context, understand my question, and implement a solution with nuance, using existing structures and paradigms. It hasn't missed since January.
One of them even said: "As an embedded engineer, you will appreciate that ...". I had never told it that was my title; it is nowhere in my soul.md or codebase. It just inferred that I, the user, was one, based on the ARM toolchain and code.
It was a bit creepy, tbh. They can definitely infer context to some degree.
> We could still be in the PC DOS 3.X era in this timeline. Wait until we hit the Windows 3.1, or 95 equivalent. Personally, I have seen shocking improvements in the past 3 months with the latest models.
While we're speculating, here's mine: we're in the Windows 7 phase of AI.
IOW, everything from this point on might be better tech, but is going to be worse in practice.
Context size helps some things but generally speaking, it just slows everything down. Instead of huge contexts, what we need is actual reasoning.
I predict that in the next two to five years we're going to see a breakthrough in AI that doesn't involve LLMs but makes them 10x more effective at reasoning and completely eliminates the hallucination problem.
We currently have "high thinking" models that double and triple-check their own output and we call that "reasoning" but that's not really what it's doing. It's just passing its own output through itself a few times and hoping that it catches mistakes. It kind of works, but it's very slow and takes a lot more resources.
What we need instead is a reasoning model that can be called upon to perform logic-based tests on LLM output or even better, before the output is generated (if that's even possible—not sure if it is).
My guess is that it'll end up something like a "logic-trained" model instead of a "shitloads of raw data trained" model. Imagine a couple terabytes of truth statements like, "rabbits are mammals" and "mammals have mammary glands." Then, whenever the LLM wants to generate output suggesting someone put rocks on pizza, it fails the internal truth check, "rocks are not edible by humans" or even better, "rocks are not suitable as a pizza topping" which it had placed into the training data set as a result of regression testing.
Over time, such a "logic model" would grow and grow—just like a human mind—until it did a pretty good job at reasoning.
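A toy sketch of what such an internal truth check might look like (the facts, the relation format, and the function here are all invented for illustration, not a real system):

```python
# Hypothetical "logic model" fact store: truth statements like the
# "rabbits are mammals" / "rocks are not edible" examples above,
# encoded as (subject, relation, object) triples.
FACTS = {
    ("rabbit", "is_a", "mammal"),
    ("mammal", "has", "mammary glands"),
    ("rock", "is_not", "edible"),
    ("rock", "is_not", "pizza topping"),
}

def violates(candidate_claims):
    """Return the candidate claims that directly contradict the fact store."""
    bad = []
    for subj, rel, obj in candidate_claims:
        # A claim "X is Y" fails if the store asserts "X is_not Y".
        if rel == "is" and (subj, "is_not", obj) in FACTS:
            bad.append((subj, rel, obj))
    return bad

# An LLM wanting to suggest rocks as a pizza topping would fail the check:
print(violates([("rock", "is", "pizza topping")]))  # [('rock', 'is', 'pizza topping')]
```

A real version would obviously need inference (deriving "rabbits have mammary glands" from the two facts above) rather than exact lookup, which is where the hard part lives.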
> I would like to see the day when the context size is in gigabytes or tens of billions of tokens, not RAG or whatever, actual context.
Might not make a difference. I believe we are already at the point of negative returns - doubling context from 800k tokens to 1600k tokens loses a larger percentage of context than halving it from 800k tokens to 400k tokens.
There are many things that used to be called AI, but as their shortcomings became known we started dropping them from the AI bucket and referring to them by more specific names: expert systems, machine learning, etc. Decades later, plenty of people never learned this, and those things don't pop into mind with "AI", so LLMs were able to take over the term.
Hehe, yeah there's some terms that just are linguistically unintuitive.
"Skill floor" is another one. People generally interpret it as "must be at least this tall to ride", but it actually means "the amount of effort that translates to result". Something with a high skill floor (it makes more sense if you read it as "a high floor of skill") means that with very little input you can gain a lot of result, whereas a low skill floor means something behaves more linearly, where very little input only gains very little result.
Even though it's just the antonym, "skill ceiling" is much more intuitive in that regard.
Are you sure about skill floor? I've only ever heard it used to describe the skill required to get into something, and skill ceiling describes the highest level of mastery. I've never heard your interpretation, and it doesn't make sense to me.
It reminds me of that Apple ad where a guy just rocks up to a meeting completely unprepared and spits out an AI summary to all his coworkers. Great job Apple, thanks for proving Graeber right all along.
> Those questions that it will no longer occur to you to ask (not just of the robot, but of yourself) might be the most pertinent ones!
That is true, but then again, the same went for Google. You can see why some people want to go back to the "read the book" era, when you didn't have Google to query everything and had to ask the real questions.
One thing AI should eliminate is the "proof of work" reports. Sometimes the long report is not meant to be read, but used as proof that somebody has thoroughly thought through various things (captured by, for instance, required sections).
When AI is doing that, it loses all value as a proof of work (just as it does for a school report).
"My AI writes for your AI to read" is low value. But there is probably still some value in "My AI takes these notes and makes them into a concise readable doc".
> Using AI to output noise and learn nothing at breakneck speeds is worse than simply looking out the window, because you now have a false sense of security about your understanding of the material.
i may put this into my email signature with your permission, this is a whip-smart sentence.
and it is true. i used AI to "curate information" for me when i was heads-down deep in learning mode, about sound and music.
there was enough all-important info being omitted that i soon realized i was developing a textbook case of superficial, incomplete knowledge.
i stopped using AI and did it all over again through books and learning by doing. in retrospect, i'm glad to have had that experience because it taught me something about knowledge and learning.
mostly that something boils down to RTFM. a good manual or technical book written by an expert doesn't have a lot of fluff. what exactly are you expecting the AI to do? zip the rar file? it will do something, it might look great, lossless compression it will be not.
P.S. not a prompt skill issue. i was up to date on cutting edge prompting techniques and using multiple frontier models. i was developing an app using local models and audio analysis AI-powered libraries. in other words i was up to my neck immersed in AI.
after i grokked as much as i could, given my limited math knowledge, of the underlying tech from reading the theory, i realized the skill issue invectives don't hold water. if things break exactly in the way they're expected to break as per their design, it's a little too much on the nose. even appealing to your impostor syndrome won't work.
P.P.S. it's interesting how a lot of the slogans of the AI party are weaponizing trauma triggers or appealing to character weaknesses.
"hop on the train, commit fully, or you'll be left behind" > fear of abandonment trigger
"pah, skill issue. my prompts on the other hand...i'm afraid i can't share them as this IP is making me millions of passive income as we speak (i know you won't probe further cause asking a person about their finances is impolite)" > imposter syndrome inducer par excellence, also FOMO -- thinking to yourself "how long can the gold rush last? this person is raking it in!! what am i doing? the miserable sod i am"
1. outlandish claims (Claude writes ALL the code) no one can seem to reproduce, and indeed everyone non-affiliated is having a very different experience
2. some of the darkest patterns you've seen in marketing are the key tenets of the gospel
3. it's probably a duck.
i've been 100% clear on the grift since October '25. Steve Eisman of the "Big Short" was just hopping onto the hype train back then. i thought...oh. how much analysis does this guru of analysts really make? now Steve sings of AI panic and blood in the streets.
these things really make you think, about what an economy even is. it sure doesn't seem to have a lot to do with supply and demand, products and services, and all those archaisms.
For all the technology we develop, we rarely invest in processes. Once in a blue moon some country decides to revamp its bureaucracy, when it should really be a continuous effort (in the private sector too).
OTOH, what happens continuously is that technology is used to automate bureaucracy, and even allows it to grow in complexity.
See, this is an opportunity. Company provides AI tool, monitors for cases where AI output is being fed as AI input. In such cases, flag the entire process for elimination.
Maybe the take is that those reports that took people a day to write were read by nobody in the first place, and now those reports are being written faster and more of them are being produced, but still nobody reads them. Thus productivity doesn't change. The solution is to get rid of all the people who write and process reports and empower the people who actually produce stuff to do it better.
The managerial class are like cats and closed doors.
Of course they don't read the reports; who has time to read them? But don't even think about not sending the report: they like to have the option of reading it if they choose to do so.
A closed door removes agency from a cat, an absent report removes agency from a manager.
> The solution is to get rid of all the people who write and process reports and empower the people who actually produce stuff to do it better.
That’s the solution if you’re the business owner.
That’s definitely not the solution if you’re a manager in charge of this useless activity; in fact, you should increase the amount of reports being written as much as humanly possible. More underlings under you = more power and prestige.
This is the principal-agent problem writ large. As the comment above mentioned, also see Graeber’s Bullshit Jobs essay and book.
And like the article says, early computerization produced way more output than anybody could handle. In my opinion, we realized the true benefits of IT when ordinary users were able to produce for themselves exactly the computations they needed. That is, when spreadsheets became widespread. LLMs haven’t had their spreadsheet moment yet; their outputs are largely directed outward, as if more noise meant more productivity.
Not necessarily. You could have 100 FTE on reports instead of 300 FTE in a large company like a bank. That means 200 people who'd normally go into reporting jobs over the next decade will go into something else, producing something else on top of the reports that continue to be produced. The sum of this is more production.
Looking at job numbers that seems to be happening. A lot less employment needed, freeing up people to do other things.
I’m not in favor of these tariffs. At all. However, it seems that they haven’t had such an impact yet on the economy, at least regarding consumer prices. You’d expect much larger inflation given the tariffs IIUC.
My current understanding of the general consensus is that many companies have been eating the tariffs with the hope SCOTUS will strike them. If they are upheld, prices will likely rise significantly
Actually job numbers are depressed (hiring recession) and GDP numbers are still way up, both precisely due to the AI investment. More output with fewer people.
Wild take to cite a recession when last quarter growth was 4.4%.
Firstly, nobody said 'the economy' so I don't know why you're even putting it in quotation marks.
Secondly, GDP is the best measure of output / value-add we have, and it's significantly up, despite jobs being down.
Output going up with fewer people means more productivity. That's the point that was being made.
Recessions are measured in economics by tracking GDP, which the person I replied to said we're in. We're not.
Whatever concept of "the economy" you had in mind to bring more nuance and refinement to the discussion (which is possible and welcome, and which you haven't bothered to add) doesn't refute the basics above.
What happens if (and I suspect this is increasingly the case now) you make someone 3x faster at producing a report that nobody reads, and those people now use LLMs to not read the report, whereas before they were not reading it in person?
Then everyone saves time, which they can spend producing more things that other people will not read, and/or not reading the things that other people produce (using LLMs)?
Mmm I can’t wait to get home and grill up some Productivity for dinner. We’ll have so much Productivity and no jobs. Hopefully our billionaire overlords deign to feed us.
What a load of nonsense; they won't be producing a report in a third of the time only to have no one read it. They'll spend the same amount of time and produce a report three times the length, which will then go unread.
I think they can both be true. Perhaps the innovation of AI is not that it automates important work, but because it forces people to question if the work has already been automated or is even necessary.
Well, if a lot of it is bullshit that can also be done more efficiently with AI, then 99% of white collar roles could be eliminated by the 1% using AI, and essentially both were very close to true.
Jobs you don’t notice or understand often look pointless. HR on the surface seems unimportant, but you’d notice if the company stopped having health insurance or sending your taxes to the IRS etc etc.
In the end, when jobs are done right they seem to disappear. We notice crappy software or a poorly done HVAC system, not clean carpets.
Moving some function to the government doesn’t eliminate the need for it. Something would still need to tell the government what you’re paid unless you’re advocating for anarchy or communism.
Also, part of that etc is doing payroll so there’s some reason for you to show up at work every day.
I emailed HR and asked what to do to best ask for leave in case of a future event (serious illness with a family member, I just wanted to be one step ahead and make sure I did everything right even in the state of grief).
HR wouldn't tell me what would be the best and most correct course of action; the only thing they said was that it was my responsibility as an employee to find out. Well, what did they think I was doing?
Side effect seems like an odd way to describe what’s going on when these functions are required for a company to operate.
Companies don’t survive if nobody is paid to show up every day or if they keep paying every single ex employee that ever worked for the company. It’s harder to attract new employees if you don’t offer competitive salaries or benefits. HR is a tiny part of most companies, but without that work being done the company would absolutely fail.
Similarly a specific ratio of flight attendants to passengers are required by the FAA in case of an emergency. Airlines use them for other stuff but they wouldn’t have nearly as many if the job was just passing out food.
> HR on the surface seems unimportant, but you’d notice if the company stopped having health insurance or sending your taxes to the IRS etc etc.
Interesting how the very example you give for "oh this job isn't really bullshit" ultimately ends up being useless for the business itself, and exists only as a result of regulation.
No, health insurance being provided by employers and tax withholding aren't useful things for anyone, except for the state, which now offloads its costs onto private businesses.
I think what he meant was that the top 1% ruling class is keeping those bullshit jobs around to keep the poor people (their cattle) occupied so they won't have time and energy to think and revolt.
Or for everyone in the chain of command to have people to rule over, a common want for many in leadership positions. Control matters in at least two ways: you want to rule over people, and your value to your peers is the amount of people or resources you control.
> Hard miss. GP is right, and your assumptions say more about you than about me. :^)
No. If that's the case, your statement was unclear: since you didn't specify who else thinks those people were cattle, the implication is that you think it. Especially since you prefaced your statement with "I’d argue."
And the interpretation...
> It seems more like they're implying it's those at the top think that about other people.
...beggars belief. What indication has "the top" given to show they have that kind of foresight and control? The closest is the AI-bros advocacy of UBI, which (for the record) has gone nowhere.
I was half a mind to point that out in my original comment, but didn't get around to it.
> No. If that's the case, your statement was unclear: since you didn't specify who else thinks those people were cattle, the implication is that you think it. Especially since you prefaced your statement with "I’d argue."
I never said it was clear? Two commenters got it right, two wrong, so it wasn’t THAT unobvious.
> What indication has "the top" given to show they have that kind of foresight and control? The closest is the AI-bros advocacy of UBI, which (for the record) has gone nowhere.
Tech bros selling “no more software engineers” to cost optimizers, dictatorships in US, Russia, China pressing with their heels on our freedoms, Europe cracking down on encryption, Dutch trying to tax unrealized (!) gains, do I really need to continue?
>> What indication has "the top" given to show they have that kind of foresight and control? The closest is the AI-bros advocacy of UBI, which (for the record) has gone nowhere.
> Tech bros selling “no more software engineers” to cost optimizers, dictatorships in US, Russia, China pressing with their heels on our freedoms, Europe cracking down on encryption, Dutch trying to tax unrealized (!) gains, do I really need to continue?
All those things are non sequiturs, though, some directly contradicting the statement I was responding to, as you claim it should be interpreted. If "90% of modern jobs are bullshit to keep cattle occupied" that implies "the top" deliberately engineered (or at least maintains) an economy where 90% jobs are bullshit (unnecessary). But that's obviously not the case, as the priority of "the top" is to gather more money to themselves in the short to medium term, and they very frequently cut jobs to accomplish that. "Tech bros selling “no more software engineers” to cost optimizers," is a new iteration of that. If "the top" was really trying "to keep cattle occupied" they wouldn't be cutting jobs left and right.
We don't live in a command economy, there's no group of people with an incentive to create "bullshit" jobs "to keep cattle occupied."
My observation is about what your assumptions say about you, and that's not a miss.
Nobody really understands a job they haven't done themselves, and "arguing" that 90% of them are "bullshit" has no other possible explanation than a combination of ignorance (you don't understand the jobs well enough to judge whether they are useful) and arrogance (you think you can make that judgement better than the 90% of people doing those jobs).
> Nobody really understands a job they haven't done themselves, and "arguing" that 90% of them are "bullshit" has no other possible explanation than a combination of ignorance (you don't understand the jobs well enough to judge whether they are useful) and arrogance (you think you can make that judgement better than the 90% of people doing those jobs).
That's fine if you disagree, I'm not aiming to be the authority on bullshit jobs.
This doesn't change the fact that you and I are cattle for corpo/neo-feudals.
I suspect that we are going to see managers say, "Hey, this request is BS. I'm just going to get ChatGPT to do it" while employees say, "Hey, this response is BS, I'm just going to get ChatGPT to do it" and then we'll just have ChatGPT talking to itself. Eventually someone will notice and fire them both.
> This is an underrated take. If you make someone 3x faster at producing a report nobody reads, you've improved nothing
In the private market, are there really so many companies delivering reports no one reads? Why would management keep at it then? The goal is to maximize profits. Now, sure, there are pockets of inefficiency even in the private sector, but surely not that much: whatever the companies are doing, someone is buying it from them, otherwise they fail. That's capitalism. Yes, there are perhaps 20% of employees who don't pull their weight, but it's not the majority.
I don't know what to tell you aside from "just go and work at a large private company and see".
I'm not smart enough to understand the macro-economics or incentive structures that lead to this happening, but I've seen many 100+ man teams whose output is something you could reasonably expect from a 5 man team.
Sorry, I meant to say the private sector; not sure it changes the argument though, since you seem to believe inefficiencies are all over the place: in public companies, private ones, etc.
I've worked in tech all my life, and in general, if you were grossly inefficient you'd get fired. Now, tech may be a high-efficiency / low-bullshit industry, but I'm assuming that in general, if you are truly shit at your job, you'd get fired no matter the industry.
> In the private market, are there really so many companies delivering reports no one reads? Why would management keep at it then?
In finance, you have to produce truly astounding amounts of regulatory reports that won't be read... until there is a crash, or a lawsuit, or an investigation etc. And then they better have been right!
Got it, that's a fair point: you're saying many companies deal with heaps of regulation, and expediting that isn't really adding to productivity. I agree with you here. But even if 50% of what a company does is shit no one cares about, surely there's the other 50% that actually matters, no? Otherwise how does the company survive financially?
I used the term "private market" when I actually meant the private sector. I just mean all labor that isn't government owned - public companies, private companies etc.
So yes - in a reasonably functioning capitalist market (which the U.S still is in my eyes) I expect gross inefficiencies to not be prevalent.
> So yes - in a reasonably functioning capitalist market (which the U.S still is in my eyes) I expect gross inefficiencies to not be prevalent.
I am not sure that is true, though. Assume for a moment that Google wasted 50% of their profits. Truly, a huge inefficiency. However, would that make it likely some other corp could take their search/ad market share from them? I doubt it, given the abyss of a moat.
One could say: True, therefore search is not a reasonably functioning capitalist market.
Yeah, I know, this can turn into "no true capitalist market". Still, it seems reasonable to say that many markets work in a certain kind of way (with lots of competition), and search is not one of those markets.
The parent was referring to the whole US as "market". In that sense the numerous exceptions and non-functioning markets invalidate the statement, IMHO.
The goal might be to maximize profits, but that only means that managers want to make sure everyone further down the chain are doing whatever they identify to be the best way to accomplish that. How do you do that? Reports.
>In the private market, are there really so many companies delivering reports no one reads?
Just this month the hospital in my municipality submitted an application to put in a new concrete pad for a new generator beside the old one that they, per the application, intend to retire/remove and replace with a storage shed on its pad once the new one is operational.
A full-page intro about how the hospital is saving the world, such a great thing for the community, and all manner of vapid buzzword bullshit. Dozens of pages rehashing bullshit about the environmental conditions, water flows downhill, etc. (i.e. basically reiterating stuff from when they built the facility).
God knows how many people and hours it took to compile it (we'll ignore the labor wasted in the public sector circulating and reading it).
All for a project that 50 years ago wouldn't have required 1/100th of the labor expenditure just to be kicked off. All that labor, squandered on nothing that makes anyone any richer. No goods made. No services rendered.
>Why should hospitals be for-profit organizations? Sounds like all the wrong incentives.
You're conflating private ownership with the organization's nominal financial structure. It has nothing to do with the structural model of the organization and everything to do with resources wasted on TPS reports. This waste has to come from somewhere: something is necessarily being forgone, whether that's profit, reinvestment in the organization, or a competitive edge that benefits the customer (e.g. lower cost, or higher quality for the same cost). The same is true for a for-profit company or any other organization.
FWIW, the hospital is technically a nonprofit, as is typical for hospitals. And I assure you, they still have all the wrong incentives despite this.
I find that highly unlikely; coding is AI's best-value use case by far. Right now office workers see marginal benefits, but it's not like it's an order-of-magnitude difference. AI drafts an email; you have to check and edit it, then send it. In many cases it's a toss-up whether that actually saved time, and even if it did, it's not like the pace of work is breakneck anyway, so the benefit is that some office workers have a bit more idle time at the desk, because you always hit some wall that's out of your control. Maybe AI saves you a Google search or a doc lookup here and there. You still need to check everything, and it can cause mistakes that take longer, too. Here's an example from today.
An assistant is dispatching a courier to get medical records. AI autocompletes the dispatch to include the address. Normally they wouldn't put the address (the courier knows who we work with), but AI added it, so why not. Except it's the wrong address, because it's for a different doctor with the same name. At least they knew to verify it, but still, mistakes like this happening at scale make the other time savings pretty close to a wash.
Coding is a relatively verifiable and strict task: it has to pass the compiler, it has to pass the test suite, it has to meet the user's requests.
There are a lot of white-collar tasks that have far lower quality and correctness bars. "Researching" by plugging things into google. Writing reports summarizing how a trend that an exec saw a report on can be applied to the company. Generating new values to share at a company all-hands.
Tons of these that never touch the "real world." Your assistant story is like a coding task - maybe someone ran some tests, maybe they didn't, but it was verifiable. No shortage of "the tests passed, but they weren't the right test, this broke some customers and had to be fixed by hand" coding stories out there like it. There are pages and pages of unverifiable bullshit that people are sleepwalking through, too, though.
Nobody knows whether those things helped or hurt in the first place, so nobody will ever even notice a hallucination.
But everyone in all those fields is going to be trying really really hard to enumerate all the reasons it's special and AI won't work well for them. The "management says do more, workers figure out ways to be lazier" see-saw is ancient, but this could skew far towards "management demands more from fewer people" spectrum for a while.
Code may have to compile but that's a lowish bar and since the AI is writing the tests it's obvious that they're going to pass.
In all areas where there are fewer easy ways to judge output, there is going to be correspondingly more value in getting "good" people. Some AI that can produce readable reports isn't "good": what matters is the quality of the work and the insight put into it, which can only be ensured by looking at the worker's reputation and past history.
We’ve had the sycophant problem for as long as people have held power over other people, and the answer has always been “put 3-5 workers in a room and make them compete for the illusion of favor.”
I have been doing this with coding agents across LLM providers for a while now, with very successful results. Grok seems particularly happy to tell Anthropic where it’s cutting corners, but I get great insights from O3 and Gemini too.
> since the AI is writing the tests it's obvious that they're going to pass
That's not obvious at all if the AI writing the tests is different than the AI writing the code being tested. Put into an adversarial and critical mode, the same model outputs very different results.
IMO the reason neither of them can really write entirely trustworthy tests is that they don't have domain knowledge so they write the test based on what the code does plus what they extract from some prompts rather than based on some abstract understanding of what it should do given that it's being used e.g. in a nuclear power station or for promoting cat videos or in a hospital or whatever.
Obviously this is only partially true but it's true enough.
It takes humans quite a long time to learn the external context that lets them write good tests IMO. We have trouble feeding enough context into AIs to give them equal ability. One is often talking about companies where nobody bothers to write down more than 1/20th of what is needed to be an effective developer. So you go to some place and 5 years later you might be lucky to know 80% of the context in your limited area after 100s of meetings and talking to people and handling customer complaints etc.
Yes, some kind of spec is always needed, and if the human programmer only has the spec in their head, then that's going to be a problem, but it's a problem for teams of humans as well.
Even if it's a different session, that can be enough. But that said, I've had times where it rewrote tests "because my implementation was now different so the tests needed to be updated", so you have to explicitly prompt it not to touch the tests.
High quality code that does exactly what it needs to do and well and that makes various actors and organizations far more efficient at their jobs... but their jobs are of negative economic value overall.
That makes it a perfect use case for AI, since now you don't need a dev for that. Any devs doing that would, imo, be effectively performing one of David Graeber's bullshit jobs.
LLMs might not save time but they certainly increase quality for at least some office work. I frequently use it to check my work before sending to colleagues or customers and it occasionally catches gaps or errors in my writing.
But that idealized example could also be offset by another employee who doubles their own output by churning out lower-quality unreviewed workslop all day without checking anything, while wasting other people's time.
Something I call the 'Generate First, Review Never' approach, seemingly favoured by my colleagues, which has the magical quality of increasing the overall amount of "work done" through the increased time taken by N receivers of a low-quality document having to review, understand, and fact-check said document.
See also: AI-Generated “Workslop” Is Destroying Productivity [1]
Yeah, but that's no different from any other aspect of office work, and more conventional forms of automation. Gains by one person are often offset to some extent by the laziness, inattentiveness, or ineptitude of others.
What AI has done is accelerate and magnify both the positives and the negatives.
Code is much much harder to check for errors than an email.
Consider, for example, the following python code:
x = (5)
vs
x = (5,)
One is a literal 5, and the other is a single element tuple containing the number 5. But more importantly, both are valid code.
Now imagine trying to spot that one missing comma among the 20kloc of code one so proudly claims AI helped them "write", especially if it's in a cold path. You won't see it.
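To make the wart concrete, here's a minimal sketch (plain Python, nothing project-specific): both spellings run without complaint, but the values have different types.

```python
x = (5)    # parentheses are just grouping here: x is the int 5
y = (5,)   # the trailing comma is what makes a one-element tuple

print(type(x).__name__)  # int
print(type(y).__name__)  # tuple
print(x == y)            # False: an int never equals a tuple
```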
> Code is much much harder to check for errors than an email.
Disagree.
Even though performing checks on dynamic PLs is much harder than on static ones, PLs are designed to be non-ambiguous. There should be exactly 1 interpretation for any syntactically valid expression. Your example will unambiguously resolve to an error in a standard-conforming Python interpreter.
On the other hand, natural languages are not restricted by ambiguity. That's why something like Poe's law exists. There's simply no way to resolve the ambiguity by just staring at the words themselves, you need additional information to know the author's intent.
In other words, an "English interpreter" cannot exist. Remove the ambiguities, you get "interpreter" and you'll end up with non-ambiguous, Python-COBOL-like languages.
With that said, I agree with your point that blindly accepting 20kloc is certainly not a good idea.
Tell me you've never written any python without telling me you've never written any python...
Those are both syntactically valid lines of code. (it's actually one of python's many warts). They are not ambiguous in any way. one is a number, the other is a tuple. They return something of a completely different type.
My example will unambiguously NOT give an error, because both lines are standard conforming. Which you would have noticed had you actually taken 5 seconds to try typing them in the REPL.
> Those are both syntactically valid lines of code. (it's actually one of python's many warts). They are not ambiguous in any way. one is a number, the other is a tuple. They return something of a completely different type.
You just demonstrated how hard it is to "check" an email or text message by missing the point of my reply.
> "Now imagine trying to spot that one missing comma among the 20kloc of code"
I assume your previous comment tries to bring up Python's dynamic typing & late binding nature and use it as an example of how it can be problematic when someone tries to blindly merge 20kloc LLM-generated Python code.
My reply, "Your example will unambiguously resolve to an error in a standard-conforming Python interpreter." tried to respond to the possibility of such an issue. Even though it's probably not the program behavior you want, Python, being a programming language, will be 100% guaranteed to interpret it unambiguously.
I admit, I should have phrased it less ambiguously than leaving it like that.
Even if it's hard, you can try running a type checker to statically catch such problems. Even if it's not possible in cases of heavy usage of Python's dynamic typing feature, you can just run it and check the behavior at runtime. It might be hard to check, but not impossible.
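As a sketch of that runtime check (the function here is hypothetical, purely for illustration): if downstream code actually uses the value as a tuple, the missing comma fails loudly instead of silently.

```python
def total(values: tuple) -> int:
    # sum() iterates over its argument, so an int sneaking in where a
    # tuple was intended raises TypeError instead of silently "working"
    return sum(values)

print(total((5,)))  # the one-element tuple: prints 5

try:
    total((5))      # missing comma: (5) is just the int 5
except TypeError:
    print("caught the missing comma at runtime")
```

A static checker like mypy would also flag `total((5))` as an argument-type error before the code even runs, given the annotation.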
On the other hand, it's impossible to perform a perfectly consistent "check" on this reply or an email written in a natural language; the person reading it might interpret the message in a completely different way.
In my experience the example you give here is exactly the kind of problem that AI powered code reviews are really good at spotting, and especially amongst codebases with tens of thousands of lines of code in them where a human being might well get scrolling blindness when quickly moving around them to work.
The AI is the one which made the mistake in the first place. Why would you assume it's guaranteed to find it?
The few times I've tried giving LLMs a shot, I've had them warn me that some validation was missing, when that exact validation was exactly 1 line below where they stopped looking.
And even if it did pass an AI code review, that's meaningless anyway. It still needs to be reviewed by an actual human before putting it into production. And that person would still get scrolling blindness whether or not the AI "reviewer" detected the error.
> The AI is the one which made the mistake in the first place. Why would you assume it's guaranteed to find it?
I didn't say they were guaranteed to find it: I said they were really good at finding these sorts of errors. Not perfect: just really good. I also didn't make any assumption: I said in my experience, by which I mean the code you shared is similar to a portion of the errors that I've seen LLMs find.
Which LLMs have you used for code generation?
I mostly use claude-opus-4-6 at the moment for development, and have had mostly good experiences. This is not to say it never gets anything wrong, but I'm definitely more productive with it than without it. On GitHub I've been using Copilot for more limited tasks as an agent: I find it's decent at code review, but more variable at fixing problems it finds, and so I quite often opt for manual fixes.
And then the other question is, how do you use them? I tend to keep them on quite a short leash, so I don't give them huge tasks, and on those occasions where I am doing something larger or more complex, I tend to write out quite a detailed and prescriptive prompt (which might take 15 minutes to do, but then it'll go and spend 10 minutes to generate code that might have taken me several hours to write "the old way").
> but the work itself simply has no discernable economic value? This is argued at length in Graeber's Bullshit Jobs essay and book.
That book was very different from what I expected from all of the internet comment takes about it. The premise was really thin and didn't actually support the idea that the jobs don't generate value. It was comparing to a hypothetical world where everything is perfectly organized, everyone is perfectly behaved, everything is perfectly ordered, and therefore we don't have to have certain jobs that only exist to counter other imperfect things in society.
He couldn't even keep that straight, though. There's a part where he argues that open source work is valuable but corporate programmers are doing bullshit work that isn't socially productive because they're connecting disparate things together with glue code? It didn't make sense and you could see that he didn't really understand software, other than how he imagined it fitting into his idealized world where everything anarchist and open source is good and everything corporate and capitalist is bad. Once you see how little he understands about a topic you're familiar with, it's hard to unsee it in his discussions of everything else.
That said, he still wasn't arguing that the work didn't generate economic value. Jobs that don't provide value for a company are cut, eventually. They exist because the company gets more benefit out of the job existing than it costs to employ those people. The "bullshit jobs" idea was more about feelings and notions of societal impact than economic value.
> There's a part where he argues that open source work is valuable but corporate programmers are doing bullshit work that isn't socially productive because they're connecting disparate things together with glue code?
I don't know if maybe he wasn't explaining it well enough, but that kind of reasoning makes some sense.
A lot of code is written because you want the output from Foo to be the input to Bar and then you need some glue to put them together. This is pretty common when Foo and Bar are made by different people. With open source, someone writes the glue code, publishes it, and then nobody else has to write it because they just use what's published.
In corporate bureaucracies, Company A writes the glue code but then doesn't publish it, so Company B which has the same problem has to write it again, but they don't publish it either. A hundred companies are then doing the work that only really needed to be done once, which makes for 100 times as much work, a 1% efficiency rate and 99 bullshit jobs.
"They exist because the company gets more benefit out of the job existing than it costs to employ those people."
Sure, but there's no such thing as "the company." That's shorthand - a convenient metaphor for a particular bunch of people doing some things. So those jobs can exist if some people - even one person - gets more benefit out of the job existing than it costs that person to employ them. For example, a senior manager padding his department with non-jobs to increase headcount, because it gives him increased prestige and power, and the cost to him of employing that person is zero. Will those jobs get cut "eventually"? Maybe, but I've seen them go on for decades.
Hmmm, I got something different. I thought that Bullshit Jobs was based on people who self-reported that their jobs were pointless. He detailed these types of jobs, the negative psychological impact this can have on employees, and the kicker was that these jobs don't make sense economically, the bureaucratization of the health care and education sectors for example, in contrast to so many other professions that actually are useful. Other examples were status-symbol employees, sycophants, duct-tapers, etc.
I thought he made a case for both societal and economic impact.
> They exist because the company gets more benefit out of the job existing than it costs to employ those people.
Not necessarily, I’ve seen a lot of jobs that were just flying under the radar. Sort of like a cockroach that skitters when light is on but roams freely in the dark.
> It was comparing to a hypothetical world where everything is perfectly organized, everyone is perfectly behaved, everything is perfectly ordered, and therefore we don't have to have certain jobs that only exist to counter other imperfect things in society.
> Jobs that don't provide value for a company are cut, eventually.
Uhm, seems like Graeber is not the only one drawing conclusions from a hypothetical perfect world.
> The "bullshit jobs" idea was more about feelings and notions of societal impact than economic value.
But he states that explicitly, so your discovery is not that spectacular.
Although he gives examples of jobs, or some aspects of jobs, that don't help to deliver what specific institutions aim to deliver. Example would be bureaucratization of academia.
Graeber’s best book is his ethnography “Lost People” and it’s one of his least read works. Bullshit Jobs was never intended to be read as seriously as it is criticized.
Honestly this is how every critique of Graeber goes in my experience: As soon as his works are discussed beyond surface level, the goalposts start zooming around so fast that nothing productive can be discussed.
I tried to respond to the specific conversation about Bullshit Jobs above. In my experience, the way this book is brought up so frequently in online conversations is used as a prop for whatever the commenter wants it to mean, not what the book actually says.
I think Graeber did a fantastic job of picking "bullshit jobs" as a topic because it sounds like something that everyone implicitly understands, but how it's used in conversation and how Graeber actually wrote about the topic are basically two different things
This is what I have been saying for some time. Working inside different government departments, you see this happening every day: emails and reports bouncing back and forth with no actual added value while everyone feels extremely productive. That is why the private sector and public sector generally don't mix well. It is also one reason why I said in some of my previous posts that LLMs could replace up to 70% of government jobs.
Edit: If anyone hasn't watched Yes Minister, you should go and watch it; it is a documentary on the UK government that is as true today as it was 40-50 years ago.
At least in my experience, there's another mechanism at play: people aren't making it visible if AI is speeding them up. If AI means a bugfix card that would have taken a day takes 15 minutes, well, that's the work day sorted. Why pull another card instead of doing... something that isn't work?
> What if LLMs are optimizing the average office worker's productivity but the work itself simply has no discernable economic value?
I think broadly that's a paradoxical statement; improving office productivity should translate to higher GDP. Whatever it is you're doing in some office, even if you're selling paper or making bombs, if you're more productive it means you're selling more (or using fewer resources to sell the same amount); that should translate to higher GDP (at least higher GDP per worker; there's the issue of what happens to GDP when many workers get fired).
And in the type of work where AI arguably yields productivity gains, the workers have high agency and may pay for their own tooling without telling their employers. Case in point: me. I have access to Copilot via my employer but don't use it because I prefer my self-paid ChatGPT subscription. If AI lift in productivity is measured on the condition that I use Copilot, then the resulting metric misses my AI usage entirely and my productivity improvements are not attributed to their real cause.
I think it’s more likely that the same amount of work is getting done, just it’s far less taxing. And that averages are funny things, for developers it’s undeniably a huge boost, but for others it’s creating friction.
Exactly. So much terrible usage is out there and no one is talking about it. It takes skill to use, and I bet accountants were way slower when they were learning how to use spreadsheets for the first time too.
We made an under-the-radar optimization in a data flow in my company. A given task is now much more freshData-assisted than it used to be.
Was a LLM used during that optimization? Yes.
Who will correlate the sudden productivity improvement with our optimization of the data flow with the availability of a LLM to do such optimizations fast enough that no project+consultants+management is needed ?
No one.
Just like no one is evaluating the value of a hammer or a ladder when you build a house.
But you would see more houses, or housing build costs/bids fall.
This is where the whole "show me what you built with AI" meme comes from, and currently there's no substitute for SWEs. Maybe next year or next next year, but mostly the usage is generating boring stuff like internal tool frontends, tests, etc. That's not nothing, but because actually writing the code was at best 20% of the time cost anyway, the gains aren't huge, and won't be until AI gets into the other parts of the SDLC (or the SDLC changes).
CONEXPO, World of Concrete, and NAHB IBS is where vendors go to show off their new ladders and the attendees totally evaluate the value of those ladders vs their competitors.
But they're not optimizing the average worker's productivity. That's a silicon valley talking point. The average worker, IF they use AI, ends up proofreading the text for the same amount of time as it would take to write the text themselves.
And it is this lowly commenter's opinion that proofreading for accuracy and clarity is harder than writing it yourself and defending it later.
Bullshit Jobs is one of those "just so" stories that seems truthy but doesn't stand up to any critical evaluation. Companies are obviously not hesitant to lay off unproductive workers. While in large enterprises there is some level of empire building where managers hire more workers than necessary just to inflate their own importance, in the long run those businesses fall to leaner competitors.
> Companies are obviously not hesitant to lay off unproductive workers.
Companies are obviously not hesitant to lay off anyone, especially for cost saving. It is interesting how you think that people are laid off because they’re unproductive.
It's only after decades of experience and hindsight that you realize that a lot of the important work we spend our time on has extremely limited long-term value.
Maybe you're lucky enough to be doing cutting edge research or do something that really seriously impacts human beings, but I've done plenty of "mission critical right fucking now" work that a week from now (or even hours from now, when I worked for a content marketing business) is beyond irrelevant. It's an amazing thing watching marketing types set money on fire burning super expensive developer time (but salaried, so they discount the cost to zero) just to make their campaigns like 2-3% more efficient.
I've intentionally sat on plenty of projects that somebody was pushing really hard for because they thought it was the absolute right necessary thing at the time, and the stakeholder realized it was pointless/worthless after a good long shit and shower. This one move has saved literally man-years of work and IMO is the #1 most important skill people need to learn ("when to just do nothing").
And the reason the position and the busy work exist is to have someone who is knowledgeable on the topic/relationship/requirements/whatever for the edge cases that come up (you don't pay me to push the button, you pay me to know when to push the button). AI could be technically filling a role while defeating the whole point (familiarity, background knowledge) for a lot of these roles.
What counts as “concretely”? And I don’t recall it calling sales bullshit.
It identified advertising as part of the category that it classed as heavily-bullshit-jobs by reason of being zero-sum—your competitor spends more, so you spend more to avoid falling behind, the standard red queen's race. (Another in this category was the military, which is kinda the classic case of this—see also the Missile Gap, the dreadnought arms race, etc.) But not sales, IIRC.
It says stuff like why can’t a customer just order from an online form? The employee who helps them doesn’t do anything except make them feel better. Must be a bullshit job. It talks specifically about my employees filling internal roles like this.
> advertising
I understand the arms race argument, but it’s really hard to see what an alternative looks like. People can spend money to make you more aware of something. You can limit some modes, but that kind of just exists.
I don’t see how they aren’t performing an important function.
It's an important function in a capitalist economy. Socialist economies are like "adblock for your life". That said, some advertising can be useful to inform consumers that a good exists, but convincing them they need it by synthesizing desires or fighting off competitors? Useless and socially detrimental.
> Socialist economies are like "adblock for your life".
There's nothing inherent to socialism that would preclude advertising. It's an economic system where the means of production (capital) is owned by the workers or the state. In market socialism you still have worker cooperatives competing on the market.
Plus, a core part of what qualifies as a bullshit job is that the person doing it feels that it's a bullshit job. The book is a half-serious anthropological essay, not an economic treatise.
An odd tendency I’ve noticed about Graeber is that the more someone apparently dislikes his work, the more it will seem like they’re talking about totally different books from the ones I read.
Because he uses private framings of concepts that are well understood. So if your first encounter is through Graeber, you're going to have friction with every other understanding. If you've read much else, you will say "hold on a minute, what about …"
Please read my comments here engaging with the ideas in the text and specifically your concern that bullshit jobs are just jobs that don’t feel important.
You have written a bunch of comments regarding advertising, a single comment criticizing Graeber for using concepts in an uncommon way, and one reply to my comment that doesn't really connect with the content of that comment.
Yes, David Graeber talks a lot about the idea of bullshit jobs but fails to identify them concretely. As we see in this thread, whenever someone puts up an example, there is actually value he misses because he is unfamiliar with the workplace and business.
The example used here was advertising. And then when we push on the example the fallback is to the subjective - feeling unfilled, definition.
So I am still looking for concrete examples of bullshit jobs to justify the original comment that AI will find efficiencies by letting us throw these away.
Got any? You are an expert on the text so I’m hoping you can identify one.
> And that book sort of vaguely hints around at all these jobs that are surely bullshit but won’t identify them concretely.
See what I mean? We push on where these fake jobs are and you fallback to a subjective internal definition we can’t inspect.
And now let me remind you of the context. If the real definition of bullshit isn’t economic slack, but internal dissatisfaction then this comment would be false:
> What if LLMs are optimizing the average office worker's productivity but the work itself simply has no discernable economic value? This is argued at length in Graeber's Bullshit Jobs essay and book.
"Socialist economies are like "adblock for your life"."
Ever actually lived in anything approaching one? Yeah, if the stores are empty, it does not make sense to produce ads for stuff that isn't there ...
... but we still had ads on TV, surprisingly, even for stuff that was in shortage (= almost everything). Why? Because the Plan said so, and disrespecting the Plan too openly would stray dangerously close to the crime of sabotage.
Socialist economies larger than kibbutzes could only be created and sustained by totalitarian states. Socialism means collective ownership of means of production.
And people won't give up their shops and fields and other means of production to the government voluntarily, at least not en masse. Thus they have to be forced at gunpoint, and they always were.
All the subsequent horror is downstream from that. This is what is inherent to building a socialist economy: mass expropriation of the former "exploitative class". The bad management of the stolen assets is just a consequence, because ideologically brainwashed partisans are usually bad at managing anything including themselves.
This is exactly what I meant, a centrally-planned economy where the state owns everything and people are forced to give everything up is just one terrible (Soviet) model, not some defining feature of socialism.
Yugoslavia was extremely successful, with economic growth that matched or exceeded most capitalist European economies post-WW2. In some ways it wasn't as free as western societies are today but it definitely wasn't totalitarian, and in many ways it was more free - there's a philosophical question in there about what freedom really is. For example Yugoslavia made abortion a constitutionally protected right in the 70s.
I don't want to debate the nuances of what's better now and what was better then as that's beside the point, which is that the idiosyncrasies of the terrible Soviet economy are not inherent to "socialism", just like the idiosyncrasies of the US economy aren't inherent to capitalism.
It is the model, introduced basically everywhere where socialism was taken seriously. It is like saying that cars with four wheels are just one terrible model, because there were a few cars with three wheels.
Yugoslavia was a mixed economy with a lot of economic power remaining in private hands. You cannot point at it and say "hey, successful socialism". Tito was a mortal enemy of Stalin, struck a balanced neither-East-nor-West (though fairly West-friendly) policy as early as 1950, and his collectivization efforts were a fraction of what Marxist-Leninist doctrine demands.
You also shouldn't discount the effect of sending young Yugoslavs to work in West Germany on the total balance sheet. A massive influx of remittances in Deutsche Mark was an important factor in Yugoslavia getting richer, and there was nothing socialist about it, it was an overflow of quick economic growth in a capitalist country.
You've created a tautology: Socialism is bad because bad models are socialism and better models are not-socialism.
> You cannot point at it and say "hey, successful socialism"
Yes I can because ideological purity doesn't exist in the real world. All of our countries are a mix of capitalist and socialist ideas yet we call them "capitalist" because that's the current predominant organization.
> Tito was a mortal enemy of Stalin, stroke a balanced neither-East-nor-West, but fairly friendly to the West policy already in 1950, and his collectivization efforts were a fraction of what Marxist-Leninist doctrine demands.
You're making my point for me, Yugoslavia was completely different from USSR yet still socialist. Socialism is not synonymous with Marxist-Leninist doctrine. It's a fairly simple core idea that has an infinite number of possible implementations, one of them being market socialism with worker cooperatives.
Aside from that short period post-WW2, no socialist or communist nation has been allowed to exist without interference from the US through oppressive economic sanctions that would cripple and destroy any economy regardless of its economic system, but people love nothing more than to draw conclusions from these obviously-invalid "experiments".
"You" (and I mean the collective you) are essentially hijacking the word "socialism" to simply mean "everything that was bad about the USSR". The system has been teaching and conditioning people to do that for decades, but we should really be more conscious and stop doing that.
" no socialist or communist nation has been allowed to exist without interference from the US through oppressive economic sanctions that would cripple and destroy any economy regardless of its economic system"
That is what COMECON was supposed to solve, but if you aggregate a heap of losers, you won't create a winning team.
"Socialism is not synonymous with Marxist-Leninist doctrine. It's a fairly simple core idea that has an infinite number of possible implementations, one of them being market socialism with worker cooperatives."
Of that infinite number, the violent Soviet-like version became the most widespread because it was the only one that was somewhat stable when implemented on a countrywide scale. That stability was bought by blood, of course.
No one is sabotaging worker cooperatives in Europe, and lefty parties used to give them extra support, but they just don't seem to be able to grow well. The largest one is located in Basque Country, and it is debatable whether its size is partly caused by Basque nationalism, which is not a very socialist idea. Aside from that one, worker cooperatives of more than 1000 people are rare birds.
"The system has been teaching and conditioning people to do that for decades, but we should really be more conscious and stop doing that."
No one in the former socialist bloc will experiment with that quagmire again. For some reason, socialism is catnip for intellectuals who continue to defend it, but real-world workers dislike it and defect from various attempts to build it at every opportunity.
We should stop trying to ride dead horses. Collective ownership of means of production on a macro scale is every bit as dead as divine right of kings to rule. There are still Curtis Yarvin types of intellectual who subscribe to the latter idea, but it is pining for the fjords. So is socialism.
> That is what COMECON was supposed to solve, but if you aggregate a heap of losers, you won't create a winning team.
What kind of disingenuous argument is that? The existence of COMECON doesn't neutralize the enormous disadvantage and economic pressure of having sanctions imposed on you.
> Of that infinite number
I'm glad we agree that Soviet communism is not synonymous with "socialism".
> Aside from that one, worker cooperatives of more than 1000 people are rare birds.
You're applying pointless capitalist metrics to non-capitalist organizations and moralizing about how they don't live up to them.
> No one in the former socialist bloc will experiment with that quagmire again.
You're experimenting with socialist policies and values right now, you just don't want to call it by that name because of your weird fixation. Do public healthcare, transport, education, social security benefits ring any bells?
If you talked to people from ex-Yugoslavia, you'd know that many would be happy to return to that time.
> We should stop trying to ride dead horses.
We should stop declaring horses extinct when it's just your own horse that has died.
This is not really a contradiction. When the world became bipolar, there was a lot of alpha in arbitrage. The most valuable Yugoslav (state owned) company was Genex, which was an import/export company -- it would import from one bloc and export to the other bloc, because neither bloc wanted to admit that the other bloc had something they needed. (This set the Yugoslavs up for failure, like so many other countries that believed that the global market would make them rich).
The Soviets and their satellites (like the DDR), had another problem related to arbitrage, and that is that their professionals (such as doctors and engineers and scientists, all of whom received high quality, free, state-subsidized education), were being poached by the Western Bloc countries (a Soviet or East German engineer would work for half the local salary in France or West Germany, _and_ they would be a second class citizen, easy to frighten with deportation -- the half-salary was _much_ greater than what they could earn in the Eastern Bloc). The iron curtain was erected to prevent this kind of arbitrage (why should the Soviets and satellites subsidize Western medicine and engineering? Shouldn't a capitalist market system be able to sustain itself? Well no, market systems are inefficient by design, and so they only work as _open_ systems and not _closed_ systems -- they need to _externalize_ the costs and _internalize_ the gains, which is why colonialism was a thing to begin with, and why the "third world" is _still_ a thing).
Note that after the Berlin Wall fell, the first thing to happen was mass migrations of all kinds of professionals (such as architects and doctors) and semi-professionals (such as welders and metal-workers), creating an economic decline in the East, and an economic and demographic boom in the West (the reunification of Germany was basically a _demographic_ subsidy -- in spite of the smaller size, East Germany had much higher birth rates for _decades_; and after the East German labor pool was integrated, Western economies sought to integrate the remaining Eastern labor pools (more former Yugoslavs live abroad in Germany than in any other non-Yugo part of the world [the USA numbers are iffy, but if true Croatians are the only exception, with ~2M residents in USA, which seems unlikely])).
The problem, in the end, is that all of these countries are bound by economic considerations (this is the thesis of Marx, by the way), and they cannot escape the vicious arbitrage cycle (I mean, here in the USA, we have aggressively been brain-draining _ourselves_ since at least 1980, which is why we have the extreme polarization, stagnation, and instability _today_ -- it is reminiscent of the Soviet situation in the mid 1980s to late 1990s). Not without something like a world government (if there is only one account to manage, there is no possibility of deficit or surplus, unless measured inter-temporally), or an alternative flavor of globalization.
Internationalism is a wonderful ideology, and one that I support. You can make the case that Yugoslavia, the USSR, etc, were an early experiment in Internationalism, and that each succumbed to corruption and unclear thinking (a citizenry that is _inclusive_ by nature and can _think_ clearly is a hard requirement for any successful polity). Globalization, on the other hand, has a bit of an Achilles Heel: when countries asked why they should open their borders and economies to outsiders/foreigners, they were told, "so that we can all get rich!". The problem is that once the economic gains get squeezed out of globalization, countries will start looking for new ways to get rich, even if it means reversing decades of integration. Appealing to people's greed only works to the extent that you can placate their appetites. We should have justified Internationalism using _intrinsic_ arguments: "we should integrate because learning how others see and experience the world is intrinsically beautiful, and worth struggling for".
Note that most of these economic pathologies disappear, when the reserve currency (dollar) is replaced with a self-balancing currency (like Keynes' Bancor: https://en.wikipedia.org/wiki/Bancor). We have the tools, but everyone wants to feel like the only/greatest winner. These are the first people that have to be exiled.
How does that make advertising a bullshit job? The only way advertising won't exist or won't be needed is when humanity becomes a hive mind and removes all competition.
Countries can just ban advertising, and hopefully we will slowly move towards this. There are already quite a few specific bans - tobacco advertising is banned, gambling and sex product advertising is only allowed in certain specific situations, billboards and other forms of advertising on public spaces are often banned in large European cities, and so on.
No. They can ban particular modes. They can’t stop people from using power and money to spread ideas.
In the US hedge funds are banned from advertising and all they did is change their forms of presentation to things like presenting at conferences or on podcasts.
If there were a socialist fantasy of a government review board to which all products were submitted before being listed in a government catalog, then advertising would be the lobbying and jockeying to get that review board to view your product in a particular way - or merely going through the process to ensure correct information was kept.
The parts that are only done to maintain status quo with a competitor aren’t productive, and that’s quite a bit of it. Two (or more) sides spend money, nothing changes. No good is produced. The whole exercise is basically an accident.
Like when a competing country builds their tenth battleship, so you commission another one to match them. The world would have been better off if neither had been built. Money changed hands (one supposes), but the whole exercise had no net effect. It was similar to paying people to dig holes and fill them back in again, to the tune of serious money. This was so utterly stupid and wasteful that there was a whole treaty about it, to try to prevent so many bullshit jobs from being created again.
Or when Pepsi increases their ad spending in Brazil, so Coca Cola counters, and much of the money ends up accomplishing little except keeping things just how they were. That component or quality of the ad industry, the book claims, is bullshit, on account of not doing any good.
The book treats of several ways in which a job might be bullshit, and just kinda mentions this one as an aside: the zero-sum activity. It mostly covers other sorts, but this is the closest I can recall it coming to declaring sales “bullshit” (the book rarely, bordering on never, paints even most of an entire industry or field as bullshit, and advertising isn’t sales, but it’s as close as it got, as I recall)
I think you’ve misunderstood what I’ve written here. Graeber’s book might make it clearer for you, he probably did a better job of explaining it than I do. It’s about spendy Red Queen’s races, not just trying to make something better or trying to compete in general. The application of the idea to military spending is pretty much standard and uncontroversial stuff (hell, it was 100-plus years ago) while his observation of a similar effect in some notable component of ad spending is novel (at least, I’d not seen that connection made before).
I agree that advertising has some of the worst symptoms of an arms race. I think regulations can reduce the most annoying modes of ads (I live in an area with no billboards). I don’t think advertising is a bullshit job - it’s essential, and I don’t think there exists a society without it.
See what I mean? What you see as a bullshit job is just completely misunderstanding how human beings work.
- Which products get included in the candidate list? Every product in existence which claims use?
- How many results can it return? And in what order?
- Which attributes or description of the product is provided to the LLM? Who provides it?
- How are the claims in those descriptions verified?
- What if my business believes the claims or description of our product are false?
- How will the LLM change its relative valuations based on demand?
> The only way advertising won't exist or won't be needed is when humanity becomes a hive mind and removes all competition.
I don't need advertisement to pick the best product for myself. I have a list of requirements that I need fulfilled – why do I need advertisement for it?
I get the Anthropic models to screw up consistently. Change the prefix. Say in the preamble that you are going out after supper or something. Change the scenario every time. They are caching something across requests. Once you correct it, it fixes its response until you mess with the prompt again.
Mass extermination through famine, genocide or plague is another outcome. An Elysium earth worked by robots is a vision tech bro billionaires are rooting for and building towards.
As for your idea, I see no signs of their striving for redistributing their wealth.
I thought it was a far superior UI to facebook when it launched. I tried to use it but the gravity of the network effect was too strong on facebook's side.
In the end I'd rather both had failed. Although one can argue that they actually did. But that's another story.
I very much wanted Google Plus to succeed. Circles was a great idea in my opinion. Google Plus profiles could be the personal home page for the rest of us but of course, Google being Google...
That being said, tying bonuses for the whole company on the success of Google+ was too much even for me.
I actually hope that they do not succeed in the end. Ubiquitous self driving cars will spell the end of what's left of walkable areas in North America and bring about (in time) similar destruction of the urban fabric to Europe and elsewhere. I'm not very articulate and English is my second language but this video below is really worth watching before we all swallow as an axiom the idea that autonomous cars are going to be a good thing:
[EDIT] Most of you seem unwilling to spend an hour to watch a youtube video (although I believe it's worth your time esp if you're from North America) so here's a summary I attempted in another comment:
"Autonomous cars will clog up existing cities by cruising around looking to pick up rides or deliver shit and mill around endlessly or occupy every piece of parking in prime real estate to make sure they are quickly available wherever demand is high (i.e. where people want to or have to be). In time they will phase out human driven cars which will lead to higher speed limits and more infrastructure that supports autonomous driving. Meaning fewer "difficult" intersections, straighter roads, no bike lanes or pedestrian sidewalks. Everything optimized for autonomous cars to endlessly mill around. People will be blocked from being near autonomous cars as those will be going too fast for human reflexes to cope with, so areas where cars drive will not have sidewalks nor bike lanes. This will lead to urban areas that are even more car dependent with only pockets of urbanism that support human scale. To get anywhere one will need to hail one of those autonomous taxis and then zoom in it to a destination where it's again safe to walk in whatever pocket of human activity. Since cars need a lot more land area than humans, the urban infrastructure will mostly cater to them and not to people, because the expectation and argument will be that you can always get your ass shuttled to wherever you need to be."
If self driving cars replace humans, I can safely bike on the road again, not having to worry about some exhausted soccer-parent scrolling tiktok on their phone in their minivan as they use me as a speed bump. Also as a parent/part time family taxi driver, I wouldlove to get back the ~10 hours a week I spend staring at the road. Kids will be driven by waymo to Karate, Soccer, Violin lessons etc. I am ready for this future.
I don't even know what areas of the United States I would consider "walkable". I live in San Francisco, don't own a car, we have "pretty good" public transit, and it's still absolutely miserable getting around. It takes me 40 minutes to go from Outer Sunset to downtown by muni. There are many locations in this city that I can physically jog to faster than public transit.
I can appreciate this technology might negatively impact other countries more heavily, but, for me, it's easily the most exciting tech I interact with and I'm rooting for it whole-heartedly. I'm at around 1000 miles logged on Waymo and am part of their beta tester program for freeway usage.
I also think that post-Covid remote work has probably damaged incentives for increasing the density of cities more so than anything autonomous vehicles will do. San Francisco is actively cutting bus routes, bus density, and threatening to significantly cut BART stops due to budget constraints and reduction in ridership.
It's odd because I do get where you're coming from, and I feel like I should be your target audience, but, for me, the ship sailed so long ago that I struggle to relate to your position.
I think this thread is conflating being walkable with having good transit. A walkable city means almost everything you need is within walking distance. That doesn’t mean there are buses or trains to take you out of this area. I live in a walkable part of the city. Within a 15-minute walk, there are three supermarkets, perhaps twenty restaurants with different cuisines, four pharmacies, one each of USPS/UPS/FedEx for shipping, four different banks, three dry cleaners… you get the idea. The only transportation tool I need is my two legs.
Now of course sometimes I’m not content staying within this 15-minute circle. Then I simply choose the fastest method of transport to get there. Is BART or Muni faster than the Waymo trip? Then yes I’ll take public transportation. That’s what good transit is for.
NotJustBikes doesn't have a particularly great reputation among transit enthusiasts. A lot of his videos have become repetitive and focused on complaints rather than specific ways of making things better. Understandably, few people are willing to spend an hour listening to someone complain on the Internet.
> "Autonomous cars will clog up existing cities..."
Congestion charges. Limited licensing for TNCs. Dedicated public or private holding areas rather than "milling about". All of these have solutions.
> Meaning fewer "difficult" intersections, straighter roads, no bike lanes or pedestrian sidewalks.
It is already best practice in urban design to separate cars that need to quickly transit an area without interacting with it into completely independent routes where there are no bikes or pedestrians, and combine transit/bikes/walking into livable mixed mode streets where cars are not allowed. NotJustBikes has many examples of this, most commonly around Europe.
> To get anywhere one will need to hail one of those autonomous taxis and then zoom in it to a destination where it's again safe to walk in whatever pocket of human activity.
This is what already happens in places that don't have usable, safe, or car-competitive transit, modulo autonomous, including currently most of North America. The solution to needing fewer cars -- self driving or not -- is investment in transit and in ground-up overhaul of existing cities to optimize for transit and deprioritization of cars.
This is my complaint about many types of YouTube pundits.
I had tuned in to some channels for analysis and insightful commentary, for example, film and TV series.
But every one devolved into “Worst episode ever!” and “<studio> has RUINED <franchise>!”
So to sum up, the YouTube recommendations algorithm has ruined independent criticism and there is nothing on anymore. Join my Patreon, “UnJustLikes” for the deep dive!
No, it's orthogonal. But cars that can drive everywhere will show up everywhere, all of the time. Watch the video in its entirety. It makes very strong arguments for why this is a dystopia in the making.
Autonomous cars will clog up existing cities by cruising around looking to pick up rides or deliver shit and mill around endlessly or occupy every piece of parking in prime real estate to make sure they are quickly available wherever demand is high (i.e. where people want to or have to be). In time they will phase out human driven cars which will lead to higher speed limits and more infrastructure that supports autonomous driving. Meaning fewer "difficult" intersections, straighter roads, no bike lanes or pedestrian sidewalks. Everything optimized for autonomous cars to endlessly mill around. People will be blocked from being near autonomous cars as those will be going too fast for human reflexes to cope with, so areas where cars drive will not have sidewalks nor bike lanes. This will lead to urban areas that are even more car dependent with only pockets of urbanism that support human scale. To get anywhere one will need to hail one of those autonomous taxis and then zoom in it to a destination where it's again safe to walk in whatever pocket of human activity. Since cars need a lot more land area than humans, the urban infrastructure will mostly cater to them and not to people, because the expectation and argument will be that you can always get your ass shuttled to wherever you need to be.
Meanwhile, in real life San Francisco, I much prefer being around Waymos as a pedestrian and cyclist than human drivers. While most human drivers are competent and considerate, a small percentage are not -- and given the number of encounters in a single trip, I have these dangerous interactions weekly.
Despite being a noticeable presence on the roads, Waymos have not contributed to congestion at all as far as I can tell.
Disagree. A city is walkable because it is dense: daily destinations like your grocery store are close enough to walk to. But density implies congestion for cars, because if everyone is in a car the roads will be too congested. This happens regardless of whether we have a human driver driving the car alone, or a human sitting inside a Waymo as a passenger. Congestion happens either way. Waymo does not solve the congestion problem, and therefore will not have any effect on the walkability of cities.
But it makes it worse. Once Waymo cars start clogging streets, cruising around waiting for passengers it will amplify the issue. It will be cheap enough to just have them mill around to be quickly available when requested.
In time, human driving will be phased out and that will precipitate removal of speed limits and traffic lights as autonomous cars will be able to use vehicle to vehicle messaging to negotiate intersections. Of course pesky pedestrians and cyclists could still be in the way. That's where lobbying comes in to restrict the pedestrian areas to pockets where cars and people never share the same space. But since cars require much more space than people, the result will be more sprawl and less walkable places, as it will be people who get pushed aside.
> In time, human driving will be phased out and that will precipitate removal of speed limits and traffic lights as autonomous cars will be able to use vehicle to vehicle messaging to negotiate intersections. Of course pesky pedestrians and cyclists could still be in the way. That's where lobbying comes in to restrict the pedestrian areas to pockets where cars and people never share the same space. But since cars require much more space than people, the result will be more sprawl and less walkable places, as it will be people who get pushed aside.
All the incentives you described exist today. On any given road, any space devoted to sidewalks or bike lanes means less space for cars, and you already need separation between car lanes and sidewalks. You also have the same incentive to "restrict the pedestrian areas to pockets where cars and people never share the same space", because any controlled access roadway increases speed and throughput. Finally, if you restrict pedestrians to certain areas (we all live in megatowers?), that actually makes taxis (including robotaxis) less attractive relative to public transit, because their whole value proposition is that they take you exactly where you want to go. Therefore it's unclear how automated cars would make things worse.
I hate how much space in cities is devoted to cars, and I wish we had much better transit of all sorts.
But - I'm just not sure your analysis is right. Someone who drives a self-owned car will park it in a downtown area for hours. Someone who takes a taxi of any sort will use much, much less amortized parking spot space. New York is a pretty good example of this.
Good public transit beats the snot out of cars, but a dense taxi deployment seems to get more people moved per total car-dedicated-space than private car drivership does.
And if we can reduce the amount of space dedicated to parking, we can increase density, which reduces the need for driving.
So the problem will be if we have self-driving whatevers at the expense of public transit, but perhaps not if it's at the expense of private car drivership.
Why would driverless cars mill around? They would just wait around in underground garages. They can even block each other, so they don't need that much space to park.
Was he entirely wrong? Have you tried dumping the stored proc into a frontier model and asking it to refactor? You'd probably have 20 neat stored procs with well laid-out logic in minutes.
I wouldn't keep a ball of mud just because LLMs can usually make sense of one, but refactoring such code debt is becoming increasingly trivial.
Yes. I mean... of course he was. Firstly, I had already gone through this process with multiple LLMs, from various perspectives, including using Deep Research models to find out whether any other businesses faced similar issues, and/or whether products existed that could help with this. That led me down a rabbit hole of data science products related to regulatory reporting of a completely different nature, which was effectively useless. tl;dr: Virtually all LLMs - after understanding the context - recommended doing the thing we had already been urging the business to do: hire a Technical BA with experience in this field. And yes, that's what we ended up doing.
Now, to give you some idea of why his suggestion was obviously absurd:
- He had never seen the SP
- He didn't understand anything about regulatory reporting
- He didn't understand anything about financial derivatives
- He didn't understand the difference between Transact SQL and ANSI SQL
- No consideration given to IP
- etc etc
Those are the basics. Let's jump a little bit into the detail. Here's a rough snippet of what the SP looks like:
    SELECT
        CASE
            WHEN t.FLD4_TXT IN ('CCS', 'CAC', 'DEBT', ..... 'ZBBR') THEN '37772BCA2221'
            WHEN t.FLD4_TXT IN ('STCB') AND ISNULL(s.FLD5_TXT, s.FLD1_TXT) = 'X' THEN 'EUMKRT090011'
        END AS [Id When CounterParty Has No Valid LEI in Region]
        -- remember, this is around 5000 lines long ....
Yes, that's a typical column name that has rotted over time, so no one even knows if it's still correct. Yes, those are typical CASE statements (170+ of them at last count, and no, they are not all equal or symmetric).
So... you're not just dealing with incredibly unwieldy and non-standard SQL (omitted); no one really understands the business rules either.
So again... yes, he was entirely wrong. There is nothing "trivial" about refactoring things that no one understands.
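To be fair, the purely mechanical half of such a refactor is the easy part: the simple `WHEN ... IN (...) THEN '<id>'` branches can be hoisted out of code into a reference table and replaced with a join. A minimal sketch of that transformation (hypothetical table and column names, SQLite standing in for SQL Server, and only covering the simple IN-list branches, not the asymmetric guarded ones):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Hypothetical stand-ins for the real trade table and its rotted columns.
cur.execute("CREATE TABLE trades (fld4_txt TEXT)")
cur.executemany("INSERT INTO trades VALUES (?)", [("CCS",), ("DEBT",), ("STCB",)])

# Each simple IN-list branch of the CASE becomes rows in a mapping table.
cur.execute("CREATE TABLE lei_fallback (fld4_txt TEXT PRIMARY KEY, fallback_id TEXT)")
cur.executemany(
    "INSERT INTO lei_fallback VALUES (?, ?)",
    [("CCS", "37772BCA2221"), ("DEBT", "37772BCA2221"), ("STCB", "EUMKRT090011")],
)

# The 170-branch CASE collapses into one join: the rules become data, not code.
rows = cur.execute(
    """SELECT t.fld4_txt, m.fallback_id
       FROM trades t
       LEFT JOIN lei_fallback m ON m.fld4_txt = t.fld4_txt
       ORDER BY t.fld4_txt"""
).fetchall()
print(rows)
```

But that only restates the problem. Any LLM can propose this shape; what it cannot tell you is whether 'STCB' should still map to that identifier at all, or what the asymmetric branches with ISNULL guards actually mean to the regulator - which is exactly the part that isn't trivial.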