Hacker News | the_af's comments

That's not true. They could have standardized on a few rugged platforms -- and in fact, some in Nazi Germany advocated for that -- but their industry and engineering were generally self-sabotaging and a mess.

They actually did standardize pretty quickly. The Panzer III and Panzer IV were the workhorses in Russia, paired with the StuG (which used the Pz III chassis). I think it's arguable that no production strategy could have led to German success. Had they tried to produce T-34 or Sherman type tanks (and the Panther was kind of intended to be that tank), they still would have been overwhelmed by the sheer number of tanks built by the Allies. The Soviets at their peak year produced over 29K tanks, with the US contributing around 21K. The Germans maxed out at around 8K.

IMHO, the Soviets alone could have eventually defeated Germany, though at much greater cost (as if over 20m casualties wasn't already incredible).


Agreed that arguably no strategy could have helped them against the Soviet Union; going to war with them was a major blunder.

But the Nazis self-sabotaged constantly. The Panzer IV and the StuG III (with the outdated Panzer III chassis) were arguably the closest thing to a standard for armor, but they were constantly diverting effort to alternative platforms that were too complex to mass produce and maintain. And the same goes for other weapons.


The thing I don't get about accounts of NDEs and what people say about them afterwards is this: if they lived to tell the tale, their near death wasn't actual death. They didn't "peek over to the other side". So whatever they experienced was something the brain experienced well within the realm of the living. And we know it was within the living, because the person recovered and was able to recount the experience! How can there be any argument about this? How anyone can draw any conclusions about an alleged afterlife from this is beyond me.

Isaac Asimov famously reflected upon this. When he had a close call with death, he didn't see anything. He didn't expect to, and he didn't. It's very likely that our expectations shape what we see, at least partly... that's the brain conjuring imagery and trying to make sense of what it can, I suppose.

Whenever I've been under anesthesia, it was like an on/off switch. I didn't even dream, even though I do remember some of my dreams.


Not all of them pass on to the other side but some are allowed to and come back. Search youtube for "atheist dies and sees Hell", for example.

I do remember dreams. The times when I was anesthetized, it was that on/off switch; I completely lost time. No NDE or even dreams.

It's not about capability. It's about who "holds the key". And sure, many of those currently with deep pockets pushing for AI will miscalculate and fall by the wayside. I think many people who are not in the 0.001% are miscalculating right now on HN.

What's important is that ultimately some small subset owns this, and it doesn't matter how smart they are, only that they own the thing and that it cannot be employed against them (because they hold the key).


That's not the rebuke you think it is. You made a claim (not an original one, I've read it before), someone expressed doubts about your claim (which, if it proves false, will have dire consequences), and you cannot wave that off with "there are no guarantees in life".

Sorry, you made a claim, there's good reason to believe your claim may not pan out, and if it doesn't the consequences are dire.


I don't think it's a rebuke. I'm just explaining the reality of the situation.

You said

> New companies will appear doing things that we can't even imagine yet

I have a really big imagination, so I will believe it when I see it. If you have any real idea what these new companies might be doing in the future, then I'm all ears. But until then, maybe stop trying to claim some kind of future knowledge based on handwaved nonsense like "we can't even imagine what the future will look like".

And then trying to claim that's "the reality of the situation", please be serious

Edit: Maybe if you think the future is so unimaginable, you should take a look around at the present. Can you identify anything in our lives today that was not imagined by anyone in the past? Think about how, for every piece of technology made nowadays, someone can say "it's like the Torment Nexus from Famous Piece of Literature!"


There are more options:

Mass unemployment, consolidation of all AI-related benefits in the hands of a few, an increase in demand that doesn't outpace the loss of employment, increases in capability (short of AGI) that mean a few chosen people can do most things without hiring other people, etc.


If there is mass unemployment, who is going to buy anything from anyone? The "few" don't need or want us to be scraping in the dirt. They want us spending lots of money on their products, so their wealth increases.

I know it is the classic sci-fi dystopia where somehow despite endless advances in tech and automation, the masses can't figure out how to make it work for themselves and end up living in shanty towns on top of each other waiting for gifts from the elite, or scraping in dirt outside the cities, but come on... I just don't see that as being credible.


> If there is mass unemployment, who is going to buy anything from anyone? The "few" don't need or want us to be scraping in the dirt.

> They want us spending lots of money on their products, so their wealth increases.

If we're considering scifi scenarios, imagine this: if full blown automation of everything is achieved, why would the "haves" need the "have-nots" buying anything at all? Why would they need them to exist, at all? Think about it. It's an extreme and we're not near it... yet.

> despite endless advances in tech and automation, the masses can't figure out how to make it work for themselves

If the tech (or the really helpful tech) is guarded behind a lock, and they don't hold a key, it's not a matter of figuring things out. Unless by figuring out you mean revolt?


> If we're considering scifi scenarios, imagine this: if full blown automation of everything is achieved, why would the "haves" need the "have-nots" buying anything at all? Why would they need them to exist, at all? Think about it. It's an extreme and we're not near it... yet.

So we reach this post scarcity society, where everyone could be living a life of luxury, but this whole group of "haves" as you call them (who would they be?), somehow form this uniform view that they just don't want 99.9% of other people around and let them all die off while they guard themselves in gated cities or something.

It just makes no sense at all to me. Like in a sci-fi novel or movie where it is a plot requirement, ok, but in reality, I just cannot see the path and all the things required to get to that particular reality. So many ways it would work out differently.


80% of “serious” discussion on contemporary LLMs is no better than sci-fi. Worse, even, because it’s by the readers and not the writers, who ostensibly made some effort to make their works realistic.

I'll add to this that 80% of any discussion of LLMs is instigated by CEOs of AI companies, and they themselves seem to believe scifi is a real-world education.

So yes, it's a bunch of scifi-addled selfish amateurs guiding and predicting the future. The AI people.

(Remember the "do not build the Torment Nexus" meme? It has a point).


> So we reach this post scarcity society

A full automation society, where the implied post scarcity is not necessarily for everyone. Maybe it needs most of the population not to exist in order for the few to enjoy the lack of scarcity. Resources aren't infinite, but greed is.

I mean, resources and wealth could be far better distributed right now, no need for AI, yet most times this is attempted the wealthy fight tooth and nail against it, even though the impact on them would be very small. What makes you think having AI will magically make them better people?

> [...] this whole group of "haves" as you call them (who would they be?) somehow form this uniform view that they just don't want 99.9% of other people around

A uniform view on this matter is easier to achieve by an extremely small subset of people.

And really, do you need to ask "who are they"? I mean, the billionaires and owners of concentrated capital of the world?

> I just cannot see the path and all the things required to get to that particular reality.

You cannot see a path from unchecked capitalism and extreme concentration of capital, via total automation, to this particular reality?

It sounds like a failure of imagination. I see the people at the top as lying sociopaths, and I have no trouble believing this.


Powerful people like to wield power over others. They want the masses to exist specifically so that they can feel superior and exercise their authority over others. They simply want the masses to be forever below them.

This is actually an argument I find convincing. The powerful need the less powerful to exist, because otherwise in relation to whom would they be powerful? Who would show them they are powerful?

But even then, how many of the others would they need to exist?


> It sounds like a failure of imagination.

I see it as the opposite. Doomerism is the easy path. It takes no imagination to repeat doomer memes and sci-fi dystopian tropes without articulating exactly how we get there. I think what is far more likely is that as these tools proliferate, we continue on the path we've always taken: some discomfort, probably negative impacts for some, but ultimately a better life when measured at the median. I don't see a way the billionaires take all power away from 99.999% of the rest of humanity without literally murdering them. And why would they want to murder them? It's much easier to just let everyone benefit.


> Doomerism is the easy path. It takes no imagination to repeat doomer memes and sci-fi dystopian tropes, without articulating exactly how we get there.

It's not "doomerism" because there is a call to action, impractical as it may seem. TFA is stating one possible, if flawed course of action. There may be others. Doomerism just cries "the comet is coming, end your lives now!". Also, if you're honest, there is some articulation of how this may come to be, it's just that nobody is an oracle and the particulars are shifting.

> I don't see a way the billionaires take all power away from 99.999% of the rest of humanity without literally murdering them. And why would they want to murder them?

They don't need to actively murder them, they just need to restrict access to resources required for living (maybe made worse by the climate crisis) and this would alone cull the population "naturally".

Imagine a world of full, total automation of everything. The rich always needed the less rich to work for them, make things for them, pick up raw materials for them, take care of them, even be their security forces. But all of this would be unneeded with an inexhaustible force of robot labor [1]. This is one of my worries if they ever go all-in with the automation of the military... who will be there to have a crisis of conscience if given immoral orders? We're not there yet, but this is something to ponder.

> It's much easier to just let everyone benefit.

There are things right now that would be easy to do that do not get done. And in any case, I don't think anybody is arguing about what would be easier? Also, before you say it: who cares if it's self-destructive? There's a current subset of rich people who don't care if we're destroying the planet, presumably they don't care that much about their children or their children's children. Or maybe they hand wave it away, "someone, somehow, will take care of this problem in the future".

----

[1] a funny tangent, obligatory Bob the Angry Flower: https://www.angryflower.com/atlass.gif


I just object to your reasoning on so many levels. I regard it as the current zeitgeist of anti-capitalism. Just lazy blame.

We are objectively living in the best times of human history, ever. The global median person in the world is much better off than their predecessors.

Is wealth inequality growing? Yes! This makes people angry. Does that automatically extrapolate to billionaires will murder people (actively or inactively) simply because they can?

A resounding, emphatic, NO. It doesn't extrapolate to that.

What will almost certainly happen is the same as every other time. The technology will disrupt, cause short term pain for some, but ultimately become just another commodity and push up the standard of living for the median person. Billionaires will continue to be billionaires, normal people will adjust, we'll find out ways to put human productivity to use, life will go on.


> What will almost certainly happen is the same as every other time.

This is what seems to me like a failure of imagination. As I said, I envision other possible and even likely futures. I'm not an oracle so I don't guarantee them, I'm just saying we should be aware of those possible futures, and if possible do something about them.

There's no inevitability of progress. That's just wishful thinking.

I respect that you come from a different ideological perspective, but don't disregard mine as lazy. Chalking this up to "lazy anti-capitalism" is, in itself, a lazy position to adopt.


Like you said, it is a failure of imagination. When someone says, "the billionaires and trillionaires won't need anyone else," the dystopian scenario is not necessarily "therefore other people won't exist or will eventually become extinct or killed"; it's that other people will be straight-up enslaved. With all the torture and suffering that entails. You know, the dystopian scenario that is more in line with centuries of recorded human history... The point is the rich won't need to listen to anyone else.

Why on earth would billionaires want to do this?!

It is complete dystopian fantasy.


They don't just wake up one day and want to do this. They fear losing their power, and out of that fear they try to maintain it at considerable cost to others. The dynamics of society become such that the power imbalance and wealth inequality continue to increase, until eventually the threshold to something indistinguishable from slavery is crossed.

Edit: By the way, just the other day the Trump admin trotted out a Doordash grandma in front of the cameras and asked her what she thought of trans women in sports. This grandma is doing doordash to pay off the medical debt of the cancer treatment of her dead husband because the US of A does not provide the minimum healthcare befitting of the richest country on Earth. We are already living in a dystopian fantasy.


Eh, nope.

We’ve had economies where rich people existed in one economy and everyone else lived in another. Class mobility was poor.

Take the current K-shaped economy, where the majority of retail spending comes from rich people, who are not the majority of the population.


> It can't. It can't even deal with emails without randomly deleting your email folder [1]. Saying that it can make decisions and replace humans is akin to saying that a random number generator can make decisions and replace people.

I don't think the comment you're replying to is saying that an evil AI bot will kill people. They are saying something along the lines of: mass job loss doesn't bother the AI companies because in the AI-powered future they envision, population reduction is a positive side effect.


I'm pro FOSS, militantly so. FSF-style.

But... playing devil's advocate, if AI makes it very easy to find exploits without the source code, wouldn't it be doubly effective finding them with the source code as well? And why is the dichotomy posed by this blog post "open source with AI reviews by everyone" vs "closed source but only the bad guys use AI"? What if the scenario was: closed source and the authors/security team use every AI tool at their disposal to find bugs? What do the community's eyeballs add to this equation, assuming (big if) AI review of exploits is such a force multiplier?

Before any knee-jerk reactions: big fan of open source, I'm not arguing this will kill it, I don't have the faintest idea what Cal.com is and I think a world without FOSS would be a tragedy, I run linux and most of my software on my personal PC (other than games) is FOSS.


> As an engineer, I'm never more excited about this job.

How long do you think it'll take for the AI trend to mostly automate the parts of your job that still make you excited?

Everyone thinks it won't be them, it will be others that will be impacted. We all think what we do is somehow unique and cannot be automated away by AI, and that our jobs are safe for the time being.


> How long do you think it'll take for the AI trend to mostly automate the parts of your job that still make you excited?

The exciting part of the job is, and always has been, listening to idle chitchat where you pick up on the subtle cues of where one is finding difficulty in their life and then solving those problems. I think AI could already largely handle that today just fine, except:

You have to convince, especially non-technical, people to have idle chitchat with machines instead of humans

-or-

Convince them of and into having a machine always listening in to their idle conversations with humans

Neither of those is all that palatable in the current social landscape. If anything, people seem to be growing more wary of letting technology into their thoughts. Maybe there is never a future where humans become accepting of machines always being there, trying to figure out what is wrong with them.

The trouble with AI replacing jobs is that a lot of jobs exist only because people want to have other people to talk to and are willing to pay for the company.


> How long do you think it'll take for the AI trend to mostly automate the parts of your job that still make you excited?

Yeah, no one ever thinks beyond "whoa, how cool, I cloned Slack in 15 minutes!"

Personally, the thing I find more depressing is turning a career that was primarily about solving interesting puzzles in elegant ways into managing a swarm of idiot savant chatbots with "OK, that looks good" or "no, do it better" commands.


The problem that I'm trying to solve with agents is similar here; for instance, my comment likely made zero impression on you because I'm against both of the things that you are also against here.

> managing a swarm of idiot savant chatbots with "OK, that looks good" or "no, do it better" commands.

That’s what the management class thinks of the software development process anyway; LLMs are just more idiotic and more savant than SWEs.


As someone in the 99th percentile in terms of token usage, it's super clear to me where the agent will not be able to replace my judgement. Two areas:

1. If it exceeds the context, the agent does random stuff that often works against simplicity and coherent logical structure.

2. LLMs have zero intention, and rely on you to decide what to build and, more importantly, what not to build.

As such, I'm the limit on the number of concurrent agents working for me, because there is still a limit to my output of engineering judgement. I do get better, both at generating and delivering this judgement. Beyond this limit, the output becomes garbage.

As of today, the AI does not automate me in any way; I have something that it just flat out doesn't have.


Playing devil's advocate here, I'm not antagonizing you but thinking out loud.

> if it exceed the context the agent does random stuff, that are often against simplicity and coherent logical structure.

That's a current technical limitation. Are you so sure it won't be overcome in the near/mid future?

> LLM has zero intention, and rely on you to decide what to build and more importantly not build

But work is being done to even remove or automate this layer, right? It can be hyperbole (in fact, it is) but aren't Anthropic et al predicting this? Why wouldn't your boss, or your boss' boss, do this instead of you? If they lack the judgment currently, are you so sure they cannot gain it, if they don't have to waste time learning how to code? If not now, what about soon-ish?

> At this current year and date, the AI does not automate me in anyway

Not now, granted. But what about soon? In other words, shouldn't you be worried as well as excited?


Well if you do nothing you should definitely be worried, because not using LLM is rapidly becoming untenable.

If you do a lot, you'll grow skeptical about some of the claims and hype, and have a sense of where this is leading to.

My position is that if someone uses LLMs a lot, they may be right or wrong about the future of LLMs. If they don't, then they are definitely not right, or only lucky.

My personal judgement is that both of these are hard caps until they invent something that's not a transformer, starting from scratch basically.


> because not using LLM is rapidly becoming untenable

Completely agreed. This is not what I'm advocating for. And definitely, there's a lot of self-serving hype (and fearmongering can be another kind of hype) by AI companies. But some of it I think will be true, or enough companies will believe it to be true, which amounts to the same.

I'm just worried, I cannot help it. And I'm not saying "don't use AI", I'm pushing back about the feeling of reckless "excitement".


Does it seem to you like those issues will be solved soon? Does your boss have the time to do this AI wrangling work on top of their other tasks even if they don't have to learn to code?

> Does it seem to you like those issues will be solved soon?

No.

But I was also very skeptical about AI being able to code semi-reliably during the early stages of GPT hype, and look where I'm now: most of the code I produce is written by an AI. So I was wrong before, which makes me doubt my own ability to predict the near future.

> Does your boss have the time to do this AI wrangling work on top of their other tasks even if they don't have to learn to code?

My boss' boss would probably love to get rid of both me and my direct boss. A whole class of problems would disappear, freeing up time for people higher up the chain to focus on this... either them or a tiny group of engineers, which leaves me out of a job either way. I've already seen people in small shops get fired because their immediate semi-technical boss can now do their job with AI (I can't go into details for privacy reasons. Also, it doesn't matter if the end result is flawed; it matters that "mission accomplished" and someone is out of a job).


> But I was also very skeptical about AI being able to code semi-reliably during the early stages of GPT hype, and look where I'm now: most of the code I produce is written by an AI

My impression from a couple years ago was that it was fairly decent at coding; it was just slow to go from question -> code, and the tooling around that has improved significantly so that it's all pretty quick now. I think whether or not the models are fundamentally better at raw coding is a murkier question.

They still fall down at bigger architectural tasks, go off the rails, hallucinate, etc. So, it seems to me like a core problem with the current technology.

> it doesn't matter if the end result is flawed, it matters that "mission accomplished" and someone is out of a job

This is a short term problem. If the market has any sanity left, the shops that maintain the talent to execute well will out-perform the shops that were short-sighted.


> My impression from a couple years ago was that it was fairly decent at coding, it was just slow to go from question -> code, and the tooling around that has improved significantly so that it's all pretty quick

Your experience is very different from mine. Early GPT/LLM tech was hilariously wrong. It famously hallucinated code out of nowhere, made breaking changes all the time, failed to follow very simple instructions. I remember when it couldn't play Tic Tac Toe! It hallucinated board positions and rules. I used to break it all the time, for fun (and it didn't take much, it mostly fell down the stairs on its own). Now it can play far more complex games.

Was I right to be skeptical? Well, based on what I saw, I was right. GPT was impressive and fun but also hilariously wrong most of the time. Until it wasn't!


If they never learned to code, it wouldn't be very easy for them to build things or catch the BS that AI generates.

Yes, this is the obvious problem.

We've been through cycles like this before. Back in the day, Dreamweaver was going to put every web developer out of a job. More recently, Squarespace was going to do something similar. However, as soon as you step off the well-trodden path, you encounter tougher-to-debug issues, or you want some customization that the tools aren't aware of or designed to handle, and now you're hiring or paying a specialist again.


> We've been through cycles like this before. Back in the day, Dreamweaver was going to put every web developer out of a job [...]

I get what you're saying. This is why I was also skeptical, initially. But consider this: this time, it's qualitatively different, and more importantly, companies seem to believe so, which has real impact on our jobs.

Dreamweaver never threatened my job. Not once. Neither did Squarespace. I'm sure they did threaten some jobs, but ultimately they simply didn't replace the mind and hands guiding them, and in fact, they never aimed to. "No code" tools were similarly misguided for a lot of real use cases. However, this time, AI seems to be making real progress toward this, and is becoming a real threat to jobs.

The argument of "but when calculators/writing/$SOME_OTHER_TECH was introduced..." doesn't fly with me. $OLD_TECH is not necessarily analogous to new tech, or to AI in particular.

What if this time it's different?


The marketing departments at tech companies have predicted 10 out of the last 2 technology revolutions. (Crypto is the future of money, MOOCs will kill universities, Zoom will kill the office, self driving cars will end personal automobile ownership)

Also, it’s a little too convenient that businesses are getting to spin their layoffs as a result of AI, rather than a weakening overall market (tariffs, higher energy costs) and a misallocation of resources (over-investment in VR, crypto).


I agree, but this time it seems qualitatively different.

I think you (we) are falling into the trap of thinking "it was BS before, therefore it's also BS now." The whole "when calculators were introduced..." yadda yadda.


Yes. This is why senior+ engineers will be fine for a while. It's the future of the industry I'm worried about. We are the last generation that will know the world before GenAI.

If you’re a senior, your upcoming competition is self-lobotomizing by relying too heavily on AI without knowing how things work and, more importantly, why things break. That should be good for you.

Yes, but not good for the industry in general.

> But now that signal is even more rubbish. Even readmes and blog posts are becoming worse signals since they don't necessarily showcase your own communication skills anymore nor how you think about problems.

Yup. I've spotted former coworkers who I know for a fact can barely write in their native language, let alone in English, working for AWS and writing English-language technical blog posts in full AI-ese. Full of the usual "it's not X, it's Y", full of AI-slop. Most of the text is filler, with a few tidbits of real content here and there.

I don't know about before, but now blog posts have become more noise than signal.


It's a strong signal in the negative direction, the best kind of signal really.

The "dead Internet" theory has become more real. It's especially bad on LinkedIn. Everyone is now an "AI expert", posting generated slop and updating their profiles with AI enhanced head shots.

> It's especially bad on LinkedIn

Agreed, but to be fair, LinkedIn was especially bad to begin with.

Even before AI-slop, LinkedIn posts were rightfully mocked. Self-congratulatory or self-pitying, full of empty platitudes and "lessons learned" and "journeys" (ended or started). There was never anything worth reading to begin with.

Now it's of course worse. I don't think I can stand reading about another self-appointed expert on LinkedIn writing about their completely unwarranted strategy and/or lessons and/or skepticism about AI.

I only go to LinkedIn for the daily puzzles!


Yes, we have more "thought leaders" than ever, all acting like copy-and-pasting from a textbox is some sort of unique skill.
