Hacker Newsnew | past | comments | ask | show | jobs | submit | SirensOfTitan's commentslogin

I don't really think this is at all at the quality bar for posts here. This is obviously AI-slop -- why should I invest more time reading your slop than you took to write it?

Even so, at what point do we consider the LLM-ification of all of tech a hazard? I've seen Claude go and lazily fix a test by loosening invariants. AI writes your code, AI writes your tests. Where is your human judgment?

Someone is going to lose money or get hurt by this level of automation. If the humans on your team cannot keep track of the code being committed, then I would prefer not to use your product.


> I've seen Claude go and lazily fix a test by loosening invariants.

He does pull a sneaky on you from time to time, even nowadays, in v4.6, doesn't he?

To me it's analogous to the current situation at the strait of Hormuz - it's an enormous crisis but since almost everyone has a buffer of oil stockpiles, we can pretend it's not there.


this is extremely strawman - with this your basically saying any software ever that has parts written by automation or cron jobs (even before llms) is not a product worth using? foolish.

Your response reads much more like a strawman than my original comment.

I’d challenge you to identify where in my post I said I wouldn’t use software that employs automation?

It is pretty clear I am not talking about running CI for automated and predictable signals or cron jobs. I am talking about using AI to write code and also fix tests.

It is exceedingly clear in practice that the volume of code produced by LLMs is too much for the humans using these tools to read and understand. We are collectively throwing decades of best practices out of the window in service of “velocity.” Even the FAANG shops I know of who previously had good engineering cultures seem to be endorsing the cult of: AI generated everything with stamp approval.


Cron jobs are not capable of flat-out deceit.

In my mind, AI is making a lot of engineers, including Carmack, seem fairly thoughtless. At the other moments in recent history where technology has displaced workers, labor has either had to fight some very bloody battles or had stronger labor organization. Tech workers are highly atomized now, and if you have to work to live, you're negotiating on your own.

It seems like Carmack, like a lot of tech people, have forgotten to ask the question: who stands to benefit if we devalue the US services economy broadly? Who stands to lose? It seems like a lot of these people are assuming AI will be a universal good. It is easy to feel that way when you are independently wealthy and won't feel the fallout.

Even a small % of layoffs of the US white collar work force will crash the economy, as our economy is extremely levered. This is what happened in 2008: like 7% of mortgages failed, and this caused a cascade of failures we are still feeling today.


Software engineers have been automating away workers' jobs from the beginning. "Computer" was once a job title. There were armies of switchboard operators at the phone company. Companies had typing pools, mail clerks, and file clerks. We write shell scripts and development tools to automate our own jobs.

Most of us got into engineering for the means (programming computers) rather than the ends (automating away jobs).

I guess the people that have been rejoicing from the AI revolution are of the latter type.


Or maybe they find the idea of computers that can think just as exciting as you found programming at the start of your career?

I never found the idea of a thinking computer exciting, just as I don’t find the idea of a thinking screwdriver exciting.

These days I see the ultimate goal to create a super-intelligence to be blasphemous, if not existentially dangerous and I am afraid by how nonchalant everybody is about it.

I quite enjoy a reality where humans and biological life are in control of their destiny, but it’s apparently become a taboo opinion around these parts.


Good for you. But other people are allowed to find things exciting that you don't.

Personally I'd find the idea of thinking screwdriver... Well, weird. But definitely amazing and exciting.


I find the idea of a thinking screwdriver annoying. Thinking things are difficult to reason about, and tools that are difficult to reason about are frustrating to use.

A thinking screwdriver:

"You know what ... screw this."


I think a lot of engineers ignore the ends because they enjoy the means. The ethical impact of their work doesn't matter because they get to work on cool technology.

Those were electrical engineers, digital switches came out later... regardless we are talking about labor of a much larger industry.

I guess 25 years of "unions are for under-performers" is finally going to bite us in the ass.

I'm not aware of any labor efforts that have successfully fought automation long term.

There's been plenty of temporary victories, but even the unions often acknowledge it's temporary.


The point is not to fight automation. The point is to fight for a better distribution model.

Well you are still right though. There were only temporary wins.


> in recent history where technology has displaced workers, labor has either had to fight some very bloody battles or had stronger labor organization

what examples are you thinking of?


Most of 19th and early-20th century history, which is very much recent history.

Look up:

- The Haymarket Affair

- The Homestead Strike

- The Triangle Shirtwaist Factory Fire

- The Ludlow Massacre

- The Battle of Blair Mountain

You could also simply have taken the quote you were responding to and run it through a few LLMs to acquire those examples.


lol this got downvoted - sorry that I studied history!

> You could also simply have taken the quote you were responding to and run it through a few LLMs to acquire those examples.

Wasn't me, but probably because this was unnecessary and rude. An example, or a link, when a claim is made, is always nice, turns a hollow claim into something informative. Better signal to noise is nice.


That’s funny.

I find it pretty rude to ask a question on a fairly well-documented historical topic that you could also very easily have found out with a simple Google search. Back in the day, we used to reply to people, “Let me Google that for you,” when someone asked such a low-effort question.

Your original reply strongly indicated that you were skeptical and questioning the user’s claim. There is a very large body of historical research documenting all of these things.


> Your original reply strongly indicated that you were skeptical and questioning the user’s claim.

No, I was honestly genuinely interested. This is foreign to me and thought there might be an interesting starting point. You should read comments with a charitable interpretation.

You should check out the HN comment guidelines [1], which the mods take seriously.

[1] https://news.ycombinator.com/newsguidelines.html


This is a conversation forum, so it's natural for people to ask questions of each other. Sure, we could, in principle, ask Google, or ChatGPT for everything, but then why have an online conversation at all?

nomel couldn't have downvoted you (HN constraint), stop the attack. LMGTFY has a terrible rep on HN (I'd link a search, but you can easily find).

I think my definition and your definition of what constitutes an attack are fairly different. I’m offering feedback, not an attack.

> Even a small % of layoffs of the US white collar work force will crash the economy, as our economy is extremely levered.

A major economic crash as the only consequence would be the good ending.

The real societal risk here is that software development is not just a field of primarily white men, it was one of the last few jobs that could reliably get one homeownership & an (upper) middle class life.

And the current US government is not, shall we say, the most liberal. There is a substantial risk that when forced with the financial destitution of being unemployed while your field is dying, people will radicalize.

It takes a good amount of moral integrity to be homeless under a bridge and still oppose the gestapo deporting the foreigners who have jobs you'd be qualified for. And once the deportations begin, I doubt they'll stop with only the H1Bs. The Trump admin's not exactly been subtle about their desire to undo naturalizations and even birthright citizenship.


I totally agree. I've written about this topic a lot on this site, probably most recently here:

https://news.ycombinator.com/item?id=47115597

The US is built on-top of a high value service economy. And what we're doing is allowing a couple companies to come in, devalue US service labor, and capture a small fraction of the prior value for themselves on top of models trained on copyrighted material without permission. Of course, to your point: things can get a lot worse than that. I honestly don't think a lot of executives even know how much they're shooting themselves in the foot because they seem unable to think beyond the first order.

I also see a lot of top 1% famous or semi-famous engineers totally ignoring the economic realities of this tech, people like: Carmack, Simon Willison, Mitchell Hashimoto, Steve Yegg, Salvatore Sanfilippo and others. They are blind to the suffering these technologies could cause even in the event it is temporary. Sure, it's fun, but weekend projects are irrelevant when people cannot put food on the table. It's been really something to watch them and a lot of my friends from FAANG totally ignore this side. It is why identity matters when people make arguments.

I also think I'm insulated partially from the likely initial waves of fallout here by nature of a lucky and successful career. I would love it if the influential engineers I mentioned above stopped acting like high modernists and started taking the social consequences of this technology seriously. They could change a lot more minds than I could. And they could ensure through that advocacy for labor that we see the happiest ending with respect to rolling out LLMs.

Unfortunately I don't really believe labor has much teeth anymore, and tech will wake up too late to do anything about it.


> I honestly don't think a lot of executives even know how much they're shooting themselves in the foot because they seem unable to think beyond the first order.

It's just so depressing. You see Microsoft and Google's CEOs being completely reckless with investment & the economy. And it's just ... HAVE THEY NOT LOOKED INTO A MIRROR? DO THEY NOT REALIZE THEY ARE THE FALL GUYS?!

Nevermind how the vast majority of major CEOs can't even run a business anymore. An old boys club of morons running the entire economy.

> And they could ensure through that advocacy for labor that we see the happiest ending with respect to rolling out LLMs.

It's just more of the same old "Software dev doesn't need unions". The top 1% always think they're pointless because they made it without unions.

> Unfortunately I don't really believe labor has much teeth anymore, and tech will wake up too late to do anything about it.

Amusingly, I hold the opposite sentiment.

Labor isn't going anywhere. These executives and managers can barely tie their own shoelaces. Big Tech and the current startup scene are laughably dysfunctional.

The moment the economic recession really starts to set in, everyone's gonna try to cut down their SaaS spending. Then, the days of being able to shit out some (AI or not) slop and charge double price will be well and truly over.

Once software firms have to compete on quality again, labor is going to be more important than ever.

AI may not even be meaningfully involved in software dev. To break even at the API prices would require charging on the order of 1-2 thousand dollars, per month, per seat. Factoring in long term training costs will will make that several times worse.

... Before we consider that we're probably heading into an oil crisis making energy and computer hardware much more expensive.

I doubt employers are going to pay the $10,000/month/seat required to make AI profitable for everyone in the supply chain. Certainly not during the worst recession this side of WWII.


They are not the fall guys. They are at the buffet with the biggest plates, and when the buffet ends, they'll have the most food on their plates.

> the US is built on a high value service economy

The purist forms of capitalism I’ve seen are places with low prices, a large working class, practical marketing, and high competition - often they’re considered “3rd world” places.

The US economy, if it wants to remain “1st world” must have high prices. It has to contain an element of scarcity (however faux) in order to be sold at a premium, or be able to impart some privileged (institutional) knowledge as a firm - which should be as esoteric as it is scarce.

It can’t be quality alone since all building and manufacturing is effectively outsourced. It has to have a premium brand recognition or monopolistic aspects to it that necessitate a high price.

So the challenge for the first world, during the rise of China (Mexico, etc.), is to find new ways to justify the privileged position using this new technology as a lever to do so.


This reads as incredibly damning to me. PR throughput should be a metric that is very supportive of the AI productivity narrative, but the effect is marginal.

Before everyone gets at me: smoking cigarettes increases your risk of lung cancer by 15-30x. Effect size matters. As does margin of error: what is the margin of error? This "increase" could easily be within noise.

PR throughput is also not a metric I would ever use to determine developer productivity for a paradigm shifting technology. I would only ever use it to compare like-to-like to find trailheads: is a team or person suddenly way more or less productive? The primary endpoint for software production is serving your customer or your mission, and PR throughput can't tell you whether any of that got better. It also cannot tell you the cost of your prior work: the increase in PR throughput could be more PRs to fix issues introduced by LLM-assisted work.


I suspect the issue is the SDLC methodology of existing mature products. The "I can build it in a weekend" use case has gotten a massive boost as you can build something which "looks" real faster then ever. Mature teams need to deal with backwards compatibility and real development risk.

The generic elevator music used for the demo video is highly representative of this whole concept: generic and derivative.

Seriously though, Perplexity, like most of the AI wrapper companies, seems unable to innovate much beyond the query-response chat paradigm. I don't understand why VCs continue to fund these ai-slop companies. I see a new company's advertisements on the NY subway every week, and they're all the same: Anthropic/Google/OpenAI resellers who are selling some UI wrapper (or at best a bespoke model worse than the flagships) on top of pretty basic prompt engineering or tools.

This is what happens when we invert the product-paradigm: we're not solving problems with technology, we're taking technology and applying it to problems.

I use AI every day, so I'm hardly a luddite, but this bubble is so ridiculous at this point. This perplexity product, more than any other so far, feels so representative of peak craze.


I'd be willing to bet that every wannabe CEO out there is spooging after seeing that demo. That's clearly the target market: The wantrepreneurs who would surely have their brilliant successful business if only they didn't have to hire a bunch of lazy employees to half-ass it! "If only I could just speak my vague ideas to my computer, and it could do all the hard work of building and running this business, I could just chill out, be an entrepreneur on Insta, and collect the revenue checks.

The rich kids used to want to be rockstars, now they all want to be entrepreneurs.

Back when work was work and we had festivals.


It's what grads do after school - create an AI startup.

If you went to the right school, you can be a Series A company with nothing more than an OPENAI_API_KEY. Most of the young and inexperienced founders mentally retire at this point and start their family planning.

There were a few cities like "Austin", but now I guess it's just SF and NY again (I'm in SF).

The students get what they paid for.

The school gets their metrics.

The VCs get returns, from the increased revenue their portfolios just paid for by investing in their kid's startup.

And the wheel in the sky keeps on turnin!


You're being downvoted, but if Anthropic is going to deploy Claude for decision making in target prosecution it is clearly a "Caesar's wife must be above suspicion" moment. Association with is guilt unless proven otherwise.

Even if you don't care about the needless human suffering the US has caused from this operation, this conflict threatens global stability because of oil supply disruptions, and if the US keeps this up it could get quite bad very quickly.

I worked briefly in defense-tech and there is a huge blindspot in this field. While I worked with a ton of thoughtful, ethical, and talented people from the military, there is a veritable blind spot when it comes to support of the "warfighter." It is certainly noble and worthwhile work to protect soldiers from harm through technology, but I got some sense some people (actually especially the tech people who were never in the military) didn't think enough about the ethical concerns when dealing with people attached to the US's "enemies." And further, what about when the US itself is the aggressor? While active warfighters have to follow chain of command, companies can and should apply ethical constraints--but they often don't because DoD contracts are lucrative and (especially if you're not a prime) hard won.

I've had a lot of fun playing with Claude 4.6, but it is entirely unacceptable that this technology is being used in this conflict with Iran. I will cancel my account once this month's subscription is up in 2 weeks. The US is the aggressor here, that is certain. Support of this conflict as a private company that supposedly is oriented toward ethics is extremely illuminating.

Now with that, I have thought a tremendous amount about whether someone like Dario could even steer the ship away from support of a conflict like this at this point. We are all susceptible to market forces, and companies like Anthropic need as much revenue as possible to be able to maintain themselves and grow given the cost of training. There is certainly an argument to be made that if he did so, he might lose confidence of investors and lose control entirely. This begs the question: is shareholder/capital optimization the best way to organize our society?


> We are all susceptible to market forces, and companies like Anthropic need as much revenue as possible to be able to maintain themselves and grow given the cost of training.

There's also the consideration that if they come across at too against US military support, the administration can and will make things extremely painful for them. I suspect they've actually gotten off pretty easy just being named a supply chain risk (so far). Imagine the backlash if they'd for example accepted contracts with China. Or even made so much as a hint that they weren't open to most military use cases.


As soon as you accept "we need to survive to do good," survival becomes the priority and the good becomes negotiable. And so every compromise reduces their ethical position a little more.

Living in accordance with an ethical framework only matters when that decision is hard. There are clearly consequences to doing so. But Anthropic has clearly forfeited their right to claim the moral high ground. Their posturing against OpenAI is based on a false dichotomy: they are arguing around a cutout incredibly minor commensurate with their broader exposure.

I think Anthropic can avoid contracting with the military at this stage, with all of their babbling about alignment, and not actively contract with China.


> I think Anthropic can avoid contracting with the military at this stage

I think you're still missing that the US govt could theoretically do all on their power to ensure Anthropic ceases to exist (as a viable company), if they felt it was warranted. I can see Anthropic anticipating and trying to avoid such an outcome.


I've found that reading odds and ends outside of my own academic, professional, or theoretical interests nets some interesting things sometimes.

At one point I got curious about how the US military thinks about insurgencies, so I read their manual on how to fight them. As someone holding a lot of dissident views in the US it was pretty interesting.

One thing I took away was the feeling that at no time did the manual ever define what an "insurgent" is, beyond whoever the US government tells them the insurgents are.

So you have as situation where, ultimately, there's no external reality testing, and reality is simply whatever "reality" is as defined by the command structure.

I know that sounds overly simple- of course military follows a chain of command, unquestionable right up to its civilian commander in chief.

Why I feel that is a useful observation is that, to your question, people are constantly deferring their ethical judgements. And I suspect there is some cognitive bias in play that allows folks to feel that deferral can't happen across all these systems.

In the case of businesses, it is to "the market"-- which is reactive and as such doesn't have "judgement", and even if it did it's needs aren't "human" so relying on it as a human seems dangerous. So to your question, my answer is usually "probably not". And further, unless people stop deferring their judgments to the imaginary of the spectacular market, eventually shits gonna break.

In the case of the military, we can see what happens when radically nihilistic (pedophilic and sociopathic media personalities) are put at the helm.

My larger point, though, is that our usual assumption seems to be that all these other folks are likely to exercise their faculties to test out reality and hopefully, when it doesn't line up with that reality, push back and prevent dumb shit from happening.

But all these systems are set up to prevent that from happening, it doesn't seem at all strange to me that these systems are starting to break in the ways that the seem to be failing.


I don’t see this reality in the style of interview being performed at all.

Everyone has seemingly adopted the FAANG playbook for interviewing that doesn’t really select for people who like getting their hands dirty and building. These kinds of interviews are compliance interviews: they’re for people who will put in the work to just pass the test.

There are so many interviews I’ve been in where if I don’t write the perfect solution on the first try, I’ll get failed on the interview. More than ever, I’m seeing interviewers interrupt me during systems or coding interviews before I have a chance to dig in. I’ve always seen a little bit of this, but it seems like the bar is tightening, not on skill, but on your ability to regurgitate the exact solution the interviewer has in mind.

In the past I’ve always cold applied places and only occasionally leaned on relationships. Now I’m only doing the latter. Interviewees are asked to risk asymmetrically compared to employers.


Their data integration and sale allows for the government to surveil citizens without probable cause or warrants.


The solution is still no different than a decade ago. Far stricter laws on intelligence, federal and local police surveillance, and a reduction in executive power which oversteps checks and balances.

There will always be another IT company willing to do integrations even if Palantir dies. Software isn’t going away.


Right. But this is about Anthropic -- a company frames itself as a responsible and ethical steward of LLM technology. They can't pretend that OpenAI is somehow morally bankrupt here while continuing to deal with companies that undermine peoples' civil liberties.

I'm also a little unsure what you're saying here. Are you saying that it's futile to rely on corporate leaders to commit to ethical acts, as there's always someone else who will debase themselves to make money? I think that solely relying on the state to regulate itself with respect to civil liberties is a fast path to despotism. The well-regulated state was always a partnership between ordinary people bravely standing up for their rights and the norms of the rules and laws that made it socially acceptable to do so.

If I'm grasping you correctly, I think you're right; however, this points to the rottenness of our culture's way of organizing labor: the optimization of the shareholder over everyone else leads to some really awful effects.


Like others have already mentioned: I think Anthropic's relationship with Palantir undermines Amodei's narrative here. It actually feels like Dario is playing Sam's game better than Sam is.

Those who know better please correct me. My current understanding of Palantir (and other surveillance tech companies like Peregrine) is:

1. They facilitate the sale of data to law enforcement, enabling the government to circumvent fourth amendment protections.

2. They fuse cross-government agency data through Foundry and fuse them into unified profiles which the government can use to surveil and pressure citizens without probable cause or a warrant.

ICE also uses a Palantir tool called ELITE to build deportation target lists.

EDIT: Downvoting my comment without any proper rebuttal or clarification is pretty silly.


We don’t know if Palantir is using claude for those uses. Though anthropic would not know for sure either.

I do agree with your point that Amodei is playing a game though. Whether he’s winning the bigger picture or not it’s unclear. His red lines are already so watered out, like how domestic surveillance is not ok, but international? totally fine.


That's true. With the risks of LLMs applied to surveillance though, I think it's a "Caesar's wife must be above suspicion" moment. Association is guilt unless proven otherwise.


It feels more like the are playing good cop/bad cop... There is just something indifferent about all of this that makes me wonder.


They engage with Palantir for non-domestic purposes.


"Non-domestic purposes" specifically includes wiretapping US citizens and residents, and has for at least 25 years:

https://en.wikipedia.org/wiki/NSA_warrantless_surveillance_(...

I suspect the 2007 in the title refers to the fact that bills were passed to ban this stuff in 2007, which is when the PRISM program (also illegal domestic surveillance) got started.

(The title makes it sound like warrantless surveillance lasted from 2001-2007, but I think it means the article only covers that date range.)


So because of his economic realities he should be pragmatic enough to drop his values?

Can't you see what a slippery slope that is? And in fact, how dangerous that level of economic despair is for a functioning democracy?

It's also not fair, because people who are more fortunate to be born into a well-off family can eat vegan their whole lives.

This person did everything he was supposed to do, stood up for things he believed in, and still was left in the lurch along the way. This is not the American dream, it is a clear indication how arrested social mobility is in the US. The rags-to-riches "Horatio Alger" story has been a myth in the US for quite a well, buoyed by anecdotes that are predicated on luck.


> Can't you see what a slippery slope that is?

"Slippery slope" is literally the name of a common logical fallacy.


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: