> If you hadn't realized, a LOT of companies switched to Claude. No idea why, and this is coming from someone who loves Claude Code.
It is entirely due to Opus 4.5 being an inflection point codingwise over previous LLMs. Most of the buzz there has been organic word of mouth due to how strong it is.
Opus 4.5 is expensive, to put it mildly, which makes Claude Code more compelling. But even now, token providers like Openrouter have Opus 4.5 as one of their most popular models despite the price.
The really annoying thing about Opus 4.5 is that it's impossible to publicly say "Opus 4.5 is an order of magnitude better than coding LLMs released just months before it" without sounding like an AI hype booster clickbaiting, but it's the counterintuitive truth, to my personal frustration.
I have been trying to break this damn model since its November release by giving it complex and seemingly impossible coding tasks but this asshole keeps doing them correctly. GPT-5.3-Codex has been the same relative to GPT-5.2-Codex, which just makes me even more frustrated.
Weird, I broke Opus 4.5 pretty easily by giving it some code, a build system, and integration tests that demonstrate the bug.
CC confidently iterated until it discovered the issue. CC confidently communicated exactly what the bug was, a detailed step-by-step deep dive into all the sections of the code that contributed to it. CC confidently suggested a fix that it then implemented. CC declared victory after 10 minutes!
The bug was still there.
I’m willing to admit I might be “holding it wrong”. I’ve had some successes and failures.
It’s all very impressive, but I still have yet to see how people are consistently getting CC to work for hours on end to produce good work. That still feels far-fetched to me.
Wait, are you really saying you have never had Opus 4.5 fail at a programming task you've given it? That strains credulity somewhat... and would certainly contribute to people believing you're exaggerating/hyping up Opus 4.5 beyond what can be reasonably supported.
Also, "order of magnitude better" is such plainly obvious exaggeration it does call your objectivity into question about Opus 4.5 vs. previous models and/or the competition.
Opus 4.5 does make mistakes, but I've found that's more due to ambiguous/imprecise functional requirements on my end than an inherent flaw of the agent pipeline. Giving it clearer instructions to reduce that ambiguity almost always fixes it, so I don't count those as Opus failing. One of the very few times Opus 4.5 got completely stuck turned out, after tracing, to be an issue in a dependency's library which inherently can't be fixed on my end.
I am someone who has spent a lot of time with Sonnet 4.5 before that and was a very outspoken skeptic of agentic coding (https://news.ycombinator.com/item?id=43897320) until I gave Opus 4.5 a fair shake.
I don't know how to say this, but either you haven't written any complex code, or your definition of complex and impossible is not the same as mine, or you are "AI hype booster clickbaiting" (your words).
It strains belief that anyone working on a moderate to large project would not have hit the edge cases and issues. Every other day I discover and have to fix a bug that was introduced by Claude/Codex previously (something implemented just slightly incorrectly or with a slightly wrong expectation).
Every engineer I know working "mid-to-hard" problems (FANG and FANG-adjacent) has broken every LLM, including Opus 4.6, Gemini 3 Pro, and GPT-5.2-Codex, on routine tasks. Granted, the models have a very high success rate nowadays, but they fail in strange ways, and if you're well versed in your domain, these failures are easy to spot.
Granted I guess if you're just saying "build this" and using "it runs and looks fine" as the benchmark then OK.
All this is not to say Opus 4.5/6 are bad, not by a long shot, but your statement is difficult to parse as someone who's been coding a very long time and uses these agents daily. They're awesome but myopic.
I resent your implication that I am baselessly hyping. I've open-sourced a few Opus 4.5-coded projects (https://news.ycombinator.com/item?id=46543359) (https://news.ycombinator.com/item?id=46682115) that, while not moderate-to-large projects, are very niche and novel, without much if any prior art. The prompts I used are included with each of those projects: they did not "run and look fine" on the first run, and were refined just as in normal software engineering pipelines.
You might argue I'm No True Engineer because these aren't serious projects but I'd argue most successful uses of agentic coding aren't by FANG coders.
> I want to see good/interesting work where the model is going off and doing its thing for multiple hours without supervision.
I'd be hesitant to use that as a way to evaluate things. Different systems run at different speeds. I want to see how much it can get done before it breaks, in different scenarios.
I never claimed Opus 4.5 can one-shot things? Even human-written software takes a few iterations to add/polish new features as they come to mind.
> And you clearly “broke” the model a few times based on your prompt log where the model was unable to solve the problem given with the spec.
That's less due to the model being wrong and more due to myself not knowing what I wanted because I am definitely not a UI/UX person. See my reply in the sibling thread.
Apologies, I may have misinterpreted the passage below from your repo:
> This crate was developed with the assistance of Claude Opus 4.5 initially to answer the shower thought "would the Braille Unicode trick work to visually simulate complex ball physics in a terminal?" Opus 4.5 one-shot the problem, so I decided to further experiment to make it more fun and colorful.
Also, yes, I don’t dispute that human-written software takes iteration as well. My point is that the significance of autonomous agentic coding feels exaggerated if I’m holding the LLM’s hand more than I’d have to hold a senior engineer’s hand.
That doesn’t mean the tech isn’t valuable. The claims just feel overstated.
If you click the video that line links to, you'll see it one-shot the original problem, which was very explicitly defined as a PoC, not the entire project. The final project that shipped is substantially different, and that's the difference between YOLO vibecoding and creating something useful.
There are also the embarrassing corner-physics bugs visible in that video, which required a fix within the first few prompts.
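For anyone unfamiliar with the "Braille Unicode trick" mentioned in that README passage: each Braille character (U+2800–U+28FF) encodes an independent 2x4 grid of dots, so a boolean pixel buffer can be packed into terminal cells at well above one "pixel" per character. A minimal sketch of the idea, illustrative only and not the crate's actual code:

```rust
// Unicode Braille dot bit values, laid out as a 2-wide x 4-tall grid:
// row 0: dots 1,4  row 1: dots 2,5  row 2: dots 3,6  row 3: dots 7,8
const DOT_BITS: [[u32; 2]; 4] = [
    [0x01, 0x08],
    [0x02, 0x10],
    [0x04, 0x20],
    [0x40, 0x80],
];

/// Collapse a width x height boolean pixel grid into lines of Braille chars.
fn braille_render(pixels: &[Vec<bool>]) -> Vec<String> {
    let height = pixels.len();
    let width = if height > 0 { pixels[0].len() } else { 0 };
    let mut lines = Vec::new();
    for cell_y in (0..height).step_by(4) {
        let mut line = String::new();
        for cell_x in (0..width).step_by(2) {
            let mut bits = 0u32;
            for dy in 0..4 {
                for dx in 0..2 {
                    let (x, y) = (cell_x + dx, cell_y + dy);
                    if y < height && x < width && pixels[y][x] {
                        bits |= DOT_BITS[dy][dx];
                    }
                }
            }
            // Base of the Braille Patterns block plus the dot bitmask.
            line.push(char::from_u32(0x2800 + bits).unwrap());
        }
        lines.push(line);
    }
    lines
}

fn main() {
    // Draw a tiny "ball" (filled square) into an 8x8 pixel buffer.
    let mut pixels = vec![vec![false; 8]; 8];
    for y in 2..6 {
        for x in 2..6 {
            pixels[y][x] = true;
        }
    }
    for line in braille_render(&pixels) {
        println!("{}", line);
    }
}
```

The actual crate layers color and the ball physics on top of that; the sketch only shows the dot-to-bit mapping that makes sub-character resolution possible in a terminal.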
It still cannot solve a synchronization issue in my fairly simple online game: completely wrong analyses back to back, and solutions that actually make the problem worse. Most training data is probably React slop, so it struggles with this type of stuff.
But I have to give it to Amodei and his goons in the media: their marketing is top notch. Fear-mongering targeted at normies about the model knowing it is being evaluated, and other sorts of preaching to the developers.
Yes, as all of modern politics illustrates, once one has staked out a position on an issue it is far more important to stick to one's guns regardless of observations rather than update based on evidence.
Not hype. Opus 4.5 is actually useful for one-shotting things from detailed prompts for documentation creation; it's actually functional for generating code in a meaningful way. Unfortunately it's been nerfed, and Opus 4.6 is clearly worse, judging from my few days of working with it since release.
The use of "inflection point" across the entire software industry is so annoying and cringey. It's never used correctly; it's not even used correctly in the Claude post everyone is referencing. (Mathematically, an inflection point is just where a curve's concavity changes sign, not a sudden jump in capability.)