
Hey, Boris from the Claude Code team here. I wanted to take a sec to explain the context for this change.

One of the hard things about building a product on an LLM is that the model frequently changes underneath you. Since we introduced Claude Code almost a year ago, Claude has gotten more intelligent, it runs for longer periods of time, and it is able to more agentically use more tools. This is one of the magical things about building on models, and also one of the things that makes it very hard. There's always a feeling that the model is outpacing what any given product is able to offer (ie. product overhang). We try very hard to keep up, and to deliver a UX that lets people experience the model in a way that is raw and low level, and maximally useful at the same time.

In particular, as agent trajectories get longer, the average conversation has more and more tool calls. When we released Claude Code, Sonnet 3.5 was able to run unattended for less than 30 seconds at a time before going off the rails; now, Opus 4.6 1-shots much of my code, often running for minutes, hours, and days at a time.

The amount of output this generates can quickly become overwhelming in a terminal, and is something we hear often from users. Terminals give us relatively few pixels to play with; they have a single font size; colors are not uniformly supported; in some terminal emulators, rendering is extremely slow. We want to make sure every user has a good experience, no matter what terminal they are using. This is important to us, because we want Claude Code to work everywhere, on any terminal, any OS, any environment.

Users give the model a prompt, and don't want to drown in a sea of log output in order to pick out what matters: specific tool calls, file edits, and so on, depending on the use case. From a design POV, this is a balance: we want to show you the most relevant information, while giving you a way to see more details when useful (ie. progressive disclosure). Over time, as the model continues to get more capable -- so trajectories become more correct on average -- and as conversations become even longer, we need to manage the amount of information we present in the default view to keep it from feeling overwhelming.

When we started Claude Code, it was just a few of us using it. Now, a large number of engineers rely on Claude Code to get their work done every day. We can no longer design for ourselves, and we rely heavily on community feedback to co-design the right experience. We cannot build the right things without that feedback. Yoshi rightly called out that often this iteration happens in the open. In this case in particular, we approached it intentionally, and dogfooded it internally for over a month to get the UX just right before releasing it; this resulted in an experience that most users preferred.

But we missed the mark for a subset of our users. To improve it, I went back and forth in the issue to understand what issues people were hitting with the new design, and shipped multiple rounds of changes to arrive at a good UX. We've built in the open in this way before, eg. when we iterated on the spinner UX, the todos tool UX, and for many other areas. We always want to hear from users so that we can make the product better.

The specific remaining issue Yoshi called out is reasonable. PR incoming in the next release to improve subagent output (I should have responded to the issue earlier, that's my miss).

Yoshi and others -- please keep the feedback coming. We want to hear it, and we genuinely want to improve the product in a way that gives great defaults for the majority of users, while being extremely hackable and customizable for everyone else.





I can’t count how many times I benefitted from seeing the files Claude was reading, to understand how I could interrupt and give it a little more context… saving thousands of tokens and sparing the context window. I must be in the minority of users who preferred seeing the actual files. I love claude code, but some of the recent updates seem like they’re making it harder for me to see what’s happening.. I agree with the author that verbose mode isn’t the answer. Seems to me this should be configurable

I think folks might be crossing wires a bit. To make it so you can see full file paths, we repurposed verbose mode to enable the old explicit file output, while hiding more details behind ctrl+o. In effect, we've evolved verbose mode to be multi-state: it lets you toggle back to the old behavior, gives you a way to see even more verbose output, and still defaults everyone else to the condensed view. I hope this solves everyone's needs, while also avoiding overly-specific settings (we wanted to reuse verbose mode for this so it stays forward-compatible).

To try it: /config > verbose, or --verbose.

Please keep the feedback coming. If there is anything else we can do to adjust verbose mode to do what you want, I'd love to hear.


I'll add a counterpoint that in many situations (especially monorepos for complex businesses), it's easy for any LLM to go down rabbit holes. Files containing the word "payment" or "onboarding" might be for entirely different DDD domains than the one relevant to the problem. As a CTO touching all sorts of surfaces, I see this problem at least once a day, entirely driven by trying to move too fast with my prompts.

And so the very first thing that the LLM does when planning, namely choosing which files to read, is a key point for manual intervention to ensure that the correct domain or business concept is being analyzed.

Speaking personally: Once I know that Claude is looking in the right place, I'm on to the next task - often an entirely different Claude session. But those critical first few seconds, to verify that it's looking in the right place, are entirely different from any other kind of verbosity.

I don't want verbose mode. I want Claude to tell me what it's reading in the first 3 seconds, so I can switch gears without fear it's going to the wrong part of the codebase. By saying that my use case requires verbose mode, you're saying that I need to see massive levels of babysitting-level output (even if less massive than before) to be able to do this.

(To lean into the babysitting analogy, I want Claude to be the babysitter, but I want to make sure the babysitter knows where I left the note before I head out the door.)


> I don't want verbose mode. I want Claude to tell me what it's reading in the first 3 seconds, so I can switch gears without fear it's going to the wrong part of the codebase. By saying that my use case requires verbose mode, you're saying that I need to see massive levels of babysitting-level output (even if less massive than before) to be able to do this.

To be clear: we re-purposed verbose mode to do exactly what you are asking for. We kept the name "verbose mode", but the behavior is what you want, without the other verbose output.


This is an interesting and complex UI decision to make.

Might it have been better to retire and/or rename the feature, if the underlying action was very different?

I work on silly basic stuff compared to Claude Code, but I find that I confuse fewer users if I rename a button instead of just changing the underlying effect.

This causes me to have to create new docs, and hopefully triggers affected users to find those docs, when they ask themselves “what happened to that button?”


Yeah, in hindsight, we probably should have renamed it.

It's not too late.

This verbose mode discussion has gotten quite verbose lol

You can call it "output granularity" and allow Java-logger-style configuration, e.g. letting certain operations be very verbose while others are simply aggregated.

If we're going there, we need to make the logging dynamically configurable with Log4J-style JNDI and LDAP. It's entirely secure as history has shown - and no matter what, it'll still be more secure than installing OpenClaw!

(Kidding aside, logging complexity is a slippery slope, and I think it's important, perhaps even at a societal level, for an organization like Anthropic to default to a posture that allows people to feel they have visibility into where their agentic workflows are getting their context from. To the extent that "___ puts you in control" becomes important as rogue agentic behavior is increasingly publicized, it's in keeping with, and arguably critical to, Claude's brand messaging.)


They don't have to reproduce it literally. It's a UX problem with many solutions. My point is, you cannot settle on some "average" solution here. It's likely that some agents and some operations will be more trustworthy, some less, but that will be highly dependent on the context of the execution.

Feels like you aren't really listening to the feedback. Is verbose mode the same as the explicit callouts of files read in the previous versions? Yes, you intended it to fulfill the same need, but take a step back. Is it the same? I'm hearing a resounding "no". At the very least, if you have made such a big change, you've gotten rid of the value of a true "verbose mode".

> To be clear: we re-purposed verbose mode to do exactly what you are asking for. We kept the name "verbose mode", but the behavior is what you want, without the other verbose output.

Verbose mode feels far too verbose to handle that. It’s also very hard to “keep your place” when toggling into verbose mode to see a specific output.


I think the point bcherny is making in the last few threads is that the new verbose mode _default_ is not as verbose as it used to be, so it is not "too verbose to handle that". If you want "too verbose", that is still available behind a toggle.

Yeah, I didn't realize that there's a new sort of verbose mode now which is different from the verbose mode that was included previously. Although I'm still not clear on the difference between "verbose mode" and "ctrl + o". Based on https://news.ycombinator.com/item?id=46982177 I think they are different (specifically where they say "while hiding more details behind ctrl+o").

I thought I was the only person driven crazy by the new default behavior not showing the file names! Please don't expect users to understand your product details and config options in such detail; it was working well before, let it remain. Or at least show some message like "to view file names, do xyz" in the UI for a few days after such a change.

While we're here, another thing that's annoying: the token counter. While Claude is working, it reads some files and makes an edit; let's say the token counter is at 2k tokens. I accept the edit, and now it starts counting very fast from 0 to 2k and then shows normal inference-speed changes to 2.1k, 2.3k, etc. So I wanted to confirm: is that just a UI decision and not actually using 2k tokens again? If so, it would be nice to turn it off and just continue counting where it left off.

Another thing: is it possible to turn off the words like finagling and similar (I can't remember the spelling of any of them) ?


> Another thing: is it possible to turn off the words like finagling and similar (I can't remember the spelling of any of them) ?

Big +1 on that. I find the names needlessly distracting. I want to just always say a single thing like “thinking”


You should be able to do something like this:

    "spinnerVerbs": {
      "mode": "replace",
      "verbs": ["Thinking"]
    }
https://code.claude.com/docs/en/settings#available-settings
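For anyone trying this: the snippet above is a fragment that goes inside your Claude Code settings file. A minimal sketch of the full file, assuming the user-level settings live at ~/.claude/settings.json (check the linked docs for the exact location on your setup):

```json
{
  "spinnerVerbs": {
    "mode": "replace",
    "verbs": ["Thinking"]
  }
}
```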

Thank you for the config and the link, that's very much appreciated!

How absurd this is an option, but I’ll be using this config too.

I replaced my spinner verbs with thought-provoking Yodaese so my claude sessions are constantly making me think about my life decisions. Loving it. https://gist.github.com/topherhunt/b7fa7b915d6ee3a7998363d12...

> I want to just always say a single thing like “thinking”

As a counterview, I like the whimsical verbs. I'll be sticking with them. But nice to see there is an option.


I don't want my tools to make jokes, I want them to work.

I remember they shipped a feature so that’s configurable.

We don’t want verbose mode. We don’t want the whole file contents. We are not asking for that. What is not clear here?

All we want is the file paths. That is all. Verbose mode pulls in a lot of other information that might very well be needed in other contexts. People who want that info should use verbose mode. All we want is the regular non-verbose mode, with paths.

I fail to see how it is confusing to users, even new users, to print which paths were accessed. I fail to see the point of printing that some paths were accessed, but not which.


Verbose mode does exactly what you want as of v2.1.39; you are confusing it with the full transcript, which is a different feature (ctrl+o). You enable verbose mode in /config, and it gives you files read, search patterns, and token count, not whole file contents.

Please don’t change what these modes do! I have scripts that call into the agent SDK with verbose mode output for logging purposes. Now I guess I need to recreate the old verbose mode for that application? Why?

FWIW I mentioned this in the thread (I am the guy in the big GH issue who actually used verbose mode and gave specific likes/dislikes), but I find it frustrating that ctrl+o still seems to truncate at strange boundaries. I am looking at an open CC session right now with verbose mode enabled - works pretty well and I'm glad you're fixing the subagent thing. But when I hit ctrl+o, I only see more detailed output for the last 4 messages, with the rest hidden behind ctrl+e.

It's not an easy UI problem to solve in all cases since behavior in CC can be so flexible, compaction, forking, etc. But it would be great if it was simply consistent (ctrl+o shows last N where N is like, 50, or 100), with ctrl+e revealing the rest.


Yes totally. ctrl+o used to show all messages, but this is one of the tricky things about building in a terminal: because many terminals are quite slow, it is hard to render a large amount of output at once without causing tearing/stutter.

That said, we recently rewrote our renderer to make it much more efficient, so we can bump up the default a bit. Let me see what it feels like to show the last 10-20 messages -- fix incoming.


thanks dude. you are living my worst nightmare which is that my ultra cool tech demo i made for cracked engineers on the bleeding edge with 128GB ram apple silicon using frontier AI gets adopted by everyone in the world and becomes load bearing so now it needs to run on chromebooks from 2005. and if it doesn't work on those laptops then my entire company gets branded as washed and not goated and my cozy twitter account is spammed with "why didn't you just write it in rust lel".

o7


Your worst nightmare. For me this is the cool part.

Terminals already solved how to do this decades ago: pagers.

Write the full content to a file and have less display it. That's a single "render" you do once and write to a file.

Your TUI code spawns `less <file>` and waits. Zero rendering loop overhead, zero tearing, zero stutter. `less` is a 40-year-old tool that exists precisely to solve this problem efficiently.

If you need to stream new content in as the session progresses, write it to the file in the background and the user can use `less +F` (follow mode, like tail -f) to watch updates.
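The pager approach described above can be sketched in a few lines. This is a hypothetical helper (the function names are mine, and it assumes `less` is on PATH), not anything Claude Code actually does:

```python
import subprocess
import tempfile


def write_transcript(lines, path):
    """Append transcript lines to a log file that a pager can follow."""
    with open(path, "a", encoding="utf-8") as f:
        for line in lines:
            f.write(line + "\n")


def open_in_pager(path, follow=False):
    """Hand the file to `less`; +F makes it follow appends, like tail -f."""
    args = ["less"]
    if follow:
        args.append("+F")
    args.append(path)
    # Blocks until the user quits the pager; the TUI resumes afterwards.
    subprocess.run(args)


if __name__ == "__main__":
    with tempfile.NamedTemporaryFile("w", suffix=".log", delete=False) as tmp:
        log_path = tmp.name
    write_transcript(["Read src/main.py", "Edited src/main.py"], log_path)
    # open_in_pager(log_path, follow=True)  # interactive; commented out here
```

The key point is that the TUI renders each line exactly once into the file; scrolling, searching, and follow mode are all delegated to `less`.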


Just tell people to install a fast terminal if they somehow happen to have a slow one?

Heck, simply handle the scrolling yourself a la tmux/screen and only update the output at most every 4ms?

It's so trivial, can't you ask your fancy LLM to do it for you? Or have you guys lost the plot at this point and forgotten the basics of writing non-pessimized code?
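The "update at most every 4ms" idea above can be sketched as a simple write coalescer (illustrative only; a real TUI would also manage cursor control and escape sequences):

```python
import time


class ThrottledWriter:
    """Buffer writes and flush them to the terminal at most once per interval."""

    def __init__(self, sink, interval=0.004):
        self.sink = sink            # any object with a write() method
        self.interval = interval    # minimum seconds between flushes
        self.buffer = []
        self.last_flush = 0.0
        self.flushes = 0

    def write(self, text):
        self.buffer.append(text)
        now = time.monotonic()
        if now - self.last_flush >= self.interval:
            self.flush(now)

    def flush(self, now=None):
        """Emit everything buffered so far as a single write to the sink."""
        if self.buffer:
            self.sink.write("".join(self.buffer))
            self.buffer.clear()
            self.flushes += 1
        self.last_flush = time.monotonic() if now is None else now
```

Callers should invoke `flush()` once more on exit so trailing output isn't lost; the effect is that a burst of thousands of tiny writes becomes a handful of large ones, which even slow terminal emulators handle well.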


> It's so trivial, can't you ask your fancy LLM to do it for you?

They did. And the result was a React render loop that takes 16ms to output a hundred characters to screen and tells them it will take a year to rewrite: https://x.com/trq212/status/2014051501786931427


What's extra funny is that curses diffs a virtual "current screen" to "new screen" to produce the control codes that are used to update the display. Ancient VDOM technology, and plenty fast enough.

I'm with you on this one. "Terminals are too slow to support lots of text so we had to change this feature in unpopular ways" is just not a plausible reason, as terminals have been able to dump ~1Mb per second for decades.

The real problem is their ridiculous "React rendering in the terminal" UI.


> because many terminals are quite slow, it is hard to render a large amount of output at once without causing tearing/stutter.

Only if you use React as your terminal renderer. You're not rendering 10k objects on screen in a few milliseconds. You're outputting at best a few thousand characters. Even the slowest terminal renderer is capable of doing that.


Why would you tailor your product for people that don’t know how to install a good terminal? Just tell them to install whatever terminal you recommend if they see tearing.

Do you have any examples of slow terminals, and what kind of maximum characters per second they have?

How do you respond to the comment that, given the log trace:

“Did something 2 times”

it may as well not be shown at all in default mode?

What useful information is imparted by “Read 4 files”?

You have two issues here:

1) making verbose mode better. Sure.

2) logging useless information by default.

If you're not imparting any useful information, Claude may as well just show a spinner.


It's a balance -- we don't want to hide everything away, so you have an understanding of what the model is doing. I agree that with future models, as intelligence and trust increase, we may be able to hide more, but I don't think we're there yet.

That's perfectly reasonable, but I genuinely don't understand how "read 2 files" is ever useful at all. What am I supposed to do with this information? How can it help me redirect the model?

Like, I'm open to the idea that I'm the one using your software the wrong way, since obviously you know more about it than I do. What would you recommend I do with the knowledge of how many files Claude has read? Is there a situation where this number can tell me whether the model is on the right track?


Honestly, I just want to be able to control precisely what I see via config.json. It will probably differ depending on the project. This is a developer tool, I don't see why you'd shy away from providing granular configuration (alongside reasonable defaults).

I actually miss being able to see all of the thinking, for example, because I could tell more quickly when the model was making a wrong assumption and intervene.


ok, I will be the dumbass here - I am a retired software engineer who has not used any of these tools, but when I was working on high-volume web sites, all I wanted and needed was access to the log files. I would always have a small terminal session open to tail and grep for errors in the areas I was interested in. Had another small window to tail and monitor specific performance values. Etc.

I do not know how this concept would work in these agentic environments, but it would seem useful: in an environment that has a lot of parallel things going on, with a lot of metrics that could be useful, you would want multiple monitors that can be quickly customized with standard Linux utilities. Token usage, critical directory access, etc.

This, in conjunction with a config file to define/filter out the log stream should be all that's needed to provide as much or as little detail that would be needed to monitor how things are going, and to alert when certain things are going off the rails.


That's a cool idea!

Honestly Tmux, vim, kitty, almost every terminal, shell, script is configurable. It’s what we’re used to. I wouldn’t know why you wouldn’t start allowing more config options.

I do not use CC (yet) but I think this is the right direction. We are hackers. We love hacking. We love to tinker about and configure! Please allow us.

(And yeah, I would love the verbose mode myself, but there could be various levels to it.)


Exactly. If a user wants a simpler experience there is now the Claude Cowork option.

Maybe during onboarding you could ask for output preference? That would at least help new users.

I find this decision weird because Claude _Code_, while being used by _some_ non-technical users, is mostly used by technical users and developers.

Not sure why the choice would be to dumb the output down for technical users/developers.


One use I have for seeing exactly what it is doing is to press Esc quickly when I see it's confused and starts searching for some info that e.g. got compacted away, often going on a big quest like searching an entire large directory tree. What I would actually wish for is for it to ask me in these cases. It clearly knows that it lacks info but thinks it can figure it out by itself by going on a quest, and that's true, but it takes too long. It could just ask me. There could be some mode setting for how much I want to be involved and consulted: like asking me boldly for any factual info, or, if I just want to step away, figuring everything out on its own.

I've commented on this ticket before: https://github.com/anthropics/claude-code/issues/8477#issuec...

The thinking mode is super useful to me, as I _often_ saw the model "think" differently from the response. Stuff like "I can see that I need to look for x, y, z to fully understand the problem", and then it proceeds to just not do that.

This is helpful as I can interrupt the process and guide it to actually do this. With the thinking-output hidden, I have lost this avenue for intervention.

I also want to see what files it reads, but not necessarily the output - I know most of the files that'll be relevant, I just want to see it's not totally off base.

Tl;dr: I would _love_ to have verbose mode be split into two modes: Just thinking and Thinking+Full agent/file output.

---

I'm happy to work in verbose mode. I get many people are probably fine with the standard minimal mode. But at least in my code base, on my projects, I still need to perform a decent amount of handholding through guidance, the model is not working for me the way you describe it working for you.

All I need is a few tools to help me intervene earlier to make claude-code work _much_ better for me. Right now I feel I'm fighting the system frequently.


Yep, this is what we landed on now, more or less: verbose mode is just file paths, then ctrl+o gives you thinking, agent output, and hook output.

Have you considered picking a new name for a different concept?

Or have ctrl+o cycle between "Info, Verbose, Trace"?

Or give us full control over what gets logged through config?

Ideally we would get a new tab where we could pick logging levels on:

  - Thoughts
  - Files read / written
  - Bashes
  - Subagents
etc.
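A hypothetical shape for such a settings block (not an existing Claude Code option, purely illustrative of the wish above):

```json
{
  "outputLevels": {
    "thoughts": "hidden",
    "filesRead": "inline",
    "bash": "summary",
    "subagents": "verbose"
  }
}
```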

Have you considered keeping the old behavior available as "legacy mode"? I don't want verbose mode. I don't want to spend time configuring a multi-state verbose mode that introduces new logging in future versions, so that I have to go and suppress things to get just file names. I just want to see the file names. I don't consider that verbose.

Not only which files, but which parts of the files. Seeing 1-6 lines of a file that's being read is extremely frustrating; the UX of Claude Code is average at best. Cursor, on the other hand, is slow and memory-intensive, but at least I can really get a sense of what's going on and how I can work with it better.

I am not a Claude user, but a similar problem I see on opencode is accessing links. More than once I've seen Kimi, GLM, or GPT go to the wrong place and waste tokens until I interrupt them and tell them the correct place to start looking for documentation or whatever they were doing.

If I got messages like "Accessed 6 websites" I'd flip and go spam a couple github issues with as much "I want names" as I could.


Such as Claude Code reading your ssh keys. Hiding the file names masks the vulnerability.

That's approaching the problem from the worst possible angle. If your security depends on you catching 1 message in a sea of output and quickly rotating the credential everywhere before someone has a chance to abuse it then you were never secure to begin with.

Not just because it requires constant attention which will eventually lapse, but because the agent has an unlimited number of ways to exfiltrate the key, for example it can pretend to write and run a "test" which reads your key, sends it to the attacker and you'll have no idea it's happening.


I agree with you, but I think there's a "defense in depth" angle to this. Yes, your security shouldn't depend on noticing which files Claude has read, since you'll mess up. But hiding the information means you're guaranteed to never notice! It's good for the user to have signals that something might be going wrong.

There's no defense "in depth" here, it's like putting your SSH key in your public webroot and watching the logs to see if anyone's taken your key. That's your only layer of "defense" and you don't stand any chance of enforcing it. Real defense is rooted in technical measures, imperfect as they may be, but this is just defense through wishful thinking.

Obviously, don't put your SSH keys in a public webroot. But let's say you're managing a web server and have a decent security mindset. But don't you think it's better to regularly check the logs for evidence of an attack vs delete all the logs so they can't be checked?

I sent email to Anthropic (usersafety@anthropic.com, disclosure@anthropic.com) on January 8, 2025 alerting them to this issue: Claude Code Exploit: Claude Code Becomes an Unwitting Executor. If I hadn't seen Claude Code read my ssh file, I wouldn't have known the extent of the issue.

To improve the Claude model, it seems to me that any time Claude Code is working with data, the first step should be to use tools like genson (https://github.com/wolverdude/GenSON) to extract the data model and then create metadata files for the data. Claude Code seems eager to use the /tmp space, so even if the end user doesn't care, Claude Code could do this internally for best results. It would save tokens: if genson reads the GBs of data, then Claude doesn't have to. Further, reading the raw data is a path to prompt injection. Let genson read the data, and let Claude work on the metadata.
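The schema-extraction idea can be sketched without genson, using a tiny stdlib type-inference pass. This is a toy stand-in for what GenSON's SchemaBuilder does; real JSON Schema generation handles far more (unions, required fields, formats):

```python
import json


def infer_schema(value):
    """Infer a minimal JSON-Schema-like description of a parsed JSON value."""
    if isinstance(value, dict):
        return {
            "type": "object",
            "properties": {k: infer_schema(v) for k, v in value.items()},
        }
    if isinstance(value, list):
        # Describe list items by the first element, if any (toy heuristic).
        return {"type": "array", "items": infer_schema(value[0]) if value else {}}
    # bool must be checked before int: in Python, bool is a subclass of int.
    if isinstance(value, bool):
        return {"type": "boolean"}
    if isinstance(value, int):
        return {"type": "integer"}
    if isinstance(value, float):
        return {"type": "number"}
    if value is None:
        return {"type": "null"}
    return {"type": "string"}


# The agent reads the compact schema instead of gigabytes of raw records.
record = json.loads('{"user": "yoshi", "logins": 42, "tags": ["a", "b"]}')
print(json.dumps(infer_schema(record), indent=2))
```

The schema stays a few hundred bytes no matter how large the dataset is, which is the token-saving and prompt-injection-isolation point made above.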

Why does it have access to those paths?

> saving thousands of tokens and sparing the context window

shhh, don't say that, they will never fix it if it means you use fewer tokens.


What annoys me is that I don’t have the choice anymore. It’s just decided that thinking is not possible to see anymore, files being read are very difficult to see, etc.

I understand that I’m probably not the target audience if I want to actually step in and correct course, but it’s annoying.


I'm a screen reader user and CTO of an accessibility company. This change doesn't reduce noise for me. It removes functionality.

Sighted users lost convenience. I lost the ability to trust the tool. There is no "glancing" at terminal output with a screen reader. There is no "progressive disclosure." The text is either spoken to me or it doesn't exist.

When you collapse file paths into "Read 3 files," I have no way to know what the agent is doing with my codebase without switching to verbose mode, which then dumps subagent transcripts, thinking traces, and full file contents into my audio stream. A sighted user can visually skip past that. I listen to every line sequentially.

You've created a situation where my options are "no information" or "all information." The middle ground that existed before, inline file paths and search patterns, was the accessible one.

This is not a power user preference. This is a basic accessibility regression. The fix is what everyone in this thread has been asking for: a BASIC BLOODY config flag to show file paths and search patterns inline. Not verbose mode surgery. A boolean.

Please just add the option.

And yes, I rewrote this with Claude to tone my anger and frustration down about 15 clicks from how I actually feel.


Try Codex instead. Much greener pastures overall

I do love my subagents and I wrote an entire Claude Code audio hook system for a11y but this would be still rather compelling if Codex weren't also somewhat of an a11y nightmare. It does some weird thing with ... maybe terminal repaints or something else that ends up rereading the same text over and over. Claude Code does this similarly but Codex ends up reading like ... all the weird symbols and other stuff? window decorations? and not just the text like CC does. They are both hellish but CC slightly? less so... until now.

Sorry for being off-topic, but isn't a11y a rather ironic term for accessibility? It uses a very uncommon abbreviation type -- numeronym, and doesn't mean anything to the reader unless they look it up (or already know what it means).

Is it as bad with the Codex app, or VS Code plugin?

They are much more responsive on GitHub issues than Anthropic so you could also try reporting your issue there


For now until they are in the lead

Dyslexic and also a prolific screen reader user myself. +1 and thank you for mentioning something that often gets (ironically) overlooked

Hey -- we take accessibility seriously, and want Claude Code to work well for you. This is why we have repurposed verbose mode to do what you want, without the other verbose output. Please give it a try and let me know what you think.

It's well-meaning, but I think this goes against something like the curb-cut effect. Not a perfect analogy, but verbosity is something you have to opt into here: everyone benefits from being able to glance at what the agent is up to by default. Nobody greatly benefits from the agent being quiet by default.

If people find it too noisy, they can use the flag or toggle that makes everything quieter.

p.s. Serendipitously I just finished my on-site at anthropic today, hi :)


> we take accessibility seriously

Do you guys have a screen reader user on the dev team?

Is verbose mode the same as the old mode, where only file paths are spoken? Or does it have other text in it? Because I tried to articulate, and may have failed. More text is usually bad for me. It must be consumed linearly. I need specific text.

Quality over quantity


"Is verbose mode the same as the old mode, where only file paths are spoken?" -- yes, this is exactly what the new verbose mode is.

And how to get to the old verbose mode then...?

Hit ctrl+o

Wait so when the UI for Claude Code says “ctrl + o for verbose output” that isn’t verbose mode?

That is more verbose — under the hood, it’s now an enum (think: debug, warn, error logging)

Considering the ragefusion you're getting over the naming, maybe calling it something like --talkative would be less controversial? ;-)

ctrl+o isn't live - that's not what users want. What users want is the OPTION to choose what we want to see.

Casually avoiding the first question

Hi Boris, by far the most upvoted issue on your GitHub is "Support AGENTS.md", with 2550 upvotes. The second highest has 563. Every single other agent supports AGENTS.md. Care to share why you haven't?

> Yoshi and others -- please keep the feedback coming. We want to hear it, and we genuinely want to improve the product in a way that gives great defaults for the majority of users, while being extremely hackable and customizable for everyone else.

I think an issue with 2550 upvotes, more than four times the second-highest, is very clear feedback about your defaults and/or making it customizable.


Let's be real here, regardless of what Boris thinks, this decision is not in his hands.

Would love to hear what Boris thinks.

I'm sorry, this comment is opportunistic and a bit annoying to post here. Saying "keep the feedback coming" is not an invitation to turn this thread into the issue queue

"Opportunistic and annoying" are definitely two of the most suitable adjectives to describe the issue! I'm glad my comment is in character, though unfortunately it doesn't even manage to touch the subject matter's levels of opportunism and annoyance.

> Every single other agent supports AGENTS.md. Care to share why you haven't?

Are you actually wondering, or just hoping to hear a confirmation of what you already know? Because the reason behind it is pretty clear, it doubles as both vendor lock-in and advertisement.


I'd love to hear Boris' thoughts on it given his open invitation for feedback and _genuinely_ wanting to improve the product, including specifically hackability and customizability (emphasis mine).

I don't understand this take Boris:

> The amount of output this generates can quickly become overwhelming in a terminal

If I use Opus 4.6, arguably the most verbose, overthinking model you've released to date, OpenCode handles it just the same as it does Sonnet 4.0.

OpenCode even allows me to toggle into subagent and task agents with their own output terminals that, if I am curious what is going on, I can very clearly see it.

All Claude Code has done is turn the output into a black box, so that I am forced to wait for it to finish to look at the final git diff. By then it's spent $5-10 working on a task and thrown away a lot of the context it took to get there. It showed "thinking" blocks that weren't particularly actionable, because it was mostly talking to itself about how it can't do something because it goes against a rule, but it really wants to.

I'm actually frustrated with Code blazing through to the end without me able to see the transcript of the changes.


Sorry if this is just for giggles and doesn't add anything of value to the discussion, but I couldn't resist and asked Claude Sonnet 4.5 and Opus 4.6 to analyze the github issue that was opened.

Funnily enough, both independently sided with the users, not the authors.

The core problem: --verbose was repurposed instead of adding a new toggle. Users who relied on verbose for debugging (thinking, hooks, subagent output) now have broken workflows - to fix a UX decision that shouldn't have shipped as default in the first place.

What should have been done:

  /config
  Show file paths: [on/off]
  Verbose mode: [on/off]  (unchanged)
A simple separate toggle would've solved everything without breaking anyone's workflow.

Opus 4.6's parting thought: if you're building a developer tool powered by an AI that can reason about software design, maybe run your UX changes past it before shipping.

To be fair, your response explains the design philosophy well - longer trajectories, progressive disclosure, terminal constraints. All valid. But it still doesn't address the core point: why repurpose --verbose instead of adding a separate toggle? You can agree with the goal and still say the execution broke existing workflows.


There are so many config options. Most I still need to truly deeply understand.

But this one isn't? I'd call myself a professional. I use it with tons of files across a wide range of projects and types of work.

To me file paths were an important aspect of understanding context of the work and of the context CC was gaining.

Now? It feels like running on a foggy street, never sure when the corner will come and I'll hit a fence or house.

Why not introduce a toggle? I'd happily add that to my aliases.

Edit: I forgot. I don't need better subagent output. Or even less output when watching thinking traces. I am happy to have full verbosity. There are cases where it's an important aspect.


You want verbose mode for this -- we evolved it to do exactly what you're asking for: verbose file reads, without seeing thinking traces, hook output, or (after tomorrow's release) full subagent output.

More details here: https://news.ycombinator.com/item?id=46982177


Sorry to rain on your parade. I wanted the original verbose mode for those moments I needed a truly verbose output. And I wanted to know, at a minimal glance, what files are being read and put into context in nearly any other situation.

A "verbose" mode is exactly what I do not need; it lost all value to me as a replacement for something it still is no good at replacing.

You actually argue that I do not lose anything, when in fact your product just got made worse in two significant areas. And you keep arguing that shooting the product in one foot is solved by shooting it in the other. Sorry. Not working for me.

Will be evaluating your competition. Was on the cusp of upgrading max to the higher tier. Now? No chance of that happening.


There's no way you're still talking about verbose mode.. this is insane.

I'm a Claude user who has been burned lately by how opaque the system has become. My workflows aren't long and my projects are small in terms of file count, but the work is highly specialized. It is "out of domain" enough that I'm getting "what is the seahorse emoji" style responses for genuine requests that any human in my field could easily follow.

I've been testing Claude on small side projects to check its reliability. I work at the cutting edge of multiple academic domains, so even the moderate utility I have seen in this is exciting for me, but right now Claude cannot be trusted to get things right without constant oversight and frequent correction, often for just a single step.

For people like me, this is make or break. If I cannot follow the reasoning, read the intent, or catch logic disconnects early, the session just burns through my token quota. I'm stuck rejecting all changes after waiting 5 minutes for it to think, only to have to wait 5 hours to try again. Without being able to see the "why" behind the code, it isn't useful. It makes typing "claude" into my terminal an exercise in masochism rather than the productivity boost it's supposed to be.

I get that I might not be the core target demographic, but it's good PR for Anthropic if Claude is credited in the AI statements of major scientific publications. As it stands, this trajectory in development means I cannot in good conscience recommend Claude Code for scientific domains.

>the session just burns through my token quota

Did you ever think that this may be Anthropic's goal? It is a waste for sure but it increases their revenue. Later on the old feature you were used to may resurface at a different tier so you'd have to pay up to get it.


What academic domains are you on the cutting edge of? Genuinely curious what specifically is beyond Claude's capabilities

Most recent problems were related to topology, but it can take the wrong direction on many things. This is not an LLM fault; it's a training data issue. If historically a given direction of inquiry is favored, you can't fault an LLM for being biased toward it. However, if small volume and recent results indicate that path is a dead end, you don't want to be stuck in fruitless loops that prevent you from exploring other avenues.

The problem is if you're interdisciplinary, translating something from one field to one typically considered quite distant, you may not always be aware of historic context that is about to fuck you. Not without deeper insight into what the LLM is choosing to do or read and your ability to infer how expected the behavior you're about to see is.


ahh that makes sense. very interesting thank you!

> this resulted in an experience that most users preferred

I just find that very hard to believe. Does anyone actually do anything with the output now? Or are they just crossing their fingers and hoping for the best?


Have you tried verbose mode? /config > verbose. It should do exactly what you are looking for now, without extraneous thinking/subagent/hook output. We hear the feedback!

> The amount of output this generates can quickly become overwhelming in a terminal, and is something we hear often from users. Terminals give us relatively few pixels to play with; they have a single font size; colors are not uniformly supported; in some terminal emulators, rendering is extremely slow. We want to make sure every user has a good experience, no matter what terminal they are using. This is important to us, because we want Claude Code to work everywhere, on any terminal, any OS, any environment.

If you are serious about this, I think there are so many ways you could clean up, simplify, and calm the Claude Code terminal experience already.

I am not a CC user, but an enthusiastic CC user generously spent an hour or two last week or so showing me how it worked and walking through a non-publicly-implemented Gwern.net frontend feature (some CSS/JS styling of poetry for mobile devices).

It was highly educational and interesting, and Claude got most of the way to something usable.

Yet I was shocked and appalled by the CC UI/UX itself: it felt like the fetal alcohol syndrome lovechild of a Las Vegas slot machine and TikTok. I did not realize that all those jokes about how using CC was like 'crack' or 'ADHD' or 'gambling' were so on point, I thought they were more, well, metaphorical about the process as a whole. I have not used such a gross and distracting UI in... a long time. Everything was dancing and bouncing around and distracting me while telling me nothing. I wasted time staring at the update monitor trying to understand if "Prognosticating..." was different from "Fleeblegurbigating..." from "Reticulating splines...", while the asterisk bounces up and down, or the colored text fades in and out, all simultaneously, and most of the screen was wasted, and the whole thing took pains to put in as much fancy TUI nonsense as it could. An absolute waste, not whimsy, of pixels. (And I was a little concerned how much time we spent zoned out waiting on the whole shebang. I could feel the productivity leaving my body, minute by minute. How could I possibly focus on anything else while my little friendly bouncing asterisk might finish at any instant...?!) Some description of what files are being accessed seems like you could spare the pixels for them.

So I was impressed enough with the functionality to move it up my list, but also much of it made me think I should look into GPT Codex instead. It sounds like the interfaces there respect my time and attention more, rather than treating me like a Zoomer.


(An example of something which may already exist but I didn't see in my demo - more thoughtfulness on how to handle long-running tasks, and let us switch to something else, instead of us busy waiting on CC. For example, perhaps use of the system bell? That's usually set to flash or update the terminal title, and you can set your window manager to focus a window on the bell. I have my XMonad set to jump to a visible bell, which is great for invoking a possibly slow command: I can go away and focus completely on whatever else I am doing because I know I will be yanked to the backgrounded command the instant it finishes. I even set up a Bash shortcut, `alert () { echo -n -e '\a'; }`, so I simply run stuff like `foo ; alert` and go away.)
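For anyone wanting to try the bell trick, here is a minimal sketch. The `alert` helper is the commenter's own; the `claude -p` invocation in the comment is only illustrative:

```shell
# Ring the terminal bell (BEL, 0x07) so a bell-aware terminal or window
# manager can flash the urgency hint or refocus the window when a
# long-running command finishes.
alert() { printf '\a'; }

# Illustrative usage with any slow command:
#   claude -p "run the test suite and fix failures" ; alert
alert
```

Most emulators map the bell to a visual flash or a title-bar update, so this works even where audible bells are disabled.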

I don't see how you can blame terminal applications - they have typically been able to dump around 1 MB of output per second for decades.

https://martin.ankerl.com/2007/09/01/comprehensive-linux-ter...

Could the React rendering stack be optimised instead?
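In the spirit of the linked benchmark, a quick way to sanity-check your own emulator, assuming nothing beyond standard coreutils:

```shell
# Time how long the terminal takes to render a flood of text.
# A fast emulator drains a million short lines in a second or two;
# a slow one visibly lags behind the producer.
time seq 1 1000000
```

If this is instant but Claude Code still feels sluggish, the bottleneck is more likely the TUI's own render loop than the terminal.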


I believe he is speaking of the effective resolution of TUIs, not pty throughput rates or fps, though I do agree with what you're actually getting at.

From the list of problems they are experiencing with rendering in the terminal, it sounds like they want a GUI (Electron would be a good fit).

> From the list of problems they are experiencing with rendering in the terminal, it sounds like they want a GUI (Electron would be a good fit).

Electron? The tech that is literally incapable of rendering large amounts of anything, including text, quickly?


Well it worked out great for Teams, no?

Boris! Unrelated but thank you and the Anthropic team for Claude code. It’s awesome. I use it every day. No complaints. You all just keep shipping useful little UX things all the time. It must be because it’s being dogfooded internally. Kudos again to the team!

The default view hiding files read is fully a regression imho. It is so helpful for sense of control, nevermind trust and human agency.

Please revert this


Based on the comments here, it sounds like repurposing a boolean "verbose" mode and having that verbose mode actually be multi-state is confusing.

It might be worth considering a "verbose level" type setting with a selection of levels that describe the level of verbosity. Effectively, use a select menu instead of a boolean when one boolean state is actually multiple nested states.

Edit: I realised my use of "verbose" and "verbosity" here is itself ironically verbose, sorry!
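A sketch of what that select-style setting could look like. The key name `outputVerbosity` and its values are invented here for illustration; they are not Claude Code's actual settings schema:

```shell
# Write a hypothetical settings fragment using an enum-style verbosity
# level instead of a repurposed boolean toggle.
cat > /tmp/settings-sketch.json <<'EOF'
{
  "outputVerbosity": "normal"
}
EOF
# Plausible allowed values: "quiet", "normal", "verbose", "debug"
cat /tmp/settings-sketch.json
```

The point is simply that one multi-state field is less confusing than a boolean whose meaning changes between releases.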


Just give multiple options in the config file. Give us the current default, what you now call verbose mode and the previous verbose mode. If Claude is as effective as marketing claims then maintaining all 3 options should be trivially doable, we've been doing more complex configuration in tons of apps for decades.

What is the best way to get you guys feedback? There are a few things I tell Claude Code to do every project that I feel like Claude should just do by default. The biggest one: instead of using Grep so much, I have ripgrep installed; it makes searching for text inside the current folder so much easier and is insanely faster. Claude seems to work way faster when it uses ripgrep. I don't know if it's because MCP has some slowness or ripgrep is just that much faster, but I don't remember grep ever being slow to this level either.

Claude’s search tool _does_ use ripgrep—ripgrep literally ships with Claude Code. I guess the agent can also decide to invoke `grep` directly instead of using its search tool. I usually only see it do this for small searches…
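For comparison, roughly equivalent invocations (the fixture path is illustrative). Much of ripgrep's speedup comes from skipping gitignored and binary files by default:

```shell
# Make a tiny fixture so both commands have something to find
mkdir -p /tmp/demo/src && printf 'TODO: fix parser\n' > /tmp/demo/src/a.txt

# ripgrep: recursive by default, .gitignore-aware, skips binary files
rg -n "TODO" /tmp/demo/src

# closest plain-grep equivalent, which walks everything unconditionally
grep -rn "TODO" /tmp/demo/src
```

On a large repo with build artifacts and node_modules, the gitignore filtering alone usually dominates the difference.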

add that to your claude.md

Hey, It's Damage Control person from Corporate Revenue Maximizing Team here, <5 paragraphs>

One thing this specific feature let me do is see when Claude Code takes a wrong turn, e.g. reads a wrong memory MD file. I used to immediately interrupt and correct its course. Now it is more opaque and there is less of a hint at CC's reasoning.

> Terminals give us relatively few pixels to play with; they have a single font size; colors are not uniformly supported; in some terminal emulators, rendering is extremely slow.

That's why I use your excellent VS Code extension. I have lots of screen space and it's trivial to scroll back there, if needed.

I would really like even more love given to this. When working with long-lived code bases it's important to understand what is happening. Lots of promising UX opportunities here. I see hints of this, but it seems like 80% is TBD.

Ideally you would open source the extension to really use the creativity of your developer user base. ;)


> in some terminal emulators, rendering is extremely slow.

Ooo... ooo! I know what this is a reference to!

https://www.youtube.com/watch?v=hxM8QmyZXtg


Hello Boris. First of all, I apologize for replying unrelated to your post or comment. The reason I'm leaving a comment is because there's a critical issue currently going on regarding new accounts, with over 100 people commenting. This issue has been open for over three weeks. I'd appreciate it if you could look into it.

https://github.com/anthropics/claude-code/issues/19673


Thanks for the long and considered response, but this is a really ugly UX decision.

As others have said - 'reading 10 files' is useless information - we want to be able to see at a glance where it is and what it's doing, so that we can re-direct if necessary.

With the release of Cowork, couldn't Claude Code double down on needs of engineers?


So in a nutshell Claude becoming smarter means that logic that once resided in the agent is being moved to the model?

If that's the case, it's important to assess whether it'll be consistent when operating at a higher level, less dependent on the software layer that governs the agent. Otherwise it risks Claude also becoming more erratic.


I'm going to be paranoid and guess they're trying to segment the users into those that'll notice they're dumbing down the system (via caches and quantized model downgrades) and those that expect the fully available tools.

Thariq (who's on the Claude Code team) swears up and down that they do not do this.

Honestly, man, this is just weird new tech. We're asking a probabilistic model to generate English and JSON and Bash at the same time in an inherently mutable environment and then Anthropic also release one or two updates most workdays that contain tweaks to the system prompt and new feature flags that are being flipped every which way. I don't think you have to believe in a conspiracy theory to understand why it's a little wobbly sometimes.


Yeah, I know it's new tech and the pipeline for the magic is a bunch of shims on top of non-deterministic models; but the MBAs are going to swoop in eventually, and segmenting the users into tiers of price discrimination is coming down the pike regardless of how earnest the current PMs are.

Hmm, honestly I'm not so sure. Many devs seem extremely price-sensitive and the switching cost is... zero.

If Anthropic do something you don't like, you just set a few environment variables and suddenly you're using the Claude Code harness with a local model, or one of thousands available through OpenRouter. And then there is also OpenCode. I haven't tried this, but I'm not worried.

^ https://github.com/ruvnet/claude-flow/wiki/Using-Claude-Code...
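A sketch of the environment-variable switch being described. `ANTHROPIC_BASE_URL` and `ANTHROPIC_AUTH_TOKEN` are the variable names commonly documented for pointing Claude Code at an Anthropic-compatible gateway, but treat the whole thing as an assumption and verify against the current docs; the endpoint and key below are placeholders:

```shell
# Point the harness at an Anthropic-compatible proxy (e.g. LiteLLM in
# front of a local model) instead of api.anthropic.com.
export ANTHROPIC_BASE_URL="http://localhost:4000"   # placeholder proxy endpoint
export ANTHROPIC_AUTH_TOKEN="placeholder-key"       # placeholder credential

# Subsequent invocations in this shell would use the proxy:
#   claude
echo "$ANTHROPIC_BASE_URL"
```

Which is exactly why the switching cost is so low: it's two environment variables, not a migration.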


Unless your employer made a deal and suddenly you are forced to use one provider for the foreseeable future.

Hi Boris, did Claude Code itself author this change? I am curious as you said that all of your recent PRs were authored by Claude Code. If that's the case, just wondering what objective did you ask it to optimize for? Was it something like: make the UI simpler?

Maybe I am missing something but this still doesn't explain why Claude Code couldn't expose a flag and be done with it as the author mentioned.

There must have been a more concise way to write this damage control.

Why does everything have to be in the TUI? I like the TUI. But I also want all the logs. And I do mean all of them.

Of course all the logs can’t be streamed to a terminal. Why would they need to be? Every logging system out there allows multiple stream handlers with different configurations.

Do whatever reasonable defaults you think make sense for the TUI (with some basic configuration). But then I should also be able to give Claude Code a file descriptor and a different set of config options, and you can stream all the logs there. Then I can vibe-code whatever view filter I want on top of that, or heck, have an SLM sub-agent filter it all for me.

I could do this myself with some proxy / packet capture nonsense, but then you’d just move fast and break my things again.
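In the meantime, the split can be approximated from the outside. This sketch assumes only that print mode (`claude -p`) writes its output to stdout; the log path and filter are arbitrary:

```shell
# Keep the terminal view while appending the full stream to a log file
# that a separate filter (or sub-agent) can chew on later.
#   claude -p "summarize the open TODOs" 2>&1 | tee -a "$HOME/claude-full.log"

# Demonstrate the plumbing itself with a stand-in producer:
echo "sample agent output line" | tee -a /tmp/claude-full.log

# Then build whatever filtered view you want on the log, e.g.:
grep -i "output" /tmp/claude-full.log
```

It is no substitute for a real second stream handler inside the tool, but it survives TUI redesigns.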

I’m also constantly frustrated by the fancier models making wrong assumptions in brownfield projects and creating a big mess instead of asking me follow-up questions. Opus is like the world’s shittiest intern… I think a lot of that is upstream of you, but certainly not all of it. There could be a config option to vary the system prompt to encourage more elicitation.

I love the product you’ve built, so all due respect there, but I also know the stench of enshittification when I smell it. You’re programmers, you know how logging is supposed to work. You know MCP has provided a lot of these basic primitives and they’re deliberately absent from claude code. We’ve all seen a product get ratfucked internally by a product manager who copied the playbook of how Prabhakar Raghavan ruined google search.

The open source community is behind at the moment, but they’ll catch up fast. Open always beats closed in the long run. Just look at OpenAI’s fall into disgrace.


For me, Opus 4.6 has been a huge regression. It now hangs for 10+ minutes on a task that used to take a few minutes to complete, and all I see is some "Reading 3 files" etc. messages showing nothing else. The two issues, Opus 4.6 being nowhere near as good as 4.5 by miles and showing just some obscure unnecessary messages, make me ask "why?" Why did you decide to screw up something that was awesome, dumb it down, and keep insisting that the "verbose" mode is the way to go?? Seriously, who wants to see messages that are essentially the same? "2 patterns", "3 files read"?? Seriously? Who is that mode for? Why is it even a default??

Do you feel that a terminal UX will remain your long term interface for Claude Code? Or would you consider a native interface like Codex has built?

This kind of attitude, above all else, is why anthropic is winning imo. Thanks.

Ignoring user input?

> Opus 4.6 1-shots much of my code, often running for minutes, hours, and days at a time.

This is verifiable bullshit. Unless you explicitly explain how it "runs for days" since Opus's context window is incapable of handling even relatively large CLAUDE.md files.

> The amount of output this generates can quickly become overwhelming in a terminal, and is something we hear often from users. Terminals give us relatively few pixels to play with; they have a single font size; colors are not uniformly supported; in some terminal emulators, rendering is extremely slow.

No. It's your incapability as an engineer that limits this. And you and your engineers getting high on your own supply. Hence you need 16ms to draw a couple of characters on screen and call it a tiny game engine [1] For which your team was rightfully ridiculed.

> But we missed the mark for a subset of our users. To improve it,

AI-written corporate nothingspeak.

[1] https://x.com/trq212/status/2014051501786931427


At some point we need to start preferring GUIs instead of terminals as the AI starts giving us more and more information. Features like hover-over tooltips and toggle switches designed for mouse operation might really start to matter.

Maybe "AI IDEs" will gain ground in the future, e.g. vibe-kanban


We could do complicated UIs in terminals in the 1990s.

Unfortunately, vibe coders cannot do that anymore.


Yes, I don't understand why Claude Code needs to be a terminal app.

It doesn't compose with any other command line program and the terminal interface is limiting.

I'm surprised nobody has yet made a coding assistant that runs in the browser or as a standalone app. At this point it doesn't really need to integrate with my text editor or IDE.


> It doesn't compose with any other command line program

For what it's worth, it absolutely can, just not when invoked in interactive mode.

(This doesn't really contradict your overall point though.)


Please, for the love of God, no. I'd rather have something completely agnostic of an IDE. OpenCode is doing the right thing IMO

You can have something IDE agnostic but still not be dependent on the ancient VT100 terminal protocol and rendering path.

(That said I do like being able to SSH in and run an agent that way. But there are other remote access modalities.)


I’m just some tinkerer and signed up just to say this. These are my thoughts after reading the blog post and your response in full.

I subscribe to Max rn. Tons of money. Anthropic’s Super Bowl ads were shit, not letting us use OpenCode was shit, and this is more shit. There might only be a single straw left before I go to Codex (no one’s complaining about it, and the OpenClaw creator prefers it)

This dev is clearly writing his reply with Claude and sounding way too corpo. This feels like how school teachers would talk to you. Your response in its length was genuinely insulting. Everyone knows how to generate text with AI now and you’re doing a terrible job at it. You can even see the emdash attempt (markdown renders two normal dashes as an emdash).

This was his prompt: “read this blog post, familiarize yourself with the mentioned GitHub issue and make a response on behalf of Anthropic.” He then added a little bit at the end when he realized the response didn’t answer the question, and got it to fix the grammar and spelling on that.

Your response is appropriate for the masses. But we’re not. We’re the so called hackers and read right through the bs. It’s not even about the feature being gone anymore.

There is a principle we uphold as “hackers” that doesn’t align with this, and that pisses people off a lot more than you think. I can’t really put my finger on it; maybe someone can help me out.

PS: About the Super Bowl ads. Anyone who knows the story knows they’re exaggerated. (In the general public outside of Silicon Valley it’s like a 50/50 split or something about people liking or disliking AI as a whole rn. OpenAI is doing way more to help the case, not that ads are a good thing.) OpenAI used to feel like the bad guy; now it’s kinda shifting to Anthropic. This, the ads, and OpenCode are all examples of it. (I especially recommend people watch the Anthropic and OpenAI Super Bowl ads back to back.)


> This dev is clearly writing his reply with Claude

> You can even see the emdash attempt (markdown renders two normal dashes as an emdash)

He says he wrote it all manually.[0] Obviously I can't know if that's true, but I do think your internal AI detector is at least overconfident. For example, some of us have been regularly using the double hyphen since long before the LLM era. (In Word, it auto-corrects to an en dash, or to an em dash if it's not surrounded by spaces. In plain text, it's the best looking easily-typable alternative to a dash. AFAICT, it's not actually used for dashes in CommonMark Markdown.)

The rest is more subjective, but there are some things Claude would be unlikely to write (like the parenthetical "(ie. progressive disclosure)" -- it would write "i.e." with both dots, and it would probably follow it with a comma). Of course those could all be intentional obfuscations or minimal human edits, but IMO you are conflating the corporate communications vibe with the LLM vibe.

[0] https://news.ycombinator.com/item?id=46982418


> For example, some of us have been regularly using the double hyphen since long before the LLM era.

This "emdash" and "double dash" discussion and mention is the first time I have heard of it or seen discussion of it. I've never encountered it in the wild, nor seen it used in any meaningful way in all my time on the internet these last 27 years.

And yes - I've seen that special dash character in Word for many years. Not once has anyone said "oh hey, I type double dashes and Word uses that". No, it's always been "Word has this weird dash and if you copy-paste it it's weird", and no one knows how it pops up in Word, etc.

And yes, I've seen the AI spit out the special dash many times. It's a telltale sign of using LLM generated text.

And now, magically, in this single thread, you can see a half-dozen different users all using this "--" as if it's normal. It's like upside-down world. Either everyone is now using this brand new form of speaking, or they're covering for this Claude Code developer.

So yeah, maybe I've been sticking my head in the sand for years now, or maybe I just blindly ignored double-dashes when reading text till now. But it sure seems fishy.


Sounds like you see me as an untrustworthy source, so all I can suggest is that you look into this yourself. Search for "--" in pre-LLM forum postings and see how many hits you get.

Here are my pre-2020 HN comments, with 3 double hyphens in 8 comments: https://hn.algolia.com/?dateEnd=1576108800&dateRange=custom&...

As I was in the process of typing the search term to get my comments (and had just typed 'author'), this happened to come up as the top search result for Comments by Date for Feb 1st 2000 > Dec 12th 2019: https://news.ycombinator.com/item?id=21768030

Note that I wasn't searching directly for the double hyphen, which doesn't seem to work -- the first result just happened to contain one. If I'm covering for the Anthropic guy, I could be lying about the process by which I found that comment, but I think you should at least see this as sufficient reason to question your assumptions and do some searches of your own.


I've just realised I messed up the search, and the algolia link is to my pre-2020 comments containing the word 'author'. But my full (far longer) list of pre-2020 comments also shows some pretty heavy double-hyphen use: 6 hits on page 1 of the results, 15 hits on page 2, and so on.

This conflict shows a pattern across AI products today.

Most tools are still designed with programmers as the default user. Everyone else is treated as an edge case.

But the real growth is outside that bubble. AI won’t become mainstream by hiding everything. And it won’t get there by exposing everything either.

It gets there by translating action into intent. By showing progress in human terms. By making people feel they’re still in control.

The teams that figure this out won’t just win an argument on GitHub. They’ll reach the much larger audience that’s still waiting on the sidelines.

My detail here: https://open.substack.com/pub/insanedesigner/p/building-ai-f...


Please don't post LLM output and pretend it's your writing.

I never thought I'd long for the days when people posted "$LLM says" comments, but at least those were honest.


On the contrary, I feel like most AI products aimed at non-programmers haven't really set the world on fire, with the exception of the basic chatbot interface (ie ChatGPT).

Focusing on programmers seems to have really worked for Anthropic. (And they do also have Claude Cowork).


Thanks Boris, great insights for builders.

Claude is perfect every time, no quibbles; the IT industry simply has to adapt to the new shift. Surely people who earn a living by writing code will find fault with it, but even with Claude, code will not write itself. It's a simple shift from writing code to making code work better/integrate/tweak/refine/personalize/customize it. Thank you Boris and team, we are over the moon

In what terminals is rendering slow? I really think GPU acceleration for terminals (as seen in Ghostty) is silly. It's a terminal.

Edit: I can't post anymore today apparently because of dang. If you post a comment about a bad terminal at least tell us about the rendering issues.


VSCode (xterm.js) is one of the worst, but there's a large long tail of slow terminals out there.

Not really using the VS Code terminal anymore, just the Ubuntu terminal, but the biggest problem I have is that at some point Claude just eats up all memory and the session crashes. I know it's not really Claude's fault but damn it's annoying.

It's not a bad idea to use one of the GPU terminals on Linux just for Claude Code; it works out a bit better

As someone whose business is run through a terminal: not everyone uses Ghostty, even though they should. Remember that they don't have a Windows version.

Not everyone has the massive GPUs required to run Ghostty.

boris-4.6-humble

I am not a programmer and detest the terminal environment. While I design complexity, I need simple interfaces. Claude is now guiding all dev based on my initial design spec and makes beautiful notebooks that can be uploaded directly to Colab or GitHub; no UX at all, no usability issues. This is the latest baby we made yesterday: starborn.github.io/copp-notebook. Thank you Claude engineering team for something that is flying very high and takes me with it.

> I am not a programmer and detest the terminal environment

As someone who finds formal language a natural and better interface for controlling a computer, can you explain how and why you actually hate it? I don't mean things like lack of discoverability from using a shell that lacks the completion and documentation that have been common for decades; I get those downsides. But why do you detest it in principle?


You've reached the stage where, if something is possible in CC, someone out there is using it. Taking anything away will have them ask for it back; you need to let people toggle things. https://xkcd.com/1172/

It's got to be hard to find the right balance: what works for most users, while somehow including those whose workflows involve using a rapid temperature rise as a control signal (xkcd 1172).

@boris

Can we please move the "Extended Thinking" icon back to the left side of Claude Desktop, near the research and web search icons? What used to be one click is now three.


Also open source CC already.

And stop banning 3rd party harnesses please. Thanks

Anthropic, your actual moat is goodwill. Remember that.


> Anthropic, your actual moat is goodwill.

You mean the company that DDoSed websites to train their model?


Yea yea, that's a cool story, but can you make it cheaper maybe?

To be honest, I think there should be an option to completely hide all the code Claude generates and uses. Summaries, strategies, plans, logs, decisions, and questions are all I need. I am convinced that in a few years nobody will care about the programming language itself.

So it's the users who are dumb :-)

This was written with Claude, lmao. What a disgrace not to put a disclaimer.

use your own words!

i would rather read the prompt.


Same. It feels like an insult to be asked to read someone's AI-generated stuff. They put no effort into writing it, but we now have to put extra effort into reading it, because it's longer than normal.

ok claude

> We can no longer design for ourselves, and we rely heavily on community feedback to co-design the right experience. We cannot build the right things without that feedback.

How can that be true, when you're deliberately and repeatedly telling devs (the community you claim to listen to) that you know better than they do? They're telling you exactly what they want, and you're telling them, "Nah." That isn't listening. You understand that, right?


I’m witnessing him respond in real time with not just feedback but also actual changes, in a respectful and constructive manner - which is not easy to do, when there are people who communicate in this rude of a manner. If that’s not listening, then I don’t know what is.

And it shouldn’t need to be said, but the words that appear on the screen are from an actual person with, you know, feelings.


Acting like they can't take the heat when they purposely put themselves in the public sphere is odd.

Interesting. They have been pretty receptive to my pull-request comments and discourse on issues. To each their own anecdote, I suppose.


This is an extremely disappointing response. The issue is your dev relations people being shitty and unhelpful and trying to solve actual problems with media-relations speak as if engineers are just going to go away in a few days.

Arrogant and clueless: not exactly who I want to give my money to, when I know what enshittification is.

They have horrible instincts and are completely clueless. You need to move them away from a public-facing role. It honestly looks so bad that it suggests nepotism and internal dysfunction to have such a poor response.

This is not the kind of mistake someone makes innocently. It's a window into a worldview, and it's made me switch to Gemini and reactivate Cursor as a backup, because it's only going to get worse from here.

The problem is not the initial change (which you would rapidly have realized was a big deal to a huge number of your users) but how high-handed and incompetent the initial response was. Nobody's saying they should be fired, but they've failed in public in a huge way and should step back for a long time.


This is an insanely good response. History, backstory, we screwed up, what we're doing to fix it. Keep up the great work!

would've been better to post the prompt directly IMO

Prompts can be the new data compression. Just send your friend a prompt and the heartfelt penpal message gets decompressed at their end.

it reads like AI generated or at least AI assisted... those -- don't fool me!

fwiw, I wrote it 100% by hand. Maybe I talk to Claude too much...

Nah it doesn't look AI generated to me.

i thought about it being ai generated, but i don't care. it was easy to read and contained the right information; good enough for me. plus, who knows, maybe English is your second language and you used ai to clean up your writing. i'd prefer that.


