The K/V cache is just an optimization. But yes, you would expect the attention patterns for the model producing "OK, I'm doing X" and for you asking "Why did you do X?" to be similar, so I don't see a reason why introspection would be impossible. In fact, while adapting a test skill where the agent would write a new test instead of adapting an existing one, I asked it why, and it gave the reasoning it used. We then updated the skill to explicitly reject that reasoning, and it worked: the agent adapted the existing test instead.
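To illustrate the "just an optimization" point: incremental decoding with cached keys/values gives bit-for-bit the same result as recomputing causal attention from scratch, so nothing the model "knows" lives in the cache itself. A minimal single-head sketch (toy shapes, no real model):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attend(q, K, V):
    # q: (d,), K,V: (t, d) -> attention-weighted sum over t positions
    scores = K @ q / np.sqrt(q.shape[-1])
    return softmax(scores) @ V

rng = np.random.default_rng(0)
T, d = 5, 8
Q = rng.normal(size=(T, d))
K = rng.normal(size=(T, d))
V = rng.normal(size=(T, d))

# Full causal recompute at every position
full = np.stack([attend(Q[t], K[:t + 1], V[:t + 1]) for t in range(T)])

# Incremental decode: append one K/V row per step, reuse the cache
K_cache = np.empty((0, d))
V_cache = np.empty((0, d))
outs = []
for t in range(T):
    K_cache = np.vstack([K_cache, K[t:t + 1]])
    V_cache = np.vstack([V_cache, V[t:t + 1]])
    outs.append(attend(Q[t], K_cache, V_cache))

assert np.allclose(full, np.stack(outs))  # identical outputs either way
```

The cache only avoids recomputing K and V for past tokens; the attention state it exposes is exactly what a fresh forward pass would produce.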
It's quite likely that OpenAI is running a significant PR campaign to compensate for the bad rep they earned by stepping in to meet the demands of the Trump administration, after Anthropic refused to assist the administration with mass domestic surveillance and development of lethal autonomous weapons. Presumably OpenAI didn't buy the podcast TBPN just because they like the guys.
I don't think "usage" is exactly the metric they're going for, more like "usage in line with our developmental strategy." Transcripts of people using Claude to write code are probably far more valuable to them than transcripts of OpenClaw trying to set up a calendar invite.
I mean, they don’t train on your data unless you have the setting enabled.
Do you really think they are reading your prompts at all?
Free inference providers sure, but Anthropic?
I have a transformer attention mechanism which seems to be more data-efficient than the usual dot product, and I'm trying to write a performant backwards kernel for it.
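Since the variant above isn't specified, here's what "writing a backward kernel" typically involves, using standard dot-product attention as a stand-in: derive dQ, dK, dV by hand, check them against finite differences, and only then port the math to a fused GPU kernel.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attn_forward(Q, K, V):
    S = Q @ K.T / np.sqrt(Q.shape[-1])
    P = softmax(S)
    return P @ V, P  # keep P for the backward pass

def attn_backward(dO, Q, K, V, P):
    scale = 1.0 / np.sqrt(Q.shape[-1])
    dV = P.T @ dO
    dP = dO @ V.T
    # Row-wise softmax Jacobian: dS = P * (dP - sum(dP * P))
    dS = P * (dP - (dP * P).sum(axis=-1, keepdims=True))
    return dS @ K * scale, dS.T @ Q * scale, dV

rng = np.random.default_rng(0)
T, d = 3, 4
Q, K, V, W = (rng.normal(size=(T, d)) for _ in range(4))

O, P = attn_forward(Q, K, V)
loss = lambda q: (attn_forward(q, K, V)[0] * W).sum()
dQ, dK, dV = attn_backward(W, Q, K, V, P)  # dO = W for loss = sum(O * W)

# Finite-difference check of dQ before trusting the analytic backward
eps = 1e-6
num = np.zeros_like(Q)
for i in range(T):
    for j in range(d):
        Qp = Q.copy(); Qp[i, j] += eps
        Qm = Q.copy(); Qm[i, j] -= eps
        num[i, j] = (loss(Qp) - loss(Qm)) / (2 * eps)
assert np.allclose(dQ, num, atol=1e-5)
```

For a nonstandard score function you'd redo the dS derivation for that function; the dV and softmax-Jacobian steps usually carry over unchanged.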