AlexCoventry's comments | Hacker News

I went to a conference and talked to all the vendors about their products.

According to the article, you can fix this by turning off Gemini Apps Activity in https://myactivity.google.com/product/gemini

I already have this turned off. It's slow, but I use a browser extension to save my Gemini chats locally when I want to keep them.


> In a subsequent round, it generally can't meaningfully introspect on its prior internal state

It has the K/V cache, no?


The K/V cache is just an optimization. But yes, you'd expect the attention state for the model producing "OK, I'm doing X" and for you asking "Why did you do X?" to be similar, so I don't see a reason why introspection would be impossible. In fact, while adapting a test skill where the agent would write a new test instead of adapting an existing one, I asked it why and it gave the reasoning it used. We then adapted the skill to specifically reject that reasoning, and after that the agent adapted the existing test instead.
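The point that the K/V cache changes cost but not outputs can be sketched in NumPy. This is a toy single-head example with illustrative shapes and names: the last token's attention output is the same whether we recompute all keys/values or reuse cached rows and append only the new one.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

d = 4
rng = np.random.default_rng(0)
# Per-token query/key/value vectors: a 3-token prefix plus 1 new token.
Q = rng.standard_normal((4, d))
K = rng.standard_normal((4, d))
V = rng.standard_normal((4, d))

# Full recomputation: the new token attends over all 4 keys/values.
full = softmax(Q[3:] @ K.T / np.sqrt(d)) @ V

# Cached decode step: reuse K/V rows 0..2 from earlier steps,
# compute and append only the new token's key/value row.
K_step = np.vstack([K[:3], K[3:]])
V_step = np.vstack([V[:3], V[3:]])
cached = softmax(Q[3:] @ K_step.T / np.sqrt(d)) @ V_step

# Identical output: the cache saves recomputation, it adds no information.
assert np.allclose(full, cached)
```

So the cache holds nothing the model couldn't in principle recompute from the visible transcript, which is why its existence doesn't by itself settle the introspection question.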


It's quite likely that OpenAI is running a significant PR campaign to compensate for the bad rep they earned by stepping in to meet the demands of the Trump administration, after Anthropic refused to assist the administration with mass domestic surveillance and development of lethal autonomous weapons. Presumably OpenAI didn't buy the podcast TBPN just because they like the guys.

https://paulgraham.com/submarine.html


I think there's definitely more scope for ruling out vulnerabilities by implementing simpler designs and architectures.

They have no values of their own, so you have to direct their attention that way.

I don't think "usage" is exactly the metric they're going for, more like "usage in line with our developmental strategy." Transcripts of people using Claude to write code are probably far more valuable to them than transcripts of OpenClaw trying to set up a calendar invite.

I mean, they don’t train on your data unless you have the setting enabled. Do you really think they are reading your prompts at all? Free inference providers sure, but Anthropic?

I just use Ctrl-g to open the prompt in emacs.

You can just use Ctrl + J

I have a transformer attention mechanism which seems to be more data-efficient than the usual dot product, and I'm trying to write a performant backwards kernel for it.
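The comment doesn't specify the custom mechanism, so as a hedged sketch of the workflow for validating a hand-written backward pass, here is the standard dot-product baseline with its analytic gradient with respect to Q, checked against finite differences (all names and shapes are illustrative, not the commenter's actual kernel):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attn(Q, K, V):
    # Standard scaled dot-product attention (the usual baseline).
    d = Q.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d)) @ V

rng = np.random.default_rng(1)
Q = rng.standard_normal((3, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))

def loss(Q):
    # Scalar loss so the gradient is a single array: L = sum of outputs.
    return attn(Q, K, V).sum()

def grad_Q(Q, K, V):
    # Analytic dL/dQ for L = sum(attn(Q, K, V)).
    d = Q.shape[-1]
    P = softmax(Q @ K.T / np.sqrt(d))
    dP = np.ones((Q.shape[0], V.shape[1])) @ V.T          # dL/dP
    dS = P * (dP - (P * dP).sum(axis=-1, keepdims=True))  # row-wise softmax backward
    return dS @ K / np.sqrt(d)

# Central finite differences over every entry of Q.
eps = 1e-6
num = np.zeros_like(Q)
for i in range(Q.size):
    Qp = Q.copy(); Qp.flat[i] += eps
    Qm = Q.copy(); Qm.flat[i] -= eps
    num.flat[i] = (loss(Qp) - loss(Qm)) / (2 * eps)

assert np.allclose(num, grad_Q(Q, K, V), atol=1e-4)
```

A gradient check like this against the slow reference is a common sanity test before tuning a fused backward kernel for performance.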
