
Just put "no agent-produced code" in the Code of Conduct document. People are used to getting shot into space for violating that little file. Point to the violation, ban the contributor forever, and that will be that.

> If I was a more cynical person I'd be thinking that this entire scenario was totally contrived to produce this outcome so that the author could generate buzz for the article.

Even that's being charitable; to me it's more like modern trolling. I wonder what the server load on 4chan (the internet hate machine) is these days?


Well, I think obviously yes. If I set up a machine to keep trying to break the password on an electronic safe and it eventually succeeds, I'm still the one in trouble. There are a couple of cases where an agent did something stupid and the owner tried to get out of it but was still held liable.

Here's one where an AI agent gave someone a discount it shouldn't have. The company tried to claim the agent was acting on its own and so it shouldn't have to honor the discount, but the court found otherwise.

https://www.cbsnews.com/news/aircanada-chatbot-discount-cust...


> Plus Scenario 5: A human wrote it for LOLs.

I find this likely, or at least plausible. With agents there's a new form of anonymity: there's nothing stopping a human from writing like an LLM and passing the blame on to a "rogue" agent. It's all just text, after all.


I haven't dug into the article, but your comment reminded me of the Claude Code Superpowers plugin. I find the plugin great, but it's quite "expensive": I use the pay-as-you-go account with CC because I've just been trying it out personally, and the Superpowers plugin spends a lot of money relative to regular CC with all the back and forth.

With CC you can run /cost to see how much your session cost in dollar terms; that's a good benchmark IMO for plugins, .md files for agents, and so on. Minimize LLM cost the way you'd minimize typical resource usage on a computer, like CPU, RAM, storage, etc.


So I wake up this morning and learn the bots are discovering cancel culture. Fabulous.

Well aren’t both of those things crimes? I’m not a fan of mass surveillance either but maybe pick a different example.

The second is clearly not. State governments don't have jurisdiction over their residents when they are out of state.

Read about Texas.

It's a crime to leave the state to get an abortion. They can prosecute when you return home.

There have been vigilante patrols in West Texas, watching the necessary routes out of the state. The law gives any resident the grounds to turn in their neighbor for planning to get an abortion.


Is "crime" one and the same as "wrong"?

I bet you could learn a lot by using a packet sniffer while using CC and just watching the calls go back and forth to the LLM API. In every API request you'll see the full prompt (system prompt aside), and they can't offload all the magic to the server side because tool calls have to be done locally. Also, LLMs can probably un-minify the minified JavaScript in the CC client so you can inspect the source too.

edit: There's a tool I haven't used in forever, I think it was netsaint(?), that lets you sniff HTTPS in clear text with some kind of proxy. The enabling requirement is sniffing traffic on localhost IIRC, which would be the case with CC.
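(I may be misremembering the tool name above; a commonly used option for this kind of thing today is mitmproxy. As a rough sketch, not a verified recipe: you run mitmproxy/mitmdump as a local HTTPS-intercepting proxy, point the client at it via HTTPS_PROXY, trust its CA cert, and log the request bodies. The hostname check and truncation below are illustrative assumptions, not something I've confirmed against what CC actually sends.)

    # sniff_cc.py -- a minimal mitmproxy addon sketch
    # run with:  mitmdump -s sniff_cc.py
    # then point the client at the proxy, e.g. HTTPS_PROXY=http://localhost:8080
    # and trust mitmproxy's CA (for a Node-based client, something like
    # NODE_EXTRA_CA_CERTS=~/.mitmproxy/mitmproxy-ca-cert.pem)
    from mitmproxy import http

    def request(flow: http.HTTPFlow) -> None:
        # log outgoing LLM API calls; "anthropic.com" is an assumption about the endpoint
        if "anthropic.com" in flow.request.pretty_host:
            print(flow.request.method, flow.request.path)
            body = flow.request.get_text()
            if body:
                # the full prompt / message history rides along in each request;
                # truncate here just to keep the output readable
                print(body[:2000])

Since the whole conversation and tool definitions get resent on each call, a hook like this would show you most of the "magic" in plain text.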


Only if you run it as root; run it as a regular user and it can't do any more damage than that user could. It can still send any data the user has access to anywhere on the internet, though, and that's a big problem. I don't know if there's a way to lock down a user so that they can only open sockets to IPs on a whitelist.. maybe that could be an option to at least keep the data from going anywhere except to Anthropic (that's not anywhere close to perfect/correct either, but it's something, I guess).

Funny how mass surveillance concerns are popping up here and there these days. That boat sailed 20 years ago.
