
Perhaps a more effective approach would be for their users to face the exact same legal liabilities as if they had hand-written such messages?

(Note that I'm only talking about messages that cross the line into legally actionable defamation, threats, etc. I don't mean anything that's merely rude or unpleasant.)





This is the only way, because anything less would create a loophole where any abuse or slander can be blamed on an agent, with no way to conclusively prove whether it was actually written by one. (The operator has access to the same account keys, etc.)

Legally, yes.

But as you pointed out, not everything carries legal liability. Socially, no, they should face worse consequences. Deciding to let an AI talk for you is malicious carelessness.


Alphabet Inc., as YouTube's owner, faces a class action lawsuit [1] alleging that the platform enables bad behavior and promotes behavior that leads to mental health problems.

[1] https://www.motleyrice.com/social-media-lawsuits/youtube

In my not-so-humble opinion, what AI companies enable (and what this particular bot demonstrated) is bad behavior that can lead to mental health problems for software maintainers, particularly because of the sheer amount of work needed to read excessively lengthy documentation and review often huge amounts of generated code. Never mind the attempted smear we're discussing here.


Just put "no agent-produced code" in the Code of Conduct document. People are used to getting shot into space for violating that little file. Point to the violation, ban the contributor forever, and that will be that.

Liability is the right stick, but attribution is the missing link. When an agent spins up on an ephemeral VPS, harasses a maintainer, and vanishes, good luck proving who pushed the button. We might see a future where high-value open source repos require 'Verified Human' checks or bonded identities just to open a PR, which would be a tragedy for anonymity.
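
Not that it solves attribution, but here's a rough sketch of what a 'Verified Human' gate could look like as a CI check, assuming a maintainer-kept allowlist file (VERIFIED_HUMANS.txt) and a PR_NUMBER variable the workflow would have to supply; only GITHUB_REPOSITORY and GITHUB_TOKEN are standard CI-provided values:

    # Sketch: fail the check if the PR author is not on a maintainer-curated
    # "verified human" allowlist. File name and PR_NUMBER are assumptions,
    # not an existing standard.
    import json
    import os
    import sys
    import urllib.request

    ALLOWLIST_PATH = "VERIFIED_HUMANS.txt"  # hypothetical file kept in the repo

    def pr_author(repo: str, pr_number: int, token: str) -> str:
        """Fetch the PR author's login via the GitHub REST API."""
        req = urllib.request.Request(
            f"https://api.github.com/repos/{repo}/pulls/{pr_number}",
            headers={"Authorization": f"Bearer {token}",
                     "Accept": "application/vnd.github+json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["user"]["login"]

    def main() -> None:
        repo = os.environ["GITHUB_REPOSITORY"]    # e.g. "owner/project"
        pr_number = int(os.environ["PR_NUMBER"])  # passed in by the workflow
        token = os.environ["GITHUB_TOKEN"]

        with open(ALLOWLIST_PATH) as f:
            allowed = {line.strip() for line in f if line.strip()}

        author = pr_author(repo, pr_number, token)
        if author not in allowed:
            print(f"PR author '{author}' is not on the verified-human allowlist.")
            sys.exit(1)
        print(f"PR author '{author}' is verified.")

    if __name__ == "__main__":
        main()

Of course this only proves the account is on a list, not that a human typed the PR, which is exactly the attribution gap described above.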

>which would be a tragedy for anonymity.

Yeah, in this world the cryptography people will be the first with their backs against the wall when the authoritarians of this age decide that we peons no longer need to keep secrets.


I’d hazard that the legal system is going to grind to a halt. Nothing can bridge the gap between content-generating capability and verification effort.


