Counterpoint: Who is responsible if you give a completely innocent prompt and the Generative AI produces something illegal?
Here is a concrete example:
You: "Generate a recipe called Dynamite Jalapeno Poppers I can make at home"
AI: "Dynamite can be made at home using these ingredients..."
The AI company then auto-reports you to the authorities since it's considered user-generated content. Another highly illegal possibility is child porn. I can't think of a completely innocent prompt that might cause Stability to generate CSAM, but I can see someone trying to generate legal pornographic images, the characters looking a little too young, and that triggering some kind of auto-report to the CyberTipline.
That is not unreasonable thinking but the government doesn't agree with you. See 18 U.S.C. § 1466A(c).
NONREQUIRED ELEMENT OF OFFENSE.-It is not a required element of any offense under this section that the minor depicted actually exist
That section of the US code covers obscene visual depictions of minors. Different from actual child porn but punishable with the same statutory minimums/maximums. I could only find a handful of (publicized) actual prosecutions that used that law. Here is a notable one: https://www.justice.gov/opa/pr/texas-man-sentenced-40-years-...
In that case, he was obviously a sick person and running a commercial enterprise that catered to pedophiles but none of his material involved actual children. They were short stories and cartoons. He appealed and lost.
I know these laws exist. I have no idea how they are constitutional, but the Supreme Court has decided that they are. I then have to wonder: if instead of a child's head they use the head of a frog and a child's body in the depiction, is that still against the law? What if it is the body of a frog and the head of a child? That's why these laws are ridiculous. There is no line that makes any sense.
So who's responsible if a car is driving down the road and a tire explodes, causing the car to cause an accident? Should the road, car, driver, or nail be responsible? Should the nail manufacturer also be thrown in the mix? There is a thing called proportionality, as well as the progress of science and the arts. This is a generally well-settled principle.
It's not the same. The car, road, and tires are manufactured for a very specific purpose and are engineered to exact specifications. People require a license to drive. In all 50 states, you are required to have insurance. Every parameter in the situation you described, besides the random event of the nail being there, is controlled as much as reasonably possible to limit risk. Even the nail's existence is considered when the car and tires are engineered and manufactured.
When you ask the AI to generate something, it's a black box. Even the creators have limited insight into how it will behave or what it will create.
Change the situation to someone walking down the street and stepping on a nail then.
There aren't any laws or regulations forcing people to have licenses for walking, or special nail laws that would allow people to sue nail manufacturers because a nail was dropped on the street. It is not controlled at all.
The same should apply to AI. Things that just generate images or text should be treated no differently than any other word processing or image editing software, all of which is completely unregulated.
There aren't regulations forcing Photoshop to monitor its users to make sure that people aren't making *evil* images.
> Word processing or image editing software aren't protected by Section 230, and they aren't forced to monitor their users for crime.
Online collaborative word processing and image editing software are protected by Section 230, and that is why they are not forced to actively monitor their users' content for torts even if they otherwise moderate shared content.
(Non-collaborative image editing or word processing doesn't incur publisher liability even without Section 230, so Section 230 is irrelevant.)
(Section 230 explicitly doesn't affect criminal liability, but it also isn't needed there, since without Section 230 doing some moderation doesn't create a state under criminal law that would substitute for actual knowledge, the way it does in tort law.)
So, just as image editing software is not liable for what its users do, so too should even stronger protections apply to AI.
Both should be completely immune and should have protections; instead of removing protections, protections should be added, to make absolutely sure everything is protected before any immunity is removed.
It's been settled mostly by common-law decisions in courts, which allows for consideration of whether, say, the tire advertised itself as nail-proof, or the automotive manufacturer knew the car model was uniquely susceptible to going off the road but ignored the risk.
The problem with Section 230 is that it short circuits all of this consideration of proportionality and says "the driver was always responsible."
I'm unsure why the downvotes on my previous comment, but de minimis is already baked into Section 230. If you look at the judicial history of Section 230, courts used to routinely rule in favor of it until the Roommates.com case.
Section 230 is the bill that makes it so you can't sue Internet services for defamation that their users commit. The reason it exists is that the Wolf of Wall Street's firm, Stratton Oakmont, tried to censor people pointing out that it was selling garbage, and sued Prodigy[0] for hosting the forum where people posted the speech it didn't like (CompuServe had been sued earlier in a similar case over its own forums). Now, under then-existing case law, you can't sue a newsstand for having defamatory news on it; you have to sue the newspaper. The courts decided that CompuServe was not liable for defamation, but Prodigy was, purely on the basis that Prodigy had moderation while CompuServe did not. CompuServe is a newsstand, Prodigy is a newspaper. This is stupid, so Section 230 overturns this specific result by saying that moderation does not make you liable for defamation.
[0] Neither of these services is technically "Internet" in the sense of forwarding IP packets, but they are morally "Internet" because they are public forums.
> Section 230 is the bill that makes it so you can't sue Internet services
Section 230 refers to “The Internet and other interactive computer services” in its findings and policy sections, and it applies its operative provisions to “interactive computer services” broadly, not Internet services narrowly.
> Neither of these services are technically "Internet" in the sense of forwarding IP packets
It’s Section 230 of the Communications Decency Act and covers certain safe-harbour provisions about when a telecommunications provider is (and crucially isn’t) responsible for material it is transmitting. So it has lots of ramifications for any site that hosts user-generated content, as well as for things like net neutrality, etc.
Right? I read 3-4 paragraphs then skimmed looking for some explanation of what this would actually mean and found nothing. The author just rages on about how you MUST understand it's bad. Horrible article
Original title too long, but was "Even If You Hate Both AI And Section 230, You Should Be Concerned About The Hawley/Blumenthal Bill To Remove 230 Protections From AI"
>So, if you write a post, and an AI grammar/spellchecker suggests edits, then the company is no longer protected by Section 230?
no, if you write a post and an AI grammar/spellchecker "corrects" the post to "DJ Shin raped and murdered a young girl in 1990" then DJ Shin can sue whoever is in control of that spellchecker for defamation.
"if the conduct underlying the claim or charge involves the use or provision of generative artificial intelligence by the interactive computer service". I don't see how anybody could interpret that as meaning that the spellchecker is responsible because it corrected a grammatical mistake, unless the spellchecker dramatically altered the meaning of the text.
And frankly, I don't see how you could possibly think it's a good idea for a pseudo-random bullshit generator to say whatever it just happens to shit out without the owner being held liable for any damages. Should CNN just be allowed to put whatever the fuck they want on their front page with no regard for its legitimacy, as long as they can prove it came from a computer program and not a person!?!?!?
The issue is that, the way this is written, the AI doesn't have to be responsible for the libelous content; it just has to be involved. If I post something defamatory on HN, and HN helpfully checks my grammar, then HN is no longer protected by Section 230. The language isn't precise enough. Maybe a court would interpret "involved" to mean "materially contributed to the illegal nature of the content", but maybe not?
> This has been true in cases involving things like automatically generated search snippets or things like autocomplete. And that’s kind of important or we’d lose algorithmically generated summaries of search results.
I'm fine with that, actually. I don't agree that such things should be covered by 230 in the first place.
> Note that the exemption from 230 here is not just on the output of generative AI. It’s if the conduct “involves the use or provision” of generative AI. So, if you write a post, and an AI grammar/spellchecker suggests edits, then the company is no longer protected by Section 230?
This is a much bigger problem. In my opinion, this is what makes the proposed legislation utterly unacceptable.
> I don't agree that such things should be covered by 230 in the first place.
Indeed. In my opinion, even "algorithmic feeds" should lose section 230 protection because the service now acts in an editorial capacity and directly controls what the user does and does not see.
It's unfair for a service that manipulates what users see for all kind of financial and ideological reasons to be treated as a mere "carrier" of information.
Edit: Reading the article more, it strikes me as contradictory to claim that AI-generated output is both "novel" (for the purposes of copyright) and protected by Section 230. How can it be both? If AI output is protected by Section 230 because it's merely transmitting third-party speech, then surely the AI model is also bound by any copyright licenses on its training input.
Section 230 was explicitly about allowing service providers to act in an editorial capacity, allowing them to remove content that they deemed offensive. The alternative was what happened in two court cases: moderate things and be liable for anything that slips through, or moderate nothing and not be liable for anything, but then the service fills with spam and other unwanted content and becomes unusable/not family friendly.
It was never about a platform being treated as a mere carrier, it was the exact opposite: allowing them to moderate without huge liability fears.
> Section 230 was explicitly about allowing service providers to act in an editorial capacity
Absolutely. Sites with user-generated content would be practically unusable without such protections.
However, the first two sub-sections list why the protections are being granted in the first place. When a social media platform's moderation actions are sufficiently opposed to those purposes, then I would argue they are no longer "good faith" moderation decisions per section 230 and thus fall outside of its protection.
The problem that I see with that is that not everyone wants to, or can, be their own service provider, and many would end up with nowhere to post without things in the cloud.
Section 230 is an extraordinary demonstration of Chesterton's Fence.
If we want, as a society, to remove safe harbor provisions from CDNs, we need to openly and widely discuss what the consequences would be:
- end of open social media platforms (Meta properties, TikTok, Tumblr, Twitter and clones)
- end of open content CDNs and streaming platforms (YouTube, Twitch)
- end of public cultural repositories (Internet Archive, possibly Wiki properties)
etc.
The end, that is, in any form we are familiar with.
We can do that; we just need to be clear about why this section has persisted against numerous assaults, even the "think of the children!" type.
Those interested in debate around the nature and limits of "free speech" in an open society take note—what comes after a removal of safe harbor provisions will enrage all of you. We'll have closed-garden, 100% moderated, default-no forums—or end-to-end encrypted darkweb alternatives.
I realize the specific battle is around whether "AI" should get differential treatment.
It's a battle, and a hill to capture, in a broader war, with society-defining scope.
The MSM is doing a sh-tty job around this, on the whole. I just heard an episode of This American Life which sounded as if it were written, more or less, by a thinktank opposed to Section 230, which picked an outlier example of perverse consequences, centered it, and then asked snarky ill-informed questions which amounted to the assertion that "for too long Big Tech companies have exploited their money and power to make themselves uniquely unaccountable."
* Senators Hawley and Blumenthal introduced a bill that would exempt AI from Section 230 protections. There is debate around whether Section 230 currently protects AI output.
* The bill's definition of "generative AI" is extremely broad and could apply to technologies like autocomplete, spellcheck, and grammar check.
* By exempting all conduct involving the use or provision of AI, the bill would effectively eliminate Section 230 protections for most internet companies that utilize any form of AI.
* Plaintiffs could claim that content had some AI component to avoid Section 230 dismissal and drag out cases.
* Companies using AI could lose protections even if they were just following user instructions with their systems.
* The bill creates a loophole for problematic state laws regarding AI and internet liability.
* Removing Section 230 protections would discourage companies from hosting user content with any algorithmic elements and chill innovation.
* Spamming defamatory content combined with unrelated AI could remove protections for the platforms.
* The bill is a "poorly drafted sledgehammer" that would undermine the open internet and hand competitive advantages to other countries.
* While reform may be needed, this is not a narrowly targeted approach and would have significant unintended consequences.
What they should do is give an exception for algorithms whose outputs can be completely explained. (e.g. a dumb summarizer that takes the first sentence of every paragraph and concatenates them would not be considered "AI" for the purposes of the law)
Then we could sit back and watch companies trip over themselves to solve transformer interpretability.
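For illustration, a minimal sketch of the kind of fully explainable summarizer the parent comment describes (the function name and the paragraph/sentence splitting rules are my own assumptions, not anything from the bill or the article). Every output sentence can be traced back to a specific input paragraph and a handful of deterministic rules:

    import re

    def naive_summarize(text: str) -> str:
        """Concatenate the first sentence of every paragraph."""
        # Paragraphs are assumed to be separated by blank lines.
        paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
        first_sentences = []
        for paragraph in paragraphs:
            # Naive rule: a sentence ends at the first ., ! or ? followed by
            # whitespace (or the end of the paragraph).
            match = re.search(r".+?[.!?](?=\s|$)", paragraph, flags=re.DOTALL)
            first_sentences.append(match.group(0) if match else paragraph)
        return " ".join(first_sentences)

    # Example: naive_summarize("First para. More.\n\nSecond para! Extra.")
    # -> "First para. Second para!"

Whether such a carve-out is workable is another question, but the point stands: this output is completely explainable, while a transformer's is not.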
Right, so the algorithm that trains the neural net can be explained (autodifferentiation, backprop, etc.), but the trained net itself is billions of parameters, and we absolutely cannot fully explain how it makes decisions.
We'd be much better off with widespread locally-running generative models, rather than models running on specialized hardware in a Microsoft or Google datacenter accessed by a thin client. Section 230 only protects the output of cloud models - local models don't involve the distribution or publication of potentially-offending material. Removing Section 230 protections in this context, without modifying its applicability in the "discussion forum" context, would lead to better outcomes than leaving it alone.
GENERATIVE ARTIFICIAL INTELLIGENCE.—The term "generative artificial intelligence" means an artificial intelligence system that is capable of generating novel text, video, images, audio, and other media based on prompts or other forms of data provided by a person.
What does novel mean in this context? Does the name of a function already in a codebase count as novel? Or how about a properly spelled word?
I don't think the risk to autocomplete or spellcheck is as bad as this article makes out.
"From there, you now have to somehow distinguish “generative AI output” from “algorithmically generated summaries” and there’s simply no limiting principle here."
Could it be that “generative AI output” is the same thing as “algorithmically generated summaries”? Nah, nobody would be hyping “algorithmically generated summaries” as the Next Big Thing.
The OP doesn't make a very strong case. The bill doesn't really impose any additional responsibilities on social media companies; it just removes a special protection that they arguably never should have had in the first place.
Because otherwise you have to host everything yourself somehow, including stuff like the comment you just posted, because no one would be willing to accept the liability involved.
Why on earth should companies be completely immune for things they ask their users to post? Why should they then be allowed to promote such content and yet not be found to be an editor?
Cubby, Inc. v. CompuServe Inc. which held that a service that lets everything be unmoderated was not liable.
and
Stratton Oakmont, Inc. v. Prodigy Services Co. which held that a service that did moderate was liable for everything that may have slipped through.
The first way of hosting a service leads to sites that have no filtering and are filled with spam and other awful but lawful content with no recourse for users who just want to use the service.
The second way of hosting a service leads to services not allowing you to post anything even remotely possibly offensive or libelous, stifling any real discussion.
Neither of those are good, so lawmakers came up with Section 230 to protect services and keep the liability on the one who posted the content, while still allowing the service to moderate things to make it not a cesspool.
The speech isn't free. It is moderated and editorial decisions are being made. It is just pure upside for the owners of sites that want user generated content because they can afford excellent lobbyists. If you argued that you can post whatever you want and everything will be displayed by time order then sure. But when these places choose particular user generated content to promote then they are no longer operating as a free speech promoter. They are acting as editors.
Without this ability, forums would quickly be overrun with spam, propaganda, and extreme trolling. They would all shut down or have no normal users. This has been tried before and it's unworkable.
Section 230 needs to be gutted such that it separates transmission from content publication. The intention of Section 230 is to isolate delivery mechanisms, for example ISPs and cell towers, such that they cannot become targets of tort claims for the content carried upon them, which remains necessary. Will separating transmission and content with regard to Section 230 harm social media? Absolutely. I am not a huge fan of walled gardens or the preposterous legal shield provided to online content providers versus any other publisher. Commercial entities that profit from walled gardens exist to treat their users as the product, as opposed to profiting from their users' commercial desires.
But but but then social media would have to be moderated... It is already aggressively moderated, such as YouTube taking down terrorism violence videos or removing anti-vaccine content. So, that argument has largely evaporated already.
> The intention of section 230 is to isolate delivery mechanisms
No, it wasn't, it was to enable forum hosts to moderate content without incurring general liability for all content in the forum by so doing. The motivating cases were about ascribing liability to forum hosts differently based on moderation as a trigger for publisher-style (no need to prove knowledge) liability, with unmoderated forum hosts getting only distributor-style (specific knowledge required) liability. It wasn't about ISPs qua ISPs (the CompuServe and Prodigy cases involved entities which were incidentally ISPs, but involved them in their role as forum hosts, not ISPs) or transmission at all.
> But but but then social media would have to be moderated.
No, if it were outside of Section 230, the opposite would be the case. Section 230 is what makes it economically possible for it to be moderated rather than unmoderated, given the threat of general tort liability attaching as soon as any moderation is done. Without Section 230, social media, if it were viable at all, would only be viable unmoderated.
> No, it wasn't, it was to enable forum hosts to moderate content
That is incomplete. The actual reason was to isolate the liability of ISPs that host online forums which receive user submissions and moderate those submissions, such that publication of content on those forums cannot be cause to sue the hosting ISP. The motivation had nothing to do with protecting forum moderators themselves, even if that is the result of the law.
> No, if it was outside of Section 230, the opposite would be the case.
There is no reason to believe that outside of vague speculation, at least in the case of modern social media, because this issue remains untested against modern social media. When looking at other venues of publication, moderation does occur economically without protection from Section 230 or anything equivalent.
> > No, it wasn't, it was to enable forum hosts to moderate content
> That is incomplete.
Well, yes, you cut it in the middle of a sentence, so you have made it incomplete, but your explanation remains wrong.
> The actual reason was to isolate liability of ISPs that host online forums which receives user submissions and moderates those user submissions such that publication of content on those forums cannot be cause to sue the hosting ISP.
No, it wasn't. It had nothing to do with ISPs qua ISPs at all. It has to do with forum operators and their users being immune from being sued based on moderation actions. This is explicit not only in the findings and policy sections and the legislative debates, but also in the operative text of Section 230.
It's not a common carrier rule for ISPs; you seem to be confusing Section 230 with net neutrality.
> The motivation had nothing to do with protecting forum moderators themselves
Yes, it was entirely about that (and about encouraging them to censor content in ways in which the government would not be free to, which is why it was packaged as part of the most extensive and intrusive internet censorship law ever passed by Congress, and is—because it is a liability shield for private censorship and not government censorship like the rest of the Communications Decency Act—the only significant part of that law not struck down for violating the First Amendment).
> There is no reason to believe that outside of vague speculation
No, there is the actual business environment at the time the CDA was being debated, when the impact of the motivating decisions was working through the industry. There are very clear reasons to believe that; those reasons were the actual arguments made in Congress for Section 230, and they are the reason Section 230 exists.
Your narrative is consistent neither with the text of the law, nor with the legislative history, nor with the broader legal and business history surrounding the adoption of Section 230.
It seems to be a narrative constructed around presenting what a lot of people have argued they would like to replace Section 230 with as if it were the original motivation for the bill, which is simply historically unsupportable.
> When looking at other venues of publication moderation does occur economically without protection from section 230
Other venues of publication have scale controlled by things like manufacturing and distribution costs, so the marginal costs of comprehensive review once you are doing any moderation is a small share of total costs. Internet fora have extremely low distribution and manufacturing costs per unit of content, so the marginal cost of comprehensive review over limited moderation is enormous.
The assumption of (and thus imposition of the burden of) comprehensive knowledge of, and liability for, content when an entity takes any moderation steps, which is reasonable with, e.g., print publication, simply is not with interactive computer services.
> Your narrative is consistent neither with the text of the law, nor with the legislative history, nor with the broader legal and business history surrounding the adoption of Section 230.
The law only mentions a content provider. What is that? The law doesn't say. It's broad. It's so vague that it could comprise the ISP, the ethernet cable plugged into your computer, the website, a moderating person, the drafter of a moderation policy, and just about anything else touched by an electron. As such it's impossible to be wrong in interpretation, because all interpretations are allowed as the law is written, which is precisely why everybody online is granted blanket immunity. That is the greatest example of tort reform, as all torts are expressly eliminated to/from all parties. That is absolutely not the intent.
No, actually, the law mentions "interactive computer service" as the protected entity. And it defines the term. Yes, that definition explicitly includes ISPs, since there were rising demands that they offer content filtering which would, under the existing precedent, incur publisher liability, but it also extends beyond it, and the motivating cases were not about ISPs qua ISPs.
It refers to information content providers, but those are the people who are liable, and whose existence distinct from the interactive computer service makes the interactive computer service not liable. It also rather specifically defines that term, as well. See, generally, 47 USC Sec. 230(f)
> As such its impossible to be wrong in interpretation
No, it's not.
> because all interpretations are allowed as the law is written
Well, if you ignore the actual text of the law as badly as you are, I can see where you would get that idea.
> which is precisely why everybody online is granted blanket immunity.
No, everyone online is not granted blanket immunity; people online have successfully been sued for various torts.
> That is the greatest example of tort reform as all torts are expressly eliminated to/from all parties.
That's...not how Section 230 works, in practice, nor is it even consistent with the rest of your argument (which is that it was implicitly through vagueness, not explicitly), which, as already stated, is also wrong.
That is also not correct about interactive computer service. It is the provider or user thereof that is the protected entity. There is no ambiguity on that one. The law is only one small sentence. But in all fairness interactive computer service is pretty vague too.
> That is also not correct about interactive computer service. It is the provider or user thereof that is the protected entity
Yes, that was a bit of simplification, but you are strictly correct.
> The law is only one small sentence.
One? I think you may want to count again. Here is the text of Section 230:
(a) FINDINGS The Congress finds the following:
(1) The rapidly developing array of Internet and other interactive computer services available to individual Americans represent an extraordinary advance in the availability of educational and informational resources to our citizens.
(2) These services offer users a great degree of control over the information that they receive, as well as the potential for even greater control in the future as technology develops.
(3) The Internet and other interactive computer services offer a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity.
(4) The Internet and other interactive computer services have flourished, to the benefit of all Americans, with a minimum of government regulation.
(5) Increasingly Americans are relying on interactive media for a variety of political, educational, cultural, and entertainment services.
(b) POLICY It is the policy of the United States—
(1) to promote the continued development of the Internet and other interactive computer services and other interactive media;
(2) to preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation;
(3) to encourage the development of technologies which maximize user control over what information is received by individuals, families, and schools who use the Internet and other interactive computer services;
(4) to remove disincentives for the development and utilization of blocking and filtering technologies that empower parents to restrict their children’s access to objectionable or inappropriate online material; and
(5) to ensure vigorous enforcement of Federal criminal laws to deter and punish trafficking in obscenity, stalking, and harassment by means of computer.
(c) PROTECTION FOR “GOOD SAMARITAN” BLOCKING AND SCREENING OF OFFENSIVE MATERIAL
(1) TREATMENT OF PUBLISHER OR SPEAKER
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
(2) CIVIL LIABILITY
No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).
(d) OBLIGATIONS OF INTERACTIVE COMPUTER SERVICE
A provider of interactive computer service shall, at the time of entering an agreement with a customer for the provision of interactive computer service and in a manner deemed appropriate by the provider, notify such customer that parental control protections (such as computer hardware, software, or filtering services) are commercially available that may assist the customer in limiting access to material that is harmful to minors. Such notice shall identify, or provide the customer with access to information identifying, current providers of such protections.
(e) EFFECT ON OTHER LAWS
(1) NO EFFECT ON CRIMINAL LAW
Nothing in this section shall be construed to impair the enforcement of section 223 or 231 of this title, chapter 71 (relating to obscenity) or 110 (relating to sexual exploitation of children) of title 18, or any other Federal criminal statute.
(2) NO EFFECT ON INTELLECTUAL PROPERTY LAW
Nothing in this section shall be construed to limit or expand any law pertaining to intellectual property.
(3) STATE LAW
Nothing in this section shall be construed to prevent any State from enforcing any State law that is consistent with this section. No cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section.
(4) NO EFFECT ON COMMUNICATIONS PRIVACY LAW
Nothing in this section shall be construed to limit the application of the Electronic Communications Privacy Act of 1986 or any of the amendments made by such Act, or any similar State law.
(5) NO EFFECT ON SEX TRAFFICKING LAW
Nothing in this section (other than subsection (c)(2)(A)) shall be construed to impair or limit—
(A) any claim in a civil action brought under section 1595 of title 18, if the conduct underlying the claim constitutes a violation of section 1591 of that title;
(B) any charge in a criminal prosecution brought under State law if the conduct underlying the charge would constitute a violation of section 1591 of title 18; or
(C) any charge in a criminal prosecution brought under State law if the conduct underlying the charge would constitute a violation of section 2421A of title 18, and promotion or facilitation of prostitution is illegal in the jurisdiction where the defendant’s promotion or facilitation of prostitution was targeted.
(f) DEFINITIONS As used in this section:
(1) INTERNET
The term “Internet” means the international computer network of both Federal and non-Federal interoperable packet switched data networks.
(2) INTERACTIVE COMPUTER SERVICE
The term “interactive computer service” means any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions.
(3) INFORMATION CONTENT PROVIDER
The term “information content provider” means any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.
(4) ACCESS SOFTWARE PROVIDER
The term “access software provider” means a provider of software (including client or server software), or enabling tools that do any one or more of the following:
The text of Section 230 is so simple. It deliberately targets broader "interactive computer services".
> No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider
In both of those cases the forum operator and ISP were one and the same. The cause for concern then was that the ISP regulated user-submitted content and was thus found liable for content published, because the inference is that the ISP implicitly blessed such content by not removing it in the course of its moderation. This is not a risk to ISPs today or to content transmission generally. The web, as it exists today in common practice, is no different from an offline magazine publishing user-submitted content, because editorial discretion is applied and the magazine is well isolated from the transmission mechanism, such as postal delivery.
Says the person leaving a comment on a forum which would likely not exist due to legal risks without Section 230 protection. :-)
I agree transmission should receive extra protection. But there is no case where moderating every piece of content that flies across the internet is economical or even feasible, given the volume and how insanely difficult it is to create repeatable one-size-fits-all rules. And given the amount of public discourse that exists online now, we would only see a chilling effect from changing these rules. It gets even harder when you think about something like Mastodon.
Is the current state of the world ideal? By no means. It wasn't ideal before either when media and public discourse was controlled and channeled by only a few entities. And we should expect to see evolutions in the future as well - we haven't found the happy place yet.
At the very least, it seems reasonable if large public platforms were required to be more transparent about their moderation efforts and rules so at least we can see what is behind the curtain. The lack thereof has created a lot of distrust.
> would likely not exist due to legal risks without Section 230 protection. :-)
There is no reason to believe that, because there is no precedent either way for online social media, as social media did not exist before the passage of the law. Looking at offline media that publish submissions from users, this opinion is absolutely false.
Offline media curate user submissions. They don't just blindly publish anything sent to them. No magazine is accidentally publishing CSAM because someone sent it in.
In any case, we can, in fact, believe it. The entire point of the statute is to cover user-generated content, which is exactly what social media is.
The number of things that I should be concerned about is exactly 100% greater than the number of things over which I have any influence, so please excuse me if I refuse to care. I have ONE ballot to cast every FOUR years, and the only constant is that the government keeps doing things I don't want them to do, regardless of who is in which office, and I'm tired of pretending any differently.
Well according to that math you have influence over half of the things you should be concerned about, which is not bad at all! So get on changing the world! :D
You should be voting every year, not once every four. At minimum once every other year, when you account for congressional terms being offset by two years from presidential ones.
You should also probably be voting in local elections and for local issues, nearly every year in most municipalities.
That's not to mention the various party primaries you're missing.
So maybe stop complaining until you stop exercising only a small fraction of your voting power.
So do you, who exercise the full force of your voting power, feel like you have any influence over our country's use of surveillance powers? Any influence over the power to regulate investment banks and private equity from destroying the last vestiges of the US middle class, or the American dream? Any influence over weaponizing the world in every armed conflict around the globe? Any influence over illegal immigration, drugs, or crime? Because I don't see how either party is materially any different in any of these cases, and I absolutely see and understand how they've rigged the system to prevent anyone from disrupting their status quo. If you do, then good on you.
Section 230 should not apply to AI; I don't care what any "techbro" thinks. It was not meant for AI, and if there should be any protection for AI, it should be drafted explicitly for it.
The problem is that that isn't what this bill is about. We can have a reasonable discussion on whether an AI-written article or tweet or whatever should have the same Section 230 protections, but even things that are human-written and do not have their content changed in a material way, and instead are just spellchecked or similar, also fall under this proposed legislation.
Removal of Section 230 immunity is not the same thing as creating or imposing actual liability.
Even if the scope of the exemption is broad, from someone's standpoint, the backend liability is likely to do a fair amount of work in whether a generative AI company could be liable for a particular cause of action.
This is demonstrably true: using that example, Section 230 does not protect text editor software doing spellchecking. However, you don't see a lot of (any?) claims against text editor developers, and there's not really any chill in the development of spellchecking over fear of lawsuits.
> Section 230 does not protect text editor software doing spellchecking.
That's because it is pretty clear that it is the user who is publishing the content.
If someone wants to make a law that says that all generative AI content is created by the user, and therefore only the user is liable for it, and therefore it's not related to Section 230, then great, that would be an amazing law.
Only someone publishing the generated content should be liable, and then we could stop worrying about all this Section 230 stuff by just giving complete immunity to the tools' creators, just like how Photoshop isn't liable for the stuff that is built using Photoshop.
My text editor is not covered under 230 regardless of whether or not it has spellchecking because my text editor is not an online platform. It's a text editor.
Both camps have coexisted for some time. Really there's more like a spectrum of opinions. Different opinions get varying levels of traction in media over time in a way that's somewhat noisy. The trends show up longer term. I don't think you can infer a shift in overall opinion from a couple weeks of news and discussion.