> It would have mandated that platforms like Instagram and TikTok address online dangers affecting children through design changes and allowing young users to opt out of algorithmic recommendations.
> A central – and controversial – component of the bill was its “duty of care” clause, which declared that companies have “a duty to act in the best interests of minors using their platforms” and would be open to interpretation by regulators.
[…]
> Sensitive but important topics such as gun violence and racial justice could be viewed as potentially harmful and subsequently be filtered out by the companies themselves. These censorship concerns were particularly pronounced for the LGBTQ+ community, which, opponents of Kosa said, could be disproportionately affected by conservative regulators, reducing access to vital resources.
Sounds like it was an internet censorship bill. But lawmakers accidentally included some anti-algorithmic profiling stuff which would annoy their bosses, corporate lobbyists. I guess we’ll see a new version that only tramples the rights of normal people eventually.
I'll give you that it's not a direct line. But I won't upload a government ID to a social platform. If everyone had to, it would have a chilling effect across the entire internet. It would be one of the biggest censorship moves a government has ever taken.
Now where is this centralized database and who has control? How much does it cost for businesses or people to access it and how is their access limited? Are the identification requests being stored by any party?
There are many ways in which this ideal is not in line with a free society.
I understand your reasoning, but on the other hand it seems like any attempt to rein in big tech ends up in "all roads lead to censorship/free speech infringement" territory.
So there's really no way to rein in big tech on any of this?
Sure there are. You could require that a variety of algorithms be available for users to choose from; having some control to fully opt out, or to tune the feed the way you want, would be nice. You could regulate the removal of Section 230 protections for platforms that use certain types of algorithms or fail to release reports on specific transparency metrics. You could require that accounts marked as children upon creation be separated out from the main servers into their own playground. There are many options besides users showing papers whenever they want to participate in speech on the internet. The issue is nuance, and that never plays well in legislation.
Marked as children in the same way they are today. I'm assuming parental care here: don't knowingly allow children into your bar. If a parent marks an account as a child's during its creation, it gets segregated. YouTube has YouTube Kids. It's not a requirement, but they try to make a safer environment for those who aren't mature enough, or able, to handle the risks of full YouTube. It's not perfect, right? But I like that direction.