Yes, this has been my experience in my stint running an after school program. It’s an unfortunate reality that must be accepted in order to have sane policy.
Sure, but this sounds exactly like the original definition of jihad, or even "from the river to the sea", yet people get very upset if you suggest that chanting it means they want to commit genocide. I don't think an argument over the meaning of ancient words is relevant or helpful here.
If Bluesky continues to grow, the porn and bots will arrive shortly, and presumably it will have fewer resources to combat them than the much better funded X. Or does something in the design help here?
Regardless, I think this is more the new home effect. When you move into a new house, it is clean and you design it with intention. Over the years, it gets dirty and you start to hate the art you liked (people you followed) years ago.
Significantly less of a problem on Bluesky than on Twitter. First, there are mass bans, which scale well (one can ban entire lists of bots). Second, there is no gamed algorithm that biases engagement towards the ideology of the platform's owner... In other words, you have significantly more control over the content.
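To make the scaling point concrete, here is a minimal sketch of list-based muting; the list contents and post shape are invented for illustration, not Bluesky's actual API:

```python
# Minimal sketch of list-based moderation: one subscription mutes many
# accounts at once, so the per-user effort stays constant as bot networks
# grow. The data shapes here are hypothetical, not Bluesky's actual API.

# Moderation lists you subscribe to, each maintained by someone you trust.
subscribed_mute_lists = {
    "spam-bots": {"bot123.example", "bot456.example"},
    "reply-guys": {"noise789.example"},
}

def muted_accounts(lists):
    """Union of every subscribed list: one set-membership test per post."""
    muted = set()
    for members in lists.values():
        muted |= members
    return muted

def filter_feed(posts, lists):
    """Drop posts authored by anyone on a subscribed mute list."""
    muted = muted_accounts(lists)
    return [p for p in posts if p["author"] not in muted]

feed = [
    {"author": "alice.example", "text": "interesting post"},
    {"author": "bot123.example", "text": "buy crypto now"},
]
print(filter_feed(feed, subscribed_mute_lists))
# -> only alice.example's post survives
```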
Valid points; time will tell how well this works. For the ban lists, there are directories, typically updated by people you trust or who are in the circles you care about. One can indeed imagine manipulating ban lists. I'm personally not too worried about that, because the accounts that post "reasonable", useful, or interesting content are typically very few; most of the other "organic" accounts are just readers. The readers tend to prefer a good signal-to-noise ratio, and in my experience the noise part of the equation is the problem. The goal is trimming down all the accounts whose purpose is just to insult, start flame wars, yell, or post garbage.
We looked into that just before the big migration wave, when bsky was ~5M (https://dl.acm.org/doi/10.1145/3646547.3688407) and there is plenty of growth in terms of the number of labels/feeds, the posts that are labelled, and the popularity of the feeds. So while the default option is likely to matter lots, opening up content recommendation/moderation is having an effect.
Not sure I agree with your numbers, but I also never see a single political post, and everyone seems to say that's all that exists, so it might be specific to how you use it or which feed tab you frequent.
I think the most distinctive difference is the filtering of the firehose.
The firehose is what Bluesky calls the totality of the network, regardless of who posted it, what it is, or why it exists.
The largest threat to the network is a DDoS of the firehose.
Compare that to X, which is intentionally NOT a firehose, as is widely documented by anyone who gets banned because Elon personally beefs with them.
So yeah, the fundamental design is what attracts people, and why everything that's tried to compete with existing networks "mano a mano" has failed: networks today generally boil down to dictatorships.
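For concreteness, here is roughly what consuming and filtering the firehose looks like via Jetstream, Bluesky's JSON mirror of it; the endpoint and message shape are written from memory, so treat them as assumptions:

```python
# Rough sketch: consume Bluesky's public firehose (via the Jetstream JSON
# mirror) and apply your own filter client-side. Endpoint and message shape
# are from memory and may have drifted; treat them as assumptions.
import asyncio
import json

import websockets  # pip install websockets

JETSTREAM = (
    "wss://jetstream2.us-east.bsky.network/subscribe"
    "?wantedCollections=app.bsky.feed.post"
)

BLOCKED_WORDS = {"crypto giveaway", "click here"}  # your filter, your rules

async def filtered_firehose():
    async with websockets.connect(JETSTREAM) as ws:
        async for raw in ws:
            event = json.loads(raw)
            record = event.get("commit", {}).get("record", {})
            text = record.get("text", "")
            if any(w in text.lower() for w in BLOCKED_WORDS):
                continue  # drop it; nobody can stop you filtering your own view
            print(f'{event.get("did", "?")}: {text[:80]}')

asyncio.run(filtered_firehose())
```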
Does anyone have any suggestions for how to design a Twitter clone that can prevent this at any cost? It seems like a pervasive issue in every single social network.
Half-baked thought: what about a LinkedIn-style connection graph where your feed is only populated by those in your network, and it is trivial to see, and disconnect from, whoever introduced the bot into your network? Would a community self-regulate if bots became actually contagious?
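Something like this, maybe; every name and structure here is hypothetical, just to make the half-baked thought concrete:

```python
# Half-baked sketch of the idea above: every account remembers who
# introduced it, so when a bot is found you can trace, and sever, the
# branch that let it in. All names and structures here are hypothetical.
from collections import defaultdict

introduced_by = {}                 # account -> who vouched for it
introductions = defaultdict(set)   # account -> everyone it vouched for

def connect(sponsor, newcomer):
    """Join the network only via an existing member."""
    introduced_by[newcomer] = sponsor
    introductions[sponsor].add(newcomer)

def introduction_chain(account):
    """Walk back to the root: who let this account in, and who let them in."""
    chain = []
    while account in introduced_by:
        account = introduced_by[account]
        chain.append(account)
    return chain

def sever(sponsor):
    """Disconnect a sponsor and, transitively, everyone they introduced."""
    removed = {sponsor}
    stack = [sponsor]
    while stack:
        for child in introductions.pop(stack.pop(), set()):
            removed.add(child)
            stack.append(child)
    return removed

connect("alice", "bob")
connect("bob", "spambot")
print(introduction_chain("spambot"))  # ['bob', 'alice'] -> bob vouched for it
print(sever("bob"))                   # bob and his whole subtree go
```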
The bots pay monthly for a blue checkmark so that they're promoted to the top of the replies. A small cost does not appear to have been a meaningful disincentive, especially when scammers can easily make that money back and political operatives can easily bankroll that.
$5 is enough for a human to glance at the feed and mute it if it seems bad. The trouble with $0 is you can create a million bots and no one has time to look at them.
I wouldn't mind something like that. When I joined WhatsApp, it cost $1 for life, and I thought that was a good system. Sadly that's gone, replaced by free but ad-funded.
Regular old React without server components has been able to do this since long before Svelte existed. Next.js has done it since day 1. Server components solve for something else.
> I can't avoid feeling that the new direction taken by Next.js is not designed to help developers, but to help Vercel sell React. You can't really sell a service around SPAs: once compiled, a SPA is a single JS file that can be hosted for free anywhere. But a server-side rendered app needs a server to run. And a server is a product that can be sold. Perhaps I'm a conspiracy theorist, but I don't see another reason to break the React ecosystem like this.
Feels like this inevitably ends in a fork. It was a hell of a run though!
Vercel has raised an enormous amount of money, so they are stuck either making bizarre choices like this or brazenly selling other tech at higher prices. That is why pg recommends not raising a huge round.
It's too coincidental for pilot fatigue/shadows AND instrument malfunction to happen at the same time in every one of these cases. There are hundreds of reported instances in the past few years. This is simply not a serious explanation.
Strongly disagree. The sheer number of flight hours flown by all the world's professional pilots, multiplied by the average fraction of a flight during which a pilot could be considered "fatigued", multiplied by the odds of a cosmetic/minor sensor blip occurring, is still an astronomically large number. That confluence of events probably happens quite regularly. This can be anecdotally verified by hanging out at any general aviation flight club and asking pilots about the times they got temporarily confused by some aerial phenomenon that turned out to be a strange reflection off a cloud. Happens literally all the time.
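A back-of-the-envelope version of that multiplication, where every number is an illustrative guess rather than a sourced statistic:

```python
# Back-of-the-envelope version of the argument above. Every number here is
# an assumption for illustration, not a sourced statistic.
flight_hours_per_year = 70_000_000   # rough order of worldwide flight hours
fatigued_fraction = 0.10             # say pilots are fatigued 10% of the time
blip_rate_per_hour = 1 / 10_000      # minor sensor blip once per 10k hours

confluences_per_year = (
    flight_hours_per_year * fatigued_fraction * blip_rate_per_hour
)
print(confluences_per_year)  # -> 700.0 fatigued-pilot-plus-blip events a year
```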
Humans also don't have 8 eyes facing every direction at all times. They also get drunk/tired/impatient/angry, etc. The reality is that the entire argument is silly: the two are very different, and the Musk/Karpathy argument is misrepresented here. Saying humans only use vision was a response to "it's not possible with only vision", not a statement that human vision is good enough and there's no need to do better. The 8-camera surround is leaps better than human vision. Where the cars lack is processing the signal; the human brain does that better. But if you have better inputs (we already do) and you believe you can one day match on the processing part, you'll one day get a much better result: one that's suited to the vision-based roads we have now and scales to literally anywhere, not geo-constrained like Waymo.
Indeed, but humans also have an incentive to drive well, embodied by local traffic police and local laws, and even before passing their driving test they're made aware of the penalties for not driving well (which, let's remind ourselves, range from "mild ticking off"/"pay $$$" through "forfeit driving licence for a time" all the way to "forfeit liberty for a time")
Where are these incentives for self-driving algorithms?
If your algo breaks the law to a sufficient level, is someone (something?) prevented from driving for a time? Is that really going to be just that one vehicle, or should it be all vehicles with that same algo? If something really bad happens, who is charged; in the worst case, who might end up going to jail?
We all know CEOs tend to believe "this time it's different", that they're special, and that the annoying rulebook is to be viewed as guidance at best. VW/Martin Winterkorn, anyone?
> Where are these incentives for self-driving algorithms?
Surely the equivalent is the reward during training?
> If your algo breaks the law to a sufficient level, is someone (something?) prevented from driving for a time? Is that really going to be just that one vehicle, or should it be all vehicles with that same algo? If something really bad happens, who is charged; in the worst case, who might end up going to jail?
Personal opinion:
The algorithm should learn from the fleet and be shared by the fleet; therefore all accidents should be treated like aircraft crashes and investigated extremely thoroughly, with the goal of eliminating the root cause.
If that cause was a CEO demanding corners be cut to boost shareholder value, then jail them; if it's that the algorithm had, say, never seen a flying shark drone[0] before, misclassified it as something it needed to take evasive manoeuvres to avoid, and that led to a crash, then perhaps not (except anything I suggest probably should be in their list of things to check for, so even then perhaps it would still be a CEO-at-fault example…)
> Surely the equivalent is the reward during training?
Surely the counter-example is when a self-driving vehicle drives straight into a stationary fire truck?[0]
If a human driver did this more than once (and lived to tell the tale!) yet had no explanation other than "Of course I saw it, but I wasn't sure what it was and didn't realise I needed to avoid hitting it <shrug>", wouldn't they lose their driving licence fairly quickly?
You asked for the incentives for AI; the equivalent isn't the same as for humans.
The nature of the AI doesn't include a concept of prison or licensing, so it can't be threatened with it, for the same reason I can't threaten a human driver with Af'nek-leigh D'Och entRah'negh.
I can however 'punish' (air-quotes necessary because it might not feel like anything) an AI by altering the weights and biases of its network — once done, it then thinks differently.
Don't anthropomorphise it, that's a category error.
Also, the field of "how does it even?" is tiny, which is itself a reason to not grant them control of vehicles, but that's a separate issue.
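For the avoidance of doubt about what "altering the weights" looks like, a toy sketch with a single made-up scalar weight; nothing like a real driving stack, just the mechanism:

```python
# Toy illustration of "punishment by weight update": the network is nudged
# so the action that caused the crash becomes less likely next time. A bare
# one-weight model, purely to show the mechanism; no real driving stack
# works at this scale.

weight = 0.8  # current tendency to choose "keep going" near an obstacle

def action_score(w, obstacle_ahead):
    """Higher score -> more likely to keep going."""
    return w if obstacle_ahead else 0.0

def punish(w, learning_rate=0.5):
    """After a crash, shift the weight against the action taken."""
    return w - learning_rate * w  # gradient-style step away from the behaviour

print(action_score(weight, obstacle_ahead=True))  # 0.8: drives at the truck
weight = punish(weight)
print(action_score(weight, obstacle_ahead=True))  # 0.4: "thinks differently"
```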
> You asked for the incentives for AI; the equivalent isn't the same as for humans. The nature of the AI doesn't include a concept of prison or licensing, so it can't be threatened with it [..]
There certainly should be incentives for the humans creating an AI, though.
> Don't anthropomorphise it, that's a category error.
Volkswagen [human!] engineers created the illegal defeat devices in Dieselgate, under the supervision of their [human!] managers. The device is illegal; we punish the humans in charge when laws are broken, not the devices themselves. It should be the same with AI.
If this means software engineering becomes a field where you need mandatory liability insurance to work on AI, is that a bad thing?
In the glorious words of Stelios Haji-Ioannou, "If you think safety is expensive, try [having] an accident"
A camera that is actually better than the human eye is pretty difficult to find; they cost around ~$2,000 each, and even then you'll have worse peak resolution in the day and worse motion characteristics at night. Human eyes are pretty good!