
> It's clear OpenAI is a hype company

Every other industry: "My new invention is safe, I swear"

Public reaction: "You're biased, it's dangerous!"

Almost the entire AI industry, including people who resign to speak more openly about the risks: "This may kill literally everyone, none of us knows what we're doing or what 'safe' even means"

Public reaction: "You're biased, it's safe, just saying that to look cool!"



I found this guy's take on the AI safety scene to be quite insightful.

In summary, he feels the focus on sci-fi-style existential risk is a deliberate distraction from the AI industry's current and real legal and ethical harms: e.g. scraping copyrighted content for training without paying or crediting creators, failing to protect those affected by tools misused to create deepfake porn, the crashes and deaths attributed to Tesla's self-driving mode, AI resume-screening bots messing up, etc.

https://www.youtube.com/watch?v=YsLf4lAG0xQ


It's possible for current harms and future risks to both be real. It's also possible for human civilization to address more than one problem at a time. "You care about X but that's just a distraction from the thing I care about which is Y" is not really a good argument. I could just as well say that copyright concerns are just a distraction from the risk that AI could kill us all.

And it seems to me that if the AI industry wanted to distract us from harms, they would give us optimistic scenarios. "Sure these are problems but it will be worth it because AI will give us utopia." That would be an argument for pushing forward with AI.

Instead we're getting "oh, you may think we have problems now but that's nothing, a few years from now it's going to kill us all." Um, ok, I guess full steam ahead then? If this is a marketing campaign, it's the worst one in history.


The industry isn't talking up doom to distract from harm or to shake pursuers off its tail. Whoever comes next will have to bear huge costs getting over the resulting insane regulatory requirements. The more politicians are involved in the process, the more secure the initial investments become.


> And it seems to me that if the AI industry wanted to distract us from harms, they would give us optimistic scenarios.

Nah it has to appear plausible.


People are very good at promising a better future in a non-specific way and without much evidence. That's kinda how Brexit happened.

It's when you get the specific details of a utopia that you upset people — for example, every time I see anti-aging discussed here, there's a bunch of people for whom that is a horror story. I can't imagine being them, and they can't imagine being me.


Only the last one is actually bad in any way, and even then it should be in the interest of the company using it to fix it promptly.


Deaths in car crashes and copyright laundering by big corporations are not bad in any way at all?


I would say that car crashes are bad, even though they already happen, and the motivation behind self-driving AI is to reduce them by being less bad than a human driver.

I think it is a mistake to trust first-party statistics on the quality of the AI, and the lack of a licence for Level 5 autonomy suggests the US government is unsatisfied with the quality as well. But in principle this should be a benefit, when it actually works.

Copyright is an appalling mess, and has been my whole life. But no, the economic threat to small copyright holders, individual artists and musicians, is already present: a globalised economy massively increases competition, and the resulting artefacts can be trivially reproduced. What AI does here needs consideration, but I have yet to be convinced by an argument that what it does in this case is bad.

All these things will likely see a return to, or an increase in, patronage, at least for those arts where the point is to show off your wealth or taste. The alternative is where people just want nice stuff, for which mass production has prompted the same argument since Jacquard was finding his looms smashed by artisans who feared for their income.


20 to 30 years ago, activists firebombed university research labs (e.g. Michigan State University, University of Washington, Michigan Technological University [1]) because they believed genetically engineered plants were dangerous. Today, we don't have such serious activism against AI. So you are right: the public doesn't think AI is a danger.

[1] https://en.wikipedia.org/wiki/Earth_Liberation_Front#Notable...


Reminds me: I'd rather have seen VCs fund more genetic-engineering startups. Imagine the good it could do, from stem cells to nanobots to “hacking” human DNA itself. But I know the business model there can't compete with software, so it will never reach the funding it needs.


> This may kill literally everyone

It's indeed hard to take such gross exaggeration seriously. Even the deadliest plagues didn't kill everyone, so arguing that this is a likely outcome of creating spam generators is laughable.

This is more likely a strategy, common in academia, of aggrandizing results (here, risks) so that more eyeballs, attention and money are diverted towards the field and its proponents.


> Even the deadliest plagues didn't kill everyone...

That is logically flawed; the species that were killed off by plagues aren't around to say so. Every species exists in a state of "the deadliest plagues [we've experienced so far] didn't kill everyone". You can say that about literally every threat: we know we have overcome everything thrown at us so far because we are still here. That will continue to be the case for everything humanity ever faces, except for one thing (we aren't certain what yet).

But we know that species go extinct from time to time, so "we've overcome things in the past, ergo we are safe" doesn't work for ruling out even many well-known threats. Let alone systems that can outplan us; we've never faced inhuman competitors that can strategise more effectively than a human.


> arguing that this is a likely outcome of creating spam generators is laughable

They're used as spam generators because they're cheap.

The quality in many fields is currently comparable to someone in the middle of a degree in that field, which makes the quoted comparison a bit like the time Pierre Curie strapped a lump of radium to his arm for ten hours to see what it would do. I can imagine him reacting: "What's that you say? A small lump of rock in a test tube might give me aplastic anemia*? The idea is laughable!", except he'd have said it in French.

Even the limits of current models, even if we are using those models to their greatest potential (we're probably not), aren't a safety guarantee: there is no upper bound to how much harm can be done by putting an idiot in charge of things, and the Peter Principle applies to AI as well as to humans, as we're already seeing AI used for tasks it is inadequate to perform.

* He was killed by a horse-drawn cart; Marie Curie developed aplastic anemia, and he likely would have too if the other accident hadn't got him first.

Bonus irony: the general idea he had in this regard, using radiation to treat cancer, is correct and currently in use. They just didn't know anywhere near enough to do it safely at the time.


> They're used as spam generators because they're cheap.

No, the LLMs of the current AI fad are text generators. Very good, but nothing more than that.

> there is no upper bound to how much harm can be done by putting an idiot in charge of things

Which is not an AI problem. An AI may kill people indirectly in a setup like an emergency-services chatbot where a bad decision is taken, but it certainly couldn't roam the street with a Kalashnikov killing people randomly or stabbing children (and if that ever happens, politicians will say it has nothing to do with AI). The proponents of "AI can kill us all" can't write a single likely and non-contrived example of how that could happen.


> No, the LLMs of the current AI fad are text generators. Very good, but nothing more than that.

That doesn't address the point, and is also false.

Transformers are token generators, which means they can also handle images, sound, and DNA sequences.

But even if they were just text, source code is "just text", laws are "just text", contract documents are "just text".

They have been used to control robots, both as input and output.
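
To make the "token generator" point concrete, here is a toy sketch (everything in it is made up for illustration; the tokenizers are simplistic and next_token is a stand-in for a real transformer's learned distribution over a vocabulary, not any real library's API). The point is that the model only ever sees integer IDs, so text, DNA, or anything else you can tokenize goes through the same interface.

    # Toy sketch: different modalities reduce to the same integer-token interface.

    def tokenize_text(s: str) -> list[int]:
        # Byte-level toy tokenizer: one token per byte (IDs 0-255).
        return list(s.encode("utf-8"))

    def tokenize_dna(seq: str) -> list[int]:
        # Toy DNA tokenizer: one token per base, offset past the byte range.
        return [256 + "ACGT".index(base) for base in seq]

    def next_token(context: list[int]) -> int:
        # Stand-in for a real transformer forward pass, which would return
        # a probability distribution over the whole vocabulary.
        return context[-1] if context else 0

    prompt = tokenize_text("hello ") + tokenize_dna("GATTACA")
    print(next_token(prompt))  # the model only ever saw integers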

> Which is not an AI problem

"Good news, at least 3,787 have died and it might be as bad as 16,000!"

"How is that good news?"

"We're an AI company, and it was our AI which designed and ran the pesticide plant that exploded in a direct duplication of everything that went wrong at Bohpal."

"Again, how is this good news?"

"We can blame the customer for using our product wrong, not our fault, yay!"

"I'm sure the victims and their family will be thrilled to learn this."

> it certainly couldn't roam the street with a Kalashnikov killing people randomly or stabbing children

It can when it's put in charge of a robot body.

There's multiple companies demonstrating this already.

Pretending that AI can't be used to control robots is like saying that nothing that happens on the internet has any impact on real life.

Fortunately the AIs which have been given control of robot bodies so far aren't doing that. Want to risk your life with the humanoid-robot equivalent of the Uber self-driving car?

> The proponents of "AI can kill us all" can't write a single likely and non-contrived example of how that could happen.

Any example that didn't sound contrived would describe something we could trivially prevent.

It's not as though "dig up all the fossil fuels and burn them despite public protest about climate change and the existence of alternatives, and sue the protesters with SLAPP suits so we can keep doing it, because it's inconvenient to believe the science and even if we did the consequences wouldn't affect us personally" doesn't sound contrived.

And that's with humans making the decisions, humans whose grandkids would be affected.


It's quite common for new species to kill off old species. We ourselves have obliterated many species that we outcompeted for resources.


As if software is the same thing as a new biological species.

I am just so bored of reading bullshit like this.

If you really believe this then you need to level up your education and learning. It is not good.


> As if software is the same thing as a new biological species.

The other poster didn't claim they were.

They don't need to be.

They don't even need to be given control of robotic bodies, though they already are.

What they do need to be, is competing for the same resources.

And there's plenty of examples of corporations doing things that are bad for humans in the long-term because they are good for short-term shareholder value. And filing SLAPP suits against any activist trying to stop them.


> If you really believe this then you need to level up your education and learning. It is not good.

How does your level of education and learning compare to Nobel Prize winner Dr. Geoffrey Hinton (a godfather of deep learning, 10%-50% chance that AI will kill everyone), Dr. Dan Hendrycks (GELU inventor, >80%), Dr. Jan Leike (DeepMind, OpenAI, 10%-90%), Dr. Paul Christiano (OpenAI, Time 100 2023, UK Frontier Taskforce advisory board, 46%), etc.?



