
That's an argument from authority fallacy. It doesn't matter how many citations you have; you either have the arguments for your position or you don't. In this particular context, ML as a field looked completely different even a few years ago, and the most cited people were the ones able to come up with new architectures, training regimes, loss functions, etc. But those things don't tell you anything about the societal dangers of the technology. Car mechanics can't solve your car-centric urbanism or traffic jams.


In many ways, we're effectively discussing the accuracy with which the engineers of the Gutenberg printing press could predict the future of literature.


The printing press did end the middle ages. It was an eschatological invention, using the weaker definition of the term.


Right, but the question is whether the engineers who intimately understood the function of the press itself were the experts who should have been looked to in predicting the sociopolitical impacts of the machine and the ways in which it would transform the media with which it engaged.

I'm not always that impressed by the discussion of AI or LLMs from engineers who indisputably have great things to say about how the systems operate, once they step outside their lane to predict broader impacts or how recursive content refinement is going to manifest over the next decade.


The question is whether the machine will explode, not what its societal impacts are; that's where the miscommunication is. Existential risks are not societal impacts, they are a detonation probability.


Not really. So much regarding that topic depends on what's actually modeled in the training data, not how it is being trained on that data.

They aren't experts on what's encoded in the training data, as the last three years have made abundantly clear.


That's exactly what I am saying. Since humanity has to bet the lives of our children on something very new and unpredictable, I would bet mine on the top 3 scientists and not your opinion. Sorry. They must by definition make better predictions than you and me.


Would you bet that Oppenheimer would, by definition, have made better predictions about how the bomb was going to change the future of war than someone who understood the scientists' summary of the bomb's technical effects, but had also studied and researched how geopolitical diplomacy shifts in response to advances in war technology?

There's more to predicting the impact and evolution of technology than simply the mechanics of how it is being built today and will be built tomorrow (the area of expertise where they are more likely to be accurate).

And keep in mind that Hinton's alarm was sparked by the fact that he was wrong about how the technology was developing: seeing an LLM explain a joke it had never seen before, a capability he specifically hadn't thought it would develop. So it was his failure to predict how the tech would develop that caused him to start warning about how the tech might develop.

Maybe we should take those warnings with the grain of salt they are due, coming as they do from experts who were broadly wrong about what would be possible in the near future, let alone the far future. It took everyone by surprise, so there is no shame in being wrong. But these aren't exactly AI prophets with a stellar track record of prediction, even if they have a stellar track record of research and development.


We disagree on what the question is. If we are talking about whether an atomic bomb could ignite the atmosphere, I would ask Oppenheimer and not a politician or sociologist. If we don't agree on the nature of the question, it's impossible to have a discourse. It seems to me that you are confusing x-risk with societal downsides. I, and they, are talking about extinction risks. That has nothing to do with society. Arms, bioweapons, and hacking have nothing to do with sociologists.


And how do you think extinction risk from AI can come about? In a self-contained bubble?

The idea that AGI poses an extinction risk like an atomic chain reaction igniting the atmosphere, as opposed to posing a risk more like multiple nation states pointing nukes at each other in a chain reaction of retaliation, is borderline laughable.

The only way in which AGI poses risk is in its interactions with other systems and infrastructure, which is where knowledge of how an AGI is built is far less relevant than other sources of knowledge.

An AGI existing in an air-gapped system that no one interacts with can and will never bring about any harm at all, and I seriously doubt any self-respecting scientist would argue differently.


There are very many books and articles on the subject. It's like me asking, "wtf, gravity bends time? That's ridiculous lol." But science doesn't work that way. If you want, you can read the bibliography. If not, you can keep arguing like this.


IMHO, the impact of the printing press was much more in advertising than in literature.

Although that is by no means the official position.

(The same can be argued for radio/TV/the internet - "content" is what people talk about, but advertising is what moves the money.)


The printing press's impact was in ending the Catholic Church's monopoly over information, and thereby "the truth". It took 400 years for that process to take place.

The Gutenberg Era lasted all the way from its invention to (I'd say) the proliferation of radio stations.


Yes, very good! All the more so given that today's is a machine that potentially gains its own autonomy - that is, has a say in its own future and ours. And all the more so given that this autonomy is quite likely not human in its thinking.


> That's an argument from authority fallacy.

Right. We should develop all arguments from commonly agreed basic principles in every discussion. Or you could accept that some of these people have a better understanding and did put forth some arguments, and that it's your turn to rebut those arguments, or to point to arguments that do. Otherwise, you'll have to find somebody to trust.


It's not about societal changes. It's about calculating the risk of an invention. Let me give you an example:

Who do you think can better estimate the risk of an engine fire in a Red Bull F1 car: the chief engineer or Max the driver? It is obviously the creator. And we are talking about the safety of an invention here. VCs and other "tech gurus" cannot comprehend exactly how the system works. Actually, the problem is that they think they know how it works, when the people who created it say there is no way for us to know and that these models are black boxes.


But Bayesian priors also have to be adjusted when you know there's a profit motive. With a lot of money at stake, the people seeing $$$ from AI have an incentive to develop, focus on, and advance arguments that the risk is low. No argument is total; what aspects are they cherry-picking?

I trust AI VCs to make good arguments less than I trust AI researchers.


Maybe, but I'd say it was also an argument from people who know their stuff vs. a biased idiot.



