Hacker News

For all the criticisms of Musk, and recent wackiness, it's moments like these that I thank our lucky stars that there are people like him on this earth. It just seems like these days there are so few people thinking about the long-term future of the human race.

I love it that his solution to the "AI terminator" problem is making a brain-computer interface so that we can have a fighting chance when AI takes off.

I love it that he wants to help us kick our addiction to non-renewable fossil fuels.

I love it that he wants to make us a two-planet race so that we don't have all our humanity eggs in one planetary basket.

Thank you Elon.



There are, and have been, many many people working on all of these problems long before Elon Musk (the tech announced today at Neuralink is entirely built on work that was done by people at UCSF and UC Berkeley, and even that is an iteration on technology that was developed by scientists over the past several decades). Neuralink the company was founded by eight people other than Musk. It's a huge disservice to all of these people to give Musk the credit for their work.

Musk sits in a weird position where his unique blend of controversy keeps him in the headlines and ensures he gets linked to these technologies, but that does not mean he is responsible for nor deserves the credit. It could be argued that his "ability" to constantly land in the limelight draws more attention (and thus progress) to these issues, but others would argue that we would be even further along if not for the constant controversy he creates.


One speaker is a professor from UCSF who studied the brain's processing of motor signals. He explicitly credited Musk for having the right vision and the long-term planning, and that's why he left his position after 16 years to come work with Neuralink.

No one credits Musk for solving bugs in the software on his products, or for creating these brain-computer interfaces. But he can assemble the team to do it, and motivate them to keep moving and progressing on pretty aggressive schedules. And he frequently gives credit to his team (and doesn't sit there patting himself on the back).


> And he frequently gives credit to his team (and doesn't sit there patting himself on the back).

Yes, the list of authors on the whitepaper they released is "Elon Musk & Neuralink". I guess his team should be thankful.

https://www.documentcloud.org/documents/6204648-Neuralink-Wh...


As the other commenters said, that's not a research paper. Here is a link to an actual research paper where (at least some of) the authors work for Neuralink.

https://www.cell.com/neuron/fulltext/S0896-6273(18)30993-0


That's a white paper, not the research paper (which is what people will read). You could also read it the other way around: he wanted to credit the entire Neuralink team without claiming to be part of it or leading it.


It would have read that way if the author list had simply said 'Neuralink'. He is definitely positioning himself as the leader here.


bioRxiv required at least one human author. We suggested this author list to him and, honestly, we just think it's awesome.


Actually, the leader is usually the last author. The first author is usually the student doing the gruntwork.


Fred Wilson has a blog post where he outlines the role of a CEO like this:

>A CEO does only three things. Sets the overall vision and strategy of the company and communicates it to all stakeholders. Recruits, hires, and retains the very best talent for the company. Makes sure there is always enough cash in the bank.

Based on everything we saw in the Neuralink livestream, it seems like Elon is nailing all three of these. That doesn't mean he deserves all the credit, but it means he's doing his job.


> He explicitly credited Musk for having the right vision and the long term planning, and that's why he left his position after 16 years to come work with Neuralink.

How do we know that's why, vs. his estimation of Musk being the right kind of showman to get a lot of investment?


> How do we know that's why, vs. his estimation of Musk being the right kind of showman to get a lot of investment?

Because it is what he explicitly said. As I pointed out in my parent comment.

Of course I'm sure access to capital plays a role. Otherwise it's just someone with a good idea and no money. A meh idea but lots of money also wouldn't attract these kinds of people.


What people explicitly say is their reasoning isn't necessarily their reasoning; especially in an investor/recruiting hype presentation.


Technological R&D doesn't get you anywhere on its own. It's an important prerequisite, sure, but just as necessary is the next step, where a company is formed to commercialize/productize novel research through years of schlepping through market education and government safety trials, to pave the way for the technology to become a "safe" product category for other companies to follow on to. There are many technologies stuck between these two stages—thoroughly "researched and developed", but not yet commercialized.

People like Musk (and the people he co-founds these companies with) are important because they're taking nascent product categories that are "stuck" in the R&D stage with little attention being paid to them, and directing large-scale consumer demand onto them in a way that brings profit-driven industry interest—and therefore industry talent—into the picture. Even if it's not Musk's offering that ends up winning the space, these efforts redefine the public perception of the category in a way that means that every company in the space wins.

(For another equivalent example: the creator of Bitcoin did more for smart contracts by creating one platform that led to competitor platforms that actually had smart-contract support than a thousand academic smart-contract systems projects ever could have.)


... and there were people working on electric cars and rockets before Musk came along too, but somehow he just manages to nudge things along a lot more than the average person!


Our media perpetuates and encourages erratic behavior. People like/love Musk because he does it, for science!

I personally have no problem with him.


I am also an optimist and a techno-utopian...

BUT:

1. This has not been tested on a single human yet, as it has no FDA approval.

2. Preliminary trials in fully quadriplegic patients are several years away (these are also not yet approved).

3. Should these trials succeed, this will still not be available as an elective procedure for healthy people (that will take much more time).

4. The skull exists and is a hard barrier that is not going away. A decade or so from now, should this be approved as an elective procedure, patients will have to have a hole drilled in their skull (note that most people find LASIK invasive, even after decades of successful surgeries).

5. Patients will also have to become comfortable with thousands of fibers being inserted (albeit in a minimally invasive way) through brain tissue by an automated surgical robot.

6. Should the procedure be successful, patients should finally, at long last, be able to control a mouse, keyboard, or smartphone with their brain, imagining the movements instead of using their hands.

There is perhaps, a cyberpunk future where crime syndicates mine Bitcoin in the brains of their victims, where malware pipes gigabytes of extremist political memes in seconds through the dorsolateral prefrontal cortex of young adults.

Maybe that will come one day, but this technology is only using the signals generated by the brain to control a mouse and keyboard. This existed twenty years ago in chimpanzee studies. The real innovation here is in materials science and surgery.

This is amazing multi-disciplinary science in the pursuit of advanced medicine, and we should be applauding it for what it is.

So, thank you Elon for funding this -- but more importantly, thanks to all the scientists, researchers, and engineers who have dedicated their lives to the advancement of our science and medicine.

I will not be electing to undergo this surgery in the future.


The applications are very, very speculative and far-reaching. I think by the time the applications are feasible there will probably be a way to do a minimally invasive craniectomy. The neural implant is impressive work, but anything beyond that is probably going to be very different from what is speculated.


> There is perhaps, a cyberpunk future where crime syndicates mine Bitcoin in the brains of their victims, where malware pipes gigabytes of extremist political memes in seconds through the dorsolateral prefrontal cortex of young adults.

Not bad, man. I'd read that book.


> I love it that his solution to the "AI terminator" problem is making a brain-computer interface so that we can have a fighting chance when AI takes off.

This seems like a well-intentioned medical application that incorporates the latest research findings and likely addresses historical downsides of the field (e.g. scarring issues with long-term deployment of invasive BCIs).

I don't see how that is anywhere close to being related to some sci-fi "AI terminator" scenario, though. If you want to go into some cyberpunk fanfic about Musk, you can just turn this application around and spin an "AI is now able to fry our brains out" narrative, which is neither helpful nor realistic. This AI FUD is so weird to me: you are much more likely to be killed by a badly written autopilot for fancy cars, a failed operation to get your brain USB plug, or a malicious application of AI by companies or state actors in areas like mass surveillance and population control than by a real, strong AI that suddenly emerges, becomes sentient, and decides that humanity cannot be trusted.


What evidence is there that AI is going to "take off" and threaten humanity somehow? How are people imagining this process would happen?


The reverse argument made here is usually the turkey fallacy. For the turkey, all logical evidence points to a continuously improving quality of life, with every need met and a constant availability of food. There's no evidence that it's going to be eaten this Thanksgiving, so any effort at building turkey-computer neural links is dismissed offhand as a waste of time.


How does this analogy apply specifically to AI, though? There's also no evidence that God is going to come and pronounce his judgment on us, so any effort in prayer and pious living is often dismissed offhand as a waste of time. Should non-believers in God reconsider their ways given their knowledge of the turkey fallacy?


I think that's the premise of Pascal's wager: if you simply multiply cost by expectation, believing in God is a better bet.

Of course, with something like a general AI all bets are off. This Neuralink thing is a horribly bad defense, because of all possible defenses it is the one that could give the AI a direct connection to your brain.


This is mostly media hype, IMO, but I have a bias, as I graduated in machine learning and still work in AI.

For an analysis of the state of the field and the surrounding media attention, I highly recommend this blog post by Zachary Lipton [1].

[1] http://approximatelycorrect.com/2017/03/28/the-ai-misinforma...


There are two ways:

1. Paperclip optimizers: a very smart computer that you tell to do one menial task, like producing as many paperclips as possible or proving a mathematical theorem, can turn into a catastrophe as that computer turns all the iron on Earth into paperclips, or into computers that all try to find a solution to the theorem. This also includes computers that we task with "protecting" humanity coming to the conclusion that humans having the power to kill each other is mankind's biggest threat.

2. A crazy would-be dictator who wants to rule the world and tells an AI to do it, or to kill all humans, or something else.

TL;DR: First way: forgetting to tell machines not to kill humans (or not doing it effectively). Second way: some really shit individual explicitly telling machines to kill humans.

The first danger is one we already face: basically since we've had machines there have been accidents with them, including ones involving casualties. In general, the more we care about avoiding casualties, the less likely they are. However, it only takes one super-intelligent paperclip optimizer to break loose, so given the high number of possible casualties, a lot of care needs to be taken to prevent even one such event.

The second danger needs to be coped with as well. One could do two things: very slowly deploy super-AI capabilities at the start, while building AIs that can defend governments and somehow encoding into them how the government works (to prevent parts of the government from using those machines in a coup). The same computers would prevent revolutions, though, so I guess we'll see fewer and fewer of those. You can think of variations on these ideas, like AIs that only enforce Asimov's laws, or that only make sure we don't use any weapons more powerful than $weapon on each other.

What I don't understand though is how neuralink will help with coping with those threats.


1. Unplug the paperclip optimizer. Blow it up. The problem with the Less Wrong idea is that they keep ascribing more and more godlike powers to AI to counter very common objections to the technology. Somehow the entire thing becomes a Godzilla-like self-sustaining organism that ignores anything we can do or throw at it, and has magical powers. Meanwhile, it seems that major websites can have outages if people go on summer vacation and the interns are on duty.

2. They can do that now. What would an AI do differently that couldn't be accomplished with conventional weapons? How would it do so without using said weapons, or anything else that could be done without it?

The AI thing is just a secular form of the rapture, a particular variant of existential dread for people with little to no religious belief.


> Somehow the entire thing becomes a Godzilla-like self-sustaining organism that ignores anything we can do or throw at it, and has magical powers. Meanwhile, it seems that major websites can have outages if people go on summer vacation and the interns are on duty.

Sure, the risk is low right now, but the more powerful the computers we can build, the larger the potential risk. Before you manage to press the off button, the computer might already have deployed a bioagent or killed countless people with drones.

> They can do that now. What would an AI do differently that couldn't be accomplished by conventional weapons?

A military made out of humans is subject to human failings. It is generally a big problem that soldiers shoot in the general direction of the enemy, to avoid punishment for not shooting, but miss on purpose. As an extreme example, the Nazis had to give lots of free alcohol to their soldiers so that they'd continue shooting civilians and burying them under new bodies before they had even died. They later invented gas chambers as an easier method of killing masses of people. Compared to humans, an AI does what it is told. If you tell it "Kill all humans," it will do it.



Same guy that doesn't know Hume's Guillotine [1] or anything about philosophy, and is a charlatan with his meditation app.

[1] https://youtube.com/watch?v=wxalrwPNkNI


https://samharris.org/response-to-critics-of-the-moral-lands...

Do you have anything more substantial to say than ad hominem? Like a response to the video I linked, instead of grinding your unrelated axe against the guy?


The title of the book, How Science Can Determine Human Values, is literally in contradiction with Hume's Guillotine, which anyone who has taken Philosophy 101 should be aware of.

>Do you have anything more substantial to say than ad hominem?

Nope, because the title of the book says it all


I'm interested in hearing more about him being a charlatan with the meditation app, especially considering that, as far as I remember, you can get it for free just by asking.


There are two types of people you meet who are into mindfulness: high practitioners (monks) and the yoga guy from Los Angeles who is "kinda" into mindfulness but not really.


I can only go by what he and other people close to him say, but he says he used to do plenty of acid and went to retreats in Asia for months and months (cumulatively) during his early life, and he seems to be good pals with people like Joseph Goldstein (who studied under Asian teachers in the 60s/70s). He has probably experienced all kinds of stuff.

Point being, if you get (at least some of) what there is to get, then does it matter where your body was born or what it looks like? Is it a bad thing that Western-born people are bringing this (Buddhist/Hindu/Jain) thought to the West?

I would revise your statement: there are the monastics who dedicate their lives to this, the lay people who practice, and the commoditized yoga-as-exercise/stretching folk and peddlers who are far removed from its spiritual components.


Eh, if you use the word "mindfulness," you are yoga guy. A monk isn't mindful; he is mortifying his flesh to practice the tenets of the religion he believes in, fiercely enough that he is willing to self-imprison to follow it better. What you see as mindfulness is just the surface result of winning that struggle. It is very possible to lose it instead, and monks are often open about the dangers of monastic life.

I think people really don't get religion in this sense: the radical, wild, anarchic aspects of it. Mindfulness is more a wish for stoicism in religious guise; the idea of being not stoic, of weeping over your prayers in a cell because you feel the weight of the world's sin and know that the time is short, will not often occur to people.


>Eh, if you use the word "mindfulness," you are yoga guy

Everything in Buddhism and meditation revolves around mindfulness/sati/awareness, whatever you call it.

>A monk isn't mindful, he is mortifying his flesh to practice the tenets of the religion he believes in so fiercely enough he is willing to self-imprison to follow it better.

Monks have to cultivate the Eightfold Path, which includes right mindfulness, so a monk who isn't mindful wouldn't be much of a monk. And also, wow, that sounds so disrespectful and ignorant.


What exactly is wrong with being a regular person who practices mindfulness? Are the benefits they receive not legitimate in your eyes?

The philosophy I have been exposed to through meditation has helped me better understand how the ego can cause problems. It seems you are rather attached to the idea of a very pure, austere study of meditation and associated philosophies. There are other valid ways of approaching such things that you are unjustifiably disregarding.

Alternatively, you could look at it as someone simply being earlier on their path, and provide encouragement instead of ridicule.


>What exactly is wrong with being a regular person who practices mindfulness

Completely normal.

Sam Harris is a charlatan for preaching it using "his program": https://www.goodreads.com/book/show/18774981-waking-up

Spare me: this book has a rating under 4 stars.


> Completely normal

Ok, good to hear. That's not the impression I got from your comment about the LA yoga guy.

> Sam Harris is a charlatan for preaching it using "his program": https://www.goodreads.com/book/show/18774981-waking-up

What's wrong with the book? I read it and thought it was, on the whole, interesting and useful. Obviously it isn't perfect.

How specifically is Sam a charlatan? What falsehoods does he claim about himself regarding meditation?


>problem is making a brain-computer interface so that we can have a fighting chance when AI takes off.

Our fighting chance is an EMP.


That's like saying a lion's chance against humans is its big teeth. They're dangerous in a particular context, sure, but that misses the fundamental asymmetry that took humans from being scared of lions to being an existential threat to lions.


What is an EMP?


Apparently Paul Allen's Experience Music Project has been a secret weapon against malevolent AIs all along!

Or if you're no fun, it's an electromagnetic pulse.


An electromagnetic pulse, created when a nuclear bomb explodes, which some say would destroy all electronic devices within hundreds of miles that haven't been specifically designed to be EMP resistant.


> so that we can have a fighting chance when AI takes off.

There wouldn't be a need for this without the rampant, myopic introduction of AI. Why not just stop that irresponsible "innovation"?


These sorts of criticisms always ignore the prisoner's dilemma for the sake of expressing moral indignation.

There is no stopping individual agents in a system from doing what helps them most without an authoritarian at the top. Mostly, those authoritarians come with even worse problems so we're left with this imperfect world.

I'd love to see comments on HN focus less on self-righteousness and instead recognize that there is no one guy at the top that you just have to scream really loudly at.



