Comma Two Devkit (comma.ai)
225 points by ryanlowun on Jan 7, 2020 | hide | past | favorite | 230 comments


Quick bit of feedback for any Comma employees lurking:

It took me a very long time to realize that this was a self-driving solution for existing vehicles. The hero section just says

"Driven over 6 million miles" and then a "Buy Now".

When I scrolled down and looked at the product and read:

> We’ve added an infrared camera to provide 24/7 driver monitoring, an integrated panda, a custom cooling solution, and an OBD-C port with power supplied directly from the car.

I thought it was some kind of dash cam + driver sleep monitor? Also, most of those words don't mean anything to me.

Only after deeply investigating it did I realize that it's basically Tesla Autopilot for existing vehicles.

You have an awesome value prop though: "Bring autopilot features to your car." Make that clear!


I think this is deliberate. It's also why you have to download and install the self driving software separately. AFAIK regulators stepped in and killed comma as a self driving solution, so this is their way of still going forward and trying to maintain plausible deniability. It might not pass muster in a court of law, I don't know, but it should probably keep the regulators off their back.


It seems irresponsible for an engineering company to avoid regulators.


At risk of being snarky, I'm not sure 'engineering company' is the right moniker. They're an R&D shop making bleeding edge research available on a homebrew platform that can interface with recent vehicles to provide a level 2/3 driver assistance package.

The stuff required to pass the regulators is the long, slow, boring process of pushing reliability and failure modes into "safe" territory, while documenting the fact. This is the 'engineering' part. Making a system that works (probably, most of the time) isn't engineering, it's hacking.


"At risk of being snarky, I'm not sure 'engineering company' is the right moniker. They're an R&D shop ..."

... doing their R&D on public roadways, with unwitting / unwilling participants.


They are hackers. Geohot, the founder, did iPhone and PlayStation jailbreaking in the past. I wouldn't buy any safety-critical equipment from them. Their motto could literally be: go fast and break things.


Not necessarily.

Regulatory capture is a thing. Regulators, at least in part, exist to maintain a barrier to entry for any new players trying to enter a field dominated by large players.

Several major tech companies only exist because they broke rules in their early days. And most people agree the outcome was a net benefit to society.


As someone who drives on public roads (and has family and friends who drive on public roads) this is a frightening comparison and attitude.


Next time you drive, look around you at some of the other drivers. The ones with cell phones in their hands are more dangerous than someone with a comma.ai dev kit.

Fun story: I wanted to build a hypermiling coach/trainer system about 10 years ago and got to the point of having a poor man's ACC in a Scion. It didn't brake, though, only coasted to lose speed (thus requiring driver input). So I got the wild idea of tapping into the electronic brake distribution system to apply brake pressure when I needed to slow down. I figured out the system and was ready to tap into it when I had a dream that night: I was on the highway and all of a sudden two wheels locked up from braking too hard. I stopped all testing on that car on public roadways after that.

My current car is hacked to provide lane centering when lane assist is on, but it’s still operating within the bounds of the factory electronic steering (so it can’t self drive/make 90 degree turns)


I agree, both cell phone drivers and people with unlicensed, untested, unregulated homebrew self-driving setups are dangers to everyone around them.


As are people with 15 year old rusted hoopties, drivers who had a fight with their significant other, disagreement with a colleague or whose sports team just lost, drivers who slept poorly the last few nights, drivers who are ill and taking cold medicine, cars with lamps out, worn tires, worn ball joints, bad shocks, no ABS, no vehicle stability control, etc.

I’m okay with enthusiasts repairing and modifying their cars and driving them on public roads, because I’d rather have that than a world where only certified cars, maintained at certified dealers, using only certified parts are permitted on the roads.

Maybe you're not a "car person". Apply the same principles to whatever field you have passion for. Only certified Intel parts may be added by Intel technicians to Intel computers, which must run only certified operating systems. Or only certified Edison lightbulbs may be added to certified Edison light fixtures by certified Edison technicians. Only certified firmware may be run on devices you're being allowed to rent from the certified manufacturer.


I agree, driving is dangerous. Why is the response to add another dangerous hazard? This argument is ridiculous.


Do you have evidence to support the hypothesis that cars equipped with aftermarket equipment in general, or with this specific system, are more dangerous? Do you think such evidence should exist/be considered prior to banning a potential device?

The reason I ask is that there is a general question of whether we should preemptively block new technology based on hypotheses, whether we should seek supporting evidence, or whether we should wait for confirming data before blocking freedoms to experiment. Said differently, do we ban everything until "proven" safe, allow everything until "proven" unsafe, or something in the middle?


Experimentation using heavy equipment that has been modified with unlicensed and untested hardware and software designed to radically change its behaviour, in close proximity to people who haven’t consented to the experiment, should be illegal.


> but it’s still operating within the bounds of the factory electronic steering

That's Comma AI's approach. Their hardware spoofs the stock computer so it can send commands to the car's active safety systems. So there are some stock limitations, like the steering torque, that openpilot doesn't bypass.
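To make that concrete, here's a rough illustrative sketch (not Comma's actual code; names and limits are invented) of what "operating within the bounds of the factory steering" looks like in software: every steering command is clamped both in absolute torque and in rate of change before it goes on the CAN bus, so the add-on can never command more than the stock EPS would accept.

```python
# Illustrative only: clamp a desired steering torque to hypothetical
# stock-EPS limits before it would be sent on the CAN bus.

MAX_TORQUE = 150       # absolute torque limit (made-up units)
MAX_TORQUE_DELTA = 7   # max change per control cycle, to avoid jerks

def clamp_steer_torque(desired: int, last_sent: int) -> int:
    """Limit both absolute torque and its rate of change."""
    # rate limit relative to the last command actually sent
    torque = max(last_sent - MAX_TORQUE_DELTA,
                 min(last_sent + MAX_TORQUE_DELTA, desired))
    # absolute limit mirroring what the stock EPS enforces
    return max(-MAX_TORQUE, min(MAX_TORQUE, torque))

last = 0
for desired in [0, 50, 300, 300, -300]:
    last = clamp_steer_torque(desired, last)
    print(last)   # 0, 7, 14, 21, 14 — torque ramps gradually
```

The upshot is that even a buggy model can only nudge the wheel within the envelope the stock system already allows, which is exactly the limitation the parent describes.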


This doesn't address the fundamental issue: a car is being controlled by an untested and unregulated system on public roads.


That's true, but how much oversight is really applied to the existing software systems on modern vehicles? Did national regulators review the design and coding on the electronic stability controls? I doubt it. Is the CAN bridge between the entertainment system and the critical systems bus (with ABS, ESC, power steering etc.) suitably hardened to protect against hackers (hint: no.) Who has reviewed the electrically assisted power steering?

Most things in a modern car are controlled by software that's far more opaque than OpenPilot. Don't get me wrong, a lot of their software and hardware architecture choices scare the heck out of me, but at least it's transparent.


That could be a way to describe all human drivers :P


I'm not sure what testing a regulator can do today that's more rigorous than the 10.5 million miles already driven by Open Pilot equipped cars.


It isn't being done by anyone the company is liable for


If Linux caused your house to burn down, who would be liable?

(Hint: I hope you have good insurance)


You're already driving on roads shared with garbage solutions like Tesla's '''autopilot'''. This thing is no different, other than the creators not being able to afford the regulatory bribe to deploy it.


It's a completely different product with a much wider number of configurations that end-users have to piece together from Wiki pages and packages they download themselves, loaded onto hardware they've installed in their cars with zero oversight from anyone qualified.


> end-users have to piece together from Wiki pages and packages they download themselves

Which makes it a ton safer than '''autopilot''' because by the time those users are done installing it they have a pretty clear understanding of its capabilities and limitations.

vs. the endless videos of morons driving tesla in '''autopilot''' mode with hands off the wheel because it's an incomprehensible black box of magical self-driving capability for them.

The regulator hasn't stopped tesla autopilot from causing accidents. It's just a competitive moat.


It doesn't take a 5-year ML PhD and millions of simulation miles to download and install packages.


And yet all of those resources have amounted to what all regulators agree is an SAE level 2 system, only qualified for hands-on lane-keeping assist, exactly like you'll find on a modern Mazda or any number of other cars without any of that hardware or those ML PhDs.

Sort of proving the point that the resource pool of the incumbent doesn't necessarily translate to the value of the product.


Comma is bragging about 6 million miles driven. Tesla's Autopilot has over 2 billion miles driven. This has two cameras (EDIT: and apparently 1 radar that already exists in the vehicle). Tesla's has 12 cameras, 12 sonars, and 1 radar. This is backed by a company that apparently has received $8 million in funding. Autopilot is backed by a company with a market cap of $84 billion. You are welcome to your opinion of Tesla's product, but if that is garbage, I can't imagine the word you should use to describe this, because the two products are in no way comparable.


You're listing specs which are a proxy for how much money the incumbent has, and saying that the regulatory barrier should be set at those specs for some reason.

None of those specs give any indication to the relative safety of the two systems though. And this is exactly what incumbents want. They want a small player to have to match their massive cash holdings to enter their market, even if the small player has a better product.

Here's some food for thought: Tesla has been slapped with multiple lawsuits for misleading advertising of Autopilot. Regulators all agree that Autopilot is just an SAE level 2 system, just like those found on much cheaper cars with far less hardware and none of the marketing oversell.


I listed miles driven. That is a direct indicator of safety.

I also listed the number and type of inputs each system has. This might correlate with money, but it clearly also correlates with safety. One simple example, it is impossible for the Comma system to have 360 degree visual coverage of what is around you with just two cameras in the locations they are in.


Tesla does not use 12 cameras for Autopilot. It just uses radar and 1 or 2 of the front-facing cameras, depending on the version. Not really different from openpilot.


A selling point of HW3 is that it will use all of the cameras, so that's likely to eventually change.


You're making a lot of really wild assumptions about what correlates with safety without a shred of evidence to back it up.

Humans have two eyes, a pretty narrow field of view, no radar, no sonar. By your logic they're even worse, yet they're so much better than Autopilot that it has no hope of matching them.


>You're making a lot of really wild assumptions... without a shred of evidence to back it up.

>Humans have two eyes... yet they're so much better than Autopilot that it has no hope of matching them.

These two statements are pretty ironic back to back.

I'm not sure if we are ever going to convince each other of anything if we can't agree that there is a clear difference in track record when one product has been used for 2 billion miles and the other for 6 million.


I haven't seen any videos of OpenPilot vehicles swerving towards gore points or ploughing into stopped vehicles, though.


You are already sharing a road with people who are constantly distracted, drunk, or other. I would say the roads would be a much better place if there were more people who had a computer driving them.


Uber is not a tech company, it is a minicab firm.

Airbnb is not a tech company, it is in the holiday-lets game.


On one hand yes, on the other the company wouldn't exist. Regulators are important, but they also serve incumbent interests as they raise the barrier to entry for newcomers. It's a way to pull the ladder up behind you once you've achieved success. I don't know how we can better balance those seemingly conflicting aspects.


> Regulators are important, but they also serve incumbent interests as they raise the barrier to entry for newcomers.

Call me crazy, but when it comes to products that could so easily cause death, a barrier to entry is a good thing.


Aaaaand that's why Americans need to pay 500 bucks for an insulin injector pen while Indians pay 10 bucks for the same device.

The problem with regulatory capture and arguing it's a good thing, is that there's a downside/cost to the barrier it creates. And often that downside outweighs the upside.

To be clear, no one is arguing regulation shouldn't exist. Just that the nature of regulation is to be captured by incumbents and serve a purpose orthogonal to protecting the people.


I didn't say regulatory capture is a good thing. I said that barriers to entry can be good in certain instances. In the instance of medical devices, the problem in the US has nothing to do with barriers to entry. If you can drive across the border to Canada and buy an identical product for a fraction of the price it would cost in the US, that isn't because the barrier to entry is too high in the US. The product already exists and is being produced, it just costs more due to a broken healthcare industry.


No, Americans are paying 500 bucks for a pen precisely because of regulatory capture. None of the manufacturers of the cheaper version have been successful in jumping the regulatory barrier to entry which the incumbents lobbied for, and tens of thousands of Americans suffer for it each year.


I have a hard time believing that, e.g., most European countries have less regulation in the medical sector than the US. Probably less corruption/"lobbying" (because corporate campaign contributions are more strictly regulated in most other countries), but corruption is quite different from regulation, even if both affect how law gets written.


Many times it’s hard to distinguish the difference between regulation and corruption. Sometimes they’re one disguised as the other. Not sure what method you have for distinguishing between the two, so please share!


I'd say they exist on different levels and shouldn't be conflated. Corruption is the undemocratic alternative to a democratic decision making process. Both affect regulation, e.g. law.


In what way is that scenario not suggestive of a barrier to entry to the US market for the maker of the product already being sold in Canada?


> Aaaaand that's why Americans need to pay 500 bucks for an insulin injector pen while Indians pay 10 bucks for the same device.

I can't speak for insulin injectors, but I know for a fact that the reason Indian pharmaceuticals are often times massively cheaper is because they ignore patents by which the majority of the world abides.

Now in many cases, the patent system is set up for incumbents who have enough legal muscle to develop and patent isomers, metabolites, or "extended release" versions of successful drugs which are losing their patent privileges. But that's not the whole story. We know that some patents and copyrights are needed to encourage investment in R&D.

India (used to?) just blindly copy drugs, ignore paying royalties, and take the profits. I haven't ever been on any blockbuster meds developed natively in India. Have you?


[flagged]


What's with the nationalism? Every country has benefits and problems for its citizens, and Australia is no different: just look at the literal scorched earth outside.


The comment points out the ridiculousness of the parent comment's claim that you either pay $500 for medication or get run over by DIY cars.


Taking another angle, when it comes to a product that could so easily save a net of 10s of thousands of Americans every year once fully developed, barriers to entry might be counter-productive.

(I’m actually fairly bearish on level 5 self-driving being “close”, but if it was, speeding that up would almost surely be worth throwing the switch on the trolley tracks for both the lives saved and the reduction in wasted attention of those who would have otherwise used a lot of their lifetime driving.)


I can never really understand this argument. Why can't we carefully craft regulations to scale in their burden with the company's ability to bear said burden in some way? Scale with revenue or head count numbers? Similar to how income at the lowest bracket isn't taxed, to lessen the burden on people much less able to pay.


Because (a) then you'd end up with things like pacemakers designed by sole operators because their regulatory burden would be that much lower than what a biomedical engineering firm would face, and (b) any such system would be trivial to game anyway.


We can, but usually it's the established incumbents who are helping draft the language of the legislation (which is better than politicians with no expertise working alone), and they are not incentivized to encourage the kind of regulation you propose.


I like this idea a lot. We don't currently do this, but it would get most of the benefits without most of the downside. Seems like a win.


Count me in as well. I came here to make that comment.

I've had to scan the entire website and still cannot find a reference to a simple term like "self-driving" anywhere in the website. It seemed like a dashcam that warns the driver.


While the website doesn't directly state it, the entire purpose of the Comma hardware is to run the OpenPilot self-driving software (or "driver assistance" software, whatever you want to call it). Sure, you can run other software with it, but it works really well with the software that has already been written for it and is available on the same website.

The FAQ clears some of this up (https://comma.ai/faq), but I agree with the idea that it is purposefully obtuse to keep regulators off their back.


On the third scroll-down "page", OpenPilot, there is a hard-to-see link, "See OpenPilot in action".

When I watched the videos it became clear what the product does; I was lost before that.


If it doesn't say "self driving" anywhere, it might be presumptuous to say that's its intended purpose.


Seriously. The website does a pretty crappy job at explaining what the product actually is.


I think that's sort of by design. The NHTSA shut down their ability to sell in the US a few years ago. This seems to be some sort of stealth method to sell it again using the same sort of trick pharmaceutical companies use where they don't have to tell you any warnings if they don't actually tell you what the thing does.

https://techcrunch.com/2016/10/28/comma-ai-cancels-the-comma...


> Hotz tweeted from the official Comma.ai account that rather than providing the requisite response, the company would instead be cancelling Comma One entirely, and turning its attention to “other products and markets,” since Hotz says that the prospect of a life “dealing with regulators and lawyers… isn’t worth it.”

...Wow. Maybe don't develop autonomous driving tech then?


Their tech is within the top two right now, and Hotz has demonstrated a deep responsibility with it. Plus, it's all libre software.

Skipping regulators seems fine, given (unlike their competitors) they're acting responsibly. Their eye-tracking tech is really cool, too, and prevents the Tesla problem of drivers losing focus.


>Their tech is within the top two right now,

how can anyone make that statement in the same breath as declaring that dodging regulators is fine? Who the hell evaluates the quality of the product if not regulators?

Are we supposed to take George Hotz's or the company's word for it?


I am a comma user, engineer, and autonomous vehicle enthusiast. Comma truly is second only to Tesla if we leave out Waymo, but Waymo is only operating in geofenced areas.

I have used every single pilot assist out there and they're all, quite frankly, terrible in comparison. I will give GM Super Cruise a nod for how well it's done, but again, it's geofenced.


You can use it; you don't have to take Hotz's word, or the company's word for it. How everything works is completely open, and it works pretty well. Also, see:

https://news.ycombinator.com/item?id=21987585


I thought the computer vision model wasn't open?


The quality of a product is pretty much always determined by the marketplace, not regulators.


I feel like the Boeing 737 MAX situation is a clear demonstration that this isn’t really the case, especially in the realm of safety regulations.


It seems to me like the market has developed a better idea of the product quality there than the regulators initially did.


It's open sourced. Everyone is able to evaluate quality.


The people who are consumers of these products are not necessarily software engineers, so no, not everyone can evaluate the quality of these products. Secondly, the models are not open source, and even if they were, looking at them tells you nothing about the quality of the product.

The only way to actually assess the quality of these products is through independent testing by a body that is qualified and reputable enough to do so. Which, in my opinion, should happen before even a single one of these devices is allowed to participate in regular traffic.


It was made fully open source last year.

https://medium.com/@chengyao.shen/decoding-comma-ai-openpilo...


> it was made fully open source

Unless something changed I missed, the part I emphasized is not true. There’s binary blobs in the source code that last I heard would not be released.


Do you have any evidence to support your claim that Hotz has done anything resembling "deep responsibility"?

Because from the scraps that I've managed to put together, he seems childish and irresponsible.


https://www.youtube.com/watch?v=iwcYp-XT7UI

This Lex Fridman interview is pretty explanatory on the topic of responsibility.

A couple of highlights, which point to Comma being at minimum more responsible than Tesla: Comma doesn't advertise itself as anything beyond L2, unlike Tesla; Comma doesn't fuck up eye-detection, it refuses to cooperate with a driver who has their eyes off the road.


Their tech is nowhere close to the top two.


Or maybe don't have such a stringent regulation ?


As I understand it, "the NHTSA shut down" is a loose approximation of the story. They marketed aftermarket self-driving kits without getting any regulatory approval, and when the NHTSA sent them a letter saying "So, uh, what's this thing you're selling? Can we meet?" they immediately chose to shut themselves down instead of meeting.


It isn't an actual consumer product, it is a development kit.


it's a dashcam with a passive driver assist system, the openpilot stuff is a separate product meant to be installed onto the "comma two" by the user


Yes. The heck is an "integrated panda"?



Maybe the target customer is the King of All Cosmos.


I only learned what this is from your comment, after I came back from the site having spent minutes trying to find out. For a couple of seconds I only saw the "Buy now" button and I thought "Buy what the fuck?" Really annoying design.


Came here to say something similar. I briefly thought it might have been a self-driving add-on because of this bit:

>The comma two does not ship preloaded with software capable of controlling your car. Open source software can be installed separately.

But then I figured that if it actually did that, they'd be screaming about it, not talking about how you can watch your drives.

Kind of hilarious.


Came here to make this comment! Seems like a cool product - but most visitors to the webpage will never learn what the product does, since it is buried in the page.


It's vague-ish on purpose. They only want smart people using it. It's still a dev kit, not a consumer product yet.

From comma's discord:

"if people can’t figure out what it is, it isn’t for them"


It's also definitely not for everyone who figures out what it is. They should be up front about what it does and the legality around using a device in that capacity.


I still feel like they've taken it a level too far. I already knew comma.ai from past news, and even with that context I wasn't sure if they pivoted to fancy dashcams.


I literally thought I was looking at some funky games console first....but which you mount on your dashboard for some reason? And even after reading the entire website I didn't get that it was meant to be for autopilot - I only got that after reading your comment.


I completely agree. I had never heard of it before; after looking at the website and some YouTube videos for a few minutes, I realized it was a self-driving thing you can install on stock vehicles that support it.

They need better information / an overview.

Looks very promising though!


Same! I had no idea what this was even after reading most of the website.


Yea I had to come to the comments to figure it out.


This. I clicked from the HN-linked page over to the main page, scrolled through it, and still had no idea what I was looking at.


Came here to say the same thing! Very difficult to understand what this actually is.


I read the whole website and still not much idea what this is.


Comma.ai represents a strange attitude towards life in which you try to turn hard, serious problems (like self-driving) into trivial problems (write some Python, run it on Android phones, and plug it into your car!) and try to get away with it. It only has a chance of working if you are cool enough in SV, like geohot. I interviewed there once and it was extremely off-putting how clear their demand for fast progress was, and how little they cared about safety.

It's move fast and break things all over again.

Think about how the tech community rips apart Boeing and demands the utmost in quality engineering, reliability, redundancy, testing. And then we have comma, which controls your vehicle on a non-realtime system...

If you ran comma.ai in your car and had a serious crash, you could possibly be found criminally negligent.


> And then we have comma, which controls your vehicle on a non-realtime system...

Lol. This is what I found extremely shocking and is the first thing that I noticed. I don't think anyone serious about embedded design would touch it. They are trying to ignore decades of experience in real time systems and build some hacky stuff and make it open source.


I don't understand how you'd do hard real-time when one of the major inputs, the neural network, isn't deterministic. In a hard real-time system, what happens if your neural network is late telling you where the lanes are?

In reality the comma device isn't controlling the car; a number of peripherals on the CAN bus are controlling the car. All those peripherals are hard real-time, and the comma device is bridging the non-deterministic neural networks and those hard-real-time peripherals.

At some point you need to deal with the non-deterministic neural networks and bridge them into a hard real-time control system. People complaining that part of the system isn't real-time just seem like they haven't thought about it very much. Of course the UI and neural networks aren't going to be hard real-time.
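A rough sketch of the bridging pattern described above (purely illustrative, with invented names and thresholds, not Comma's actual design): a fixed-rate control loop consumes the latest model output, and if that output goes stale past a deadline, the bridge stops acting on it rather than steering from old data.

```python
# Illustrative sketch: bridge variable-latency model output into a
# fixed-rate control loop with a staleness deadline.
import time

STALENESS_LIMIT_S = 0.15   # how old a lane estimate may be before we bail

class Bridge:
    def __init__(self):
        self.last_output = None
        self.last_output_ts = 0.0

    def on_model_output(self, lane_estimate, ts=None):
        # called whenever the (variable-latency) model finishes a frame
        self.last_output = lane_estimate
        self.last_output_ts = ts if ts is not None else time.monotonic()

    def control_step(self, now=None):
        # called at a fixed rate by the control loop
        now = now if now is not None else time.monotonic()
        if self.last_output is None or now - self.last_output_ts > STALENESS_LIMIT_S:
            return None            # stale: stop actuating / disengage
        return self.last_output    # fresh: compute actuation from it

b = Bridge()
b.on_model_output([0.0, 0.1], ts=10.0)
print(b.control_step(now=10.05))   # fresh: returns [0.0, 0.1]
print(b.control_step(now=10.40))   # stale: returns None
```

The hard-real-time guarantee lives in the control loop and the deadline check, not in the model itself, which is the point the parent is making.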


Neural networks are very much real-time. They generally consist of a fixed series of math operations of fixed size. Only the weights and inputs change.

The neural net will never be late. It just might be inaccurate.


Huh, I suppose they aren't inherently incompatible with real-time constraints. It looks like it's pretty challenging to do in practice though.

I'm still somewhat confused as to how multi-object recognition works in a real-time way, naively I'd presume that recognizing n+1 objects in a frame would take longer than n objects.

You might be able to more practically get real-time neural networks using something like YOLO (you only look once)?

Either way it seems to me like hard real-time neural networks are a very challenging problem.


> If you ran comma.ai in your car and had a serious crash, you could possibly be found criminally negligent.

I don't know that it's much different than using cruise-control and then having a serious crash. If there's criminal negligence when using cruise control, it's because you weren't aware-enough to shut it off when the conditions cruise-control is built for no longer applied; the mere fact of using cruise-control isn't criminally negligent, as long as you're "just waiting for it to slip up so you can take the wheel."

For that matter, the same is true of teaching a teenager to drive. There's no negligence in having them at the wheel if you're ready to supersede their bad driving at all times.

What's suddenly different, if the "thing that doesn't really know how to drive yet, but which you're ready to supersede the bad driving of" is an AI instead of a teenager?


You can get a learners permit to teach your teenager completely legally. You couldn't do the same with a 10 year old...

If a serious crash involving comma.ai went to trial, it would look like the driver was using unapproved, unregulated software, unnecessarily, putting others at risk. Recent Autopilot incidents show us that expecting a human to take over within a second is not reasonable. I would be very worried that the court would indeed find the driver negligent.


> I would be very worried that the court would indeed find the driver negligent.

I use openpilot on the highway, and I love it. I had to physically tap into the CAN bus for my car's safety system, which made me very nervous. But the installation is completely reversible, and I'm much more comfortable with its limitations. I just treat it as a really great adaptive cruise control and active lane keep assist.

But that's all it is. I would be completely responsible in most situations if my car caused an accident while openpilot was active. openpilot's design has it immediately drop all control of the car if you touch the brake or gas, it does not aggressively follow cars, and the driver is still required to control the car. So if I start to become uncomfortable I need to immediately take over.
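The disengage rule described above can be sketched as a tiny state machine (illustrative only, not openpilot's actual code): driver input always wins, so any brake or gas press drops control back to the driver immediately.

```python
# Illustrative sketch of a driver-input-wins engagement state machine.

class Engagement:
    def __init__(self):
        self.engaged = False

    def update(self, engage_requested, brake_pressed, gas_pressed):
        # driver input always wins: brake or gas instantly disengages
        if brake_pressed or gas_pressed:
            self.engaged = False
        elif engage_requested:
            self.engaged = True
        return self.engaged

e = Engagement()
print(e.update(True, False, False))   # True: engaged
print(e.update(False, True, False))   # False: brake disengages
print(e.update(False, False, False))  # False: stays off until re-engaged
```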

Assuming that openpilot will always act within its safety guidelines doesn't mean that it's safe. Toyota has paid tens of millions in fines and settled multiple lawsuits over unintended acceleration. I believe that using openpilot on the highway makes me a safer driver. But I do understand that if I did get in an accident, proving I'm not negligent might be extremely difficult.

I agree with you, liability is a very important part of self driving functionality. I won't think a car has full self driving capability until I can get in the back of that car, go to sleep, have it drive me somewhere, and hold no legal or financial responsibility for any accidents that occur. Openpilot is nowhere close to that, but I do think it's really good for what it's meant to do.


What if the crash is caused by the device failing? In the case of your car's built in systems, they've been approved by regulators and you probably wouldn't be held liable. If it was an unregulated device that you installed into your own car that caused the crash, I bet the legal circumstances would be different.


Geohot is quite abrasive.


he's quite a character, surely abrasive to some. If you want a sample for yourself:

https://www.youtube.com/watch?v=u-dgbrB7PZU


His interview with Lex Fridman was quite interesting: https://youtu.be/iwcYp-XT7UI

I like him, personally.


Yeah I really enjoyed this interview. Made me respect him much more and realized that he actually is concerned about safety, specifically more than Tesla (especially when it comes to driver monitoring).


I’ve seen worse characters (Torvalds), and yet they are loved by the ‘community’ despite their behaviour.

The Bully wins


[flagged]


[flagged]


What's so "abrasive" about non-lethal border security?


Compare OpenPilot to other solutions.

Tesla trying to kill someone: https://www.youtube.com/watch?v=0GnysB0rO3s

The super reliable "Level 3" Audi system not being able to go straight on a highway, the most simple problem there is in self-driving: https://www.youtube.com/watch?v=WsiUwq_M8lE&t=297

Several cars "pedestrian safety system" failing on detecting pedestrian (You had one job..) https://www.youtube.com/watch?v=7Y8JG7kepwc

Let's be honest, there is not a single driving assistant that isn't dangerous. Why care about "realtime" if your model tries to kill you once per minute? Realtime is one of the final touches you need to make your software perfect. That's just how the industry works, and if you want to keep up with them you have to go the same route. Or do you know of any exception? Most manufacturers tell you that you should closely watch what your car is doing on autopilot, and they limit its torque and acceleration, so the human driver serves as the reliability, redundancy, testing, and realtime system.


Great perspective. Why isn't SDC software held to the same standard as Boeing's? The safety framework is the same.


And comma ai deals with regulation by selling dashcams that can drive a car if the user installs hardware and opensource software.

I use open pilot for highway driving and love it. But Comma AI is intentionally making design decisions to get around regulation.


People don’t seem to understand probability and statistics.


Thanks to NHTSA being mean Comma is more or less or fore-er running(than Tesla they say...yeah okay?) so


Brakes on a plane, fall out of the sky. Brakes in a car, come to a stop.


I think their bet is to be acquired by a desperate established car manufacturer, hopefully before someone dies.


"And then we have comma, which controls your vehicle on a non-realtime system..."

This can't be true ... ?


The safety critical stuff for disengagements and torque limiting etc. runs in real-time in C code on an STM32 micro.


> It's move fast and break things all over again.

And this time it's people they're moving :P


I'm curious about this perspective and would like to know more, could you expand on this?


If you'd like to know more specifically about the comma.ai saga, regarding safety and regulation in the U.S., here's some background, previously covered on HN:

https://news.ycombinator.com/item?id=12840368

https://techcrunch.com/2016/10/28/comma-ai-cancels-the-comma...


Apps crash a lot.


IRL too, now.


When did Comma.ai claim they were doing self-driving?


Well, the website says:

> Currently, openpilot performs the functions of Adaptive Cruise Control (ACC) and Automated Lane Centering (ALC) for compatible vehicles. It performs similarly to Tesla Autopilot and GM Super Cruise. openpilot can steer, accelerate, and brake automatically for other vehicles within its lane.


So nowhere, then.


What is Adaptive Cruise Control and Automated Lane Centering if it isn't self driving?

Notably, the NHTSA called out these exact features in their letter to Comma.ai: https://techcrunch.com/2016/10/28/comma-ai-cancels-the-comma...


If the car can't drive without you, then it's not self-driving. Those features do not allow the car to drive by itself. It can't go anywhere without a licensed, attentive driver. This is so thoroughly not complicated.


What hasn’t changed is that OpenPilot falls under the category of level 2 self-driving as defined by the Society of Automotive Engineers.[1]

Look at the directory structure for the software - it has a "selfdrive" directory where all the code is![2]

[1] https://venturebeat.com/2020/01/07/comma-ai-launches-comma-t...

[2] https://github.com/commaai/openpilot#directory-structure


What are they doing?


Since the website is not clear, here is what it is.

Comma Two is an aftermarket dash cam that adds features like adaptive cruise control, automatic lane centering, and forward collision warning to cars using open source software.

It uses a combination of the dash cam and the vehicle’s built in radar to accomplish this. It is able to control the car and receive its radar data by connecting to the vehicle’s OBD port.

The unit is $1000 and also requires the purchase of a vehicle specific harness for $200.

A list of compatible vehicles can be found here: https://comma.ai/vehicles

The source code that performs the driver assist functionality has to be loaded by the user and is available here: https://github.com/commaai/openpilot

There’s also a “prime” subscription which pays for cell service on an included SIM card and allows you to see video footage or location data from your vehicle remotely.


A lot of commaai bashing here.

I was very skeptical too. However, after using their system every day for almost a year, I can definitely say it makes me a safer highway driver.

I have more overall awareness of the road, since I don't always have to keep my eyes on the lane in front. I look around the car more and check my side and rear mirrors more often. Once, an asshole driver was zipping left and right, overtaking, and was going to squeeze through us; I spotted them from a distance and gave more space. If I hadn't, they definitely would have crashed into us. Granted, half an hour later they did get into a crash anyway.

Another time, the car in front suddenly braked; the system immediately alerted and started braking. Even if I had been driving manually, I couldn't have matched that reaction time. The car ahead crashed; ours didn't.

Some humans are very dangerous highway drivers because they don't maintain distance from the car in front.

This is a testament to both commaai and Toyota. I can set a roughly 2-second follow time to the car in front, and it always maintains that safe distance. Really neat.
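That "2 seconds-ish" setting is a time headway: the gap in metres grows linearly with speed so that the time available to react stays fixed. A quick sanity check (the function name and defaults here are just for illustration):

```python
# Time-headway following distance: the gap scales with speed so that
# reaction time is constant. Names and defaults are illustrative only.

def follow_distance_m(speed_kmh: float, headway_s: float = 2.0) -> float:
    return speed_kmh / 3.6 * headway_s   # km/h -> m/s, then times headway

print(round(follow_distance_m(108), 1))  # -> 60.0 (30 m/s for 2 s)
```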

Also, commaai's driver monitoring has gotten quite good. Move your eyes off the road for a bit and it yells at you with an alert to touch the steering wheel and get your eyes back on the road. That alone makes you more attentive.

All I can say is give it a shot; it's quite an ingenious solution for giving other brands of cars highway lane following.

I'm not going to say self-driving, because essentially it's a driver monitoring + alert system + lane following system. Very narrow, but it does a good job at it.


"comma two is designed for permanent installation in your car."

Okay, what's it do?

"We’ve added an infrared camera to provide 24/7 driver monitoring, an integrated panda, a custom cooling solution, and an OBD-C port with power supplied directly from the car."

Great, so it monitors cars... For what? Quick movements? What's a panda? I'd be interested if I knew what it was.

"comes with three free months of comma prime"

So the $1000 onboard computer that does something comes with a free* subscription.

They need a total marketing rework, can't imagine the sales they've already lost.


Their marketing won't change until they are willing to deal with the NHTSA. https://techcrunch.com/2016/10/28/comma-ai-cancels-the-comma...


George (geohot) won't listen to anyone, much to the detriment of comma


>>provide 24/7 driver monitoring

>Great, so it monitors cars

I think it's monitoring the driver (against sleep).


they need a job in a field very far away methinks


Can anyone here comment on actually using a Comma product?

I don’t even drive a car, but I don’t see why anyone would buy something like this yet. My point being that it’s not full self driving, and frankly its safety is entirely questionable.

Self driving is supposed to make driving less stressful, but does having a system like this actually do that? Or does it make you more stressed because you have to be conscious of the computer?

Don’t get me wrong, I like what George Hotz is trying here, but what incentive is there to be an early adopter?


I find driving much less fatiguing when using my Eon (the previous iteration of this product). Taking yourself out of the feedback loop of constantly adjusting steering (and gas if in traffic) makes the experience much more relaxing even though you still have to pay full attention. In my experience, OpenPilot is really rock solid in normal highway driving scenarios and there isn't anything stressful about using it.

As far as safety goes, their safety policy is outlined here: https://github.com/commaai/openpilot/blob/devel/SAFETY.md

the short of it is:

"""

The driver must always be capable to immediately retake manual control of the vehicle, by stepping on either pedal or by pressing the cancel button.

The vehicle must not alter its trajectory too quickly for the driver to safely react. This means that while the system is engaged, the actuators are constrained to operate within reasonable limits.

"""

Those checks are separately coded into both OpenPilot, the self-driving software, and Panda, the microcontroller responsible for communicating between OpenPilot and the vehicle.
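The two constraints quoted above amount to magnitude and rate limits on every actuator command. A rough Python sketch of the idea (the function names and numeric limits here are made up for illustration, not taken from the openpilot source):

```python
# Illustrative sketch of dual magnitude/rate limiting on a steering
# command. Names and numeric limits are invented for the example.

MAX_TORQUE = 1500        # absolute cap on the steering torque command
MAX_TORQUE_DELTA = 50    # largest change allowed between consecutive frames

def clamp(value, lo, hi):
    return max(lo, min(hi, value))

def limit_steer_torque(desired, last_sent):
    """Constrain a command so the wheel can never move faster
    than a driver could react to."""
    torque = clamp(desired, -MAX_TORQUE, MAX_TORQUE)
    torque = clamp(torque, last_sent - MAX_TORQUE_DELTA,
                           last_sent + MAX_TORQUE_DELTA)
    return torque

# A sudden full-torque request gets smoothed over many frames:
print(limit_steer_torque(1500, 0))  # -> 50
```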


Thanks for the perspective.

Do you happen to know what the hardware safety is like? Just curious.


Sure! so there are three stages of safety:

OpenPilot generates all of the control messages to send to the vehicle based on feedback from the car's sensors and its own camera. It has rate limits for all control messages it generates, so that it can't jerk the steering wheel or slam on the brakes (as of now it leaves collision avoidance up to the stock system).

Steering limits: https://github.com/commaai/openpilot/blob/a2ae18d1dbd1e59c38...

Accel/Decel limits: https://github.com/commaai/openpilot/blob/a2ae18d1dbd1e59c38...

These control messages are then passed to the Panda over USB. Panda is a microcontroller that converts the control messages into CAN messages that the vehicle can understand. The Panda has the same rate-limit checks hardcoded into its firmware, and it will reject any control messages that are outside the limits.

Panda Safety code: https://github.com/commaai/openpilot/blob/a2ae18d1dbd1e59c38...

Finally, because Panda is sending the same CAN messages as the stock ADAS system, whatever safety mechanisms the OEM implemented apply to OpenPilot as well. Most cars have some form of torque caps and some form of rate limiting baked into the EPS firmware.
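To illustrate the second stage, here is a toy Python model of a firmware-side transmit check like the one described: every frame is re-validated before it reaches the bus, and anything outside the hardcoded limits is dropped rather than forwarded. The message ID, variable names, and limits are invented for the example and do not reflect the actual Panda code.

```python
# Toy model of a firmware transmit hook re-validating CAN frames.
# Message ID and limits below are invented for the example.

STEER_MSG_ID = 0x2E4
TORQUE_CAP = 1500
TORQUE_DELTA_CAP = 50
last_torque = 0

def tx_hook(msg_id, torque_cmd):
    """Return True to forward the frame to the car, False to drop it."""
    global last_torque
    if msg_id != STEER_MSG_ID:
        return True                      # non-steering traffic passes through
    if abs(torque_cmd) > TORQUE_CAP:
        return False                     # over the absolute torque cap
    if abs(torque_cmd - last_torque) > TORQUE_DELTA_CAP:
        return False                     # changing too fast to react to
    last_torque = torque_cmd
    return True

print(tx_hook(STEER_MSG_ID, 40))    # -> True
print(tx_hook(STEER_MSG_ID, 2000))  # -> False
```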


There’s the self-driving code and native layers that run on a bog-standard Android phone; then an Arduino-like microcontroller handles the interfacing, which is more or less realtime. Malformed, illegal, or out-of-range commands are filtered out if they occur.

They also use narrower limits than are commonly used, which is reportedly insufficient for tighter curves on some highways, but that was done in good faith I guess.


and the vehicle itself


I have an eon and love it for highway driving. It makes long trips relaxing. It's basically the combination of really good adaptive cruise control and active lane keep assist. And I feel it's safe within those limitations.

Open Pilot keeps me in my lane on the highway following a safe distance behind other cars. I have a Toyota with TSS 2.0, so the steering is powerful enough on curves. My only complaint about steering is that on tight turns like highway onramps it will slow down a little too much.

While I was driving my last car, I got in an accident: I was going down the highway at highway speed and everyone in my lane came to a complete stop. I'm convinced I would not have gotten into that accident had I been driving my current car with open pilot active.


Even without openpilot the TSS should have alerted you to brake.

On a recent long trip (24 hour drive) TSS 2.0 by itself made the driving stress free.

It seems like comma is extending the boundaries of TSS by a little bit beyond what Toyota/Honda can legally do at their scale.


True, I agree with that. You can still use stock TSS 2.0 instead of open pilot even with it set up in your car, and I tested out stock TSS 2.0 before I bought my eon. I'm familiar with how it works. My last car was more than 10 years old, and adaptive cruise control probably would have kept me safe.

Here's how I compare the two systems. Stock adaptive cruise control is decent and useful. Stock also allows you to control the follow distance. Openpilot doesn't allow you to adjust follow distance, but it's conservative. OP is really smooth, so I think it's better than stock.

Stock lane keep assist is mildly annoying at best. I find the alarms annoying. And stock active lane keep assist is weak. I wouldn't generally consider stock LKA to be worth using for me. OP LKA is really good. It's great at keeping you in a lane, and it's extremely smooth. OP LKA is something I really appreciate.

You can technically use OP ACC without OP LKA, but the combination of the two is greater than the sum of its parts. Driving with OP is a surprisingly good experience. I was showing it to a relative recently and she said she was surprised at how smooth and how comfortable she was with it active. She mentioned if she wasn't watching the steering wheel she would not have noticed that I was not actually steering.


The Comma One source code was put online (I think by the author?) and deemed to be incredibly dangerous. If I recall there was very little error handling. Does anyone know what has really changed since then?


Yeah, plenty has changed since then; there are limits that are checked twice to ensure the driver is always able to take control. If violated, the car or openpilot throws an error: openpilot.comma.ai


Source? (I assume you read the source code and know where these checks are.)



This was harder to find but this is where caps for steering are set in OpenPilot:

https://github.com/commaai/openpilot/blob/a2ae18d1dbd1e59c38...


Last but not least, here is brake and gas limiting:

https://github.com/commaai/openpilot/blob/a2ae18d1dbd1e59c38...


The tensorflow codebase? I forgot I read that...


Well, it didn't take me long to find a message about this on the frontpage that clearly says:

> Keep your eyes on the road.

> "While engaged, openpilot includes camera based driver monitoring that works both day and night to alert the driver when their eyes are not on the road ahead"

Also, given the deaths we have seen with other autonomous systems like Tesla's and Uber's, perhaps those are even more dangerous, since they are closed systems, unlike this one.


The Comma Two looks like a step in the right direction for making existing cars self driving with open source being a bonus with openpilot.

One can say it is like Tesla Autopilot for any car.

Comma’s products explicitly state that you still need to pay attention when driving with their system, which is almost impossible to ignore.

On a more meta note: I find the comments here ironic given this project fits within the spirit of HN and hacker culture but commenters find this project displeasing.

I agree their marketing is kind of vague and needs improvement, but I encourage HN to view videos of the system in action before jumping to conclusions.


If there is anything about Comma.ai that I have seen beyond the hype of self-driving cars, it is substance, unlike the rest of the AI self-driving car efforts (except Tesla). The idea of turning your existing car into a self-driving one rather than spending $$$ on a new one makes sense for those saving money.

Comma.ai is for cars what Linux is for PCs, meaning that you use open-source software and a hardware kit to make your existing car self-driving. Very clever!


When somebody installs Linux on their computer without knowing what they are doing, they might break their computer. When somebody tries to install aftermarket self-driving equipment in their car without knowing what they are doing, they could easily kill people.


That's really a bullshit comparison. It either works, or it does not. That's quite literally the same as oem systems. It either works, or throws an error and does not.


Yeah, and many people probably don’t want a car that’s “self driving” (or insert your definition of what OpenPilot does) to “throw an error” while going down the highway. That’s not safe for the car or the nearby drivers.


> It either works, or throws an error and does not.

It controls your throttle and steering so it appears there may be other errors that could occur.


You’re missing the huge “works incorrectly” space of outputs.


>The idea to turn your existing car into self-driving rather than spending $$$ on a new one makes sense for those saving money.

You sure this is where you want to save money on a self-driving system?

God forbid there's an accident, what then? There's no big company you can point at for recourse, and I guarantee that Comma.ai isn't going to stand by and accept responsibility; their website makes it clear that the dev kit does not ship with any self-driving software.

>Very clever!

Very dangerous!!


It's only a level 2 system, and with this hardware it won't ever be level 3.

If there's an accident, the driver has to be pretty stupid. It's predictable; you know where its strong and weak points are. It has weak points when the roads get tricky. Previously it would panic through an intersection, but this has since been improved upon.

Its eyes don't leave the road; people on their phones do. I wonder which causes more bumper-to-bumper collisions!

They probably won't accept responsibility, that's on the driver since openpilot is a driver assist system, not a higher level (L3+) self-driving system


Why is it dangerous? Self-driving software costs a lot to develop but nothing to manufacture, so its cost cannot be used as an indicator of its safety or sophistication at all.


Comma and Tesla do driver assist, neither are self-driving.


From Tesla's own promotional video[1]:

> The person in the driver’s seat is only there for legal reasons. He is not doing anything. The car is driving itself.

[1] https://www.tesla.com/videos/autopilot-self-driving-hardware...


That's not available to the public.


Someone should tell Tesla because they use that term all over their site, marketing, and tweets.


Followed by the words "in the future": https://www.tesla.com/autopilot Or maybe "hardware", since the software isn't there yet: https://www.tesla.com/blog/all-tesla-cars-being-produced-now...


What is this? The entire site already assumes you know what it does.


So, it appears to be a plug-in system to do automated cruise control and lane assist and driving in stop & go traffic.

Out of curiosity, I checked for my car. It says it's not supported. BUT, Comma is open source, so I could add it myself.

Does this imply they're encouraging anyone to write buggy code and drive their cars with it?!?


> Does this imply they're encouraging anyone to write buggy code and drive their cars with it?!?

The fact that the comma guys aren't locking it down out of a misguided paternalistic belief that they know better is a good thing. Don't blame them for providing the hardware; blame people if they write buggy code. This is analogous to blaming a firearms manufacturer for a murderer's crime.


No it would be like blaming a firearms manufacturer for a fault that causes the thing to blow up.


I found that after I wrote the comment and had the same question. Also, what does "adding it" look like? Do I have to somehow interface it with the car's computer or other systems? I'm rather confused.


Adding a vehicle is the equivalent of writing drivers (heh) for your particular car. It entails reverse engineering your car's built-in driver assist features to determine which CAN messages do things like actuate the brakes and gas, turn the wheel with a given amount of torque, or communicate radar values from the car's built-in radar sensor.

For many cars the CAN messages will be the same as similar models so porting a new vehicle just means adding tuning values that can be captured by driving with OpenPilot connected but not controlling the vehicle e.g.: https://github.com/commaai/openpilot/pull/866
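For a flavor of what that reverse engineering produces, here's a toy decoder for a captured frame. The message layout, byte offsets, and scale factor are made up, since real ones differ per manufacturer:

```python
# Toy decoder for a captured CAN frame. The layout, offsets, and scale
# factor are invented for the example; real definitions vary per car.
import struct

def decode_wheel_speed(frame: bytes) -> float:
    """Pretend bytes 0-1 hold a big-endian wheel speed in 0.01 km/h units."""
    (raw,) = struct.unpack_from(">H", frame, 0)
    return round(raw * 0.01, 2)

captured = bytes([0x27, 0x10, 0, 0, 0, 0, 0, 0])  # 0x2710 = 10000 raw
print(decode_wheel_speed(captured))  # -> 100.0
```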


Wait wait wait, are you telling me I can actuate the brakes and gas and turn the steering wheel via the CAN bus? So I can realize my dream of turning my car into an RC car just with a Raspberry Pi and a $4 dongle? That is the best news of the year!

Which cars have these sorts of actuators? How can I see if mine does?


Yup! https://github.com/commaai/openpilot-tools/raw/master/steer....

If your car is in this list:

https://github.com/commaai/openpilot#supported-cars

then it definitely works and you can use these tools to control it https://github.com/commaai/openpilot-tools

If it's not in that list, then the rough heuristic is whether or not your car has some sort of existing driver assist features, e.g. Adaptive Cruise Control, Lane Centering, Forward Collision Warning, etc. There are some exceptions to that rule (I think Hyundai ships some cars that have full control of steering and gas even if you don't opt for the ADAS features), but that's all case by case.

If you join the Comma Discord (https://discord.comma.ai/) there are manufacturer specific channels that can answer any questions you've got.


I love you, this is amazing and I am going to get right on it. Hopefully I won't crash my car in the process.


Remote control is a banned topic for safety reasons


That makes sense, thank you.


Please remind the admins that Discord does not support users that choose to keep their location private by using Tor.

This makes it a nonstarter for a lot of people.


Almost any car with drive by wire and/or modern safety features (lane assist, forward crash avoidance, etc) can be controlled (to an extent) via the CANBUS.

It’s different for (almost) every car, but you’d be amazed how easy and fun cars are to hack on, as long as you keep it off the streets.


It's definitely lots of fun, but I didn't know it was easy, I thought the CAN bus was read-only. I can't wait to discover what my car supports.


If it was read-only... how would you be able to 'clear a check engine light'...?
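Right, clearing the light is itself a write on the bus. For a concrete (and, unlike most control messages, standardized) example, this is roughly what an OBD-II "clear trouble codes" request frame looks like; actually transmitting it would of course require real CAN hardware, and this only builds the frame:

```python
# An OBD-II "clear diagnostic trouble codes" request: what a cheap dongle
# sends to turn off the check engine light. Service 0x04 wrapped in an
# ISO-TP single frame, broadcast on the standard request CAN ID 0x7DF.

OBD_REQUEST_ID = 0x7DF  # standard functional (broadcast) request ID

def clear_dtc_frame() -> bytes:
    # byte 0: ISO-TP single frame with payload length 1
    # byte 1: service 0x04 = clear DTCs and stored freeze-frame data
    # remaining bytes: padding to the fixed 8-byte CAN data length
    return bytes([0x01, 0x04, 0, 0, 0, 0, 0, 0])

print(clear_dtc_frame().hex())  # -> 0104000000000000
```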


Actually it's a horrible thing. Think of it: a terrorist could buy one of these cars, install a Raspberry Pi with internet access through mobile data, and then terrorize a whole town by running people over without being caught or killed.


Not really; all you need is a few other people in cars to block them in, problem solved. Or, if it's the US, for the police to use some of those ex-military surplus toys they have. That's not even getting into remote control being imprecise, so there's a good chance the car would just get stuck somewhere.


Why bother doing that when you can just drone-drop grenades into a crowd for much cheaper?


You can easily ban drones but not cars.


Wouldn't it be easier to ban grenades?


They highly discourage buggy code with no safety checks. I have been one of those people with no safety checks and have since implemented some.


Opensource AI driving kit to mount on your car.

Made by Geohot, jailbreaker of iPhone and PS3 fame

See : https://github.com/commaai/openpilot


I agree. It's like they are intentionally being opaque about what the device does.


Well, hey, for $1000 I’d almost buy this for the “Tesla experience” but alas it doesn’t support my car (2014 Lexus).

Well done for finding the right price point. You have a solid business case here.

Not commenting on safety like everyone else, because it seems from the ISO 26262 work (not easy to comply with) and the architecture that you've put some thought into this. It will be good to see it improve further. I understand that you need to sell units to do so, so I do not hold that against you.


Let's say I order this stuff and flash the firmware of my car.

If there was a bug in the comma ai program (like jerking the steering wheel and slamming my car into the guard rail without giving me time to react), I guess I am liable?

That sounds like a bad idea.


OpenPilot doesn't require you to flash anything to your car, everything is done using the same CAN messages that the stock ADAS system sends. Further, the microcontroller that communicates between OpenPilot and the vehicle is hardcoded to block any CAN messages OpenPilot sends that are deemed to be too fast to react to:

https://github.com/commaai/openpilot/blob/a2ae18d1dbd1e59c38...


I am not impressed with this source code. Maybe you can tell me why I am wrong.

The safety.h you pointed to is using "int" for variable declarations instead of using stdint types to control size. Is there a reason for that?

From safety.h, I tried to follow addr_allowed function parameters. I get here (https://github.com/commaai/openpilot/blob/c025b96e8a15640ee4...) and see this:

  int addr = GET_ADDR(to_send);
  int bus = GET_BUS(to_send);

Where are the GET_ADDR and GET_BUS macros declared? There are no header declarations to follow for dependencies.


GitHub > commaai/panda > Search > 'GET_BUS' >

panda/board/drivers/llcan.h


Yes and yes, but the alternative is you never enjoy self driving and the potential it has to reduce fatalities for the rest of your life.

It’s a trolley problem: this thing WILL KILL; the question is how much more/less often compared to humans.


They have limits in place so it never "jerks" the wheel. They are liable in the same way a manufacturer would be if your adaptive cruise control accelerated toward a stopped car.


Funny how people are usually piling on Hotz/Comma and lionizing Musk/Tesla... while their software development practices are probably not too dissimilar : https://twitter.com/atomicthumbs/status/1032939617404645376


I use the commaai kit for lane following every day. Just commuted to Portland: 3 hours of totally hands-free driving. It was magical.

Would never go back to manual highway driving.


The related articles from this website suggest this is a product for automated driving. It's still unclear to me what exactly this product does.


This website is so self-obsessed it can't just tell you what the hell the product is.


Serious question: is 80K-130K for a data science position competitive in the San Diego area? Feels a bit low to me but I live in NYC.

https://comma.ai/jobs


Considering the people they're explicitly looking for, according to that page:

> People who have done well at math competitions (USAMO, IMO, PUTNAM), competition programming (ACM, USACO, codejam, topcoder), science fairs (ISEF, STS), or capture the flag (DEFCON, secuinside, GITS). Those competitions don't just select for ability, they also select for quickness. We are in a very competitive space.

...it's absurdly low. Companies like Google hand out close to $200k total, liquid compensation to new grads who haven't placed in any of those competitions. The people who have ranked in any of those (especially the math ones, and doubly so the Putnam) can easily write their ticket to a job paying double $130k right out of college.

Anyone with that kind of competitive math/programming experience and real world machine learning engineering experience could earn triple that range if they wanted to. That's a ridiculously small and competitive set of candidates to be targeting. It's also not necessary, because strong performance on the e.g. IMO doesn't a priori map to outperformance, on a per dollar basis, writing autonomous driving logic.

Basically: no it's not competitive for San Diego, Comma is asking for wildly overqualified people to sacrifice significant amounts of money to work there, and it's not clear they should be using those kinds of qualifications as a filtering criteria in the first place.

This kind of cargo culting does not inspire confidence in their recruiting.


It's low.


I think Comma is generally pretty good. The only two downsides are:

1. Some car manufacturers will limit the torque applied to the steering wheel, so Comma can only perform small "drifts" rather than full turns.

2. The cooling fan is pretty loud


Who would willingly buy this and install it in their car? Someone who wants to die?


Well, I think Comma gives Tesla tough competition, given the features it offers compared to Tesla's Autopilot. There are a lot of people who still cannot afford a Tesla (and its $7K package for full self-driving). Imagine even 20% of the Comma AI and openpilot supported cars actually using this hardware and software in the next 6 months; that is way more than the number of Teslas on the market. It also avoids the necessity of owning a Tesla.

Coming to safety, I believe openpilot with Comma makes a driver better and safer, provided the driver stays attentive. I wouldn't drive a Tesla without being attentive even if it offered Level 4 autonomy, because we always need to keep an eye on the road and pitch in when needed; this involves people's lives, and there is no system on the market that is 100% perfect.


Found a video on YouTube that demonstrates the system a little bit: https://www.youtube.com/watch?time_continue=361&v=GODp6q4Sac...

Pretty cool stuff.


I never heard about them. And with the limited information their site provides my initial impression is that this looks incredibly dangerous.


Jump on YouTube; there are plenty of timelapses showing its reliability. The predictability of the system partly removes the danger factor, and with the driver monitoring system in place, it becomes very difficult to not pay attention. I'd rather have this driving a car on highways than most people I know.


> On stage, I asked Hotz about safety issues and concerns, but he expressed confidence that Comma One wasn’t doing anything existing technology on the market doesn’t already offer.

This is a nonsensical reply. It raises the question: what is it that Comma does not do that every proper self-driving or driver-assist solution does?


What is "OBD-C"? Did they appropriate the USB-C connector for an OBD connection for some reason?


Is this still a Geohot project?


Yes.


too bad


Why?


I always disliked geohot's ambition.



Any more info on the hardware used? I'm kinda curious what the new phone and the infrared camera mentioned is.



Not a word of explanation on the front page explaining what it is.


If the obtuseness is to deter regulators, it's a good thing nobody at any regulatory body reads Hacker News. Right? Because otherwise frontpaging HN would defeat the entire purpose of trying to fly under regulatory radar.


I clicked around for a while and it took me forever to figure out what this was. For the longest time I thought it was a dash cam. What's an integrated panda? Why do I care about a custom cooling solution?


>"Make driving chill"

Yeah, nah. Driving isn't meant to be "chill". Driving "chill" is how collisions occur. Fuck this product and fuck the SV cocksucks making it.


There are so many reasons this shouldn't make it to the top of the front page. But here we are...


They should move away from using Python in a system like this. There's a non trivial amount of Python code (e.g. [0]) in the repo. The lack of type-safety is just the start of the issues with using it in a project like this.

[0] https://github.com/commaai/openpilot/tree/aa365e0d48ba29fd44...


George addressed that on the Lex Fridman podcast 5 months ago, briefly mentioning type checking as one of the reasons. I think he said they're moving to Go or C.



