Quick bit of feedback for any Comma employees lurking:
It took me a very long time to realize that this was a self-driving solution for existing vehicles. The hero just says
"Driven over 6 million miles" and then a "Buy Now".
When I scrolled down and looked at the product and read:
> We’ve added an infrared camera to provide 24/7 driver monitoring, an integrated panda, a custom cooling solution, and an OBD-C port with power supplied directly from the car.
I thought it was some kind of dash cam + driver sleep monitor? Also, most of those words don't mean anything to me.
Only after deeply investigating it did I realize that it's basically Tesla Autopilot for existing vehicles.
You have an awesome value prop though: "Bring autopilot features to your car." Make that clear!
I think this is deliberate. It's also why you have to download and install the self driving software separately. AFAIK regulators stepped in and killed comma as a self driving solution, so this is their way of still going forward and trying to maintain plausible deniability. It might not pass muster in a court of law, I don't know, but it should probably keep the regulators off their back.
At risk of being snarky, I'm not sure 'engineering company' is the right moniker. They're an R&D shop making bleeding edge research available on a homebrew platform that can interface with recent vehicles to provide a level 2/3 driver assistance package.
The stuff required to pass the regulators is the long, slow, boring process of pushing reliability and failure modes into "safe" territory, while documenting the fact. This is the 'engineering' part. Making a system that works (probably, most of the time) isn't engineering, it's hacking.
They are hackers. Geohot, the founder, did iPhone and PlayStation jailbreaking in the past. I wouldn't buy any safety-critical equipment from them.
Their motto could literally be: go fast and break things.
Regulatory capture is a thing. Regulators, at least in part, exist to maintain a barrier to entry for any new players trying to enter a field dominated by large players.
Several major tech companies only exist because they broke rules in their early days. And most people agree the outcome was a net benefit to society.
Next time you drive, look around you at some of the other drivers. The ones with cell phones in their hands are more dangerous than someone with a comma.ai dev kit.
Fun story: I wanted to build a hypermiling coach/trainer system about 10 years ago and got to the point of having a poor man's ACC in a Scion. It didn’t brake though, only coasted to lose speed (thus requiring driver input). So I got the wild idea of tapping into the electronic brake distribution system to apply brake pressure when I needed to slow down. I figured out the system and was ready to tap into it when I had a dream that night. I was on the highway and all of a sudden two wheels decided to lock up from braking too hard. I stopped all testing on that car on public roadways after that.
My current car is hacked to provide lane centering when lane assist is on, but it’s still operating within the bounds of the factory electronic steering (so it can’t self drive/make 90 degree turns)
As are people with 15 year old rusted hoopties, drivers who had a fight with their significant other, disagreement with a colleague or whose sports team just lost, drivers who slept poorly the last few nights, drivers who are ill and taking cold medicine, cars with lamps out, worn tires, worn ball joints, bad shocks, no ABS, no vehicle stability control, etc.
I’m okay with enthusiasts repairing and modifying their cars and driving them on public roads, because I’d rather have that than a world where only certified cars, maintained at certified dealers, using only certified parts are permitted on the roads.
Maybe you're not a "car person". Apply the same principles to whatever field you have passion for. Only certified Intel parts may be added by Intel technicians to Intel computers, which must run only certified operating systems. Or only certified Edison lightbulbs may be added to certified Edison light fixtures by certified Edison technicians. Only certified firmware may be run on devices you're being allowed to rent from the certified manufacturer.
Do you have evidence to support the hypothesis that cars equipped with aftermarket equipment are in general or with this specific system are more dangerous? Do you think such evidence should exist/be considered prior to banning a potential device?
The reason I ask is because there is a general topic of whether we should preemptively block new technology based on hypotheses, whether we should seek supporting evidence, or whether we should wait for confirming data before blocking freedoms to experiment. Said differently, do we ban everything until “proven” safe, allow everything until “proven” unsafe, or something in middle.
Experimentation using heavy equipment that has been modified with unlicensed and untested hardware and software designed to radically change its behaviour, in close proximity to people who haven’t consented to the experiment, should be illegal.
> but it’s still operating within the bounds of the factory electronic steering
That's Comma AI's approach. Their hardware spoofs the stock computer so it can send commands to the car's active safety systems. So there are some stock limitations, like the steering torque, that openpilot doesn't bypass.
That's true, but how much oversight is really applied to the existing software systems on modern vehicles? Did national regulators review the design and coding on the electronic stability controls? I doubt it. Is the CAN bridge between the entertainment system and the critical systems bus (with ABS, ESC, power steering etc.) suitably hardened to protect against hackers (hint: no.) Who has reviewed the electrically assisted power steering?
Most things in a modern car are controlled by software that's far more opaque than OpenPilot. Don't get me wrong, a lot of their software and hardware architecture choices scare the heck out of me, but at least it's transparent.
You're already driving on roads shared with garbage solutions like Tesla's '''autopilot'''. This thing is no different, other than its creators not being able to afford the regulatory bribe to deploy it.
It's a completely different product with a much wider number of configurations that end-users have to piece together from Wiki pages and packages they download themselves, loaded onto hardware they've installed in their cars with zero oversight from anyone qualified.
> end-users have to piece together from Wiki pages and packages they download themselves
Which makes it a ton safer than '''autopilot''' because by the time those users are done installing it they have a pretty clear understanding of its capabilities and limitations.
vs. the endless videos of morons driving tesla in '''autopilot''' mode with hands off the wheel because it's an incomprehensible black box of magical self-driving capability for them.
The regulator hasn't stopped tesla autopilot from causing accidents. It's just a competitive moat.
And yet all of those resources have amounted to what all regulators agree is a SAE level 2 system, only qualified for hands-on lane-keeping assist, exactly like you'll find on a modern Mazda or any other number of cars without any of that hardware or ML PHDs.
Sort of proving the point that the resource pool of the incumbent doesn't necessarily translate to the value of the product.
Comma is bragging about 6 million miles driven. Tesla's Autopilot has over 2 billion miles driven. This has two cameras (EDIT: and apparently 1 radar that already exists in the vehicle). Tesla's has 12 cameras, 12 sonars, and 1 radar. This is backed by a company that apparently has received $8 million in funding. Autopilot is backed by a company with a market cap of $84 billion. You are welcome to your opinion of Tesla's product, but if that is garbage, I can't imagine the word you should use to describe this, because the two products are in no way comparable.
You're listing specs which are a proxy for how much money the incumbent has, and saying that the regulatory barrier should be set at those specs for some reason.
None of those specs give any indication to the relative safety of the two systems though. And this is exactly what incumbents want. They want a small player to have to match their massive cash holdings to enter their market, even if the small player has a better product.
Here's some food for thought: Tesla has been slapped with multiple lawsuits for misleading advertising of Autopilot. Regulators all agree that Autopilot is just an SAE level 2 system, same as those found on much cheaper cars with far less hardware and none of the marketing oversell.
I listed miles driven. That is a direct indicator of safety.
I also listed the number and type of inputs each system has. This might correlate with money, but it clearly also correlates with safety. One simple example, it is impossible for the Comma system to have 360 degree visual coverage of what is around you with just two cameras in the locations they are in.
Tesla does not use 12 cameras for Autopilot. It just uses radar and 1 or 2 of the front-facing cameras depending on the version. Not really different from openpilot.
You're making a lot of really wild assumptions about what correlates with safety without a shred of evidence to back it up.
Humans have two eyes, a pretty narrow field of view, no radar, no sonar. By your logic they're even worse, yet they're so much better than Autopilot that it can't come close to matching them.
>You're making a lot of really wild assumptions... without a shred of evidence to back it up.
>Humans have two eyes... yet they're so much better than Autopilot that it can't come close to matching them.
These two statements are pretty ironic back to back.
I'm not sure if we are ever going to convince each other of anything if we can't agree that there is a clear difference in track record when one product has been used for 2 billion miles and the other for 6 million.
You are already sharing the road with people who are constantly distracted, drunk, or otherwise impaired. I would say the roads would be a much better place if more people had a computer driving them.
On one hand yes, on the other the company wouldn't exist. Regulators are important, but they also serve incumbent interests as they raise the barrier to entry for newcomers. It's a way to pull the ladder up behind you once you've achieved success. I don't know how we can better balance those seemingly conflicting aspects.
Aaaaand that's why Americans need to pay 500 bucks for an insulin injector pen while Indians pay 10 bucks for the same device.
The problem with regulatory capture and arguing it's a good thing, is that there's a downside/cost to the barrier it creates. And often that downside outweighs the upside.
To be clear, no one is arguing regulation shouldn't exist. Just that the nature of regulation is to be captured by incumbents and serve a purpose orthogonal to protecting the people.
I didn't say regulatory capture is a good thing. I said that barriers to entry can be good in certain instances. In the instance of medical devices, the problem in the US has nothing to do with barriers to entry. If you can drive across the border to Canada and buy an identical product for a fraction of the price it would cost in the US, that isn't because the barrier to entry is too high in the US. The product already exists and is being produced, it just costs more due to a broken healthcare industry.
No, Americans are paying 500 bucks for a pen precisely because of regulatory capture. None of the manufacturers of the cheaper version have been successful in jumping the regulatory barrier to entry which the incumbents lobbied for, and tens of thousands of Americans suffer for it each year.
I have a hard time believing e.g. most European countries have less regulation in the medical sector than the US. Probably less corruption/"lobbying" (because corporate campaign contributions are more strictly regulated in most other countries), but corruption is quite different from regulation, even if both affect how law gets written.
Many times it’s hard to distinguish the difference between regulation and corruption. Sometimes they’re one disguised as the other. Not sure what method you have for distinguishing between the two, so please share!
I'd say they exist on different levels and shouldn't be conflated. Corruption is the undemocratic alternative to a democratic decision making process. Both affect regulation, e.g. law.
> Aaaaand that's why Americans need to pay 500 bucks for an insulin injector pen while Indians pay 10 bucks for the same device.
I can't speak for insulin injectors, but I know for a fact that the reason Indian pharmaceuticals are often massively cheaper is that they ignore patents by which the majority of the world abides.
Now in many cases, the patent system is set up for incumbents who have enough legal muscle to develop and patent isomers, metabolites, or "extended release" versions of successful drugs which are losing their patent privileges. But that's not the whole story. We know that some patents and copyrights are needed to encourage investment in R&D.
India (used to?) just blindly copy drugs, ignore paying royalties, and take the profits. I haven't ever been on any blockbuster meds developed natively in India. Have you?
What's with the nationalism? Every country has benefits and problems for its citizens, and Australia is no different: just look at the literal scorched earth outside.
Taking another angle, when it comes to a product that could so easily save a net of 10s of thousands of Americans every year once fully developed, barriers to entry might be counter-productive.
(I’m actually fairly bearish on level 5 self-driving being “close”, but if it was, speeding that up would almost surely be worth throwing the switch on the trolley tracks for both the lives saved and the reduction in wasted attention of those who would have otherwise used a lot of their lifetime driving.)
I can never really understand this argument. Why can't we carefully craft regulations to scale in their burden with the company's ability to bear said burden in some way? Scale with revenue or head count numbers? Similar to how income at the lowest bracket isn't taxed, to lessen the burden on people much less able to pay.
Because (a) then you'd end up with things like pacemakers designed by sole operators because their regulatory burden would be that much lower than what a biomedical engineering firm would face, and (b) any such system would be trivial to game anyway.
We can, but usually it's the established incumbents who are helping draft the language of the legislation (which is better than politicians with no expertise working alone), and they are not incentivized to encourage the kind of regulation you propose.
Count me in as well. I came here to make that comment.
I've scanned the entire website and still cannot find a reference to a simple term like "self-driving" anywhere on it. It seemed like a dashcam that warns the driver.
While the website doesn't directly state it, the entire purpose of the Comma hardware is to run the OpenPilot self-driving software (or "driver assistance" software, whatever you want to call it). Sure, you can run other software with it, but it works really well with the software that has already been written for it and is available on the same website.
The FAQ clears some of this up (https://comma.ai/faq), but I agree with the idea that it is purposefully obtuse to keep regulators off their back.
I think that's sort of by design. The NHTSA shut down their ability to sell in the US a few years ago. This seems to be some sort of stealth method to sell it again using the same sort of trick pharmaceutical companies use where they don't have to tell you any warnings if they don't actually tell you what the thing does.
> Hotz tweeted from the official Comma.ai account that rather than providing the requisite response, the company would instead be cancelling Comma One entirely, and turning its attention to “other products and markets,” since Hotz says that the prospect of a life “dealing with regulators and lawyers… isn’t worth it.”
Their tech is within the top two right now, and Hotz has demonstrated a deep responsibility with it. Plus, it's all libre software.
Skipping regulators seems fine, given (unlike their competitors) they're acting responsibly. Their eye-tracking tech is really cool, too, and prevents the Tesla problem of drivers losing focus.
how can anyone make that statement in the same breath as declaring that dodging regulators is fine? Who the hell evaluates the quality of the product if not regulators?
Are we supposed to take George Hotz's or the companies word for it?
I am a Comma user, engineer, and autonomous vehicle enthusiast. Comma truly is second only to Tesla if we leave out Waymo, but Waymo is only operating in geofenced areas.
I have used every single pilot assist out there and they're all, quite frankly, terrible in comparison. I will give GM Super Cruise a nod for how well it's done, but again, it's geofenced.
You can use it; you don't have to take Hotz's word, or the company's word for it. How everything works is completely open, and it works pretty well. Also, see:
The people who are consumers of these products are not necessarily software engineers, so no, everyone cannot evaluate the quality of these products. Secondly, the models are not open source, and even if they were, looking at them tells you nothing about the quality of the product.
The only way to actually assess the quality of these products is through independent testing by a body that is qualified and reputable enough to do so. Which, in my opinion, should happen before even a single one of these devices is allowed to participate in regular traffic.
This Lex Fridman interview is pretty explanatory on the topic of responsibility.
A couple of highlights, which point to Comma being at minimum more responsible than Tesla: Comma doesn't advertise itself as anything beyond L2, unlike Tesla; Comma doesn't fuck up eye-detection, it refuses to cooperate with a driver who has their eyes off the road.
As I understand it, "the NHTSA shut down" is a serious approximation of the story. They marketed aftermarket self-driving kits while not getting any regulatory approval, and when the NHTSA sent them a letter saying "So, uh, what's this thing you're selling? Can we meet?" they immediately chose to shut themselves down instead of meeting.
I only learned what this is from your comment, after coming back from the site having spent minutes trying to find out. For a couple of seconds all I saw was the "Buy now" button and I thought "Buy what the fuck?" Really annoying design.
Came here to make this comment! Seems like a cool product - but most visitors to the webpage will never learn what the product does, since it is buried in the page.
It's also definitely not for everyone who figures out what it is. They should be up front about what it does and the legality around using a device in that capacity.
I still feel like they've taken it a level too far. I already knew comma.ai from past news, and even with that context I wasn't sure if they pivoted to fancy dashcams.
I literally thought I was looking at some funky games console at first... but which you mount on your dashboard for some reason? And even after reading the entire website I didn't get that it was meant for autopilot; I only got that after reading your comment.
I completely agree. I'd never heard of it before; after a few minutes of looking at the website and some YouTube videos, I realized it was a self-driving thing you can install on stock vehicles that support it.
Comma.ai represents a strange attitude towards life in which you try to turn hard, serious problems (like self-driving) into trivial problems (write some Python and run it on Android phones and plug it into your car!) and try to get away with it. It only has a chance of working if you are cool enough in SV, such as geohot. I interviewed there once and it was extremely off-putting how clear their demand for fast progress was, and how little they cared about safety.
It's move fast and break things all over again.
Think about how the tech community rips apart Boeing and demands the utmost in quality engineering, reliability, redundancy, testing. And then we have comma, which controls your vehicle on a non-realtime system...
If you ran comma.ai in your car and had a serious crash, you could possibly be found criminally negligent.
> And then we have comma, which controls your vehicle on a non-realtime system...
Lol. This is what I found extremely shocking and is the first thing that I noticed. I don't think anyone serious about embedded design would touch it. They are trying to ignore decades of experience in real time systems and build some hacky stuff and make it open source.
I don't understand how you'd do hard real-time when one of the major inputs, the neural-networks, aren't deterministic? In a hard real-time system, what happens if your neural-networks are late telling you where the lanes are?
In reality the comma-AI isn't controlling the car, a number of peripherals on the canbus are controlling the car. All those peripherals are hard-real time, and the comma-AI is bridging the non-deterministic neural networks and those hard-real-time peripherals.
At some point you need to deal with the non-deterministic neural networks, and bridge them into a hard real-time control system. People complaining that part of the system isn't real-time just seems like they haven't thought about it very much. Of course the UI and neural networks aren't going to be hard real-time.
Neural networks are very much real-time. They generally consist of a fixed series of math operations of fixed size. Only the weights and inputs change.
The neural net will never be late. It just might be inaccurate.
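To illustrate the "never late, maybe inaccurate" point with a toy sketch (plain Python, not openpilot's actual network): a feed-forward pass is a fixed sequence of matrix-vector products, so the operation count, and hence the worst-case latency, is the same for every input frame, whatever it contains.

```python
def forward(x, weights):
    # Fixed sequence of matrix-vector products with ReLU: the same
    # number of multiplies and adds runs for every input, so latency
    # is bounded regardless of what's in the "frame".
    for W in weights:
        x = [max(sum(w * v for w, v in zip(row, x)), 0.0) for row in W]
    return x

# Two toy 4-dim "frames": contents differ, work done does not.
W1 = [[0.5, -0.2, 0.1, 0.3]] * 3   # layer 1: 4 inputs -> 3 units
W2 = [[0.1, 0.4, -0.6]] * 2        # layer 2: 3 inputs -> 2 units
empty_road = [0.0, 0.0, 0.0, 0.0]
busy_road = [1.0, -2.0, 3.0, 0.5]
out_a = forward(empty_road, [W1, W2])  # zeros in -> zeros out
out_b = forward(busy_road, [W1, W2])
```

Only the output values change between frames; the control flow (and thus the timing) does not, which is what makes such a model schedulable with a fixed deadline.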
Huh, I suppose they aren't inherently incompatible with real-time constraints. It looks like it's pretty challenging to do in practice though.
I'm still somewhat confused as to how multi-object recognition works in a real-time way, naively I'd presume that recognizing n+1 objects in a frame would take longer than n objects.
You might be able to more practically get real-time neural networks using something like YOLO (you only look once)?
Either way it seems to me like hard real-time neural networks are a very challenging problem.
> If you ran comma.ai in your car and had a serious crash, you could possibly be found criminally negligent.
I don't know that it's much different than using cruise-control and then having a serious crash. If there's criminal negligence when using cruise control, it's because you weren't aware-enough to shut it off when the conditions cruise-control is built for no longer applied; the mere fact of using cruise-control isn't criminally negligent, as long as you're "just waiting for it to slip up so you can take the wheel."
For that matter, the same is true of teaching a teenager to drive. There's no negligence in having them at the wheel if you're ready to supersede their bad driving at all times.
What's suddenly different, if the "thing that doesn't really know how to drive yet, but which you're ready to supersede the bad driving of" is an AI instead of a teenager?
You can get a learners permit to teach your teenager completely legally. You couldn't do the same with a 10 year old...
If a serious crash involving comma.ai went to trial, it would look like the driver was using unapproved, unregulated software, unnecessarily, putting others at risk. Recent Autopilot incidents show us that expecting a human to take over within a second is not reasonable. I would be very worried that the court would indeed find the driver negligent.
> I would be very worried that the court would indeed find the driver negligent.
I use openpilot on the highway, and I love it. I had to physically tap into the CAN bus for my car's safety system, which made me very nervous. But the installation is completely reversible, and I'm much more comfortable with its limitations. I just treat it as a really great adaptive cruise control and active lane keep assist.
But that's all it is. I would be completely responsible in most situations if my car caused an accident while open pilot was active. Open pilot's design has it immediately drop all control of the car if you touch the brake or gas, it does not aggressively follow cars, and the driver is still required to control the car. So if I start to become uncomfortable I need to immediately take over.
Assuming that openpilot will always act within its safety guidelines doesn't mean that it's safe. Toyota has paid tens of millions in fines and settled multiple lawsuits over unintended acceleration. I believe that using openpilot on the highway makes me a safer driver. But I do understand that if I did get in an accident, proving I'm not negligent might be extremely difficult.
I agree with you, liability is a very important part of self driving functionality. I won't think a car has full self driving capability until I can get in the back of that car, go to sleep, have it drive me somewhere, and hold no legal or financial responsibility for any accidents that occur. Openpilot is nowhere close to that, but I do think it's really good for what it's meant to do.
What if the crash is caused by the device failing? In the case of your car's built in systems, they've been approved by regulators and you probably wouldn't be held liable. If it was an unregulated device that you installed into your own car that caused the crash, I bet the legal circumstances would be different.
Yeah, I really enjoyed this interview. It made me respect him much more and realize that he actually is concerned about safety, specifically more than Tesla (especially when it comes to driver monitoring).
Let's be honest, there is not a single driving assistant that isn't dangerous. Why care about "realtime" if your model tries to kill you once per minute? Realtime is one of the final touches you need to make your software perfect. That's just how the industry works, and if you want to keep up with them you have to go the same route, or do you know of any exception? Most manufacturers tell you that you should closely watch what your car is doing on autopilot, and they limit its torque and acceleration, so the human driver is the reliability, redundancy, testing, and realtime system.
If you'd like to know more specifically about the comma.ai saga, regarding safety and regulation in the U.S., here's some background, previously covered on HN:
> Currently, openpilot performs the functions of Adaptive Cruise Control (ACC) and Automated Lane Centering (ALC) for compatible vehicles. It performs similarly to Tesla Autopilot and GM Super Cruise. openpilot can steer, accelerate, and brake automatically for other vehicles within its lane.
If the car can't drive without you, then it's not self-driving. Those features do not allow the car to drive by itself. It can't go anywhere without a licensed, attentive driver. This is so thoroughly not complicated.
Since the website is not clear, here is what it is.
Comma Two is an aftermarket dash cam that adds features like adaptive cruise control, automatic lane centering, and forward collision warning to cars using open source software.
It uses a combination of the dash cam and the vehicle’s built in radar to accomplish this. It is able to control the car and receive its radar data by connecting to the vehicle’s OBD port.
The unit is $1000 and also requires the purchase of a vehicle specific harness for $200.
The source code that performs the driver assist functionality has to be loaded by the user and is available here:
https://github.com/commaai/openpilot
There’s also a “prime” subscription which pays for cell service on an included SIM card and allows you to see video footage or location data from your vehicle remotely.
I was very skeptical too. However, after using their system almost every day for a year, I can deffo say it makes me a safer highway driver.
I have overall more awareness of the road since I don't always have to keep my eyes on the lane in front. I look around the car more and check the side and rear mirrors more. Once this asshole driver was zipping left and right, overtaking, and was gonna squeeze through us; I could spot him/her from a distance and gave more space. If I hadn't, he would deffo have crashed into us. Granted, half an hour later he/she did get into a crash.
Another time, car in front suddenly brakes, system immediately alerted and started braking. Even If I was manually driving, I couldn’t have had the same reaction time. Car ahead crashed, ours didn’t.
Some humans are very dangerous highway drivers since they don't maintain distance with the car in front.
This is a testament to both comma.ai and Toyota. I can set a 2-seconds-ish gap to the car in front and it always maintains that safe distance. Really neat.
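The time-gap idea above is simple arithmetic: a fixed gap in seconds means the following distance automatically scales with speed (a toy sketch, not comma.ai's actual controller).

```python
def follow_distance_m(speed_mps, time_gap_s=2.0):
    # A fixed time gap means following distance scales with speed:
    # distance = speed * gap, so faster travel yields a longer buffer.
    return speed_mps * time_gap_s

# At 30 m/s (~67 mph) a 2 s gap is 60 m; at 15 m/s it's 30 m.
print(follow_distance_m(30.0))
print(follow_distance_m(15.0))
```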
Also comma.ai's driver monitoring has gotten quite good. Move your eyes off the road for a bit and it yells at you with an alert to touch the steering wheel and get your eyes back on the road. That alone makes you more attentive.
All I can say is give it a shot; it's quite an ingenious solution for giving other brands of cars highway lane following.
I'm not gonna say self-driving, because essentially it's a driver monitoring + alert system + lane following system. Very narrow, but it does a good job at it.
"comma two is designed for permanent installation in your car."
Okay, what's it do?
"We’ve added an infrared camera to provide 24/7 driver monitoring, an integrated panda, a custom cooling solution, and an OBD-C port with power supplied directly from the car."
Great, so it monitors cars... For what? Quick movements? What's a panda? I'd be interested if I knew what it was.
"comes with three free months of comma prime"
So the $1000 onboard computer that does something comes with a free* subscription.
They need a total marketing rework, can't imagine the sales they've already lost.
Can anyone here comment on actually using a Comma product?
I don’t even drive a car, but I don’t see why anyone would buy something like this yet. My point being that it’s not full self driving, and frankly its safety is entirely questionable.
Self driving is supposed to make driving less stressful, but does having a system like this actually do that? Or does it make you more stressed because you have to be conscious of the computer?
Don’t get me wrong, I like what George Hotz is trying here, but what incentive is there to be an early adopter?
I find driving much less fatiguing when using my Eon (the previous iteration of this product). Taking yourself out of the feedback loop of constantly adjusting steering (and gas if in traffic) makes the experience much more relaxing even though you still have to pay full attention. In my experience, OpenPilot is really rock solid in normal highway driving scenarios and there isn't anything stressful about using it.
> The driver must always be capable to immediately retake manual control of the vehicle, by stepping on either pedal or by pressing the cancel button.
> The vehicle must not alter its trajectory too quickly for the driver to safely react. This means that while the system is engaged, the actuators are constrained to operate within reasonable limits.
Those checks are separately coded into both OpenPilot, the self-driving software, and Panda, the microcontroller responsible for communicating between OpenPilot and the vehicle.
OpenPilot generates all of the control messages to send to the vehicle based on feedback from the car's sensors and its own camera. It has rate limits for all control messages it generates so that it can't jerk the steering wheel or slam on the brakes (as of now it leaves collision avoidance up to the stock system).
These control messages are then passed to the Panda over USB. The Panda is a microcontroller that converts the control messages into CAN messages the vehicle can understand. It has the same rate-limit checks hardcoded into its firmware and will reject any control messages outside those limits.
Finally, because the Panda sends the same CAN messages as the stock ADAS system, whatever safety mechanisms the OEM implemented apply to OpenPilot as well. Most cars have some form of torque caps and some form of rate limiting baked into the EPS firmware.
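The rate-limit checks described above can be sketched roughly like this (hypothetical numbers and function names; this is not actual openpilot or Panda code):

```python
# Hedged sketch of a double-checked actuator limit: an absolute torque cap
# plus a rate cap, so the command can never move faster than a driver can
# counter. The limit values here are made-up illustrative numbers.
def limit_steer_torque(requested: int, last: int,
                       max_torque: int = 300, max_rate: int = 50) -> int:
    # Absolute cap: never command more torque than a driver can override.
    capped = max(-max_torque, min(max_torque, requested))
    # Rate cap: never change the command faster than the driver can react.
    return max(last - max_rate, min(last + max_rate, capped))
```

Running the same check both in the openpilot process and in the Panda firmware means a bug in one layer can't bypass the other.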
There’s the self-driving code and native layers that run on a bog-standard Android phone, then an Arduino-like microcontroller handles the interfacing, which is more or less realtime. Malformed, illegal, or out-of-range commands are trimmed out if they occur.
They also use narrower limits than is common, which is reportedly insufficient for tighter curves on some highways, but that is done in good faith I guess.
I have an eon and love it for highway driving. It makes long trips relaxing. It's basically the combination of really good adaptive cruise control and active lane keep assist. And I feel it's safe within those limitations.
Open Pilot keeps me in my lane on the highway following a safe distance behind other cars. I have a Toyota with TSS 2.0, so the steering is powerful enough on curves. My only complaint about steering is that on tight turns like highway onramps it will slow down a little too much.
In my last car, I got into an accident while driving down the highway at highway speeds when everyone in my lane came to a complete stop. I'm convinced I would not have gotten into that accident if I had been driving my current car with openpilot active.
True, I agree with that. You can still use stock TSS 2.0 instead of open pilot even with it set up in your car, and I tested out stock TSS 2.0 before I bought my eon. I'm familiar with how it works. My last car was more than 10 years old, and adaptive cruise control probably would have kept me safe.
Here's how I compare the two systems. Stock adaptive cruise control is decent and useful. Stock also allows you to control the follow distance. Openpilot doesn't allow you to adjust follow distance, but it's conservative. OP is really smooth, so I think it's better than stock.
Stock lane keep assist is mildly annoying at best. I find the alarms annoying. And stock active lane keep assist is weak. I wouldn't generally consider stock LKA to be worth using for me. OP LKA is really good. It's great at keeping you in a lane, and it's extremely smooth. OP LKA is something I really appreciate.
You can technically use OP ACC without OP LKA, but the combination of the two is greater than the sum of its parts. Driving with OP is a surprisingly good experience. I was showing it to a relative recently and she said she was surprised at how smooth and how comfortable she was with it active. She mentioned if she wasn't watching the steering wheel she would not have noticed that I was not actually steering.
The Comma One source code was put online (I think by the author?) and deemed to be incredibly dangerous. If I recall there was very little error handling. Does anyone know what has really changed since then?
Yeah, plenty has changed since then. There are limits that are checked in two places to ensure the driver is always able to take control. If they're violated, the car or openpilot throws an error.
openpilot.comma.ai
Well, it didn't take me long to find a message about this on the frontpage that clearly says:
> Keep your eyes on the road.
> While engaged, openpilot includes camera based driver monitoring that works both day and night to alert the driver when their eyes are not on the road ahead.
Also, given the deaths we've seen around other autonomous systems like Tesla's and Uber's, perhaps closed systems are even more dangerous than an open one like this.
The Comma Two looks like a step in the right direction for making existing cars self driving with open source being a bonus with openpilot.
One can say it is like Tesla Autopilot for any car.
Comma’s products explicitly state you still need to pay attention when driving with their system, which is hard to ignore.
On a more meta note: I find the comments here ironic given this project fits within the spirit of HN and hacker culture but commenters find this project displeasing.
I agree their marketing is kind of vague and needs improvement, but I encourage HN to view videos of the system in action before jumping to conclusions.
If there is anything I have seen in Comma.ai beyond the hype of self-driving cars, it's substance, unlike the rest of the AI self-driving car crowd (except Tesla). The idea of turning your existing car into a self-driving one rather than spending $$$ on a new one makes sense for those saving money.
Comma.ai is for cars what Linux is for PCs, meaning that you use open-source software and a hardware kit to make your existing car self-driving. Very clever!
When somebody installs Linux on their computer without knowing what they are doing, they might break their computer. When somebody installs after-market self-driving equipment in their car without knowing what they are doing, they could easily kill people.
That's really a bullshit comparison. It either works, or it does not. That's quite literally the same as OEM systems: it either works, or it throws an error and does not.
Many people probably don’t want a car that’s “self driving” (or insert your definition of what OpenPilot does) to “throw an error” while going down the highway. That’s not safe for the car or the nearby drivers.
>The idea to turn your existing car into self-driving rather than spending $$$ on a new one makes sense for those saving money.
You sure this is where you want to save money on a self-driving system?
God forbid there's an accident, what then? There's no big company you can point at for recourse, and I guarantee that Comma.ai isn't going to stand by and accept responsibility: their website makes it clear that the dev kit does not ship with any self-driving software.
It's only a level 2 system, and with this hardware it won't ever be level 3.
If there's an accident, the driver has to be pretty stupid. It's predictable; you know where its strong and weak points are. It has weak points when the roads get tricky. Previously it would panic through an intersection; this has since been improved upon.
Its eyes don't leave the road; people on their phones do. I wonder which causes more bumper-to-bumper collisions!
They probably won't accept responsibility; that's on the driver, since openpilot is a driver-assist system, not a higher-level (L3+) self-driving system.
Why is it dangerous? Self-driving software costs a lot to develop but nothing to manufacture, so its cost cannot be used as an indicator of its safety or sophistication at all.
> Does this imply they're encouraging anyone to write buggy code and drive their cars with it?!?
The fact that the Comma guys aren't locking it down out of a misguided paternalistic belief that they know better is a good thing. Don't blame them for providing the hardware; blame people if they write buggy code. This is analogous to blaming a firearms manufacturer for a murderer's crime.
I found that after I wrote the comment and had the same question. Also, what does "adding it" look like? Do I have to somehow interface it with the car's computer or other systems? I'm rather confused.
Adding a vehicle is the equivalent of writing drivers (heh) for your particular car. It entails reverse engineering your car's builtin driver assist features to determine which CAN messages do things like actuate the brakes and gas, turn the wheel with a given amount of torque, or communicate radar values from the car's builtin radar sensor.
For many cars the CAN messages will be the same as similar models, so porting a new vehicle just means adding tuning values, which can be captured by driving with OpenPilot connected but not controlling the vehicle, e.g.: https://github.com/commaai/openpilot/pull/866
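To make "which CAN messages do things" concrete, here is a hypothetical sketch of packing a steering command into a CAN frame payload. The message ID, byte layout, and checksum scheme are illustrative assumptions, not any real car's spec:

```python
import struct

# Illustrative message ID; a real port would use the ID reverse engineered
# from the car's own ADAS traffic.
STEER_MSG_ID = 0x2E4

def pack_steer_command(torque: int, counter: int) -> bytes:
    """Pack a 16-bit signed torque and a 4-bit rolling counter, then append
    a simple sum checksum (a common pattern in OEM CAN messages)."""
    body = struct.pack(">hB", torque, counter & 0x0F)
    checksum = sum(body) & 0xFF
    return body + bytes([checksum])
```

Rolling counters and checksums like this are what let the car's ECUs reject stale or corrupted frames, so a port has to reproduce them exactly.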
Wait wait wait, are you telling me I can actuate the brakes and gas and turn the steering wheel via the CAN bus? So I can realize my dream of turning my car into an RC car just with a Raspberry Pi and a $4 dongle? That is the best news of the year!
Which cars have these sorts of actuators? How can I see if mine does?
If it's not in that list then the rough heuristic is whether or not your car has some sort of existing driver assist features e.g. Adaptive Cruise Control, Lane Centering, Forward Collision Warning, etc. There are some exceptions to that rule (I think Hyundai ships some cars that have full control of steering and gas even if you don't option out the ADAS features) but that's all case by case.
If you join the Comma Discord (https://discord.comma.ai/) there are manufacturer specific channels that can answer any questions you've got.
Almost any car with drive by wire and/or modern safety features (lane assist, forward crash avoidance, etc) can be controlled (to an extent) via the CANBUS.
It’s different for (almost) every car, but you’d be amazed how easy and fun cars are to hack on, as long as you keep it off the streets.
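A typical first step when hacking on a bench is capturing traffic with can-utils' candump and decoding it. Here's a minimal parser for that output, a sketch assuming the common `channel ID [dlc] bytes` line layout:

```python
import re

# Matches candump-style lines such as: "can0 2E4 [3] 00 64 03"
LINE_RE = re.compile(r"(\w+)\s+([0-9A-F]+)\s+\[(\d+)\]\s+((?:[0-9A-F]{2}\s*)+)")

def parse_candump_line(line):
    """Return (channel, message_id, payload_bytes), or None if unparseable."""
    m = LINE_RE.match(line.strip())
    if not m:
        return None
    channel, msg_id, dlc, payload = m.groups()
    data = bytes(int(b, 16) for b in payload.split())
    assert len(data) == int(dlc)  # sanity-check the declared length
    return channel, int(msg_id, 16), data
```

Diffing captures taken while toggling one control at a time (wheel, brakes, turn signals) is how people usually find which message ID does what.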
Actually it's a horrible thing. Think of it: a terrorist could buy one of these cars, install a Raspberry Pi with internet access over mobile data, and then terrorize a whole town by running people over without being caught or killed.
Not really, all you need is a few other people in cars to block them in, problem solved. Or, if it's the US, for the police to use some of those ex-military surplus toys they have. That's not getting into remote control being imprecise, so there's a good chance the car would just get stuck somewhere.
Well, hey, for $1000 I’d almost buy this for the “Tesla experience” but alas it doesn’t support my car (2014 Lexus).
Well done for finding the right price point. You have a solid business case here.
Not commenting on safety like everyone else, because it seems from the ISO 26262 work (not easy to comply with) and the architecture that you’ve put some thought into this. Will be good to see it improve further. I understand that you need to sell units to do so, so I do not hold that against you.
Let's say I order this stuff and flash the firmware of my car.
If there was a bug in the Comma AI program (like jerking the steering wheel and slamming my car into the guard rail without giving me time to react), I guess I am liable?
OpenPilot doesn't require you to flash anything to your car; everything is done using the same CAN messages that the stock ADAS system sends. Further, the microcontroller that communicates between OpenPilot and the vehicle is hardcoded to block any CAN messages OpenPilot sends that are deemed too fast to react to.
They have limits in place so it never "jerks" the wheel. They are liable in the same way a manufacturer would be if your adaptive cruise control accelerated towards a stopped car.
Considering the people they're explicitly looking for, according to that page:
> People who have done well at math competitions (USAMO, IMO, PUTNAM), competition programming (ACM, USACO, codejam, topcoder), science fairs (ISEF, STS), or capture the flag (DEFCON, secuinside, GITS). Those competitions don't just select for ability, they also select for quickness. We are in a very competitive space.
...it's absurdly low. Companies like Google hand out close to $200k in total, liquid compensation to new grads who haven't placed in any of those competitions. The people who have ranked in any of them (especially the math ones, and doubly so the Putnam) can easily write their ticket to a job paying double that $130k right out of college.
Anyone with that kind of competitive math/programming experience plus real-world machine learning engineering experience could earn triple that range if they wanted to. That's a ridiculously small and competitive set of candidates to be targeting. It's also not necessary, because strong performance on, e.g., the IMO doesn't a priori map to outperformance, on a per-dollar basis, at writing autonomous driving logic.
Basically: no, it's not competitive for San Diego. Comma is asking wildly overqualified people to sacrifice significant amounts of money to work there, and it's not clear they should be using those kinds of qualifications as a filtering criterion in the first place.
This kind of cargo culting does not inspire confidence in their recruiting.
Well, I think Comma gives Tesla tough competition given the features it offers compared to Tesla's Autopilot. There are a lot of people who still can't afford a Tesla (and its $7K full self-driving package). Imagine even 20% of the Comma AI and openpilot-supported cars actually using this hardware and software in the next 6 months; that's way more than the number of Teslas on the market, and it avoids the necessity of owning a Tesla. As for safety, I believe openpilot with a Comma makes someone a better and safer driver, provided the driver stays attentive. I wouldn't drive a Tesla without being attentive even if it offered Level 4 autonomy, because we always need to keep an eye on the road and pitch in when needed: this involves people's lives, and there is no system on the market that is 100% perfect.
Jump on youtube, plenty of timelapses showing its reliability. The predictability of the system partly removes the danger factor and with the driver monitoring system in place, it becomes very difficult to not pay attention. I'd rather have this driving a car on highways than most people I know
> On stage, I asked Hotz about safety issues and concerns, but he expressed confidence that Comma One wasn’t doing anything existing technology on the market doesn’t already offer.
This is a nonsensical reply. It begs the question: what does every proper self-driving or driver-assistance solution do that Comma doesn't?
If the obtuseness is to deter regulators, it's a good thing nobody at any regulatory body reads Hacker News. Right? Because otherwise frontpaging HN would defeat the entire purpose of trying to fly under regulatory radar.
I clicked around for a while and it took me forever to figure out what this was. For the longest time I thought it was a dash cam. What's an integrated panda? Why do I care about a custom cooling solution?
They should move away from using Python in a system like this. There's a non-trivial amount of Python code (e.g. [0]) in the repo. The lack of type safety is just the start of the issues with using it in a project like this.
Geohot addressed that on the Lex Fridman podcast 5 months ago, briefly mentioning type checking as one of the reasons. I think he said they're moving to Go or C.
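For illustration of the type-safety point, here's a generic sketch (not code from the openpilot repo) of how annotations plus a checker like mypy can flag unit mix-ups before runtime, something plain untyped Python silently accepts:

```python
from typing import NewType

# Distinct types for units: mypy rejects passing an Mph where an Mps is
# expected, while unannotated Python would happily mix them at runtime.
Mps = NewType("Mps", float)   # metres per second
Mph = NewType("Mph", float)   # miles per hour

def mph_to_mps(v: Mph) -> Mps:
    return Mps(v * 0.44704)

def set_cruise_speed(target: Mps) -> Mps:
    # Hypothetical setter: validates the range and returns the target.
    assert 0 <= target <= 60, "target outside plausible m/s range"
    return target
```

With this, `set_cruise_speed(Mph(60.0))` type-checks as an error under mypy even though it would run, which is exactly the class of bug static typing buys you in control code.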