
Even signed automatic security updates, where the vendor runs the update server, still allow the vendor to inject targeted attacks in your binary.

There are ways to mitigate this, but not well given the design constraints of closed-source software.



Well, if you are pulling down a vendor's binary, they can "inject" whatever they want in it :)

If you meant that a man in the middle can do that: no. If things are implemented correctly on the app's end and the attacker doesn't have the vendor's private key, then of course they cannot inject anything that way.


> Well, if you are pulling down a vendor's binary, they can "inject" whatever they want in it :)

Why does this have to be the case? Here are two specific alternative deployment models that protect you from this attack:

1. The update is signed at rest (e.g., with PGP), instead of only the update traffic being protected in transit (e.g., TLS), and the update and signature are widely mirrored or distributed through a third party. The vendor can still inject malicious code, but everyone has the same code: they can't easily introduce targeted attacks against a specific person.

This is the security model of Chrome or Firefox extensions, and it's one of the reasons there are vague security advantages in distributing an app that way instead of directly via a website. It's also the security model of the Apple App Store, the Play Store, almost all Linux distros (kinda), etc., sometimes with the addition that there's a human reviewer at the third party.

2. Same as 1, but the code is open source and reproducibly built, such that anyone can audit (possibly even automatically!) that the sources match the binaries. Or the code is open source and not in a compiled language. At that point, any injected crap is visible to the world.

(This design has the secondary advantage that it disincentivizes the government from compelling the software author to add a hidden backdoor, but that's for the same reasons it makes it hard for the software author to add a hidden backdoor of their own volition.)
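The reproducible-build audit in point 2 ultimately reduces to a digest comparison: anyone rebuilds the published sources and checks that the result matches the distributed binary. A minimal sketch in Python, where the function name and the stand-in byte strings are hypothetical, not from any real build system:

```python
import hashlib


def audit_build(vendor_binary: bytes, rebuilt_binary: bytes) -> bool:
    """Return True if the vendor's distributed binary matches a local rebuild.

    With reproducible builds, independently rebuilding the published
    sources should produce byte-identical output. A digest mismatch
    means the distributed binary contains something the sources
    don't account for (e.g., an injected backdoor).
    """
    vendor_digest = hashlib.sha256(vendor_binary).hexdigest()
    rebuilt_digest = hashlib.sha256(rebuilt_binary).hexdigest()
    return vendor_digest == rebuilt_digest


# Hypothetical artifacts standing in for real build outputs:
clean = b"\x7fELF...binary built from published sources..."
backdoored = clean + b"...injected payload..."

assert audit_build(clean, clean)            # honest rebuild matches
assert not audit_build(backdoored, clean)   # tampered binary is caught
```

The point of wide mirroring is that the digest everyone compares against is the same one: a vendor can't serve one person a different binary without the mismatch being publicly observable.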

My point is that we shouldn't settle for a bad threat model just because a for-profit vendor tells us it's the threat model.



