Universal encryption is a defense against blanket vacuuming of communications for later offline analysis. It's a defense against a massive parallel MITM attack on the world's communications infrastructure.
It's not a defense against targeted attacks on individual devices.
In addition to this, I frequently hear people talk about how Signal would prevent monitoring of groups like those that stormed the Capitol. As if you can create a secret communication channel that members of the public can join but the FBI is unable to infiltrate.
Signal and E2EE stop dragnets, not targeted efforts. Which honestly is exactly what I want, and seems like what we want in a free and open society. Monitoring shouldn't be the default but should only happen when there is a warranted reason to monitor, preferably with a literal warrant. This embodies the idea of "innocent unless proven guilty" but balances the ability to move from suspicion to evidence gathering and minimizes the collection of data on innocent people. It's not "if you have nothing to hide then you have nothing to fear" but "if you have nothing to suspect then you have no reason to search." I don't know how a dragnet doesn't violate the Fourth Amendment.
> As if you can create a secret communication channel that members of the public can join but the FBI is unable to infiltrate.
Back in the day people used to, hilariously, have MOTD notices on all of their illegal servers saying the internet equivalent of "You have to tell me if you're a cop".
I first saw that on BBSes, and it made its way onto some forums in the early days of the web.
I always found it humorous what laws they'd cite, when they bothered to, to say basically "By clicking this button you assert you're not a law enforcement officer".
Yeah it's a commonly held misconception spanning decades that cops must identify themselves if asked. I wonder where it originated. Needs some sunlight like:
Badger: "Prove you're not a cop."
...
Undercover cop: "If you ask a cop if he's a cop, he's like... obligated to tell you -- it's in the Constitution."
- Breaking Bad, Season 2, Episode 8, moments before Badger gets arrested
I don’t think the server notices were based on this assumption. They were trying to use the draconian cyber security laws instituted after Mitnick and others got caught, which stipulated 10 years jail time for “unauthorised access to a computer system.”
Huh, I just kind of assumed they were related since the whole moral panic over "hacking" was about the same time (or, in hindsight, the same time I started using telnet... I'm less old but still quite old. :P )
Not a lawyer, but I seriously doubt any prosecutor would bring a criminal case against a law enforcement officer whose access to a system was denied for being a law enforcement officer, and while there are some situations where one can make a civil claim under the CFAA, what legitimate damages could you show that an officer caused you by accessing your site in the furtherance of their duty? If someone actually tried to sue for this, it would be a very interesting case indeed.
It’s not about bringing a criminal case, it’s about being able to say “this evidence was gathered illegally” if it was gathered without a warrant, then having the evidence thrown out.
A defense lawyer may give something like that a shot at least.
I have no idea if it would work in practice of course.
> it's a commonly held misconception spanning decades that cops must identify themselves if asked. I wonder where it originated.
Perhaps it comes from some of the loose laws that require police to make some attempt to announce themselves and their purpose in specific situations (e.g. when executing a search warrant where they don't have reason to believe announcing themselves would pose a risk to their well-being).
This reminded _SWIM_ of another artifact of netizen legalese. Some forums were almost hysterical as nobody ever did or experienced anything ever. There was only frequent hearsay about "Someone who [wasn't] me".
I hope SWIM is still in good health, despite everything they've been through.
Would be interesting to know if saying SWIM did, or didn't, save someone from unwanted legal attention at some point.
The modern equivalent is using copyright material/music in a video and then putting "I make no claim on the copyright of the music in these videos" in the description thinking that's a pass
Total guess: I assume they could claim unauthorized access to a computer system under the CFAA.
I’m fairly sure that the police can’t hack a random server then claim “I’m a cop so it’s okay” without a warrant. At the very least any evidence gathered may be deemed inadmissible. Accessing a system without permission may, I guess, fall in the same bucket.
Some of these user agreements tried to cover that too ("you are not an officer, nor employed by one, nor have you ever been, nor will you ever be"). It was cute.
I remember reading this a few years back. The title says it all. Why bother cracking codes etc when you can get a judge to sign THAT for you? :)
Going through the PDF of the legal document, on page 10, the screenshots have a description of the "Source extraction" point: "....<some long zip filename-I assume the dump>/private/var/Containers/Shared/AppGroup/<long HEX number>/telegram-data/account<long number>/postbox/db/db_sqlite"
I've got so many questions. I believe Telegram does NOT encrypt group chats. And the default setting is that it does NOT encrypt 1-1 chats. You have to jump through an extra hoop (a couple of taps) to start a new, encrypted chat (you cannot encrypt an existing chat).
So... the chat was not encrypted? So they 'just' managed to bypass the iPhone's lock, dump all the storage, look for messaging SQLite files, and read them. Are we sure they didn't get that data from iCloud (and are hiding that - with permission?)? Page 11 says: "Several videos from his iCloud depict him apparently showing off his cash." Could the authorities have gotten an automated backup from iCloud? The guy was using an unencrypted Telegram chat, so basically not much hacking took place?
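To make "not much hacking" concrete: once investigators have a raw filesystem dump, an unencrypted messenger database is just an ordinary SQLite file. A toy sketch - the table and column names here are made up for illustration, not Telegram's real postbox schema:

```python
import os
import sqlite3
import tempfile

# Stand-in for a db file recovered from a filesystem dump.
# (Hypothetical schema for illustration - not Telegram's actual layout.)
path = os.path.join(tempfile.mkdtemp(), "db_sqlite")
con = sqlite3.connect(path)
con.execute("CREATE TABLE messages (peer TEXT, body TEXT)")
con.execute("INSERT INTO messages VALUES ('alice', 'meet at noon')")
con.commit()
con.close()

# "Forensics": open the recovered file read-only and enumerate its contents
dump = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
tables = [r[0] for r in dump.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
rows = list(dump.execute("SELECT peer, body FROM messages"))
print(tables, rows)  # ['messages'] [('alice', 'meet at noon')]
```

No exploit is needed at this layer: if the file isn't separately encrypted, the whole "extraction" is a file copy plus a SQL query.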
Side note: big Supernatural fan here - page 10, the second 'blue' message writes "..Poughkeepsie". In the Supernatural universe "Poughkeepsie" is the Winchesters' secret distress signal to each other meaning that something is wrong and they are to drop everything and run.
> Fun fact for the UK: "UK police have a new tactic to circumvent strong iPhone encryption: steal the unlocked phone out of the criminal’s hand"
PSA: On an iPhone, if you hold down the power button and a volume button for a second, your phone will lock into the state it's in just after you turned it on. From there it can only be unlocked with your passcode. You can perform this gesture without taking your phone out of your pocket.
(Edit: When the phone is unlocked, you need to use power + volume down. Power + volume up while the phone is unlocked takes a screenshot.)
iOS also takes a screenshot when the phone is unlocked if you use power + volume up. Holding power + volume down hard locks the phone even when it's unlocked.
To defend against on-device attacks or iCloud backups, Telegram would need to do its own, separate encryption of its storage and prompt the user for a passphrase at app launch.
The typical security model for iOS apps assumes that the local device is secure, as its storage is already encrypted by the system based on its passcode/biometrics (on initial power-up biometrics aren't available).
End-to-end encryption within chat apps typically refers to encryption over the network. It does not include encryption of messages once they reach their intended recipients.
> To defend against on-device attacks or iCloud backups, Telegram would need to do its own, separate encryption of its storage and prompt the user for a passphrase at app launch.
This isn’t quite correct. iOS applications can exclude files from iCloud backups.
This is pretty much how Ross Ulbricht of Silk Road was caught. The police caused a distraction, then grabbed his unlocked laptop while he was at a public library.
Of course, maybe bringing an unlocked laptop with details of your drug empire out in public wasn't the best opsec out there.
No they don't, they just force the police to do regular police work and infiltrate the group the old fashioned way rather than using mass surveillance.
They do. The reality is that there's plenty of non-secret, boring warrants which go nowhere because the police can't decrypt the traffic or read the phone memory (and here in Germany, they have less access to fancy tech).
At the same time, an unprecedented amount of private conversation happens using phones, raising the stakes and the default expectation of privacy.
The pendulum swung from insane dragnet surveillance to widespread availability of tech that is resistant to all but the most sophisticated surveillance efforts.
Talking quietly at the local pub is resistant to all but the most sophisticated surveillance efforts too.
It's only very recently that people have been communicating over the internet, allowing the mass surveillance. All encryption does is return us back to the pre 1990s in terms of surveillance capabilities.
Exactly. Or going for a walk in the forest - many of these conversations are now happening online.
On the flip side, even ordinary phone calls and messages are often encrypted by default now, which used to be much more readily available to law enforcement.
> and here in Germany, they have less access to fancy tech
Interestingly, there's a sort of perverse incentive going on here. If cops and governments hadn't historically abused phone taps and SMS interception, then strongly encrypted E2E tech like Signal wouldn't have become "necessary" in the minds of non-criminal users concerned about privacy, and we probably wouldn't have had well-known and widely used services/platforms like Signal and Telegram already in place for people to flock to when WhatsApp made their privacy blunder.
I’m sure GreyKey and NSO are perfectly happy with a cyber arms race requiring cops to buy more and more expensive “fancy tech” every year...
I'm not sure I buy this, but it depends how we define targeted attacks. If we include getting access to the device, which I think is reasonable, it obviously doesn't stop targeted efforts. Not to say it doesn't make it more difficult. Remotely, are we only talking Signal or the system as a whole? AFAIK nothing is unhackable, only difficult to hack. But as long as we're playing the cat and mouse game I'm happy. Improving defensive technologies shouldn't ever be stopped or hindered. I'm not sure why this isn't seen as a national security issue but that's a bigger discussion.
But my main point is that most people are afraid that large organizations of terrorists or bad actors will be able to discuss things without the FBI being able to surveil them. Well, you can't have both "large" and "vet everyone to an extremely high degree." Sure, this will make it more difficult to stop small groups, but those have been notoriously difficult to find and stop in the first place.
My personal perspective is that if a Three Letter Agency becomes specifically “interested” in me, I’m fucked. No matter what I do. Even if I fake my own death and live in a submarine...
What I can do, however, is take measures to protect myself against less powerful or sophisticated attackers.
Where I come from, "communications metadata" is required to be kept by all telcos and ISPs. This metadata is then "available to law enforcement" - which means not just investigations into child abuse and drug running, as the proponents of the laws made out when advocating for them, but includes agencies such as the Taxi commission, various local councils, and state fisheries departments.
Using (trusted) VPNs and E2E encrypted messaging will reduce the chance of a local council or a fisheries inspector being able to get as much information from my metadata as they might from people not using VPNs and secure messaging.
(Of course, it might backfire and just paint a big target on my back... One potential privacy advantage of COVID and widespread wfh is that many many more people are using VPN tunnels for ordinary and mundane purposes. Adding extra hay to the haystack my needle is trying to hide in is a good thing. So long as it’s not Mossad looking for my specific needle...)
> Which honestly is exactly what I want, and seems like what we want in a free and open society. Monitoring shouldn't be the default but should only happen when there is a warranted reason to monitor, preferably with a literal warrant.
It's more than warrants, though. The evidence of the last few decades is that warrants aren't enough to block dragnets. Warrants can be (and are) avoided through parallel construction. Unscrupulous agents will go off on "LOVEINT" missions if it suits their ethics.
Universal encryption uses economic force to make dragnet surveillance infeasible where ethical force has failed.
If we're going to nitpick I think HN is the place that this is acceptable, especially when done in a good manner. I do think you make a good point since "until" implies that anyone is guilty given enough time. I'll try to adopt this change into my vernacular. I updated my comment in an effort to acknowledge and support this idea.
> “ until" implies that anyone is guilty given enough time
Sadly, I think that’s considered a “feature” by modern law enforcement and judicial systems. Plea bargains and the sheer volume of laws that cannot possibly be understood by normal people make for a prosecutorial power balance that genuinely means everybody really is just “innocent until they decide to prove you guilty”.
You _are_ guilty of something. If you come to the attention of the wrong LEO, they will find something to prosecute you for, and like with Al Capone, they'll perfectly happily send you to jail for tax evasion if they've decided, but cannot prove, that you are guilty of something else.
> I frequently hear people talk about how Signal would prevent monitoring of groups like those that stormed the Capitol
Those folks walked over on public roads from a Trump rally down the street, live streaming on a hundred cameras as they did it. Of all the things that went wrong on the 6th, surveillance was clearly not one of them.
What I think you're remembering is more the point that Signal and Telegram provide harder-to-surveil forums for the people who got radicalized. That having all that chatter be private by default means we won't see the next extremist faction before it's born. And that's a fair enough point. Q communities on Facebook and Twitter made it easy to see where these people were coming from.
But even there, the nature of radicalization is that it happens in a big group. There may be surveillance-proof channels on Telegram where modern right wing extremists are assembling to find like minded souls, but finding them isn't a problem at all. The ones that are hard to find die out by definition.
> Q communities on Facebook and Twitter made it easy to see where these people were coming from.
Cynical view. Those made it easy to see what the manipulated fairly public mass of disenfranchised or disgruntled or actively evil Q/Trump supporters were discussing and planning. I’ll bet people like flexcuff guy and pipebomb guy weren’t discussing _their_ plans in such public forums. And I’ll bet there was a _lot_ of planning of the sort that used to take place in underground-public places like 4chan, which has now gone deeper underground and has much more stringent initiation rites for admitting new members. The sort of planning that involves manipulating and convincing “pawns” in their “it’s all about ethics in video game journalism” games to provide cover and take the fall for deeply serious shit...
> I’ll bet people like flexcuff guy and pipebomb guy weren’t discussing _their_ plans in such public forums.
I dunno, some of those folks were pretty brazen. Remember they thought all this was, if not "legal" exactly, "sanctioned" by their sitting president. They weren't trying to hide.
But even if we grant the premise (which is clearly true) that serious criminals will always have the ability to hide from surveillance, the presence of an easily found public forum is still a prerequisite for real insurrection.
Flexcuff guy and pipebomb guy may have been hardened experts, but they couldn't have sacked the capitol with only their militias behind them. The preventable part of this is the mob violence. And mob violence requires a mob, which requires that the members of the mob (by definition amateurs) be able to find forums to radicalize them.
>>> As if you can create a secret communication channel that members of the public can join but the FBI is unable to infiltrate.
A mob needs to radicalize people and recruit. If you are recruiting and not on the FBI's watch list, I'm going to blame the FBI, especially since you are actively radicalizing people. Any insurrection or major plot will require recruitment of people who are difficult to vet. This (should) make infiltration trivial. If it doesn't, I'm really disappointed in the Three Letter Agencies, as they clearly make themselves out to be far more capable than that. We're spending all that money and people can't click a link? I'm pretty confident they can accomplish this.
Yeah, exactly. We can't read ciphertext, and so aren't included in Signal's end-to-end encryption contract. Frankly, this has nothing to do with Signal and everything to do with phone security.
I'm no legal expert, but that seems difficult for a court to do. They're not legislative (or, not supposed to be). And I'm certain the ACLU would fight that tooth and nail.
Signal was subpoenaed in court, and told to give up information[0] on certain customers. They replied the only information they had was the time the account was created and the date of the last connection. Admittedly, this was over 5 years ago. They fought to get these court records released.
There’s still some “trust” required in both Whispersystems to not backdoor updates, as well as Apple and Google to not backdoor the distributed apps after Whispersystems submit updates for publication.
There is though, some ability for skilled enough people to “trust but verify” by reversing the app bundles after publication. I believe (but not for any good evidence based reason) that there are “enough eyeballs” interested in Signal that _hopefully_ if a backdoored app update ever appears the white hats will raise the alarm quickly (I have no doubt the black hats will sell the details to NSO/GreyKey just as quickly...)
At least Apple have demonstrated pushing back on a court order requiring them to fundamentally break their product’s advertised security to comply with such an order.
I'd be curious to see if WhisperSystems are prepared enough to lawyer up and fight like that. (I suspect they'd probably get NSLed like Lavabit did, and we won't know about it until way later...)
According to the Snowden leaks, Apple's been a part of PRISM for quite some time.
Seems to me like the trial was a show for PR, and a win for the FBI because now a ton of people are under the impression that they can commit crimes without a trace if they have an iPhone.
How about if Signal encrypted all your stored communication when not in use and required a password (and 2FA) to decrypt it? Thus the app's security is ~independent of phone security, at least for forensic seizure analysis.
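Roughly what that could look like, as a hedged sketch: derive a key from the passphrase with a KDF and seal the store whenever the app is closed. The cipher below (a SHA-256 counter-mode keystream plus an HMAC tag) is purely illustrative - a real implementation would use a vetted AEAD such as AES-GCM and a memory-hard KDF - and the function names are my own, not Signal's:

```python
import hashlib
import hmac
import os

def derive_key(passphrase: str, salt: bytes) -> bytes:
    # PBKDF2 stands in for whatever KDF a real app would use
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy stream cipher: SHA-256 in counter mode. Illustration only.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()  # integrity check
    return nonce + tag + ct

def unseal(key: bytes, blob: bytes) -> bytes:
    nonce, tag, ct = blob[:16], blob[16:48], blob[48:]
    if not hmac.compare_digest(tag, hmac.new(key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("wrong passphrase or tampered store")
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

salt = os.urandom(16)
key = derive_key("correct horse battery staple", salt)
blob = seal(key, b"message history")
assert unseal(key, blob) == b"message history"
```

The catch is that this only helps if the passphrase has real entropy, and typing it at every launch is exactly the usability cost most users won't pay.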
The point of encryption and privacy software isn't to make it completely impossible to violate someone's privacy. We still have laws that most of us agree with and want to see them enforced.
The point is to make it impossible to compromise everyone all at once and stream that data into a system designed to automate the manufacturing of consent.
It has to cost something non negligible to violate someone's privacy, and that cost will demand evidence and accountability from those who are authorized to wield that power.
Lots of DLP solutions rely on MITMing user sessions, so at least corps who implement those solutions have access to all their user data in the clear - which is fair for a corp. But users who aren't hygienic about their data often use corp resources for personal use, and that can get vacuumed up in the process.
If you are really paranoid you'd also worry about future attacks revealed through algorithmic flaws, quantum computing, or simple increases in processing power.
I guess our hope is that they won't care what we said after a decade or two.
I'm not sure if you're kidding, but smashing your devices with a hammer will not damage the cold storage unless you directly hit it and sufficiently deform it. It's much more likely that you just damage the screen, casing and mainboard when randomly smashing on it. Especially with a laptop. So the storage could simply be ported to an undamaged phone/laptop and effectively all you achieved is turning the device off.
Similarly with a microwave. The reflections and sparks might kill the microwave before the hard drive in your device is actually damaged. Please correct me if I'm wrong, but the microwaves shouldn't actually be able to flip the bits, just heat up the material.
A shredder strong enough to tear the metal/silicon/plastic apart in small enough pieces sounds reliable.
IANAL, but my understanding is that obstruction of justice via spoliation, tampering, or destruction of evidence is a charge that requires your investigation to have already begun, the raid to have already started, or the arrest to have been made, and that you are free to destroy any of your own property prior to these events.
Specifically, you need to knowingly be the subject of an investigation. I'd assume destroying the phone when you see the cops coming but before you've seen the warrant would be a grey area. Please, lawyers, clarify.
I suspect that a judge/jury would not be sympathetic to a complaint along the lines of:
"I had no idea I would be under investigation when I saw the cops arriving at the door. I just decided it'd be fun to beat the shit out of my laptop with a hammer at that exact moment."
I consider myself “recreationally paranoid”. I like to consider possible avenues of attack on me, and work out mitigations where possible. I’ve always considered it a subset of some people’s “hacker mindset”. The question of “How would I break into my stuff if I were motivated to, and what can I do to prevent it or make it more difficult to break?” provides me with a lot of satisfying thinking, even though I’m not planning to overthrow a government or move shipping containers of narcotics across borders.
I’m far more concerned about a thief trying to rob my house than I am about the government coming in for my computers. In fact, in that scenario I’d rather let them have my computers unharmed as it would prove whatever they’re looking for doesn’t exist on my machines. My concern is how do I sufficiently back up data for replacements.
I have practically zero concern about thieves robbing my house. I lock the doors when I go out, and pay for insurance.
It's happened only once to me in over 50 years, and that was because I foolishly assumed a 3rd story window was safe to leave open, when there was enough plumbing on the wall that a determined enough petty thief could climb through it. All I lost was a computer (a PowerMac 6100), but I had recent-enough backups at work that it was only mildly annoying to replace it and deal with insurance. (There was personal email and the like on it, and stored passwords. I monitored the email server carefully for 6 or 12 months after that, and never saw a single login attempt. Whatever they did with it, they wiped it before they connected it to the internet, since the email client would have logged in automatically.)
I’m also not so concerned about my government coming in for my computers. They’re too incompetent to do it well except for major targets. I do enjoy the thought experiments around “what would happen if the government ‘went bad’?”
(I'm also aware that I, like probably everybody, have data on my machines that, if cherry-picked by prosecutors, would be awkward to have to defend myself against in court. I often rant about wanting to stab my idiot co-workers in small group chats, for example. If I were a prosecutor trying to force me into a plea bargain, or an LEO attempting to coerce me into doing something I'd prefer not to do, I have no doubt that a fairly damning-looking case could be constructed by choosing only the parts of my data that, when put together, paint a bad but fictitious/out-of-context story...)
To be fair, Apple and Google both put a lot of effort into making exploiting a phone as difficult as they can even in the case of an attacker having physical possession.
I’d much rather have potential “evidence” against me on my phone, than on my laptop.
The stakes are very high though, the attackers very motivated and well resourced, and I suspect there's enough political pressure on them both to do only "a good enough job to make it look like they're succeeding". If anybody thinks the NSA isn't several steps ahead of both Apple/Google and NSO/GreyKey, they're fooling themselves...
The iPhone's terrible battery life isn't a bug, it's a privacy feature! I wonder if the FBI's evidence protocol involves immediately plugging in an iPhone to maintain the vulnerable state:
> That latter acronym stands for “after first unlock” and describes an iPhone in a certain state: an iPhone that is locked but that has been unlocked once and not turned off. An iPhone in this state is more susceptible to having data inside extracted because encryption keys are stored in memory.
I do wish Apple would add "restart" as one of the system actions in the Shortcuts app.
One thing they did do: if you bring up the power off screen (by holding power, or power+volume up, depending on model) then it disables biometric unlock, even if you don't power it off. Bringing up the power off slider screen is sufficient to force a passcode-only unlock.
I have contact tracing enabled on my iPhone. I charge it nightly, and it usually sits around 70%-80% when I plug it in again the following evening.
Then again, I've been spending most of the last year in my home, and I live in a rural area, so not much activity besides my wife and kids. I can only assume that the busier your surroundings, the more power is drawn for contact tracing.
This should call Signal's use of real phone numbers for accounts into question even more. It is NOT privacy focused.
Even if this 'hackability' is an issue only with the security of the phone/hardware - able to be hacked, and thus reach the decrypted Signal messages - that also means that person's Signal contacts have their real identities exposed. (Where they wouldn't be if the account names/IDs could be arbitrary, as with e.g. Wickr.)
We expect heads of state to be able to have “private” discussions while knowing the other heads of state they may be communicating with. I can have “private” conversations with my partner, even though people know who that is.
You can also choose to anonymously communicate with no privacy, Reddit or 4chan style...
They may not be totally orthogonal, but I don’t think either privacy nor anonymity are encapsulated by each other.
You're just describing levels of privacy. Heads of state can of course have private discussions, where only the contents of the conversation are private - but the fact that the conversations took place is not. Compare that to AnonA talking to AnonB - we're still aware two people communicated, but it's even more private, as we can't infer their relationship (or probabilities about what might have been discussed, from other metadata of the conversation: time, place, duration, frequency). And finally, with perfect privacy, nobody knows that a conversation took place in the first place, and the contents are unknown too.
There is also still privacy in public communications when done anonymously, as you can't (necessarily) tie the information disclosed to a real identity. Hence why, for privacy purposes, advertising data is often anonymized. (But again, less private of course if, e.g., I said I was a 2-fingered man in Poland and there are only 5 people fitting that description.)
So I'd still say, perfect privacy cannot occur without anonymity.
Seems like this is partially Signal's fault to me. Why doesn't Signal independently encrypt the message db when not in use? It's well known at this point that the iPhone is easily cracked, and thus Signal provides no security for a stolen phone.
> Why doesn't Signal independently encrypt the message db when not in use?
To what end?
If there's no additional authentication required to open the app and just view the messages normally, it's useless because they can just open the app and view the messages.
If they use their existing PINs as key material then, given an OS-level exploit grants access to the underlying database, it can be cracked offline in... basically no time at all.
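To put numbers on "basically no time at all": a 4-digit PIN is only 10,000 candidate keys, so an attacker who has extracted the salt and derived key can walk the whole keyspace offline. A hypothetical sketch (parameters are illustrative, not any real app's KDF settings; the iteration count is deliberately low so it runs quickly, and realistic counts only scale the time linearly):

```python
import hashlib
import os

ITERATIONS = 1_000  # real apps use far more; that only multiplies cost linearly

def derive_key(pin: str, salt: bytes) -> bytes:
    # PBKDF2 standing in for whatever KDF the app would use
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, ITERATIONS)

# Victim picked a 4-digit PIN; attacker recovered salt + derived key from a dump
salt = os.urandom(16)
target = derive_key("0042", salt)

def crack(salt, target):
    # The entire 4-digit keyspace is only 10,000 guesses
    for guess in range(10_000):
        pin = f"{guess:04d}"
        if derive_key(pin, salt) == target:
            return pin
    return None

print(crack(salt, target))  # → 0042
```

Even at a million KDF iterations this only gets ~1000x slower - hours, not years - because the PIN keyspace, not the KDF, is the bottleneck.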
If they allow you to create a long, secure password for opening the app, basically nobody uses it because nobody wants to type a 30 character password on a phone touchscreen every time they want to check/send a message.
Pretty much any solution that doesn't absolutely destroy usability still relies on the OS performing some sort of authentication first. (As in the current solution where the encryption keys are stored in the OS-level key store.)
If you're that concerned about your security, just go disable FaceID/TouchID so the iPhone never leaves keys in memory when it's locked.
Maybe your assumption that something dubious is going on can be eliminated by Occam's razor? Because: any business that gains in popularity will automatically also see higher press coverage.
Isn't it the case not just in tech, but literally in any field that is male-dominated? I strongly doubt that the problem you describe is any less prevalent in finance or academia, for example. Not a comprehensive list, of course, those were just two off the top of my head.
If you raise Signal's profile then I think you can expect some half-informed tech journalists to start writing about it without fully understanding it.
I mean, it makes sense. People are concerned about data privacy on WhatsApp, so they move to Signal. Then the people who have always said "Signal isn't perfectly secure" must reiterate to the masses what they have said many times before.
I don't think this should surprise anyone. The FBI has multiple methods for accessing locked phones: using physical exploits like those provided by Cellebrite; through baseband attacks, i.e. first attacking the cellular modem and from there using an exploit to get to the main ARM CPU; or through exploits or backdoors in any app on the phone that does background refresh over the web while the phone is locked.
I think the current state of infosec means that anyone who is the target of a nation-state intelligence or counterintelligence agency can be hacked. Whether that actually happens depends on how interesting they are and the lawfulness of the action, not on technical capabilities.
Your understanding of baseband attacks is not correct. Having a baseband exploit would not facilitate this. Nor would exploits/backdoors in any particular app.
Why couldn't a baseband attack facilitate this? It was shown at least as far back as 2017[0] that a program on a baseband could affect the memory of the application processor, and in 2018[1] that a specially crafted message can achieve an RCE on a baseband. Since then, cell modems have gotten even more integrated with APs.
Because this is about the iPhone, where the baseband is just a USB peripheral. There simply is no DMA. iPads and Macs have DMA controls in place as well. There are other iPhone attacks for sure, but they have been fairly conscious about keeping the baseband isolated for a good long while. So it's less likely to be the vector. Apple didn't spend a ton of money on a custom security processor and OS stack just to let a 3rd party vendor firmware walk all over it. From page 41 of their old iOS Security Guide:
>"To protect the device from vulnerabilities in network processor firmware, network interfaces including Wi-Fi and baseband have limited access to application processor memory. When USB or SDIO is used to interface with the network processor, the network processor can’t initiate Direct Memory Access (DMA) transactions to the application processor. When PCIe is used, each network processor is on its own isolated PCIe bus. An IOMMU on each PCIe bus limits the network processor’s DMA access to pages of memory containing its network packets or control structures."
You'll notice that "iPhone" and "Apple" do not appear as subjects in the papers you link. Cellebrite and the like are probably doing other things.
Exploits are possible even without DMA. Windows had a slew of USB stack exploits, ranging from the serial and modem drivers to HID device and more.
There have also been in the past (and probably still exist) exploits over serial lines, over I2C and SMBus, etc. Not having DMA makes it much, much harder, but not impossible.
So having the modem connected by USB does not make attacking through it impossible - how can you tell there are no bugs in the iOS USB stack?
How is it possible that the FBI has so much advanced stuff when I’ve never met a skilled developer willing to work for what the government pays? Are their tools developed by highly paid contractors?
First off, others have pointed out that these are 3rd party software companies providing the tools.
I'd like to talk about your other point though:
> I’ve never met a skilled developer willing to work for what the government pays
So there are a lot of very skilled developers working for the government right now. I'd agree you probably haven't met them. I've found that people who work for the government, especially on TS/SCI systems, do not go out and "network". They usually can't or won't talk about their work. Yes, they get paid far less than they would at an SV startup, but maybe people aren't solely motivated by money. There are some nice things about working for certain parts of the government.

First, there is excellent job security. You pretty much never have to worry about getting fired; it isn't even in the back of your brain. Compare that to a startup, where you are almost always out of breath from being worked too hard and you fear every day that your job is going to disappear. Second, while the pay may not be totally up to par, it also isn't bad. The hours are also great: there are laws governing contracts and projects that limit employees to 40 hours per week. You will not be asked to work over that, even in "crunch" time, because it breaks competition laws when contracts are offered up to outside agencies who put in bids to do the work.

Lastly, and most importantly, there is often a sense of pride or duty involved in the job. I know HN isn't exactly the most rah-rah-yay-government crowd, but there are a lot of people out there with a desire to serve the public somehow. They believe they can do more good on the inside than those on the outside who simply complain about the government.
Not a government worker, but I work in non-profit areas.
I do make a lot less money in this role than I would in a startup environment or SV. But, work life balance is significantly better. Benefits are actually the best I've ever seen (no cost health plan and they put the full amount of the deductible in an HSA for me each year, generous 401k match with only a 1 year vesting schedule, summer hours so I can spend more time in daylight, a pension with a very easy vesting schedule, plenty of time off, etc).
I'd work a government job for similar reasons. It's not about pay for me. I need enough to live, enough to plan for retirement, and a little extra to enjoy my life.
I work to enjoy my life, I don't work to die early or just to work. My life comes first, work comes second and finding a company that respects that and encourages it to some extent, is worth more to me than high pay and a worse environment to work in.
As you get older, you are more valuable. No "silicon valley" syndrome about age: you'd never have to dye your hair, wear a hoodie to fit in, nobody will blink if you have to take a day off to take care of the kids.
You can be a real adult - nobody comes into those programs and expects a ball pit or a foosball table, and nobody seriously thinks someone right out of college but who is up on the "latest framework" could take their job - because the jobs are deeply, deeply technical and require lots of experience. If you blow the doors off of everyone else in programming, great! That's one dimension. But it usually isn't the only one.
"The pay is worse" - for software, undoubtedly. For hardware? It's usually a wash after adjusting for cost of living and the benefits are better on the defense side.
I like defense because the systems are usually pretty badass. We have people leave for FAAMG occasionally, so it's not like we don't know who the people who are "that good" are! :)
As someone who has spent many years working in defense / government contracting, I think you're touching on a really big reason a lot of people work in this world.
Many years ago, when my career first started, I was an idealist who thought the way I could give back (as someone who couldn't get into the military for physical reasons) was to work in the defense sector. I also tried very hard to get interviewed by an SV company, but I didn't qualify for an interview due to going to the wrong college and getting the wrong degree.
And I have worked on some seriously badass programs and with some really cool systems and people. In my older age, it's funny because I've come full circle. I'm actually using my experience, and the new all-remote world, to interview with some tech companies. I'm also starting to sour a lot on working with the government and military because my idea of "giving back" has become much more cynical. EDIT- I meant to say here that I've become much more cynical, and working with the government no longer feels like "giving back." Nowadays I give back by working on social issues.
BTW, I'm also upvoting / interacting with your comment because (a) I think you're getting downvoted by people who disagree with you, rather than by people who think you've broken some rule, and (b) I think your perspective represents a pretty common way of looking at SV from the outside world.
Oh same on the interviewing with tech companies remotely. After you've built things that are really badass, it's REALLY hard to be upset about a rejection, and the interviews can be really fun problems, so it's just fun.
It's more about the impression they get from you in five or six 45-minute slots. It's just a numbers game if you're getting that far, so just keep at it right? Take notes, improve, watch some Coursera courses, do some silly coding exercises - basically what we all do for fun anyway.
Glad you could find work on social issues though; I know what you mean about the cynicism, but I think I've gotten around that - I realized it came on with bad programs with lots of waste or highly political teams/programs, so I just avoid those. It was a long road to get there though... :)
I know you're being downvoted into oblivion for reverse ageism, but my experience mirrors your statements. I think the aggrieved HN masses just don't have similar, or much, experience.
It's kind of bitterly ironic that on this site that comment is considered more inflammatory than someone like the top level commenter who effectively just says "lol only stupid people work for the government."
+1. Also upvoting the original comment because no one seems to be disagreeing with it with any words.
Solving really hard problems is valuable. Little gets easier, but you can get better at it.
You have me wondering .. Does expertise from experience (whether compressed into a few years or a lot) qualify as reverse ageism? Is anyone being excluded or denigrated?
It seems plausible (but not exclusively) that the more time and effort you apply to something, the better you will probably get at it. Time alone isn’t a measure of experience and expertise but it doesn’t hurt.
It’s just describing what the experience of someone who has been at something for a while might end up like.
"Does expertise from experience (whether compressed into a few years or a lot) qualify as reverse ageism? Is anyone being excluded or denigrated?"
No, of course not - isn't it silly that we have to ask if that is "reverse ageism" to say that people who spend time at something are generally better at it? That's universally true of every single human endeavor, isn't it?
"It seems plausible (but not exclusively) that the more time and effort you apply to something, the better you will probably get at it." - Why does it sound like you're shying away from stating something that should be immediately obvious to every single person? Is it because you're afraid of the charge of reverse ageism if you agree? If so...isn't that silly?
"It seems plausible (but not exclusively) that the more time and effort you apply to something, the better you will probably get at it. Time alone isn’t a measure of experience and expertise but it doesn’t hurt." - I agree wholeheartedly - again, I thought that was obvious.
I appreciate your ability to walk into the middle of the bee hive to knock it over with this comment.
It was amusing to read and I don’t disagree with what you said but I don’t feel super qualified to judge either.
I’ve only seen that silicon valley start up culture from the outside when my partner worked at a YC company while they were going through a later stage fundraising round.
It wasn’t as cartoonish as you make it sound at all but your impression at least seemed in the ballpark of true.
This is precisely my experience, and it's why I've been hesitant to leave this sector for so long. I could undoubtedly make more elsewhere, but the "badass" factor of what I'm working on would plummet significantly.
There have been many times the temptation of “badassery” has sorely pulled against my own moral/ethical stances. Lots of projects I’d have genuinely enjoyed working on, but which I’d have hated myself knowing all the ways the tech I’d have been building could be misused (and too often in the fullness of time, when developed by other people, was misused in exactly the obvious ways).
I’ll bet their prominence is fairly carefully vetted, and probably part of their job description rather than the more typical tech industry “personal brand building”.
But then that’s evidently true of Google (and FAANG in general) as well.
I'll bet pretty much nobody cares, inside or outside of the IC. It's a fun thing for people to joke about (they've been joking about Aitel for decades) but reality is mostly boring.
From what I've heard, the hours you cite also enable moonlighting. Some people do a few hours a night and make as much as their day job. Sometimes it's even as a contractor in a related field that might service their main role. At least they're getting paid for this "overtime"!
For some people, gov. agencies provide the following:
1. job security.
2. ideology.
3. attractive retirement plans.
4. interesting domain / things to work on.
If you have a burning passion for catching bad guys, then you go work for the people who are catching bad guys. Pretty simple.
Sure - the tech can (and probably will) be abused against regular folks, but for some people it's worth the price if it means busting another pedophile/human trafficking/terrorist/etc. ring.
It's a bit like asking why some doctors dedicate their professional lives to humanitarian work, when they could make 10x more as a GP in some cushy suburb. For some, ideology and mission counts A LOT.
It has nothing to do with contractors. The FBI buys a product from a company that does this stuff. They do not develop it, nor do they contract out the work to develop it. They buy an (essentially) off the shelf product that does this.
Of course they can ignore the gov't employee pay scale. Contractors get paid variable amounts, depending on what they're working on and for whom.
Many defense contractors (can't speak to the intel side of things) get paid 25-100% more than their GS counterparts, if they have an actual counterpart. Though that's not universal. Their pay is also not capped, like civilian pay is in the US federal government.
EDIT: For further context, a technical GS employee (engineer, computer scientist) will be in a minimum of a GS-12 position after a few years of experience (usually 3-5). Most technical positions cap at GS-14. GS-15 is the highest of the GS grades, but mostly reserved for management and a select smaller group of senior technical people. A GS employee programmer will usually be a GS-12 or GS-13 until they reach a more senior position (usually with a higher degree than just a BS and often 10-20 years of experience, tending towards the higher end of that).
Federal employees also get a 5% match on their 401k equivalent (TSP), 13-26 days of leave a year (rolls over, cap at 30), 13 days of sick leave a year (rolls over, no max), 10 federal holidays, a pension (1% of pay for each year of service, 20 years = 20%), and pretty decent insurance. If the goal isn't "get rich quick", it's not a bad gig.
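The benefits arithmetic quoted above works out like this (a sketch using only the figures given in this comment; not an authoritative benefits calculation):

```python
# Figures quoted in the comment above (FERS-style; illustrative only):
# pension accrues at 1% of salary per year of service, plus a 5% TSP match.
def pension_fraction(years_of_service: int) -> float:
    return 0.01 * years_of_service

def annual_pension(salary: float, years_of_service: int) -> float:
    return salary * pension_fraction(years_of_service)

# 20 years of service -> 20% of salary, e.g. $24k/yr on a $120k salary.
assert abs(pension_fraction(20) - 0.20) < 1e-9
assert abs(annual_pension(120_000, 20) - 24_000) < 1e-3
```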
I would bet Palantir/NSO/Cellebrite (and many other equally capable but far less publicly known defence contractors) pay their top software engineering talent FAANG-comparable rates.
They almost certainly also have just as many shittily paid employees that FAANG like to pretend they do not employ, offloading admin/janitorial/support staff to 3rd party “low caste” status employment.
I’d also guess BD and Sales roles at those companies pay at least as well as FAANG too. Commissions on defence contracts are worth a lot more than selling another bunch of SaaS seats to some promising VC backed startup...
When someone asks a question, why not just answer it instead of questioning the person and making them feel stupid about not knowing something?
At any rate, the U.S. government imposes many standards on federal government contractors including minimum wage standards, hiring practices, paid sick leave, a host of responsibilities that any contractor that wishes to do business with the federal government MUST adhere to. For example, in the area of construction all federal contractors must pay employees at a minimum, the prevailing wage including benefits for the locality of the construction site. It's not unreasonable to think and to ask what are the requirements and responsibilities that IT professionals working as federal contractors might also have. There are a ton of other requirements and conditions that you can read more about some of those requirements here:
Sometimes a little prodding can lead to more learning. Your answer on the other hand is so coddling that it could lead someone who's not paying attention to believing that the government pay scale actually does apply to contractors.
Everything I know about contractors and government work comes from Edward Snowden’s memoir, where he was paid like garbage ($120k as a senior, IIRC) to work as a contractor for the NSA. That’s why I thought it might matter. I’m sorry you don’t like my question.
While I admire Snowden, I suspect his “senior role” was more like a Big4 consulting “senior” than a distinguished engineer at Microsoft or Amazon. It’s entirely possible he’d never have made it past a phone screen for a FAANG role.
I’m not sure $120k (in 2013) counts as “paid like garbage” for his role. What were SREs getting paid at Google back then? I get the feeling that was a reasonably close match for the role he had?
When the Roman Empire wanted something like water engineering or gold mining, they found out who did it best, then invaded the territory and enslaved those who knew how to do it.
Steel traders from India would tell the Romans it came from China so they would not invade it.
When the US three-letter agencies want something from a company, they use a similar strategy: you are with us or against us. You can let us install backdoors in your software, or you are a terrorist who supports terrorists.
Replace “water engineering” and “gold mining” with “oil” and “lithium mining” to see that the world hasn’t changed much in some important ways in 2000 years, no matter how thin our phones are or how close to self driving our cars are getting...
Most of the skills required for government operations are supplied via companies holding government contracts. Those who work for the government directly, for low pay compared to what they could get in industry, simply aren't smart enough to figure out how to work via a 3rd party for much higher pay.
I wonder if Apple's relentless march towards eliminating all physical ports on the phone is at least in some small part an attempt to harden against these GrayKey / Cellebrite tools that can attack the phone.
I am not particularly familiar with them, but as someone who previously jailbroke my phone, several of the exploits used were originally delivered by plugging the phone into another device, i.e. through its data port. Eliminating this may be a way to harden the phone, for both better (these tools) and worse (the JB scene).
This of course does not prevent remote or semi-remote wireless attacks, such as through the cellular baseband.
Rather than hacking Signal itself, maybe they were able to access the iOS app preview files from the iOS app-switcher? I'd imagine the app-switcher (the feature when you swipe up to switch between recently used apps) works by overwriting a screenshot every time it's minimized. Maybe they were able to access this data directly or indirectly (or maybe even via iCloud).
There's a setting in Signal to hide the screenshot in the app-switcher, but it could still be triggered and stored somewhere. Or maybe they just got lucky and one of the guys had it disabled.
You should read the article before commenting next time ;)
There is a screenshot in there showing it directly parsing the Signal sqlite database. I am guessing he had not set an access passcode on the Signal app, so no key was required to decrypt the local message cache. Towards the end of the article the Signal creator hits the nail on the head, saying this is a phone security issue and has nothing to do with Signal's security.
It's written as a statement, not as a figure of speech. If it's not intended to be factual, it should be annotated as such. It literally claims theft as it stands, which makes the article seem juvenile in use of language.
Your interpretation of it isn't wrong. You're still wrong in asserting that it wasn't presented as fact in the article. Linguistically, it clearly is.
Signal uses a sqlite database with an encryption extension for any long term storage of data. The key to this database is kept in whatever the phone uses for key storage. So if you break the phone you get everything including the old messages. The moral is to not keep the old messages around and delete them after you are done with them and hope they actually end up deleted.
This is a hard problem simply because of the medium. If you do encrypted instant messaging you need to have everything fairly exposed all the time. You simply can't make it as secure as something like encrypted email where you can lock down everything very strongly and only unlock it when you are in a safe environment. The extra level of security also means that keeping the old messages around is a lot less risky.
For instant messaging you need to be able to receive messages even when the phone is locked. So the key material can't be destroyed and must be left exposed.
You could encrypt old messages separately, but the result would end up being very inconvenient.
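That "encrypt old messages separately" idea can be sketched as a two-key design: a hot key that stays available so new messages can be stored while the phone is locked, and an archive key derived from the passcode only at read time. This is purely illustrative Python — the XOR keystream below is toy crypto, not anything Signal actually does:

```python
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy XOR stream cipher for illustration only -- NOT real cryptography.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

# "Hot" key: must stay available so new messages can be stored while locked.
hot_key = b"hot-key-material"

# "Archive" key: derived from the user's passcode only at read time.
def archive_key(passcode: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), b"salt", 100_000)

live = keystream_xor(hot_key, b"new incoming message")
old = keystream_xor(archive_key("1234"), b"six-month-old message")

# Even if hot_key is extracted from a locked phone, archived messages
# remain unreadable without the passcode:
assert keystream_xor(hot_key, old) != b"six-month-old message"
assert keystream_xor(archive_key("1234"), old) == b"six-month-old message"
```

The inconvenience mentioned above falls out of this design: reading any archived message requires re-entering the passcode.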
You can receive encrypted messages and store them while locked. You only need the key material when reading the messages, which then uses the apps unlock pin to retrieve the key material from the Secure Enclave.
Quite how Signal deals with notifications I’m not sure; perhaps you’re right and the encryption key is persistent while the app is in the background so it can show you the sender and message on the lock screen? But it doesn’t _have_ to be that way. Wickr (which I trust way less than Signal) at least has its UI set up to imply that’s what it’s doing.
Fair enough, but my point still remains. You need the key material any time the phone is unlocked. Because we are doing instant messaging that is going to be a lot more exposure time than with some sort of offline system.
You should be able to lock that down to “only while the messaging app is in the foreground”, which limits the exposure time further, but certainly not to zero.
Hopefully, if you care about this, you use an app which can be configured to have its own passphrase/PIN, which hopefully is used to access properly secured keys from the Secure Enclave only when that’s entered. (Signal’s interface works like this; I’m only assuming/hoping it works that way underneath.)
After holding down the power and volume buttons on an iPhone, biometrics will not work, and the phone will require a passcode. Does this still count as AFU?
I think a better fix for this issue (because, of course, they can seize the other party's phone) is to use Signal's expiring/disappearing messages, so that they are (presumably) erased from both devices after the specified period of time.
I wonder if the method of "disappearing" the messages used by the app is vulnerable to forensic analysis or not.
Yeah, not using disappearing messages with Signal is just asking for this type of thing to happen. It's the number one reason I even use it in the first place.
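The app-side mechanics of disappearing messages are simple to sketch (all names here are hypothetical, and note the caveat in the comments — dropping a reference does not erase the bytes from flash, which is exactly the forensic concern raised above):

```python
class ExpiringStore:
    """Minimal sketch of disappearing messages (hypothetical design)."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._msgs = []  # list of (expiry_time, text)

    def add(self, text: str, now: float) -> None:
        self._msgs.append((now + self.ttl, text))

    def read(self, now: float) -> list:
        # Purge on every read so expired messages never reach the UI.
        # Caveat: this only drops the reference; it does not overwrite
        # the underlying storage, so forensic recovery may still work.
        self._msgs = [(t, m) for t, m in self._msgs if t > now]
        return [m for _, m in self._msgs]

store = ExpiringStore(ttl_seconds=3600)
store.add("meet at noon", now=0)
assert store.read(now=10) == ["meet at noon"]   # still visible
assert store.read(now=4000) == []               # gone after an hour
```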
Idea for an app, which would probably require a rooted Android: new USB peripheral detected (incl. charger) when screen locked => power off. I guess it could at least frustrate some of the data collection efforts.
Are there smartphones where all data - except some write-once installed programs - is stored encrypted, with a complex enough key to not be easily breakable by the authorities, on one hand, but being reasonably usable by you, on the other hand?
But would they do that and admit it when it's not really worth it? Don't you have to be really, really notorious for them to use this kind of evidence in court? If I were them and could decrypt something, I would prefer to keep that fact secret.
Presumably a defence lawyer could ask for demonstration of the technical details: prove that they have the ability to obtain such evidence and didn't just fabricate it.
I suspect it's a needed trade-off between security and practicality. I have no idea how "needed" it is though, can someone shed some light on this? Also, couldn't Signal add their own encryption layer?
Signal could add app-level encryption, but who would this serve? Signal can't do anything better than what the OS/hardware provides in terms of encryption. Even if they let you specify your own signal-specific password/encryption key:
* Non-technical users either won't use it, or will use a weak key
* Technical users are better served by making sure their device is secure and hard-locked with a strong passcode (tip: 5 presses of the lock button on iPhone wipes in-memory encryption keys, essentially exiting "AFU mode")
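The "weak key" point is easy to demonstrate: a key derived from a 4-digit PIN can be brute-forced offline in seconds, so a Signal-specific PIN layer would add little against a seized database. A sketch (the salt and iteration count are illustrative; real KDFs use far more iterations, but even that barely slows a 10,000-entry search space):

```python
import hashlib

def derive_key(pin: str) -> bytes:
    # Deliberately low iteration count for the demo; real KDFs use far
    # more, but the PIN space is still only 10,000 entries.
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), b"per-db-salt", 100)

target = derive_key("4821")  # key derived from a hypothetical 4-digit PIN

# Offline brute force over the entire PIN space:
recovered = next(f"{i:04d}" for i in range(10_000)
                 if derive_key(f"{i:04d}") == target)
assert recovered == "4821"
```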
> (tip: 5 presses of the lock button on iPhone wipes in-memory encryption keys, essentially exiting "AFU mode")
Is this the same thing as holding down the lock button and one of the volume buttons on one of the newer iPhones? I'm referring to this doc: https://support.apple.com/en-us/HT208076
Yes, it's basically a side effect of activating Emergency SOS. The five-press shortcut works on all iPhones as far as I'm aware. As the doc says:
"If you use the Emergency SOS shortcut, you need to enter your passcode to re-enable Touch ID, even if you don't complete a call to emergency services. "
I have an iPhone X and I have it set to not use FaceID for unlocking the phone itself.
But I temporarily enabled it now to test. Maybe I am pressing the power button wrong but rapidly pressing it five times does not prevent it from allowing FaceID to unlock the phone. Whereas power plus volume up button does indeed.
Btw, when I normally have FaceID disabled from unlocking the phone, does it wipe in-memory encryption keys when locked with a single touch to the power button or not? I was assuming that it did, but I realized now that this assumption might not be correct.
Why would criminals not use expiring messages? Bizarre that you’d go to these lengths to use e2e chat and then not expire your messages after say an hour.
If Charlie is selling drugs to Bob and Alice, expiring messages don't help Charlie if the others are finding ways to capture data on the screen before the message expires (which is very common for very innocent, non-malicious reasons).
Similarly, though I've not tested this with signal specifically, other chat apps' implementations of expired messages can be futzed with by simply disconnecting the phone from all network connections.
People who need true privacy, regardless of the reason, aren't using chat apps readily available from stores since the apps only prevent passive snooping, they do nothing to help establish circles of trust. Such business is either conducted out in the open without concern for who sees what (you can see this in countless pictures online when people openly sell stuff like weed), or such business stays off chat apps completely because there's no way to validate who is holding the phone on the other end. The transactions occur indirectly using proven safe methods for the courier and buyer (dead drops, mail tricks, etc)
Suppose I use a public key authentication scheme to communicate with a collaborator, Bob. To avoid any possible failures of technology, we've developed a simplified scheme that can be done with pencil and paper and sent through the mail. When Bob receives the message, he uses his private key to decrypt the message, writing each letter in the plaintext on a piece of paper so he can read it. After reading, he burns the paper completely with a small fire.
Are you going to insist that this last step, burning the plaintext copy, is not "valid security"? If you do, haven't we so mangled the meaning of security that it's not even intelligible any more?
I believe that "Let's not keep around unencrypted / minimally protected copies of our communications after we're done using them" is a perfectly valid security measure for two people to use. No, it doesn't solve "trust issues", in that one of them could simply refuse to delete the messages, but neither is it intended to.
If one were to need true privacy as you say, shouldn't there be benefit to overlapping security approaches?
I.e., use E2EE, use expiring messages, and use out-of-band challenge/accept (i.e., the recipient has to mention a keyword or the conversation stays plausibly deniable); all seem applicable.
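The out-of-band challenge/accept idea above can be sketched as a tiny state machine: the conversation only "commits" once the other party echoes a pre-arranged keyword, proving shared out-of-band context. Everything here is hypothetical:

```python
class DeniableChannel:
    """Sketch: stay noncommittal until the challenge keyword is echoed."""

    def __init__(self, keyword: str):
        self.keyword = keyword
        self.committed = False
        self.log = []

    def receive(self, msg: str) -> None:
        if not self.committed and self.keyword in msg:
            self.committed = True  # challenge answered out of band
        self.log.append(msg)

    def sensitive_allowed(self) -> bool:
        # Until the keyword is seen, keep the exchange plausibly deniable.
        return self.committed

chan = DeniableChannel(keyword="bluebird")
chan.receive("hey, long time no see")
assert not chan.sensitive_allowed()
chan.receive("saw a bluebird out back yesterday")
assert chan.sensitive_allowed()
```

Of course this only layers on top of E2EE and expiry; it does nothing against an infiltrator who learns the keyword.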
The Signal app, to the best of my understanding, does have a built in dead man's switch in the form of the PIN system. You can apparently set the time range in the settings.
I hate to be a spoilsport, but generally I don't need protection from the FBI, NSA, or CIA. I need the convenience of unlocking my phone/laptop quickly with a fingerprint, even after a restart. XKCD nails this: https://xkcd.com/538
I'm MOST concerned with Google/Facebook continuously circumventing laws and violating my opt-out preferences. I'd like to have a null advertising ID for instance. Can't do that.
And besides, you don't own root access on your devices. Apple/Google does. To think that a userspace app is secure where Google/Apple controls the kernel, or even something basic as remote screenshots is sort of silly.
> I hate be a spoil sport, but generally I don't need protection from the FBI, NSA, CIA.
Awesome. Good for you. However, there are people out there who are busy changing the world, holding the powerful to account, and generally being involved in society in ways that are less safe and more interesting than yours. This is about them, not about you.
The US has problems, but the police won't hit anyone with wrenches to get them to talk. Torture taints the evidence obtained.
The reason spy agencies can do it is that they're not law enforcement. That's why they have to use parallel construction, to launder the evidence trail.
They're not going to hit you with a wrench... it's a comic.
However, they will hit you with civil asset forfeiture, freeze your accounts so you can't pay lawyers, run your name through the news so you lose your job, jail you until your trial, etc.
Would switching to say, Protected Unless Open have a negative performance impact? Otherwise, it seems like a kind of obvious oversight not to use a more restrictive data protection class. I'd be curious to know the Signal team's rationale for using PUFUA.