[Eric] Grosse echoed comments from other Google officials, saying that the company
resists government surveillance and has never weakened its encryption systems to
make snooping easier — as some companies reportedly have, according to the Snowden
documents detailed by the Times and the Guardian on Thursday.
“This is just a point of personal honor,” Grosse said. “It will not happen here.”
Some folks are inclined to distrust Google, but there are people here who really, really care about security.
The revelations that NSA is running a HUMINT program should make it very clear that you can't trust everyone at Google or any other major provider. Those risks are mitigable, but it's expensive and I doubt most places take sufficient steps to prevent it.
Even without that, trusting companies because their employees are honest is hard.
There are some people at the NSA who really really care about privacy and not spying on US Citizens and believed we didn't do so. In fact, most of the ones I've met. However, with sufficient compartmentalization, they don't know what they or others are truly doing. Same can be true for any company.
Are you working on Google's data liberation system, meant to keep users from being trapped in Google's systems, or are you working on the NSA's data exfiltration system for Google's data? It's not always clear.
Google (and other organizations) collecting and securing such vast troves of information -- and building the technology to analyze it quickly -- obviously makes them hugely valuable to attackers and defenders alike, since the data they are storing is the very information that attackers/defenders try to keep from each other.
Encrypting it and securing it very well at a technology level means that the human element (I'd argue) becomes the easiest way to get access to it - i.e. someone with sysadmin access, DB access, or just working on a project where the APIs and/or tools available can produce valuable information. This is true even if the 'player' (with system access) has to be 'recruited' by the attacking or defending team some time after taking up the job.
Couple this with the fact that even the security agencies themselves are prone to corruption, malfeasance, human error, (no-one is perfect), and insiders, and you could easily end up with a confusing mess. Bear in mind that everyone wants their agents to operate and be able to communicate back without detection, again regardless of which team.
Compartmentalization must also come into conflict with inter-agency sharing rules -- at some level, people need to know what is going on and make decisions -- and trust must be a big issue for many of these groups - they probably spend a ton of time watching themselves and others, and watching for information leaks / canaries / spread of misinformation.
I'm certain there'll be some fascinating stories eventually from all of this - it all continues to make me believe that concentration of power and information (which I think are continuing as a trend) only end up in creating dangerous situations, and that decentralization is ultimately the preferable way to go (in that it prevents a small number of people from having too much power/influence/control, and equally protects those same people from being targets themselves).
I'm not aware of much successful recruiting. Most moles turn on their own. The game for the intel guys is like baseball: a lot of waiting and then serious hustle to make sure a fresh mole gets trained, vetted, rendered effective without getting caught.
Depressingly, that makes it sound like an everyday thing which is just monitored for. Makes sense, I suppose, given how many information sinks there are nowadays.
True, yet I imagine it shouldn't be difficult to single out the employees who present the greatest HUMINT risk and apply extra scrutiny. Any employee who holds any sort of top-secret clearance, who has worked for intelligence agencies or their contractors, or who served in the military but not out in the field, is potentially a mole.
I'd find it hard to believe that there are moles for governmental intelligence agencies who don't fit that profile.
People come to Google from all paths of life. For all you know, some 20-something long-haired unix hotshot could have been busted for drugs at some point and "repurposed" as a mole in exchange for leniency. And there's always the classic sex honeytrap for married men, which will never go out of fashion.
Real spooks don't carry a conscience, they'll exploit anything they can to get their grubby hands on the data they need.
"For all you know, some 20-something long-haired unix hotshot could have been busted for drugs at some point and 'repurposed' as a mole in exchange for leniency."
That doesn't mean this development is meaningless and should be dismissed, it mitigates real concerns. As for the human element, that is probably the hardest to defend against but enacting sound engineering solutions is more than half the battle.
Since Google is able (and willing, when asked by the government) to decrypt everybody's email at will, and continues to build software that maintains their absolute power to do this, I really don't give a f@#k whether they promise to use 256 bit encryption, 512 bit encryption or 23439287239 bit encryption.
There's still a gigantic difference between the government being able to "vacuum up" everything (weak/no encryption) from everyone, versus the government having to ask for communications from specific users.
If you are actually trying to hide something from a targeted government attack, you certainly don't want to use any hosted services like Google's.
If, however, you are merely trying to avoid the government passively sweeping up all of your data, searching through it, and maybe subjecting it to further scrutiny due to it containing the wrong keyword, it helps to know that it's encrypted in transit, and that in order to decrypt it, someone has to actually present a warrant to Google.
Of course, there's the additional problem of National Security Letters, which aren't really warrants and which come wrapped in secrecy.
These problems can be attacked on multiple fronts. We can improve cryptographic security, and work on more decentralized approaches to online services, and rein in the NSA's power at a legal level, and so on.
Yes, but (as others have already pointed out) the takeaway point from this press release shouldn't be "Google is doing great things to prevent spying" and should instead be "Google admits they have been sending sensitive customer data between data centers in plaintext."
For sure, but in the case of google this probably doesn't apply.
From what was published recently we know NSA has proven methods for bypassing encryption, namely getting the keys used for encryption (so they can decrypt everything) or getting access to the content before encryption or after decryption.
To me, this last move by Google is a PR attempt at regaining people's trust.
I'm so bored of hearing the accusations of PR stunts.
They crop up in every submission detailing an action taken by Google with regards to the Snowden/Prism/NSA revelations. Is it so ridiculous that a large corporation should seek to ameliorate its image in the eyes of users and shareholders?
PR has become such a dirty word.
Of course it would be best if all these actions were taken earlier, purely as the result of a strongly held principle. However, when presented with the realities of public businesses operating on a global scale - I am glad that such steps as those detailed above are taken: at whatever stage, and for whatever reason.
The tinfoil hat brigade needs to, as the old saying goes, "stop seeing reds under the beds" and occasionally ... just occasionally ... take the facts presented to them.
In times when misinformation and confusion is so wont to proliferate, attempting to discern true motive is almost ridiculous - condemnation on the basis of any such discernment doubly so.
When Google does something that makes it impossible for them to hand over certain types of data to the NSA, either by not collecting it, or making it so that only the user is able to decrypt it, wake me up. Until then, it's a PR stunt.
I am not disputing the fact that a major motivation for their actions is PR. I am suggesting that action as a result of PR pressure is still action - vastly preferable to meek acceptance of the status quo.
That being so - dismissing something as "just PR" misrepresents the actual benefits something like this may confer.
IMAP/POP3 has always been a gmail option, which allows local PGP use. Chrome sync allows you to set your own encryption passphrase (provided you trust the binary doing the encrypting...). You've been able to share encrypted files on google docs/drive since they added arbitrary file storage. Etc.
Chrome sync is probably the strongest example that I can think of fitting your criteria, since it's built into the product itself, but a lot of this just comes with the territory of web-based apps.
They haven't done anything there though... They've just provided a standard IMAP service, and a standard file syncing service...
When they provide an option in GMail for people to upload their public PGP keys, and then start encrypting email on the way in, and don't store any non-encrypted versions of those emails, and build PGP support into Chromium for accessing those emails. Then they will have done something worth noticing.
A client-side tool which builds a local index as messages are decrypted to be read for the first time. The index is itself encrypted and incrementally synced between clients.
That took me less than 5 seconds to think up. Google can spend time and money thinking up better solutions if they want to actually do something.
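To make the proposal concrete, here is a minimal sketch of the kind of client-side tool described above: a plain inverted index built incrementally as messages are read, with a merge step standing in for sync between clients. All class and message names here are invented, and the encryption step is only indicated by a comment, since the interesting part is the incremental index.

```python
import json

def tokenize(text: str) -> set[str]:
    return {w.lower().strip(".,!?") for w in text.split() if w}

class LocalIndex:
    """Incremental client-side inverted index. A message is indexed
    once, as it is decrypted for reading; the serialized index is the
    blob that would be encrypted and synced between clients."""
    def __init__(self):
        self.postings: dict[str, set[str]] = {}
        self.seen: set[str] = set()

    def add(self, msg_id: str, plaintext: str):
        if msg_id in self.seen:          # incremental: skip already-indexed mail
            return
        self.seen.add(msg_id)
        for word in tokenize(plaintext):
            self.postings.setdefault(word, set()).add(msg_id)

    def search(self, word: str) -> set[str]:
        return self.postings.get(word.lower(), set())

    def merge(self, other: "LocalIndex"):
        """Sync step: union in the postings from another client's index."""
        self.seen |= other.seen
        for word, ids in other.postings.items():
            self.postings.setdefault(word, set()).update(ids)

    def serialize(self) -> bytes:
        # In a real tool this blob would be encrypted (e.g. with a key
        # derived from the user's passphrase) before leaving the device.
        blob = {w: sorted(ids) for w, ids in self.postings.items()}
        return json.dumps(blob, sort_keys=True).encode()

laptop = LocalIndex()
laptop.add("m1", "Quarterly report attached, please review.")
phone = LocalIndex()
phone.add("m2", "Dinner plans: please confirm by Friday.")
laptop.merge(phone)                      # incremental sync between clients
print(sorted(laptop.search("please")))   # ['m1', 'm2']
```

As the replies below note, this ignores the hard parts (index size, searching mail you haven't downloaded, plaintext copies in intermediate systems), but it shows the basic data flow the comment has in mind.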
If there's any sort of processing on incoming data, then there's going to be a lot of unencrypted copies floating around in various caches and intermediate staging systems. A secure system requires encrypting the data right off the wire, before it's stored anywhere.
Search indexes are very large -- you don't want to double or triple the amount of storage your email client uses. Also, being able to search only mail that you've downloaded and decrypted is a terrible user experience. I'd estimate over 60% of the mail to my personal inbox is from some automated system, rather than directly from a human, and I typically don't look at them unless a search hits them.
It takes 5 seconds to think of solutions with terrible security and usability characteristics. Thinking of a system that will be a measurable improvement in security and will actually be used by people is much more difficult.
These are all easily solvable issues. But to get back to the point of this thread: Google has done nothing to help secure peoples email.
The fact that you can't identify any ways in which they could, or refuse to acknowledge them, or think they're too difficult for a multi-billion dollar company makes no difference to the point under discussion.
I think it is fair to assume that the guys who set up and run Google, Apple, MS, Yahoo, Facebook, etc., all started with great, honorable intentions. Yes, even the hated Bill Gates. I choose to believe that is true. I believe these people were once us. Up until recently, it's been a cat-and-mouse game of how they can get money from customers and how customers can mitigate that. This to me is fine; it is business, and they all need financial structures to survive.
Of course, what has happened now is that the jack boot of government has poisoned the well, and I can't imagine any group of people more upset and angry than these pioneers. I bet if we could talk to any of them off the record, they would be as annoyed as "we" are, if not more. After all, it's their baby being ruined, not ours.
I would add corporate high finance to that as a poison too, but again, that's just money. It does soil the, er, purity of things, but it does not threaten freedom and liberty.
> Up until today, Google didn't even encrypt the data.
That's not what the article says. The new encryption is specifically for backend datacenter-to-datacenter traffic over leased lines. But even before that project, there was lots of strong encryption being used all over Google: to encrypt user data on servers' hard drives, to encrypt data going between browsers and servers, encrypting tape backups before sending them to offsite storage facilities...
Your criticism is valid and I completely agree... but I stand by what I wrote with regard to their traffic over leased lines.
'We just sent data in the clear over leased lines so the NSA could read whatever they wanted. But the encryption we never used was never weakened.'
This is nonsense.
Not only that, but when the data is transmitted, that is exactly when Google has the least amount of control over it... ie: that's when encryption is the most important. Yet, they chose not to encrypt the data, and then give everyone a story about their 'personal honor' of keeping things secure. This is a joke.
Depends on your definition of "leased lines". A privately-operated layer 1 backbone over dedicated dark fiber has traditionally been considered to be pretty secure (up until recently, anyway)
I was always curious how HUMINT would look if you were inside the organization as a worker-bee.
In my experience at a certain large SW company in the pacific northwest, I do know that core crypto code, the actual workhorse functionality, is typically walled off from the general developer population. The rationale given is that there are foreign nationals on staff who are not permitted to look at that stuff. That makes sense given the export laws in place.
All the security-like code I saw above that layer was good, to my non-security-trained eyes: Honest use of crypto algorithms, responsible bug fixes and regular and nitpicky reviews of protocols, file formats, APIs, and the code itself. For several shipping products I had confidence that the code we checked in was the actual code that shipped.
For the lower layers (an ideal place to introduce weaknesses):
- The general developer population never sees them
- Even if the sources are utterly honest, the build process might hide the introduction of weaknesses (a variant on "Reflections on Trusting Trust"), or the build machines might ship different bits, or weaknesses might be patched-in later (even after customers get machines) by the OS update infrastructure.
This is the kind of thing I'd HUMINT if I had a mind to.
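One of the few counter-checks against the "build process ships different bits than the reviewed sources" risk mentioned above is independently rebuilding and comparing digests of the shipped artifacts, which is the core idea behind reproducible builds. A minimal sketch of that comparison step (the artifact names and the published-digest dictionary are invented for illustration):

```python
import hashlib
import tempfile
from pathlib import Path

def digest(path: Path) -> str:
    """SHA-256 of a build artifact, streamed so large binaries are fine."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def compare_builds(published: dict[str, str], local_dir: Path) -> list[str]:
    """Names of artifacts whose locally rebuilt bits differ from the
    digests published for the official build."""
    return [name for name, expected in published.items()
            if digest(local_dir / name) != expected]

with tempfile.TemporaryDirectory() as d:
    artifact = Path(d) / "libcrypto.so"
    artifact.write_bytes(b"reviewed build output")
    published = {"libcrypto.so": digest(artifact)}  # what a vendor would publish
    clean = compare_builds(published, Path(d))      # empty: bits match
    artifact.write_bytes(b"reviewed build output\x00patched")
    tampered = compare_builds(published, Path(d))   # names the altered artifact
```

Of course, this only helps if the build itself is reproducible and the published digests are trustworthy; it does nothing against weaknesses already present in the reviewed sources.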
But do all of them? In my personal experience, such tasks will be given to employees who are likely to perform them.
For example, at a past sysadmin job, I was asked about the technical feasibility of monitoring a certain employee's computer use; management suspected him of some minor infringement. I refused to assist in the matter on moral grounds and was reprimanded. The task was given to a colleague of mine who had no qualms about it. Next time, they went straight to him.
And the more complex, distributed and large a system is, the more people are in positions where they can compromise it. It takes only one person to break the whole system (which is basically what just happened to the NSA). Do you trust everyone who has or can gain access to your SSL private key? Everyone who manages your network?
I'm not convinced that this is not Google's version of "trust us". Keep in mind there is no PR loss for Google to adopt a pro-encryption stance now. If they are really serious about this, they would a) stop trawling emails and b) help develop tech for seamlessly encrypting both in-flight and at-rest email.
Meanwhile, Google Argues for Right to Continue Scanning Gmail
"This company reads, on a daily basis, every email that's submitted, and when I say read, I mean looking at every word to determine meaning," said Texas attorney Sean Rommel, who is co-counsel suing Google.
Why are you posting this shit on HN? Everyone here knows how contextual advertising in gmail works, and excepting those too young to have been aware back then, have known about it since 2004. If you have a point, make it, but scare quotes specifically made to induce an emotional reaction from those without technical knowledge really have no place here.
I'm not sure what your point is. If you don't want your email accessed, don't send it unencrypted from your client, and definitely don't send it via a service that has features (search, spam, Google Now, etc.) and is paid for by a system (contextual advertising) that explicitly accesses the contents of your email.
Download Thunderbird and a PGP client[1]. Boom, done.
Use another email service. Boom, done.
I'm not objecting to the idea that you'd find it objectionable to have your email contents used for advertising. I'm objecting to a useless quote that tries to turn this into a soundbite-off instead of an actual discussion (little hope as this thread has).
Diluting the expectation of privacy around e-mail in general has implications under constitutional law. The 4th amendment only protects against unreasonable searches by the government. The use of mail connotes private communication, in contrast to a postcard (which is presumed public). Implied consent to forfeiting the right to stop third parties reading your e-mail is not something the public has an interest in establishing, regardless of the purpose of such third-party incursion. I don't want to get into a side-bar explanation or legal debate TBH, but it's not mindless fear mongering.[1]
No, this is not correct. The particulars of your agreement with a third-party for storage of your email does not extend government rights to examine that data (the ads in your inbox are as non-public as the email in there too). Even the horribly flawed ECPA recognizes that (it buttresses it, in fact). Moreover, Google[1] is currently standing behind the US v Warshak shield and requiring warrants for email contents.
The problems with the third-party doctrine are much more fundamental than the ways in which that third-party is storing and displaying your data, activities that continue for any webmail client even in the absence of ads when doing spam filtering, searching, etc. Merely the fact that a third-party is involved at all is enough for the outdated sections of the ECPA to rear their ugly heads. Here's hoping the Supreme Court takes up a case like US v Warshak soon.
You're talking statutes, not the constitution. Obviously the constitution trumps both statute and executive readings. "Reasonable" is per the constitution, and it is plastic in case law. That's why the questions are important, fundamentally. In any event, it's worth keeping in mind the right level of abstraction.
I'm talking both. The ECPA was important in that Congress avoided decades of court cases by making explicit the protections afforded electronically stored media, though they did not extend those protections far enough (which today in practice weakens protections that may have been more clearly delineated by now had the ECPA not been enacted).
Constitutional protection superseding (among other things) the fairly arbitrary 180 day requirement for a warrant set by the ECPA was clearly recognized by the Sixth Circuit in the US v Warshak second (criminal) case, stating that "The government may not compel a commercial ISP to turn over the contents of a subscriber’s emails without first obtaining a warrant based on probable cause."[1]
In both US v Warshak cases, though, the Sixth Circuit emphasized the higher protection afforded content over transactional data just for being content by the tests established by both Katz v US and Smith v Maryland. They laid out that even the supremely terrible precedent of Smith v Maryland (which is the proud parent of allowing the government to seize "metadata" without a warrant) did not allow the government to "bootstrap" limited access to full access, including the access needed for automated processing of email contents by the email provider:
"The government also insists that ISPs regularly screen users' e-mails for viruses, spam, and child pornography. Even assuming that this is true, however, such a process does not waive an expectation of privacy in the content of e-mails sent through the ISP, for the same reasons that the terms of service are insufficient to waive privacy expectations. The government states that ISPs "are developing technology that will enable them to scan user images" for child pornography and viruses. The government's statement that this process involves "technology," rather than manual, human review, suggests that it involves a computer searching for particular terms, types of images, or similar indicia of wrongdoing that would not disclose the content of the e-mail to any person at the ISP or elsewhere, aside from the recipient. But the reasonable expectation of privacy of an e-mail user goes to the content of the e-mail message. The fact that a computer scans millions of e-mails for signs of pornography or a virus does not invade an individual's content-based privacy interest in the e-mails and has little bearing on his expectation of privacy in the content. In fact, these screening processes are analogous to the post office screening packages for evidence of drugs or explosives, which does not expose the content of written documents enclosed in the packages. The fact that such screening occurs as a general matter does not diminish the well-established reasonable expectation of privacy that users of the mail maintain in the packages they send."[2]
I have not personally seen a good argument for differentiating between spam filtering and contextual advertising in terms of access. Regardless, this is a clear argument for automated access being immaterial to the question of an expectation of privacy of the contents of an email.
> I have not personally seen a good argument for differentiating between spam filtering and contextual advertising in terms of access.
Are you seriously proposing free e-mail and/or a spam filter is a good trade for one of the major pillars of the Bill of Rights? So goes my spam filter, so goes the constitution? What's ironic is that the spam guys use the 1st amendment to justify the spam (same as junk mail and the credit rating agencies).
> Are you seriously proposing free e-mail and/or a spam filter is a good trade for one of the major pillar Bill of Rights?
What? Where on earth are you getting that from what I'm writing?
I'm saying that the Sixth Circuit has ruled that just because you use an email provider that scans your email contents for things like spam (or ads), you have not given up your 4th amendment right for that content to be secure against searches without a warrant.
What you quote is me arguing that your premise that contextual advertising is somehow distinct compared to scanning for spam both in function and legal implication is flawed. The next statement states that even if such a distinction could be made, the above quote from US v Warshak I is a perfect explanation of why agreeing to automated scanning of your email does not imply consent to an abrogation of your rights.
I really don't see how I can be clearer than "The government also insists that ISPs regularly screen users’ e-mails for viruses, spam, and child pornography. Even assuming that this is true, however, such a process does not waive an expectation of privacy in the content of e-mails sent through the ISP...."
> What you quote is me arguing that your premise that contextual advertising is somehow distinct compared to scanning for spam both in function and legal implication is flawed.
It's flawed because that premise is at once irrelevant and falsely asserted. Neither a spam filter nor contextual advertising is inherent to private communication.
What is relevant to private communication is that it is private. If I CC Larry Page on a "private and confidential" e-mail to my lawyer, Mr. Page is a party to the conversation. It is no longer "private" nor "confidential". If every e-mail sent to a Gmail account is by default CC'd to Mr. Page, none of those communications are "confidential". By (statute) law, the senders are forfeiting attorney-client privilege by "opening" the communication to a third party. It's Google's stated position that a person sending an e-mail to a Gmail account has no "reasonable expectation of privacy". And this is what that means. It means that Google (wants to) treat your mail as if Mr. Page were reading it, and it believes that users are in fact waiving their expectation of privacy by using Gmail or communicating with Gmail recipients. That presumably includes senders of mail who have not agreed to Gmail's T&Cs (i.e., who presumably have no reason to know what they entail).
It is the insertion of an active third party into the communication which is the problem. It's a problem because it damages the inherent idea of "mail" as a private sender-recipient relation (the post office =/= an active recipient). And from here, the problems start.
In any event, I think you are missing the legal abstraction at the core of the analysis. It's not a problem you can wish away, nor is it one you can trust current statutes or case law to protect against into the future. That is the nature of "reasonable" modifiers; they are ultimately contextual. And here, we have self-interested parties strategically eroding the context of the 4th amendment, to the detriment of the public at large.
> What is relevant to private communication is that it is private
This is not the basis for 4th amendment protections. You are also confusing things: attorney-client privilege comes to us from Common Law, not the 4th amendment and is not a good basis for discussing what is private, as there are many more restrictions on it (a warrant can almost never compel your attorney to testify against you, for instance, which is not the case for almost all normal communications).
The mere existence of a third party does not negate the reasonable expectation of privacy, otherwise no third party communication system would be safe from warrantless searches. What has long mattered is the reasonable expectation of privacy, which under current case law does not always but in many situations does override any details like the extent that a third party is involved in that communication (for instance, cc-ing Larry Page on an email does not make a message suddenly have no expectation of privacy any more than sending it to anyone else, as the limited list of recipients makes it on its face not for publication or public posting).
> It is the insertion of an active third party into the communication which is a problem. Its a problem because it damages the inherent idea of 'mail' as a sender-recipient private relation (post office =/= an active recipient). And from here, the problems start.
Again, this is wrong. The fact that there are people at the post office, people that could open your mail, people that do actively examine your mail for things like drugs or bombs does not negate your 4th amendment protections. Are you reading anything I'm writing? That's directly addressed in the quote three posts above this one.
> In any event, I think you are missing the legal abstraction at the core of the analysis.
This is just silly. What you are suggesting is that the third party doctrine has overruled all, and that merely using an email provider that scans for spam or looks for abuse has left you open to warrantless searches (which is almost all of them except ones you run yourself, since open mail relays are virtually extinct). Not only have you provided no evidence for this belief, the ECPA says you are wrong for emails newer than 180 days, and it looks increasingly unlikely that the courts will agree with you for emails that are older.
You appear to be confusing Google saying that people sending email to users of gmail expect their emails to be handled by the machines that run gmail (or they should, because that's the only way it can physically work) with an argument about the 4th amendment. Breathe easy. That is not the case. Whether or not Google is breaching the plaintiff's expectation of privacy (and it would, again, be bad news for every email provider out there if they are found to), scanning your email is not publicly posting your email, and this tort case has no bearing on your 4th amendment protections from searches by the government. This was established in Katz v US 46 years ago, and remains true today.
If nothing else, it reminds people that it's going on. Which is a good thing. Myself, with adblockXYZ installed, often forget or don't notice. And when these do come up, I'm grateful because it reminds me to stop being a lazy bastard and set up an email server (as I've done many times for others) for myself. I appreciate that.
I guess. There are plenty of other people in this thread who have brought up the fact that if your email service can read your email you are inherently vulnerable without quoting the plaintiff's attorney in a suit about the practice talking to ABC news. Talk about your lowest common denominator.
I didn't come here to be propagandized at, so, yes, I will object to the dumbing down of debate to appeals to emotion (especially on a subject that's obviously already so emotionally charged).
magicalist: It was a widely reported national headline because it came from AP. It was not a story by some perma-tanned fake news-anchor. Alternative sources, same headline. This is also very much a PR war, on both sides (Google, NSA, etc.), in case you haven't also picked that up. Everyone is in the business of manufacturing headlines.
Um, you do realize that pretty much every single mail service provider is "scanning" their customers' email to screen out spam, right? That's also an algorithmic analysis of the body of the e-mail, and it's providing a service that most customers appreciate (since they generally don't want to be swamped by hundreds of Viagra and "make money fast" emails every day).
Spam scanning can be thought of as a continuous stream without a given email assigned to a user, whereas contextual scanning for the purpose of advertisement necessarily ties emails to you.
Maybe it's just a semantic difference but I would argue that that is a sufficiently big differentiator.
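A toy sketch of the distinction drawn above, not a claim about how any real provider works: both paths algorithmically "read" the message body, but only the advertising path stores what it inferred against a specific user. The word lists, function names, and profile store are all invented for illustration.

```python
SPAM_WORDS = {"viagra", "lottery", "winner"}
AD_TOPICS = {"camera": {"lens", "shutter", "camera"},
             "travel": {"flight", "hotel"}}

user_profiles: dict[str, set[str]] = {}   # state only the ad path accumulates

def spam_score(body: str) -> float:
    """Stateless: scores this message and remembers nothing afterwards."""
    words = set(body.lower().split())
    return len(words & SPAM_WORDS) / max(len(words), 1)

def target_ads(user: str, body: str) -> set[str]:
    """Stateful: the inferred interests are tied to the user and kept."""
    words = set(body.lower().split())
    topics = {t for t, kws in AD_TOPICS.items() if words & kws}
    user_profiles.setdefault(user, set()).update(topics)
    return topics

score = spam_score("you are a lottery winner")         # 2 hits / 5 words = 0.4
topics = target_ads("alice", "new camera lens deals")  # infers {"camera"}
```

Whether that difference in retained state matters legally is exactly what the thread above disputes; the code only makes the mechanical difference visible.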
This quote makes it sound like Google employees (i.e., humans) are reading your email. It's an algorithm that is processing your email, as most (all?) of us are aware. It's not like there's some employee sitting there reading your stuff and then determining you're in the market for a new camera.
They have the infrastructure for a fully paid version set up now, with the ability to buy more space for your Google account. Maybe add a client-side encryption option along with no ads. It would make things like server-side search incredibly more difficult, though.
Well, it reflects well on Google that they say this, but in the end, the only "truly secure" host is one that cannot decrypt your data even when compelled to do so.
Bottom line is, if a capability exists, it can be exercised, willingly or not, coerced or through oversight. Anyone willing to put data in the cloud should be conscious of this no matter what the provider says.
It's not really Google's call. If they're issued an NSL the opinions of any employees are irrelevant. Either they comply or someone goes to jail.
I, and many others, would appreciate it if they fought such things. And if they would fight the good fight on the policy fronts. But ultimately Google does not make or interpret the law.
I am sure there are people there who do care about security and this is a victory, but it is a small one and not one that really saves their reputation.
The problem is that the NSA presumably gets access to the information before it is encrypted, so this does not limit what the NSA can get from Google. What it does do, though, is possibly obscure the traffic to some extent when it comes to cracking, but then the NSA could probably just disregard the traffic between data centers.
The real victory is that other companies are more likely to follow Google and this may have an impact.
Completely off topic, but trying to scroll the way HN handles 'quoted' / 'code' bits is terribly painful. Has anyone solved this? I'm using 'Hacker News Enhancement Suite' but this problem still makes me unhappy and constantly causes me to rewrite such blocks in my own posts.
“This is just a point of personal honor,” Grosse said. “It will not happen here.”
Some folks are inclined to distrust Google, but there are people here who really, really care about security.
You're bringing us to tears, Mr. Googler. I remember the "personal honor" or whatever about being adamant about providing the best results (now full of Google crap and advertiser sites), about not mixing ads with content (need I show examples?), when in many pages everything is ads trying to trick users into clicking them. Oh, and all that crap about doing the right thing, "not evil" or whatever.
Google as a corporation is just as scummy as other corporations, if not more so, and will do anything for a dollar. So I trust them. Not. Sure, there are decent people there, just as there are at Oracle or IBM, but most will go with the flow and even defend the new policy.
Personal honor is what Edward Snowden showed. Getting caught for selling out your users and then endeavoring to do something about it is not honorable. It's manipulative and pathetic.
I tried this, but it keeps forgetting the choice to use the classic maps and I always have to go to the options and pick it again. It's not such a big deal, but yeah, it's annoying.
Web apps and me are constantly at odds because I run all sorts of privacy extensions, I disabled Local Storage (via folder permissions) and I don't let cookies persist. So, I just stopped using web apps wherever possible in favor of native ones. It pisses me off too because the web (and the Internet) could be so much better than this.
Anyway, for the longest time - even with a bare browser (no extensions) - YouTube wouldn't save certain settings (annotations disabled, video quality), even while I was logged in.
Hopefully, it will work like Google Groups. Meaning that for the next two or three years, we will be told that "The old Google Maps will be going away soon," but it won't actually happen.
It's so nice of Google to give their users so much advance warning of incoming suckage. I guess that's part of the whole don't-be-evil mission statement. Kudos to them.
I'm not sure if I can assuage your anger, but the Transparency Report is upfront that there are reasons not all requests for user data are disclosed. The data isn't purported to be comprehensive, though we'd like it to be as comprehensive as possible.