I guess these types of vulnerabilities could be placed intentionally. It would allow certain agencies to gain access via an "exploit" while the company can still claim it supports user privacy. These companies are under pressure from governments (like the recent Australian law requiring access to encrypted messages). Seems like a convenient solution for both company and governments.
The industry calls this a "bug-door" and yes, plausible deniability is key. Most of this has been a hypothetical possibility, though. This case doesn't fit that bill: the vendor discovered the bug was being exploited by another country, blocked the exploit against a user, fixed it, and alerted the authorities. It would be more peculiar if a US-based company were selling the spyware.
The update can be analyzed to see what was changed, even if we only have the binary executable. If we know an app contains intentional bugs, looking at where the update made changes narrows the search dramatically and can surface the bugs even faster. There are automated tools that help with this too, e.g. fuzzing. The updates can also hint at where the previous bug was and what to look out for in the future.
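As a toy illustration of the idea (real patch-diffing tools such as BinDiff or Diaphora work at the function/control-flow level, not raw bytes), here is a minimal sketch that compares two builds chunk by chunk to locate the regions an update touched. The filenames and the patched byte are made up for the example:

```python
def changed_regions(old: bytes, new: bytes, chunk: int = 16):
    """Return start offsets of fixed-size chunks that differ between two builds."""
    size = max(len(old), len(new))
    offsets = []
    for start in range(0, size, chunk):
        if old[start:start + chunk] != new[start:start + chunk]:
            offsets.append(start)
    return offsets

# Toy "binaries": pretend the update patched a single byte at offset 20.
old_build = bytes(64)
patched = bytearray(old_build)
patched[20] = 0x90  # hypothetical patched instruction byte
print(changed_regions(old_build, bytes(patched)))  # -> [16]
```

An analyst would then disassemble only the changed regions instead of the whole binary, which is why "silent" security fixes in updates are routinely rediscovered within days.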
So, nope. Introducing security bugs and backdoors just makes it insecure for everyone.
Oh, so you are reverse engineering and thoroughly analyzing every WhatsApp update? That's reassuring. Because otherwise I'd have said nobody does this on a regular basis, which would mean it's still a viable method.
So yes, I'm pretty sure there are various teams analyzing each and every update, including white-hats such as Google, black-hats, and nation-states such as China and Russia.
There was also an interesting article on Hacker News a while back demonstrating the technique; there are some nice tools for this. Sorry, can't find the link now.
There's also the curiously peculiar, and consistent, wording from companies that deny their involvement in programs such as PRISM [1]. As people seem to have forgotten about PRISM, NSA slides not meant for public consumption stated it enabled "extensive, in-depth surveillance on live communications and stored information" with examples including email, video and voice chat, videos, photos, voice-over-IP chats (such as Skype), file transfers, and social networking details.
But here's the fun part. Here are the corporate denials:
- Google: "We have not joined any program that would give the U.S. government direct access to our servers."
- Apple: "We do not provide any government agency with direct access to our servers."
- Facebook: "We do not provide any government organization with direct access to Facebook servers."
And so on. An exploit with plausible deniability lets these companies make these statements completely truthfully, and at least mostly truthfully if they claim they are not providing a backdoor. But more to the point, there is absolutely no reason all of these companies would say "direct access", which is very specifically a subset of "access." If you do not facilitate direct or indirect access, why would you not simply say "access"? If this were a one-off, you could chalk it up to occasionally odd PR wording. But literally all the companies used the exact same very peculiar phrasing. That's not a coincidence.