I had one of these when I worked for Google, I didn’t quite trust it though, so I had it in the bathroom to limit what my employer could listen to.
Imagine discussing a product idea at home with friends, then having that stolen by an engineer at one of these companies (who figured out how to find the interesting conversations) for a promotion or new position.
On a larger scale these might get hooked up with some advertising algorithms that manipulate you further based on conversations you didn’t know were recorded.
I hope for more pressure to move these systems and other devices, that don’t necessarily need to be in the cloud to function, to a local solution or one that is closed to outsiders. Technologically this should be possible in so many cases.
And sure, my phone can function the same (though I specifically have Siri disabled on our iPhones).
It's just a cost-benefit question. I can get off the couch and flip a switch or use a remote to turn on a song. I really don't need speech recognition for those functions.
Absolutely agree that the convenience doesn’t make up for the downsides.
I do think that some cases can be covered without putting data out in the wild with some third party. Siri and similar apps don’t do well with complex commands, but just fine with simple things (light switch, calendar entry, alarm) that an offline tool might also understand. Same goes for fitness trackers, note taking apps, etc.
We as consumers should choose such alternatives. As developers and entrepreneurs we should try to enable these solutions, so we don’t have to rely on apps that might monetize differently than advertised or turn off when some company decides it’s not profitable enough.
There’s also work being done on decentralizing analysis and monetizing your own data, but I’m not sure it’s ready yet.
Many of the same benefits can be had without always-listening devices. You can control things from your phone. Or in my case, I usually have a pair of bluetooth earphones around my neck, I push a button and say something and that's all it listens to. The always-listening "okay google" can be disabled.
I unplug my parents' Google wiretap device when I enter the house for holiday parties. For some reason my parents can both know that I know way more about this topic than them, and still totally disregard me and spend hundreds on Google hardware devices because they seem cool. \o/
It's clear that your parents have different values from you regardless of what you tell them, an obvious point that seems to be lost on many commentators in this thread who perpetually oscillate between "you don't know what you're doing!" and "how could you use such a device?!"
The problem isn't different values; it's that non-technical users aren't able to give informed consent: they don't understand how they're being harmed.
So instead of informing them you just unplug the devices. Or you did inform them, they don't agree with you, and now you take it upon yourself to do what's best for them because they just don't understand.
When attempting to inform them of real world events has them treat you like you're a UFO nut...
I take it upon myself to do what's best for me. They're free to use whatever they want when I'm not there, but I'm not going to be in the house with the blasted thing.
You don't have to be a tech genius to comprehend that putting a microphone in your house comes with privacy concerns. Your parents aren't idiots; they just have different priorities than you.
Side note: If a houseguest came over and unplugged my Google Home without asking, they would not be invited back.
This seems like a personal choice, and perhaps you can try to keep an open mind.
Also consider the use case of your parents being alone, falling down, and not being able to get up. They can just say 'OK Google, call ocdtrekkie' (if they have set up the device fully with contacts etc.).
Disclaimer: I work for Google, but views are my own.
I can't speak to the merits of the family relations, but telling people to "keep an open mind" to continuous monitoring is quite the ask. The right to privacy isn't some kind of pizza topping to be "open-minded" about just because some company happens to make a product that violates the norm. (And to be clear, my statement isn't Google-specific in any way.)
What objective norm does this product violate? People should be free to share (or not share) their personal information any way they please. Is being "open-minded" about other people's choices in this regard so much to ask?
Just a friendly outside view that maybe it's their house and you could just assume you are talking in a public space if that fits your worldview.
Otherwise you may not want to talk to them on the phone if they are using Android phones, and you should ban them from discussing anything you talked about while you were there after you're gone and they plug the devices back in.
To me the worst part is you don't need to have constantly-connected-to-the-internet voice recognition. It certainly wasn't a thing until recently. That's why I tend to think of it as an act of malice.
And that is what I consider functioning as intended.
While I don’t think they target me specifically, I can’t really be sure either, and like many other companies, the people there behave quite contrary to what is advertised.
This is just too obvious of a potential attack to simply ignore and too easy to avoid.
> I had one of these when I worked for Google, I didn’t quite trust it though, so I had it in the bathroom to limit what my employer could listen to.
I don't understand why you just didn't throw it away, or put it in a box powered down.
A possible subtext to what you have written is that your employer (google) was mandating that you keep using the device at home as part of your employment... but that is outrageous and I would like clarification as I feel it cannot be true.
> A possible subtext to what you have written is that your employer (google) was mandating that you keep using the device at home as part of your employment... but that is outrageous and I would like clarification as I feel it cannot be true.
The requirement to use a Google device at home has never been imposed to the best of my knowledge, at least unless this individual was working on the product. Even then they would not be required to have it on unless they were actively working on it. If they were working on it, then they should know its capabilities and trust or distrust would be irrelevant, since they would have actual knowledge instead.
Also, the devices are not recording all the time. Presumably this experiment added the ability to treat fire alarms as a hot word, in support of the feature in question. That doesn't mean that Google has access to recordings of every conversation.
Also, there is no way anyone would ever be permitted to run the type of conversation analysis that the GP proposes. Far more benign analyses are rejected all the time due to privacy concerns. It's unlikely that any single person could even run such an analysis due to various access controls (e.g. the inability to access logs as a person-user, system-enforced requirements to run only checked in, reviewed code over logs). Even if a person could run an analysis like this as a rogue, it would be extremely risky, because they would get terminated immediately if it were ever discovered.
Also! Identifying useful, novel product ideas is beyond even Google's ML capabilities.
>Also, there is no way anyone would ever be permitted to run the type of conversation analysis that the GP proposes.
I flatly disagree with this statement. I have experienced first-hand how conversation analysis can turn into targeted advertising... and there are many others who have shared similar experiences.
I personally think this is a conspiracy theory, but I doubt there's anything I could say or do to convince you. Fortunately, it's a free country and you don't have to use one of these devices if you don't want to.
What’s important from a user perspective is that it’s not possible to verify these safety measures, while potentially exposing some of the most private pieces of information.
I agree that usually it should be safe and anyone messing with logs would probably get terminated, but there’s not much more one can do than trust the people there, and I’m not sure all safeguards hold in all cases, e.g., if people were to collude within the organization. Therefore I wouldn’t put such a device in my home again, and if I’m having a meaningful conversation somewhere, I’ll make sure no device is connected to power.
I'd say that's important for a very tiny subset of users. For most users, the main question is whether the devices are useful. I have them, and I'm not sure myself.
For the users who would want to verify that their stuff isn't being used any way they don't like, it's hard to say what could practically be done to convince them. Even if Google explained the inner workings of its privacy protection systems in detail (and it has, in some various contexts), you still can't watch the code running and verify that it's doing what Google says. Short of building from source and running it yourself, there's no way to verify that things are behaving as you desire. Not using devices in this class is the only option, I guess, at least until it becomes practical to move all the computation on prem, and until someone has an economic incentive to build a system with that feature.
It was a new device at the time (already public though), and it wasn’t mandated; I did get it for free.
I did want to test it and see how it works and whether it’s useful for me. I just don’t completely trust them, and I don’t think people should trust large corporations putting microphones in their houses. So I decided to put it in a room where I usually don’t have conversations until I either had a very strong assurance that it was safe and useful enough for me, or gave it away to someone who doesn’t have these concerns, which is what I ended up doing.
If it were mandated though I would have left Google immediately and news of that policy would have probably made its way to HN, so no subtext here.
There is a big difference between smoke alarm detection built into the wake-up-word circuitry, versus every sound sent to Google's servers for analysis. I am not bothered by the former, but would be aghast at the latter. Does anyone know for sure?
It’s easy enough to test, but I don’t really need to.
Alexa Guard has had this functionality for a while, and I’d expect the folks here at HN to be able to infer a few things from the support link and basic reasoning.
So:
1) if an event is detected, you can listen to a 10 second clip or drop in (2-way call) to listen in or look.
2) Echo devices have relatively small amounts of RAM
3) Echo devices aren’t constantly hammering WiFi connections
From this, one should be able to deduce that the wakeword engine detects events and streams clips to servers only in situations that match events and settings to support these features. Why? Because processing, transit, and storage aren’t free, and one can’t store data in RAM that isn’t there or transmit data over WiFi without the physical layer showing signs of it. Furthermore, Amazon hasn’t cracked the code on hyper-efficient GB-to-KB lossless compression only to squirrel it away for use in voice assistants.
Take the number of Alexa devices sold and run the numbers for all of those devices sending audio data to AWS all the time. The costs would be astronomical. The same goes for Google (though not with AWS). They’re no doubt incorporating the detectors into their on-device models.
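The "run the numbers" exercise is easy to sketch. Every figure below is an assumption for illustration (installed base, bitrate, storage price), not anything Amazon has published:

```python
# Back-of-envelope version of "run the numbers" for streaming all audio
# from every device 24/7. All inputs are rough assumptions.

DEVICES = 500_000_000          # assumed worldwide smart-speaker install base
BITRATE_BPS = 24_000           # speech-quality Opus, bits per second
SECONDS_PER_DAY = 86_400
USD_PER_GB_MONTH = 0.023       # roughly S3 Standard storage pricing

bytes_per_device_day = BITRATE_BPS / 8 * SECONDS_PER_DAY
gb_per_day = DEVICES * bytes_per_device_day / 1e9
storage_cost = gb_per_day * 30 * USD_PER_GB_MONTH   # keep one month of audio

print(f"{gb_per_day:,.0f} GB uploaded per day")
print(f"~${storage_cost:,.0f}/month to store it, before any compute")
```

Over a hundred million gigabytes per day and tens of millions of dollars per month just in raw storage, before transit or analysis costs, and before anyone notices the upstream traffic.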
You addressed storage but skipped the CPU portion: Yes, Opus can create good quality audio with low storage requirements, but it cannot do so without a high CPU cost.
ARM CPUs can be rather performant, especially for specialist tasks. These things are always plugged in and don't need to save battery, therefore they can always run at maximum performance.
The HomePod for example uses the Apple A8, a very capable chip that powered the iPhone 6 and did way more than encode/decode audio.
Wouldn't heat generation start to become an issue? As far as I know, none of these assistants have fans. I think the average consumer would notice that their device starts giving off a lot of heat if there's a lot of speech in range of its microphone - "I wonder why my Echo turns into a space heater when I leave the TV on."
> I think the average consumer would notice that their device starts giving off a lot of heat if there's a lot of speech in range of its microphone.
Interesting point. My guess would be that not many would notice, since a HomePod/Alexa/Google Home usually sits somewhere in a corner of a room or under the TV and isn't regularly touched, since you don't need to touch it to control it most of the time.
I am not even sure it would be that much heat. My x86 laptop can play video for a very long time before getting noticeably hot (granted, with a fan), and these ARM CPUs get noticeably less hot than your average Intel chip, even without a fan.
True, but not expensive either. Especially considering it's on sale all the time.
Even for the cheaper devices, the CPU is probably capable enough (maybe excluding the cheapest Echo Dot/Nest).
The Echo Show devices even have a screen and are actually designed to play videos from all kinds of sources (decode) as well as for video calling (encode), and they're £60 right now on Amazon UK.
Look further up the thread. The bandwidth required to transport such data has not been observed. I don't think anyone would argue that these companies wouldn't have the ability to build a device that streamed everything home. It's that to do so would mean there'd be some observable effect in the device's network usage that has not been observed.
>Furthermore, Amazon hasn’t cracked the code on hyper-efficient GB-to-KB lossless compression only to squirrel it away for use in voice assistants.
They could do speech recognition on the device and then ship off the plain text. I don't think they do this, but it is most certainly within their technical ability.
As a practical example, I have a copy of a 458,045 word audiobook on my computer and I just downloaded a copy of the e-book. The audiobook is just over 1 GiB, while the plain text of the e-book compressed with bz2 comes in at 800 KiB.
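The same ratio is easy to demonstrate. The "transcript" below is an invented stand-in (repetitive text compresses better than real prose), and the 24 kbit/s Opus rate and ~150 words/minute speaking pace are assumptions, but the orders-of-magnitude gap survives any reasonable inputs:

```python
import bz2

# Compare a transcript's size against the audio it would have come from.
transcript = "the quick brown fox jumps over the lazy dog " * 10_000

raw_bytes = len(transcript.encode())
compressed_bytes = len(bz2.compress(transcript.encode()))

# The same speech as audio: 90,000 words at ~150 words/min, ~24 kbit/s Opus.
minutes = 90_000 / 150
audio_bytes = int(minutes * 60 * 24_000 / 8)

print(f"text: {raw_bytes:,} B  compressed: {compressed_bytes:,} B  audio: {audio_bytes:,} B")
```

The audio weighs in around 100 MB while the compressed text is a rounding error, which is the whole point: text exfiltration would be nearly invisible in bandwidth terms.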
Small nitpick, but speech-to-text isn’t lossless. How someone says the words has a lot of impact on the meaning of those words. It’s possible Amazon doesn’t care about that lost information, but that is a completely different conversation.
For anyone curious about this, I highly recommend developing on CMU Sphinx for a weekend project. It will really paint some pictures about machine interpretation based on training data and the actual application code.
What does it matter? The bottom line is that once there’s microphones in your home, Google just has to wait out the normalization of deviance until nobody’s surprised or upset that they transfer everything to their servers for analysis.
If they don’t today, they will one day. They don’t have a choice on the matter because it’s a money maker.
Google did this with “we don’t scan your emails for ads”. Initially, around 2009, that was a big line a lot of people cared about, but years later here we are, Google scanning emails for targeted ads, and that fight is over.
EDIT:
Thanks for the reply exittheone. I see Gmail actually stopped this ad-scanning practice in 2017[1], likely because with Google sign-in on Chrome and so many other places, it’s very possible they just don’t need that extra info. They still let third-party extensions read your email, so I wouldn’t say we’re exactly in a better world...[2]
Right: this went in the other direction. Initially, GMail scanned messages for ads, and was public in that this was part of how it was funded:
"Google also places advertising on Gmail based on key words that appear in messages transmitted through our system (it’s a good example of ads helping to pay for the free services we all enjoy online) - so if you’re emailing a friend about a trip to Paris, for example, ads might appear on the right hand side of the page for trains to France. Google does this using software similar to the kind that scans emails for viruses, to filter out spam and turn the bits of data received into the characters on the screen. No human being other than the user ever reads the messages sent or received on Gmail – it’s simply a computer matching up key words in peoples’ emails with targeted ads." --
https://static.googleusercontent.com/media/services.google.c... (~2008)
(Disclosure: I work for Google, speaking only for myself)
Yeah, as far as I can tell this is actually the exact opposite from what was described - in reality people didn't really care about GMail scanning messages for advertising that much back in 2008, but it turned into a big deal later due to worries about Big Tech and then Google dropped it.
Your words are as spurious as Goog's. Saying they "still" don't implies they STOPPED scanning emails for targeted ads, supposedly in 2017 (did they "pinky-swear"?). So they always did before, as they stated in plain words in the first two paragraphs of their previous EULAs. Now that they claim they don't "scan emails for ads", they don't say that they no longer scan emails at all, nor what they currently do scan them for.
Please don't retort "they don't sell PII", as P2 most definitely is what they use to sell X to 3rd parties in whatever form the ubiquitous NDAs prevent the public from knowing.
> Why do you think Google would sell pii? It doesn't make business sense.
In these discussions, "selling PII" is sometimes a short for "providing a service that allows targeting ads based on PII without actually releasing PII itself to the service users", which is only a little less bad.
Given the context, it's clear that's not what GP means.
Also, from a privacy perspective, there is a vast difference between the two. One person who you trust to keep your data safe (but not to use it in ways you find ethical necessarily) is vastly different from them giving it to other parties who you don't know.
I don't think the difference is that vast in practice.
What's currently stopping evil actors from exfiltrating data from Google's PII through buying narrowly targeted ads, each time with slightly different targeting, and intersecting these results to build a more detailed picture of people who viewed the ads?
> It's great that they do that, but it's still the bare minimum over here on the Old Continent. We can and should demand more.
Sure, but its been possible to do this since well before the GDPR mandated it. Now maybe you can argue that the threat of regulation is what keeps Google in check here, and ok fine that's an unfalsifiable claim but maybe it's true. But even still, that doesn't actually justify "we should demand more". Maybe more privacy regulation is justified, but "we already have some" isn't actually justification.
> What's currently stopping evil actors from exfiltrating data from Google's PII through buying narrowly targeted ads, each time with slightly different targeting
The snarky answer first. PII has a specific meaning. It means personally identifying information. Your ZIP code isn't PII. Your name is. No matter what ad targeting tricks you do you can't pull my name or address out of what Google sends you. So you don't get PII.
Now the less snarky answer. The actual attack you're describing does this repeated targeting thing, which ties private data to some pseudonymous ID, like a browser fingerprint. At this point they don't have any PII. Then, you get the victim to enter their personal information on your site. Now you can tie the PII to the other information from the shadow profile you've built.
So why isn't this useful? Mostly, cost. To get this to work, you need to have some one or some group click on multiple different ads you control ($ + time cost) and then enter their identifying information on a site you control. Click through isn't assured, and conversion to entering information is very unlikely. When you're, you know, actually selling a product, this is a worthwhile investment.
But this attack is essentially paying to advertise to people with the goal of learning who you are advertising to. As a result, this only really makes sense in the context of targeted attacks or generic blackmail. Targeted attacks don't work because now you need a specific person to enter their PII on your site (and then what? You've learned that someone is interested in LGBT topics. I'm interested in LGBT topics and I'm straight). And similarly, broad blackmail doesn't work.
But I'm interested in how you think an attacker could do something in a cost effective manner.
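For what it's worth, the narrowing mechanism itself is trivial to model. Everything below is invented toy data; real ad platforms impose minimum audience sizes partly to frustrate exactly this kind of intersection:

```python
# Toy model of the intersection attack: each campaign targets one
# attribute, and the attacker records which pseudonymous IDs responded.
audiences = {
    "zip_12345":      {"id_a", "id_b", "id_c", "id_d"},
    "age_30_35":      {"id_b", "id_c", "id_e"},
    "hobby_climbing": {"id_c", "id_f"},
}

# Intersecting the audiences narrows the shadow profile to a handful
# of candidates -- the expensive part is getting the responses, not this.
candidates = set.intersection(*audiences.values())
print(candidates)
```

As the comment above argues, the set math is free; the cost sits entirely in buying the ads and getting the same person to respond to each one.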
Are you talking about some kind of an "NDA" on employees? I work for Google. I am being frank when I say you are full of it, and you will not find this message in any script.
That is exactly what I meant. You use PII to create whatever product(s) you sell to 3rds. I read the EULA over a decade ago. Good on Goog for stating it plainly. My answer is still NO, even when I go to .govs & other sites using Goog APIs (UBO). More people would concur if they had an idea of just how much minutiae the SV giants (& the other alphabets) hoover up about our lives, and how it's used against us in daily commerce.
As for your red herring, I can only guess how mal-scans can be monetized. If anyone can do it, I have no doubt Goog will.
> As for your red herring, I can only guess how mal-scans can be monetized. If anyone can do it, I have no doubt Goog will.
Well yes, they're very clearly monetized: Google supplies an email service which it sells to businesses as part of GSuite. One of the selling points of this email service is spam protection. Is that a bad thing?
> You use PII to create whatever product(s) you sell to 3rds.
Ok, so if your concern is that a company has your PII, that's a concern I guess. But it makes it difficult to use the internet (or, like, shop at stores), since there are all kinds of companies that gobble up your PII but don't tell you or give you control over it (whereas Google does, for example, let you delete the data it has on you and control collection of much of the data it collects).
If you're going to argue that Google is bad for the same reason that your credit card company and Amazon and CVS are bad, then sure they all have your PII, but I'd still argue that Google behaves more ethically than the others when dealing with it.
The problem isn't just with companies having data, but with how they're using that data.
> whereas Google does, for example, let you delete the data it has on you and control collection of much of the data it collects
That's table stakes under GDPR. It's great that they do that, but it's still the bare minimum over here on the Old Continent. We can and should demand more.
Interesting, thanks for the correction! I could've sworn that Gmail provided some "we don't scan your email" privacy since the beginning but it sounds like I was confused.
First of all, thanks for linking to the change in policy. I was not aware of the timeline.
To address both links:
1) As a user I'm actually happy about the tight integration into calendar and other apps. These necessarily require reading my mail though. I'm already trusting Google with my mail so using it for better integration into other Google products is fine with me.
2) They are not randomly handing out access to your emails to third parties, they just have an API that lets third parties read your emails _after you gave explicit consent via the regular oAuth flow_. Which is also completely reasonable for me.
Most likely, an AI model that detects specific sounds is running directly on those devices. No need to transfer anything to Google's servers for analysis =)
From what I have heard, you don't need any fancy AI models to detect either of these sounds. Some of the older alarms were using classic signal processing to do this decades ago.
Similarly glass-break detectors for home security systems have done this for years.
The advantage of an ML model is that you can do multi-class prediction for a sound clip with somewhat arbitrary complexity and the cost of execution is more or less the same even if you add an extra class or two. It's just an extra row in your output prediction vector. By complexity I mean signals that don't have an obviously characteristic spectrum, like glass smashing. A typical CNN backbone is capable of classifying hundreds if not thousands of classes with high accuracy - even the edge architectures. Always-on detection tends to use very compact networks (kB in size) that will run on low-power ARM cores, or even specialist ASICs, but even so 20-30 common audio event types seems very feasible.
For people worrying about sending data back, there's no reason why you'd do this off-device. The exception might be that you feedback to Google that there was a false alarm, so they can use your sound clip as a negative training example. Just a guess there, but Tesla does this extensively for Autopilot - they deploy models in your car and specifically ask it to capture images of rare events (Andrej Karpathy gave an example of tree-occluded stop signs).
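The "extra row" point above is easy to make concrete. The 64-dimensional embedding is an invented placeholder for a tiny CNN backbone's output; only the classification head grows with the class count:

```python
# Parameter cost of adding one class to a classifier head. The backbone
# (the expensive part) is unchanged; only the final dense layer grows.

EMBED = 64  # assumed embedding size from a compact audio CNN

def head_params(num_classes):
    # final dense layer: a (num_classes x EMBED) weight matrix plus biases
    return num_classes * EMBED + num_classes

twenty = head_params(20)       # 20 audio event types
twenty_one = head_params(21)   # add "glass break" as a 21st class

print(twenty_one - twenty)     # one extra row: 65 parameters
```

Sixty-five extra parameters against a backbone of thousands: effectively free, which is why multi-class always-on detection scales so well on-device.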
Well, they might do it with AI; there are however some issues. Constantly running sound through ML models is power-hungry, which as far as I know is why devices such as these optimize a lot for wake phrases such as "OK Google", which is when the ML loop actually starts.
On the other hand, a smoke alarm sound is very easy to detect with classical methods, so it might be extremely cheap to run without any ML.
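For a sense of what a classical detector might look like: the Goertzel algorithm measures signal energy at a single target frequency, which suits the piercing ~3 kHz tone of most smoke alarms. The exact frequency and the synthetic test signals below are illustrative assumptions, not any vendor's actual detector:

```python
import math

# Goertzel algorithm: energy of one DFT bin, computed in a single pass.
def goertzel_power(samples, sample_rate, target_hz):
    n = len(samples)
    k = round(n * target_hz / sample_rate)      # nearest DFT bin
    coeff = 2 * math.cos(2 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev**2 + s_prev2**2 - coeff * s_prev * s_prev2

RATE = 16_000
N = 400  # a 25 ms frame

# A smoke-alarm-like 3.1 kHz beep vs. 60 Hz mains hum
beep = [math.sin(2 * math.pi * 3100 * i / RATE) for i in range(N)]
hum = [math.sin(2 * math.pi * 60 * i / RATE) for i in range(N)]

print(goertzel_power(beep, RATE, 3100) > 100 * goertzel_power(hum, RATE, 3100))
```

A handful of multiply-adds per sample, no model weights, no network: exactly the kind of thing that ran on alarm hardware decades ago.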
> Constantly running sound through ML models is power-hungry, which as far as I know is why devices such as these optimize a lot for wake phrases such as "OK Google", which is when the ML loop actually starts.
How power hungry is it actually? I'd only have to run it on the big model if a pretty dumb model thinks that the voice is even in the audio stream.
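That cascade is simple to sketch: a nearly-free energy gate runs on every audio frame, and the power-hungry model runs only when the gate fires. The threshold and the "big model" below are placeholders, not any vendor's actual design:

```python
# Two-stage detection: cheap gate on every frame, expensive model rarely.

def cheap_gate(frame, threshold=0.01):
    # crude voice-activity check: mean squared amplitude
    return sum(s * s for s in frame) / len(frame) > threshold

def big_model(frame):
    return "speech"  # stand-in for the power-hungry classifier

def process(frames):
    labels = []
    for frame in frames:
        if cheap_gate(frame):                # always runs, costs almost nothing
            labels.append(big_model(frame))  # runs only on loud frames
    return labels

silence = [0.001] * 160
loud = [0.5] * 160
print(process([silence, loud, silence]))
```

In a quiet room the big model almost never runs, so average power stays close to the cost of the gate alone.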
Now that the feature is disabled, it's hard to know. Once the feature is enabled, it's pretty easy to tell by looking at the network traffic the device generates. Sending every sound to google is going to be obvious.
The article makes it unclear, but this feature is available right now for all Nest Aware subscribers (if you have any type of Nest camera, you'll have a Nest Aware subscription). The article mentioned that this feature was "accidentally" available to non-Nest Aware subscribers as well, but it's the same existing feature and not some test of upcoming features.
You can toggle glass break and fire alarm detection on any of your Google Home devices in the Home app, and this includes the old Google Home mini pucks and hubs, as well as new Nest branded minis and hubs.
Nest cameras are a completely different system than Echo, all Nest camera devices stream full motion video and audio back to Google 24x7. That's why you buy them.
What about transcribing all the audio you get and then sending it as a chunk of stuff every time the Home is activated? I don't think you could catch that.
That would require a lot more computational power than is currently available in an Echo device. It's possible that Amazon could sneak a custom ASIC into the device to do something like this, but it would add quite a bit to the device's Bill of Materials, and would still probably be noticed by the iFixit crowd.
How surprised would you be if in 5 years there was a Snowden-type revelation, that yes, the speakers and Facebook/Insta apps, Amazon echo, etc, etc, were listening all the time?
Sorry.
And that there was a secret court ruling that meant all that data was live-streamed to NSA Utah.
Some are, for certain. Samsung TVs, for example, explicitly do it, and Samsung has advised you not to have private conversations where the TV can hear you.
That information is just not that valuable. I think this myth that the companies are listening 24/7 comes from an egotistical mindset that everything you say must be interesting.
What a callous viewpoint. It's true that most of what the majority of people say is not that interesting, but there are plenty of people who do have conversations that are valuable for third parties to listen in on. Throw a big enough net and you begin to capture a lot of them.
>I mean it's pretty easy to prove that's not true. Just stick it on a network and check the network traffic.
It's pretty easy to hide that network traffic too. Just save/compress everything to the local device and pipe it over when the phone/app is being used. (ever wonder why those apps take up so much memory and use up so much battery?).
You can't trust the devices themselves to tell you that, so the best approach is to put them behind a (hardware) firewall that protects both ways, not just from external connections but also to prevent devices to call home if their behavior is unclear.
Dragon Naturally Speaking ran on clients without network access in the 90s.
A given household might have 20,000 words spoken in a day. With speech-to-text, then compression, this would be a ~20 kB network request. If you only sent data once a month, the data per day could likely be reduced even further. This could easily be Trojan-horsed alongside valid requests without anyone noticing.
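Sanity-checking those numbers (the average word length and compressor efficiency below are assumptions, not measurements):

```python
# Daily payload for shipping a household's transcript instead of audio.

WORDS_PER_DAY = 20_000
CHARS_PER_WORD = 6        # average English word plus a space
BITS_PER_CHAR = 2.0       # good text compressors reach about 2 bits/char

raw_bytes = WORDS_PER_DAY * CHARS_PER_WORD
compressed_bytes = raw_bytes * BITS_PER_CHAR / 8

print(f"raw: ~{raw_bytes // 1024} KiB/day, compressed: ~{compressed_bytes / 1024:.0f} KiB/day")
```

That lands in the same tens-of-kilobytes ballpark, a payload easily hidden inside the legitimate traffic any assistant already generates.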
As a CE I’m glad some people get this. I see all the posts here about “we could tell it’s not listening by looking at encrypted traffic amounts” or “it can’t do local speech recognition” or “they wouldn’t do that!” and want to bang my head against the wall.
These devices can absolutely be abusing trust; to some degree it’s not even unlikely.
Yeah, I'm surprised about it too. ~13 years ago, I was running my own completely offline speech recognition system on a cheap PC to control music in my room, with a microphone mounted on a wardrobe. With very little pre-training, it worked pretty much flawlessly, and it could recognize commands over very loud music. And I built it in a few afternoons using the MS Speech API, which was included with the OS.
That's why I don't buy "you need the cloud for speech recognition" arguments in general. And in context of this discussion, it means you could absolutely snoop on people through local speech-to-text on low-powered devices - particularly if you limit yourself to a set of keywords (vs. free-form dictation). And for usual profiling&advertising, a set of keywords (that can be updated over time) is more than enough - you could learn from it e.g. whether people talk about product X or politician Y in the household.
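A keyword-based profile like that needs almost nothing to leave the device: a few boolean flags per day, not audio or even full text. A sketch, with an invented keyword list and transcript:

```python
# On-device keyword profiling reduced to its essence: given a local
# transcript, only the matched flags would need to be exfiltrated.

KEYWORDS = {"vacation", "mortgage", "baby", "pizza"}

def profile_flags(transcript):
    words = {w.strip(".,!?").lower() for w in transcript.split()}
    return KEYWORDS & words

flags = profile_flags("We should book the vacation before the baby arrives!")
print(sorted(flags))
```

The keyword set could be updated remotely over time, and the resulting flags are a few bytes that blend into any routine check-in traffic.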
Beyond that, closer to 20 years ago, I remember on Win98 experimenting with an offline speech-to-text program I'd downloaded from somewhere. It required training, but I remember it being pretty accurate - I just didn't find a use for it because we had one shared desktop and I'd be annoying everyone else in the room. I think it was called Vox, or something like that...
And 25 years ago there was that IBM card that allowed realtime voice recognition on a 486, no connection required.
At the presentation I saw at IBM, the operator loaded a word processor, wrote a letter, saved it as an image, sent it as a fax, received it on a second machine and printed it without moving a finger. I also seem to remember one machine ran OS/2 Warp and the other one Windows.
It wasn't that fast for sure, and she had to correct some errors, but the point is that if done on dedicated hardware (FPGAs?), the performance can be a lot higher than in software. A lot of powerful hardware can be fitted into those assistants, and unless they fully open-source them, there's no way to know what they do, or what they could do if instructed to.
I’m always slightly surprised by anecdotes like this from so long ago. When I tried using MacOS Classic speech recognition 20 years ago, it interpreted every command as “Tell me a joke” including the line the user was supposed to say to make the joke script continue.
For the folks who think your devices aren’t listening, try an experiment: start talking out loud about a product you’ve never, ever searched for. Something you’ve never, ever needed or wanted. See how long it takes for ads for that very thing to start showing up on Facebook and in your targeted ads. I’ve done it and started getting ads in a matter of days.
This happened to me with Google's search suggestions the other day and I didn't even say anything out loud. Should I consider that as evidence that Google has developed mind reading technology, or is it more likely that I just ignore search suggestions unless they're eerily relevant?
First, get a friend or relative to go along for the ride here. Try to have that person not be on your local network, wifi and the like.
Then, go to a website that will generate a list of random words: https://www.randomlists.com/nouns . Make sure that you are selecting for nouns or adjectives. Copy the first three words that you get. More is fine too. Just get enough words to be pretty specific.
Then go to amazon or some such online retailer: https://www.amazon.com/ . Search for the three random words you got.
Now, here is an important step, sort the returned list by price, from high to low.
Take the most expensive item as your experimental item. You can do this with a few items if you'd like. You're just trying to get something that is not what you or your demographic would normally look to purchase.
Then, talk about that item around your gadgets - crucially, with someone other than yourself. Your amazon search history is already corrupted just by searching for this.
Check back in with your friend or relative in about a week. You can set a reminder in your phone to do this. See if they got any ads that were trying to sell them on the random item you chose.
Please add a control for this experiment. There should be another list of products selected the same way, but that you didn't talk about in front of the gadgets. There might be other signals that the ad networks are getting from you or your friends.
To be double-blind, they should talk about both lists of products, but a third-party should make a (hidden) gadget present for one conversation and not for the other conversation.
This fails if the ad companies track your online activity, link it to your location, link your friend to your location, and then show the ads based on that.
I've always dismissed this as coincidence. But over the past few months, my wife reported to me several incidents of a relevant Facebook ad showing up a day or two after a conversation she had with me or her friends, and that was not followed up by any on-line searches on the topic. At this point, I'm starting to consider something may be going on after all - it seems to happen too often to be easily explained away as happenstance.
Usually when I see this sort of coincidence, I think that it is a matter of information relevance/leakage.
You read an article about bug spray chemicals and don’t think about it, but start seeing exterminator ads online. If you read the article because you’re looking for pest control, then it feels suspicious. If you’re not, you ignore the ads and go on your way.
I don’t want to discount your experience. I haven’t seen it myself, but maybe I’m not paying enough attention.
Thinking about it now, I can't be sure why we started to talk about particular products (and then saw ads about them a day or two later). It could've been because one of us saw an article or an ad about it earlier. So perhaps this is neither coincidence nor surveillance - perhaps there's an earlier link in the causal chain that we've missed.
I suppose the only way to be sure is to start doing proper experiments.
My theory is that those friends did search for those topics/products, and Facebook decided it would be wise to advertise them to your wife because of her proximity to the people searching for them. "Proximity" determined either literally with geotracking, or simply with something like chat records.
That's a crappy experiment. There is no way to disprove the result. So, how about instead, you choose 2 products. Flip a coin to decide which one of them to talk about. Just think of the other one. See how long it takes to see the product you talked about, and how long it takes to see the product you only thought about. Repeat a few times for different pairs of products.
If google is listening, the time for the product talked about should be way shorter than the one you only thought about. If google isn't listening, there shouldn't be a pattern.
Now, it is really important that you flip the coin. Also, do NOT TALK TO ANYONE ABOUT THE PRODUCTS YOU ARE USING. We want to test the microphones; if you search for the product, or you tell a friend and the friend searches for the product, that is all signal that ad companies could be using.
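For what it's worth, analyzing such a coin-flip protocol is simple too. A sketch using a one-sided sign test (pure Python; the numbers below are made up to show the shape of the result, not real measurements):

```python
import math

def sign_test_p(talked_days, thought_days):
    """One-sided sign test: is the talked-about product's ad delay
    systematically shorter than the merely-thought-about one's?"""
    wins = sum(t < h for t, h in zip(talked_days, thought_days))
    n = sum(t != h for t, h in zip(talked_days, thought_days))  # drop ties
    # P(X >= wins) for X ~ Binomial(n, 0.5): the "not listening" null,
    # under which either product of a pair is equally likely to win
    return sum(math.comb(n, k) for k in range(wins, n + 1)) / 2 ** n

# Hypothetical data: days until the first relevant ad, per product pair
talked = [2, 3, 1, 4, 2, 3, 2, 1]    # products you spoke about
thought = [9, 7, 8, 6, 9, 8, 7, 9]   # products you only thought about
print(round(sign_test_p(talked, thought), 4))  # -> 0.0039
```

With eight out of eight pairs favoring the spoken-about product, chance alone is a 1-in-256 explanation; with mixed results the p-value stays unremarkable, which is exactly the pattern the parent comment wants to distinguish.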
It's just confirmation bias. If this were actually happening then:
a) You'd be able to detect it by sniffing the network traffic, and
b) Some journalist or scientist would have reproduced it and written about it.
It also makes absolutely zero sense that Google would do this. The press would be terrible if they were found out, it's outright illegal in Europe, and they don't need to! They have an amazingly good signal for the things you are looking to buy because you type that into a nice easy to collect search box for them!
I know HN is home to lots of paranoid folks but I thought they were smart enough not to believe this dumb conspiracy theory.
>You'd be able to detect it by sniffing the network traffic,
How? I can very easily come up with a compression/local storage + stream when app is being used mechanism to record whatever the app is hearing and sending it back to the mothership. Why is everyone assuming that the transmission is happening in realtime and not scheduled/hidden with usage?
I'm always up for a bit of google bashing but it's fairly ridiculous to assume they'd try and hide random bits of other audio in the snippets they send. Google is staffed by actual humans, and if the US govt. can have whistleblowers, google sure as hell would have had them by now.
a) how would they know what data to send? if they're able to do speech recognition offline and somehow gauge how important the audio is prior to transmitting it then why the hell aren't they using this technology to absolutely blow alexa out of the market?
b) I've done some tracing on my router and seen minimal difference between data uploaded and the size of the voice clips you can freely play back in your google account history. Not saying clever compression etc. couldn't cover this, but again - why?
I'm definitely not implying corporations like google aren't more than capable of great evil and deserve to be watched like a hawk, but let's pick our battles.
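On point (b), a back-of-envelope helps frame how much covert audio would actually weigh: keyword-triggered snippets compress to very little, but not to nothing (every number below is an assumption, not a measurement):

```python
# Back-of-envelope: daily data volume for hypothetical covert snippets.
OPUS_VOICE_KBPS = 12     # assumed low-bitrate speech codec setting
SNIPPET_SECONDS = 10     # assumed clip length around each trigger
TRIGGERS_PER_DAY = 50    # assumed keyword hits per household per day

bytes_per_snippet = OPUS_VOICE_KBPS * 1000 // 8 * SNIPPET_SECONDS
daily_bytes = bytes_per_snippet * TRIGGERS_PER_DAY
print(bytes_per_snippet)    # -> 15000  (about 15 kB per clip)
print(daily_bytes // 1000)  # -> 750    (about 0.75 MB per day)
```

Under these assumptions, the hidden traffic would be under a megabyte a day - small enough to miss in casual router tracing, but also small enough that the "why bother, search is a better signal anyway" argument still stands.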
> For the folks who think your devices aren’t listening try an experiment:
1. Which devices: iOS or Android?
2. Are any apps that have been given the microphone permission in the foreground (for iOS) while the conversation is going on (but the app isn't in an explicit recording mode triggered by the user)?
3. Are any voice assistants enabled to always listen for the activation command words (like "Hey Siri" or "Ok, Google")?
I have my iOS device set only to get typed text input for Siri (no voice activation). I rarely give any app microphone permissions unless the main purpose of the app is to record audio (or video and audio). I've never encountered targeted ads on topics I talk about.
How long until insurers start coercing customers into installing intrusive smart home devices in order to get reasonable premiums? It might not happen tomorrow but if enough people are willing to opt into alarm surveillance you can see how they could lower the liability for insurers.
I'm surprised that people are outraged by this. It's a given that you're losing some privacy by putting one of these devices in your home. Any argument otherwise assumes that the companies manufacturing these devices are trustworthy.
Thankfully the solution to this problem is simple: don't put one in your home.
Does any company or person do privacy audits of these smart home devices? At the very least doesn't seem difficult to sniff packets sent out on the home network? Seems like it is generally accepted that these things are a privacy nightmare but haven't seen a lot of actual analysis.
People here are just as bad as those on Reddit. The article title is clickbait and the actual content describes alerts that Nest speakers could send you when they detect beeping smoke detectors or breaking glass. The top comment in this thread right now talking about presence detection to disable Netflix account sharing is so frustratingly ignorant. You can have privacy concerns all you want, but the bottom line is that Google has seen that people care about privacy and any funny business would surely tank their reputation.
It’s not clear from the article but I assume the Google implementation was pushed out to users and turned on by default (likely without a notification that the device would now start doing this).
Since the iOS feature is listed under “Accessibility” I would assume that it’s opt-in.
I am running iOS 14 beta right now - certainly a good update.
In my perfect world, there would be excellent open source software and hardware designs for home monitoring, etc., where we would have 100% control of our own data. We don't have a perfect world, and right now, Apple seems like the best bet (for me at least) for a company to trust (with healthy skepticism).
I get gizmos like this as gifts every now and then. I never bothered opening any of them up. What are some fun things that we can do with these things besides their intended use?
Would be a fun open source project to find the cheapest board that can fit in the box, ideally take some components (speakers, WiFi) and have similar functionality like controlling music, home devices, setting timers. I’d try to contribute if anyone is working on topics like this.
I find a security system that only lets me know when it’s been triggered useless. My wife set off our alarm the other day. I used an external webcam to verify that it was my wife, and disabled the alarm before she could put her things down and disable it herself. You can’t do that with a system that isn’t connected externally.
Although with home security services, it's one of the areas where I think external control is advantageous because the best cameras don't help you when the thieves take the device that keeps the recordings with them.
Exactly this. Even in security contexts, my Home Assistant-based security system is running fully-local object detection and runs automations when "Person" shows up in an unexpected place at an unexpected time. It emails me the offending image (thereby offloading from the camera) and records video locally. If it happens at night while I'm asleep it runs a sequence of turning lights on to scare them off and then wakes me up if it didn't work.
This is kind of a combo of local-only with a few pings to the outer world.
Is there any reason the manufacturer couldn't have simply supplied an additional box you put in your home, that has all the storage and processing to do all that? The only thing the manufacturer would have to do is provide a dyndns-like service.
"one" because it is too much of a situation of "all of your eggs in one basket". I understand why people heavily invest in a specific ecosystem but I feel it's short-termism. I would be open to the argument however that tech moves so quickly that short-termism doesn't really apply to most consumer products.
"particular company" because Google has been a bad actor in all of the projects I have personally been involved with where they have been involved. This is purely a personal anecdote and YMMV (and probably does). I am sure I am an exception with this opinion.
There are so many things I find disturbing that whether my Google speaker is listening to me doesn’t even crack the Top 10.
What’s the negative outcome? I don’t believe this is the start of an inevitable slippery slope decline into a Big Brother surveillance state.
Yeah maybe they’ll get more data and target ads better, but that’s a good thing. I want to know about products that might interest me and then buy them!
The infrastructure of the Big Brother surveillance machine is the same as the infrastructure of the target-ads-better machine, the only difference is who is in charge of it. And relying solely on our current mechanisms to prevent the system being co-opted is naive. Especially given it’s already public record that some of these mechanisms have already been co-opted.
Also, that is just talking about the worst outcome, there are plenty of horrible possible intermediate outcomes, with the data being sold without restriction or not properly protected.
you can't imagine the problem of having your private conversations recorded, then a business compelled to release them without a warrant? You can't imagine what inferences might be made about your personal conversations and sold to others? I do not think you are giving the words you say enough credit.
Of course I can imagine those problems! Those problems are brought up literally every time I mention that in-home speakers are not the end-of-the-world-nightmare-scenario so many people treat them as.
I just don’t particularly care.
Also if someone wants to cut out the middleman and buy my personal conversations directly, feel free to reach out. Great value! Last night I talked about weed and Top Chef!
Enjoy your privilege of not being interesting or important enough to be the target of criminal activity or government oppression. Millions of people aren't so lucky, and they deserve empathy as they work to build a better world for you to carelessly enjoy.
The privilege is that you have so few real problems that this is something you have time to worry about. The millions of people you talk about couldn't care less since they have immediate non-hypothetical problems to focus on.
For most people, they're personally investing their energy in what they consider more important issues. We all only have so much energy to go around, and this obviously comes low on GP's list. This is in the same vein as online advertising or buying food from supermarkets over local stores.
So, not caring because oneself is not being persecuted at this minute, and determined to err on the side of being no threat to any criminals or people who persecute others, ever, and to hand that "value" down the generations. I could get more eloquent descriptions of that argument from history books, and don't even care for those.
In the abstract, this reminds me of the selfish ledger concept video, where they discussed pushing specific products on relevant customers in order to fill gaps in data collection.
Probably because you signed up for or bought something else and it was a deal. Either that or they made a mistake. Google aren't in the practice of just sending free hardware to people for no reason so I'm not sure what your point is?
I might sound like a grumpy old granddad to some of you (and I am not even that old), but did you really think that a huge company would offer to put microphones into your living room for your advantage? They are going to benefit from this and you are going to pay for it. How? For instance with dynamic pricing. Your loved ones tell you how much they want that vacation on that Greek island? Good luck finding a cheap flight. You are sharing your Netflix subscription? Sorry, but not while that person is in the same room. You are applying to that exciting position? Nah, with that statement we heard on your dinner table, we think you are a cultural bad fit for our team.
Any of those things would be such massive trust violations that the fallout would almost certainly sink the product and badly tarnish the company. That's not saying a company wouldn't be stupid enough to try it, but it's very clearly a bad business decision - so logically they wouldn't do it.
Likely not. Intangible damages rarely matter any more, and companies have become very adept at PR damage control in collusion with the elected representatives. If it doesn't cause huge financial losses or death (in the first world), then it won't sink these behemoths. Public memory outside tech circles is not very long or sharp. The public expectations and standards around privacy and security in particular are hopelessly toothless. The tech has vastly outpaced the understanding of ordinary people, and the "experts" are mostly busy building these systems, not asking themselves if they should. The technological hydra, driven by geopolitical fear and rivalry, will sweep everything aside. I constantly hear "if we don't do it, the bad guys will..." even on this forum, and tight regulations, oversight and audits are unwelcome obstacles in this arms race.
Just wait a little. It’s too soon now but then again, I suspect very few people if they were teleported from 20–30 years ago would accept all the privacy violations we take as a fact of life today.
sure, it's company policy not to. but if you do it anyway and don't get caught, your risk-taking will be rewarded, so it becomes a numbers game/informal policy.
I have given up hope that the wide public cares at all about privacy
Time and time again large companies get outed violating privacy, it hits the news for 5 mins then Trump tweets something stupid and it is all forgotten about
The fact that Twitter, Facebook, Google, Microsoft, etc are all still massive companies with billions in revenue proves the public does not care about privacy
it's absolutely a good business decision, because you place too much value on the bad PR. there is almost none.
facebook has done a crazy number of bad things. a few nerds like me hopped off, and in total their user base grew. but nerds like me never had facebook in the first place.
linkedin used your password to log into your emails and download them, then spam people in your contacts with invites, coming from you. then they sold user data to facebook. now their app was caught copying clipboard data from your phone and computer. no one cares, except for me, who has a harder time finding work - because i'm not on linkedin.
giving you bad flight prices when you're looking for flights? that literally happened. but surprise - you don't remember.
If you told someone in 2008 that the fun scrappy startup offering Gmail would be reading your emails for ads* (see reply to this!) until 2017, and only stopped when another tech company had a massive leak of personal information which helped sway a presidential election*, people would call you absolutely mad.
Emails were sacred back then, and it took a long and slow creep for Google to face so little backlash when they started reading our emails. I see no reason why listening devices will be any different.
If the information is available and valuable to Google (and me talking to my spouse about buying a new computer next week sure is!) they will slowly walk towards getting it. The steps are pretty clear:
1. Have listening device which only listens to "OK Google" mass adopted
2. Increase capabilities for EASILY defensible safety reasons, listening to alarms, then screams, then personal threats, etc
3. Face backlash over 2. and easily defend it because who wouldn't want to save children from burning buildings etc.
4. Public begins to paint privacy minded folks as crazies for wanting to burn children alive.
5. Repeat 2 through 4 a few times.
6. Ease into that ad listening because everyone either thinks you do it already or they just don't care because they're exhausted hearing people raise alarms for 2.
I'm not saying this is a nefarious management plan to ease into surveillance either. Google's MO is put money and time into making a feature useful then eventually monetize it. Making "OK Google" listening devices useful in ways the public finds acceptable is where they're at now. Eventually they will have paved the way for it to be profitable and palatable for the public when it's always listening to us.
* Google changed this policy right around Cambridge Analytica time
> If you told someone 2008 that the fun scrappy startup offering Gmail would be reading your emails
Google explicitly stated at launch that they would read your emails to show ads next to them. This was a surprise to nobody.
> and only stopped
It never stopped reading mail. How do you think searching your inbox or filtering spam work?
In light of this, the rest of your post is pure conspiracy theory nonsense. Google was upfront about the reasons for reading your mail at the beginning. Why should any of that change?
Wow, Google did launch with ad scanning, huh! I misremembered the controversy; I followed it closely at launch but that was long ago.
That does hurt my premise but I think it overall still holds:
- Google did stop reading email for ads in 2017, as I said in another post[1].
- Google does have a policy of innovate then monetize later. That was explicitly what they did with Google Maps
- GMail was a good (if, as I brought up, slightly wrong) example of creeping surveillance, but Chrome provides a good example as well[2]. Google sells ads; its products will be optimized to gather information for it.
"Reading your emails for ads" [and search]: Even though that term was popularized by lots of MS negative campaigns (remember Scroogled?) but it makes as much sense as saying "Excel is reading your private financial records on your spreadsheets to give you its auto complete" or "Photoshop looking at your most private photos to let you change their saturation/brightness".
Obviously some are more useful to the user than others, but they're all computer programs taking your data, running some code and returning a response. There is no human in the loop "reading" things.
The NSA on the other hand... well, I'll just say that every once in a while, when I lose something, I'll whisper into my phone: "I know you're listening, I know you're watching. And I really need to find my car keys".
Then I'll wait 5 minutes or so, and step outside, and there will be a small note taped to my door. Last time it said, "You left them in the car --a friend"
"The Secret Police said 'We have secrets. We have many secrets. We desire all secrets. We do not have your secrets and that is what we are after, your secrets. ... However our mood is melancholy. There is a secret sigh that we sigh, secretly. We yearn to be known, acknowledged, admired even. What is the good of omnipotence if nobody knows? However that is a secret, that sorrow."'
"Engineer-Private Paul Klee Misplaces an Airplane Between Milbertshofen and Cambrai, March 1916", Donald Barthelme
Why is there no way they're doing it? They're certainly capable of it. I think it's pretty unlikely they are, but it would be trivial for them to do so.
I'm not going to trust memory - mine, yours or anyone's - on this, and I don't have time to traipse back in time to fully research it, but that is beside the point. The point is that Google have form for engaging in privacy violations that extend to the scanning of personal communications for advertising purposes without gaining informed consent (no, I don't think click-through T&Cs that can be changed by Google on a whim count). Worrying that they might do similar with verbal communications, for advertising or other purposes, is a legitimate concern that shouldn't be dismissed.
I agree with you on click-through T&Cs! That's a shitty place to hide what you're doing with customer trust and data. 100% agree there.
But it was widely known and discussed at the time, and included in their policy on day one[0]. It was the plan to make 1GB work. It was their answer when people asked, skeptically, how they could make money giving away such unfathomable storage "for free".
And, as far as privacy policies go, that document is well organized and addresses the scanning in the very first section.
You don't have to trust my memory of this, but their above-board handling of keyword scanning in emails is not an example that illustrates how they will be evil with surreptitious voice recording on their other products.
Are you sure? Where's the citation to back that up?
What I remember are many products released around that time by Google et al. that were sold to us as: we're making gobs of money off our core offering (e.g. search + little text-based ads), so we can afford to do all this other stuff for the benefit of humanity. So much was sold as "making the world a better place". The other sales pitch we got amounted to, "don't worry about it, we'll monetize later."
Nobody was up-front in those days. Which is why hardly a month went by without a ToS or Privacy Policy change needing another click to accept to keep using the product. The frog was certainly slow-boiled.
I have to laugh at this one [1]. It almost seems reversed these days (not quite, but kinda)
I'll see if I can dig up any press releases or official Google communications about it, but with linkrot what it is, it may take a while. (EDIT: Here's Google's privacy policy when Gmail was released [2])
Much has changed in the nature of the beast but you're right, the terms do state the obvious. Still feels sinister. Reading those old comments and privacy policy brought so much flooding back to me.
The value proposition has changed but not in the way that was feared at the time. We're certainly inundated with ads to a greater degree than it was at the time. They never turned around and charged for it but they certainly require a lot more personal information to have an account these days.
There was so much hate for Microsoft but it was wishful thinking that Google would turn out differently.
Ask an average person - hell, an average techie - about it, and I'm pretty certain you'll discover they don't know what you're talking about.
They really weren't up front about it, they might have just not been hiding it very hard.
Part of the solution to the tech vs. privacy problems could be forcing companies to be actually up front about their business models. After all, it's not wrong for an individual to give up their data, after being informed about what is requested and what will be done with it. The wrong behavior is having a business model that depends on most customers not being aware their data is being mined and used against them.
> They really weren't up front about it, they might have just not been hiding it very hard.
It was the answer to the main question everyone had, how are they providing 1GB (huge at the time) inbox space for FREE? Because they were serving relevant ads against your mail. It was in no way a secret.
It was obvious. The ads were displayed right next to the messages and were relevant to the messages being shown. At launch, they prominently described how this worked on the sign-up page. https://web.archive.org/web/20040410020148/http://gmail.goog...
To my mind, the biggest privacy violator that doesn't admit it's privacy violations right now is Apple. Even as a non-Apple user, I have my WiFi location slurped up by Apple, and they don't give me any way to opt out. People who use Apple devices have it far worse.
Yeah, this. The ads were COMPLETELY RELEVANT TO THE EMAILS. My recollection was that this was the very obvious trade-off/not-at-all-secret when they first introduced ads to Gmail. (I don't think Gmail had ads initially?)
I don’t think users really understood this, both in fact and in practice. Now, you may rightly argue that this is on the users, but I’d just say that perspectives have changed drastically.
>did you really think that a huge company would offer to put microphones into your living room for your advantage?
can you not think of any advantage that google could gain from their smart speakers other than the nefarious ones? All these conspiracy theories here just seem so incredibly shortsighted. Like, yeah, google could listen in on your every word and do their best to fuck with you, but the instant they got found out everybody would chuck their smart speakers in the garbage. And tbh, google already has a ton of data about your preferences; your conversations at home probably aren't that good of a signal compared to your email, calendar, browsing, search, and location history.
Dynamic pricing for a greek vacation sounds scary, but google doesn't sell vacations, so why would they bother abusing your trust in that way - it literally doesn't benefit them at all. They want the smart speaker in your home because they want to be the service that you ask for assistance - they want you to say "hey google, book me a flight to mykonos for the weekend" instead of "hey alexa, book me a flight to mykonos for the weekend", because they can charge a commission on that sale. If you don't have a smart speaker in your home, or if you have another brand, then they have no opportunity to make a piece of that sale.
> can you not think of any advantage that google could gain from their smart speakers other than the nefarious ones?
There is literally no advantage that you won't pay for. The commission you just mentioned? Your money. The advertisement they show you? Your rational decision being attacked. The browser they let you download for free? Your choice of a suitable alternative. They are not evil, mind you, but they are also not your friend or partner. They make money, lots of it, and that money comes from your pocket.
the price isn't going to be lower without the commission. the price is set by what people will pay. if a commission doesn't go to google, the money will just go to somebody else. it was my money, but if i'm booking a trip the same amount will be paid.
Google isn't my friend, or my partner, but nor are they my enemy. They're just competing with all the other entities who want some piece of every purchasing decision I make, and the best way to do that is to be a bit more convenient and a bit more available than somebody else, rather than engaging in elaborate conspiracies or trying to make me angry at them.
I do not think "they" (whoever likes these living room mics_ ever think that. I think they believe "convenience" is the advantage. In the long run, I predict that this idea of convenience will be long forgotten. Eventually, no one is going to think of these mics as "convenient". That is because there will be no alternative. In order to book that vacation, you will be required to use the mic. If people do not recognise the value in the alternatives that are being lost, certainly those providing them won't either.
Yes, sure, it could be a slippery slope, but there's a reason that phrase denotes a logical fallacy. As with most technologies, we don't need to ban something desirable, but we do need to develop a set of social norms and corporate transparency around responsible usage. We're at the very beginning stages of that process, because all of this stuff, at this scale, is relatively new. The GDPR is far from perfect, but it sets the stage for this sort of effort.
I can't even get Google Assistant to reliably work for commands they claimed to have rolled out years and years ago https://i.imgur.com/ucmKl5l.png. I would literally never trust Google Assistant with anything remotely near my safety or security.
"Google, call the police!!!" Playing the Police on Living Room TV
"Google, lock my doors" The Doors were a psychedelic rock band formed in 1965...
We would all be surprised if it turns out that a giant corporation who sells you always-on networked microphones wasn't abusing them in any way, shape or form.
Here's a slippery slope for you:
1. "OK Google, order pizza from Sal's."
2. "Broken glass detected in living room. Should I call police?"
3. "The baby has been crying for more than 60 minutes. Do you want assistance?"
4. "Shots fired. Calling 911 now."
2016-2020 has taught me that slippery slope arguments are not fallacious in and of themselves; they just aren't convincing by themselves.
The sound of a Siamese cat in heat is extremely similar to a crying baby.
Watching an unusually well-Foleyed action movie on a good sound system can probably fool any recognition system Google can jam into next year's $150 smart speaker.
And that's how Google could swat you with the best of intentions.
> Watching an unusually well-Foleyed action movie on a good sound system can probably fool any recognition system Google can jam into next year's $150 smart speaker.
Many of the smart speakers use microphone arrays. Amazon Echo has 7 in its array, showing that arrays are feasible in $150 smart speakers (the Echo 3rd Gen is $100). Even the Echo Dot has 4. Google seems a little lacking in this respect: the Google Home only had 2.
These devices then should be able to recognize that a sound they hear, like glass breaking, is actually coming from more than one distinct source, and the directions of those sources. It should also be able to recognize that it often hears duplicated sounds from those same directions, and infer that this is where the speakers for the user's home theater are.
It would be a neat feature, though, if these smart speakers recognized the sounds that home theater calibration systems such as Audyssey use, so after you get a smart speaker, you could tell your A/V receiver to run through calibration. The smart speaker could then recognize that and learn about all of your home theater speakers.
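The direction-finding idea above rests on a standard technique: estimating the time difference of arrival (TDOA) of a sound between two mics, from which an angle can be derived given the mic spacing. A minimal sketch using GCC-PHAT (phase-transform-weighted cross-correlation) in plain NumPy follows; the signal values are made up for illustration, and none of this reflects what Google or Amazon actually ship:

```python
import numpy as np

def gcc_phat(sig, ref, fs):
    """Estimate the delay (seconds) of sig relative to ref via GCC-PHAT."""
    n = len(sig) + len(ref)
    # Cross-power spectrum, whitened so only phase information remains
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-12
    cc = np.fft.irfft(R, n=n)
    # Re-center so negative lags sit left of zero, then find the peak
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    lag = np.argmax(np.abs(cc)) - max_shift
    return lag / fs

# Toy example: the same "glass break" transient reaches mic 2
# five samples later than mic 1.
fs = 16000
burst = np.random.default_rng(0).standard_normal(256)
mic1 = np.concatenate((burst, np.zeros(64)))
mic2 = np.concatenate((np.zeros(5), burst, np.zeros(59)))
tdoa = gcc_phat(mic2, mic1, fs)
print(round(tdoa * fs))  # → 5 (samples of delay)
```

With an array of known geometry, a consistent TDOA pattern across pairs is what would let a device decide "this glass-break sound always comes from where the surround speakers are."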
> Logic and critical thinking textbooks typically discuss slippery slope arguments as a form of fallacy but usually acknowledge that "slippery slope arguments can be good ones if the slope is real—that is, if there is good evidence that the consequences of the initial action are highly likely to occur. The strength of the argument depends on two factors. The first is the strength of each link in the causal chain; the argument cannot be stronger than its weakest link. The second is the number of links; the more links there are, the more likely it is that other factors could alter the consequences."
Indeed, many people don't realize that accusations of committing the (or any) fallacy need to come with justifications of their own.
This was the first thing that popped into my head. As is, the system sends audio clips for you to verify. It's one upgrade away from Google sending you child porn while you're at work because your kid skipped out on an assembly with their sweetheart.
I would disagree that that example is a slippery slope; it's more of a leap across a vast canyon.
Going from providing information and requesting an action, to making an automated decision and action, is much more difficult as you pointed out. Therefore I'd be very surprised to see this actually occur.
> Watching an unusually well-Foleyed action movie on a good sound system can probably fool any recognition system Google can jam into next year's $150 smart speaker.
Sure, but pair that with "Shazam!" style song recognition and now it knows you're watching a movie, and what movie it is, and can more easily filter for false positives there, no?
Maybe I'm watching any one of a billion YouTube videos or livestreams, or a Japanese police drama from the 90s that only exists on DVD - the set of things that may cause a false positive seems almost intractably large and inaccessible.
Don't let the perfect be the enemy of the good, here; of course not EVERY POSSIBLE EXAMPLE can be detected; I was merely positing a way of also avoiding missing every possible example.
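The "Shazam-style" filtering idea above is usually built on audio fingerprinting: hash spectrogram landmark peaks into (freq1, freq2, time-delta) triples and match sets of hashes against a catalog. A deliberately crude sketch, with parameters chosen only for illustration:

```python
import numpy as np

def fingerprint(samples, fs, win=1024, hop=512, peaks_per_frame=2):
    """Rough Shazam-style fingerprint: spectrogram peaks hashed into
    (freq_bin_1, freq_bin_2, frame_delta) triples."""
    frames = []
    for start in range(0, len(samples) - win, hop):
        spec = np.abs(np.fft.rfft(samples[start:start + win] * np.hanning(win)))
        # Keep the strongest bins in this frame as "landmark" peaks
        frames.append(np.argsort(spec)[-peaks_per_frame:])
    hashes = set()
    for t1, peaks1 in enumerate(frames):
        for t2 in range(t1 + 1, min(t1 + 5, len(frames))):
            for f1 in peaks1:
                for f2 in frames[t2]:
                    hashes.add((int(f1), int(f2), t2 - t1))
    return hashes

def overlap(a, b):
    # Jaccard similarity of two hash sets; high overlap = likely same audio
    return len(a & b) / max(len(a | b), 1)

# Toy check: a 440 Hz tone matches itself perfectly and a 1200 Hz tone poorly.
fs = 8000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)
other = np.sin(2 * np.pi * 1200 * t)
same = overlap(fingerprint(tone, fs), fingerprint(tone, fs))
diff = overlap(fingerprint(tone, fs), fingerprint(other, fs))
```

A speaker that recognized "this audio matches a known movie soundtrack" could down-weight alarming sounds accordingly, though as the sibling comments note, the catalog of possible media is enormous.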
Do you want to bet your life on whether someone found that feature idea exciting enough for a promotion project at work?
Even if the feature works perfectly, we're still talking about (hypothetically, as this isn't a real feature yet) automatically summoning a platoon of people with the de facto legal right to kill you if they sense danger (which is exactly the reason they were called) just because a potentially dangerous noise was heard.
Even if calling them is not dangerous, it seems like it'd have to be extremely reliable to not cause a problem. If a large percentage of people have similar devices, it seems you would have to have a low false positive rate for the police to not be spending an unreasonable amount of time dealing with auto-nuisance calls.
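The base-rate point above is worth making concrete. The numbers here are invented purely for illustration, but they show how a per-device false-positive rate that feels vanishingly rare still swamps dispatchers at scale:

```python
devices = 50_000_000   # hypothetical nationwide installed base
fp_rate = 1e-4         # false alarms per device per day (illustrative)

calls_per_day = devices * fp_rate   # nuisance calls hitting dispatch daily
years_between = 1 / fp_rate / 365   # how rare it feels per household

print(int(calls_per_day))           # → 5000
print(round(years_between, 1))      # → 27.4
```

One false alarm per household every ~27 years sounds "extremely reliable," yet it would still generate 5,000 bogus police calls a day.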
A Google spokesperson told Protocol that the feature was accidentally enabled for some users through a recent software update and has since been rolled back.
"Surprise" is not relevant. This is holding Google accountable for shady shit, which is something we should never stop doing.
Ah, Google and their hapless hardware updates: accidentally spying on users, killing my friend's and others' Nexus phones as well as my own Nexus Player (oops!). No more Google hardware for me.
This abuse was always likely to happen. We've yet to discover what secret subpoenas get filed to eavesdrop on possible felons or other persons of interest. A wireless mic to the cops is not what you want in the house.
Hence me toying with respeaker setups. I'd rather have no voice control than have big brother voice control. Google already knows more than enough about me (gmail primarily)
TLDR: The google home device also listens for the sound of glass shattering and smoke alarms which is very useful. In fact I wish it would listen for dog barks too. I hope they add that.
What the hell is going on in the comments here? It's like half the comments are edgelords with some version of "i am shocked that an advertising company would do such a thing"
Has HN just turned into Reddit? Do people really not care that people's privacy is being dumped down the toilet? Google is known for shit like this. They even scan your emails and alert you when bills are due. Shit like that is not ok. Don't fuck with my privacy unless I opt in.
Hide from it, run from it, Eternal September arrives all the same. It was bound to happen. Honestly, it's a miracle and really hard work from dang to delay it for this long
I would even go further and point out a change in trends between our generation and those after: we understand privacy is a right won over tears and blood, but younger generations don't seem interested in this fight so far. Sadly we will need to fight twice.
I've unplugged mine too, but saying it's spying is overstating it a bit.
I highly doubt they care what individuals say in their home but probably care a lot more about what your demographic talks about, watches on TV, listens to on the radio and any other data they can glean from sound.
Think about it: distinct activities make distinct sounds (e.g. washing dishes vs. sleeping).
They probably have a lot more of a complete picture of your home than you realise.
I am with you on this. Google sent me a free Google Home Mini a few months ago and I have an Amazon echo device. I sometimes like to experiment with both devices. I plug them in, use them, then unplug them.
Interesting technology, but I would rather have everything on hardware and software that I control.
I have a big cognitive dissonance here of what led you to buy it in the first place? The gulf of knowledge or personal risk tolerance between those 2 stances is so vast that I can scarcely comprehend them being in 1 person.
These findings of Google spyware are getting pretty boring. I feel sorry for whoever is Google's Chief Privacy Officer, having to defend all these privacy violations in their own products. They must be facing daily privacy investigations at this point.
Another day another Google listening device article. Get ready to comment on the next one.
Google added smoke alarm sound detection to a product that users bought specifically for "smart" smoke alarms and "smart" audio/video recording of their home.
>It's an obvious enhancement to a home monitoring product users paid money for, specifically for this purpose.
And yet it was turned on for people who did not pay for the service, or give permission to Google to do so. Which means they can listen in whenever they feel like and chalk it up to "whoops, pushed the wrong code, sorry!" apparently.
Consequences should be in line with what actually happened, not with the least charitable possibility of what that class of error could be.
Google absolutely has the technical ability to auto-update my phone to send deepfaked porn of me to all my contacts, but it's pretty reasonable for me to expect them not to.