We would all be surprised if it turned out that a giant corporation that sells you always-on networked microphones wasn't abusing them in any way, shape, or form.
Here's a slippery slope for you:
1. "OK Google, order pizza from Sal's."
2. "Broken glass detected in living room. Should I call police?"
3. "The baby has been crying for more than 60 minutes. Do you want assistance?"
4. "Shots fired. Calling 911 now."
2016-2020 has taught me that slippery slope arguments are not fallacious in and of themselves; they just aren't convincing by themselves.
The sound of a Siamese cat in heat is extremely similar to a crying baby.
Watching an unusually well-Foleyed action movie on a good sound system can probably fool any recognition system Google can jam into next year's $150 smart speaker.
And that's how Google could swat you with the best of intentions.
> Watching an unusually well-Foleyed action movie on a good sound system can probably fool any recognition system Google can jam into next year's $150 smart speaker.
Many of the smart speakers use microphone arrays. The Amazon Echo has 7 microphones in its array, showing that arrays are feasible in $150 smart speakers (the 3rd-gen Echo is $100). Even the Echo Dot has 4. Google seems a little lacking in this respect--the Google Home appears to have only 2.
These devices should then be able to recognize that a sound they hear, like glass breaking, is actually coming from more than one distinct source, and determine the directions of those sources. They should also be able to recognize that they often hear duplicated sounds from those same directions, and infer that this is where the speakers of the user's home theater are.
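The direction-finding step described above can be sketched with a toy two-microphone example. This is purely illustrative (real smart speakers use 4-7 mic arrays and far more robust beamforming); the sample rate, mic spacing, and signal here are all made-up assumptions:

```python
import numpy as np

# Hypothetical sketch: estimating the time difference of arrival (TDOA)
# of a transient sound between two microphones via cross-correlation,
# then converting it to a bearing. All constants are illustrative.

FS = 16_000            # sample rate in Hz (assumed)
MIC_SPACING = 0.07     # distance between the two mics in meters (assumed)
SPEED_OF_SOUND = 343.0 # m/s at room temperature

def estimate_tdoa(sig_a, sig_b, fs=FS):
    """Return the delay (seconds) of sig_b relative to sig_a."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = int(np.argmax(corr)) - (len(sig_a) - 1)
    return lag / fs

def bearing_from_tdoa(tdoa):
    """Convert a TDOA to an angle of arrival (radians, 0 = broadside)."""
    # Clip to the physically possible range before taking arcsin.
    x = np.clip(tdoa * SPEED_OF_SOUND / MIC_SPACING, -1.0, 1.0)
    return np.arcsin(x)

# Simulate a noise burst (stand-in for breaking glass) that reaches
# microphone B two samples after microphone A.
rng = np.random.default_rng(0)
burst = rng.standard_normal(256)
delay = 2
mic_a = np.concatenate([burst, np.zeros(delay)])
mic_b = np.concatenate([np.zeros(delay), burst])

tdoa = estimate_tdoa(mic_a, mic_b)
print(f"TDOA: {tdoa * 1e6:.0f} us, "
      f"bearing: {np.degrees(bearing_from_tdoa(tdoa)):.1f} deg")
```

Hearing the same "glass break" at two stable bearings, session after session, is the kind of evidence that would let a device infer "those are the home theater speakers" rather than an actual window.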
It would be a neat feature, though, if these smart speakers recognized the sounds that home theater calibration systems such as Audyssey use. After you got a smart speaker, you could tell your A/V receiver to run through calibration; the smart speaker could recognize that and learn about all of your home theater speakers.
> Logic and critical thinking textbooks typically discuss slippery slope arguments as a form of fallacy but usually acknowledge that "slippery slope arguments can be good ones if the slope is real—that is, if there is good evidence that the consequences of the initial action are highly likely to occur. The strength of the argument depends on two factors. The first is the strength of each link in the causal chain; the argument cannot be stronger than its weakest link. The second is the number of links; the more links there are, the more likely it is that other factors could alter the consequences."
Indeed, many people don't realize that accusations of committing the (or any) fallacy need to come with justifications of their own.
This was the first thing that popped into my head. As is, the system sends audio clips for you to verify. It's one upgrade away from Google sending you child porn while you're at work because your kid skipped out on an assembly with their sweetheart.
I would disagree that that example is a slippery slope; it's more of a leap across a vast canyon.
Going from providing information and requesting an action to making an automated decision and taking action is much more difficult, as you pointed out. Therefore I'd be very surprised to see this actually occur.
> Watching an unusually well-Foleyed action movie on a good sound system can probably fool any recognition system Google can jam into next year's $150 smart speaker.
Sure, but pair that with "Shazam!" style song recognition and now it knows you're watching a movie, and what movie it is, and can more easily filter for false positives there, no?
Maybe I'm watching any one of a billion YouTube videos or livestreams, or a Japanese police drama from the 90s that only exists on DVD - the set of things that may cause a false positive seems almost intractably large and inaccessible.
Don't let the perfect be the enemy of the good here; of course not EVERY POSSIBLE EXAMPLE can be detected. I was merely positing a way to avoid missing every possible example.
Do you want to bet your life on whether someone found that feature idea exciting enough for a promotion project at work?
Even if the feature works perfectly, we're still talking about (hypothetically, as this isn't a real feature yet) automatically summoning a platoon of people with the de facto legal right to kill you if they sense danger (which is exactly the reason they were called), just because a potentially dangerous noise was heard.
Even if calling them is not dangerous, the system would have to be extremely reliable not to cause a problem. If a large percentage of people have similar devices, the false-positive rate would have to be very low for the police not to spend an unreasonable amount of time dealing with auto-nuisance calls.