Editor’s note: Although this is not original MLH content, this subject falls squarely within the interests of MLH’s audience, and it is directly applicable when you consider what is best for you personally and for your family. While we take no official position on what is said below, we do believe the points made here should be considered by all families as technology becomes more pervasive and continues to permeate every aspect of our lives.
Read the original article on LinkedIn.
+++++++
The “willing suspension of disbelief” is the idea that we (the audience, readers, viewers, content consumers) are willing to suspend judgment about the implausibility of the narrative for the quality of our own enjoyment. We do it all the time. Two-dimensional video on our screens is smaller than life and flat and not in real time, but we ignore those facts and immerse ourselves in the stories as if they were real.
We have also learned the “conventions” of each medium. While we watch a movie or a video, we don’t yell to the characters on the screen “Duck!” or “Look out!” when something is about to happen to them. We just passively enjoy the show.
The Willing Suspension of Our Privacy
We apply similar concepts to our online lives. Most of us are willing to give up our data (location, viewing, purchasing or search history) for our online enjoyment. We can call this the “willing suspension of our privacy” because if you spent a moment to consider what your data was actually being used for, you would refuse to let it happen.
The Willing Suspension of Our Agency
Which brings us to the next level of insanity: the willing suspension of our agency for our own enjoyment. This is past the point of giving up a “reasonable amount” of data or privacy to optimize the capabilities of our digital assistants. Suspension of our agency exposes our normally unmonitored physical activity, innocent mumblings and sequestered conversations. Some people believe this is happening with Alexa, Google Home, Siri and other virtual-assistant and IoT systems. It may well be.
First, Let’s Give It a Name
Since we are discussing a combination of automatic speech recognition (ASR) and natural language understanding (NLU) engines that enable a system to instantly recognize and respond to voice requests, for this article, let’s call the interface an intelligent voice control system (IVCS).
How It Works
You activate most commercial IVCSs with a “wake word.” For an Amazon Echo or Echo Dot, you can choose one of three possible wake words: “Alexa” (the default), “Amazon” or “Echo.” Unless you turn off the microphones (the Echo has seven) and use a mechanical button or remote control to activate its capabilities, Alexa Voice Service, the system that powers the Echo and Alexa, and other IVCSs are always listening for their wake word.
In Amazon’s case, it keeps approximately 60 seconds of audio in memory for pre-processing so the responses can be situationally aware and “instant.” Amazon says the listening is done locally, on the device, not in the cloud. So technically, the audio does not leave the premises.
Always Listening Does Not Mean Always Transmitting!
Yes, an IVCS is always listening AND recording. Which raises the question, “What does it do with the recordings it does not use?” In Amazon’s case, the official answer is that they are erased as they are replaced with the most current 60 seconds. So while the system locally stores approximately 60 seconds of audio preceding your wake word, it transmits only a “fraction of a second” of audio preceding your wake word, plus your actual query and the system’s response. For Alexa, you can find a record of your query on the Home screen of your Alexa app.
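The rolling pre-roll described above behaves like a fixed-size ring buffer: new audio constantly overwrites the oldest audio, and only a sliver of the most recent samples would ever leave the device. The sketch below is a minimal illustration of that idea, not Amazon’s actual implementation; the sample rate, buffer length, and pre-roll length are assumed values chosen for the example.

```python
from collections import deque

# Assumed parameters for illustration only.
SAMPLE_RATE = 16_000      # audio samples per second
BUFFER_SECONDS = 60       # the rolling ~60 s of local pre-roll the article describes
PREROLL_SECONDS = 0.5     # the "fraction of a second" sent along with a query

class RollingAudioBuffer:
    """Keeps only the most recent ~60 seconds of audio in local memory.

    Older samples are overwritten automatically as new ones arrive,
    which models the claim that unused audio is erased rather than
    stored or transmitted.
    """

    def __init__(self):
        # deque with maxlen discards the oldest items once full.
        self._buf = deque(maxlen=SAMPLE_RATE * BUFFER_SECONDS)

    def feed(self, samples):
        # Always listening: every incoming sample lands in the ring
        # buffer, silently displacing the oldest sample once full.
        self._buf.extend(samples)

    def clip_for_upload(self):
        # Always transmitting? No: only the last fraction of a second
        # is extracted, and only when a wake word actually fires.
        n = int(SAMPLE_RATE * PREROLL_SECONDS)
        return list(self._buf)[-n:]

buf = RollingAudioBuffer()
buf.feed(range(SAMPLE_RATE * 120))      # two minutes of fake "audio"
print(len(buf._buf) // SAMPLE_RATE)     # buffer never holds more than 60 s
print(len(buf.clip_for_upload()))       # upload clip is 0.5 s of samples
```

The point of the sketch is the asymmetry: two minutes of audio flow in, at most sixty seconds ever exist at once, and only half a second is ever pulled out for transmission.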
More Questions
What happens to the approximately 60 seconds of audio recording preceding a wake word? The one that has a recording of the TV soundtrack, footsteps, the loud argument in the next room, the gunshot, etc.? What happens with that audio? Again, Amazon says it is erased and replaced with the next 60 seconds of audio. Skeptics say if a wake word is detected, the previous 60-ish seconds of audio is put in a database for further IVCS training. If so, could that audio be subpoenaed? Yep! Just like your browser history or phone records. It’s just data. But does it actually exist? Amazon says no. As for other systems? We’ll have to ask.
What About Hackers?
Seven microphones! Could a hacker tap into one or all of them and eavesdrop on me? The official answer is no, and specific technical reasons are cited. However, at The Palmer Group we have several theses for 2017, including, “Anything that can be hacked will be hacked.” Anyone who believes otherwise is simply naïve.
“It’s the Profile, Stupid!”
Data is more powerful in the presence of other data. It is an immutable law of 21st-century living, which in this case means that the most serious threat to each of us is the profile that can be created with the willing suspension of our agency.
Most people have no idea how much information about them is available for sale. The willing suspension of agency has the potential to take us right up to the line that separates where we are now from an Orwellian future. (Many people believe we already live in a surveillance state. We’ll explore this in another article.)
We Must Deal with This Sooner or Later
Alexa is NOT dangerous. The data it collects is NOT dangerous. Nothing about an Amazon Echo is dangerous. It’s awesome. I have one in the kitchen, in the living room, in my home office, and on my night table. It’s an amazing controller, great alarm clock, spectacular Spotify and Amazon Prime interface, an exceptional news and weather reporter, and it does lots of other stuff you can look up online. I love it.
I also love my Google Home. Its ASR/NLU system is second to none! Let’s face it: Google is “the” repository of publicly available knowledge. When I’m on my handheld, I rely on “OK Google,” and while I think Siri is audio impaired and database challenged, sometimes I use it too.
But …
The world will be a very different place when Google, Amazon, Microsoft, Apple and other AI-empowered players have assembled 1st-party profile data that includes our agency. It will make what they do with our current behavioral profiles look like primitive data processing.
We are predisposed to pay for convenience. We happily do it with cash and with data every day. However, we should not suspend our judgment about the implausibility of this narrative for convenience or for the quality of our enjoyment. Though this is a story we have been told before, there are no conventions of this medium. So let me be the first to scream: “Look out!”
About Shelly Palmer
Named one of LinkedIn’s Top 10 Voices in Technology, Shelly Palmer is CEO of The Palmer Group, a strategic advisory, technology solutions and business development practice focused at the nexus of media and marketing with a special emphasis on augmented intelligence and data-driven decision-making. He is Fox 5 New York’s on-air tech and digital media expert, writes a weekly column for AdAge, and is a regular commentator on CNBC and CNN.
Comments
This is an interesting topic — how much are we willing to suspend our privacy concerns for these beneficial trade-offs?
And it’s not a slippery slope in itself, but you can see that most of our collective attitudes have softened when it comes to privacy. So yes, it may become a slippery slope into more prevalent surveillance, served with a side of our own lackadaisical attitudes.
It relates to an idea I read before: If you get something for free, YOU are the commodity. Your data and information are what is being sold. Always makes me think twice about free game apps, etc.