Why every user needs a smart speaker security policy

Credit to Author: Jonny Evans | Date: Mon, 24 Feb 2020 06:06:00 -0800

Does your voice assistant wake up randomly when you are engaged in normal conversation, listening to the radio, or watching TV? You’re not alone, and this may have serious implications for enterprise security policy.

“Anyone who has used voice assistants knows that they accidentally wake up and record when the ‘wake word’ isn’t spoken – for example, ‘seriously’ sounds like the wake word ‘Siri’ and often causes Apple’s Siri-enabled devices to start listening,” the Smart Speakers research study says.

In an ideal world it wouldn’t matter.

You’d say what you wanted to say, and if your voice assistant accidentally woke in response to random conversation, no one else would ever know it happened.

Unfortunately, this is not the world we’re in.

This is because elements of accidentally gathered conversations are listened to by people we know nothing about as part of most voice assistant companies’ “grading” process, in which the systems’ responses are checked by humans and the software is improved.

This caused some discussion last year, and most players in the space have since done some work to mitigate the consequences and give users a little more control. Even so, it seems plausible that some information still leaks.

While it is true that some of the more egregious elements of this have been addressed, particularly by Apple, that’s not going to be a big enough commitment for enterprise security teams seeking to protect confidential information from being accidentally picked up by entities outside their control.

Preventing this kind of leak must be pretty high on most security teams’ lists of priorities.

Apple (at least) does promise that any recordings it keeps “are not associated” with your identity in the form of your Apple ID.

All the same, in some cases stripping the identity away from a statement may still not be secure enough, particularly as criminals put more effort into breaking into Echo, Google and HomePod devices.

How big a threat is this? It may be greater than many think. The Smart Speakers research study, conducted over six months by researchers at Northeastern University and Imperial College London, provides insight into how often such systems can be accidentally triggered.

Among other things, they found that some devices remain active for longer than others when accidentally activated:

“Echo Dot 2nd Generation and Invoke devices have the longest activations (20-43 seconds). For the HomePod and the majority of Echo devices, more than half of the activations last 6 seconds or more,” the researchers said.

In other words, some Echo devices may listen to, and record, up to about 43 seconds of what you say once you accidentally wake the machine.

The HomePod’s activations tend to be shorter, though more than half of them still last six seconds or more.

The study attempted to figure out which word and sentence constructions are most likely to activate the systems.

Unsurprisingly, they found that words and phrases that sound like the wake phrase were the most likely to trigger the devices.

In the case of the HomePod, for example:

“Activations occurred with words rhyming with Hi or Hey, followed by something that starts with S+vowel, or when a word includes a syllable that rhymes with “ri” in Siri. Examples include “He clearly”, “They very”, “Hey sorry”, “Okay, Yeah”, “And seriously”, “Hi Mrs”, “Faith’s funeral”, “Historians”, “I see”, “I’m sorry”, “They say”.”
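To make that failure mode concrete, here is a minimal sketch in Python that uses plain string similarity as a very crude stand-in for the acoustic scoring a real wake-word detector performs. The wake phrases, test utterances, and scoring approach are illustrative assumptions, not any vendor’s actual detection logic:

```python
# Toy illustration of why near-homophones cause accidental wake-ups.
# Plain string similarity stands in (very crudely) for the acoustic
# scoring a real wake-word detector performs on audio features.
from difflib import SequenceMatcher

# Illustrative wake phrases; real products use tuned acoustic models.
WAKE_PHRASES = ["hey siri", "alexa", "ok google"]

def best_match(utterance: str) -> tuple[str, float]:
    """Return the wake phrase this utterance most resembles, with a 0..1 score."""
    scored = ((w, SequenceMatcher(None, utterance.lower(), w).ratio())
              for w in WAKE_PHRASES)
    return max(scored, key=lambda pair: pair[1])

if __name__ == "__main__":
    # Phrases the study flagged, plus an unrelated control phrase.
    for phrase in ["hey sorry", "and seriously", "he clearly", "pass the salt"]:
        wake, score = best_match(phrase)
        print(f"{phrase!r} scores {score:.2f} against {wake!r}")
```

A real detector fires when its acoustic score crosses a tuned threshold, so the higher a phrase scores against the wake phrase, the likelier the misfire.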

Some consumers may not be too concerned about snippets of conversation being picked up by accidentally invoked smart speaker systems.

At the same time, it does seem reasonable to expect manufacturers to make it possible for consumers to check the frequency of such accidents, and manage those recordings that do exist.

The researchers agree; their next step is to explore whether smart speaker manufacturers “correctly show all cases of audio recording to users.”

That’s important, of course, because any home or enterprise user wanting to conduct a security audit of recordings, accidental or otherwise, made by such devices will want to know this.
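To sketch what such an audit could look like, suppose a vendor let you export your device’s activation history. The CSV layout below (timestamp, duration, transcript) is entirely hypothetical, assumed for illustration rather than taken from any real export, but a short script over it could flag activations whose transcript never contains the wake word:

```python
# Hypothetical audit sketch: flag voice-assistant activations whose
# transcript never contains the wake word, i.e. likely accidental ones.
# The CSV layout (timestamp, duration_seconds, transcript) is an assumed
# example format, not any vendor's actual export.
import csv
from io import StringIO

WAKE_WORD = "siri"  # illustrative

SAMPLE_EXPORT = """timestamp,duration_seconds,transcript
2020-02-20T09:14:02,4,hey siri what's the weather
2020-02-20T11:32:40,21,and seriously we should move the meeting
2020-02-21T16:05:11,6,they say the quarterly numbers look weak
"""

def suspect_activations(csv_text: str) -> list[dict]:
    """Return rows whose transcript lacks the wake word."""
    rows = csv.DictReader(StringIO(csv_text))
    return [r for r in rows if WAKE_WORD not in r["transcript"].lower()]

if __name__ == "__main__":
    flagged = suspect_activations(SAMPLE_EXPORT)
    print(f"{len(flagged)} likely accidental activation(s):")
    for r in flagged:
        print(f"  {r['timestamp']} ({r['duration_seconds']}s): {r['transcript']}")
```

An enterprise audit would presumably fold flags like these into its regular review, alongside checks of each device’s retention and grading opt-out settings.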

To be fair, Apple strips the ID from the recording, which means those snippets of sound are no longer directly related to you.

But any information that does leak in this way may still be of value to you or to your business.

The beauty of using Siri on your HomePod is that you can easily ask it to send messages, play music at outstanding quality, set reminders, and handle lots of other tasks.

However, the trade-off in terms of accidentally triggering the device may make it unsuitable for some deployments.

Fortunately, there are ways to stop Siri listening to your conversations on most devices, including the HomePod. Here are three ways to prevent your HomePod listening to you:

Ironically, you can ask Siri itself: just say “Hey Siri, stop listening.” The device will ask you if you are sure.

While in this state, Siri will not listen to you, which also means it won’t make accidental recordings. You will still be able to stream music from another device through the system, or just tap the top of the device to enable Siri again.

You can also switch listening off in the Home app: press and hold the HomePod’s tile, open its settings, and toggle off Listen for “Hey Siri.”

Not using the HomePod? Disconnect it from power and there’s no chance it will be listening, though you won’t get to enjoy the fantastic-sounding audio these systems create.

I don’t intend to be alarmist in focusing on smart device security, but I think it is important to do so, in part because this is still a very early-stage industry and mistakes do, and will, get made.

This is why it seems reasonable to me that every smart speaker user should audit the security of their systems, just as I advise every iPhone, iPad or Mac user to regularly check their own system security.

Enterprise users should make such security audits a regular part of what they do.

Stay safe out there!

Please follow me on Twitter, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.
