‘Hey Siri, buy $100 Bitcoin for the burglar guy’

Credit to Author: Jonny Evans | Date: Tue, 14 Nov 2017 06:08:00 -0800

Apple will apparently bring Face ID to its long-awaited HomePod smart speakers next year, but fresh research suggests voice assistant technology may be a weak link in domestic and enterprise security.

Researchers at the University of Eastern Finland claim that voice impersonators can fool smart speaker systems into accepting them as authorized users.

It’s known that voice authorization systems can be undermined using speech synthesis, voice conversion, or even replayed recordings of a target’s voice.

While technology-based countermeasures to such attacks are constantly being developed, the Finnish research suggests that voice impersonators are much harder to protect against.

“Skilful voice impersonators are able to fool state-of-the-art speaker recognition systems, as these systems generally aren’t efficient yet in recognising voice modifications,” the researchers claim. This “poses significant security concerns,” they said.
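Why is impersonation so hard to catch? Speaker recognition systems typically reduce a voice sample to a numerical voiceprint and accept any sample that scores close enough to the one enrolled by the owner. The sketch below shows that accept/reject logic in Swift; the embedding pipeline, names, and threshold are illustrative assumptions, not any vendor’s actual implementation.

```swift
// Minimal sketch of threshold-based speaker verification.
// Real systems extract high-dimensional voice embeddings (e.g.
// i-vectors or neural speaker embeddings); here we simply assume
// embeddings arrive as [Float] arrays. All names are hypothetical.
func cosineSimilarity(_ a: [Float], _ b: [Float]) -> Float {
    let dot = zip(a, b).map { $0 * $1 }.reduce(0, +)
    let normA = a.map { $0 * $0 }.reduce(0, +).squareRoot()
    let normB = b.map { $0 * $0 }.reduce(0, +).squareRoot()
    return dot / (normA * normB)
}

struct SpeakerVerifier {
    let enrolledVoiceprint: [Float]  // captured when the owner set up the device
    let threshold: Float             // acceptance threshold, e.g. 0.85

    // The weakness the Finnish study highlights: any voice whose
    // embedding lands close enough to the enrolled print is accepted,
    // whether it belongs to the owner or to a skilled impersonator.
    func accepts(_ candidateVoiceprint: [Float]) -> Bool {
        return cosineSimilarity(enrolledVoiceprint, candidateVoiceprint) >= threshold
    }
}
```

A skilled impersonator does not need to reproduce the owner’s voice exactly; they only need to push the similarity score over that threshold.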

I think this is why Apple and others in the high-tech space have refrained from creating smartphones and other devices that recognize speech as a biometric ID.

The study analysed speech from two professional impersonators and found they were able to fool both automatic systems and a panel of human listeners.

The study notes:

“In the case of acted speech, a successful strategy for voice modification was to sound like a child, as both automatic systems’ and listeners’ performance degraded with this type of disguise.”

We’ve had plenty of high-profile security scares concerning non-Apple smart speaker systems. These always-on, always-listening systems put a permanent ear in every user’s home.

The notion that these ears can be activated relatively easily through imitation, voice disguise, or other means suggests users should ensure that payments and access to personal data demand additional security steps before being enabled.
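What might those additional steps look like? One common pattern is to treat a voice match as identification, not authorization: benign requests go through, while payments and personal data demand a second factor such as a PIN or confirmation on a trusted, unlocked phone. A rough sketch of that policy in Swift, with every name hypothetical:

```swift
import Foundation

// Hypothetical policy layer for a smart speaker: a voice match alone
// may trigger benign actions, but anything touching money or personal
// data requires a second factor before it is carried out.
enum VoiceAction {
    case playMusic
    case readCalendar
    case makePayment(amountUSD: Decimal)
}

struct ActionPolicy {
    func requiresSecondFactor(_ action: VoiceAction) -> Bool {
        switch action {
        case .playMusic:
            return false  // low risk: a mimicked voice gains little here
        case .readCalendar, .makePayment:
            return true   // personal data and payments need more than a voice
        }
    }
}
```

Under a policy like this, the TV-triggered purchases described next would have stalled at the confirmation step.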

This was made clear fairly recently when a number of Amazon Echo devices tried to order a child’s toy after a TV broadcast spoke the Echo wake word and told the devices to buy one.

That was a relatively easy flaw to resolve, but it’s not hard to imagine what might happen if someone gained access to your home, activated your smart speaker system, and used it to make a payment in a hard-to-trace cryptocurrency such as Bitcoin or Dash.

That’s a domestic threat, but the stakes are even higher for enterprise CIOs who may be considering investing in smart speaker systems for some element of their business.

It seems pretty clear that giving smart systems like these unfettered access to even a limited quantity of a company’s data could pose a threat.

In some enterprises, such a threat may even put the company in breach of regulated security protocols.

Beyond that, anyone considering an investment in a smart speaker system needs to be certain they have absolute control over who is listening to what is said around it, at any time.

You do not want a manufacturer holding onto recordings of your personal speech, and you do not want business competitors undermining device security in the hope of learning secrets about you or your business.

It is also important to note that only Apple’s HomePod anonymizes the recordings of your interactions with the device that are stored online.

Both Amazon and Google link those recordings to your account, which opens up another potential attack vector. (Apple deletes this data after six months, and it is never associated with your user ID.)

There is no real need to panic, of course. What matters is to ensure these systems are deployed securely.

We don’t yet know all the details concerning how Apple plans to secure HomePod systems, but we will find out more when the $349 product ships in December 2017. 

We recently learned that Apple will offer only limited support for third-party apps at launch. That’s a good thing: it limits the potential security risk, and support will expand as threats are recognized and overcome over time. Your friendly voice-imitating burglar will not be able to order themselves an Uber getaway car using your Apple device.

We also know that SiriKit for HomePod relies on a nearby iPhone or iPad to work. This suggests you’ll first need to authorize HomePod from your unlocked iOS device before using it, which should make it a lot less vulnerable to voice-mimicry attacks.
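We can’t inspect Apple’s HomePod internals, but SiriKit on iOS already gives developers relevant levers: an Intents extension can list sensitive intents under the IntentsRestrictedWhileLocked key in its Info.plist so they only run on an unlocked device, and a handler can simply decline to complete a transaction by voice at all. A sketch of the latter, using real SiriKit types but a purely illustrative policy:

```swift
import Intents

// Sketch of a SiriKit payment handler that refuses to complete a
// transaction hands-free, deferring to the app on an unlocked device
// where Face ID / Touch ID / passcode gates the payment.
// INSendPaymentIntent and INSendPaymentIntentResponse are real SiriKit
// types; the always-defer policy is an illustrative assumption, not
// Apple's actual HomePod logic.
class PaymentIntentHandler: NSObject, INSendPaymentIntentHandling {
    func handle(intent: INSendPaymentIntent,
                completion: @escaping (INSendPaymentIntentResponse) -> Void) {
        // Never authorize money on a voice match alone: bounce the user
        // into the app to finish the payment there.
        completion(INSendPaymentIntentResponse(code: .failureRequiringAppLaunch,
                                               userActivity: nil))
    }
}
```

Handled this way, a voice request for money ends at a prompt on the owner’s iPhone rather than in a completed transaction.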

Developer Guilherme Rambo claims we will be able to define who has access to HomePod controls: “people on the same network, only people sharing your home, everyone or only password-protected users”.

Apple has been designing connected devices for long enough to have learned the need to secure them. Its Face ID and Touch ID biometric authorization systems, for example, demand a passcode at critical times.

All the same, Apple does seem to be working to ensure that your HomePod system will not undermine your digital security. And that’s a really good thing.

Google+? If you use social media and happen to be a Google+ user, why not join AppleHolic’s Kool Aid Corner community and get involved with the conversation as we pursue the spirit of the New Model Apple?

Got a story? Please drop me a line via Twitter and let me know. I’d like it if you chose to follow me there so I can let you know about new articles I publish and reports I find.
