When the threats get weird, the security solutions get weirder

Credit to Author: Mike Elgan | Date: Sat, 02 Dec 2017 02:00:00 -0800

The world of security is getting super weird. And the solutions may be even weirder than the threats.

I told you last week that some of the biggest companies in technology have been caught deliberately introducing potential vulnerabilities into mobile operating systems and making no effort to inform users.

One of those was introduced into Android by Google. In that case, Android had been caught transmitting location data that was gathered without using the phone’s GPS, and even without an installed SIM card. Google claimed that it never stored or used the data, and it later ended the practice.

Tracking is a real problem for mobile apps, and one that is underappreciated when companies weigh BYOD policies.

Yale Law School’s Privacy Lab and the France-based nonprofit Exodus Privacy have documented that more than 75% of the 300-plus Android apps they examined contained trackers of one kind or another, most of which exist for advertising, behavioral analytics or location tracking.

Most of that location tracking relies on accessing GPS information, which requires user opt-in. But now, researchers at Princeton University have demonstrated a potential privacy breach by creating an app called PinMe, which harvests location information from a smartphone without using GPS at all.

In general, the belief that turning off a phone’s location feature protects us from location snoops has been invalidated.

In fact, many of our assumptions around security are being challenged by new facts. Take two-factor authentication, for example.

A report last month by Javelin Strategy & Research claimed that current applications of multi-factor authentication are “being undermined.” Two- or multi-factor authentication is also underutilized by enterprises, with just over one-third using “two or more factors to secure access to their data and systems.”

So we can’t trust two-factor authentication like we used to, and even if we could, it’s wildly underutilized.

But surely we can trust Apple devices, right? Apple has a sterling reputation for strong security. Or, I should say, “had” such a reputation.

Apple apologized and issued a patch this week for a major security flaw that enabled anyone with physical access to an Apple computer running macOS High Sierra to gain full access without a password, simply by entering “root” as the username and leaving the password field blank.

Apple fixed the flaw. But the fact that it existed at all is new and weird and challenges our beliefs about Apple’s security cred.

Apple’s new Face ID authentication has been defeated by researchers, and some security experts refuse to use it. The methods for overcoming Face ID range from simply finding someone who looks similar to creating a realistic mask to fool it. Cybercriminals are going to be building and wearing masks, apparently.

And some authentication systems sound worse than the risks they’re supposed to protect us from.

Facebook is reportedly testing an authentication scheme that requires users to take a selfie at the point of logging in. The trouble is that many smartphone photos embed time and location metadata, so the proof of identity could itself leak where and when you logged in.
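To see how much a single selfie can give away, here’s a minimal sketch using the Python Pillow library that reads the timestamp and GPS coordinates most phone cameras embed by default. The file name selfie.jpg is hypothetical, and whether those tags are present depends on the camera’s settings.

```python
# Minimal sketch: dump the time and location metadata from a photo.
# Requires the Pillow library; "selfie.jpg" is a hypothetical file.
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

img = Image.open("selfie.jpg")
exif = img._getexif() or {}          # Pillow's legacy flat EXIF accessor

for tag_id, value in exif.items():
    tag = TAGS.get(tag_id, tag_id)
    if tag == "DateTimeOriginal":
        print("Taken at:", value)    # e.g. 2017:12:02 09:41:00
    elif tag == "GPSInfo":
        gps = {GPSTAGS.get(k, k): v for k, v in value.items()}
        print("Latitude:", gps.get("GPSLatitude"), gps.get("GPSLatitudeRef"))
        print("Longitude:", gps.get("GPSLongitude"), gps.get("GPSLongitudeRef"))
```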

In the past month or two, our assumptions around security have been upended. Things we used to believe were secure are not.

And it’s going to get worse before it gets better.

The software security company McAfee said this month that 2018 will be characterized by a new intensity in attacks, as “adversaries will increase their use of machine learning to create attacks, experiment with combinations of machine learning and artificial intelligence (A.I.), and expand their efforts to discover and disrupt the machine learning models used by defenders.”

Our current security systems are broken, and “adversaries” are getting super sophisticated.

What we need are much better and more extreme security measures that are also usable in real-world, everyday scenarios by regular users.

But there’s reason for optimism.

Two Google researchers have developed a machine-learning technology that instantly detects whether anyone else is looking at your smartphone screen.

The system combines facial recognition (who is on camera) and gaze detection (what they’re looking at) to prevent “shoulder surfers” from sneaking a peek at your screen.

The detection works in a fraction of a second, and in practical use a shoulder-surfer event could cause the screen to go dark.
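As a rough illustration of the idea (a sketch of the kind of loop such a system might run, not Google’s actual implementation), the Python below assumes hypothetical detect_faces, is_enrolled and is_gazing_at_screen callables standing in for the face-recognition and gaze-detection models:

```python
import time

def protect_screen(camera, detect_faces, is_enrolled, is_gazing_at_screen,
                   dim_screen, restore_screen, poll_seconds=0.1):
    """Watch the front camera and hide the screen whenever an
    unrecognized face is looking at it (hypothetical sketch)."""
    while True:
        frame = camera.capture()                 # front-camera frame (assumed API)
        snooping = any(
            not is_enrolled(face) and is_gazing_at_screen(face)
            for face in detect_faces(frame)
        )
        if snooping:
            dim_screen()          # darken the moment a stranger looks
        else:
            restore_screen()
        time.sleep(poll_seconds)  # the demo reportedly reacts in a fraction of a second
```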

The face recognition here works a bit like the Not Hotdog app from HBO’s Silicon Valley: it isn’t trying to identify everyone, only to decide whether each face it sees belongs to the authorized user or to someone else. When it’s someone else, access is denied.

This is obviously superior in concept to the current use of face recognition on smartphones, where the authorized face unlocks the device and then, once it’s unlocked, anyone can see what’s on the screen.

The key concept behind this technology is constant, real-time authentication, rather than authenticating once and then letting anyone see or use the device afterward.

Google is also thinking about a “user-detecting laptop lid,” according to a recently granted Google patent.

The patent describes a laptop lid that automatically opens for authorized users, then repositions itself to directly face you as you move your head around.

It works by using two cameras, one on the outside of the lid and one on the inside, which detect and recognize faces. When the authorized user approaches the Pixelbook (presumably), the lid physically unlocks and opens. A certain amount of time after the authorized user has left the room, the laptop lid automatically closes and physically locks.
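In control-logic terms, the patent describes something like the following sketch. The Lid and camera objects, the recognizes_owner callable and the two-minute timeout are all assumptions made for illustration, not details from the patent:

```python
import time

ABSENCE_TIMEOUT = 120  # seconds; the patent leaves the exact delay open

def run_lid(lid, outer_camera, inner_camera, recognizes_owner):
    """Open and unlock for the enrolled user; close and lock once
    that user has been gone for a while (hypothetical sketch)."""
    last_seen = time.time()
    while True:
        if lid.is_closed():
            if recognizes_owner(outer_camera.capture()):
                lid.unlock()
                lid.open()
                last_seen = time.time()
        else:
            if recognizes_owner(inner_camera.capture()):
                last_seen = time.time()
            elif time.time() - last_seen > ABSENCE_TIMEOUT:
                lid.close()
                lid.lock()
        time.sleep(1)
```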

The patent also holds out the possibility of using alternative means of authentication, namely NFC, Bluetooth pairing, voice ID, iris scanning or gesture recognition — or combinations of methods.

From a security standpoint, the idea introduces a physical lock to authentication, with convenient, automatic unlocking for authenticated users.

Some forms of authentication are being perfected, too. For example, voice ID in concept is great because it’s easy — we’re all going to be talking to our phones anyway, so authenticating with voice is natural. Unfortunately, it’s easy to spoof.

Florida State University researchers have come up with a technology called VoiceGesture that adds a liveness check to voice ID. Voice authentication verifies users based on patterns in their voice, but those patterns can be spoofed with a high-quality recording. VoiceGesture uses the smartphone’s speaker to transmit ultrasonic sound waves that reflect off the user’s face as he or she speaks, confirming that the authorized voice is being produced in real time by a physical person and is not a recording.
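Conceptually, the liveness check boils down to playing a near-ultrasonic probe tone while the user speaks and looking for the Doppler spread that a moving mouth and face put around that tone. The sketch below assumes a 20 kHz probe, a 48 kHz recording supplied as a NumPy array and an arbitrary threshold; none of these values come from the researchers’ published implementation:

```python
import numpy as np

PROBE_HZ = 20_000      # near-ultrasonic probe tone played by the speaker
SAMPLE_RATE = 48_000   # assumed microphone sample rate

def doppler_energy(recording, band_hz=500):
    """Energy near the probe tone, excluding the tone itself."""
    spectrum = np.abs(np.fft.rfft(recording))
    freqs = np.fft.rfftfreq(len(recording), d=1.0 / SAMPLE_RATE)
    offset = np.abs(freqs - PROBE_HZ)
    near = (offset < band_hz) & (offset > 20)   # sidebands created by motion
    return spectrum[near].sum()

def is_live_speaker(recording, threshold=1e3):
    # A loudspeaker replaying a recording produces no facial motion,
    # so the reflected probe tone shows far less Doppler spread.
    return doppler_energy(recording) > threshold
```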

All this technology, of course, uses A.I. And A.I. is the key to better cybersecurity going forward.

It’s a well-known maxim in IT that as soon as you idiot-proof something, they build a better idiot. Which is to say: Users are often the weakest link in any chain of security.

That’s why A.I. will come into play to help users make better decisions.

A company called KnowBe4, for example, is building an A.I. virtual assistant that advises users on security decisions (“You may not want to download that attachment, Dave”).

What you need to know is this: Yesterday’s cyberattacks are going to be superseded in the year ahead by strange and unexpected new threats, many of which will deploy A.I. And the best (or only) defense will be weird new solutions themselves based on A.I.

An A.I. arms race is coming. And it’s going to be like nothing we’ve ever seen.
