Mobile security: Worse than you thought

Credit to Author: Evan Schuman | Date: Tue, 18 Feb 2020 03:00:00 -0800

Many security professionals have long held that the words “mobile security” are an oxymoron. True or not, with mobile usage soaring in today’s enterprises, that viewpoint may soon be irrelevant. It’s a reasonable estimate that, in 2020, knowledge workers use mobile devices to either supplement or handle much of their work 98% of the time. Laptops still have a role (OK, if you want to get literal, I suppose a laptop can be considered mobile), but that’s only because of their larger screens and keyboards. I’d give mobile players maybe three more years before even that advantage becomes moot.

That means that security on mobile needs to become a top priority. To date, it usually has been addressed with enterprise-grade mobile VPNs, antivirus software and more secure communication methods (such as Signal). But in the latest Verizon Data Breach Investigations Report — always a worthwhile read — Verizon argues persuasively that, wireless connectivity aside, the mobile form factor in and of itself poses security risks.

From the Verizon report: Users are “significantly more susceptible to social attacks they receive on mobile devices. This is the case for email-based spear phishing, spoofing attacks that attempt to mimic legitimate webpages, as well as attacks via social media. The reasons for this stem from the design of mobile and how users interact with these devices. In hardware terms, mobile devices have relatively limited screen sizes that restrict what can be accessed and viewed clearly. Most smartphones also limit the ability to view multiple pages side-by-side, and navigating pages and apps necessitates toggling between them—all of which make it tedious for users to check the veracity of emails and requests while on mobile. Mobile OS and apps also restrict the availability of information often necessary for verifying whether an email or webpage is fraudulent. For instance, many mobile browsers limit users’ ability to assess the quality of a website’s SSL certificate. Likewise, many mobile email apps also limit what aspects of the email header are visible and whether the email-source information is even accessible,” the report said. “Mobile software also enhances the prominence of GUI elements that foster action — accept, reply, send, like, and such — which make it easier for users to respond to a request. Thus, on the one hand, the hardware and software on mobile devices restrict the quality of information that is available, while on the other they make it easier for users to make snap decisions. The final nail is driven in by how people use mobile devices. Users often interact with their mobile devices while walking, talking, driving, and doing all manner of other activities that interfere with their ability to pay careful attention to incoming information. While already cognitively constrained, on screen notifications that allow users to respond to incoming requests, often without even having to navigate back to the application from which the request emanates, further enhance the likelihood of reactively responding to requests.”

In short, users dealing with email on a mobile device — an incredibly common occurrence on corporate campuses — are far more susceptible to a phishing attack than they would be on a typical desktop device. You can be confident this point has not been lost on cyberthieves and cyberterrorists. They already know the likely OS of a device that clicks on one of their evil phishing attacks, which makes them quite happy.

What to do about this? First, all phishing/malware training must now prominently feature a mobile-specific component. Yes, training only helps so much, but it’s a good start. Second, when you are testing your own people by trying to trick them into clicking, design the tricks to assume mobile. Your success rates will go up, and people will learn faster what would have happened had your test been a real attack.
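If you want to see how those mobile-targeted tests actually land, the simplest measure is to record the user agent of every click. Here is a minimal sketch, in Python, of a phishing-test landing endpoint that logs whether each click came from a mobile or desktop browser; the URL path, port and log file name are illustrative assumptions, not part of any particular phishing-simulation product.

```python
# Minimal sketch of a phishing-test landing endpoint that records whether
# each click came from a mobile or desktop browser, so training follow-up
# can be targeted. Path, port and log file name are illustrative only.
import csv
import datetime
from http.server import BaseHTTPRequestHandler, HTTPServer

MOBILE_MARKERS = ("iphone", "android", "ipad", "mobile")

class ClickLogger(BaseHTTPRequestHandler):
    def do_GET(self):
        ua = self.headers.get("User-Agent", "").lower()
        platform = "mobile" if any(m in ua for m in MOBILE_MARKERS) else "desktop"
        with open("phish_test_clicks.csv", "a", newline="") as f:
            csv.writer(f).writerow(
                [datetime.datetime.utcnow().isoformat(), self.path, platform, ua]
            )
        # Show the "this was a simulated phish" teaching page.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<h1>This was a simulated phishing test.</h1>")

if __name__ == "__main__":
    HTTPServer(("", 8080), ClickLogger).serve_forever()
```

Splitting the click log by platform is what lets you compare mobile and desktop susceptibility, which is exactly the gap the Verizon data points to.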

Third, pressure handset manufacturers and the two mobile OS giants — Google and Apple, of course — to address the most shameful part of Verizon’s concerns, which is that “many mobile browsers limit users’ ability to assess the quality of a website’s SSL certificate. Likewise, many mobile email apps also limit what aspects of the email header are visible and whether the email-source information is even accessible.”

Although both points are critical and should have been addressed long ago, it’s the email header part that is the most irresponsible. Granted, most consumers rarely bother to check even on desktop devices, but Fortune 1,000 employees need to be told that checking is mandatory before clicking. Although some phishers are sophisticated and take the trouble to craft plausible-looking fake domain names, most don’t bother. And with the expectation that far more of those messages will be seen on a tiny mobile screen over the next couple of years, I’d guess that most will continue not to bother. I’m still struck by how many of the phishing attempts I see every day fall apart after a quick peek at the email header.
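For the curious, here is a minimal sketch of the kind of header peek I mean, using Python’s standard email library on a saved raw message. The file name is an assumption, and headers such as Authentication-Results won’t always be present, but simply comparing the visible From address against the Return-Path is often enough to expose the crude stuff.

```python
# Minimal sketch of the header check the column argues every user (and every
# mobile mail client) should make easy: compare the visible From address with
# Return-Path and the receiving server's Authentication-Results header.
# Assumes the suspicious message is saved as a raw .eml file.
from email import policy
from email.parser import BytesParser
from email.utils import parseaddr

def inspect_headers(path: str) -> None:
    with open(path, "rb") as f:
        msg = BytesParser(policy=policy.default).parse(f)

    display_name, from_addr = parseaddr(msg.get("From", ""))
    _, return_path = parseaddr(msg.get("Return-Path", ""))
    auth_results = msg.get("Authentication-Results", "not present")

    print(f"From display name : {display_name!r}")
    print(f"From address      : {from_addr!r}")
    print(f"Return-Path       : {return_path!r}")
    print(f"Authentication    : {auth_results}")

    from_domain = from_addr.rsplit("@", 1)[-1].lower()
    bounce_domain = return_path.rsplit("@", 1)[-1].lower()
    if from_domain and bounce_domain and from_domain != bounce_domain:
        print("WARNING: From and Return-Path domains differ -- a classic phishing sign.")

if __name__ == "__main__":
    inspect_headers("suspicious.eml")
```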

Hence, full headers must be easily accessible on any mobile device, and there is no reason — not business-based, not marketing-based, certainly not security-based — for Apple and Google not to fix this. Let both firms know that this will influence which phones you’ll buy later this year and you’ll have their attention.

As for SSL certs, I suppose it can’t hurt to fix that glaring hole. But unlike email headers, the very crappy tactics most cert issuers use to verify the authenticity of a request make me worry that cyberthieves will turn this to their advantage and trick cert issuers (not that difficult) into giving them legitimate credentials for not-so-legitimate domains. So, yes, certificate verification is important (I’m not sure how much any browser can do to “assess the quality of a website’s SSL certificate,” but if they think of a way, by all means do so), but I’d first like to see dramatic improvements in cert issuers’ verification methods.
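To be concrete about what “assessing” a certificate could even mean, here is a minimal sketch, using Python’s standard ssl module, that pulls the details a browser could surface to a user: issuer, subject, validity window and the DNS names the cert actually covers. The hostname is an illustrative placeholder, and certificate validation is left on, so a broken chain raises an error rather than printing anything.

```python
# Minimal sketch of pulling the details a browser could surface about a
# site's certificate: who issued it, for which names, and for how long.
# The hostname is illustrative; validation stays on, so an invalid chain
# raises an SSL error instead of quietly printing details.
import socket
import ssl

def describe_certificate(hostname: str, port: int = 443) -> None:
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()

    issuer = dict(x[0] for x in cert["issuer"])
    subject = dict(x[0] for x in cert["subject"])
    sans = [value for key, value in cert.get("subjectAltName", []) if key == "DNS"]

    print(f"Subject     : {subject.get('commonName')}")
    print(f"Issuer      : {issuer.get('organizationName')} / {issuer.get('commonName')}")
    print(f"Valid from  : {cert.get('notBefore')}")
    print(f"Valid until : {cert.get('notAfter')}")
    print(f"DNS names   : {', '.join(sans)}")

if __name__ == "__main__":
    describe_certificate("www.example.com")
```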

It’s not merely “verify that you are who you claim to be” (unless you have a constantly updated list of bad guys, I’m not sure of the point), nor is it making sure that you own the domain for which you want a certificate. You need to verify that, but it is a remarkably low bar. I want the cert outfits to look at the domain and see if it is close to someone else’s well-known domain. I’m not suggesting that you decline all such requests, but at least reach out to the owner of the well-known domain to see if there are issues. And perhaps question the requester far more closely. If a cert is supposed to make a user comfortable enough to click, let’s put some robust and intelligent authentication into the process. And then make it easy to check a cert’s authenticity. With today’s verification methods, a major cyberthief is more likely to trick a cert player into issuing a real one than to use a fake one. Both problems need to be addressed.
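As a rough illustration of the lookalike screening I’m asking issuers to do, here is a minimal sketch that compares a requested domain against a short list of well-known domains using simple string similarity. The brand list and the 0.8 threshold are assumptions for the example, not an industry standard, and a real issuer would need homoglyph and punycode checks on top of this.

```python
# Minimal sketch of a lookalike check a certificate issuer could run before
# vouching for a domain: flag requests whose name is a close string match to
# a well-known domain. Brand list and 0.8 threshold are illustrative only.
from difflib import SequenceMatcher

WELL_KNOWN = ["paypal.com", "microsoft.com", "apple.com", "chase.com", "google.com"]

def lookalike_matches(requested: str, threshold: float = 0.8) -> list[tuple[str, float]]:
    requested = requested.lower().strip()
    hits = []
    for known in WELL_KNOWN:
        score = SequenceMatcher(None, requested, known).ratio()
        if score >= threshold and requested != known:
            hits.append((known, round(score, 2)))
    return sorted(hits, key=lambda h: -h[1])

if __name__ == "__main__":
    for domain in ["paypa1.com", "rnicrosoft.com", "example.org"]:
        matches = lookalike_matches(domain)
        if matches:
            print(f"{domain}: resembles {matches} -- escalate before issuing a cert")
        else:
            print(f"{domain}: no obvious lookalike")
```

A hit here wouldn’t mean automatic refusal; it would simply trigger the “reach out to the owner of the well-known domain and question the requester more closely” step described above.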
