How Hackers Are Leveraging Machine Learning

Credit to Author: Trend Micro | Date: Tue, 13 Feb 2018 00:34:40 +0000

Machine learning can be leveraged both for beneficial enterprise purposes and for malicious activity.

For business executives and internal information security specialists, it seems that every day brings a new potential risk to the company – and in the current threat environment, it isn't hard to understand this viewpoint.

Sophisticated cybercriminals are continually on the lookout for the next big hacking strategy, and aren't shy about trying out new approaches to breach targets and infiltrate enterprises' IT assets and sensitive data. One of the best ways to stem the rising tide of threats in this type of landscape is to boost awareness and increase knowledge about the latest risks and how to guard against them.

Currently, an emerging strategy among hackers is the use of machine learning. Unfortunately, like many advanced and innovative technological processes, machine learning can be leveraged both for beneficial enterprise purposes and for malicious activity.

Machine learning: A primer

Many internal IT and development teams, as well as technology agencies, are experimenting with machine learning – but white hats aren't alone in their use of this method.

As SAS explained, machine learning is an offshoot of artificial intelligence, and is based on the ability to build automated analytical models. In other words, machine learning enables systems to increase their own knowledge and adapt their processes and activities according to their ongoing use and experience.

"The iterative aspect of machine learning is important because as models are exposed to new data, they are able to independently adapt," SAS stated. "They learn from previous computations to produce reliable, repeatable decisions and results. It's a science that's not new – but one that has gained fresh momentum."

Most individuals have likely encountered some form of machine learning in their daily lives already – online recommendations from streaming services and retailers, as well as automated fraud detection, are use cases already in place in the real world.
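
To make the iterative aspect concrete, here is a minimal sketch of a fraud-style classifier that keeps adapting as new labeled transactions arrive. The features, data, and "fraud rule" below are toy assumptions for illustration, not a real fraud model.

```python
# A minimal sketch of the iterative learning described above: a classifier
# that updates itself as each new batch of labeled transactions arrives.
# All features, data, and the toy "fraud rule" are assumptions for illustration.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier()   # a linear classifier trained incrementally via SGD

def new_batch(n=500):
    # Toy transactions: [amount, hour_of_day]; label 1 = fraudulent.
    X = np.column_stack([rng.exponential(100, n), rng.integers(0, 24, n)])
    y = ((X[:, 0] > 300) & (X[:, 1] < 6)).astype(int)   # toy fraud rule
    return X, y

# As new data arrives, the model adapts rather than being rebuilt from
# scratch - it "learns from previous computations."
for day in range(7):
    X, y = new_batch()
    model.partial_fit(X, y, classes=[0, 1])

X_today, _ = new_batch(5)
print(model.predict(X_today))   # flag likely-fraudulent transactions
```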

Artificial intelligence and machine learning can be used to bolster malicious attacks.

Machine learning on both sides of the coin

However, as legitimate agencies and white hat security professionals continue to dig deeper into advantageous machine learning capabilities, hackers are increasingly looking toward AI-based processes to boost the effects of cyberattacks.

"We must recognize that although technologies such as machine learning, deep learning, and AI will be cornerstones of tomorrow's cyber defenses, our adversaries are working just as furiously to implement and innovate around them," Steve Grobman, security expert and McAfee chief technology officer told CSO. "As is so often the case in cybersecurity, human intelligence amplified by technology will be the winning factor in the arms race between attackers and defenders."

But how, exactly, are hackers putting machine learning algorithms to work, and how will these impact today's enterprises? Let's take a look:

ML vs. ML: Evasive malware

When hackers create malware, they don't just look to breach a business – they also often want to remain within victims' systems for as long as possible. One of the first, and likely most dangerous, ways machine learning will be leveraged by hackers is to fly under the radar of security systems aimed at identifying and blocking cybercriminal activity.

A research paper published on arXiv, the open research repository operated by Cornell University, described how this type of attack could be brought to life. The researchers created a generative adversarial network (GAN) capable of generating its own malware samples. Thanks to machine learning, the resulting samples were able to sidestep machine learning-based security solutions designed specifically to detect dangerous files.
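
To illustrate the general idea, here is a minimal, toy sketch in the spirit of that research: a generator learns to add benign-looking features to malware feature vectors until a substitute detector (and, by transfer, the defender's model) misclassifies them. Every dimension, the random data, and the stand-in "black-box" detector below are assumptions chosen purely for illustration, not a working attack.

```python
# Toy GAN-style evasion sketch: a generator adds features to malware feature
# vectors so they look benign to a detector. All data and models are toy stand-ins.
import torch
import torch.nn as nn

N_FEATURES = 128   # assumed size of a binary feature vector (e.g., API-call flags)
NOISE_DIM = 16

generator = nn.Sequential(
    nn.Linear(N_FEATURES + NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, N_FEATURES), nn.Sigmoid(),       # per-feature "add" probabilities
)
substitute_detector = nn.Sequential(                # the attacker's local copy of a detector
    nn.Linear(N_FEATURES, 256), nn.ReLU(),
    nn.Linear(256, 1), nn.Sigmoid(),                # P(malicious)
)

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(substitute_detector.parameters(), lr=1e-3)
bce = nn.BCELoss()

torch.manual_seed(0)
W_BLACKBOX = torch.randn(N_FEATURES)   # fixed random linear rule, purely illustrative

def black_box_detector(x):
    # Stand-in for the defender's model, which the attacker can only query.
    return (x @ W_BLACKBOX > 0).float().unsqueeze(1)

for step in range(200):
    malware = (torch.rand(64, N_FEATURES) > 0.7).float()   # toy "malware" samples
    benign = (torch.rand(64, N_FEATURES) > 0.9).float()    # toy "benign" samples
    noise = torch.rand(64, NOISE_DIM)

    # The generator may only ADD features (never remove), so samples stay "functional".
    adv_soft = torch.clamp(malware + generator(torch.cat([malware, noise], dim=1)), 0, 1)

    # 1) Train the substitute detector to imitate the black box's verdicts.
    d_opt.zero_grad()
    batch = torch.cat([adv_soft.detach(), benign])
    d_loss = bce(substitute_detector(batch), black_box_detector(batch))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator so its samples look benign (label 0) to the substitute.
    g_opt.zero_grad()
    g_loss = bce(substitute_detector(adv_soft), torch.zeros(64, 1))
    g_loss.backward()
    g_opt.step()

# Binarize the generator's output and see how often the black box still flags it.
detected = black_box_detector(adv_soft.round().detach()).mean().item()
print(f"black-box detection rate on adversarial samples: {detected:.2f}")
```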

Security experts also predicted that machine learning could be utilized by cybercriminals to modify the code of new malware samples based on the ways in which security systems detect older infections. In this way, hackers will leverage machine learning to create smarter malware that can remain undetected within infected systems for longer periods.

This will require enterprises to be increasingly proactive with their security posture – monitoring of critical IT systems and assets must be continuous, and security officers must ensure that users observe best protection practices in their daily access and network activities.

Hackers could automate data gathering processes with machine learning.

Preemptive efforts: Laying the groundwork for attack

Forbes contributor and ERPScan co-founder and CTO Alexander Polyakov noted that hackers could also begin utilizing machine learning to support the work done leading up to an attack.

Before they look to breach an organization, cybercriminals typically begin by gathering as much information about a target as possible. This includes details about company stakeholders that could later be used to spur a phishing attack. With machine learning in place, hackers don't have to carry out this research manually; they can automate and accelerate the entire process.
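
One way to grasp the scale involved is to run the same kind of sweep against your own public pages and see what an automated collector would find. The sketch below is purely illustrative: the URLs and HTML selectors are hypothetical, and the machine learning layer (for example, deciding which harvested contacts to target) would sit on top of a collection step like this.

```python
# Defensive exposure check: sweep your OWN public pages to see which names,
# titles and email addresses an automated crawler could harvest in seconds.
# URLs and CSS selectors are hypothetical assumptions; real pages differ.
import re
import requests
from bs4 import BeautifulSoup

PAGES = ["https://example.com/about", "https://example.com/team"]   # hypothetical
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

exposure = []
for url in PAGES:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    text = soup.get_text(" ", strip=True)
    exposure.extend(EMAIL_RE.findall(text))                                   # exposed addresses
    exposure.extend(h.get_text(strip=True) for h in soup.select("h3.name"))   # assumed markup

print(sorted(set(exposure)))   # what an automated recon pass would see
```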

Leveraging machine learning in this way could mean a spike in targeted attacks that utilize personally identifiable information about company leaders and even lower-level employees. Polyakov reported that this style of phishing attack could boost the chances of success by as much as 30 percent.

As phishing and targeted attacks become more sophisticated, it's imperative that executives and employees are educated about how to spot a fraudulent message crafted to appear legitimate. Often, phishing messages will include the recipient's name, title and other details to encourage the victim to open them. However, these emails frequently contain giveaways: spelling errors, or subtle alterations to sender email addresses, company names, logos and other elements meant to lend an appearance of legitimacy. Ensuring that employees don't fall for these tricks begins with proper security education and training as part of a layered security posture.

Bypassing CAPTCHA systems: Unauthorized access

Many websites and systems leverage CAPTCHA technology as a way to distinguish human users from bots or machine input. However, in the age of machine learning, even these formerly tried-and-true access protections aren't impervious.

This isn't the first time machine learning has emerged as a way for hackers to break through CAPTCHA access – in 2012, researchers showed that machine learning could bypass reCAPTCHA-based systems with an 82 percent success rate. More recently, in 2017, researchers used machine learning to sidestep Google's reCAPTCHA protections with 98 percent accuracy.
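
The machinery behind such results is, at its core, ordinary image classification. The sketch below shows the kind of small convolutional network a CAPTCHA solver might be built around; the image size, alphabet and training data are toy assumptions, and published attacks add character segmentation and far larger labeled datasets.

```python
# Minimal sketch of the image classifier at the heart of ML CAPTCHA solvers:
# a small CNN mapping a fixed-size character crop to one of 36 symbols.
# Image size, class count and training data are toy assumptions.
import torch
import torch.nn as nn

N_CLASSES = 36   # assumed alphabet: A-Z plus 0-9

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, N_CLASSES),   # for assumed 32x32 grayscale crops
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Toy stand-in for labeled character crops; a real solver trains on
# thousands of segmented, labeled CAPTCHA images.
images = torch.rand(256, 1, 32, 32)
labels = torch.randint(0, N_CLASSES, (256,))

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

# Inference: pick the most likely character for each crop.
predicted = model(images[:4]).argmax(dim=1)
print(predicted)
```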

This threat means that enterprises will have to strengthen their security protections, particularly those that prevent botnet access on customer-facing systems. Polyakov recommended replacing recognition-based CAPTCHAs with MathCAPTCHA or another, more robust alternative.

Machine learning for security

Thankfully, as noted, machine learning can also be leveraged to boost security on the side of the enterprise.

As noted previously on this blog, machine learning can help pinpoint and close gaps in IoT security, improve the monitoring of data exchange between employees, and even predict and stop zero-day threats.
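
As a simple defensive illustration, here is a minimal sketch of unsupervised anomaly detection applied to the kind of data-exchange monitoring mentioned above. The features, traffic values and contamination rate are assumptions; production systems rely on far richer telemetry and tuning.

```python
# Minimal sketch of machine learning used defensively: an unsupervised
# anomaly detector flagging unusual data-exchange patterns.
# Feature choice and values are assumptions for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Toy flow records: [bytes_sent, bytes_received, distinct_destinations]
normal_traffic = np.column_stack([
    rng.normal(2_000, 500, 1_000),
    rng.normal(8_000, 2_000, 1_000),
    rng.integers(1, 5, 1_000),
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

# A suspicious burst: large outbound transfer to many distinct destinations.
suspect = np.array([[500_000, 1_000, 40]])
print(detector.predict(suspect))   # -1 means anomalous, 1 means normal
```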

And to learn more about how to safeguard your enterprise against machine learning-based attacks, connect with the security experts at Trend Micro today.

http://feeds.trendmicro.com/TrendMicroSimplySecurity