Bad Choices, Exposed Data

Credit to Author: Mark Nunnikhoven (Vice President, Cloud Research) | Date: Wed, 15 Feb 2017 16:46:07 +0000

Our researchers produce a lot of really interesting material. This week, they published a paper called "U.S. Cities Exposed In Shodan." The research looks at systems that are exposed online…exposure that system owners aren't always aware of.

The most disturbing statistic in the research is a simple one: the fourth most exposed system is MySQL. I had to double and then triple check that, but the stat holds up.

The rest of the top 20 exposures are expected: web servers, file servers, and remote access systems. These systems have to be exposed to the Internet in order to do their jobs.

But why is a database exposed to the outside world?

Databases Targeted

This isn’t the first time that we’ve seen databases exposed directly to the Internet. There has been a recent spike in attacks on MongoDB installations. The attackers quickly moved on to ElasticSearch, Hadoop, and other sources of data that were publicly accessible.

This isn't surprising. Data is a valuable currency in the underground, and criminals are now double dipping with ransomware: first selling the data back to its owners and then, for larger data sets, selling the data in the underground as well.

Protecting a valuable asset like a database should be priority No. 1 for your teams. Which loops back to the original question…why are these data sources exposed to the outside world?

Design Choices

The answer to that is multi-faceted. Teams are looking to deploy at speed, and with cloud technologies they can go to market faster than ever. In that haste, security measures can be forgotten or passed over until "later" (pro tip: later never comes; security needs to be built in from the start).

There's also a higher demand for client-side processing in modern web apps. Data is needed to fuel that processing. As a result, we see a tighter integration of the client side and the data backend. In order to increase performance, a direct connection is often made.

This stands in stark contrast to the traditional design approach where data flowing between the front end and back end would pass through strong security controls.

Having a direct route to your data source opens up additional risks, as we've seen with these attacks. If you're making this design choice, you have to make sure that you're taking steps to protect your data.

Paramount among those steps is limiting the privileges of the front-end identities making the requests and making sure that each session uses a unique identity instead of having one for the entire application.
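
Here's what that can look like in practice. This is a minimal sketch, assuming a MySQL back end and the pymysql driver; the host, account names, subnet, and table are all hypothetical stand-ins for your own environment:

```python
# A minimal sketch of least-privilege grants for a front-end identity,
# run once by an administrator. Assumes MySQL and the pymysql driver;
# the host, accounts, database, and table are all hypothetical.
import pymysql

admin = pymysql.connect(host="db.internal", user="admin",
                        password="admin-secret")
with admin.cursor() as cur:
    # A dedicated identity for the web tier (ideally one per session,
    # issued by something like a secrets manager), not one account
    # shared by the whole application, restricted to the app subnet.
    cur.execute("CREATE USER IF NOT EXISTS 'web_reader'@'10.0.1.%' "
                "IDENTIFIED BY 'reader-secret'")
    # Grant only what the front end actually needs: read access to one
    # table. No INSERT, UPDATE, DELETE, DROP, or GRANT.
    cur.execute("GRANT SELECT ON shop.products TO 'web_reader'@'10.0.1.%'")
admin.close()
```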

Misconfigurations

Compounding the issue of design choices is a lack of awareness of the configuration options available for each of these data sources.

In a lot of cases where a data source is exposed, there’s actually no requirement…it’s simply a misconfiguration. The tooling supports a more secure design but the team needs to take advantage of those options to lock down their data.
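
MySQL's bind_address setting is a concrete example: left listening on every interface, the server is one firewall slip away from showing up in Shodan. Here's a quick sketch of a check, again assuming pymysql and hypothetical credentials:

```python
# A quick sketch that flags one common MySQL misconfiguration: the
# server listening on every network interface. Assumes the pymysql
# driver; the host and credentials are hypothetical.
import pymysql

conn = pymysql.connect(host="db.internal", user="auditor",
                       password="auditor-secret")
with conn.cursor() as cur:
    cur.execute("SHOW VARIABLES LIKE 'bind_address'")
    _, bind_address = cur.fetchone()
conn.close()

if bind_address in ("", "*", "0.0.0.0", "::"):
    print(f"WARNING: MySQL is listening on all interfaces ({bind_address!r})")
else:
    print(f"OK: MySQL is bound to {bind_address}")
```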

Misconfigurations happen even when the team has the best of intentions. That’s why security testing should be part of your regular test suite. While it may sound like a major undertaking, it’s actually quite simple in this case.

As part of your regular deployment testing, you should be running basic tests to see if services are accessible from places they shouldn't be. If your database should only accept requests from your web services, try accessing it from another system (like the test server itself).

Catching the error from a basic connection request is all it takes. It’s a simple step that provides a major return on your five-minute investment.
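
Here's what that test can look like, sketched with nothing but the Python standard library; the host and port are placeholders for your own deployment:

```python
# A minimal sketch of a deployment test: assert the database is NOT
# reachable from a host that shouldn't have access (here, the test
# server itself). Standard library only; host and port are placeholders.
import socket

DB_HOST, DB_PORT = "db.example.internal", 3306  # 3306 is MySQL's default

def db_is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused or timed out: the port isn't reachable.
        return False

assert not db_is_reachable(DB_HOST, DB_PORT), \
    f"{DB_HOST}:{DB_PORT} is reachable from an unauthorized host"
print("OK: the database is not exposed to this host")
```

Run it from a machine that should have no route to the database and wire it into your deployment pipeline.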

Other basic tests you should consider (sketched below):

  • removing data beyond the limits of your app's privileges
  • enumerating other data sources / tables / indices
  • running queries with too broad a scope for the task at hand
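
A sketch of those checks, again assuming MySQL and pymysql, and reusing the hypothetical low-privilege web_reader account from earlier:

```python
# A sketch of those three checks against a MySQL back end, assuming
# pymysql and the hypothetical low-privilege 'web_reader' account from
# earlier. Each check passes when the server REFUSES the operation.
import pymysql

conn = pymysql.connect(host="db.internal", user="web_reader",
                       password="reader-secret", database="shop")

def expect_denied(cursor, sql: str, label: str) -> None:
    try:
        cursor.execute(sql)
        print(f"FAIL: {label} was allowed")
    except pymysql.err.MySQLError:
        print(f"OK: {label} was denied")

with conn.cursor() as cur:
    # 1. Removing data beyond the app's privileges should be rejected.
    expect_denied(cur, "DELETE FROM products", "DELETE on products")
    # 2. Enumeration should only reveal what this identity was granted.
    cur.execute("SHOW DATABASES")
    print("Visible databases:", [row[0] for row in cur.fetchall()])
    # 3. An overly broad, cross-schema query should also be rejected.
    expect_denied(cur, "SELECT * FROM mysql.user", "read of mysql.user")
conn.close()
```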

These ideas are just the tip of the iceberg. There's a lot more you can do to test the security configuration of your data source, but the tests listed above are simple to write, run quickly, and identify common avenues of attack.

Next Steps

Teams are doing more and moving faster than ever. That’s a wonderful thing. But if you’re moving fast and exposing your data, you’re going to spend a lot more time cleaning up from a breach than you’ve gained by moving faster.

The first step for anyone deploying a publicly accessible data source is to step back and ask if you really want to be doing that. There are any number of ways to expose the data securely without making a direct connection from the Internet to the source.
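
One common pattern is a thin API layer that sits between the Internet and the data source. Here's a minimal sketch using Flask and pymysql (assumptions, not a prescription); every name and query in it is hypothetical:

```python
# A minimal sketch of that pattern: a thin API layer between the
# Internet and the data source. Flask is one assumed option; the
# database details, query, and names are all hypothetical.
from flask import Flask, jsonify
import pymysql

app = Flask(__name__)

def get_db_connection():
    # The database is reachable only from this app server over a
    # private network, never directly from the Internet.
    return pymysql.connect(host="db.internal", user="web_reader",
                           password="reader-secret", database="shop")

@app.route("/products/<int:product_id>")
def get_product(product_id: int):
    # Clients see a narrow HTTP endpoint with a fixed, parameterized
    # query, not a database port.
    conn = get_db_connection()
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT name, price FROM products WHERE id = %s",
                        (product_id,))
            row = cur.fetchone()
    finally:
        conn.close()
    if row is None:
        return jsonify(error="not found"), 404
    return jsonify(name=row[0], price=float(row[1]))
```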

If you decide that it is the right design choice, understanding the technology and services in your solution stack is the next step to a strong defense. We've been screaming RT*M for a long time. There's a reason for that…it's great advice.

After you’ve got a solid understanding of what and where you’re deploying your application, it’s time to implement some basic testing to ensure that you’ve configured the deployment correctly.

What are some of the reasons you’ve exposed a data source directly to the Internet? Performance? Simplicity? Let’s chat on Twitter (where I’m @marknca) to better understand the motivation and issues.
