Rise of the Funnel Cloud: When Good Clouds Go Bad

Credit to Author: James Cabe | Date: Mon, 07 Aug 2017 12:50:00 +0000

What is a cloud, really?

In the simplest terms, the cloud allows users to store and access data and programs on someone else’s hardware, usually over the internet, rather than using their own local device or network resources. But it is much more than simply offsite storage. It also includes services that allow users to replicate some or all of their local environment, from running applications to designing complex infrastructures. And it needs to be able to scale to large numbers of users.

Simply put, you do not have a “cloud” unless there is a way to commoditize those services. That means that cloud providers also need some sort of repeatability or automation that enables dynamic scalability, tools that enable self-service, and, lastly, some sort of platform from which to deliver the product.
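To make that definition concrete, here is a minimal sketch, in Python, of the self-service loop it implies: a request for service triggers automated, repeatable provisioning, with no operator in the path. Every name here (Pool, provision_server, handle_request, and the one-server-per-100-users rule) is invented for illustration; no particular provider works exactly this way.

```python
# Minimal sketch of the self-service, auto-scaling loop a "cloud" implies.
# All names and the capacity rule are illustrative, not any vendor's API.

SERVERS_PER_100_USERS = 1  # hypothetical capacity rule

class Pool:
    def __init__(self):
        self.servers = []

    def provision_server(self):
        # In a real platform this kicks off an automated, repeatable
        # build (image + config), not a hand-installed box.
        server = f"app-{len(self.servers) + 1:03d}"
        self.servers.append(server)
        return server

    def handle_request(self, users):
        # Self-service: the request itself triggers scaling; no ticket,
        # no human in the loop.
        needed = -(-users // 100) * SERVERS_PER_100_USERS  # ceiling division
        while len(self.servers) < needed:
            self.provision_server()
        return self.servers[:needed]

pool = Pool()
print(pool.handle_request(250))   # ['app-001', 'app-002', 'app-003']
```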

That is a cloud.

The industry has lived through a couple of cloud meltdowns, some on the public cloud (that virtual environment “out there”) and some on the hybrid side (where local and remote cloud environments merge), depending on the kind of meltdown. Some of these were the result of flawed procedures and planning, and others were the result of external forces, such as malicious attacks. But in every case, there seems to have been a failure of vision at the architectural level.

While these early challenges have largely been sorted out, we are now seeing many of the same issues reemerge in the IoT space. Two types of technical commodities are causing these challenges, and both are cloud-like and enable the cloud: DVRs, and commoditized routers and firewalls.

DVR cameras and services have been delivered in the cloud, but many were built with hardcoded backdoor passwords for factory support. Unfortunately, many of those built-in accounts and backdoors have been leaked to the cybercriminal community. The truth is, a DVR is really nothing more than a small Linux server running an open source operating system that can have a lot of software stacked on it. That includes attack software and packages that can damage the device, or that can weaponize the DVR so that, combined with tens of thousands of similarly compromised boxes, it can be used to harm others.
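This is exactly the weakness that botnets such as Mirai exploited: scanning for devices that still answer to factory credentials. As a defensive illustration, here is a rough Python sketch of how you might audit a device on your own network for a default telnet login. The credential pairs and the address are examples only (root/xc3511 was among the defaults Mirai abused), and the prompt parsing is deliberately naive; real devices vary widely.

```python
# Defensive audit sketch: does a device on *your own* network still accept
# a factory-default telnet login? Credentials, address, and the naive
# prompt parsing below are illustrative only.
import socket

DEFAULT_CREDS = [("root", "xc3511"), ("admin", "admin")]

def accepts_default_login(host, user, password, port=23, timeout=5):
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            s.recv(1024)                          # login prompt
            s.sendall(user.encode() + b"\r\n")
            s.recv(1024)                          # password prompt
            s.sendall(password.encode() + b"\r\n")
            reply = s.recv(1024)
            # "incorrect"/"failed" suggests rejection; anything else may
            # mean the backdoor account is still live.
            return not any(w in reply.lower() for w in (b"incorrect", b"failed"))
    except OSError:
        return False  # closed port, timeout, or no telnet service

for user, pw in DEFAULT_CREDS:
    if accepts_default_login("192.168.1.50", user, pw):
        print(f"WARNING: device accepts default credentials {user}/{pw}")
```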

The same is the case with home routers and firewalls.

Commoditized wireless routers/firewalls deployed in millions of homes have many of the same problems. Manufacturers and service providers have delivered and deployed these devices across the globe, and have layered services on top of them that create cloud environments for homes and offices.

When it all goes funnel-shaped

When the word “cloud” was first being adopted, I will admit I was one of the first people to start rolling my eyes. That was especially true when someone with stars in their eyes would start explaining what clouds were and why they were so different.

Start eyeroll.

I, of course, had worked on many “clouds” before. One comes to mind that was very sophisticated for its time, nearly 15 years before Amazon had any real traction or worked well. The company I worked for liked my skill set because I could script a little and I knew both network and server infrastructure like the back of my hand (I ride bikes, so I do indeed look at them often). The team was fairly sophisticated and had already automated the build process of their servers, completely without any management software. That was hard to do.

What made us so different as a group is that we wanted a user or manager to be able to request an application or service and have it delivered within an hour or less. This required quite a bit of automation on our part. So we scripted out server installs based on need and the location of the user, and were able to deliver them through Citrix within 15 minutes, even if the process required expanding capacity by bringing new servers into the fold. Oh, and all of this was managed through a web console.
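For flavor, here is a hedged sketch, in Python, of what that kind of request-driven build might have looked like: pick a build template from the requested application and the user’s location, run the scripted install, then publish it through Citrix. Every name here (TEMPLATES, scripted_install, publish_to_citrix) is invented for illustration; the real system predated modern tooling and was plain scripting.

```python
# Hypothetical sketch of mid-2000s request-driven provisioning. All names
# and templates are invented for illustration.

TEMPLATES = {
    ("erp", "us-east"): "win2k3-erp-east.cfg",
    ("erp", "eu"):      "win2k3-erp-eu.cfg",
    ("crm", "us-east"): "win2k3-crm-east.cfg",
}

def scripted_install(template):
    # Stand-in for the ~15-minute automated build.
    print(f"building server from {template} ...")
    return f"srv-{abs(hash(template)) % 1000:03d}"

def publish_to_citrix(server, app):
    # Stand-in for publishing the app to the user's session.
    print(f"publishing {app} on {server} via Citrix")

def fulfill_request(app, location):
    template = TEMPLATES.get((app, location))
    if template is None:
        raise ValueError(f"no build template for {app} in {location}")
    server = scripted_install(template)
    publish_to_citrix(server, app)
    return server

fulfill_request("erp", "us-east")
```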

Welcome to the cloud… uh, in the mid-2000s.

I left not long after that project was finished. I later found out the company had planned to automate the entire lifecycle of an application, including its resources. So if an older server needed to be replaced or an application needed updating, this “pre-cloud” system would handle it through a change request made through the web portal. Once it was done, I heard it was a great success.

That is, until I got a call one day that the entire company was shutting down because of a failure.

I called some friends who were still there, and the story went something like this: After the company put in the new lifecycle system, no real controls were placed directly on the software it installed, because the team believed that external systems would take care of security controls. So when a junior admin made a mistake directly in the system, supposedly while fixing another problem, it ended up dropping a “rebuild” command on every server in the entire company.
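The missing control is what operators now call a blast-radius limit: destructive automation should refuse, or demand explicit sign-off, when its target set is abnormally large. Here is a minimal sketch, with invented names (rebuild, MAX_REBUILD_FRACTION), of the guardrail that would have stopped this; it is not how the original system worked, which was the problem.

```python
# Sketch of the guardrail that was missing: cap the "blast radius" of any
# destructive command. Names and the 5% threshold are invented.

MAX_REBUILD_FRACTION = 0.05   # never rebuild >5% of the fleet in one change

class BlastRadiusExceeded(Exception):
    pass

def rebuild(targets, fleet_size, approved_by=None):
    fraction = len(targets) / fleet_size
    if fraction > MAX_REBUILD_FRACTION and approved_by is None:
        # A junior admin's one-line mistake stops here instead of
        # flattening every server in the company.
        raise BlastRadiusExceeded(
            f"refusing to rebuild {fraction:.0%} of the fleet without sign-off"
        )
    for server in targets:
        print(f"rebuilding {server} ...")

fleet = [f"srv-{i:03d}" for i in range(200)]
try:
    rebuild(fleet, fleet_size=len(fleet))      # the meltdown scenario
except BlastRadiusExceeded as e:
    print("blocked:", e)
rebuild(fleet[:5], fleet_size=len(fleet))      # small change proceeds
```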

Cue the sound of a tornado… or maybe the sound of the funnel inside a toilet bowl.

Apparently, once the admins realized what was happening, they got up from their desks en masse and ran into the data center to rip cables out of the build servers before they could send more commands to rebuild all the servers.

Unfortunately, this is not the first story told about how automation, scripting and tool sets have turned against the network or users.

In part 2 of this topic, we’ll take a look at how these problems continue to persist. A number of public cloud companies have suffered similar meltdowns, and I expect more will happen over time. This doesn’t mean that these companies are bad, or that cloud-based architectures are the wrong direction to go. But auditing and intelligence must be brought to bear on all parts of the process, not just on the technology itself.

This blog was originally posted in TechTarget’s IoT Agenda.
