
What’s The Problem With Cloud Security? There’s Too Much Of It…

Here’s the biggest challenge I see in Cloud deployment when the topic of security inevitably comes up in conversation:

There’s too much of it.

Huh?

More specifically, much like my points regarding networking in highly-virtualized multi-tenant environments — it’s everywhere — we’ve got the same problem with security.  Security is shot-gunned across the cloud landscape in a haphazard fashion…and the buck (pun intended) most definitely does not stop here.

The reality is that if you’re using IaaS, the lines of demarcation for the responsibility surrounding security may at first seem blurred but are in fact extremely well-delineated, and that’s the problem.  I’ve seen quite a few validated design documents outlining how to deploy “secure multi-tenant virtualized environments.”  One of them is 800 pages long.

Check out the diagram below.

I quickly mocked up an IaaS stack wherein you have the Cloud provider supplying, operating, managing and securing the underlying cloud hardware and software layers whilst the applications and information (contained within VM boundaries) are maintained by the consumer of these services.  The list of controls isn’t complete, but it gives you a rough idea of what gets focused on. Do you see some interesting overlaps?  How about gaps?
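To make that split concrete, here’s a rough sketch in Python of the stack from the diagram; the layer names and control lists are illustrative placeholders (not any particular provider’s offering), but they show where the overlaps start to appear:

```python
# Rough, illustrative model of the IaaS stack described above: the provider
# owns and secures the lower layers, the consumer owns what runs in the VMs.
# Layer and control names here are placeholders, not an authoritative list.
from collections import Counter

IAAS_STACK = [
    {"layer": "facilities / hardware",      "owner": "provider",
     "controls": ["physical access", "FW", "VPN", "IPS", "LB"]},
    {"layer": "network / storage",          "owner": "provider",
     "controls": ["VLAN separation", "FW", "VPN", "IPS", "LB"]},
    {"layer": "hypervisor / management",    "owner": "provider",
     "controls": ["tenant isolation", "virtual switching", "API authentication"]},
    {"layer": "guest OS (inside the VM)",   "owner": "consumer",
     "controls": ["FW", "patching", "hardening", "host IDS"]},
    {"layer": "application / information",  "owner": "consumer",
     "controls": ["authn/authz", "encryption", "logging"]},
]

# Naive answer to the "overlaps and gaps" question: count how many layers
# deploy the same class of control, entirely independently of one another.
counts = Counter(c for layer in IAAS_STACK for c in layer["controls"])
overlaps = sorted(c for c, n in counts.items() if n > 1)
print("controls duplicated across layers:", overlaps)  # e.g. FW, IPS, LB, VPN
```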

This is the issue: each one of those layers has security controls in it.  There is lots of duplication, and lots of opportunity for things to be obscured or simply not accounted for at each layer.

Each of these layers and functional solutions is generally managed by different groups of people.  Each of them is generally managed by different methods and mechanisms.  In the case of IaaS, none of the controls at the hardware and software layers generally intercommunicate and given the abstraction provided as part of the service offering, all those security functions are made invisible to the things running in the VMs.

A practical issue is that the FW, VPN, IPS and LB functions at the hardware layer are completely separate from the FW, VPN, IPS and LB functions at the software layer, which are in turn completely separate from the FW, VPN, IPS and LB functions which might be built into the VMs (or virtual appliances) which sit atop them.

The security in the hardware is isolated from the security in the software which is isolated from the security in the workload.  You can, today, quite literally install the same capabilities up and down the stack without ever meeting in the middle.
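As a contrived sketch of that duplication (the rules and layer names below are hypothetical), the same “allow inbound HTTPS” intent can be expressed three times over with nothing in the stack reconciling the copies:

```python
# Contrived example: the same "allow inbound HTTPS" intent expressed three
# times -- once per layer -- with nothing tying the copies together or
# noticing when they drift apart.
hardware_fw_rule = {"layer": "hardware firewall",
                    "action": "permit", "proto": "tcp",
                    "dst_port": 443, "src": "0.0.0.0/0"}

provider_vfw_rule = {"layer": "provider virtual firewall (per tenant)",
                     "action": "permit", "proto": "tcp",
                     "dst_port": 443, "src": "0.0.0.0/0"}

guest_fw_rule = {"layer": "guest OS firewall inside the VM",
                 "action": "permit", "proto": "tcp",
                 "dst_port": 443, "src": "10.0.0.0/8"}  # quietly narrower than the layers beneath it

rules = [hardware_fw_rule, provider_vfw_rule, guest_fw_rule]

# Any consistency check has to be bolted on from outside; these layers do not
# intercommunicate, so drift like the one above simply goes unnoticed.
distinct = {(r["proto"], r["dst_port"], r["src"]) for r in rules}
if len(distinct) > 1:
    print("same intent, divergent rules across layers:", distinct)
```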

That’s not only wasteful in terms of resources but also incredibly prone to error in construction, management and implementation (since at the core it’s all software, and software has defects.)

Keep in mind that at the provider level the majority of these security controls are focused on protecting the infrastructure, NOT the stuff atop it.  By design, these systems are blind to the workloads running atop them (which are often encrypted both at rest and in transit.)  In many cases this is why a provider may not be able to detect an “attack” beyond data such as flows/traffic.
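To put the visibility point in concrete terms, here’s a hedged sketch of roughly what infrastructure-level telemetry contains versus what application-level detection would need; the field names are made up for illustration:

```python
# Illustrative flow record: roughly the granularity of data an infrastructure
# provider can see. Field names are invented for this example; the point is
# that nothing application-level (payload, queries, auth events) appears here,
# because the workload is opaque -- and often encrypted -- by design.
flow_record = {
    "src_ip": "203.0.113.10",
    "dst_ip": "198.51.100.7",
    "src_port": 52344,
    "dst_port": 443,
    "proto": "tcp",
    "bytes": 48212,
    "packets": 61,
}

# What you'd actually want in order to call something an application "attack":
needed_for_detection = ["HTTP method/URI", "SQL statements",
                        "authentication failures", "payload contents"]

print("visible to the provider:", sorted(flow_record))
print("invisible to the provider:", needed_for_detection)
```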

To make things more interesting, in some cases the layer responsible for all that abstraction is now the most significant layer involved in securing the system as a whole, as well as the fundamental security elements associated with the trust model we rely upon.

The hypervisor is an enormous liability; there’s no defense in depth when your primary security controls are provided by the (*ahem*) operating system provider.  How does one provide a compensating control when visibility/transparency [detective] are limited by design and there’s no easy way to provide preventative controls aside from the hooks granted by the very thing you’re trying to secure?

“Trust me” ain’t an appropriate answer.  We need better visibility and capabilities to robustly address this issue.  Unfortunately, there’s no standard for security ecosystem interoperability from a management, provisioning, orchestration or monitoring perspective even within a single stack layer.  There certainly isn’t across them.

In the case of Cloud providers who use commodity hardware with big, flat networks that carry little or no context for anything other than the flows/IP mappings running over them (thus the hardware layer is portrayed as truly commoditized), how much better or worse do you think the overall security posture of a consumer’s workload running atop this stack is?  No, that’s not a rhetorical question.  I think the case could be argued on either side of the line in the sand given the points I’ve made above.

This is the big suck.  Cloud security suffers from the exact same siloed security telemetry problems as legacy operational models…except now it does so at scale. This is why I’ve always made the case that one can’t “secure the Cloud” — at least not holistically — given this Lego-brick problem.  Everyone wants to make the claim that their technology will be the first to solve this problem.  It ain’t going to happen. Not with the IaaS (or even PaaS) model, it won’t.

However, there is a big opportunity to move forward here.  How?  I’ll give you a hint.  It exists toward the left side of the diagram.

/Hoff

  1. October 18th, 2010 at 01:45 | #1

    Thanks for this excellent overview of some of the problems of taking a layered, piecemeal approach to security in the cloud. To add a vendor perspective from the IaaS space, we don't try to secure the networks for our clients; it's not possible, as you rightly point out, especially given the diverse multi-tenant environment we actually have. We take a clear division-of-responsibility approach rather than offering security services that in reality aren't deliverable.

    Users can run private and/or public networking interfaces. Traffic is separated at the hypervisor level in all cases. Users can run private networks in the cloud; critically, disk traffic and private VLAN traffic run on physically isolated private networking equipment. Why? This ensures performance in the case of DoS issues on the public network.

    We give our users the power to secure their infrastructure through completely open software and networking layers. There is no one size fits all 'magical forcefield' approach. Securing cloud servers in our cloud is very much the same as securing dedicated hardware.

    That still leaves one key area within our remit and that's access.

    The convenience of the cloud through easy management, automation etc. opens up new security needs around access, and offering secure access is, in our opinion, the primary role of a cloud vendor in the IaaS space. Our users have full root access to their servers without any shared software resources, so direct access is secured by them in the usual way. That leaves API and web console access as very attractive targets. Our approach is a granular one, allowing a user to implement their own access security policies using the various measures we have made available.

    For the web console that means allowing users to have two factor authentication whatever their size (we use SMS second factor authentication as recently adopted by Google Apps for that reason). Crucially our web console communicates solely through our API (it essentially floats above it in a modular fashion) also giving one single point of entry to our cloud. The API itself can be customised including:

    – turning it off completely (for users without dynamic infrastructure or not using it)

    – API IP white list

    – authentication options (choose from http/https, https only, plain or digest mode, UUID/API key or Username/Password etc.)

    Our aim has been to use tools that all our customers can implement in a straightforward and transparent way. Along with other aspects of cloud computing (like billing and usability!) it appears that security is often unnecessarily complicated and obscured.

    Best wishes,

    Robert

    Robert Jenkins

    CloudSigma

  2. October 18th, 2010 at 05:43 | #2

    It's an interesting and valid argument. Thanks for the elegant point of view.

    As you hint towards the end – APIs at each level, if properly designed, can definitely improve the situation. Maybe even that can be game-changing.

    We've blogged about this here:
    http://www.porticor.com/2010/10/multi-layered-sec

  3. October 18th, 2010 at 10:49 | #3

    Will a cloud protocol provide the level of trust required? Perhaps this document (CloudTrust Protocol) helps? http://www.trustedcloudservices.com/images/storie
