
The Challenge of Virtualization Security: Organizational and Operational, NOT Technical

Taking the bull by the horns…

I’ve spoken many times over the last year on the impact virtualization brings to the security posture of organizations.  While there are certainly technology issues that we must overcome, we don’t have solutions today that can effectively deliver us from evil. 

Anyone looking for the silver bullet is encouraged to instead invest in silver buckshot.  No shocker there.

There are certainly technology and solution providers looking to help solve these problems, but honestly, they are constrained by the availability of, and visibility into, the VMMs/hypervisors of the virtualization platforms themselves.

Obviously announcements like VMware’s VMsafe will help turn that corner, but VMsafe requires re-tooling of ISV software and new versions of the virtualization platforms.  It’s a year+ away, and it addresses concerns only for a single virtualization platform provider (VMware), not others.

The real problem of security in a virtualized world is not technical, it is organizational and operational.

With the consolidation of applications, operating systems, storage, information, security and networking — all virtualized into a single platform rather than being discretely owned, managed and supported by (reasonably) operationally-mature teams — the biggest threat we face in virtualization is that we have lost not only visibility, but also the clearly-defined lines of demarcation that separation of duties gave us in the non-virtualized world.

Many companies have segmented off splinter cells of "virtualization admins" from the server teams, and these admins are often solely responsible for the virtualization platforms: the care, feeding, diapering and powdering not only of the operating systems and virtualization platforms themselves, but of the networking and security functionality as well.

No offense to my brethren in the trenches, but this is simply a case of experience and expertise.  Server admins are not experts in network or security architectures and operations, just as the latter cannot hope to be experts in the former’s domain.

We’re in an arms race now where virtualization brings brilliant flexibility, agility and cost savings to the enterprise, but ultimately further fractures the tenuous relationships between the server, network and security teams.

Now that the first-pass consolidation pilots virtualizing non-critical infrastructure assets have been held up as beacons of ROI in our datacenters, security and networking teams are exercising their veto powers as virtualization efforts creep towards critical production applications, databases and transactional systems.

Quite simply, expressing risk, security posture, compliance, troubleshooting, and measuring SLAs and dependencies is much more difficult in a virtualized world than in the discretely segregated physical one, and when taken to the mat on these issues, virtual server admins simply cannot address them competently in the language of the security and risk teams.

This is going to make for some unneeded friction in what was supposed to be a frictionless effort.  If you thought the security teams were thought of as speed bumps before, you’re not going to like what happens soon when they try to delay/halt a business-driven effort to reduce costs, speed time-to-market, increase availability and enable agility.

I’ll summarize my prior recommendations as to how to approach this conundrum in a follow-on post, but the time is now to get these teams together and craft the end-play strategies and desired end-states for enterprise architecture in a virtualized world before we end up right back where we started 15+ years ago…on the hamster wheel of pain!

/Hoff

  1. March 25th, 2008 at 18:07 | #1

    What about the "little big shop" that is big enough to have to be concerned about virtualization security but not big enough to have a separation of duties? Do we just bite the bullet and do the best we can and have a glass of Johnny Walker to help us sleep at night?

  2. March 25th, 2008 at 18:23 | #2

    Yup. 😉
    Seriously, I didn't do a good job of qualifying out the SME/SMB who usually has that person of many hats versus the larger organizations to which I was referring. Methinks I shall go back and add that caveat…I made it on the phone to someone earlier, but forgot to include it here. Thanks.
    The point of my post is that folks are constantly hoping (not a good strategy) that we're going to produce technology solutions to problems that are largely human and operational in nature.
    Many of the things one should do (that I've written about before) to secure virtualized environments are quite simply carry-forward tasks that folks have not done in the non-virtualized world, like network segmentation, procedural and operational practices definition and standards, workflow documentation, monitoring, incident response, etc…
    A higher-level classic example…
    Getting asked how to assess risk in a virtual environment only tells me one thing — you're not assessing risk in your non-virtualized environment, which means you're not being efficient or effective, and possibly not investing appropriately (a rough sketch at the end of this comment makes the point concrete).
    This is as true in both the SMB/SME world as it is in the F500…
    Make any sense?
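    As a rough illustration of that last point, here's a minimal sketch of a single risk-scoring routine applied to physical and virtual assets alike; the asset fields, weights and names are invented for the example and don't reflect any particular tool's model.

```python
# Hypothetical illustration: the same risk-scoring routine applied to physical
# and virtual assets alike -- "virtual" is just another attribute of the asset,
# not a reason for a separate assessment process. Fields and weights are invented.

from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    criticality: int             # 1 (low) .. 5 (high) business impact
    exposure: int                # 1 (isolated) .. 5 (internet-facing)
    is_virtual: bool = False
    shared_host_tenants: int = 1  # other workloads on the same hypervisor

def risk_score(asset: Asset) -> int:
    """Crude qualitative score: impact x likelihood proxy."""
    score = asset.criticality * asset.exposure
    # Virtualization doesn't get its own methodology; it simply adds one more
    # factor (shared-platform dependency) to the existing model.
    if asset.is_virtual and asset.shared_host_tenants > 1:
        score += asset.criticality  # bump for co-tenancy on one hypervisor
    return score

assets = [
    Asset("billing-db", criticality=5, exposure=2),
    Asset("billing-db-vm", criticality=5, exposure=2, is_virtual=True, shared_host_tenants=8),
    Asset("intranet-wiki-vm", criticality=2, exposure=1, is_virtual=True, shared_host_tenants=8),
]

for a in sorted(assets, key=risk_score, reverse=True):
    print(f"{a.name:20s} score={risk_score(a)}")
```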

  3. March 25th, 2008 at 19:16 | #3

    I think a lot of computer consultants and solution providers miss the boat big-time when they get too caught up in the technological aspects of the services they provide. The truth is, in order to be REALLY successful at providing the best solutions for your clients, if you are a responsible consultant, you need to really put your “geek” hat away and think about the business side of things and how technology can fit into the spaces of what is going to work best from a BUSINESS perspective for your clients. Really, your clients aren’t going to care about the latest and greatest bits of technology (or at least most of them won’t). What they are going to care about, especially when it comes to security is that you can answer “Absolutely and completely” to the question, “Is my important information completely and totally protected?” And, from a big-picture perspective, they are going to want to know that you are thinking about increasing their productivity and improving their efficiency and bottom line.

  4. March 26th, 2008 at 19:33 | #4

    Chris-
    Great points about the organizational challenges inherent in data center virtualization. I do think there will be technical challenges, especially for deep packet-centric IPS partitioning (sensors/agents deployed at checkpoints in a now-fluid mesh of VMs sprawled across multiple hosts, inspecting and alerting/blocking based on full traffic pattern matching). I think the problem will have to be solved at layer 7 with full protocol context and exception-based correction. Otherwise the noise, false alarms and latency (multiple agents inspecting and reporting, etc.) would erode the business case on multiple fronts: from noise management and tuning to inflexibility.
    We're certainly seeing an array of new solutions appearing to tackle the virtsec problem but I think performance, accuracy and overhead issues will be driven more and more by core architecture choices. Deep packet could come to mean deep trouble.
    Greg
    Blue Lane

  5. March 26th, 2008 at 19:37 | #5

    Chasing the Dragon

    Today started well enough. We had two people at a Microsoft Server 2008/SQL 2008/Some Developer Crap 2008 launch event and it was setting up for a nice quiet day with a short staff. Then about 9 o'clock the wheels fell off. I've posted a…

  6. Jammer
    March 26th, 2008 at 23:28 | #6

    Sorry for the long post. I agree that there may be some organizational/operational challenges; however, I feel that the real challenge is simply the architectural makeup of virtualization with regard to implementing controls that meet regulatory requirements. This is a completely different paradigm than the traditional network segmentation model. The tools and processes we have used for years in these areas to meet compliance requirements and to increase our security posture through defense-in-depth have not been developed for the virtualization architecture. Unlike the traditional model, where we could rely on our own resources by implementing third-party tools (e.g. firewalls, IDS, etc.), with virtualization we have to depend on the vendor, as it is now all contained within the system (e.g. hardware, software (hypervisor), etc.). And today, developing security controls that can be managed and monitored hasn’t been the top priority for the vendors.
    Another thing I think is interesting is the claim of lower cost of ownership. I don’t believe the savings are as significant as people might think; I feel that virtualization simply shifts the hardware and environmental cost to other places. It doesn’t really reduce OSes, licenses, etc., as you still have to pay for these; and it increases the cost of management, as the organization’s IT department has to be extremely disciplined in developing and following operational processes. This includes flawless change management, configuration management, release management and patch management, and as we all know these are expensive processes. If a company hasn’t already implemented good processes like ITIL and isn’t at a Level 3+ maturity level based on CMMI, it will be very difficult to properly manage a virtualized environment. And implementing this will have a substantial cost to a company.
    I also feel that companies should look at the whole picture, not just the initial short-term hardware cost savings of virtualization. If a company receives compliance fines because it cannot demonstrate that it is meeting regulatory requirements, virtualization isn’t a good idea. One of the things we are doing at my company is focusing on system classes (e.g. critical versus non-critical) and then looking at how we can virtualize within those spaces, keeping them separate both physically and logically. For example, we will create a VM farm for critical assets only, whose chassis will contain only critical system blades, and do the same for non-critical assets; the two will never co-mingle (a rough sketch of this placement rule appears at the end of this comment).
    Because the market drives what vendors do, I expect we will see virtualization and security play together soon (18-36 months). As a matter of fact, for virtualization to reach its full capability the vendors will have to think about this in order to enable their customers to meet regulatory requirements. If not, organizations will continue to confine it to non-production and non-critical production activities, and the vendors will not make as much money as they could. A good example of this recognition by the vendors was recently seen in the VMsafe initiative from VMware.
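    As a rough sketch of that placement rule, here is a minimal, hypothetical audit check that flags any VM whose criticality class doesn't match the class of the host/blade it runs on; the names and data are made up purely for illustration.

```python
# Hypothetical sketch of the placement rule described above: workloads classified
# as "critical" may only run on hosts (blades/chassis) in the critical farm, and
# vice versa -- the two classes never co-mingle. Names and data are invented.

CRITICAL, NON_CRITICAL = "critical", "non-critical"

# Host (blade) -> farm class
hosts = {
    "chassis-A-blade-1": CRITICAL,
    "chassis-A-blade-2": CRITICAL,
    "chassis-B-blade-1": NON_CRITICAL,
}

# VM -> (host it runs on, VM's own classification)
placements = {
    "erp-db-vm": ("chassis-A-blade-1", CRITICAL),
    "test-web-vm": ("chassis-A-blade-2", NON_CRITICAL),  # violation: non-critical VM on a critical blade
}

def audit(placements, hosts):
    """Return (vm, host) pairs whose VM class does not match the host's farm class."""
    return [
        (vm, host)
        for vm, (host, vm_class) in placements.items()
        if hosts.get(host) != vm_class
    ]

for vm, host in audit(placements, hosts):
    print(f"co-mingling violation: {vm} is on {host} ({hosts[host]} farm)")
```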

  7. March 28th, 2008 at 14:22 | #7

    I was on a panel a few weeks ago and one of the vendors asked the netsec audience how many were virtualizing production. I was frankly surprised by how many raised their hands. Then the audience was asked if they knew how many servers they were protecting. Not one who raised their hand knew, and the room chuckled. I think the fluid nature is going to take some getting used to by netsec pros using a deep packet (signature/anomaly) pattern-match defense.
    I think movement and change will require much higher levels of accuracy and proactive protection than what many are used to.
    Greg
    Blue Lane
