
Navigating PCI DSS (2.0) – Related to Virtualization/Cloud, May the Schwartz Be With You!

[Disclaimer: I’m not a QSA. I don’t even play one on the Internet. Those who are will generally react to posts like these with the stock “it depends” answer, to which I respond “you’re right, it does.  Not sure where that leaves us other than with a collective sigh, but…]

The Payment Card Industry (PCI) last week released version 2.0 of the Data Security Standard (DSS). [Legal agreement required] Strangely, this update from v1.2.1 introduces no major new requirements; instead, it clarifies existing language.

Accompanying this latest revision is also a guidance document titled “Navigating PCI DSS: Understanding the Intent of the Requirements, v2.0” [PDF]

One of the more interesting additions in the guidance is the direct call-out of virtualization which, although late to the game given the importance of this technology and its operational impact, is a welcome addition to this reader.  I should mention I’ve sat in on three of the virtualization SIG calls, which gives me an interesting perspective as I read through the document.  Let me just summarize by saying that “…you can’t please all the people, all of the time…” 😉

What I find profoundly interesting is that even though virtualization is such a prominent and enabling foundational technology in IaaS cloud offerings, the guidance is still written as though the multi-tenant issues surrounding cloud computing (as an extension of virtualization) don’t exist and shared infrastructure doesn’t complicate the picture.  Certainly there are “cloud” providers who deliver service to different customers without sharing infrastructure beyond their own (I think we call them SaaS providers), but think about the context of people wanting to use AWS to deliver services that are in scope for PCI.

Here’s what the navigation document has to say specific to virtualization and ultimately how that maps to IaaS cloud offerings.  We’re going to cover just the introductory paragraph in this post with the guidance elements and the actual DSS in a follow-on.  However, since many people are going to use this navigation document as their first blush, let’s see where that gets us:

PCI DSS requirements apply to all system components. In the context of PCI DSS, “system components” are defined as any network component, server or application that is included in, or connected to, the cardholder data environment. “System components” also include any virtualization components such as virtual machines, virtual switches/routers, virtual appliances, virtual applications/desktops, and hypervisors.

I would have liked to see specific mention of virtual storage here and, although they’re likely included by implication in the management system/sub-system mentions above and below, a direct mention of APIs. Thanks to heavy levels of automation, the operational shifts related to DevOps, and APIs becoming the interface of the integration and management planes, these are unexplored lands for many.

I’m also inclined to wonder about virtualization approaches that are not server-centric, such as those for physical networking devices, databases, etc.

If virtualization is implemented, all components within the virtual environment will need to be identified and considered in scope for the review, including the individual virtual hosts or devices, guest machines, applications, management interfaces, central management consoles, hypervisors, etc. All intra-host communications and data flows must be identified and documented, as well as those between the virtual component and other system components.

It can be quite interesting to imagine the scoping exercises (or, more specifically, de-scoping) associated with this requirement in a cloud environment.  Even if the virtualized platforms are operated solely on behalf of a single customer (read: no shared infrastructure, i.e., private cloud), this is still an onerous task, so I wonder how — if at all — this could be accomplished in a public IaaS offering given the lack of transparency we see in today’s cloud operators.  Much of what is being asked for relating to infrastructure and “data flows” between the “virtual component and other system components” represents the CSP’s secret sauce.
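As a thought experiment, the scoping exercise the guidance demands boils down to reachability over documented data flows: anything included in, or connected to, the cardholder data environment (CDE) is in scope. A minimal sketch (all component names here are invented for illustration; this is not a QSA-approved methodology):

```python
from collections import deque

# Hypothetical inventory: each virtual component and its documented data flows.
flows = {
    "web-vm":          ["app-vm"],
    "app-vm":          ["db-vm", "hypervisor-mgmt"],
    "db-vm":           [],            # stores cardholder data: part of the CDE
    "hypervisor-mgmt": ["vcenter"],
    "vcenter":         [],
    "build-vm":        [],            # isolated dev box, no flow to the CDE
}

def in_scope(cde_components, flows):
    """Walk documented data flows outward from the CDE; everything
    reachable is in scope for the assessment (default: connectivity
    cuts both ways, so build an undirected view of the flows)."""
    adj = {c: set() for c in flows}
    for src, dsts in flows.items():
        for dst in dsts:
            adj[src].add(dst)
            adj.setdefault(dst, set()).add(src)
    seen, queue = set(cde_components), deque(cde_components)
    while queue:
        for nbr in adj.get(queue.popleft(), ()):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen

print(sorted(in_scope({"db-vm"}, flows)))
# → ['app-vm', 'db-vm', 'hypervisor-mgmt', 'vcenter', 'web-vm']
```

Note what falls out of this toy model: only the truly disconnected component (build-vm) is de-scoped, while the management plane (hypervisor-mgmt, vcenter) is pulled in via the hypervisor; in a public IaaS offering, that management plane is exactly the part of the graph the customer cannot see.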

The implementation of a virtualized environment must meet the intent of all requirements, such that the virtualized systems can effectively be regarded as separate hardware. For example, there must be a clear segmentation of functions and segregation of networks with different security levels; segmentation should prevent the sharing of production and test/development environments; the virtual configuration must be secured such that vulnerabilities in one function cannot impact the security of other functions; and attached devices, such as USB/serial devices, should not be accessible by all virtual instances.

“…clear segmentation of functions and segregation of networks with different security levels” and “the virtual configuration must be secured such that vulnerabilities in one function cannot impact the security of other functions,” eh? I don’t see how anyone can expect to meet this requirement in any system underpinned with a virtualized infrastructure stack (hardware or software) whether it’s multi-tenant or not.  One vulnerability in the hypervisor makes this an impossibility.  Add in management, storage, networking. This basically comes down to trusting in the sanctity of the hypervisor.

Additionally, all virtual management interface protocols should be included in system documentation, and roles and permissions should be defined for managing virtual networks and virtual system components. Virtualization platforms must have the ability to enforce separation of duties and least privilege, to separate virtual network management from virtual server management.
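One way to picture the separation-of-duties and least-privilege requirement above is a default-deny role table that keeps virtual network management and virtual server management in disjoint roles. The role names and action strings below are invented for illustration, not drawn from any real platform:

```python
# Hypothetical role model: each role is granted only an explicit set of
# actions; anything not listed is denied (least privilege, default deny).
ROLE_PERMISSIONS = {
    "network-admin": {"vswitch.configure", "vlan.assign"},
    "server-admin":  {"vm.create", "vm.poweron", "vm.console"},
    "auditor":       {"logs.read"},
}

def authorize(role, action):
    """Allow an action only if the role explicitly grants it."""
    return action in ROLE_PERMISSIONS.get(role, set())

# Separation of duties: a server admin cannot touch the virtual
# network, and a network admin cannot manage virtual servers.
assert authorize("server-admin", "vm.create")
assert not authorize("server-admin", "vswitch.configure")
assert not authorize("network-admin", "vm.poweron")
```

The point of the sketch is the disjointness of the permission sets: a virtualization platform that can only grant one all-powerful administrator role cannot, by construction, meet this requirement.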

Special care is also needed when implementing authentication controls to ensure that users authenticate to the proper virtual system components, and distinguish between the guest VMs (virtual machines) and the hypervisor.

The rest is pretty standard stuff, but if you read the guidance sections (next post) it gets even more fun.  This is why the subjectivity, expertise and experience of the QSA is so critical to the quality of the audit when virtualization and cloud are involved.  For example, let’s take a sneak peek at section 2.2.1, as it is a bit juicy:

2.2.1 Implement only one primary function per server to prevent functions that require different security levels from co-existing on the same server. (For example, web servers, database servers, and DNS should be implemented on separate servers.)
Note: Where virtualization technologies are in use, implement only one primary function per virtual system component.
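Read literally for virtual environments, 2.2.1 is at least mechanically checkable against an inventory. A hypothetical audit pass over an invented VM-to-function mapping might look like:

```python
# Hypothetical inventory mapping each virtual system component to its
# primary functions; 2.2.1 demands exactly one primary function each.
vm_functions = {
    "vm-01": ["web"],
    "vm-02": ["database"],
    "vm-03": ["dns", "web"],   # violation: two primary functions co-exist
}

# Flag any virtual system component carrying more than one primary function.
violations = {vm: fns for vm, fns in vm_functions.items() if len(fns) > 1}
print(violations)  # → {'vm-03': ['dns', 'web']}
```

Of course, the hard part in a cloud context isn’t running this check; it’s getting an inventory honest and complete enough to run it against.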

I acknowledge that there are “cloud” providers who are PCI certified at the highest tier.  Many of them are SaaS providers.  Many simply use their own server stacks in co-located facilities but, due to their size and services, merely call themselves cloud providers — many aren’t even virtualized per the description above.  Further, there are also methods of limiting scope, and newer technologies such as tokenization that can assist in solving some of the information-centric issues with what would otherwise be in-scope data, but they offset many of the cost-driven efficiencies marketed by mass-market, low-cost cloud providers today.

Love to hear from an IaaS public cloud provider who is PCI certified (to the VM boundary) with customers that are in turn certified with in-scope applications and cardholder data or even a SaaS provider who sits atop an IaaS provider…

Just read this first before responding, please.


  1. November 1st, 2010 at 16:42 | #1

    Nicely said. The gap on multi-tenancy is definitely interesting.

    It has been handed to the Virtualization Special Interest Group (SIG) to figure out, and the SIG will most likely continue to defer to entities like the card brands, the QSAs, and security-industry pundits.

    Until there is a settled view among those revising the standard (anyone can submit) on how to control and test for multi-tenant risks, the council will not play their hand.

    Tokenization, which you also mention, is a good comparison. Although it is further ahead in regulatory maturity than virtualization (guides to tokenization and end-to-end encryption have been made available by card brands, pushed by Heartland and pulled by RBS WorldPay after high-profile breaches), the council itself has still left many questions unanswered.


  2. November 5th, 2010 at 08:16 | #2

    I'd like to add on to your lovely work by relating some of the obstacles that bring clusterf*cks like this about. I'll also note that the new guidance (as you point out well) boils down to "treat virtualization like a real network because we can't figure out how to do it differently."

    I had a conversation with a person from the PCI virt SIG (vendor side) who laid out the fly in the hash as far as both cloud and virtualization are concerned. This person freely admitted that at the hypervisor and machine state level, there were more or less universally applicable security standards that could be laid down, solving that conundrum.

    However, the Council simply isn't going to do that, because they're afraid of new tech that will make standards obsolete (Hellooooo Intel, still waiting on better TXT…).

    The bigger problem, as it was explained to me, is that the next levels up, virt networking and then on to systems management, were a royal mess (from the viewpoint of creating standards), and there wasn't a useful way to separate the layers, nor were vendors particularly interested in building a model that incorporated the idea of some utopian security universe (looking at you VMware… And you Cisco…) where everything makes sense and the world is safe for kittens, small children and credit card payments.

    This person also said that at this point, small merchants could be PCI certified on something like Amazon, but they'd basically have to build out their own encryption and monitoring tools without buying that stuff in — nuts to even think about for Gas Station Bob or Your Local Bank.

    However, the middlemen payment processors who provide the bulk of PCI-certified transactions are DYING to join the human race and use virt and cloud services to do DC consolidation, so this person expects those guys to basically hound their auditors into finding ways to certify virtualization.

    Next step, cloud, but it's a hell of a mess, because cloud providers are still breaking ground on automation and systems management (see above: technology too new to be codified into standards), and the PCI council is not ready to adopt a machine-centric security model over the process- and control-centric one.

    At this rate my beard will be longer than Methuselah's before PCI leaves the bailiwicks of specialized SaaS operators, no matter what any provider does to demonstrate/guarantee security. Anyway, hopefully I can write about this next week and flesh it out. This blog post is an excellent start.


  3. Mark D
    May 26th, 2011 at 07:54 | #3

    There is nothing terribly confusing about virtualization and PCI compliance. Having actually faced a QSA over a virtualized environment under 1.2, I can say the language on virtualization means exactly what it sounds like. In simple terms, the host is in scope along with the virtual guests. There would be no confusion if we replaced "virtualization" with "access server" or "terminal server".

    So, in practical terms, an ESXi (the free version of vSphere) environment is a no-go because it lacks support for several PCI requirements, such as specific event logging and security policies.
