Search Results

Keyword: ‘private cloud’

VMware vCloud Architecture ToolKit (vCAT) 2.0 – Get Some!

September 8th, 2011 No comments

Here’s a great resource for those of you trying to get your arms around VMware’s vCloud Architecture:

VMware vCloud Architecture ToolKit (vCAT) 2.0

This is a collection of really useful materials, clearly painting a picture of cloud rosiness, but valuable for understanding how to approach the various deployment models and options for VMware’s cloud stack.

Clouds, WAFs, Messaging Buses and API Security…

June 2nd, 2011 3 comments

In my Commode Computing talk, I highlighted the need for security automation through the enablement of APIs.  APIs are central to the architectural requirements for the provisioning, orchestration and (ultimately) security of cloud environments.

So there’s a “dark side” to the emergence of APIs as the prominent method by which one now interacts with these stacks — and it’s highlighted in VMware’s vCloud Director Hardening Guide wherein, beyond the de rigueur deployment of stateful packet-filtering firewalls, the deployment of a Web Application Firewall (WAF) is recommended.

Why?  According to VMware’s hardening guide:

In summary, a WAF is an extremely valuable security solution because Web applications are too sophisticated for an IDS or IPS to protect. The simple fact that each Web application is unique makes it too complex for a static pattern-matching solution. A WAF is a unique security component because it has the capability to understand what characters are allowed within the context of the many pieces and parts of a Web page.

I don’t disagree that web applications/web services are complex. Nor do I disagree that the web services and messaging buses that make up the majority of the exposed interfaces in vCloud Director require sophisticated protection.

This, however, brings up an interesting skill-set challenge.

How many infrastructure security folks do you know that are experts in protecting, monitoring and managing MBeans, JMS/JMX messaging and APIs?  More specifically, how many shops do you know that have WAFs deployed (in-line, actively protecting applications rather than passively monitoring) that did not in some way blow up every app they sit in front of, as well as add potentially significant performance degradation due to SSL/TLS termination?
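To make the skills gap concrete, here’s a minimal sketch (purely illustrative, not drawn from any vendor’s product or from vCloud Director itself) of the kind of per-application, context-aware allow-listing an in-line WAF has to perform. Every route, parameter name and character class below is a hypothetical example:

```python
# Minimal sketch (not production code): a WSGI middleware illustrating the kind of
# per-application, context-aware allow-listing an in-line WAF performs.
# The route table and parameter rules are hypothetical examples only.
import re
from urllib.parse import parse_qs
from wsgiref.simple_server import make_server

# Each exposed API route gets its own rules -- this is exactly the per-app context
# that generic IDS/IPS signatures lack.
ALLOWED_ROUTES = {
    ("GET", "/api/vapp"):  {"id": re.compile(r"^[a-f0-9-]{1,64}$")},
    ("POST", "/api/task"): {"name": re.compile(r"^[\w .-]{1,128}$")},
}

def waf_middleware(app):
    def guarded(environ, start_response):
        key = (environ["REQUEST_METHOD"], environ.get("PATH_INFO", ""))
        rules = ALLOWED_ROUTES.get(key)
        if rules is None:
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"blocked: route not in allow-list\n"]
        params = parse_qs(environ.get("QUERY_STRING", ""))
        for name, values in params.items():
            pattern = rules.get(name)
            if pattern is None or not all(pattern.match(v) for v in values):
                start_response("403 Forbidden", [("Content-Type", "text/plain")])
                return [b"blocked: parameter failed validation\n"]
        return app(environ, start_response)  # request looks sane; hand it to the app
    return guarded

def api_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok\n"]

if __name__ == "__main__":
    make_server("127.0.0.1", 8080, waf_middleware(api_app)).serve_forever()
```

Even in this toy form the maintenance problem is obvious: every time the application changes, the rules must change with it, which is exactly why in-line WAFs so often break the apps they front or end up switched into passive monitoring mode.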

Whether you’re deploying vCloud or some other cloud stack (I just happen to be reading these docs at the moment), the scope of exposed API interfaces ought to have you re-evaluating your teams’ skillsets when it comes to how you’re going to deal with the spotlight that’s now shining directly on the infrastructure stacks (hardware and software) underpinning their private and public clouds.

Many of us have had to get schooled on web services security with the emergence of SOA/Web Services application deployments.  But that was at the application layer.  Now it’s exposed at the “code as infrastructure” layer.

Think about it.

/Hoff

[Update 6/7/11 – Here are two really timely and interesting blog posts on the topic of RESTful APIs:

Mark’s post has some links to some videos on secure API deployment]

On the CA/Ponemon Security of Cloud Computing Providers Study…

April 29th, 2011 4 comments

CA recently sponsored the second in a series of Ponemon Institute cloud computing security surveys.

The first, released in May 2010, focused on responses from practitioners: “Security of Cloud Computing Users – A Study of Practitioners in the US & Europe.”

The latest, titled “Security of Cloud Computing Providers Study (pdf)” and released this week, examines “cloud computing providers’” perspectives on the same.  You can find the intro here.

While the study breaks down the survey in detail in Appendix 1, I would kill to see the respondent list so I could use the responses from some of these “cloud providers” to quickly build my short list of those not to engage with.

I suppose it’s not hard to believe that security is not a primary concern, but given all the hype surrounding claims of “cloud is more secure than the enterprise,” it’s rather shocking to think that this sort of behavior is reflective of cloud providers.

Let’s see why.

This survey qualifies those surveyed as such:

We surveyed 103 cloud service providers in the US and 24 in six European countries for a total of 127 separate providers. Respondents from cloud provider organizations say SaaS (55 percent) is the most frequently offered cloud service, followed by IaaS (34 percent) and PaaS (11 percent). Sixty-five percent of cloud providers in this study deploy their IT resources in the public cloud environment, 18 percent deploy in the private cloud and 18 percent are hybrid.

…and offers these most “salient” findings:

  • The majority of cloud computing providers surveyed do not believe their organization views the security of their cloud services as a competitive advantage. Further, they do not consider cloud computing security as one of their most important responsibilities and do not believe their products or services substantially protect and secure the confidential or sensitive information of their customers.
  • The majority of cloud providers believe it is their customer’s responsibility to secure the cloud and not their responsibility. They also say their systems and applications are not always evaluated for security threats prior to deployment to customers.
  • Buyer beware – on average providers of cloud computing technologies allocate 10 percent or less of their operational resources to security and most do not have confidence that customers’ security requirements are being met.
  • Cloud providers in our study say the primary reasons why customers purchase cloud resources are lower cost and faster deployment of applications. In contrast, improved security or compliance with regulations is viewed as an unlikely reason for choosing cloud services. The majority of cloud providers in our study admit they do not have dedicated security personnel to oversee the security of cloud applications, infrastructure or platforms.
  • Providers of private cloud resources appear to attach more importance and have a higher level of confidence in their organization’s ability to meet security objectives than providers of public and hybrid cloud solutions.
  • While security as a “true” service from the cloud is rarely offered to customers today, about one-third of the cloud providers in our study are considering such solutions as a new source of revenue sometime in the next two years.

Ultimately, CA summarized the findings as such:

“The focus on reduced cost and faster deployment may be sufficient for cloud providers now, but as organizations reach the point where increasingly sensitive data and applications are all that remains to migrate to the cloud, they will quickly reach an impasse,” said Mike Denning, general manager, Security, CA Technologies. “If the risk of breach outweighs potential cost savings and agility, we may reach a point of “cloud stall” where cloud adoption slows or stops until organizations believe cloud security is as good as or better than enterprise security.”

I have so much I’d like to say with respect to these summary findings and the details within the reports, but much of it I’ve already said.  I don’t think these findings are reflective of the larger cloud providers I interact with, which is another reason I would love to see who these “cloud providers” were beyond the breakdown of their service offerings that was presented.

In the meantime, I’d like to refer you to these posts I wrote for reflection on this very topic:

/Hoff

Cloud Computing, Open* and the Integrator’s Dilemma

April 11th, 2011 4 comments

My esteemed co-tormentor on Twitter, Christian Reilly (@reillyusa), did a fantastic job of describing the impact — or more specifically the potential lack thereof — of Facebook’s OpenCompute initiative on the typical enterprise as compared to the real target audience: service providers and the manufacturers of equipment for service providers:

…I genuinely believe that for traditional service providers who are making investments in new areas and offerings, XaaS providers, OEM hardware vendors and those with plans to become giants in the next generation(s) of Systems Integrators that the OpenCompute project is a huge step forward and will be a fantastic success story over the next few years as the community and its innovations grow and tangible benefits emerge.

I think Christian has it dead on; the trickle-down effect of large service providers leveraging innovation in facilities and compute construction to squeeze maximum cost efficiencies (based on power, density, cooling, and space) from their services will be good for everyone, but it’s important to recognize why and how:

…consider that today’s public cloud services and co-location providers are today’s equivalent of commercial airlines, providing their own multi-tenant services, price structures and user experiences on top of just a handful of airframe and engine manufacturers. OpenCompute has the potential to influence the efficiency and effectiveness of those manufacturers by helping to contribute towards ideas and potentially standards that can be adopted across the industry.

Specific to the adoption of OpenCompute as an enterprise blueprint, he widened the bifurcation between “private clouds operated by service providers as public clouds” (my words) and “private clouds operated by enterprises for their own use” with a telling analogy:

Bottom line ? To today’s large corporate IT shops; those who either have, or will continue to operate on-premise or co-located “private cloud” environments, the excitement levels around the OpenCompute project (if anyone actually hears of it at all) will be all-to-familiarly low as sadly, to wake some of these sleeping giants, it will take more than a poke from the very same company who’s website their IT teams are trying to prevent employees from accessing.

This is the point of departure for OpenCompute — it’s not framed for or designed for enterprise consumption.  In an altogether fascinating description of why Facebook open-sourced its data center design, the Huffington Post summarized it thus:

“[The Open Compute Project] really is a big deal because it constitutes a general shift in terms of what how we look at technology as a competitive advantage,” O’Grady said. “For Facebook, the evidence is piling up that they don’t consider technology to be a competitive advantage. They view their competitive advantage in the marketplace to be their users.”

Here we see the general abstraction of technology in line with Nick Carr’s premise that “IT Doesn’t Matter:”

“Sharing its blueprints may gain Facebook not only free manpower, but cheaper equipment. The company’s bet, analysts say, is that giving away intellectual property will help it foster an ecosystem of competing vendors that will drive down the cost of parts.”

With that in mind, I am just as worried about the fate of OpenStack and its enterprise versus service provider audience and how it’s being perceived as they watch the mad scramble by tech companies to add value and get a seat at the table.

Each of these well-intentioned projects is curated by public cloud operators and technology vendors and is indirectly positioned for the benefit of enterprises, but not really meant for their consumption — at least not if they don’t end up putting enterprises right back where they were trying to escape from in the first place with cloud computing: the integrator’s dilemma.

If you look at the underlying premise of OpenStack — its modularity, flexibility and open design — what you get is the ability to craft a solution finely tuned to an operating environment of your design. Integrate solutions into the stack as you see fit.  Contribute code.  Develop an ecosystem. Integrate, manage, maintain…

This is as much a problem as it is a solution for an enterprise.  This is why, in many cases, enterprises choose to use a single vendor with a single neck to choke in order to avoid having to act as an integrator in the first place, or simply look to outsource to one or more public cloud providers and sidestep the issue altogether.

Chances are, most are realistically caught up somewhere in the nether-regions in between the two.

I wish to make it clear that I am very much a proponent of Open*, but I realize that the lack of direct enterprise involvement in standards bodies and “open” initiatives, along with a reluctance to share information and experience for fear of losing competitive advantage, is what drives enterprises to Closed* in the first place: they want to lessen their developmental and integration burdens, and the Lego erector-set approach in many ways scares conservative, risk-averse CxOs away from projects like this.

I think this is where we’ll see more of these “clouds in a box” being paired with managed services to keep it all humming, regardless of where it lives. [See infrastructure solutions from Dell, VCE, HP, Oracle, etc., paired with “Cloud” distributions layered atop.]

Let’s hope we see enterprise success stories built on leveraging OpenCompute and OpenStack…it will be good for all of us.

/Hoff

Update: I just saw that my colleague, James Urquhart, wrote a blog titled “Cloud disrupts, creates channel opportunities” in which he details the channel’s role in this integration challenge. Spot on.

Incomplete Thought: Cloudbursting Your Bubble – I call Bullshit…

April 5th, 2011 6 comments

My wife is in the midst of an extended multi-phasic, multi-day delivery process of our fourth child.  In between bouts of her moaning, breathing and ultimately sleeping, I’m left to taunt people on Twitter and think about Cloud.

Reviewing my hot-button list of terms that are annoying me presently, I hit upon a favorite: Cloudbursting.

It occurred to me that this term brings up a visceral experience that makes me want to punch kittens.  It’s used by people to describe a use case in which workloads that run first and foremost within the walled gardens of an enterprise, magically burst forth into public cloud based upon a lack of capacity internally and a plethora of available capacity externally.

I call bullshit.

Now, allow me to qualify that statement.

Ben Kepes suggests that cloud bursting makes sense to an enterprise “Because you’ve spent a gazillion dollars on on-prem h/w that you want to continue using. BUT your workloads are spiky…” such that an enterprise would be focused on “…maximizing returns from on-prem. But sending excess capacity to the clouds.”  This implies the problem you’re trying to solve is one of scale.

I just don’t buy this.

Either you build a private cloud that gives you the scale you need in the first place, patterning your operational models after public cloud, and/or you design a solid plan to migrate, interconnect or extend platforms to the public [commodity] cloud, therefore not bursting but completely migrating capacity. You don’t stop somewhere in the middle with the same old crap internally and a bright, shiny public cloud you “burst things to” when you get your capacity knickers in a twist:

The investment and skillsets needed to reconcile two often diametrically-opposed operational models don’t maximize returns; they bifurcate and diminish efficiencies and blur cost-allocation models, making both internal IT and public cloud costs look grotesquely inaccurate.

Christian Reilly suggested I had no legs to stand on making these arguments:

Fair enough, but…

Short of workloads such as HPC, in which scale really is a major concern, if a large enterprise has gone through all of the issues relevant to running tier-1 applications in a public cloud, why on earth would you “burst” to the public cloud versus executing on a strategy that has those workloads run there in the first place?

Christian came up with another ringer during this exchange, one that I wholeheartedly agree with:

Ultimately, the reason I agree so strongly with this is because of the architectural, operational and compliance complexity associated with all the mechanics one needs in order to allow for interoperable, scalable, secure and manageable workloads between an internal enterprise’s operational domain (cloud or otherwise) and the public cloud.

The (in)ability to replicate capabilities exactly across these two models means that gaps arise — gaps that unfairly amplify the immaturity of cloud for certain things and its stellar capabilities in others.  It’s no wonder people get confused.  Things like security, networking, application intelligence…

NOTE: I make a wholesale differentiation between a strategy that includes a structured hybrid cloud approach of controlled workload placement/execution versus a purely overflow/capacity-driven movement of workloads.*

There are many workloads that simply won’t or can’t *natively* “cloudburst” to public cloud due to a lack of supporting packaging and infrastructure.**  Some of them are pretty important.  Some of them are operationally mission critical. What then?  Without an appropriate way of understanding the implications and complexity associated with this issue and getting there from here, we’re left with a strategy of “…leave those tier-1 apps to die on the vine while we greenfield migrate new apps to public cloud.”  That doesn’t sound particularly sexy, useful, efficient or cost-effective.

There are overlay solutions that can allow an enterprise to leverage utility computing services as an abstracted delivery platform and fluidly interconnect an enterprise with a public cloud, but one must understand what’s involved architecturally as part of that hybrid model, what the benefits are and where the penalties lie.  Public cloud needs the same rigor in its due diligence.
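To illustrate the point, here’s a rough sketch only, with hypothetical workload attributes and thresholds rather than anyone’s actual placement engine, of how many gates a workload has to clear before spare capacity even enters the conversation:

```python
# Minimal sketch, not a real scheduler: workload placement weighed on more than spare
# capacity. The attributes and thresholds are hypothetical, purely to illustrate why
# "burst when internal utilization > X%" is a naive policy.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    in_pci_scope: bool        # cardholder data in play?
    data_gravity_gb: int      # data that would have to move or be reachable remotely
    latency_sensitive: bool   # tight coupling to on-prem systems?
    packaged_for_cloud: bool  # image/format and dependencies actually portable?

def provider_attested_for_pci() -> bool:
    return False  # placeholder for real due diligence on the provider

def placement(w: Workload, internal_util: float) -> str:
    # Compliance, packaging, coupling and data gravity all veto a "burst" long before
    # capacity becomes the deciding factor.
    if w.in_pci_scope and not provider_attested_for_pci():
        return "stay internal (compliance scope)"
    if not w.packaged_for_cloud:
        return "stay internal (no portable packaging)"
    if w.latency_sensitive or w.data_gravity_gb > 500:
        return "stay internal (coupling / data gravity)"
    if internal_util < 0.85:
        return "stay internal (capacity is fine)"
    # If it clears every gate, the more honest question is why it isn't simply
    # running in the public cloud full-time.
    return "migrate outright rather than burst"

if __name__ == "__main__":
    w = Workload("tier-1 billing", True, 2000, True, False)
    print(placement(w, internal_util=0.92))
```

The punchline is in the last branch: if a workload clears every gate, the honest strategy is to run it in public cloud outright, not to burst it there when utilization spikes.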

[update] My colleague James Urquhart summarized well what I meant, describing DC-to-DC (cloud or otherwise) workload execution as a spectrum: VM-centric package mobility at one end, a truly distributed application architecture at the other.  If you’re somewhere in the middle, things like cloudbursting get really hairy.  As we move from IaaS -> PaaS, some of these issues may evaporate as the former (VMs) becomes less relevant and the latter (applications deployed directly to platforms) more prevalent.

Check out this zinger from JP Morgenthal which much better conveys what I meant:

If your Tier-1 workloads can run in a public cloud and satisfy all your requirements, THAT’S where they should run in the first place!  You maximize your investment internally by scaling down and ruthlessly squeezing efficiency out of what you have as quickly as possible — writing those investments off the books.

That’s the point, innit?

Cloud bursting — today — is simply a marketing term.

Thoughts?

/Hoff

* This may be the point that requires more clarity, especially in the wake of examples that were raised on Twitter after I posted this, such as using eBay and Netflix as examples of successful “cloudbursting” applications.  My response is that these fine companies hardly resemble a typical enterprise and that they’re also investing in a model that fundamentally changes the way they operate.

** I should point out that I am referring to the use case of heterogeneous cloud platforms such as VMware to AWS (either using an import/conversion function and/or via VPC) versus a more homogeneous platform interlock such as when the enterprise runs vSphere internally and looks to migrate VMs over to a VMware vCloud-powered cloud provider using something like vCloud Director Connector, for example.  Either way, the point still stands, if you can run a workload and satisfy your requirements outright on someone else’s stack, why do it on yours?

FYI: New NIST Cloud Computing Reference Architecture

March 31st, 2011 No comments

In case you weren’t aware, NIST has a wiki for collaboration on Cloud Computing.  You can find it here.

They also have a draft of their v1.0 Cloud Computing Reference Architecture, which builds upon the prior definitional work we’ve seen before and has pretty graphics.  You can find that here (dated 3/30/2011).

/Hoff

The Curious Case Of the MBO Cloud

December 23rd, 2010 1 comment

I was speaking to an enterprise account manager the other day regarding strategic engagements in Cloud Computing in very large enterprises.

He remarked on the non-surprising parallelism occurring as these companies build and execute on cloud strategies that involve both public and private cloud initiatives.

Many of them are still trying to leverage the value of virtualization and are thus often conservative about their path forward.  Many are blazing new trails.

We talked about the usual barriers to entry for even small PoCs: compliance, security, lack of expertise, budget, etc., and then he shook his head solemnly, stared at the ground and mumbled something about a new threat to the overall progress toward enterprise cloud adoption.

MBO Cloud.

We’ve all heard of public, private, virtual private, hybrid, and community clouds, right? But “MBO Cloud?”

I asked. He clarified:

Cloud computing is such a hot topic, especially with its promise of huge cost savings, agility, and reduced time-to-market for services and goods, that many large companies who might otherwise be unable or unwilling to pilot using a public cloud provider, and who also don’t want to risk much if any capital outlay for software and infrastructure to test private cloud, are taking an interesting turn.

They’re trying to replicate Amazon or Google but not for the right reasons or workloads. They just look at “cloud” as some generic low-cost infrastructure platform that requires some open source and a couple of consultants — or even a full-time team of “developers” assigned to make it tick.

They rush out, buy 10 off-the-shelf white-label commodity multi-CPU/multi-core servers, acquire a plain vanilla NAS or low-end SAN storage appliance, sprinkle on some Xen or KVM, load on some unremarkable random set of open source software packages to test with a tidy web front-end and call it “cloud.”  No provisioning, no orchestration, no self-service portals, no chargeback, no security, no real scale, no operational re-alignment, no core applications…

It costs them next to nothing and it delivers about the same because they’re not designing for business cases that are at all relevant, they’re simply trying to copy Amazon and point to a shiny new rack as “cloud.”

Why do they do this? To gain experience and expertise? To dabble cautiously in an emerging set of technological and operational models?  To offload critical workloads that scale up and down?

Nope.  They do this for two reasons:

  1. Now that they have proven they can “successfully” spin up a “cloud” — however useless it may be — that costs next to nothing, it gives them leverage to squeeze vendors on pricing when and if they are able to move beyond this pile of junk, and
  2. Management By Objectives (MBO) — or a fancy way of saying, “bonus.”  Many C-levels and their ops staff are compensated via bonus on hitting certain objectives. One of them (for all the reasons stated above) is “deliver on the strategy and execution for cloud computing.” This half-hearted effort sadly qualifies.

So here’s the problem…when these efforts flame out and don’t deliver, they will impact the success of cloud in general — everywhere from a private cloud vendor to even potentially public cloud offerings like AWS.  Why?  Because as we already know, *anything* that smells at all like failure gets reflexively blamed on cloud these days, and as these craptastic “cloud” PoCs fail to deliver — even on minimal cash outlay — it’s going to be hard to get a second chance given the bad taste left in the mouths of the business and management.

The opposite point could also be made in regard to public cloud services — that these truly “false cloud*” trials, based on poorly architected and executed bubble gum and baling wire, will drive companies to public cloud (however long that may take as compliance and security catch up).

It will be interesting to see which happens first.

Either way, beware the actual “false cloud” but realize that the motivation behind many of them isn’t the betterment of the business or evolution of IT, it’s the fattening of wallets.

/Hoff

* I’m leveraging “false cloud” here to truly illustrate a point; despite actually useful private cloud initiatives, this is a term unfortunately levied on all private cloud initiatives by certain public cloud providers.

Navigating PCI DSS (2.0) – Related to Virtualization/Cloud, May the Schwartz Be With You!

November 1st, 2010 3 comments

[Disclaimer: I’m not a QSA. I don’t even play one on the Internet. Those who are will generally react to posts like these with the stock “it depends” answer, to which I respond “you’re right, it does.”  Not sure where that leaves us other than with a collective sigh, but…]

The Payment Card Industry (PCI) last week released version 2.0 of the Data Security Standard (DSS). [Legal agreement required]  This is an update from v1.2.1 that, strangely, does not introduce any major new requirements but instead clarifies existing language.

Accompanying this latest revision is also a guidance document titled “Navigating PCI DSS: Understanding the Intent of the Requirements, v2.0” [PDF]

One of the more interesting additions in the guidance is the direct call-out of virtualization which, although late to the game given the importance of this technology and its operational impact, is a welcome addition for this reader.  I should mention I’ve sat in on three of the virtualization SIG calls, which gives me an interesting perspective as I read through the document.  Let me just summarize by saying that “…you can’t please all the people, all of the time…” 😉

What I find profoundly interesting is that since virtualization is such a prominent and enabling foundational technology in IaaS Cloud offerings, the guidance is still written as though the multi-tenant issues surrounding cloud computing (as an extension of virtualization) don’t exist and as though shared infrastructure doesn’t complicate the picture.  Certainly there are “cloud” providers who don’t use infrastructure shared with other providers beyond themselves in order to deliver service to different customers (I think we call them SaaS providers), but think about the context of people wanting to use AWS to deliver services that are in scope for PCI.

Here’s what the navigation document has to say specific to virtualization and ultimately how that maps to IaaS cloud offerings.  We’re going to cover just the introductory paragraph in this post with the guidance elements and the actual DSS in a follow-on.  However, since many people are going to use this navigation document as their first blush, let’s see where that gets us:

PCI DSS requirements apply to all system components. In the context of PCI DSS, “system components” are defined as any network component, server or application that is included in, or connected to, the cardholder data environment. “System components” also include any virtualization components such as virtual machines, virtual switches/routers, virtual appliances, virtual applications/desktops, and hypervisors.

I would have liked to see specific mention of virtual storage here and, although it’s likely included by implication in the management system/sub-system mentions above and below, a direct mention of APIs.  Thanks to heavy levels of automation, the operational shifts related to DevOps, and APIs becoming the interface of the integration and management planes, these are unexplored lands for many.

I’m also inclined to wonder about virtualization approaches that are not server-centric, such as those applied to networking devices, databases, etc.

If virtualization is implemented, all components within the virtual environment will need to be identified and considered in scope for the review, including the individual virtual hosts or devices, guest machines, applications, management interfaces, central management consoles, hypervisors, etc. All intra-host communications and data flows must be identified and documented, as well as those between the virtual component and other system components.

It can be quite interesting to imagine the scoping exercises (or de-scoping, more specifically) associated with this requirement in a cloud environment.  Even if the virtualized platforms are operated solely on behalf of a single customer (read: no shared infrastructure — private cloud), this is still an onerous task, so I wonder how — if at all — this could be accomplished in a public IaaS offering given the lack of transparency we see in today’s cloud operators.  Much of what is being asked for relating to infrastructure and “data flows” between the “virtual component and other system components” represents the CSP’s secret sauce.
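To see why de-scoping gets ugly fast, here’s a toy sketch of the identification exercise. The component names, data flows and the “connected to the CDE drags you into scope” rule are illustrative assumptions, not a QSA-approved methodology:

```python
# Minimal sketch of the scoping exercise, assuming a hypothetical inventory you can
# actually get your hands on -- which is precisely what most public IaaS providers
# won't give you. Component names and flows are illustrative only.
inventory = {
    "hypervisors": ["esx-01", "esx-02"],
    "guests":      {"web-vm": "esx-01", "db-vm": "esx-01", "dev-vm": "esx-02"},
    "mgmt":        ["vcenter", "vcloud-director"],
    "flows": [
        ("web-vm", "db-vm"),   # intra-host flow on esx-01
        ("vcenter", "esx-01"),
        ("vcenter", "esx-02"),
        ("dev-vm", "db-vm"),   # dev touching a CDE system: a scoping red flag
    ],
}

cde_seeds = {"db-vm"}  # systems known to store/process/transmit cardholder data

def in_scope(inv, seeds):
    """Anything connected to the cardholder data environment drags itself into scope."""
    scope = set(seeds)
    changed = True
    while changed:
        changed = False
        for a, b in inv["flows"]:
            if a in scope and b not in scope:
                scope.add(b); changed = True
            if b in scope and a not in scope:
                scope.add(a); changed = True
        # A guest in scope pulls in its hypervisor...
        for guest, host in inv["guests"].items():
            if guest in scope and host not in scope:
                scope.add(host); changed = True
        # ...and an in-scope hypervisor pulls in the management plane.
        if scope & set(inv["hypervisors"]):
            for m in inv["mgmt"]:
                if m not in scope:
                    scope.add(m); changed = True
    return scope

print(sorted(in_scope(inventory, cde_seeds)))
```

Run against even this tiny, fully transparent inventory, everything ends up in scope; now try it against a provider whose data flows are the secret sauce.  Back to the guidance: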

The implementation of a virtualized environment must meet the intent of all requirements, such that the virtualized systems can effectively be regarded as separate hardware. For example, there must be a clear segmentation of functions and segregation of networks with different security levels; segmentation should prevent the sharing of production and test/development environments; the virtual configuration must be secured such that vulnerabilities in one function cannot impact the security of other functions; and attached devices, such as USB/serial devices, should not be accessible by all virtual instances.

“…clear segmentation of functions and segregation of networks with different security levels” and “the virtual configuration must be secured such that vulnerabilities in one function cannot impact the security of other functions,” eh? I don’t see how anyone can expect to meet this requirement in any system underpinned with a virtualized infrastructure stack (hardware or software) whether it’s multi-tenant or not.  One vulnerability in the hypervisor makes this an impossibility.  Add in management, storage, networking. This basically comes down to trusting in the sanctity of the hypervisor.

Additionally, all virtual management interface protocols should be included in system documentation, and roles and permissions should be defined for managing virtual networks and virtual system components. Virtualization platforms must have the ability to enforce separation of duties and least privilege, to separate virtual network management from virtual server management.

Special care is also needed when implementing authentication controls to ensure that users authenticate to the proper virtual system components, and distinguish between the guest VMs (virtual machines) and the hypervisor.

The rest is pretty standard stuff, but if you read the guidance sections (next post) it gets even more fun.  This is why the subjectivity, expertise and experience of the QSA are so central to the quality of the audit when virtualization and cloud are involved.  For example, let’s take a sneak peek at section 2.2.1, as it is a bit juicy:

2.2.1 Implement only one primary function per server to prevent functions that require different security levels from co-existing on the same server. (For example, web servers, database servers, and DNS should be implemented on separate servers.) Note: Where virtualization technologies are in use, implement only one primary function per virtual system component.

I  acknowledge that there are “cloud” providers who are PCI certified at the highest tier.  Many of them are SaaS providers.  Many simply use their own server stacks in co-located facilities but due to their size and services merely call themselves cloud providers — many aren’t even virtualized per the description above.   Further, there are also methods of limiting scope and newer technologies such as tokenization that can assist in solving some of the information-centric issues with what would otherwise be in-scope data, but they offset many of the cost-driven efficiencies marketed by mass-market, low-cost cloud providers today.

Love to hear from an IaaS public cloud provider who is PCI certified (to the VM boundary) with customers that are in turn certified with in-scope applications and cardholder data or even a SaaS provider who sits atop an IaaS provider…

Just read this first before responding, please.

/Hoff

An Ode to Oracle’s Cloud…

September 22nd, 2010 2 comments

Try not to be
such an Oracle Hater,
Build a big, honkin’ Cloud:
Exalogic &  -data

It’s fluffy & shiny
it’s new & fantastic
It scales like butta,
cos it’s so damned elastic

It may cost you millions,
but it’ll save you a buck.
Is it really a cloud?
Larry don’t give a f*ck.

It’ll castigate partners
and alienate friends
it’s got unbreakable linux
and it also self-mends

The kernel is magic,
OVM’s where it’s at
Some might disagree,
especially RedHat

Infiniband, ten Gig,
many Sun-powered cores
It’s got enough cycles
for HPC chores

The issue some have,
is Larry’s evil plot
It’s really quite simple,
a mortgage and yacht.

It’s like “War of the Roses,”
‘tween Big O, Salesforce
Gets ugly in the  Valley
when partners divorce

Some CEO’s chide Larry,
and others, they scoff.
Some fire back with venom
like Marc Benioff

It’s a False Cloud, a Non-Cloud
“We’re like A-W-S”
this marketing plan
is one freakin’ mess

Just one file to patch it,
it’s IT on demand.
It’s a mainframe with JBoss,
can’t you understand!?

It’ll take all you can give it,
all you can muster,
It scales from one
to an eight headed cluster

At the end of the day,
from morning to nox
take comfort that Cloud
now comes in a box.

P.S. You may be interested in other little ditties I have scratched into existence, here.

Dear Verizon Business: I Have Some Questions About Your PCI-Compliant Cloud…

August 24th, 2010 5 comments

You’ll forgive my impertinence, but the last time I saw a similar claim of a PCI-compliant cloud offering, it turned out rather anti-climactically for RackSpace/Mosso, so I just want to make sure I understand what is really being said.  I may be mixing things up in asking my questions, so hopefully someone can shed some light.

This press release announces that:

“…Verizon’s On-Demand Cloud Computing Solution First to Achieve PCI Compliance” and the company’s cloud computing solution called Computing as a Service (CaaS) which is “…delivered from Verizon cloud centers in the U.S. and Europe, is the first cloud-based solution to successfully complete the Payment Card Industry Data Security Standard (PCI DSS) audit for storing, processing and transmitting credit card information.”

It’s unclear to me (at least) what’s considered in scope and what level/type of PCI certification we’re talking about here since it doesn’t appear that the underlying offering itself is merchant or transactional in nature, but rather Verizon is operating as a service provider that stores, processes, and transmits cardholder data on behalf of another entity.

Here’s what the article says about what Verizon undertook for DSS validation:

To become PCI DSS-validated, Verizon CaaS underwent a comprehensive third-party examination of its policies, procedures and technical systems, as well as an on-site assessment and systemwide vulnerability scan.

I’m interested in the underlying mechanicals of the CaaS offering.  Specifically, it would appear that the platform — compute, network, and storage — is virtualized.  What is unclear is whether the [physical] resources allocated to a customer are dedicated or shared (multi-tenant), regardless of virtualization.

According to this article in The Register (dated 2009), the infrastructure is composed like this:

The CaaS offering from Verizon takes x64 server from Hewlett-Packard and slaps VMware’s ESX Server hypervisor and Red Hat Enterprise Linux instances atop it, allowing customers to set up and manage virtualized RHEL partitions and their applications. Based on the customer portal screen shots, the CaaS service also supports Microsoft’s Windows Server 2003 operating system.

Some details emerge from the Verizon website that describes the environment more:

Every virtual farm comes securely bundled with a virtual load balancer, a virtual firewall, and defined network space. Once the farm is designed, built, and named – all in a matter of minutes through the CaaS Customer Management Portal – you can then choose whether you want to manage the servers in-house or have us manage them for you.

If the customer chooses to manage the “servers…in-house” (sic), are the customer’s network, staff and practices now in scope as part of Verizon’s CaaS validation? Where does the line start/stop?

I’m very interested in the virtual load balancer (Zeus ZXTM perhaps?) and the virtual firewall (vShield? Altor? Reflex? A VMsafe API-enabled virtual appliance?).  What about other controls (preventative or detective, such as IDS, IPS, AV, etc.)?

The reason for my interest is how, if these resources are indeed shared, they are partitioned/configured and kept isolated, especially in light of the fact that:

Customers have the flexibility to connect to their CaaS environment through our global IP backbone or by leveraging the Verizon Private IP network (our Layer 3 MPLS VPN) for secure communication with mission critical and back office systems.

It’s clear that Verizon has no dominion over what’s contained in the VM’s atop the hypervisor, but what about the network to which these virtualized compute resources are connected?

So for me, this all comes down to scope. I’m trying to figure out what is actually included in this certification, which components in the stack were audited, and how.  It’s not clear I’m going to get answers, but I thought I’d ask anyway.

Oh, by the way, transparency and auditability would be swell for an environment such as this. How about CloudAudit? We even have a PCI DSS CompliancePack 😉
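For the curious, here’s a rough sketch of what consuming that kind of transparency could look like from the customer side. It borrows the CloudAudit idea of publishing assertions under a well-known path, but the endpoint, namespace string and JSON shape below are illustrative assumptions, not the actual specification or CompliancePack format:

```python
# Minimal sketch of machine-readable provider transparency from the customer side.
# The provider URL, namespace and manifest layout are hypothetical placeholders,
# loosely inspired by CloudAudit's well-known-path idea rather than taken from it.
import json
from urllib.request import urlopen
from urllib.error import URLError

PROVIDER = "https://caas.example-provider.net"           # hypothetical provider endpoint
NAMESPACE = ".well-known/cloudaudit/org.example.pcidss"  # illustrative namespace

def fetch_assertions(base, namespace):
    try:
        with urlopen(f"{base}/{namespace}/manifest.json", timeout=10) as resp:
            return json.load(resp)
    except (URLError, ValueError):
        return None  # no machine-readable evidence published

manifest = fetch_assertions(PROVIDER, NAMESPACE)
if manifest is None:
    print("No published assertions -- back to asking questions in blog posts.")
else:
    for requirement, evidence in manifest.items():
        print(f"{requirement}: {evidence}")
```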

Question for my QSA peeps: are service providers required to also adhere to sections like 6.6 (WAF/binary analysis) for their offerings even if they are not acting as a merchant?

/Hoff
