Archive

Posts Tagged ‘Virtualization’

Cloud Providers and Security “Edge” Services – Where’s The Beef?

September 30th, 2009 16 comments

Previously I wrote a post titled “Oh Great Security Spirit In the Cloud: Have You Seen My WAF, IPS, IDS, Firewall…” in which I described the challenges for enterprises moving applications and services to the Cloud while trying to ensure parity in compensating controls, some of which are either not available or suffer from the “virtual appliance” conundrum (see the Four Horsemen presentation on issues surrounding virtual appliances.)

Yesterday I had a lively discussion with Lori MacVittie about the notion of what she described as “edge” service placement of network-based WebApp firewalls in Cloud deployments.  I was curious about the notion of where the “edge” is in Cloud, but assuming it’s at the provider’s connection to the Internet as was suggested by Lori, this brought up the arguments in the post above: how does one roll out compensating controls in Cloud?

The level of difficulty and need to integrate controls (or any “infrastructure” enhancement) definitely depends upon the Cloud delivery model (SaaS, PaaS, and IaaS) chosen and the business problem trying to be solved; SaaS offers the least amount of extensibility from the perspective of deploying controls (you don’t generally have any access to do so) whilst IaaS allows a lot of freedom at the guest level.  PaaS is somewhere in the middle.  None of the models are especially friendly to integrating network-based controls not otherwise supplied by the provider due to what should be pretty obvious reasons — the network is abstracted.

So here’s the rub: if MSSP’s/ISP’s/ASP’s-cum-Cloud operators want to woo mature enterprise customers to use their services, they are leaving money on the table and failing to fulfill customer needs by not rolling out complementary security capabilities which lessen the compliance and security burdens of their prospective customers.

While many provide commoditized solutions such as anti-spam and anti-virus capabilities, more complex (but profoundly important) security services such as DLP (data loss/leakage prevention,) WAF, Intrusion Detection and Prevention (IDP,) XML Security, Application Delivery Controllers, VPN’s, etc. should also be considered for roadmaps by these suppliers.

Think about it: if the chief concern in Cloud environments is security around multi-tenancy and isolation, giving customers more comfort besides “trust us” has to be a good thing.  If I knew where and by whom my data is being accessed or used, I would feel more comfortable.

Yes, it’s difficult to do properly and in many cases means the Cloud provider has to make a substantial investment in delivery platforms and management/support integration to get there.  This is why niche players who target specific verticals (especially those heavily regulated) will ultimately have the upper hand in some of these scenarios – it’s not socialist security where “good enough” is spread around evenly.  Services like these need to be configurable (SELF-SERVICE!) by the consumer.

An example? How about Google: where’s DLP integrated into the messaging/apps platforms?  Amazon AWS: where’s IDP integrated into the VMM for introspection?
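To be clear about what “configurable” and “self-service” might mean in practice, here’s a purely hypothetical sketch (the endpoint, service names and policy schema are all invented for illustration) of a tenant enabling and tuning a provider-side control without ever opening a ticket:

    # Hypothetical sketch only: what tenant self-service for provider-side
    # security controls might look like. Endpoint, service names and policy
    # schema are invented; no real provider exposes this API.
    import json
    import urllib.request

    API = "https://provider.example.com/v1"   # fictitious provider endpoint

    def enable_security_service(tenant, service, policy, token):
        """Attach and configure a provider-managed control (waf, dlp, idp)
        for this tenant's deployment -- self-service, no ticket."""
        req = urllib.request.Request(
            "%s/tenants/%s/security-services/%s" % (API, tenant, service),
            data=json.dumps(policy).encode("utf-8"),
            headers={"Authorization": "Bearer " + token,
                     "Content-Type": "application/json"},
            method="PUT")
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    # e.g., a tenant turning on a blocking WAF profile for its web tier:
    # enable_security_service("tenant-42", "waf",
    #                         {"mode": "blocking", "ruleset": "owasp-core",
    #                          "log_to": "tenant-siem"}, token="...")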

I’ve written a couple of other interesting posts about this as well.

My customers in the Fortune 500 complain constantly that the biggest providers they are being pressured to consider for Cloud services aren’t listening to these requests — or aren’t in a position to respond.

That’s bad for everyone.

So how about it? Are services like DLP, IDP, WAF integrated into your Cloud providers’ offerings something you’d like to see rather than having to add additional providers as brokers and add complexity and cost back into Cloud?

/Hoff

The Emotion of VMotion…

September 29th, 2009 8 comments
VMotion - Here's Where We Are Today

A lot has been said about the wonders of workload VM portability.

Within the construct of virtualization, and especially VMware, an awful lot of time is spent on VM Mobility, but as numerous polls and direct customer engagements have shown, the majority (50% and higher) do not use VMotion.  I talked about this in a post titled “The VM Mobility Myth”:

…the capability to provide for integrated networking and virtualization coupled with governance and autonomics simply isn’t mature at this point. Most people are simply replicating existing zoned/perimeterized non-virtualized network topologies in their consolidated virtualized environments and waiting for the platforms to catch up. We’re really still seeing the effects of what virtualization is doing to the classical core/distribution/access design methodology as it relates to how shackled much of this mobility is to critical components like DNS and IP addressing and layer 2 VLANs.  See Greg Ness and Lori MacVittie’s scribblings.

Furthermore, workload distribution (Ed: today) is simply impractical for anything other than monolithic stacks because the virtualization platforms, the applications and the networks aren’t at a point where, from a policy or intelligence perspective, they can easily and reliably self-orchestrate.

That last point about “monolithic stacks” described what I talked about in my last post, “Virtual Machines Are the Problem, Not the Solution,” in which I bemoaned the bloat associated with VM’s and the general purpose OS’s included within them, and the fact that VMs continue to hinder the notion of true workload portability within the construct of how one might programmatically architect a distributed application using an SOA approach of loosely coupled services.

Combine that VM bloat — which simply makes these “workloads” too large to practically move in real time — with the annoying laws of physics and the current constraints of virtualization driving a return to big, flat layer 2 network architectures — collapsing core/distribution/access designs and dissolving classical n-tier application architectures — and one might argue that the proposition of VMotion really is a move backward, not forward, as it relates to true agility.

That’s a little contentious, but in discussions with customers and in other social media venues it’s important to think about other designs and options; the fact is that the Metastructure (as it pertains to supporting protocols/services such as DNS which are needed to support this “infrastructure 2.0”) still isn’t where it needs to be in regards to mobility, and even with emerging solutions like long-distance VMotion between datacenters, we’re butting up against the laws of physics (and the costs of the associated bandwidth and infrastructure.)

While we do see advancements in network-driven policy stickiness with the development of elements such as distributed virtual switching, port profiles, software-based vSwitches and virtual appliances (most of which are good solutions in their own right,) this is a network-centric approach.  The policies really ought to be defined by the VM’s themselves (similar to SOA service contracts — see here) and enforced by the network, not the other way around.
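To make that inversion concrete, here’s a minimal sketch (all names and fields are invented for illustration) of a policy “contract” authored by and carried with the VM, which the network merely renders and enforces wherever the workload attaches:

    # Sketch: the security contract travels with the workload; the network
    # re-renders it at whatever port the VM currently occupies.
    from dataclasses import dataclass, field

    @dataclass
    class ServiceContract:
        vm_name: str
        exposed_ports: list                    # what the workload offers
        allowed_peers: list                    # who may consume it
        compliance_tags: list = field(default_factory=list)  # e.g. ["pci"]

    def render_enforcement(contract):
        """Translate the VM's own contract into enforcement rules at the
        current attachment point -- policy follows the workload."""
        rules = ["permit %s -> %s:%d" % (peer, contract.vm_name, port)
                 for peer in contract.allowed_peers
                 for port in contract.exposed_ports]
        rules.append("deny any -> %s" % contract.vm_name)  # default deny
        return rules

    web = ServiceContract("web01", exposed_ports=[443], allowed_peers=["lb-tier"])
    print(render_enforcement(web))
    # After a VMotion event the same contract is simply re-rendered at the
    # destination; the source of truth never lived in the old switch port.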

Further, what isn’t talked about much is something that @joe_shonk brought up, which is that the SAN volumes/storage from which most of these virtual machines boot, upon which their data is stored and in some cases against which they are archived, don’t move, many times for the same reasons.  In many cases we’re waiting on the maturation of converged networking and advances in networked storage to deliver solutions to some of these challenges.
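As a thought experiment, here’s a back-of-napkin sketch (field names invented) of the pre-flight checks that shackle mobility today, combining the layer 2 and storage constraints above:

    # Before live migration is even attempted, the destination host generally
    # must see the same layer 2 segment (so the VM keeps its IP/MAC) and the
    # same shared storage (so the boot volume doesn't have to move).
    def can_migrate(vm, dst_host):
        same_l2 = vm["vlan"] in dst_host["vlans"]                 # flat L2
        sees_storage = vm["datastore"] in dst_host["datastores"]  # SAN visibility
        return same_l2 and sees_storage

    vm = {"name": "app01", "vlan": 110, "datastore": "san-lun-7"}
    dst = {"vlans": [110, 120], "datastores": ["san-lun-7"]}
    print(can_migrate(vm, dst))  # True only inside one L2/storage domain --
                                 # precisely why designs go big and flat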

In the long term, the promise of mobility will be delivered by a split into four camps which have overlapping and potentially competitive approaches depending upon who is doing the design:

  1. The quasi-realtime chunking approach of VMotion via the virtualization platform [virtualization architect,]
  2. Integration distribution and “mobility” at the application/OS layer [application architect,] or
  3. The more traditional network-based load balancing of traffic to replicated/distributed images [network architect.]
  4. Moving or redirecting pointers to large pools of storage where all the images/data(bases) live [Ed. forgot to include this from above]

Depending upon the need and capability of your application(s), virtualization/Cloud platform, and network infrastructure, you’ll likely need a mash-up of all four.  This model really mimics the differences today in architectural approach between SaaS and IaaS models in Cloud and further suggests that folks need to take a more focused look at PaaS.

Don’t get me wrong, I think VMotion is fantastic and the options it can ultimately deliver intensely useful, but we’re hamstrung by what is really the requirement to forklift — network design, network architecture and the laws of physics.  In many cases we’re fascinated by VM Mobility, but a lot of that romanticization plays on emotion rather than utilization.

So what of it?  How do you use VM mobility today?  Do you?

/Hoff

Redux: Patching the Cloud

September 23rd, 2009 3 comments

Back in 2008 I wrote a piece titled “Patching the Cloud” in which I highlighted the issues associated with the black box ubiquity of Cloud and what that means to patching/upgrading processes:

Your application is sitting atop an operating system and underlying infrastructure that is managed by the cloud operator.  This “datacenter OS” may not be virtualized or could actually be sitting atop a hypervisor which is integrated into the operating system (Xen, Hyper-V, KVM) or perhaps reliant upon a third party solution such as VMware.  The notion of cloud implies shared infrastructure and hosting platforms, although it does not imply virtualization.

A patch affecting any one of the infrastructure elements could cause a ripple effect on your hosted applications.  Without understanding the underlying infrastructure dependencies in this model, how does one assess risk and determine what any patch might do up or down the stack?  How does an enterprise that has no insight into the “black box” model of the cloud operator set up a dev/test/staging environment that acceptably mimics the operating environment?

What happens when the underlying CloudOS gets patched (or needs to be) and blows your applications/VMs sky-high (in the PaaS/IaaS models?)

How does one negotiate the process for determining when and how a patch is deployed?  Where does the cloud operator draw the line?   If the cloud fabric is democratized across constituent enterprise customers, however isolated, how does a cloud provider ensure consistent distributed service?  If an application can be dynamically provisioned anywhere in the fabric, consistency of the platform is critical.

I followed this up with a practical example when Microsoft’s Azure services experienced a hiccup due to this very thing.  We also see wholesale changes that can be instantiated on a whim by Cloud providers which could alter service functionality and availability, such as this one from Google (Published Google Documents to appear in Google search) — have you thought this through?

So now as we witness ISP’s starting to build Cloud service offerings from common Cloud OS platforms and espouse the portability of workloads (*ahem* VM’s) from “internal” Clouds to Cloud Providers — and potentially multiple Cloud providers — what happens when the enterprise is at v3.1 of Cloud OS, ISP A is at version 2.1a and ISP B is at v2.9? Portability is a cruel mistress.
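To make that skew concrete, here’s a hypothetical illustration (the compatibility rule and version numbers are invented for this post):

    # A workload built against one Cloud OS release may simply not land on a
    # provider running an older one. Naive rule: same major version, and the
    # provider's minor version must be at least the workload's.
    def portable_to(workload_ver, provider_ver):
        wmaj, wmin = (int(x.rstrip("abc")) for x in workload_ver.split(".")[:2])
        pmaj, pmin = (int(x.rstrip("abc")) for x in provider_ver.split(".")[:2])
        return wmaj == pmaj and pmin >= wmin

    enterprise = "3.1"
    for isp, ver in [("ISP A", "2.1a"), ("ISP B", "2.9")]:
        print(isp, "OK" if portable_to(enterprise, ver) else "NOT portable")
    # Neither ISP is at 3.x: the "portable" workload isn't portable at all.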

Pair that little nugget with the fact that even “global” Cloud providers such as Amazon Web Services have not maintained parity in terms of functionality/services across their regions*. The US has long had features/functions that the European region has not.  Today, in fact, AWS announced bringing infrastructure capabilities to parity for things like elastic load balancing and auto-scale…

It’s important to understand what happens when we squeeze the balloon.

/Hoff

*corrected – I originally said “availability zones” which was in error as pointed out by Shlomo in the comments. Thanks!

NESSessary Question: Will Virtualization Undermine Network Equipment Vendors?

August 30th, 2009 1 comment

Greg Ness touched off an interesting discussion when he asked “Will Virtualization Undermine Network Equipment Vendors?”  It’s a great read summarizing how virtualization (and Cloud) are really beginning to accelerate how classical networking equipment vendors are re-evaluating their portfolios in order to come to terms with these disruptive innovations.

I’ve written so much about this over the last three years and my response is short and sweet:

Virtualization has actually long been an enabler for network equipment vendors — not server virtualization, mind you, but network virtualization.  The same goes in the security space. The disruption caused by server virtualization is only acting as an accelerant — pushing the limits of scale, redefining organizational and operational boundaries, and acting as a forcing function causing wholesale reconsideration of archetypal network (and security) topologies.

The compressed timeframe associated with the disruption caused by virtualization and its adoption in conjunction with the arrival of Cloud Computing may seem unnatural given the relatively short window associated with its arrival, but when one takes the longer-term view, it’s quite natural.  We’ve seen it before in vignettes across the evolution of computing, but the convergence of economics, culture, technology and consumerism have amplified its relevance.

To answer Greg’s question, Virtualization will only undermine those network equipment vendors who were not prepared for it in the first place.  Those that were building highly virtualized, context-enabled routing, switching and security products will embrace this swing in the hardware/software pendulum and develop hybrid solutions that span the physical and virtual manifestations of what the “network” has become.

As I mentioned in my blog titled “Quick Bit: Virtual & Cloud Networking – Where It ISN’T Going…”:

Specifically, as it comes to understanding how the network plays in virtual and Cloud architectures, it’s not where the network *is* in the increasingly complex virtualized, converged and unified computing architectures, it’s where networking *isn’t.*

Where ISN'T The Network?

Take a look at your network equipment vendors.  Where do they play in that stack above?  Compare and contrast that with what is going on with vendors like Citrix/Xen with the Open vSwitch, Vyatta, Arista with vEOS and Cisco with the Nexus 1000v*…interesting times for sure.

/Hoff

*Disclosure: I work for Cisco.

On Appirio’s Prediction: The Rise & Fall Of Private Clouds

August 18th, 2009 9 comments

I was invited to add my comments to Appirio’s corporate blog in response to my opinions of their 2009 prediction “Rise and Fall of the Private Cloud,” but as I mentioned in kind on Twitter, debating a corporate talking point on a company’s blog is like watching two monkeys trying to screw a football; it’s messy and nobody wins.

However, in light of the fact that I’ve been preaching about the realities of phased adoption of Cloud — with Private Cloud being a necessary step — I thought I’d add my $0.02.  Of course, I’m doing so while on vacation, sitting on an ancient lava flow with my feet in the ocean in Hawaii, so it’s likely to be tropical in nature.

Short and sweet, here’s Appirio’s stance on Private Cloud:

Here’s the rub: Private clouds are just an expensive data center with a fancy name. We predict that 2009 will represent the rise and fall of this over-hyped concept. Of course, virtualization, service-oriented architectures, and open standards are all great things for every company operating a data center to consider. But all this talk about “private clouds” is a distraction from the real news: the vast majority of companies shouldn’t need to worry about operating any sort of data center anymore, cloud-like or not.

It’s clear that we’re talking about very different sets of companies. If we’re referring to SME/SMB’s, then I think it’s fair to suggest the sentiment above is valid.

If we’re talking about a large, heavily-regulated enterprise (pick your industry/vertical) with sunk costs and the desire/need to leverage the investment they’ve made in the consolidation, virtualization and enterprise modernization of their global datacenter footprints and take it to the next level, leveraging capabilities like automation, elasticity, and chargeback, it’s poppycock.

Further, it’s pretty clear that the hybrid model of Cloud will ultimately win in this space with the adoption of BOTH Public and Private Clouds where and when appropriate.

The idea that somehow companies can use “private cloud” technology to offer their employees web services similar to Google, Amazon, or salesforce.com will lead to massive disappointment.

So now the definition of “Cloud” is limited to “web services” and is defined by “Google, Amazon, or Salesforce.com?”

I call this MyopiCloud.  If this is the only measure of Cloud success, I’d be massively disappointed, also.

Onto the salient points:

Here’s why:

  • Private clouds are sub-scale: There’s a reason why most innovative cloud computing providers have their roots in powering consumer web technology—that’s where the numbers are. Very few corporate data centers will see anything close to the type of volume seen by these vendors. And volume drives cost—the world has yet to see a truly “at scale” data center.

Interesting. If we hang the definition of “at scale” solely on Internet-based volume, I can see how this rings true.  However, large enterprises with LANs and WANs with multi-gigabit connectivity feeding server farms and client bases of internal constituents (not to mention extranet connections) need to be accounted for in that assessment, especially if we’re going to be honest about volume.  Limiting connectivity to only the Internet is unreasonable.

Certainly most enterprises are not autonomically elastic (neither are most Cloud providers today) but that’s why comparing apples to elephants is a bit silly, even with the benefits that virtualization is beginning to deliver in the compute, network and storage realms.

I know of an eCommerce provider who reports trafficking in (on average) 15 Gb/s of sustained HTTP traffic via its Internet feeds.  Want to guess what the internal traffic levels are inside what amounts to its Private Cloud at that level of ingress/egress?  Oh, did I just suggest that this “enterprise” is already running a “Private Cloud?”  Why yes, yes I did.  See James Watters’ interesting blog on something similar titled “Not So Fast Public Cloud: Big Players Still Run Privately.”

  • There’s no secret sauce: There’s no simple set of tricks that an operator of a data center can borrow from Amazon or Google. These companies make their living operating the world’s largest data centers. They are constantly optimizing how they operate based on real-time performance feedback from millions of transactions. (check out this presentation from Jeff Barr and Peter Coffee at the Architecture and Integration Summit). Can other operators of data centers learn something from this experience? Of course. But the rate of innovation will never be the same—private data centers will always be many, many steps behind the cloud.
  • Really? So technology such as Eucalyptus or VMware’s vCloud/Project Redwood doesn’t play here?  Certainly leveraging the operational models and technology underpinnings (regardless of volume) should allow an enterprise to scale massively, even if it’s not at the same levels, no?  The ability to scale to the needs of the business is important, even if you never do so at the scale of an AWS.  I don’t really understand this point.  My bandwidth is bigger than your bandwidth?

  • You can’t teach an old dog new tricks: What do you get when you move legacy applications as-is to a new and improved data center? Marginal improvements on your legacy applications. There’s only so much you can achieve without truly re-platforming your applications to a cloud infrastructure… you can’t teach an old dog new tricks. Now that’s not entirely fair…. You can certainly teach an old dog to be better behaved. But it’s still an old dog.
  • Woof! It’s really silly to suggest that the only thing an enterprise will do is simply move “legacy applications as-is to a new and improved data center” without any enterprise modernization, any optimization or the ability to more efficiently migrate to new and improved applications as the agility, flexibility and mobility issues are tackled.  Talk about pissing on fire hydrants!

  • On-premise does not equal secure: the biggest driver towards private clouds has been fear, uncertainty, and doubt about security. For many, it just feels more secure to have your data in a data center that you control. But is it? Unless your company spends more money and energy thinking about security than Amazon, Google, and Salesforce, the answer is probably “no.” (Read Craig Balding walk through “7 Technical Security Benefits of Cloud Computing”)
  • I’ve got news for you: just as on-premise does “…not equal secure,” neither does off-premise assure such.  I offer you this post as an example, with all its related posts for color.

    Please show me empirically that Amazon, Google or Salesforce spends “…more money and energy thinking about security” than, say, a Fortune 100 company.  Better yet, please show me how I can be, say, PCI compliant using AWS?  Oh, right… please see the aforementioned posts, especially the one that demonstrates how the most public security gaffes thus far in Cloud are related to the providers you cite in your example.

    May I suggest that being myopic, mixing metaphors broadly by combining the needs and business drivers of the SME/SMB and representing them as those of large enterprises, is intellectually dishonest.

    Let’s be real, Appirio is in the business of “Enabling enterprise adoption of on-demand for Salesforce.com and Google Enterprise” — two examples of externally hosted SaaS offerings that clearly aren’t aimed at enterprises who would otherwise be thinking about Private Cloud.

    Oops, the luau drums are sounding.

    Aloha.

    Cloud Computing [Security] Architectural Framework

    July 19th, 2009 3 comments

    For those of you who are not in the security space and may not have read the Cloud Security Alliance’s “Guidance for Critical Areas of Focus,” you may have missed the “Cloud Architectural Framework” section I wrote as a contribution.

    We are working on improving the entire guide, but I thought I would re-publish the Cloud Architectural Framework section and solicit comments here as well as “set it free” as a stand-alone reference document.

    Please keep in mind, I wrote this before many of the other papers such as NIST’s were officially published, so given the normal churn in the blogosphere and the general Cloud space, some of the terms and definitions may have settled down since.

    I hope it proves useful, even in its current form (I have many updates to make as part of the v2 Guidance document.)

    /Hoff


    Problem Statement

    Cloud Computing (“Cloud”) is a catch-all term that describes the evolutionary development of many existing technologies and approaches to computing which, at its most basic, separates application and information resources from the underlying infrastructure and the mechanisms used to deliver them, with the addition of elastic scale and the utility model of allocation.  Cloud computing enhances collaboration, agility, scale and availability, and provides the potential for cost reduction through optimized and efficient computing.

    More specifically, Cloud describes the use of a collection of distributed services, applications, information and infrastructure comprised of pools of compute, network, information and storage resources.  These components can be rapidly orchestrated, provisioned, implemented and decommissioned using an on-demand utility-like model of allocation and consumption.  Cloud services are most often, but not always, utilized in conjunction with and enabled by virtualization technologies to provide dynamic integration, provisioning, orchestration, mobility and scale.

    While the very definition of Cloud suggests the decoupling of resources from the physical affinity to and location of the infrastructure that delivers them, many descriptions of Cloud go to one extreme or another by either exaggerating or artificially limiting the many attributes of Cloud.  This is often purposely done in an attempt to inflate or marginalize its scope.  Some examples include the suggestions that for a service to be Cloud-based, that the Internet must be used as a transport, a web browser must be used as an access modality or that the resources are always shared in a multi-tenant environment outside of the “perimeter.”  What is missing in these definitions is context.

    From an architectural perspective given this abstracted evolution of technology, there is much confusion surrounding how Cloud is both similar and differs from existing models and how these similarities and differences might impact the organizational, operational and technological approaches to Cloud adoption as it relates to traditional network and information security practices.  There are those who say Cloud is a novel sea-change and technical revolution while others suggest it is a natural evolution and coalescence of technology, economy, and culture.  The truth is somewhere in between.

    There are many models available today which attempt to address Cloud from the perspective of academicians, architects, engineers, developers, managers and even consumers. We will focus on a model and methodology that is specifically tailored to the unique perspectives of IT network and security professionals.

    The keys to understanding how Cloud architecture impacts security architecture are a common and concise lexicon coupled with a consistent taxonomy of offerings by which Cloud services and architecture can be deconstructed, mapped to a model of compensating security and operational controls, risk assessment and management frameworks and in turn, compliance standards.

    Setting the Context: Cloud Computing Defined

    Understanding how Cloud Computing architecture impacts security architecture requires an understanding of Cloud’s principal characteristics, the manner in which cloud providers deliver and deploy services, how they are consumed, and ultimately how they need to be safeguarded.

    The scope of this area of focus is not to define the specific security benefits or challenges presented by Cloud Computing as these are covered in depth in the other 14 domains of concern:

    • Information lifecycle management
    • Governance and Enterprise Risk Management
    • Compliance & Audit
    • General Legal
    • eDiscovery
    • Encryption and Key Management
    • Identity and Access Management
    • Storage
    • Virtualization
    • Application Security
    • Portability & Interoperability
    • Data Center Operations Management
    • Incident Response, Notification, Remediation
    • “Traditional” Security impact (business continuity, disaster recovery, physical security)

    We will discuss the various approaches and derivative offerings of Cloud and how they impact security from an architectural perspective using an in-process model developed as a community effort associated with the Cloud Security Alliance.

    Principal Characteristics of Cloud Computing

    Cloud services are based upon five principal characteristics that demonstrate their relation to, and differences from, traditional computing approaches:

    1. Abstraction of Infrastructure
      The compute, network and storage infrastructure resources are abstracted from the application and information resources as a function of service delivery. Where and by what physical resource data is processed, transmitted and stored becomes largely opaque from the perspective of an application’s or service’s ability to deliver it.  Infrastructure resources are generally pooled in order to deliver service regardless of the tenancy model employed – shared or dedicated.  This abstraction is generally provided by means of high levels of virtualization at the chipset and operating system levels or enabled at the higher levels by heavily customized filesystems, operating systems or communication protocols.
    2. Resource Democratization
      The abstraction of infrastructure yields the notion of resource democratization – whether infrastructure, applications, or information – and provides the capability for pooled resources to be made available and accessible to anyone or anything authorized to utilize them using standardized methods for doing so.
    3. Services Oriented Architecture
      As the abstraction of infrastructure from application and information yields well-defined and loosely-coupled resource democratization, the notion of utilizing these components in whole or part, alone or with integration, provides a services oriented architecture where resources may be accessed and utilized in a standard way.  In this model, the focus is on the delivery of service and not the management of infrastructure.
    4. Elasticity/Dynamism
      The on-demand model of Cloud provisioning coupled with high levels of automation, virtualization, and ubiquitous, reliable and high-speed connectivity provides for the capability to rapidly expand or contract resource allocation to match service definitions and requirements using a self-service model that scales to as-needed capacity.  Since resources are pooled, better utilization and service levels can be achieved.
    5. Utility Model Of Consumption & Allocation
      The abstracted, democratized, service-oriented and elastic nature of Cloud combined with tight automation, orchestration, provisioning and self-service then allows for dynamic allocation of resources based on any number of governing input parameters.  Given the visibility at an atomic level, the consumption of resources can then be used to provide an “all-you-can-eat” but “pay-by-the-bite” metered utility-cost and usage model. This facilitates greater cost efficiencies and scale as well as manageable and predictive costs.
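    A minimal sketch of the “all-you-can-eat” but “pay-by-the-bite” model described in (5), with a purely illustrative rate card and meter names:

        # Illustrative only: meter each resource atomically, price per unit, sum.
        RATE_CARD = {"cpu_hours": 0.10, "gb_stored_month": 0.15, "gb_transferred": 0.17}

        def monthly_bill(usage):
            return sum(RATE_CARD[meter] * qty for meter, qty in usage.items())

        print(monthly_bill({"cpu_hours": 720, "gb_stored_month": 50,
                            "gb_transferred": 200}))
        # -> 113.5 : capacity is unconstrained, cost tracks actual consumption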

    Cloud Service Delivery Models

    Three archetypal models and the derivative combinations thereof generally describe cloud service delivery.  The three individual models are often referred to as the “SPI Model,” where “SPI” refers to Software, Platform and Infrastructure (as a service) respectively and are defined thusly[1]:

    1. Software as a Service (SaaS)
      The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure and accessible from various client devices through a thin client interface such as a Web browser (e.g., web-based email). The consumer does not manage or control the underlying cloud infrastructure, network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.
    2. Platform as a Service (PaaS)
      The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created applications using programming languages and tools supported by the provider (e.g., java, python, .Net). The consumer does not manage or control the underlying cloud infrastructure, network, servers, operating systems, or storage, but the consumer has control over the deployed applications and possibly application hosting environment configurations.
    3. Infrastructure as a Service (IaaS)
      The capability provided to the consumer is to rent processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly select networking components (e.g., firewalls, load balancers).

    Understanding the relationship and dependencies between these models is critical.  IaaS is the foundation of all Cloud services with PaaS building upon IaaS, and SaaS – in turn – building upon PaaS.  We will cover this in more detail later in the document.

    The OpenCrowd Cloud Solutions Taxonomy shown in Figure 1 provides an excellent reference that demonstrates the swelling ranks of solutions available today in each of the models above.

    Narrowing the scope of specific capabilities and functionality within each of the *aaS offerings, or employing the functional coupling of services and capabilities across them, may yield derivative classifications.  For example, “Storage as a Service” is a specific sub-offering within the IaaS “family,” “Database as a Service” may be seen as a derivative of PaaS, etc.

    Each of these models yields significant trade-offs in the areas of integrated features, openness (extensibility) and security.  We will address these later in the document.

    Figure 1 - The OpenCrowd Cloud Taxonomy

    Cloud Service Deployment and Consumption Modalities

    Regardless of the delivery model utilized (SaaS, PaaS, IaaS,) there are four primary ways in which Cloud services are deployed and characterized:

    1. Private
      Private Clouds are provided by an organization or their designated service provider and offer a single-tenant (dedicated) operating environment with all the benefits and functionality of elasticity and the accountability/utility model of Cloud.  The physical infrastructure may be owned by and/or physically located in the organization’s datacenters (on-premise) or that of a designated service provider (off-premise) with an extension of management and security control planes controlled by the organization or designated service provider respectively.

      The consumers of the service are considered “trusted.”  Trusted consumers of service are those who are considered part of an organization’s legal/contractual umbrella, including employees, contractors, and business partners.  Untrusted consumers are those that may be authorized to consume some/all services but are not logical extensions of the organization.

    2. Public
      Public Clouds are provided by a designated service provider and may offer either a single-tenant (dedicated) or multi-tenant (shared) operating environment with all the benefits and functionality of elasticity and the  accountability/utility model of Cloud.
      The physical infrastructure is generally owned by and managed by the designated service provider and located within the provider’s datacenters (off-premise.)  Consumers of Public Cloud services are considered to be untrusted.
    3. Managed
      Managed Clouds are provided by a designated service provider and may offer either a single-tenant (dedicated) or multi-tenant (shared) operating environment with all the benefits and functionality of elasticity and the accountability/utility model of Cloud.  The physical infrastructure is owned by and/or physically located in the organization’s datacenters with an extension of management and security control planes controlled by the designated service provider.  Consumers of Managed Clouds may be trusted or untrusted.

    4. Hybrid
      Hybrid Clouds are a combination of public and private cloud offerings that allow for transitive information exchange and possibly application compatibility and portability across disparate Cloud service offerings and providers utilizing standard or proprietary methodologies regardless of ownership or location.  This model provides for an extension of management and security control planes.  Consumers of Hybrid Clouds may be trusted or untrusted.

    The difficulty in using a single label to describe an entire service/offering is that it actually attempts to describe the following elements:

    • Who manages it
    • Who owns it
    • Where it’s located
    • Who has access to it
    • How it’s accessed

    The notion of Public, Private, Managed and Hybrid when describing Cloud services really denotes the attribution of management and the availability of service to specific consumers of the service.

    It is important to note that often the characterizations that describe how Cloud services are deployed are often used interchangeably with the notion of where they are provided; as such, you may often see public and private clouds referred to as “external” or “internal” clouds.  This can be very confusing.

    The manner in which Cloud services are offered and ultimately consumed is then often described relative to the location of the asset/resource/service owner’s management or security “perimeter” which is usually defined by the presence of a “firewall.”

    While it is important to understand where within the context of an enforceable security boundary an asset lives, the problem with interchanging or substituting these definitions is that the notion of a well-demarcated perimeter separating the “outside” from the “inside” is an anachronistic concept.

    It is clear that the impact of the re-perimeterization and the erosion of trust boundaries we have seen in the enterprise is amplified and accelerated due to Cloud.  This is thanks to ubiquitous connectivity provided to devices, the amorphous nature of information interchange, the ineffectiveness of traditional static security controls which cannot deal with the dynamic nature of Cloud services and the mobility and velocity at which Cloud services operate.

    Thus the deployment and consumption modalities of Cloud should be thought of not only within the construct of “internal” or “external” as it relates to asset/resource/service physical location, but also by whom they are being consumed and who is responsible for their governance, security and compliance to policies and standards.

    This is not to suggest that the on- or off-premise location of an asset/resource/information does not affect the security and risk posture of an organization, because it does, but it also depends upon the following:

    • The types of application/information/services being managed
    • Who manages them and how
    • How controls are integrated
    • Regulatory issues

    Table 1 illustrates the summarization of these points:

    Table 1 - Cloud Computing Service Deployment

    As an example, one could classify a service as IaaS/Public/External (Amazon’s AWS/EC2 offering is a good example) as well as SaaS/Managed/Internal (an internally-hosted, but third party-managed custom SaaS stack using Eucalyptus, as an example.)
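    To make the classification mechanical, one could reduce it to a tuple of attributes rather than a single label; a brief sketch (values illustrative):

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class CloudService:
            delivery: str     # SaaS | PaaS | IaaS
            deployment: str   # Public | Private | Managed | Hybrid
            location: str     # Internal | External (on-/off-premise)
            managed_by: str   # who runs the management/security control planes

        aws_ec2     = CloudService("IaaS", "Public",  "External", "provider")
        hosted_saas = CloudService("SaaS", "Managed", "Internal", "third party")
        print(aws_ec2)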

    Thus when assessing the impact a particular Cloud service may have on one’s security posture and overall security architecture, it is necessary to classify the asset/resource/service within the context of not only its location but also its criticality and business impact as it relates to management and security.  This means that an appropriate level of risk assessment is performed prior to entrusting it to the vagaries of “The Cloud.”

    Which Cloud service deployment and consumption model is used depends upon the nature of the service and the requirements that govern it.  As we demonstrate later in this document, there are significant trade-offs in each of the models in terms of integrated features, extensibility, cost, administrative involvement and security.

    Figure 2 - Cloud Reference Model

    It is therefore important to be able to classify a Cloud service quickly and accurately and compare it to a reference model that is familiar to an IT networking or security professional.

    A reference model such as that shown in Figure 2 allows one to visualize the boundaries of *aaS definitions, how and where a particular Cloud service fits, and also how the discrete *aaS models align and interact with one another.  This is presented in an OSI-like layered structure with which security and network professionals should be familiar.

    Considering each of the *aaS models as a self-contained “solution stack” of integrated functionality with IaaS providing the foundation, it becomes clear that the other two models – PaaS and SaaS – in turn build upon it.

    Each of the abstract layers in the reference model represents elements which, when combined, comprise the service offerings in each class.

    IaaS includes the entire infrastructure resource stack from the facilities to the hardware platforms that reside in them. Further, IaaS incorporates the capability to abstract resources (or not) as well as deliver physical and logical connectivity to those resources.  Ultimately, IaaS provides a set of API’s which allows for management and other forms of interaction with the infrastructure by the consumer of the service.

    Amazon’s AWS Elastic Compute Cloud (EC2) is a good example of an IaaS offering.

    PaaS sits atop IaaS and adds an additional layer of integration with application development frameworks, middleware capabilities and functions such as database, messaging, and queuing that allows developers to build applications which are coupled to the platform and whose programming languages and tools are supported by the stack.  Google’s AppEngine is a good example of PaaS.

    SaaS in turn is built upon the underlying IaaS and PaaS stacks and provides a self-contained operating environment used to deliver the entire user experience including the content, how it is presented, the application(s) and management capabilities.

    SalesForce.com is a good example of SaaS.

    It should therefore be clear that there are significant trade-offs in each of the models in terms of features, openness (extensibility) and security.

    Figure 3 - Trade-offs Across *aaS Offerings

    Figure 3 demonstrates the interplay and trade-offs between the three *aaS models:

    • Generally, SaaS provides a large amount of integrated features built directly into the offering with the least amount of extensibility and a relatively high level of security.
    • PaaS generally offers less integrated features since it is designed to enable developers to build their own applications on top of the platform and is therefore more extensible than SaaS by nature, but due to this balance trades off on security features and capabilities.
    • IaaS provides few, if any, application-like features and provides for enormous extensibility, but generally fewer security capabilities and less functionality beyond protecting the infrastructure itself, since it expects operating systems, applications and content to be managed and secured by the consumer.

    The key takeaway from a security architecture perspective in comparing these models is that the lower down the stack the Cloud service provider stops, the more security capabilities and management the consumer is responsible for implementing and managing themselves.
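    As a rough sketch of that division of labor (the responsibility lists are illustrative, not exhaustive):

        # Illustrative only: which layers the *consumer* must secure per model.
        CONSUMER_RESPONSIBILITY = {
            "SaaS": ["data", "identity/entitlements"],
            "PaaS": ["data", "identity/entitlements", "application code"],
            "IaaS": ["data", "identity/entitlements", "application code",
                     "middleware", "guest OS", "host firewalling"],
        }

        for model, duties in CONSUMER_RESPONSIBILITY.items():
            print("%s -> consumer secures: %s" % (model, ", ".join(duties)))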

    This is critical because once a Cloud service can be classified and referenced against the model, mapping the security architecture, business and regulatory or other compliance requirements against it becomes a gap-analysis exercise to determine the general “security” posture of a service and how it relates to the assurance and protection requirements of an asset.

    Figure 4 below shows an example of how mapping a Cloud service can be compared to a catalog of compensating controls to determine which controls exist and which do not, as provided by either the consumer, the Cloud service provider or another third party.

    Figure 4 - Mapping the Cloud Model to the Security Model

    Once this gap analysis is complete, as governed by the requirements of any regulatory or other compliance mandates, it becomes much easier to determine what needs to be done in order to feed back into a risk assessment framework and to determine how the gaps, and ultimately the risk, should be addressed: accept, transfer, mitigate or ignore.
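    As a sketch of that exercise (control names invented), the gap list falls out of a simple set difference:

        # Compare mandated controls against what the provider and the consumer
        # actually supply; the remainder feeds the risk assessment framework.
        required = {"encryption-at-rest", "ids", "waf", "log-retention", "mfa"}
        provider_supplied = {"encryption-at-rest", "log-retention"}
        consumer_supplied = {"mfa"}

        gaps = required - (provider_supplied | consumer_supplied)
        print(sorted(gaps))  # ['ids', 'waf'] -> accept, transfer, mitigate or ignore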

    Conclusion

    Understanding how architecture, technology, process and human capital requirements change or remain the same when deploying Cloud Computing services is critical.   Without a clear understanding of the higher-level architectural implications of Cloud services, it is impossible to address more detailed issues in a rational way.

    The keys to understanding how Cloud architecture impacts security architecture are a common and concise lexicon coupled with a consistent taxonomy of offerings by which Cloud services and architecture can be deconstructed, mapped to a model of compensating security and operational controls, risk assessment and management frameworks and in turn, compliance standards.


    [1] Credit: Peter M. Mell, NIST

    Cloud Security: Waiting For Godot & His Silver Bullet

    July 15th, 2009 No comments

    It’s that time again.  I am compelled after witnessing certain behaviors to play anthropologist and softly whisper my observations in your ear.

    You may be familiar with Beckett’s “Waiting For Godot”*:

    Waiting for Godot follows two days in the lives of a pair of men who divert themselves while they wait expectantly and unsuccessfully for someone named Godot to arrive. They claim him as an acquaintance but in fact hardly know him, admitting that they would not recognise him were they to see him. To occupy themselves, they eat, sleep, converse, argue, sing, play games, exercise, swap hats, and contemplate suicide — anything “to hold the terrible silence at bay”

    Referencing my prior post about the state of Cloud security, I’m reminded of the fact that as a community of providers and consumers, we continue to wait for the security equivalent of Godot to arrive and solve all of our attendant Cloud security challenges with the offer of some mythical silver bullet.  We wait and wait for our security Godot as I mix metaphors and butcher Beckett’s opus to pass the time.

    Here’s a classic illustration of hoping our way to Cloud security, from a ComputerWeekly post titled “Cryptography breakthrough paves way to secure cloud services”:

    A research student who had a summer job at IBM, has cracked a cryptography problem that has baffled experts for over 30 years. The breakthrough may pave the way to secure cloud computing services.

    This sounds fantastic and much has been written about this “homomorphic encryption,” with many people espousing how encryption will “solve our Cloud security problems.”

    It’s a very interesting concept, but as to paving the “…path to secure cloud computing,” the reality is that it won’t.  At least not in isolation and not without some serious scale in ancillary support mechanisms including non-trivial issues like federated identity.

    Bruce Schneier wades in with his assessment:

    Unfortunately — you knew that was coming, right? — Gentry’s scheme is completely impractical…Despite this, IBM’s PR machine has been in overdrive about the discovery. Its press release makes it sound like this new homomorphic scheme is going to rewrite the business of computing: not just cloud computing, but “enabling filters to identify spam, even in encrypted email, or protection information contained in electronic medical records.” Maybe someday, but not in my lifetime.
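    For the curious, the flavor of “computing on encrypted data” can be shown with something far older and far weaker than Gentry’s fully homomorphic scheme: textbook RSA happens to be homomorphic for exactly one operation, multiplication.  A toy sketch (unpadded, insecure, illustrative only):

        # Toy RSA parameters (do not use anywhere near production).
        p, q = 61, 53
        n, e, d = p * q, 17, 2753

        enc = lambda m: pow(m, e, n)
        dec = lambda c: pow(c, d, n)

        a, b = 7, 6
        c_product = (enc(a) * enc(b)) % n    # multiply the *ciphertexts*...
        assert dec(c_product) == a * b       # ...and the *plaintext* product appears
        print(dec(c_product))                # 42, computed without decrypting a or b

    Gentry’s breakthrough is supporting both addition and multiplication (arbitrary computation) on ciphertext; the catch, as Schneier notes, is the impracticality.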

    The reality is that in addition to utilizing encryption — both existing and new approaches — we still continue to need all the usual suspects as they deal with the fact that fundamentally we’re still in a cycle of constructing insecure code in infostructure sitting atop infrastructure and metastructure that has its own fair share of growing up to do.

    As a security architect, engineer, or manager, you need to continue to invest in understanding how what you have does or does not work within the context of Cloud.

    You will likely find that you will need to continue to invest in threat and trust model analysis, risk management, vulnerability assessment, (id)entity management, and compensating controls implemented as hardware and software technology solutions such as firewalls, IDP, DLP, policy instantiation, etc., as well as a host of modified and new approaches to dealing with Cloud-specific implementation challenges, especially those based on virtualization and massive scale with multitenancy.

    These problems don’t solve themselves and we are simply not changing our behavior.  We wait and wait for our Godot.

    So here’s the obligatory grumpy statement of the obvious as providers of solutions and services churn to deliver more capable solutions to put in your hands:

    There is no silver bullet, just a lot of silver buckshot.  Use it all.  You’re going to have to deal with the cards we are dealt for the foreseeable future whilst we retool our approach in the longer term and technology equalizes some of our shortfalls.

    Godot is not coming and you likely wouldn’t recognize him if he showed up anyway because he’d be dressed in homomorphic invisible hotpants…

    Get on with it.  Treat security as the enterprise architecture element it is and use Cloud as the excuse to make things better by working on the things that matter.

    If Godot does happen to show up, tell him I want my weed whacker back that he borrowed last summer.

    /Hoff

    * Wikipedia

    These Apocalyptic Assessments Of Cloud Security Readiness Are Irrelevant…

    July 7th, 2009 4 comments

    There are voices raging in my head thanks to the battling angel and devil sitting on my shoulders.

    These voices echo the security-focused protagonist and antagonist perspectives of Cloud Computing adoption.

    The devil urges immediate adoption and suggests the Cloud is as (in)secure as it needs to be while still providing value.

    The angel maintains that the Cloud, whilst a delightful place to vacation, is ready only for those who are pure of heart and traffic in non-sensitive, non-mission-critical data.

    To whom do I (or we) listen?

    The answer is a measured and practical one that we know already because we’ve given it many times before.

    Is the Cloud secure?  That’s a silly question.  Is the Cloud “secure enough”?  That is really the question that should be asked, and of course, the answer is entirely contextual.

    My co-worker, James Urquhart, wrote a great post today in which he summarized quite a few healthy debates that are good for Cloud Computing as they encourage discourse and debate.  One of them relates to the difference between the consumer and small/midsize business versus enterprise as it relates to Cloud adoption.  This is quite relevant to my point about “context” above, so for the purpose of this discussion, I’m referring to the enterprise.

    To wit, enterprises aren’t as dumb as (we) vendors want them to be; they seize opportunity as it befits them and most times apply a reasonable amount of due care, diligence and evaluation before they leap headlong into course corrections offered by magical disruptive innovation.  There are market dynamics at play that are predictable and yet so many times we collectively gasp at the patterns of behaviors of technology adoption as though we’ve never witnessed them before.

    Cloud is no different in that regard.  See my post regarding this behavior titled “Most CIO’s Not Sold On Cloud?  Good, They Shouldn’t Be.”

    When I see commentary from CEO’s of leading security companies (such as RSA’s Art Coviello and even my own, John Chambers) that highlight security as an enormous concern in Cloud, I urge people to reflect back on any of the major shifts they’ve seen in IT the last 15 years and consider which shoulder-chirper they listened to and why.

    Suggesting that enterprises aren’t already conscious of what the Cloud means to their operational and security models is intellectually dishonest, really.

    We’ve all seen convenience, agility and economics stomp all over security before and here’s how this movie will play out:

    Cloud will reach a critical mass wherein the technology and operational models mature to a good-enough point, enough time passes without a significant number of material breaches or outages that disrupt confidence and then it becomes “accepted.”  Security, based upon how, where, why and when we invest will always play catch-up.  How much depends on how good a job we do to push the agenda.

    The reality is that broad warnings about security in the Cloud are fine; they help remind and reinforce the fact that we need to do better, and quite frankly, I think we are.  So we can either chirp about how bad things are, or we can do something about it.

    The good news is that even with the froth and churn, there is such a groundswell of activity by many groups (like the Cloud Security Alliance and the Jericho Forum) that we’re seeing an unprecedented attempt by both suppliers and consumers to do a better job of baking security in earlier.  The problem is that many people can’t see the forest for the trees; expectations of how quickly things can change are distorted and so everything appears to be an instant failure.  That’s sad.

    Of course Cloud Security is not perfect, but in measure, the dialog, the push for standards and the recognition of need (as well as many roadmapped solutions I’m privy to) show me that our overall response is a heck of a lot better than I’ve seen it in the past.

    We’re certainly still playing catch up on the technology front and working toward better ways of dealing with instantiating business process on top of it all, but I’m quite optimistic that we’re compressing the timeframe of defining and ultimately delivering improved security capabilities in Cloud computing.

    In the meantime, the compelling market forces of Cloud continue to steamroll onward, and so these apocalyptic assessments of Cloud Security readiness are irrelevant as we continue to see companies large and small utilize Cloud Computing to do things faster, better, more efficiently, cost effectively and with a level of security that meets their needs, which in the end is all that matters.

    At the same point — and this is where the devil will prove out in the details — execution is what matters.

    /Hoff

    Incomplete Thought: The Opportunity For Desktop As a Service – The Client Cloud?

    June 16th, 2009 8 comments

    Please excuse me if I’m late to the party bringing this up…

    We talk a lot about the utility of Public Clouds to enable the cost-effective and scalable implementation of “server” functionality; whether that’s the SaaS, PaaS, or IaaS model, the concept is pretty well understood: use someone else’s infrastructure to host your applications and information.

    As it relates to the desktop/client side of Cloud, we normally think about hosting the desktop/client capabilities as a function of Private Cloud capabilities: behind the firewall.  Whether we’re talking about terminal service-like capabilities or VDI, it seems to me people continue to think of this as a predominantly “internal” opportunity.

    I don’t think people are talking enough about the client side of Cloud and desktop as a service (DaaS) and what this means:

    If the physical access methods continue to get skinnier (smart phones, thin clients, client hypervisors, virtual machines, etc.) is there an opportunity for providers of Infrastructure as a Service to host desktop instances outside a corporate firewall?  If I can take advantage of all of the evolving technology in the space and couple it with the same sorts of policy advancements, networking and VPN functionality to connect me to IaaS server resources running in Private or Public Clouds, isn’t that a huge opportunity for further cost savings, distributed availability and potentially better security?

    There are companies such as Desktone looking to do this very thing as a way to offset the costs of VDI and further the efforts of consolidation.  It makes a lot of sense for lots of reasons, and despite my lack of hands-on exposure to the technology, it sure looks like we have the technical capability to do this today.  Dana Gardner wrote about this back in 2007, and the points were as valid then as they are now — albeit with a much bigger uptake in Cloud:

    The stars and planets finally appear to be aligning in a way that makes utility-oriented delivery of a full slate of client-side computing and resources an alternative worth serious consideration. As more organizations are set up as service bureaus — due to such  IT industry developments as ITIL and shared services — the advent of off the wire everything seems more likely in many more places

    I could totally see how Amazon could offer the same sorts of workstation utility as they do for server instances.

    Will DaaS be the next frontier of consolidation in the enterprise?

    If you’re considering hosting your service instances elsewhere, why not your desktops?  Citrix and VMware (as examples) seem to think you might…

    /Hoff

    Hey, Uh, Someone Just Powered Off Our Firewall Virtual Appliance…

    June 11th, 2009 11 comments

    I’ve covered this before in more complex terms, but I thought I’d reintroduce the topic due to a very relevant discussion I just had recently (*cough cough*)

    So here’s an interesting scenario in virtualized and/or Cloud environments that make use of virtual appliances to provide security capabilities*:

    Since virtual appliances (VAs) are just virtual machines (VMs), what happens when a SysAdmin spins down or moves one that happens to be your shiny new firewall protecting your production VMs behind it, accidentally or maliciously?  Brings new meaning to the phrase “failing closed.”

    Without getting into the vagaries of vendor specific mobility-enabled/enabling technologies, one of the issues with VMs/VAs is that there’s not really a good way of designating one as being “more important” or functionally differentiated such as “security” or “critical application” that would otherwise ensure a higher priority for service availability (read: don’t spin this down unless…) or provide a topological dependency hierarchy in virtualized network constructs.

    Unlike physical environments where system administrators (servers) are segregated from access to network and security appliances, this isn’t the case in virtual environments. In Cloud environments (especially public, multi-tenant) where we are often reliant only upon virtual security capabilities since we have no option for physical alternatives, this is an interesting corner case.
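    A sketch of the missing capability (the tagging scheme and guard are hypothetical; nothing like this was standard in the platforms of the day):

        # Pre-flight guard that refuses casual power operations against VMs
        # tagged as in-line security functions.
        PROTECTED_ROLES = {"firewall", "ids", "waf"}

        def power_off(vm, force=False):
            role = vm.get("role")
            if role in PROTECTED_ROLES and not force:
                raise PermissionError(
                    "%s is an in-line %s; powering it off exposes every "
                    "workload behind it" % (vm["name"], role))
            print("powering off %s" % vm["name"])

        try:
            power_off({"name": "vfw01", "role": "firewall"})
        except PermissionError as err:
            print(err)   # the guard fires -- by design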

    We’ve talked a lot about visibility, audit and policy management in virtual environments and this is a poignant example.

    /Hoff

    *Despite the silly notion that the Google dudes tried to suggest I equated virtualization with Cloud as one and the same, I don’t.