From the X-Files – The Cloud in Context: Evolution from Gadgetry to Popular Culture

November 27th, 2009

[This post was originally authored November 27, 2009.  I pushed it back to the top of the stack because I think it’s an interesting re-visitation of the benefits and challenges we are experiencing in Cloud today]

Below is an article I wrote many months ago prior to all the Nicholas Carr “electricity ain’t Cloud” discussions.  The piece was one from a collection that was distributed to “…the Intelligence Community, the DoD, and Congress” with the purpose of giving a high-level overview of Cloud security issues.

The Cloud in Context: Evolution from Gadgetry to Popular Culture

It is very likely that should one develop any interest in Cloud Computing (“Cloud”) and wish to investigate its provenance, one would be pointed to Nicholas Carr’s treatise “The Big Switch” for enlightenment. Carr offers a metaphoric genealogy of Cloud Computing, mapped to, and illustrated by, a keenly patterned set of observations from one of the most important catalysts of a critical inflection point in modern history: the generation and distribution of electricity.

Carr offers an uncannily prescient perspective on the evolution and adaptation of computing by way of this electric metaphor, describing how sweeping technological, socioeconomic, and cultural advances were all directly linked to the disruptive innovation of a shift from dedicated power generation in individual factories to a metered utility of interconnected generators powering distribution grids feeding all. He predicts a similar shift from insular, centralized, private single-function computational gadgetry to globally-networked, distributed, public service-centric collaborative fabrics of information interchange.

This phenomenon will not occur overnight nor has any other paradigm shift in computing occurred overnight; bursts of disruptive innovation have a long tail of adoption. Cloud is not the product or invocation of some singular technology, but rather an operational model that describes how computing will mature.

There is no box with blinking lights that can be simply pointed to as “Cloud” and yet it is clearly more than just timesharing with Internet connectivity. As corporations seek to drive down cost and gain efficiency force-multipliers, they have ruthlessly focused on divining what is core to their businesses, and expensive IT cost-centers are squarely in the crosshairs for rigorous valuation.

To that end, Carr wrote another piece on this very topic titled “IT Doesn’t Matter” in which he argued that IT was no longer a strategic differentiator due to commoditization, standardization, and cost. This was followed by “The End of Corporate Computing,” wherein he suggested that companies will simply subscribe to IT services as an outsourced function. Based upon these themes, Cloud seems a natural evolutionary outcome motivated primarily by economics as companies pare down their IT investment — outsourcing what they can and optimizing what is left.

Enter Cloud Computing

The emergence of Cloud as cult-status popular culture also has its muse anchored firmly in the little machines nestled in the hands of those who might not realize that they’ve helped create the IT revolution at all: the consumer. The consumer’s shift to an always-on, many-to-many communication model with unbridled collaboration and unfettered access to resources, sharply contrasts with traditional IT — constrained, siloed, well-demarcated, communication-restricted, and infrastructure-heavy.

Regardless of any value judgment on the fate of Man, we are evolving to a society dedicated to convenience, where we are not tied to the machine, but rather the machine is tied to us, and always on. Your applications and data are always there, consumed according to business and pricing models that are based upon what you use while the magic serving it up remains transparent.

This is Cloud in a nutshell; the computing equivalent to classical Greek theater’s Deus Ex Machina.

For the purpose of this paper, it is important that I point out that I refer mainly to so-called “Public Cloud” offerings: those services provided by parties external to the data owner, who provide an “outsourced” service capability on behalf of the consumer.

This graceful surrender of control is the focus of my discussion. Private Clouds — those services that may operate on the corporation’s infrastructure or that of a provider but are managed under said corporation’s control and policies — offer a different set of benefits and challenges, but not to the degree of Public Cloud.

There are also hybrid and brokered models, but to keep focused, I shall not address these directly.

[Figure: Cloud Reference Model]

A service is generally considered to be “Cloud-based” should it exhibit the following characteristics:

  • Abstraction of infrastructure from the resources used to deliver it
  • Democratization of those resources as an elastic pool to be consumed
  • Services-oriented delivery rather than infrastructure- or application-centric offerings
  • Self-service, on-demand scale, elasticity and dynamism
  • A utility-like model of consumption and allocation

Cloud exacerbates the issues we have faced for years in the information security, assurance, and survivability spaces and introduces new challenges associated with extreme levels of abstraction, mobility, scale, dynamism and multi-tenancy. It is important that one contemplate the “big picture” of how Cloud impacts the IT landscape and how, given this “service-centric” view, certain things change whilst others remain firmly status quo.

Cloud also provides numerous challenges to the way in which computing and resources are organized, operated, governed and secured, given the focus on:

  • Automated and autonomic resource provisioning and orchestration
  • Massively interconnected and mashed-up data sources, conduits and results
  • Virtualized layers of software-driven, service-centric capability rather than infrastructure or application-specific monoliths
  • Dynamic infrastructure that is aware of and adjusts to the information, applications and services (workloads) running over it, supporting dynamism and abstraction in terms of scale, policy, agility, security and mobility

As a matter of correctness, virtualization as a form of abstraction may exist in many forms and at many layers, but it is not required for Cloud. Many Cloud services do utilize virtualization to achieve scale and I make liberal use of this assumptive case in this paper. As we grapple with the tradeoffs between convenience, collaboration, and control, we find that existing products, solutions and services are quickly being re-branded and adapted as “Cloud” to the confusion of all.

Modeling the Cloud

There exist numerous deployment models, service delivery models and use cases for Cloud, each offering a specific balance of integrated features, extensibility/openness and security, hinged on high levels of automation for workload distribution.

Three archetypal models generally describe cloud service delivery, popularly referred to as the “SPI Model,” where “SPI” refers to Software, Platform and Infrastructure (as a service) respectively.

[Figure: NIST Visual Cloud Model]

Using the National Institute of Standards and Technology’s (NIST) draft working definition as the basis for the model:

Software as a Service (SaaS)

The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure and accessible from various client devices through a thin client interface such as a Web browser (e.g., web-based email).

The consumer does not manage or control the underlying cloud infrastructure, network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.

Platform as a Service (PaaS)

The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created applications using programming languages and tools supported by the provider (e.g., Java, Python, .Net). The consumer does not manage or control the underlying cloud infrastructure, network, servers, operating systems, or storage, but the consumer has control over the deployed applications and possibly application hosting environment configurations.

Infrastructure as a Service (IaaS)

The capability provided to the consumer is to rent processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly select networking components (e.g., firewalls, load balancers).

Understanding the relationship and dependencies between these models is critical. IaaS is the foundation of all Cloud services with PaaS building upon IaaS, and SaaS — in turn — building upon PaaS. We will cover this in more detail later in the document.
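
To make the division of control concrete, here is a minimal sketch (my own illustration in Python, not part of any NIST or CSA definition) that encodes which layers the consumer controls under each model; the layer names and stack ordering are simplifications:

```python
# Illustrative only: a rough encoding of who controls which layers under each
# SPI delivery model, paraphrased from the definitions above. Layer names and
# stack ordering are simplifications, not part of any formal definition.
STACK = ["facility", "network", "servers", "hypervisor",
         "operating_system", "middleware", "application", "data"]

CONSUMER_CONTROLS = {
    "SaaS": {"data"},                           # limited app config at most
    "PaaS": {"application", "data"},            # deployed apps + hosting config
    "IaaS": {"operating_system", "middleware",  # OS, storage, deployed apps,
             "application", "data"},            # select network components
}

def provider_controls(model):
    """Everything the consumer does not control falls to the provider."""
    consumer = CONSUMER_CONTROLS[model]
    return [layer for layer in STACK if layer not in consumer]

for model in ("SaaS", "PaaS", "IaaS"):
    print(model, "-> provider controls:", provider_controls(model))
```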

Peanut Butter & Jelly — Making the Perfect Cloud Sandwich

[Figure: Infostructure/Metastructure/Infrastructure]

To understand how Cloud will affect security, visualize its functional structure in three layers:

  • The Infrastructure layer represents the traditional compute, network and storage hardware and operating systems familiar to us all. Virtualization platforms also exist at this layer and expose their capabilities northbound.
  • The Infostructure layer represents the programmatic components such as applications and service objects that produce, operate on or interact with the content, information and metadata.
  • Sitting in between Infrastructure and Infostructure is the Metastructure layer. This layer represents the underlying set of protocols and functions, such as DNS, BGP, and IP address management, which “glue” together and enable the applications and content at the Infostructure layer to be delivered, in turn, by the Infrastructure.

Certain areas of Cloud Computing’s technology underpinnings are making progress, but those things that will ultimately make Cloud the ubiquitous and transparent platform for our entire computing experience remain lacking.

Unsurprisingly, most of the deficient categories of technology or capabilities are those that must be delivered through standards and consensus-driven action; things that have always posed challenges, such as management, governance, provisioning, orchestration, automation, portability, interoperability and security. As security solutions specific to Cloud are generally slow in coming while fast-innovating attackers are unconstrained by rules of engagement, it will come as no surprise that we are constantly playing catch-up.

Cloud is a gradual adaptation rather than a wholesale re-tooling, and represents another cycle of investment which leaves us to consider where to invest our security dollars to most appropriately mitigate threat and vulnerability:

Typically, we react by cycling between investing in host-based controls > application controls > information controls > user controls > network controls and back again. While our security tools tend to be out of phase and less innovative than the tools of our opposition, virtualization and Cloud may act as much needed security forcing functions that get us beyond solving just the problem du jour.

The need to apply policy to workloads throughout their lifecycle, regardless of state, physical location, or infrastructure from which they are delivered, is paramount. Collapsing the atomic unit of the datacenter to the virtual machine boundary may allow for a simpler set of policy expressions that travel with the VM instance. At the same time, Cloud’s illusion of ubiquity and infinite scale means that we will not know where our data is stored, processed, or used.
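
As an illustration of what such a travel-with-the-workload policy expression might look like, here is a minimal Python sketch; the field names, regions, and enforcement function are hypothetical, not any vendor's actual format:

```python
from dataclasses import dataclass, field

@dataclass
class WorkloadPolicy:
    """Hypothetical policy manifest bound to a VM image and evaluated
    wherever the workload is instantiated, rather than at a perimeter."""
    classification: str                      # e.g. "confidential"
    allowed_regions: set = field(default_factory=set)
    encrypt_at_rest: bool = True

def may_instantiate(policy, region, storage_encrypted):
    """Enforce the policy at the point of placement."""
    if policy.allowed_regions and region not in policy.allowed_regions:
        return False
    if policy.encrypt_at_rest and not storage_encrypted:
        return False
    return True

# A confidential workload pinned to EU regions travels with its policy:
policy = WorkloadPolicy("confidential", allowed_regions={"eu-west", "eu-central"})
print(may_instantiate(policy, "us-east", storage_encrypted=True))   # False
print(may_instantiate(policy, "eu-west", storage_encrypted=True))   # True
```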

Combine mobility, encryption, and resources distributed across multiple providers with a lack of open standards and economic cost pressure, and even basic security capabilities seem daunting. Cloud simultaneously re-centralizes some resources while de-perimeterizing trust boundaries and distributing data. Understanding how the various layers map to traditional non-Cloud architecture is important, especially in relation to the Cloud deployment model used; there are significant trade-offs in integration, extensibility, cost, management, governance, compliance, and security.

Live by the Cloud, Die by the Cloud

Despite a tremendous amount of interest and momentum, Cloud is still very immature — pockets of innovation spread out across a long tail of mostly-proprietary infrastructure-, platform-, and software-as-a-service offerings that do not provide for much in the way of workload portability or interoperability.

Cloud is not limited to lower cost “server” functionality. With the fevered adoption of netbooks, virtualization, low-cost storage services, fixed/mobile convergence, the proliferation of “social networks,” and applications built to take advantage of all of this, Cloud becomes a single pane of glass for our combined computing experience. N.B., these powers are not inherently ours alone; the same upside can be used for wrongdoing.

In an attempt to whet the reader’s appetite regarding how Cloud dramatically impacts the risk modeling, assumptions, and security postures of today, I will provide a reasonably crisp set of examples, chosen to give pause:

Organizational and Operational Misalignment

The way in which most enterprise IT organizations are structured — in functional silos optimized to specialized, isolated functions — is diametrically opposed to the operational abstraction provided by Cloud.

The on-demand, elastic and self-service capabilities through simple interfaces and automated service layers abstract away core technology and support staff alike.

Few IT departments are prepared for what it means to apply controls, manage service levels, implement and manage security capabilities, and address compliance when the IT department is operationally irrelevant in that process. This leaves huge gaps in both identifying and managing risk, especially in outsourced models where ultimately the operational responsibility is “Cloudsourced” but the accountability is not.

The ability to apply specific security controls and measure compliance in mass-marketed Public Cloud services presents very real barriers to entry for enterprises that are heavily regulated, especially when balanced against the human capital (expertise) built up by organizations.

Monoculture of Operating Systems, Virtualized Components, and Platforms

The standardization (de facto and de jure) on common interfaces to Cloud resources can expose uniform attack vectors that could affect one consumer, or, in the case of multi-tenant Public Cloud offerings, affect many. This is especially true in IaaS offerings where common sets of abstraction layers (such as hypervisors), prototyped OS/application bundles (usually in the form of virtual machines) and common sets of management functions are used — and used to extend and connect the walled-garden internal assets of enterprises to the public or semi-public Cloud environments of service providers operating infrastructure by proxy.

While most attack vectors target applications and information at the Infostructure layer or abuse operating systems and assorted hardware at the Infrastructure layer, the Metastructure layer is beginning to show signs of stress also. Recent attacks against key Metastructure elements such as BGP and DNS indicate that aging protocols do not fare well.

Segmentation and Isolation In Multi-tenant environments

Multi-tenancy in the Cloud (whether in the Public or Private Cloud contexts) brings new challenges to the trust, privacy, resiliency and reliability model assertions made by providers.  Many of these assertions are based upon the premise that we should trust — without reliably provable models or evidence — that in the absence of relevant illustration, Cloud is simply trustworthy in all of these dimensions, despite its immaturity. Vendors claim “airtight” information, process, application, and service isolation, but short of service level agreements, there is little to demonstrate or substantiate the claims that software-enabled Cloud Computing — however skinny the codebase may be — is any more (or less) secure than what we have today, especially with commercialized and proprietary implementations.

In multi-tenant Cloud offerings, exposures can affect millions, and placing some types of information in the care of others without effective compensating controls may erode the ROI valuation offered by Cloud in the first place, especially as the trust boundaries used to demarcate and segregate the workloads of different consumers are provided by the same monoculture operating system and virtualization platforms described above.

Privacy of Data/Metadata, Exfiltration, and Leakage

With increased adoption of Cloud for sensitive workloads, we should expect innovative attacks against Cloud assets, providers, operators, and end users, especially around the outsourcing and storage of confidential information. The upshot is that solutions focused on encryption, at rest and in motion, will have the side effect of more and more tools (legitimate or otherwise) losing visibility into file systems, application/process execution, information and network traffic. Key management becomes remarkably relevant once again — on a massive scale.

Recent proof-of-concepts such as so-called side-channel attacks demonstrate how it is possible to determine where a specific virtual instance is likely to reside in a Public multi-tenant Cloud, allowing an attacker to instantiate their own instance and cause it to be located such that it is co-resident with the target. This would potentially allow for sniffing and exfiltration of confidential data — or worse, the exploitation of vulnerabilities that would violate the sanctity of isolated workloads within the Cloud itself.

Further, given workload mobility — where the OS, applications and information are contained in an instance represented by a single atomic unit such as a virtual machine image — the potential for accidental or malicious leakage and exfiltration is real. Legal intercept, monitoring, forensics, and attack detection/incident response are heavily impacted, especially at the volume and levels of traffic envisioned by large Cloud providers, creating blind spots in ways we can’t fathom today.

Inability to Deploy Compensating or Detective Controls

The architecture of Cloud services — as abstracted as they ought to be — means that in many cases the security of workloads up and down the stack is still dependent upon the underlying platform for enforcement. This is problematic inasmuch as the constructs representing compute, networking and storage resources — and security — are in many cases themselves virtualized.

Further, we are faced with stealthier and more evasive malware that is able to potentially evade detection while co-opting (or rootkitting) not only software and hypervisors, but also exploiting vulnerabilities in firmware and hardware such as CPU chipsets.

These sorts of attack vectors are extremely difficult to detect, let alone defend against. Referring back to the monoculture issue above, a so-called blue-pilled hypervisor, uniform across tens of thousands of compute nodes providing multi-tenant Cloud services, could be catastrophic. It is simply not yet feasible to provide parity in security capabilities between physical and Cloud environments; the maturity of solutions just isn’t there.

These are heady issues and should not be taken lightly when considering what workloads and services are candidates for various Cloud offerings.

What’s old is news again…

Perhaps it is worth adapting familiar attack taxonomies to Cloud.

Botnets that previously required massive malware-originated endpoint compromise in order to function can easily be activated in standardized fashion, in apparently legitimate form, and in large numbers by criminals who wish to harness the organized capabilities of bots without the effort. Simply use stolen credit cards to establish fake accounts with a provider’s Infrastructure-as-a-Service, and hundreds or thousands of distributed images could be activated in a very short timeframe.

Existing security threats such as DoS/DDoS attacks, SPAM and phishing will continue to be a prime set of tools for the criminal ecosystem, which will leverage the distributed and well-connected Cloud for these purposes as well as for targeted attacks against telecommuters using both corporate and consumerized versions of Cloud services.

Consider a new take on an old problem based on ecommerce: Click-fraud. I frame this new embodiment as something called EDoS — economic denial of sustainability. Distributed Denial of Service (DDoS) attacks are blunt force trauma. The goal, regardless of motive, is to overwhelm infrastructure and remove from service a networked target by employing a distributed number of attackers. An example of DDoS is where a traditional botnet is activated to swarm/overwhelm an Internet connected website using an asynchronous attack which makes the site unavailable due to an exhaustion of resources (compute, network, or storage.)

EDoS attacks, however, are death by a thousand cuts. EDoS can utilize distributed attack sources as well as single entities, but works by making legitimate web requests at volumes that may appear to be “normal” but that are made deliberately to drive compute, network, and storage utility billings in a cloud model abnormally high.

An example of EDoS as a variant of click fraud is where a botnet is activated to visit a website whose income results from ecommerce purchases. The requests are all legitimate but purchases are never made. The vendor has to pay the cloud provider for increased elastic use of resources but revenue is never recognized to offset them.

We have anti-DDoS capabilities today with tools that are quite mature. DDoS is generally easy to spot given huge increases in traffic. EDoS attacks are not necessarily easy to detect, because the instrumentation and business logic are not present in most applications, or stacks of applications and infrastructure, to provide the correlation between “requests” and “successful transactions.” In the example above, increased requests may look like normal activity. Many customers do not invest in this sort of integration, and Cloud providers generally will not have visibility into applications that they do not own.
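
To illustrate the kind of request-to-transaction correlation described above, here is a toy Python sketch; the thresholds, ratios, and function names are invented for the example, and real detection would need far richer business-logic instrumentation:

```python
def edos_suspicion(requests, transactions, baseline_conversion, tolerance=0.5):
    """Flag a billing window whose conversion rate collapses relative to
    baseline while request volume still looks plausibly 'normal'.

    baseline_conversion: historical transactions-per-request ratio.
    tolerance: fraction of baseline below which we raise suspicion.
    """
    if requests == 0:
        return False
    conversion = transactions / requests
    return conversion < baseline_conversion * tolerance

# A site that historically converts 2% of requests into purchases:
print(edos_suspicion(requests=100_000, transactions=1_900,
                     baseline_conversion=0.02))  # False: near baseline
print(edos_suspicion(requests=100_000, transactions=50,
                     baseline_conversion=0.02))  # True: traffic up, revenue flat
```

Note that producing the transaction figure at all requires exactly the application-level instrumentation that most customers never build and that providers cannot see.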

Ultimately the most serious Cloud concern is presented by way of the “stacked turtles” analogy: layer upon layer of complex interdependencies at the Infrastructure, Metastructure and Infostructure layers, predicated upon fragile trust models framed upon nothing more than politeness. Without re-engineering these models, strengthening the notion of (id)entity management and authentication, and implementing secure protocols, we run the risk of Cloud simply obfuscating the fragility of the supporting layers until something catastrophic occurs.

Combined with where and how our data is created, processed, accessed, stored, and backed up — and by whom and using whose infrastructure — Cloud yields significant concerns related to on-going security, privacy, compliance and resiliency.

Moving Forward – Critical Areas of Focus

The Cloud Security Alliance (http://www.cloudsecurityalliance.org) issued its “Guidance for Critical Areas of Focus” related to Cloud Computing Security and defined fifteen domains of concern:

  • Cloud Architecture
  • Information lifecycle management
  • Governance and Enterprise Risk Management
  • Compliance & Audit
  • General Legal
  • eDiscovery
  • Encryption and Key Management
  • Identity and Access Management
  • Storage
  • Virtualization
  • Application Security
  • Portability & Interoperability
  • Data Center Operations Management
  • Incident Response, Notification, Remediation
  • “Traditional” Security impact (business continuity, disaster recovery, physical security)

The sheer complexity of the interdependencies between the Infrastructure, Metastructure and Infostructure layers makes it almost impossible to recommend focusing on only a select subset of these items since all are relevant and important.

Nevertheless, a handful of these items (Cloud Architecture, Encryption and Key Management, Identity and Access Management, Storage, Virtualization, and Application Security) most deserve initial focus just to retain existing levels of security, resilience, and compliance while information and applications are moved from the walled gardens of the private enterprise into the care of others.

Attempting to retain existing levels of security will consume the majority of Cloud transition effort.  Until we see an expansion of available solutions to bridge the gaps between “traditional” IT and dynamic Infrastructure 2.0 capabilities, companies can only focus on the traditional security elements of sound design, encryption, identity, storage, virtualization and application security. Similarly, until a standardized set of methods allows well-defined interaction between the Infrastructure, Metastructure and Infostructure layers, companies will be at the mercy of industry for instrumenting, much less auditing, Cloud elements — yet, as was already stated, the very sameness of standardization creates shared risk.

As with any change of this magnitude, the potential of Cloud lies between its trade-offs. In security terms, this “big switch” surrenders visibility and control so as to gain agility and efficiency. The question is: how do we achieve a net positive result?

Well-established enterprise security teams that optimize their security spend on managing risk rather than purely threat should not be surprised by Cloud. To these organizations, adapting their security programs to the challenges and opportunities provided by Cloud is business as usual. For organizations unprepared for Cloud, whatever maturity of security program they can buy will quickly be outmoded.

Summary

The benefits of Cloud are many. The challenges are substantial. How we deal with these challenges and their organizational, operational, architectural, and technical impacts will fundamentally change the way in which we think about assessing and assuring the security of our assets.

The Cloud & eHarmony’s 29 Dimensions Of Compatibility…

November 23rd, 2009

I speak to many customers — large companies in numerous verticals and service providers — who, for the reasons we are all very well aware of, are engaging in projects large and small focused on Cloud adoption.

On the enterprise side, the dialog almost inevitably goes like this:

We’re working on taking applications and data that are not heavily regulated/compliance-scoped, business-critical, or full of sensitive information and moving them to a public cloud provider like AWS — we’re also considering virtual private clouds to use public cloud infrastructure in private ways.

We’ve had great success with low-hanging fruit and grid-like utility offerings, but we’re having a bear of a time with real “applications” — taking them as they run today internally and making them run the same way on someone else’s kit.  It’s not always the application, either, but rather the attendant dependencies on other critical IT-centric functions that cause the issues (Ed: “metastructure”)

In parallel we’re engaging in building private clouds for critical applications that either have complex development and support/integration issues that are not ready for running on others’ infrastructure and/or have compliance and regulatory requirements that prevent us from moving them off our infrastructure.

We’re continuing to invest and optimize our internal virtualization deployments; we’re reducing footprint but really increasing compute, network and storage density.  Don’t let the smaller physical space fool you, we’re getting bigger in more efficient floor plans.  We’ve standardized on VMware. We’re figuring out how vSphere and vCloud intersect and what that means in the long term and how that impacts our choice of Cloud providers.

We understand that using the same vendor we use for virtualization to ultimately deliver our private cloud should yield easier portability and workload interoperability, but we’re worried about vendor lock-in…sort of.

We’d really like to be able to move workloads/applications/information in and out of private clouds to public/virtual public offerings and support workloads/applications/information that were born in the cloud on our private cloud, too.  These present a whole host of security and lifecycle management issues.

In the long term, what we want to do is build a self-service portal (not unlike apps.gov) that, depending upon business logic and security/compliance requirements, etc., will allow a business constituent consumer to deploy packaged or bespoke workloads/applications/information and not have to care about where it runs.

That would be nice.  We’d like to be able to do that with the thousands of applications we already support today.

We’re investigating cloud brokers currently, but most don’t do what they advertise they do or have severe limitations. While they often plug the gaps between the various cloud providers, we trade one vendor lock-in problem for another with custom orchestration and provisioning frameworks.  We’re trying to roll our own — cobbling together bits and pieces — but it’s an integration nightmare.

The lack of standard APIs, competing implementation semantics, and immature sets of management, security, provisioning, orchestration and governance solutions really make this all very, very difficult.

What should we do?

This story is the same over and over.

It’s literally the Cloud equivalent of eHarmony.com’s 29 dimensions of compatibility; it’s such a multidimensional problem in large enterprises that have a huge number of applications (thousands) and a ton of sunk infrastructure, mature decades-old operational practices, cultural dispositions, and economic pressures that it’s hard to figure out what to do.

For large enterprises (and the service providers who cater to them) Cloud is not a simple undertaking, at least not to those who have to deal with bridging the gap between the “old world” and the new shiny bits glimmering off in the distance.

Consider that the next time you hear a story of cloud successes and scrutinize what that really means.

/Hoff

Quick Question: Any Public Cloud Providers Using Intel TXT?

September 15th, 2009 3 comments

Does anyone know of any Public Cloud Provider (or Private for that matter) that utilizes Intel’s TXT?

Specifically, does anyone know if Amazon makes use of Intel’s TXT via their Xen-derivative VMM?

Anyone care to share whether they know of any Cloud provider that PLANS to?

Thanks in advance.

Email responses welcome also [hoff @ packetfilter .com]

/Hoff

A Note On Multitenancy As A ‘Defining’ Cloud Attribute…

August 30th, 2009 6 comments

Balakrishna Narasimh and I were discussing the recent hoohaa on Public and Private Clouds when he made an observation on Twitter:

Starting to think public vs private clouds is misleading terminology. more meaningful distinction is single-tenant vs multi-tenant clouds.

I suggested that multitenancy can certainly be an attribute of Cloud deployment, but that I don’t see it as being a differentiator.  I responded thusly:

So different business units in an enterprise don’t represent different “tenants?” They can be governed w/ diff. SLA, policy, $

My point here was that trying to use multitenancy as a way to distinguish between Public and Private Cloud deployments ignores the reality that many large enterprises — many of whom are beginning to architect and deploy Private Clouds — think of their business constituencies as individual “tenants.”  Each of these “tenants” often has different business requirements, service level requirements, cost structures and chargeback rates, policies, etc.
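
A minimal sketch of that idea in Python (the tenant names, rates, and fields are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Tenant:
    """Hypothetical internal 'tenant' in a Private Cloud: a business unit
    governed by its own SLA, policy set, and chargeback rate."""
    name: str
    sla_uptime: float              # e.g. 0.9999
    policy_profile: str            # e.g. "pci", "general"
    chargeback_per_vm_hour: float  # internal billing rate

tenants = [
    Tenant("retail-banking", sla_uptime=0.9999, policy_profile="pci",
           chargeback_per_vm_hour=0.42),
    Tenant("marketing",      sla_uptime=0.99,   policy_profile="general",
           chargeback_per_vm_hour=0.11),
]

# Two "tenants," one enterprise: multitenancy without a Public Cloud in sight.
for t in tenants:
    print(f"{t.name}: {t.sla_uptime:.2%} SLA, ${t.chargeback_per_vm_hour}/VM-hr")
```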

Food for thought.

/Hoff

Do We Need CloudNAPs? It’s A Virtually Certain Maybe.

August 16th, 2009 10 comments

Allan Leinwand from GigaOm wrote a really interesting blog post the other day titled “Do Enterprises Need a Toll Road to the Cloud?” in which he suggested that perhaps what is needed to guarantee high-performance, high-security Cloud connectivity is essentially a middleman that maintains dedicated aggregate connectivity between “…each of the public cloud providers:”

One solution would be for cloud services providers to offer dedicated leased line connections to their clouds. Though for many enterprises the cost of these leased lines over large geographies would be enough to eat into any savings they’d be getting by using the cloud in the first place. Another solution would come in the form of a service provider that aggregated dedicated connections to each of the public cloud providers.

This new provider — let’s call it CloudNAP (Cloud Network Access Point) — would solely be in the business of providing a toll road between the enterprise and the public cloud providers. The business of selling connectivity to the Internet, or transit, is a common ISP offering.  The CloudNAP transit service would be different, however, in that it would be focused on delivering connectivity solely between enterprises and cloud services providers and not between enterprises or between clouds.

The CloudNAP network could guarantee performance between the enterprise and the cloud by working with the service providers to enable the use of quality-of-service techniques that are not available over the public Internet, such as Multiprotocol Label Switching (MPLS) classes for WAN connections or IEEE 802.1p priorities for LAN connections. Perhaps CloudNAP could even restrict the use of connections to cloud service protocols and services like REST (representational state transfer) or HTTPS (Hypertext Transfer Protocol Secure) – thus preserving the network for its intended use by the enterprise.

While I have many opinions on multiple points within the article, I’ll focus briefly on just a couple, starting with the passage above about delivering connectivity solely between enterprises and cloud providers.  Specifically, monetizing connectivity between providers as a sole value-add seems quite limited in terms of a business model.  Furthermore, I really see this as just another feature of what the emerging class of service brokers will offer.

As to the notion of privatizing transport for the purpose of applying QoS, that’s really just a fancy way of describing private Cloud peering and interconnects on the backside of Public Cloud service providers.  The challenge will come when these service providers (whether the SPs directly or brokers) end up managing what amounts to massive numbers of “extranet” connections in current-day parlance; it’s simply taking the overlay architectures of DMZs as we know them today and flipping them outward.  I’m not going to tackle the issue of Net Neutrality in this piece because, well, I’m on vacation in Hawaii and I want to keep my blood pressure down 😉

The blog post repeatedly mentioned the lack of “…standard products that allow enterprises to install private network connections (either paid, dedicated leased lines or VPNs) that would provide predictable network performance and security,” but I’d suggest that’s wholly inaccurate — depending upon your definition of a “standard product.”

In the long term, the notion of an open market for hybrid Cloud connectivity — the Inter-Cloud — will take form, and much of the evolving work being done with open protocols, along with efforts in progress by loose federations of suppliers with common goals and technology underpinnings, will emerge to support it.

In the long term, do we need CloudNAPs? No. Will we get something similar by virtue of what we already do today? Probably.

/Hoff

The Cloud For Clunkers Program…Security, Portability, Interoperability and the Economics of Cloud Providers

August 8th, 2009

Introducing the “Cloud For Clunkers Program”

Cloud providers are advertising the equivalent of the U.S. Government’s “Cash for Clunkers” program:

“You give up your tired, inefficient, polluting, hard to maintain and costly data centers and we’ll give you PFM in the form of a global, seamless, elastic computing capability for less money and with free undercoating.”  The value proposition is fantastic: cost savings, agility, the illusion of infinite scale, flexibility, reliability, and “green.”

There are some truly amazing Cloud offerings making their way to market and it’s interesting to see that the parallels offered up by the economic incentives in both examples are generating a tremendous amount of interest.

It remains to be seen whether this increase in interest is a short-term burst that’s simply shortening the cycle for early adopters, or whether it will deliver sustainable attention over time and drive people to the “showroom floor” who weren’t considering kicking the tires in the first place.

As compelling as the offer of Cloud may be, in order to pull off incentivizing large enterprises to think differently, it requires an awful lot going on under the covers to provide this level of abstracted awesomeness; a ton of heavy lifting and the equipment and facilities to go with it.

To get ready for the gold rush, most of the top-tier IaaS/PaaS Cloud providers are building data processing MegaCenters around the globe in order to provide these services, investing billions of dollars to do so…all supposedly so you don’t have to.

Remember, however, that service providers make money by squeezing the most out of you while providing as little as they need to in order to ensure the circle of life continues.  Note, this is not an indictment of that practice, as $deity knows I’ve done enough of it myself, but just because it has the word “Cloud” in front of it does not make it any different as a business case.  Live by the ARPU, die by the ARPU.

Cloudiness Is Next To Godliness…

What happens then, when something outside of the providers’ control changes the ability or desire to operate from one of these billion-dollar Cloud centers?  No, I don’t mean like a natural disaster or an infrastructure failure.  I mean something far more insidious.

Like what, you say?  Funny you should ask.  The Data Center Knowledge blog details how Microsoft is employing the teleportation equivalent of vMotion by pMotioning (physically moving) an entire Azure Cloud data center to deal with changing tax codes, thanks to a game of chicken with a local state government:

“Due to a change in local tax laws, we’ve decided to migrate Windows Azure applications out of our northwest data center prior to our commercial launch this November,” Microsoft says on its Windows Azure blog (link via OakLeaf Systems). “This means that all applications and storage accounts in the “USA – Northwest” region will need to move to another region in the next few months, or they will be deleted.” Azure applications will shift to the USA – Southwest region, which is housed in Microsoft’s 470,000 square foot San Antonio data center, which opened last September.

The move underscores how the economics of data center site location can change quickly – and how huge companies are able to rapidly shift operations to chase the lowest operating costs.

Did you see the part that said “…all applications and storage accounts in the “USA – Northwest” region will need to move to another region in the next few months, or they will be deleted.”  Sounds rather Un-Cloudlike, no?  Remember the Coghead shutdown?

Large-scale providers and their MegaCenters face some amazing challenges such as the one presented above.  As these issues become public and exposed to due diligence, they are in turn causing enterprises to take stock of how they evaluate their migration to Cloud.  They aren’t particularly new issues; it’s just that people are having a hard time reconciling reality with the confusing anecdote of Cloudy goodness that requires zero-touch and just works…always.

Om Malik chronicled some of these challenges:

And while cloud computing is all the rage in Washington D.C., it seems the state of Washington doesn’t much care for cloud computing. Instead of buying cloud computing services from home-grown cloud computing giant Amazon (or newly emergent cloud player Microsoft), the state has opted to build a brand-new, $180 million data center, despite reservations from some state representatives. Microsoft is moving the data center that houses its Azure cloud services to San Antonio, Texas, from Quincy, Wash. — mostly because of unfavorable tax policies. Apparently, the data centers are no longer covered by sales tax rebates — a costly proposition for Microsoft, which plans to spend many millions on new hardware for the Azure-focused data center.

By the way, Washington is the second state that has decided to build its own data center. In June, Massachusetts decided that it was going to build a $100 million data center. The Sox Nation is home to Nick Carr, author of “The Big Switch,” arguably the most influential book on cloud computing and its revolutionary capabilities.

These aforementioned states are examples of a bigger trend: Most large organizations are still hesitant to go all in when it comes to cloud computing. That’s partly because the cloud revolution still has a long way to go. But much of it is fear of the unknown.

Some of that “unknown” is more about being “unsolved” since we understand many of the challenges but simply don’t have solutions to them yet.

But I Don’t Want My Data In Hoboken!

I’ve spoken about this before, but while a provider may be pressured to move an entire datacenter (or even workloads within it) for their own selfish needs, what might that mean to customers in terms of privacy, security, SLA and compliance requirements?

We have no doubt all heard of requirements that prevent certain data from leaving geographic boundaries.  What if one of these moves came into conflict with regulations such as these?  What happens if the location chosen to replace the existing one causes a legal exception?

This is clearly an inflection point for Cloud and underscores the need to drive for policy-driven portability and interoperability sooner than later.

Even if we have the technical capability to make our workloads portable, we’re not in a position to instantiate policy as an expression of business logic to govern whether they should, can, or ought to be moved.

If we can’t/don’t/won’t work to implement open standards that provide for workload security, portability & interoperability, with the functionality for “consumers” to assert requirements and “providers” to attest to their capabilities based upon a common expression of such, then this will surely add to the drive for large enterprises to consider either wholly-private or virtual private Clouds in order to satisfy their needs under an umbrella they can control.
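
A rough Python sketch of that assert/attest exchange, assuming an entirely hypothetical common expression format; the requirement and capability names are invented:

```python
# The consumer expresses requirements, the provider attests capabilities,
# and a broker-style check decides whether a workload may move.
consumer_requirements = {
    "data_residency": {"EU"},      # data may only live in these regions
    "encryption_at_rest": True,
    "audited": True,               # third-party audit required
}

provider_attestation = {
    "regions": {"EU", "US"},
    "encryption_at_rest": True,
    "audited": False,              # no third-party audit attested
}

def may_move(requirements, attestation):
    """Return True only if every asserted requirement is attested."""
    if not requirements["data_residency"] <= attestation["regions"]:
        return False
    if requirements["encryption_at_rest"] and not attestation["encryption_at_rest"]:
        return False
    if requirements["audited"] and not attestation["audited"]:
        return False
    return True

print(may_move(consumer_requirements, provider_attestation))  # False: audit gap
```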

I’ll Take “Go With What You Know” For $200, Alex

In the short term, customers who are mature in their consolidation, virtualization, optimization and automation practices and are looking to move to utilize IaaS/PaaS services from third party providers will likely demand homogeneity from 1-2 key providers with a global footprint in potential combination with their own footprint to pull this off whilst they play the waiting game for open standards.

The reason for the narrowing of providers and platforms is simple: continuity of service across all dimensions and the ability to control one’s fate, even if it means vendor lock-in driven by feature/function maturity.

Randy Bias alluded to this in a recent post titled “Bifurcating Clouds” in which he highlighted some of the differences in the spectrum of Cloud providers and the platforms they operate from.  There are many choices when it comes to virtualization and Cloud operating platforms, but customers are becoming much more educated about what those choices entail and often times arrive at the fact that cost isn’t always the most pressing driver.  The Total Cloud Ownership* calculation is a multi-dimensional problem…

This poses an interesting set of challenges for service providers looking to offer IaaS/PaaS Cloud services: build your own or re-craft available OSS platforms and drive for truly open standards or latch on to a market leader’s investment and roadmap and adopt it as such.

Ah, Lock-In.  Smells Like Teen Spirit…

From the enterprises’ perspective,  many are simply placing bets that the provider they chose for their “internal” virtualization and consolidation platform will also be the one to lead them to Cloud as service providers adopt the same solution.

This would at least — in the absence of an “open standard” — give customers the ability to provide for portability should a preferred provider decide to move operations somewhere that may or may not satisfy business requirements; they could simply pick another provider that runs on the same platform instead.  You get de facto portability…and the ever-present “threat” of vendor lock-in.

It’s what happens when you play spin the bottle with your data, I’m afraid.

So before you trade in your clunker, it may make sense to evaluate whether it’s simply cheaper in the short term to keep paying the higher gas tax and drive it into the ground, pull the motor for a rebuild and get another 100,000 miles out of the old family truckster, or go for broke and take the short-term cash back without knowing what it might really cost you down the road.

This is why private Cloud and virtual private Clouds make sense.  It’s not about location, it’s about control.

Both hands on the wheel…10 and 2, kids….10 and 2.

/Hoff

*I forgot to credit Vinnie Mirchandani from Deal Architect and his blog entry here for the Total Cloud Ownership coolness. Thanks to @randybias for the reminder.

Cloud Computing [Security] Architectural Framework

July 19th, 2009

For those of you who are not in the security space and may not have read the Cloud Security Alliance’s “Guidance for Critical Areas of Focus,” you may have missed the “Cloud Architectural Framework” section I wrote as a contribution.

We are working on improving the entire guide, but I thought I would re-publish the Cloud Architectural Framework section and solicit comments here as well as “set it free” as a stand-alone reference document.

Please keep in mind, I wrote this before many of the other papers such as NIST’s were officially published, so the normal churn in the blogosphere and general Cloud space may mean that  some of the terms and definitions have settled down.

I hope it proves useful, even in its current form (I have many updates to make as part of the v2 Guidance document.)

/Hoff


Problem Statement

Cloud Computing (“Cloud”) is a catch-all term that describes the evolutionary development of many existing technologies and approaches to computing that, at its most basic, separates application and information resources from the underlying infrastructure and mechanisms used to deliver them, with the addition of elastic scale and the utility model of allocation.  Cloud computing enhances collaboration, agility, scale and availability, and provides the potential for cost reduction through optimized and efficient computing.

More specifically, Cloud describes the use of a collection of distributed services, applications, information and infrastructure comprised of pools of compute, network, information and storage resources.  These components can be rapidly orchestrated, provisioned, implemented and decommissioned using an on-demand utility-like model of allocation and consumption.  Cloud services are most often, but not always, utilized in conjunction with and enabled by virtualization technologies to provide dynamic integration, provisioning, orchestration, mobility and scale.

While the very definition of Cloud suggests the decoupling of resources from the physical affinity to and location of the infrastructure that delivers them, many descriptions of Cloud go to one extreme or another by either exaggerating or artificially limiting the many attributes of Cloud.  This is often purposely done in an attempt to inflate or marginalize its scope.  Some examples include the suggestions that for a service to be Cloud-based, that the Internet must be used as a transport, a web browser must be used as an access modality or that the resources are always shared in a multi-tenant environment outside of the “perimeter.”  What is missing in these definitions is context.

From an architectural perspective given this abstracted evolution of technology, there is much confusion surrounding how Cloud is both similar and differs from existing models and how these similarities and differences might impact the organizational, operational and technological approaches to Cloud adoption as it relates to traditional network and information security practices.  There are those who say Cloud is a novel sea-change and technical revolution while others suggest it is a natural evolution and coalescence of technology, economy, and culture.  The truth is somewhere in between.

There are many models available today which attempt to address Cloud from the perspective of academicians, architects, engineers, developers, managers and even consumers. We will focus on a model and methodology that is specifically tailored to the unique perspectives of IT network and security professionals.

The keys to understanding how Cloud architecture impacts security architecture are a common and concise lexicon, coupled with a consistent taxonomy of offerings, by which Cloud services and architecture can be deconstructed and mapped to a model of compensating security and operational controls, risk assessment and management frameworks, and, in turn, compliance standards.

Setting the Context: Cloud Computing Defined

Understanding how Cloud Computing architecture impacts security architecture requires an understanding of Cloud’s principal characteristics, the manner in which cloud providers deliver and deploy services, how they are consumed, and ultimately how they need to be safeguarded.

The scope of this area of focus is not to define the specific security benefits or challenges presented by Cloud Computing as these are covered in depth in the other 14 domains of concern:

  • Information lifecycle management
  • Governance and Enterprise Risk Management
  • Compliance & Audit
  • General Legal
  • eDiscovery
  • Encryption and Key Management
  • Identity and Access Management
  • Storage
  • Virtualization
  • Application Security
  • Portability & Interoperability
  • Data Center Operations Management
  • Incident Response, Notification, Remediation
  • “Traditional” Security impact (business continuity, disaster recovery, physical security)

We will discuss the various approaches and derivative offerings of Cloud and how they impact security from an architectural perspective using an in-process model developed as a community effort associated with the Cloud Security Alliance.

Principal Characteristics of Cloud Computing

Cloud services are based upon five principal characteristics that demonstrate their relation to, and differences from, traditional computing approaches:

  1. Abstraction of Infrastructure
    The compute, network and storage infrastructure resources are abstracted from the application and information resources as a function of service delivery. Where and by what physical resources data is processed, transmitted and stored becomes largely opaque from the perspective of an application’s or service’s ability to deliver it.  Infrastructure resources are generally pooled in order to deliver service regardless of the tenancy model employed – shared or dedicated.  This abstraction is generally provided by means of high levels of virtualization at the chipset and operating system levels or enabled at the higher levels by heavily customized filesystems, operating systems or communication protocols.
  2. Resource Democratization
    The abstraction of infrastructure yields the notion of resource democratization – whether infrastructure, applications, or information – and provides the capability for pooled resources to be made available and accessible to anyone or anything authorized to utilize them using standardized methods for doing so.
  3. Services Oriented Architecture
    As the abstraction of infrastructure from application and information yields well-defined and loosely-coupled resource democratization, the notion of utilizing these components in whole or part, alone or with integration, provides a services oriented architecture where resources may be accessed and utilized in a standard way.  In this model, the focus is on the delivery of service and not the management of infrastructure.
  4. Elasticity/Dynamism
    The on-demand model of Cloud provisioning coupled with high levels of automation, virtualization, and ubiquitous, reliable and high-speed connectivity provides for the capability to rapidly expand or contract resource allocation to service definition and requirements using a self-service model that scales to as-needed capacity.  Since resources are pooled, better utilization and service levels can be achieved.
  5. Utility Model Of Consumption & Allocation
    The abstracted, democratized, service-oriented and elastic nature of Cloud combined with tight automation, orchestration, provisioning and self-service then allows for dynamic allocation of resources based on any number of governing input parameters.  Given the visibility at an atomic level, the consumption of resources can then be used to provide an “all-you-can-eat” but “pay-by-the-bite” metered utility-cost and usage model. This facilitates greater cost efficiencies and scale as well as manageable and predictive costs.
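
As a toy illustration of this “all-you-can-eat” but “pay-by-the-bite” model, here is a minimal Python sketch; the rates and usage figures are invented:

```python
# Bill strictly on measured consumption, not provisioned capacity.
# Rates are hypothetical, per unit of each metered resource.
RATES = {"vm_hours": 0.10, "gb_stored": 0.15, "gb_transferred": 0.08}

def metered_bill(usage):
    """Sum rate * measured amount across every metered resource."""
    return sum(RATES[resource] * amount for resource, amount in usage.items())

# An elastic month: a burst of compute, modest storage and transfer.
bill = metered_bill({"vm_hours": 720, "gb_stored": 50, "gb_transferred": 120})
print(f"${bill:.2f}")  # $89.10
```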

Cloud Service Delivery Models

Three archetypal models and the derivative combinations thereof generally describe cloud service delivery.  The three individual models are often referred to as the “SPI Model,” where “SPI” refers to Software, Platform and Infrastructure (as a service) respectively and are defined thusly[1]:

  1. Software as a Service (SaaS)
    The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure and accessible from various client devices through a thin client interface such as a Web browser (e.g., web-based email). The consumer does not manage or control the underlying cloud infrastructure, network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.
  2. Platform as a Service (PaaS)
    The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created applications using programming languages and tools supported by the provider (e.g., java, python, .Net). The consumer does not manage or control the underlying cloud infrastructure, network, servers, operating systems, or storage, but the consumer has control over the deployed applications and possibly application hosting environment configurations.
  3. Infrastructure as a Service (IaaS)
    The capability provided to the consumer is to rent processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly select networking components (e.g., firewalls, load balancers).

Understanding the relationship and dependencies between these models is critical.  IaaS is the foundation of all Cloud services with PaaS building upon IaaS, and SaaS – in turn – building upon PaaS.  We will cover this in more detail later in the document.

The OpenCrowd Cloud Solutions Taxonomy shown in Figure 1 provides an excellent reference that demonstrates the swelling ranks of solutions available today in each of the models above.

Narrowing the scope of specific capabilities and functionality within each of the *aaS offerings, or employing the functional coupling of services and capabilities across them, may yield derivative classifications.  For example, “Storage as a Service” is a specific sub-offering within the IaaS “family,” while “Database as a Service” may be seen as a derivative of PaaS, etc.

Each of these models yields significant trade-offs in the areas of integrated features, openness (extensibility) and security.  We will address these later in the document.

Figure 1 - The OpenCrowd Cloud Taxonomy

Cloud Service Deployment and Consumption Modalities

Regardless of the delivery model utilized (SaaS, PaaS, IaaS), there are four primary ways in which Cloud services are deployed and characterized:

  1. Private
    Private Clouds are provided by an organization or its designated service provider and offer a single-tenant (dedicated) operating environment with all the benefits and functionality of elasticity and the accountability/utility model of Cloud.  The physical infrastructure may be owned by and/or physically located in the organization’s datacenters (on-premise) or those of a designated service provider (off-premise), with an extension of management and security control planes controlled by the organization or the designated service provider respectively.

    The consumers of the service are considered “trusted.”  Trusted consumers are those who fall within an organization’s legal/contractual umbrella, including employees, contractors, and business partners.  Untrusted consumers are those that may be authorized to consume some or all services but are not logical extensions of the organization.

  2. Public
    Public Clouds are provided by a designated service provider and may offer either a single-tenant (dedicated) or multi-tenant (shared) operating environment with all the benefits and functionality of elasticity and the accountability/utility model of Cloud.  The physical infrastructure is generally owned and managed by the designated service provider and located within the provider’s datacenters (off-premise).  Consumers of Public Cloud services are considered to be untrusted.
  3. Managed
    Managed Clouds are provided by a designated service provider and may offer either a single-tenant (dedicated) or multi-tenant (shared) operating environment with all the benefits and functionality of elasticity and the accountability/utility model of Cloud.  The physical infrastructure is owned by and/or physically located in the organization’s datacenters, with an extension of management and security control planes controlled by the designated service provider.  Consumers of Managed Clouds may be trusted or untrusted.

  4. Hybrid
    Hybrid Clouds are a combination of public and private cloud offerings that allow for transitive information exchange and possibly application compatibility and portability across disparate Cloud service offerings and providers utilizing standard or proprietary methodologies regardless of ownership or location.  This model provides for an extension of management and security control planes.  Consumers of Hybrid Clouds may be trusted or untrusted.

The difficulty in using a single label to describe an entire service/offering is that it actually attempts to describe the following elements:

  • Who manages it
  • Who owns it
  • Where it’s located
  • Who has access to it
  • How it’s accessed

The notion of Public, Private, Managed and Hybrid when describing Cloud services really denotes the attribution of management and the availability of service to specific consumers of the service.

It is important to note that the characterizations that describe how Cloud services are deployed are often used interchangeably with the notion of where they are provided; as such, you may often see public and private clouds referred to as “external” or “internal” clouds.  This can be very confusing.

The manner in which Cloud services are offered and ultimately consumed is then often described relative to the location of the asset/resource/service owner’s management or security “perimeter” which is usually defined by the presence of a “firewall.”

While it is important to understand where within the context of an enforceable security boundary an asset lives, the problem with interchanging or substituting these definitions is that the notion of a well-demarcated perimeter separating the “outside” from the “inside” is an anachronistic concept.

It is clear that the re-perimeterization and erosion of trust boundaries already underway in the enterprise are amplified and accelerated by Cloud.  This is thanks to the ubiquitous connectivity provided to devices, the amorphous nature of information interchange, the ineffectiveness of traditional static security controls in dealing with the dynamic nature of Cloud services, and the mobility and velocity at which Cloud services operate.

Thus the deployment and consumption modalities of Cloud should be thought of not only within the construct of “internal” or “external” as it relates to asset/resource/service physical location, but also by whom they are being consumed and who is responsible for their governance, security and compliance to policies and standards.

This is not to suggest that the on- or off-premise location of an asset/resource/information does not affect the security and risk posture of an organization, because it does, but it also depends upon the following:

  • The types of application/information/services being managed
  • Who manages them and how
  • How controls are integrated
  • Regulatory issues

Table 1 illustrates the summarization of these points:

Table 1 - Cloud Computing Service Deployment

As an example, one could classify a service as IaaS/Public/External (Amazon’s AWS EC2 offering is a good example) as well as SaaS/Managed/Internal (an internally-hosted but third-party-managed custom SaaS stack built on Eucalyptus, for example).
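
For illustration, such a classification can be captured as a simple record of attributes; the field names below are my own shorthand for the deployment discussion above, not a formal standard:

    from dataclasses import dataclass

    # Illustrative classification record combining delivery model, deployment
    # model, and location; field names are my own, not a formal taxonomy.

    @dataclass(frozen=True)
    class CloudServiceClass:
        delivery: str      # "SaaS" | "PaaS" | "IaaS"
        deployment: str    # "Private" | "Public" | "Managed" | "Hybrid"
        location: str      # "Internal" | "External"

    ec2 = CloudServiceClass("IaaS", "Public", "External")      # e.g., AWS EC2
    custom = CloudServiceClass("SaaS", "Managed", "Internal")  # e.g., Eucalyptus-based stack
    print(ec2, custom)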

Thus when assessing the impact a particular Cloud service may have on one’s security posture and overall security architecture, it is necessary to classify the asset/resource/service within the context of not only its location but also its criticality and business impact as it relates to management and security.  This means that an appropriate level of risk assessment is performed prior to entrusting it to the vagaries of “The Cloud.”

Which Cloud service deployment and consumption model is used depends upon the nature of the service and the requirements that govern it.  As we demonstrate later in this document, there are significant trade-offs in each of the models in terms of integrated features, extensibility, cost, administrative involvement and security.

Figure 2 - Cloud Reference Model

It is therefore important to be able to classify a Cloud service quickly and accurately and compare it to a reference model that is familiar to an IT networking or security professional.

A reference model such as that shown in Figure 2 allows one to visualize the boundaries of *aaS definitions, how and where a particular Cloud service fits, and how the discrete *aaS models align and interact with one another.  It is presented in an OSI-like layered structure with which security and network professionals should be familiar.

Considering each of the *aaS models as a self-contained “solution stack” of integrated functionality with IaaS providing the foundation, it becomes clear that the other two models – PaaS and SaaS – in turn build upon it.

Each of the abstract layers in the reference model represents elements which, when combined, comprise the service offerings in each class.

IaaS includes the entire infrastructure resource stack, from the facilities to the hardware platforms that reside in them. Further, IaaS incorporates the capability to abstract resources (or not) as well as deliver physical and logical connectivity to those resources.  Ultimately, IaaS provides a set of APIs that allows management of, and other forms of interaction with, the infrastructure by the consumer of the service.

Amazon’s AWS Elastic Compute Cloud (EC2) is a good example of an IaaS offering.
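
As a sketch of the kind of API-driven interaction IaaS exposes, the snippet below uses the classic Python boto library against EC2; the AMI ID is a placeholder and credentials are assumed to be configured separately:

    import boto  # classic Python bindings for AWS (pip install boto)

    # Minimal sketch of programmatic IaaS management. The AMI ID is a
    # placeholder; credentials are assumed to be set in the environment or
    # a boto config file, and real use needs key pairs and error handling.

    conn = boto.connect_ec2()                          # uses configured credentials
    reservation = conn.run_instances("ami-00000000",   # placeholder image ID
                                     instance_type="m1.small")
    instance = reservation.instances[0]
    print(instance.id, instance.state)                 # e.g., "i-..." "pending"

    conn.terminate_instances(instance_ids=[instance.id])  # tear down on demand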

PaaS sits atop IaaS and adds an additional layer of integration with application development frameworks, middleware capabilities and functions such as database, messaging, and queuing that allows developers to build applications which are coupled to the platform and whose programming languages and tools are supported by the stack.  Google’s AppEngine is a good example of PaaS.

SaaS in turn is built upon the underlying IaaS and PaaS stacks and provides a self-contained operating environment used to deliver the entire user experience including the content, how it is presented, the application(s) and management capabilities.

SalesForce.com is a good example of SaaS.

It should therefore be clear that there are significant trade-offs in each of the models in terms of features, openness (extensibility) and security.

Figure 3 - Trade-offs Across *aaS Offerings

Figure 3 demonstrates the interplay and trade-offs between the three *aaS models:

  • Generally, SaaS provides a large amount of integrated features built directly into the offering, with the least amount of extensibility and a relatively high level of integrated security.
  • PaaS generally offers fewer integrated features since it is designed to enable developers to build their own applications on top of the platform; it is therefore more extensible than SaaS by nature, but that balance trades off integrated security features and capabilities.
  • IaaS provides few, if any, application-like features and enormous extensibility, but generally fewer security capabilities and less functionality beyond protecting the infrastructure itself, since it expects operating systems, applications and content to be managed and secured by the consumer.

The key takeaway from a security architecture perspective in comparing these models is that the lower down the stack the Cloud service provider stops, the more security capability and management the consumer is responsible for implementing and operating themselves.

This is critical because once a Cloud service can be classified and referenced against the model, mapping the security architecture, business and regulatory or other compliance requirements against it becomes a gap-analysis exercise to determine the general “security” posture of a service and how it relates to the assurance and protection requirements of an asset.

Figure 4 below shows an example of how a classified Cloud service can be compared against a catalog of compensating controls to determine which controls exist and which do not, as provided by the consumer, the Cloud service provider, or another third party.

Figure 4 - Mapping the Cloud Model to the Security Model

Once this gap analysis is complete, as governed by the requirements of any regulatory or other compliance mandates, it becomes much easier to feed the results back into a risk assessment framework and determine how the gaps, and ultimately the risk, should be addressed: accept, transfer, mitigate or ignore.
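
A toy version of that gap-analysis exercise might look like the following; the control names are hypothetical stand-ins for entries in a real compensating-controls catalog:

    # Toy gap analysis: controls an asset requires versus controls actually
    # supplied by the consumer, provider, or a third party. Control names
    # are hypothetical placeholders for a real controls catalog.

    required = {"encryption-at-rest", "audit-logging", "ids", "patch-mgmt"}
    provided = {"encryption-at-rest", "patch-mgmt"}   # e.g., per provider docs

    gaps = required - provided
    for control in sorted(gaps):
        # Each gap feeds the risk framework: accept, transfer, mitigate, or ignore.
        print("GAP:", control, "-> requires a compensating control or risk decision")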

Conclusion

Understanding how architecture, technology, process and human capital requirements change or remain the same when deploying Cloud Computing services is critical.   Without a clear understanding of the higher-level architectural implications of Cloud services, it is impossible to address more detailed issues in a rational way.

The keys to understanding how Cloud architecture impacts security architecture are a common, concise lexicon coupled with a consistent taxonomy of offerings, by which Cloud services and architecture can be deconstructed, mapped to a model of compensating security and operational controls, and aligned with risk assessment and management frameworks and, in turn, compliance standards.


[1] Credit: Peter M. Mell, NIST

Incomplete Thought: The Opportunity For Desktop As a Service – The Client Cloud?

June 16th, 2009 8 comments

Please excuse me if I’m late to the party bringing this up…

We talk a lot about the utility of Public Clouds to enable the cost-effective and scalable implementation of “server” functionality; whether that’s the SaaS, PaaS, or IaaS model, the concept is pretty well understood: use someone else’s infrastructure to host your applications and information.

As it relates to the desktop/client side of Cloud, we normally think about hosting desktop/client capabilities as a function of Private Cloud capabilities, behind the firewall.  Whether we’re talking about terminal service-like capabilities or VDI, it seems to me people continue to think of this as a predominantly “internal” opportunity.

I don’t think people are talking enough about the client side of Cloud and desktop as a service (DaaS) and what this means:

If the physical access methods continue to get skinnier (smart phones, thin clients, client hypervisors, virtual machines, etc.) is there an opportunity for providers of Infrastructure as a Service to host desktop instances outside a corporate firewall?  If I can take advantage of all of the evolving technology in the space and couple it with the same sorts of policy advancements, networking and VPN functionality to connect me to IaaS server resources running in Private or Public Clouds, isn’t that a huge opportunity for further cost savings, distributed availability and potentially better security?

There are companies, such as Desktone, looking to do this very thing as a way to offset the costs of VDI and further the efforts of consolidation.  It makes a lot of sense for lots of reasons, and despite my lack of hands-on exposure to the technology, it sure looks like we have the technical capability to do this today.  Dana Gardner wrote about this back in 2007, and his points are as valid now as they were then, albeit with a much bigger uptake in Cloud:

The stars and planets finally appear to be aligning in a way that makes utility-oriented delivery of a full slate of client-side computing and resources an alternative worth serious consideration. As more organizations are set up as service bureaus — due to such  IT industry developments as ITIL and shared services — the advent of off the wire everything seems more likely in many more places

I could totally see how Amazon could offer the same sorts of workstation utility as they do for server instances.

Will DaaS be the next frontier of consolidation in the enterprise?

If you’re considering hosting your service instances elsewhere, why not your desktops?  Citrix and VMware (as examples) seem to think you might…

/Hoff

Cloud Computing Security: (Orchestral) Maneuvers In the Dark?

June 14th, 2009 8 comments

Last week Kevin L. Jackson wrote an insightful article titled: Cloud Computing: The Dawn of Maneuver Warfare in IT Security.  I enjoyed Kevin’s piece but struggled with how I might respond: cheerleader or pundit.  I tried for a bit of both while working in witty references to OMD.*

Kevin’s essay is an interesting — if not hope-filled — glimpse into what IT Security could be as enabled by Cloud Computing and virtualization, were one able to suspend disbelief given the realities of hefty dependencies on archaic protocols, broken trust models and huge gaps in technology and operational culture.  Readers of my blog will certainly recognize this from “The Four Horsemen of the Virtualization Security Apocalypse” and “The Frogs Who Desired a King: A Virtualization and Cloud Computing Security Fable.”

Conversely, I’ve certainly done my fair share of trying to change the world, both by thought and action, in the role of “cheerleader”; I’ve been involved in everything from massive sensornet deployments to developing AI/neural-network-based security technologies, so I think I’ve got a fair idea of what the balance looks like.  The salty pragmatist often triumphs, however…

Kevin’s article represents a futurist’s view, which is in no way a bad thing, but I fear it is too far disconnected from the realities of security and operational maturity outside of the navel:

The lead topic of every information technology (IT) conversation today is cloud computing. The key point within each of those conversations is inevitably cloud computing security.  Although this trend is understandable, the sad part is that these conversations will tend to focus on all the standard security pros, cons and requirements. While protecting data from corruption, loss, unauthorized access, etc. are all still required characteristics of any IT infrastructure, cloud computing changes the game in a much more profound way.

Certainly Cloud is a game changer, but just because the rules change does not mean the players do.  We haven’t solved those issues as they pertain to non-virtualized or Cloud infrastructure, so while sad, it’s a crushing truth we have to address.  Further, to get from “here” to “there,” we do need to focus on these issues because that is how we are measured today; most of us don’t get to start from scratch.

To that point, check out “Incomplete Thought: Cloud Security IS Host-Based…At The Moment” for why this gap exists in the first place.

I should make it clear that this does not mean I necessarily disagree with the exploration of Kevin’s future state; in fact, I’ve written about it in various forms several times.  It is important, however, to separate what Cloud will deliver from a security perspective in the short term from what it can possibly deliver in the long term; this applies to both the cultural and technical perspectives.

I think the most significant challenges I had in reading Kevin’s article revolved around three things:

  1. Mixing tenses in some key spots seemed to imply that, out of the box today, Cloud Computing can deliver on the promises Kevin is describing now.  Given the audience, this can lead to unachievable expectations.
  2. The disconnect between the public, private and military sectors, with an over-reliance on military analogies as a model representing an ideal state of security operations and strategy, can be startling.
  3. Unrealistic portrayals of where we are with the maturity of Cloud/virtualization mobility, portability, interoperability and security capabilities.

In the short term, incremental improvements will certainly occur with respect to security, thanks to the “lubricant-like” functionality provided by virtualization and Cloud.

These “improvements,” however, represent gains mostly in the automation of manual processes and a resultant increase in efficiency, rather than a dramatic improvement in survivability or security, given what we have to work with today.

The lack of heterogeneous closed-loop autonomics, governance and orchestration, in conjunction with the fact that a huge amount of infrastructure and applications are not virtualization- or Cloud-ready, means this picture is a vision, not a mission.

Kevin juxtaposes the last few decades of static, Maginot Line IT/Information Security “defense-in-depth” strategy with the unpredictable and “agile, hostile and mobile” notions of military warfighter maneuvers to compare and contrast what he suggests Cloud will deliver with an enlightened state of security capabilities:

Until now, IT security has been akin to early 20th century warfare.  After surveying and carefully cataloging all possible threats, the line of business (LOB) manager and IT professional would debate and eventually settle on appropriate and proportional risk mitigation strategies. The resulting IT security infrastructures and procedures typically reflected a “defense in depth” strategy, eerily reminiscent of the French WWII Maginot line . Although new threats led to updated capabilities, the strategy of extending and enhancing the protective barrier remained. Often describe as an “arms race”, the IT security landscape has settled into ever escalating levels of sophisticated attack versus defense techniques and technologies. Current debate around cloud computing security has seemed to continue without the realization that there is a fundamental change now occurring. Although technologically, cloud computing represents an evolution, strategically it represents the introduction of maneuver warfare into the IT security dictionary.

The concepts of attrition warfare and maneuver warfare dominate strategic options within the military. In attrition warfare, masses of men and material are moved against enemy strongpoints, with the emphasis on the destruction of the enemy’s physical assets. Maneuver warfare, on the other hand, advocates that strategic movement can bring about the defeat of an opposing force more efficiently than by simply contacting and destroying enemy forces until they can no longer fight.

The US Marine Corps concept of maneuver is a “warfighting philosophy that seeks to shatter the enemy’s cohesion through a variety of rapid, focused, and unexpected actions which create a turbulent and rapidly deteriorating situation with which the enemy cannot cope.”   It is important to note, however, that neither is used in isolation.  Balanced strategies combine attrition and maneuver techniques in order to be successful on the battlefield.

The reality is that outside of the military, “shock and awe” doesn’t really work when you’re mostly limited to “compliance and three analysts with a firewall.”  Check out “Security & the Cloud — What Does That Even Mean?”

Here’s where the reality distortion field trumps the rainbows and unicorns:

With cloud computing, IT security can now use maneuver concepts for enhance defense. By leveraging virtualization, high speed wide area networks and broad industry standardization, new and enhanced security strategies can now be implemented. Defensive options can now include the virtual repositioning of entire datacenters. Through “cloudbursting”, additional compute and storage resources can also be brought to bear in a defensive, forensic or counter-offensive manner. The IT team can now actively “fight through an attack” and not just observe an intrusion, merely hoping that the in-place defenses are deep enough. The military analogy continues in that maneuver concepts must be combined with “defense in depth” techniques into holistic IT security strategies.

Allow me to suggest that “fight[ing] through an attack” by simply redirecting/re-positioning the $victim isn’t really an effective definition of an “active countermeasure” any more than waiting the attack out, because there’s no offense, only defense.  There is no elimination of threat.  I’ve written about that a bit in “Incomplete Thought: Offensive Computing – The Empire Strikes Back,” “Thinning the Herd & Chlorinating the Malware Gene Pool…” and “Everybody Wing Chun Tonight & ‘ISPs Providing Defense By Engaging In Offensive Computing’ For $100, Alex.”  Mobility does not imply security.

To wit:

A theoretical example of how maneuver IT security strategies could be use would be in responding to a  denial of service attack launched on DISA datacenter hosted DoD applications. After picking up a grossly abnormal spike in inbound traffic, targeted applications could be immediately transferred to virtual machines hosted in another datacenter. Router automation would immediately re-route operational network links to the new location (IT defense by maneuver). Forensic and counter-cyber attack applications, normally dormant and hosted by a commercial infrastructure-as-a-service (IaaS) provider (a cloudburst), are immediately launched, collecting information on the attack and sequentially blocking zombie machines. The rapid counter would allow for the immediate, and automated, detection and elimination of the attack source.

To pick on this specific example, even given the relatively mature anti-DDoS capabilities we have today without virtualization or Cloud, simply moving resources around in response to an attack does nothing if the assets are bound to the same IP addresses and hostnames. Fundamentally, the static underpinnings holding the infrastructure together hinder this lofty goal.  You can Cloudburst till the cows come home, but the attacks will simply follow.  You transfer all those assets to a new virtual datacenter and for the most part, the bad traffic goes with it. Distributed intelligence can certainly reduce the pain, but with distributed botnets whose node counts can number in the millions, you’re not going to provide for the “…elimination of the attack source.”
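
A trivial sketch of why the traffic follows: every bot simply re-resolves the target’s hostname before sending traffic, so “maneuvering” the datacenter only changes the answer to that query (the hostname below is illustrative):

    import socket

    def resolve_target(hostname):
        """What each botnet node does before sending traffic: ask DNS."""
        return socket.gethostbyname(hostname)

    # Illustrative hostname. "Maneuvering" the victim to a new datacenter
    # merely updates the DNS answer; the flood follows the new address and
    # the attack source is never eliminated.
    print(resolve_target("www.example.com"))   # before the move
    # ...victim cloudbursts elsewhere, DNS record is updated...
    print(resolve_target("www.example.com"))   # after: bots re-resolve and follow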

With these large-scale botnets as an example, the excess capacity and mobility of the $victim could even have worse unintended ramifications, such as what I wrote about here: Economic Denial Of Sustainability (EDoS)

In closing, we’ve got two parallel paths of advancing technology: the autonomics of the datacenter and the evolution of security.  I’ll wager we’ll see improvements in the former that are well out of phase and out of step with the latter, not least because of what Kevin closed with:

This revolution, of course, doesn’t come without its challenges.  This is truly a cultural shift. Cloud computing provides choice, and in the context of active defense strategies, these choices must be made in real-time.  While the cloud computing advantages of self-service, automation, visibility and rapid provisioning can enable maneuver security strategies, successful implementation requires cooperation and collaboration across multiple entities, both within and without.
The cloud computing era is also the dawning of a new day in IT security.  In the not to distant future, network and IT security training will include both static and active IT security techniques. Maneuver warfare in IT security is here to stay.

It’s absolutely a cultural issue, but we must strive to be realistic about where Cloud and security technology and capabilities actually stand in alignment.  As someone who’s spent the last 15 years in IT/Security, I can say that this is NOT the “…dawning of a new day in IT security”; rather, it’s still dark out and will be for quite some time.  There is indeed opportunity to utilize Cloud and virtualization to react better, faster and more efficiently, but let’s not pretend we’re treating the problem when what we’re doing is making the symptoms less noticeable.

I am absolutely bullish on Cloud, but not Cloud Security as it stands, at least not until we make headway toward fundamentally fixing the foundational problems we have that allow the problems to occur in the first place.

/Hoff

* I thought that out of all of OMD’s tracks, the most apropos titles to match to this blog post would be “Pandora’s Box,” “Dreaming,” or “The New Stone Age” 😉  Thanks for the motivation, @csoandy

The Six Worst Cloud Security Mistakes? I Can Do You One Better…

June 6th, 2009 2 comments

I recently read a story from Kelly Jackson Higgins of Dark Reading outlining what are described as the “Six Worst Cloud Security Mistakes”:

  1. Assuming the cloud is less secure than your data.
  2. Not verifying, testing, or auditing the security of your cloud-based service provider.
  3. Failing to vet your cloud provider’s viability as a business.
  4. Assuming you’re no longer responsible for securing data once it’s in the cloud.
  5. Putting insecure apps in the cloud and expecting that to make them more secure.
  6. Having no clue that your business units are already using some cloud-based services.

A very interesting list, for sure, and a reasonable set of potential “mistakes” to ponder, but I’m really having trouble with one in particular.

The one that’s getting my goose honking is #1: Assuming the cloud is less secure than your data.

Really? I maintain that this generalization about Cloud being more or less secure relative to one’s own capabilities is a silly thing to argue; let’s see why.

We start off with what I think is a strange bit of contradiction:

It’s only natural for security pros to be control freaks. Being charged with securing a company’s data and intellectual property requires a healthy dose of paranoia and protectionism. But sometimes that leads to false impressions about cloud security. “One common mistake is that as soon as you talk about the cloud, [organizations] assume it’s less secure than their own IT security operation,” says Chenxi Wang, principal analyst at Forrester Research. “More control does not necessarily lead to more security.”

Assuming that one of the reasons a company might consider outsourcing its IT security operations to a third-party [Cloud] provider IS the fact that the provider has more control, or at least control equal to what the company can provide itself, it occurs to me this sort of statement can be interpreted many ways.  Here’s one, for example.

I find myself confused by the highlighted sentence regarding control and security within the context of what is written.  In fact, if you read the next paragraph, it seems to imply that because a Cloud provider has more control, it can offer better security:

In fact, with services such as Google’s SaaS, data loss is less likely because the information is accessible from anywhere and anytime without saving it to an easily lost or stolen USB stick or CD, according to Eran Feigenbaum, director of security for Google Apps. And Google’s security-patching process is more streamlined than a typical enterprise because its server architecture is homogeneous, he says. “Many attacks [come from a] lack of patch management and server misconfiguration…For Google, when the time comes to patch, we can do so across the entire platform in a uniform fashion,” he said.

I’ll say it again: SaaS is a convenient way of dumbing down “Cloud Computing” to a singular instance/application/service, but it completely ignores Platform and Infrastructure as a Service offerings, which are wildly different animals, especially from a security perspective.  Please see my latest commentary about this in my response to Bruce Schneier’s equation of SaaS with Cloud Computing to the exclusion of PaaS/IaaS.

I’ve made the point before that comparing managing/patching a single application and its supporting infrastructure in a SaaS offering to an enterprise that would otherwise have to support not only that service but potentially hundreds more is a completely unfair comparison.  If you want to compare apples to apples, I’d maintain that any organization with a mature security program whose only charter was to support (securely) a single application could do it just as well as a SaaS provider, all other things being equal.

The differences here become scale and multi-tenancy in the case of the Cloud provider; I think these issues actually make a Cloud environment more difficult to secure.

Also, suggesting with the Google example that “data loss is less likely” because it’s “accessible from anywhere” and doesn’t involve “…lost or stolen USB stick(s) or CD(s)” seems an awfully arbitrary one given the fact that one of the most interesting data loss/leakage incidents in recent Cloud history came from Google’s Docs offering due to an operator (Google) system misconfiguration.  USB sticks and CDs are also a very narrow definition of data loss/leakage.

Then there’s the more global view SaaS and other cloud providers have, Feigenbaum says. “As an enterprise, you only see a small slice of what’s affecting you [threat-wise],” Feigenbaum said during a panel on cloud security at the RSA Conference in April. “A cloud provider can have the economy of scale for a holistic vision…the cloud shifts security and also makes it better,” he said.

I don’t have anything to argue about here; a wider perspective and better visibility is a good thing.  Again, however, this depends upon the type of service, what is being monitored and protected, on behalf of whom and from whom.

But that doesn’t mean you should blindly trust your cloud provider, though the larger ones do tend to have a better handle on threats due to their size, Forrester’s Wang says. “These people deal with security issues at more complex levels than your own IT team sees on a daily basis,” Wang says. “It’s a misconception to say cloud security is definitely less capable or more problematic.”

No, you shouldn’t blindly trust your providers, but that last statement suggests we should similarly trust that providers do a better job and deal with security issues at more complex levels?  What does that even mean?  Please do NOT tell me that a SAS 70 Type II is your answer.  Just as “it’s a misconception to say cloud security is definitely less capable or more problematic,” I can just as easily suggest the converse is true without evidence.

I would like to see the empirical data that backs up that set of statements, and the common metrics I can use to measure across providers and enterprises alike.  Thought so.

Thus far, security has been one of the main hurdles to adoption of cloud-based services, says Michelle Dennedy, chief governance officer for cloud computing at Sun Microsystems. “Trust in the cloud, more than technical abilities, has been hindering adoption,” Dennedy says. “But the cloud can be more secure than a private environment in many cases.”

Michelle is definitely correct; trust represents a fundamental issue with Cloud adoption, and it rolls both ways.  Asking us to “trust but verify” when what we’re being asked to verify can’t easily be trusted poses a very difficult scenario indeed.

By the way, I think the worst Cloud Security mistake is not knowing what Cloud Security even means.

/Hoff