Search Results

Keyword: ‘taxonomy’

Update on the Cloud (Ontology/Taxonomy) Model…

March 28th, 2009 3 comments

A couple of months ago I kicked off a community-enabled project to build an infrastructure-centric ontology/taxonomy model of Cloud Computing.

You can see the original work with all the comments here.  Despite the distracting haggling over the use of the words “ontology and taxonomy,”  the model (as I now call it) has been well received by those for whom it was created.

Specifically, my goal was to be able to help a network or security professional do these things:

  1. Settle on a relevant and contextual technology-focused definition of Cloud Computing and its various foundational elements beyond the existing academic & 30,000 foot-view models
  2. Understand how this definition maps to the classical SPI (SaaS, PaaS, IaaS) models with which many people are aware
  3. Deconstruct the SPI model and present it in a layered format similar to the OSI model showing how each of the SPI components interact with and build atop one another
  4. Provide a further relevant mapping of infrastructure, applications, etc. at each model so as to relate well-understood solutions experiences to each
  5. Map a set of generally-available solutions from a catalog of compensating controls (from the security perspective) to each layer
  6. Ultimately map the SPI layers to the compensating controls and in turn to a set of governance and regulatory requirements (SoX, PCI, HIPAA, etc.)

This is very much, and unapologetically so, a controls-based model.  I assume that there exists no utopian state of proper architectural design, secure development lifecycle, etc. Just like the real world.  So rather than assume that we’re going to have universal goodness thanks to proper architecture, design and execution, I thought it more reasonable to think about plugging the holes (sadly) and going from there.

At the end of the day, I wanted an IT/security professional to use the model like an “Annie Oakley Secret Decoder Ring”: rapidly assess an offering, map it to the specific model layers, understand what controls they or a vendor need to have in place, and map those controls, in turn, to compliance requirements.  This allows for a quick and accurate gap analysis, which can then feed into a risk assessment/analysis.
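
To make that concrete, here’s a minimal sketch of the lookup the “decoder ring” performs. The layer, control and regulation names below are illustrative stand-ins, not the model’s actual catalog:

```python
# A minimal sketch of the "decoder ring": map an offering to model
# layers, each layer to compensating controls, and each control to the
# regulations it helps satisfy. All names are illustrative stand-ins.

LAYER_CONTROLS = {
    "facilities":     {"physical access control"},
    "infrastructure": {"network segmentation", "firewalling"},
    "application":    {"authentication", "encryption in transit"},
}

CONTROL_REGULATIONS = {
    "firewalling":           {"PCI"},
    "encryption in transit": {"PCI", "HIPAA"},
    "authentication":        {"PCI", "HIPAA", "SOX"},
}

def gap_analysis(offering_layers, controls_in_place):
    """Return missing controls per layer and the regulations at risk."""
    gaps, exposed = {}, set()
    for layer in offering_layers:
        missing = LAYER_CONTROLS.get(layer, set()) - controls_in_place
        if missing:
            gaps[layer] = missing
            for control in missing:
                exposed |= CONTROL_REGULATIONS.get(control, set())
    return gaps, exposed

# Example: an offering spanning facilities + infrastructure where only
# physical access control is attested to.
print(gap_analysis({"facilities", "infrastructure"},
                   {"physical access control"}))
# ({'infrastructure': {'network segmentation', 'firewalling'}}, {'PCI'})
```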

We went through 5 versions in a relatively short period of time and arrived at a solid fundamental model based upon feedback from the target audience:

[Diagram: Cloud (Ontology/Taxonomy) Model, v1.5]

The model is CLEARLY not complete.  The next steps for improving it are:

  1. Reference other solution taxonomies to complete the rest of the examples and expand upon the various layers with key elements and offerings from vendors/solutions providers.  See here.
  2. Polish up the catalog of compensating controls
  3. Start mapping to various regulatory/compliance requirements
  4. Find a better way of interactively presenting this whole mess.

For my Frogs presentation, I presented the first stab at the example controls mapping and it seemed to make sense given the uptake/interest in the model. Here’s an example:
[Figure: Frogs: Cloud Model Aligned to Security Controls Model]

This still has a ways to go, but I’ve been able to present this to C-levels, bankers, technologists and lay people (with explanation) and it’s gone over well.

I look forward to making more progress on this shortly and would welcome the help, commentary, critique and collaboration.

I’ll start adding more definition to each of the layers so people can give feedback appropriately.

Thanks,

/Hoff

P.S. A couple of days ago I discovered that Kevin Jackson had published an extrapolation of the UCSB/IBM version titled “A Tactical Cloud Computing Ontology.”

Kevin’s “ontology” is at the 20,000 foot view compared to the original 30,000 foot UCSB/IBM model but is worth looking at.


Cloud Computing Taxonomy & Ontology :: Please Review

January 28th, 2009 36 comments

NOTE: Please see the continued discussion in the post titled “Update on the Cloud (Ontology/Taxonomy) Model…”

Updated: 3/28/09 v1.5

There have been some excellent discussions of late regarding how to classify and explain the relationships between the many Cloud Computing models floating about.

I was inspired by John Willis’ blog post this morning titled “Unified Ontology of Cloud Computing” in which he scraped together many ideas on the subject.
I’m building a number of presentations for discussing Cloud Security and I’ve also been working on how to show both the taxonomy and ontology of various Cloud components and models.  I think it’s really a blind mash-up of many of the things John points to, but the others I’ve seen don’t serve my needs completely.  My goal is to gain consensus on the model and then explore each layer and its security requirements and impacts on the model as a whole.
Here’s my first second third draft based on the awesome feedback I’ve received so far.
I’m not going to explain the layers/levels or groupings because I want people’s reactions and feedback to what they get from the diagram without color from me first.  There will likely be things that aren’t clear enough or even inaccuracies and missing elements.
If you could kindly give me your feedback on your first (unabashed) impressions, I’d really appreciate it.
Thanks!

NOTE: TypePad’s comment subsystem is having problems.  I’m going to close the comments until it’s resolved as the excellent (16 or so) comments are not showing up and I don’t want people adding comments using the old system… Please send me comments via email (choff @ packetfilter.com or via Twitter @beaker) in the meantime.  Thanks SO much.

The comments are working again.  I’ve had 30-40 comments via email/twitter, so if something you wanted to communicate isn’t addressed, fire away below in the comments!

Version 1.5 Diagram:

In v1.5 I highlighted the Integration/Middleware layer in a separate color, removed Coghead from the PaaS offering example and made a few other cosmetic alignment changes.

In v1.4 I added the API layer above ‘Applications’ in the SaaS grouping. I split out “data, metadata and content” as three separate elements and added structured/unstructured to the right.  I also separated the presentation layer into “modality and platform.”  Added some examples of layers to the very right.

The v1.4 diagram is here.
The v1.3 diagram is here.
The v1.2 diagram is here.
The v1.1 diagram is here.
The original v1.0 diagram is here.

When A FAIL Is A WIN – How NIST Got Dissed As The Point Is Missed

January 2nd, 2012 4 comments

Randy Bias (@randybias) wrote a very interesting blog titled “Cloud Computing Came To a Head In 2011,” sharing his year-end perspective on the emergence of Cloud Computing and, most interestingly, the “…gap between ‘enterprise clouds’ and ‘web-scale clouds’.”

Given that I very much agree with the dichotomy between “web-scale” and “enterprise” clouds and the very different sets of respective requirements and motivations, Randy’s post left me confused when he basically skewered the early works of NIST and their Definition of Cloud Computing:

This is why I think the NIST definition of cloud computing is such a huge FAIL. It’s focus is on the superficial aspects of ‘clouds’ without looking at the true underlying patterns of how large Internet businesses had to rethink the IT stack.  They essentially fall into the error of staying at my ‘Phase 2: VMs and VDCs’ (above).  No mention of CAP theorem, understanding the fallacies of distributed computing that lead to successful scale out architectures and strategies, the core socio-economics that are crucial to meeting certain capital and operational cost points, or really any acknowledgement of this very clear divide between clouds built using existing ‘enterprise computing’ techniques and those using emergent ‘cloud computing’ technologies and thinking.

Nope. No mention of CAP theorem or socio-economic drivers, yet strangely the context of what the document was designed to convey renders this rant moot.

Frankly, NIST’s early work did more to further the discussion of *WHAT* Cloud Computing meant than almost any person or group evangelizing Cloud Computing…especially to a world of users whose most difficult challenge is trying to understand the differences between traditional enterprise IT and Cloud Computing.

As I reacted to this point on Twitter, Simon Wardley (@swardley) commented in agreement with Randy’s assertions, but again what struck me as odd was the misplaced angst over the criterion of “success” vs. “FAIL” against which both Simon and Randy were measuring the NIST document:

Both Randy and Simon seem to be judging NIST’s efforts against their failure to extol the virtues (the “WHY” versus the “WHAT” of Cloud), as though NIST did a disservice by perpetuating aged concepts rooted in archaic enterprise practices rather than stretching boundaries, trailblazing and promoting the enlightened stance of “web-scale” cloud.

Well…

The thing is, as NIST stated in both the purpose and audience sections of their document, the “WHY” of Cloud was not the main intent (and frankly best left to those who make a living talking about it…).

From the NIST document preface:

1.2 Purpose and Scope

Cloud computing is an evolving paradigm. The NIST definition characterizes important aspects of cloud computing and is intended to serve as a means for broad comparisons of cloud services and deployment strategies, and to provide a baseline for discussion from what is cloud computing to how to best use cloud computing. The service and deployment models defined form a simple taxonomy that is not intended to prescribe or constrain any particular method of deployment, service delivery, or business operation.

1.3 Audience

The intended audience of this document is system planners, program managers, technologists, and others adopting cloud computing as consumers or providers of cloud services.

This was an early work (the initial draft was released in 2009, the final in late 2011), and when it was written, many people — Randy, Simon and myself included — were still finding the best route, words and methodology to verbalize the “WHY.”  And now it’s skewered as “mechanistic drivel.”*

At the time NIST was developing their document, I was working in parallel writing the “Architecture” chapter of the first edition of the Cloud Security Alliance’s Guidance for Cloud Computing.  I started out including my own definitional work in the document but later aligned to the NIST definitions because it was generally in line with mine and was well on the way to engendering a good deal of conversation around standard vocabulary.

This blog post summarized the work at the time (prior to the NIST inclusion).  I think you might find the author of the second comment on that post rather interesting, especially given how much of a FAIL this was all supposed to be… 🙂

It’s only now, with the clarity of hindsight, that it’s easier to take the “WHY” and utilize the “WHAT” (from NIST and others, especially practitioners like Randy) in a complementary manner, so we can talk less about “what and why” and more about “HOW.”

So while the NIST document wasn’t, isn’t and likely never will be “perfect,” and does not address every use case or even eloquently frame the “WHY,” it still serves as a very useful resource upon which many people can start a conversation regarding Cloud Computing.

It’s funny, really…the first tenet of “web-scale” cloud from AWS — the “Kings of Cloud” Randy speaks about constantly — is “PLAN FOR FAIL.”  So if the NIST document truly meets this seal of disapproval and is a FAIL, then I guess it’s a win ;p

Your opinion?

/Hoff

*N.B. I’m not suggesting that critiquing a document is somehow verboten or that NIST is somehow infallible or sacrosanct — far from it.  In fact, I’ve been quite critical and vocal in my feedback with regard to both this document and efforts like FedRAMP.  However, that criticism came during the documents’ construction and with the intent to make them better within the context for which they were designed, not from the rear view mirror.


FYI: New NIST Cloud Computing Reference Architecture

March 31st, 2011 No comments

In case you weren’t aware, NIST has a WIKI for collaboration on Cloud Computing.  You can find it here.

They also have a draft of their v1.0 Cloud Computing Reference Architecture, which builds upon the prior definitional work we’ve seen before and has pretty graphics.  You can find that here (dated 3/30/2011).

/Hoff


Dedicated AWS VPC Compute Instances – Strategically Defensive or Offensive?

March 28th, 2011 9 comments

Chugging right along on the feature-enhancement locomotive, following last week’s extension of the networking capabilities of their Virtual Private Cloud (VPC) offerings (see: AWS’ New Networking Capabilities – Sucking Less 😉), Amazon Web Services today announced the availability of dedicated compute instances (both on-demand and reserved) within a VPC:

Dedicated Instances are Amazon EC2 instances launched within your Amazon Virtual Private Cloud (Amazon VPC) that run on hardware dedicated to a single customer. Dedicated Instances let you take full advantage of the benefits of Amazon VPC and the AWS cloud – on-demand elastic provisioning, pay only for what you use, and a private, isolated virtual network, all while ensuring that your Amazon EC2 compute instances will be isolated at the hardware level.

That’s interesting, isn’t it?  I remember writing the post “Calling All Private Cloud Haters: Amazon Just Peed On Your Fire Hydrant…” and chuckling when AWS announced VPC back in 2009, in which I suggested that VPC:

  • Legitimized Private Cloud as a reasonable, needed, and prudent step toward Cloud adoption for enterprises,
  • Substantiated the value proposition of Private Cloud as a way of removing a barrier to Cloud entry for enterprises, and
  • Validated the ultimate vision toward hybrid Clouds and Inter-Cloud

That got some hackles up.

So this morning, people immediately started squawking on Twitter about how this looked remarkably like (or didn’t) private cloud or dedicated hosting.  This is why, about two years ago, I generated this taxonomy that pointed out the gray area of “private cloud” — the notion of who manages it, who owns the infrastructure, where it’s located and who it’s consumed by:

I did a lot of this work well before I utilized it in the original Cloud Security Alliance Guidance architecture chapter I wrote, but that experience refined what I meant a little more clearly, and this version was produced PRIOR to the NIST guidance, which is why you don’t see mention of “community cloud”:

  1. Private
    Private Clouds are provided by an organization or their designated service provider and offer a single-tenant (dedicated) operating environment with all the benefits and functionality of elasticity* and the accountability/utility model of Cloud.  The physical infrastructure may be owned by and/or physically located in the organization’s datacenters (on-premise) or that of a designated service provider (off-premise) with an extension of management and security control planes controlled by the organization or designated service provider respectively.
    The consumers of the service are considered “trusted.”  Trusted consumers of service are those who are considered part of an organization’s legal/contractual umbrella including employees, contractors, & business partners.  Untrusted consumers are those that may be authorized to consume some/all services but are not logical extensions of the organization.
  2. Public
    Public Clouds are provided by a designated service provider and may offer either a single-tenant (dedicated) or multi-tenant (shared) operating environment with all the benefits and functionality of elasticity and the  accountability/utility model of Cloud.
    The physical infrastructure is generally owned by and managed by the designated service provider and located within the provider’s datacenters (off-premise.)  Consumers of Public Cloud services are considered to be untrusted.
  3. Managed
    Managed Clouds are provided by a designated service provider and may offer either a single-tenant (dedicated) or multi-tenant (shared) operating environment with all the benefits and functionality of elasticity and the accountability/utility model of Cloud.  The physical infrastructure is owned by and/or physically located in the organization’s datacenters with an extension of management and security control planes controlled by the designated service provider.  Consumers of Managed Clouds may be trusted or untrusted.
  4. Hybrid
    Hybrid Clouds are a combination of public and private cloud offerings that allow for transitive information exchange and possibly application compatibility and portability across disparate Cloud service offerings and providers utilizing standard or proprietary methodologies regardless of ownership or location.  This model provides for an extension of management and security control planes.  Consumers of Hybrid Clouds may be trusted or untrusted.

* Note: the benefits of elasticity don’t imply massive scale, which in many cases is not a relevant requirement for an enterprise.  Also, I ultimately deprecated the “managed” designation because it was a variation on a theme, but you can tell that the distinction I was going for between private and hybrid is the notion of OR versus AND designations in the various criteria.

AWS’ dedicated VPC options now give you another ‘OR’ option when thinking about who manages and owns the infrastructure your workloads run on, and, more importantly, where it is located.  More specifically, the notion of a ‘virtual’ cloud becomes less and less important as the hybrid interconnectedness of resources starts to make more sense, regardless of whether you use overlay solutions like CloudSwitch, “integrated” solutions from vendors like VMware or Citrix, or offerings from AWS.  In the long term, the answer will probably be “D) all of the above.”

Providing dedicated compute atop a hypervisor for which you are the only tenant will be attractive to many enterprises who have trouble coming to terms with sharing memory/CPU resources with other customers.  This dedicated functionality costs a pretty penny: $87,600 a year, which works out to a flat fee of $10 per hour, per region, across a full year (10 × 8,760 hours = $87,600).  As Simon Wardley pointed out, this has an interesting effect inasmuch as it puts a price tag on isolation.

Here’s the interesting thing that goes to the title of this post:

Is this a capability that AWS really expects to be utilized as they further blur the lines between public, private and hybrid cloud models OR is it a defensive strategy hinged on the exorbitant costs to further push enterprises into shared compute and overlay security models?

Specifically, one wonders: is this a strategically defensive or an offensive move?

A single tenant atop a hypervisor atop dedicated hardware — that will go a long way toward addressing one concern: noisy (and nosy) neighbors.

Now, keep in mind that if an enterprise’s threat modeling and risk management frameworks are reasonably rational, they’ll realize that this is compute/memory isolation only.  Clearly the network and storage infrastructure is still shared, but the “state of the art” in today’s cloud of overlay encryption (file systems and SSL/IPSec VPNs) will likely address those issues.  Shared underlying cloud management/provisioning/orchestration is still an interesting area of concern.

So this will be an interesting play for AWS. Whether they’re using this to take a hammer to existing private cloud models or simply to add another dimension to their service offerings (logical, either way), I think in many cases enterprises will pay this tax to further satisfy compliance requirements by removing the compute multi-tenancy boogeyman.

/Hoff


Dear Santa: All I Want For Christmas On My Amazon Wishlist Is a Straight Answer…

October 31st, 2009 4 comments

A couple of weeks ago amidst another interesting Amazon Web Services announcement featuring the newly-arrived Relational Database Service, Werner Vogels (Amazon CTO) jokingly retweeted a remark that someone made suggesting he was like “…Santa for nerds.”

[Image: “All I want for Christmas is my elastic IP...”]

So, now that I have Werner following me on Twitter and a confirmed mailing address (clearly the North Pole) I thought I’d make my Christmas wish early this year.  I’ve put a lot of thought into this.

Just when I had settled on a shiny new gadget from the bookstore side of the house, I saw Amazon’s response to Eran Tromer’s (et al) research on Cloud Cartography featured in this Computerworld article written by my old friend Jaikumar Vijayan titled “Amazon downplays report highlighting vulnerabilities in its cloud service.”

I feature Eran and his team’s work in my Cloudifornication presentation.  You can read more about it on Craig’s blog here.

I quickly cast aside my yuletide treasure list and instead decided to ask Santa (Werner/AWS) for a most important present: a straight answer from AWS that isn’t delivered by a PR spokeshole, but that instead speaks openly, transparently and in an engaging fashion with customers and the security community.

Here’s what torqued me (emphasis is mine):

In response, Amazon spokeswoman Kay Kinton said today that the report describes cloud cartography methods that could increase an attacker’s probability of launching a rogue virtual machine (VM) on the same physical server as another specific target VM.

What remains unclear, however, is how exactly attackers would be able to use that presence on the same physical server to then attack the target VM, Kinton told Computerworld via e-mail.

The research paper itself described how potential attackers could use so-called “side-channel” attacks to try and steal information from a target VM. The researchers had argued that a VM sitting on the same physical server as a target VM could monitor shared resources on the server to make highly educated inferences about the target VM.

By monitoring CPU and memory cache utilization on the shared server, an attacker could determine periods of high-activity on the target servers, estimate high-traffic rates and even launch keystroke timing attacks to gather passwords and other data from the target server, the researchers had noted.

Such side-channel attacks have proved highly successful in non-cloud contexts, so there’s no reason why they shouldn’t work in a cloud environment, the researchers postulated.

However, Kinton characterized the attack described in the report as “hypothetical,” and one that would be “significantly more difficult in practice.”

“The side channel techniques presented are based on testing results from a carefully controlled lab environment with configurations that do not match the actual Amazon EC2 environment,” Kinton said.

“As the researchers point out, there are a number of factors that would make such an attack significantly more difficult in practice,” she said.

So while the Amazon spokesperson admits the vulnerability/capability exists, rather than rationally address that issue, thank the researchers for pointing this out and provide customers some level of detail regarding how this vulnerability is mitigated, we get handwaving that attempts to have us not focus on the vulnerability, but rather the difficulty of a hypothetical exploit.  That example isn’t the point of the paper. The fact that I could deliver a targeted attack is.

Earth to Amazon: this sort of thing doesn’t work. It’s a lousy tactic.  It simply says that either you think we’re all stupid or you’re suffering from a very bad case of incident handling immaturity. Take a look around you, there are plenty of companies doing this right. You’re not one of them.  Consistently.

Tromer and crew gave a single example of how this vulnerability might be exploited that was latched on to by the AWS spokesperson as a way of defusing the seriousness of the underlying vulnerability by downplaying this sample exploit.  There are potentially dozens of avenues to be explored here.  Craig talked about many of them in his blog (above.)  What we got instead was this:

At the same time, Amazon takes all reports of vulnerabilities in its cloud infrastructure very seriously, she said. The company will continue to investigate potential exploits thoroughly and continue to develop features bolster security for users of its cloud service, she said.

Amazon Web Services has already rolled out safeguards that prevent potential attackers from using the cartography techniques described in the paper, Kinton said without offering any details.

She also pointed to the recently launched Amazon Web Service Multi-Factor Authentication (AWS MFA) as another example of the company’s continuing effort to bolster cloud security. AWS MFA is designed to provide an extra layer access control to a customer’s Web services account, Kinton said.

Did you catch “…without offering any details” or were you simply overwhelmed by the fact that you can use a token to authenticate your single-key driven AWS console instead?

I’m not interested in getting into a “full disclosure” battle here, but being dismissive, not providing clear-cut answers and being evasive without regard for transparency about issues like this or the DDoS attacks we saw with Bitbucket, etc. are going to backfire.  I posted about this before in previous blogs here and here.

If you want to be taken seriously by large enterprises and government agencies that require real answers to issues like this, you can either engage with the security community or ignore us and be focused on by it (and me) until you decide that it’s a much better idea to do the former.  You’ll gain much more credibility and an eagerness to work with you instead of against you if you choose to use the force wisely 😉

Until then, may I suggest this?  I found it in the Amazon.com bookstore:

[Image: book cover]

…you can download it to your Kindle in under a minute.

/Hoff

There’s A Difference Between Application/OS Multitenancy and Data(base) Multitenancy

August 8th, 2009 2 comments

There I was in the middle of a half moon yoga pose when the thought hit…

I was on a Telepresence the other day with @jamesurquhart and a couple of other colleagues and we were discussing the notion of Cloud services and multitenancy again.

I brought up a well-known Cloud provider who serves thousands (if not tens of thousands) of unique customers.  I argued that based upon what I was told by system architects, the service was never really designed with multitenancy in mind.  James argued to the contrary maintaining that he has had numerous discussions with the same architects and was convinced my point was invalid.

This got me thinking as to how, if we were talking to the same architects, we came away with a diametrically opposed understanding.

It should be noted that this vendor does not use server/OS virtualization in their offering and since multitenancy is often (improperly) associated directly with server/OS virtualization, we recognized that this wasn’t our disconnect.

Then it dawned on me (well, today, during yoga).  I was talking about the notion of application multitenancy and James was talking about the database/datastore aspects of multitenancy!  The front-end versus the back-end versus the entire stack…

So of course from James’ perspective, the architects definitely built the database, schemas and table structures to support isolated, discrete and “secure” multitenancy.

However, from my perspective, the application itself — a single application — isn’t “multitenant” so much as it is multi-user.  The application provides a common programmatic entry point (however customized in presentation) to the specific dataset to which James was referring.

Aha!  Seems simple and somewhat silly, but it never occurred to me that we were just thinking from different ends of the stack; this time I was top-down and James was bottom-up.  Funny, as James is the app guy and I am the infrastructure bobblehead.  Stupid siloed thinking on my part distracted me from what I know is a larger system-architecture artifact that would have been easy to spot had I only taken the goggles off.

This is important because when we apply Cloud definitions to SaaS providers, wherein the required characteristics “require” multitenancy (see my post here), many if not most SaaS offerings fail to meet the criterion.  If we think along the lines of not just qualifying the ‘application’ but expanding ‘software’ in SaaS to more broadly include the entire stack, including the database, it passes the sniff test.

I have to tell you that this was, despite my own taxonomy diagrams which point out this very fact, a block in my vision which was causing me angst.

So, remember, when we’re talking about SaaS, just because the application front-end may not smell of multitenancy, the underlying platform and database probably will — especially if it’s going to scale to elastic cloud levels.
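
If a sketch helps make the front-end/back-end distinction concrete, here’s a minimal one (the schema is hypothetical, not any particular vendor’s design): one shared application entry point serving every customer, with tenancy enforced in the datastore by keying every row and every query to a tenant.

```python
import sqlite3

# One application and one database serve every customer (a multi-user
# front-end), but the isolation lives in the datastore: every row and
# every query is scoped to a tenant_id.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE records (tenant_id TEXT, payload TEXT)")
db.execute("INSERT INTO records VALUES ('acme', 'acme-only data')")
db.execute("INSERT INTO records VALUES ('initech', 'initech-only data')")

def fetch_records(tenant_id):
    # The common programmatic entry point: the same code path for all
    # customers, returning only the calling tenant's rows.
    return db.execute(
        "SELECT payload FROM records WHERE tenant_id = ?",
        (tenant_id,),
    ).fetchall()

print(fetch_records("acme"))     # [('acme-only data',)]
print(fetch_records("initech"))  # [('initech-only data',)]
```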

Silly little lightbulbs go off in the most interesting of times.

/Hoff

Cloud Computing [Security] Architectural Framework

July 19th, 2009 3 comments

For those of you who are not in the security space and may not have read the Cloud Security Alliance’s “Guidance for Critical Areas of Focus,” you may have missed the “Cloud Architectural Framework” section I wrote as a contribution.

We are working on improving the entire guide, but I thought I would re-publish the Cloud Architectural Framework section and solicit comments here as well as “set it free” as a stand-alone reference document.

Please keep in mind, I wrote this before many of the other papers, such as NIST’s, were officially published, so given the normal churn in the blogosphere and the general Cloud space, some of the terms and definitions may have since settled down.

I hope it proves useful, even in its current form (I have many updates to make as part of the v2 Guidance document.)

/Hoff


Problem Statement

Cloud Computing (“Cloud”) is a catch-all term that describes the evolutionary development of many existing technologies and approaches to computing that at its most basic, separates application and information resources from the underlying infrastructure and mechanisms used to deliver them with the addition of elastic scale and the utility model of allocation.  Cloud computing enhances collaboration, agility, scale, availability and provides the potential for cost reduction through optimized and efficient computing.

More specifically, Cloud describes the use of a collection of distributed services, applications, information and infrastructure comprised of pools of compute, network, information and storage resources.  These components can be rapidly orchestrated, provisioned, implemented and decommissioned using an on-demand utility-like model of allocation and consumption.  Cloud services are most often, but not always, utilized in conjunction with and enabled by virtualization technologies to provide dynamic integration, provisioning, orchestration, mobility and scale.

While the very definition of Cloud suggests the decoupling of resources from the physical affinity to and location of the infrastructure that delivers them, many descriptions of Cloud go to one extreme or another by either exaggerating or artificially limiting the many attributes of Cloud.  This is often purposely done in an attempt to inflate or marginalize its scope.  Some examples include the suggestions that for a service to be Cloud-based, that the Internet must be used as a transport, a web browser must be used as an access modality or that the resources are always shared in a multi-tenant environment outside of the “perimeter.”  What is missing in these definitions is context.

From an architectural perspective given this abstracted evolution of technology, there is much confusion surrounding how Cloud is both similar and differs from existing models and how these similarities and differences might impact the organizational, operational and technological approaches to Cloud adoption as it relates to traditional network and information security practices.  There are those who say Cloud is a novel sea-change and technical revolution while others suggest it is a natural evolution and coalescence of technology, economy, and culture.  The truth is somewhere in between.

There are many models available today which attempt to address Cloud from the perspective of academicians, architects, engineers, developers, managers and even consumers. We will focus on a model and methodology that is specifically tailored to the unique perspectives of IT network and security professionals.

The keys to understanding how Cloud architecture impacts security architecture are a common and concise lexicon coupled with a consistent taxonomy of offerings by which Cloud services and architecture can be deconstructed, mapped to a model of compensating security and operational controls, risk assessment and management frameworks and in turn, compliance standards.

Setting the Context: Cloud Computing Defined

Understanding how Cloud Computing architecture impacts security architecture requires an understanding of Cloud’s principal characteristics, the manner in which cloud providers deliver and deploy services, how they are consumed, and ultimately how they need to be safeguarded.

The scope of this area of focus is not to define the specific security benefits or challenges presented by Cloud Computing as these are covered in depth in the other 14 domains of concern:

  • Information lifecycle management
  • Governance and Enterprise Risk Management
  • Compliance & Audit
  • General Legal
  • eDiscovery
  • Encryption and Key Management
  • Identity and Access Management
  • Storage
  • Virtualization
  • Application Security
  • Portability & Interoperability
  • Data Center Operations Management
  • Incident Response, Notification, Remediation
  • “Traditional” Security impact (business continuity, disaster recovery, physical security)

We will discuss the various approaches and derivative offerings of Cloud and how they impact security from an architectural perspective using an in-process model developed as a community effort associated with the Cloud Security Alliance.

Principal Characteristics of Cloud Computing

Cloud services are based upon five principal characteristics that demonstrate their relation to, and differences from, traditional computing approaches:

  1. Abstraction of Infrastructure
    The compute, network and storage infrastructure resources are abstracted from the application and information resources as a function of service delivery. Where, and by what physical resource, data is processed, transmitted and stored becomes largely opaque from the perspective of an application or service’s ability to deliver it.  Infrastructure resources are generally pooled in order to deliver service regardless of the tenancy model employed – shared or dedicated.  This abstraction is generally provided by means of high levels of virtualization at the chipset and operating system levels or enabled at the higher levels by heavily customized filesystems, operating systems or communication protocols.
  2. Resource Democratization
    The abstraction of infrastructure yields the notion of resource democratization – whether infrastructure, applications, or information – and provides the capability for pooled resources to be made available and accessible to anyone or anything authorized to utilize them using standardized methods for doing so.
  3. Services Oriented Architecture
    As the abstraction of infrastructure from application and information yields well-defined and loosely-coupled resource democratization, the notion of utilizing these components in whole or part, alone or with integration, provides a services oriented architecture where resources may be accessed and utilized in a standard way.  In this model, the focus is on the delivery of service and not the management of infrastructure.
  4. Elasticity/Dynamism
    The on-demand model of Cloud provisioning coupled with high levels of automation, virtualization, and ubiquitous, reliable and high-speed connectivity provides for the capability to rapidly expand or contract resource allocation to service definition and requirements using a self-service model that scales to as-needed capacity.  Since resources are pooled, better utilization and service levels can be achieved.
  5. Utility Model Of Consumption & Allocation
    The abstracted, democratized, service-oriented and elastic nature of Cloud combined with tight automation, orchestration, provisioning and self-service then allows for dynamic allocation of resources based on any number of governing input parameters.  Given the visibility at an atomic level, the consumption of resources can then be used to provide an “all-you-can-eat” but “pay-by-the-bite” metered utility-cost and usage model. This facilitates greater cost efficiencies and scale as well as manageable and predictive costs.

Cloud Service Delivery Models

Three archetypal models and the derivative combinations thereof generally describe cloud service delivery.  The three individual models are often referred to as the “SPI Model,” where “SPI” refers to Software, Platform and Infrastructure (as a service) respectively and are defined thusly[1]:

  1. Software as a Service (SaaS)
    The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure and accessible from various client devices through a thin client interface such as a Web browser (e.g., web-based email). The consumer does not manage or control the underlying cloud infrastructure, network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.
  2. Platform as a Service (PaaS)
    The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created applications using programming languages and tools supported by the provider (e.g., java, python, .Net). The consumer does not manage or control the underlying cloud infrastructure, network, servers, operating systems, or storage, but the consumer has control over the deployed applications and possibly application hosting environment configurations.
  3. Infrastructure as a Service (IaaS)
    The capability provided to the consumer is to rent processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly select networking components (e.g., firewalls, load balancers).

Understanding the relationship and dependencies between these models is critical.  IaaS is the foundation of all Cloud services with PaaS building upon IaaS, and SaaS – in turn – building upon PaaS.  We will cover this in more detail later in the document.

The OpenCrowd Cloud Solutions Taxonomy shown in Figure 1 provides an excellent reference that demonstrates the swelling ranks of solutions available today in each of the models above.

Narrowing the scope of specific capabilities and functionality within each of the *aaS offerings, or employing the functional coupling of services and capabilities across them, may yield derivative classifications.  For example, “Storage as a Service” is a specific sub-offering within the IaaS “family,” and “Database as a Service” may be seen as a derivative of PaaS, etc.

Each of these models yields significant trade-offs in the areas of integrated features, openness (extensibility) and security.  We will address these later in the document.

Figure 1 - The OpenCrowd Cloud Taxonomy

Cloud Service Deployment and Consumption Modalities

Regardless of the delivery model utilized (SaaS, PaaS, IaaS,) there are four primary ways in which Cloud services are deployed and are characterized:

  1. Private
    Private Clouds are provided by an organization or their designated service provider and offer a single-tenant (dedicated) operating environment with all the benefits and functionality of elasticity and the accountability/utility model of Cloud.  The physical infrastructure may be owned by and/or physically located in the organization’s datacenters (on-premise) or that of a designated service provider (off-premise) with an extension of management and security control planes controlled by the organization or designated service provider respectively.

    The consumers of the service are considered “trusted.”  Trusted consumers of service are those who are considered part of an organization’s legal/contractual umbrella including employees, contractors, & business partners.  Untrusted consumers are those that may be authorized to consume some/all services but are not logical extensions of the organization.

  2. Public
    Public Clouds are provided by a designated service provider and may offer either a single-tenant (dedicated) or multi-tenant (shared) operating environment with all the benefits and functionality of elasticity and the  accountability/utility model of Cloud.
    The physical infrastructure is generally owned by and managed by the designated service provider and located within the provider’s datacenters (off-premise.)  Consumers of Public Cloud services are considered to be untrusted.
  3. Managed
    Managed Clouds are provided by a designated service provider and may offer either a single-tenant (dedicated) or multi-tenant (shared) operating environment with all the benefits and functionality of elasticity and the accountability/utility model of Cloud.  The physical infrastructure is owned by and/or physically located in the organization’s datacenters with an extension of management and security control planes controlled by the designated service provider.  Consumers of Managed Clouds may be trusted or untrusted.

  4. Hybrid
    Hybrid Clouds are a combination of public and private cloud offerings that allow for transitive information exchange and possibly application compatibility and portability across disparate Cloud service offerings and providers utilizing standard or proprietary methodologies regardless of ownership or location.  This model provides for an extension of management and security control planes.  Consumers of Hybrid Clouds may be trusted or untrusted.

The difficulty in using a single label to describe an entire service/offering is that it actually attempts to describe the following elements:

  • Who manages it
  • Who owns it
  • Where it’s located
  • Who has access to it
  • How it’s accessed

The notion of Public, Private, Managed and Hybrid when describing Cloud services really denotes the attribution of management and the availability of service to specific consumers of the service.

It is important to note that the characterizations that describe how Cloud services are deployed are often used interchangeably with the notion of where they are provided; as such, you may often see public and private clouds referred to as “external” or “internal” clouds.  This can be very confusing.

The manner in which Cloud services are offered and ultimately consumed is then often described relative to the location of the asset/resource/service owner’s management or security “perimeter” which is usually defined by the presence of a “firewall.”

While it is important to understand where within the context of an enforceable security boundary an asset lives, the problem with interchanging or substituting these definitions is that the notion of a well-demarcated perimeter separating the “outside” from the “inside” is an anachronistic concept.

It is clear that the impact of the re-perimeterization and the erosion of trust boundaries we have seen in the enterprise is amplified and accelerated due to Cloud.  This is thanks to ubiquitous connectivity provided to devices, the amorphous nature of information interchange, the ineffectiveness of traditional static security controls which cannot deal with the dynamic nature of Cloud services and the mobility and velocity at which Cloud services operate.

Thus the deployment and consumption modalities of Cloud should be thought of not only within the construct of “internal” or “external” as it relates to asset/resource/service physical location, but also by whom they are being consumed and who is responsible for their governance, security and compliance to policies and standards.

This is not to suggest that the on- or off-premise location of an asset/resource/information does not affect the security and risk posture of an organization, because it does, but it also depends upon the following:

  • The types of application/information/services being managed
  • Who manages them and how
  • How controls are integrated
  • Regulatory issues

Table 1 illustrates the summarization of these points:

Table 1 - Cloud Computing Service Deployment

As an example, one could classify a service as IaaS/Public/External (Amazon’s AWS/EC2 offering is a good example) as well as SaaS/Managed/Internal (an internally-hosted, but third party-managed custom SaaS stack using Eucalyptus, as an example.)
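
As a minimal sketch of that classification exercise, the label becomes a tuple of attributes rather than a single word (the field values are illustrative):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CloudService:
    name: str
    delivery: str    # SaaS | PaaS | IaaS
    deployment: str  # Private | Public | Managed | Hybrid
    location: str    # Internal (on-premise) | External (off-premise)
    managed_by: str  # organization | service provider | third party
    consumers: str   # trusted | untrusted

# The two examples from the text above:
ec2 = CloudService("Amazon AWS/EC2", "IaaS", "Public", "External",
                   managed_by="service provider", consumers="untrusted")
custom_stack = CloudService("Eucalyptus-based custom SaaS", "SaaS",
                            "Managed", "Internal",
                            managed_by="third party", consumers="trusted")
```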

Thus when assessing the impact a particular Cloud service may have on one’s security posture and overall security architecture, it is necessary to classify the asset/resource/service within the context of not only its location but also its criticality and business impact as it relates to management and security.  This means that an appropriate level of risk assessment is performed prior to entrusting it to the vagaries of “The Cloud.”

Which Cloud service deployment and consumption model is used depends upon the nature of the service and the requirements that govern it.  As we demonstrate later in this document, there are significant trade-offs in each of the models in terms of integrated features, extensibility, cost, administrative involvement and security.

Figure 2 - Cloud Reference Model

It is therefore important to be able to classify a Cloud service quickly and accurately and compare it to a reference model that is familiar to an IT networking or security professional.

A reference model such as that shown in Figure 2 allows one to visualize the boundaries of the *aaS definitions, how and where a particular Cloud service fits, and how the discrete *aaS models align and interact with one another.  This is presented in an OSI-like layered structure with which security and network professionals should be familiar.

Considering each of the *aaS models as a self-contained “solution stack” of integrated functionality with IaaS providing the foundation, it becomes clear that the other two models – PaaS and SaaS – in turn build upon it.

Each of the abstract layers in the reference model represents elements which when combined, comprise the services offerings in each class.

IaaS includes the entire infrastructure resource stack, from the facilities to the hardware platforms that reside in them. Further, IaaS incorporates the capability to abstract resources (or not) as well as deliver physical and logical connectivity to those resources.  Ultimately, IaaS provides a set of APIs which allow management of, and other forms of interaction with, the infrastructure by the consumer of the service.

Amazon’s AWS Elastic Compute Cloud (EC2) is a good example of an IaaS offering.

PaaS sits atop IaaS and adds an additional layer of integration with application development frameworks, middleware capabilities and functions such as database, messaging, and queuing that allows developers to build applications which are coupled to the platform and whose programming languages and tools are supported by the stack.  Google’s AppEngine is a good example of PaaS.

SaaS in turn is built upon the underlying IaaS and PaaS stacks and provides a self-contained operating environment used to deliver the entire user experience including the content, how it is presented, the application(s) and management capabilities.

SalesForce.com is a good example of SaaS.

It should therefore be clear that there are significant trade-offs in each of the models in terms of features, openness (extensibility) and security.

Figure 3 - Trade-offs Across *aaS Offerings

Figure 3 demonstrates the interplay and trade-offs between the three *aaS models:

  • Generally, SaaS provides a large amount of integrated features built directly into the offering with the least amount of extensibility and a relatively high level of security.
  • PaaS generally offers less integrated features since it is designed to enable developers to build their own applications on top of the platform and is therefore more extensible than SaaS by nature, but due to this balance trades off on security features and capabilities.
  • IaaS provides few, if any, application-like features, provides for enormous extensibility but generally less security capabilities and functionality beyond protecting the infrastructure itself since it expects operating systems, applications and content to be managed and secured by the consumer.

The key takeaway from a security architecture perspective in comparing these models is that the lower down the stack the Cloud service provider stops, the more security capabilities and management the consumer is responsible for implementing and managing themselves.
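
One way to make that takeaway mechanical is to enumerate the stack once and mark where the provider’s responsibility stops; everything above that line belongs to the consumer. A minimal sketch follows (the layer list and cut points are simplifications of the reference model, not verbatim from it):

```python
# Bottom-to-top simplification of the reference model's solution stack.
STACK = ["facilities", "hardware", "abstraction", "connectivity",
         "middleware", "applications", "data", "presentation"]

# Where each delivery model's provider responsibility stops (inclusive).
PROVIDER_STOPS_AT = {"IaaS": "connectivity",
                     "PaaS": "middleware",
                     "SaaS": "presentation"}

def consumer_responsibilities(model):
    """Layers the consumer must implement and manage security for."""
    top = STACK.index(PROVIDER_STOPS_AT[model])
    return STACK[top + 1:]

print(consumer_responsibilities("IaaS"))
# -> ['middleware', 'applications', 'data', 'presentation']
print(consumer_responsibilities("SaaS"))
# -> [] (the provider owns the whole stack)
```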

This is critical because once a Cloud service can be classified and referenced against the model, mapping the security architecture, business and regulatory or other compliance requirements against it becomes a gap-analysis exercise to determine the general “security” posture of a service and how it relates to the assurance and protection requirements of an asset.

Figure 4 below shows an example of how a Cloud service can be mapped against a catalog of compensating controls to determine which controls exist and which do not, whether provided by the consumer, the Cloud service provider or another third party.

Figure 4 - Mapping the Cloud Model to the Security Model

Once this gap analysis is complete, as governed by the requirements of any regulatory or other compliance mandates, it becomes much easier to determine what needs to be fed back into a risk assessment framework and to decide how the gaps, and ultimately the risk, should be addressed: accept, transfer, mitigate or ignore.
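
As a sketch of that feedback step (the control names and treatment rules are invented for illustration): take the gaps a mandate leaves open and assign each one a disposition.

```python
# Illustrative compliance catalog; real mandates are far richer.
REQUIRED = {"PCI": {"encryption at rest", "access logging", "firewalling"}}

def disposition(gaps, can_mitigate, can_transfer):
    """Assign each missing control a risk treatment."""
    plan = {}
    for control in sorted(gaps):
        if control in can_mitigate:
            plan[control] = "mitigate"  # deploy a compensating control
        elif control in can_transfer:
            plan[control] = "transfer"  # contract, third party, insurance
        else:
            plan[control] = "accept"    # documented residual risk
    return plan

provided = {"firewalling"}  # e.g., supplied by the Cloud provider
gaps = REQUIRED["PCI"] - provided
print(disposition(gaps,
                  can_mitigate={"encryption at rest"},
                  can_transfer={"access logging"}))
# {'access logging': 'transfer', 'encryption at rest': 'mitigate'}
```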

Conclusion

Understanding how architecture, technology, process and human capital requirements change or remain the same when deploying Cloud Computing services is critical.  Without a clear understanding of the higher-level architectural implications of Cloud services, it is impossible to address more detailed issues in a rational way.

The keys to understanding how Cloud architecture impacts security architecture are a common and concise lexicon coupled with a consistent taxonomy of offerings by which Cloud services and architecture can be deconstructed, mapped to a model of compensating security and operational controls, risk assessment and management frameworks and in turn, compliance standards.


[1] Credit: Peter M. Mell, NIST

Dear Mr. Schneier, If Cloud Is Nothing New, Why Are You Talking So Much About It?

June 3rd, 2009 13 comments


Update: Please see this post if you’re wondering why I edited this piece.

I read a recent story in the Guardian from Bruce Schneier titled “Be Careful When You Come To Put Your Trust In the Clouds” in which he suggests that Cloud Computing is “…nothing new.”

Fundamentally it’s hard to argue with that title, as clearly we’ve got issues with security and trust models as they relate to Cloud Computing, but the byline seems to be at odds with Schneier’s ever-grumpy dismissal of Cloud Computing in the first place.  We need transparency and trust: got it.

Many of the things Schneier says make perfect sense whilst others just make me scratch my head in the abstract.  Let’s look at a couple of them:

This year’s overhyped IT concept is cloud computing. Also called software as a service (Saas), cloud computing is when you run software over the internet and access it via a browser. The salesforce.com customer management software is an example of this. So is Google Docs. If you believe the hype, cloud computing is the future.

Clearly there is a lot of hype around Cloud Computing, but I believe it’s important — especially as someone who spends a lot of time educating and evangelizing — that people like myself and Schneier effectively separate the hype from the hope and try and paint a clearer picture of things.

To that point, Schneier does his audience a disservice by dumbing down Cloud Computing to nothing more than outsourcing via SaaS.  Throwing the baby out with the rainwater seems a little odd to me and while it’s important to relate to one’s audience, I keep sensing a strange cognitive dissonance whilst reading Schneier’s opining on Cloud.

Firstly, and as I’ve said many times, Cloud Computing is more than just Software as a Service (SaaS.)  SaaS is clearly the more mature and visible set of offerings in the evolving Cloud Computing taxonomy today, but one could argue that players like Amazon with their Infrastructure as a Service (IaaS) or even the aforementioned Google and Salesforce.com with the Platform as a Service (PaaS) offerings might take umbrage with Schneier’s suggestion that Cloud is simply some “…software over the internet” accessed “…via a browser.”

Overlooking IaaS and PaaS is clearly a huge miss here and it calls into question the point Schneier makes when he says:

But, hype aside, cloud computing is nothing new. It’s the modern version of the timesharing model from the 1960s, which was eventually killed by the rise of the personal computer. It’s what Hotmail and Gmail have been doing all these years, and it’s social networking sites, remote backup companies, and remote email filtering companies such as MessageLabs. Any IT outsourcing – network infrastructure, security monitoring, remote hosting – is a form of cloud computing.

The old timesharing model arose because computers were expensive and hard to maintain. Modern computers and networks are drastically cheaper, but they’re still hard to maintain. As networks have become faster, it is again easier to have someone else do the hard work. Computing has become more of a utility; users are more concerned with results than technical details, so the tech fades into the background.

<sigh> Welcome to the evolution of technology and disruptive innovation.  What’s the point?

Fundamentally, as we look beyond speeds and feeds, Cloud Computing — at all layers and offering types — is driving huge headway and innovation in the evolution of automation, autonomics and the applied theories of dealing with massive scale in compute, network and storage realms.  Sure, the underlying problems — and even some of the approaches — aren’t new in theory, but they are in practice.  The end result may very well be that a consumer of service may not see elements that are new technologically as they are abstracted, but the economic, cultural, business and operational differences are startling.

If we look at what makes up Cloud Computing, the five elements I always point to are:

[Image: the five key ingredients of Cloud Computing – abstraction of infrastructure, resource democratization, services-oriented architecture, elasticity/dynamism, and the utility model of consumption & allocation]

Certainly the first three are present today — and have been for some while — in many different offerings.  However, combining the last two: on-demand, self-service scale and dynamism with new economic models of consumption and allocation are quite different, especially when doing so at extreme levels of scale with multi-tenancy.

So let’s get to the meat of the matter: security and trust.

But what about security? Isn’t it more dangerous to have your email on Hotmail’s servers, your spreadsheets on Google’s, your personal conversations on Facebook’s, and your company’s sales prospects on salesforce.com’s? Well, yes and no.

IT security is about trust. You have to trust your CPU manufacturer, your hardware, operating system and software vendors – and your ISP. Any one of these can undermine your security: crash your systems, corrupt data, allow an attacker to get access to systems. We’ve spent decades dealing with worms and rootkits that target software vulnerabilities. We’ve worried about infected chips. But in the end, we have no choice but to blindly trust the security of the IT providers we use.

Saas moves the trust boundary out one step further – you now have to also trust your software service vendors – but it doesn’t fundamentally change anything. It’s just another vendor we need to trust.

Fair enough.  So let’s chalk one up here to “Cloud is nothing new — we still have to put our faith and trust in someone else.”  Got it.  However, by again excluding the notion of PaaS and IaaS, Bruce fails to recognize the differences in both responsibility and accountability that these differing models bring; limiting Cloud to SaaS, while simple for a cute argument, does not a complete case make:

[Diagram: transferring responsibility vs. retaining accountability across the *aaS models]

To what level you are required to, and/or feel comfortable, transferring responsibility depends upon the provider and the deployment model; the risks associated with an IaaS-based service can be radically different from those of one from a SaaS vendor. With SaaS, security can be thought of from a monolithic perspective — that of the provider; they are responsible for it.  In the case of PaaS and IaaS, the trade-offs become more apparent, and you’ll find that this “outsourcing” of responsibility is diminished whilst the mantle of accountability is not.  This is pretty important if you want to be generic in your definition of “Cloud.”

Here’s where I see Bruce going off the rails from his “Cloud is nothing new” rant, much in the same way I’d expect he would suggest that virtualization is nothing new, either:

There is one critical difference. When a computer is within your network, you can protect it with other security systems such as firewalls and IDSs. You can build a resilient system that works even if those vendors you have to trust may not be as trustworthy as you like. With any outsourcing model, whether it be cloud computing or something else, you can’t. You have to trust your outsourcer completely. You not only have to trust the outsourcer’s security, but its reliability, its availability, and its business continuity.

You don’t want your critical data to be on some cloud computer that abruptly disappears because its owner goes bankrupt. You don’t want the company you’re using to be sold to your direct competitor. You don’t want the company to cut corners, without warning, because times are tight. Or raise its prices and then refuse to let you have your data back. These things can happen with software vendors, but the results aren’t as drastic.


Trust is a concept as old as humanity, and the solutions are the same as they have always been. Be careful who you trust, be careful what you trust them with, and be careful how much you trust them. Outsourcing is the future of computing. Eventually we’ll get this right, but you don’t want to be a casualty along the way.

So I see a huge contradiction.  How we secure — or allow others to secure — our data is very different in Cloud; it *is* something new in its practical application.  There are profound operational, business and technical (let alone regulatory, legal, governance, etc.) differences that do pose new challenges. Yes, we should take the best practices related to “outsourcing” that we’ve built over time and apply them to Cloud.  However, the collision course of virtualization, converged fabrics and Cloud Computing is pushing the boundaries of all we know.

Per the examples above, our challenges are significant.  The tech industry thrives on the ebb and flow of evolutionary punctuated equilibrium; what’s old is always new again, so it’s important to remember a couple of things:

  1. Harking back (a whopping 60 years) to the “dawn of time” in the IT/Computing industry to make the case that things “aren’t new” is sort of silly and simply proves you’re the tallest and loudest guy in a room full of midgets.  Here’s your sign.
  2. I don’t see any suggestions in all these rants about mainframes for how to make this better, only FUD.
  3. If “outsourcing is the future of computing” and we are to see both evolutionary and revolutionary disruptive innovation, shouldn’t we do more than simply hope that “…eventually we’ll get this right?”

The past certainly repeats itself, which explains why every 20 years bell-bottoms come back in style…but ignoring the differences in application, however incremental, is a bad idea.  In many regards we have not learned from our mistakes or have failed to recognize patterns, but you can’t drive forward by only looking in the rear view mirror, either.

Regards,

/Hoff

On the Overcast Podcast with Geva Perry and James Urquhart

March 13th, 2009 No comments

Geva and James were kind (foolish?) enough to invite me onto their Overcast podcast today:

In this podcast we talk to Christopher Hoff, renowned information security expert, especially on security in the context of virtualization and cloud computing. Chris is the author of the Rational Survivability blog, and can be followed as @Beaker on Twitter.
Show Notes:

    • Chris talks about some of the myths and misconceptions about security in the cloud. He addresses the claim that Cloud Providers Are Better At Securing Your Data Than You Are and the benefits and shortcomings of security in the cloud.
    • We talk about Chris's Taxonomy of Cloud Computing (excuse me, model of cloud computing)
    • Chris goes through some specific challenges and solutions for PCI-compliance in the cloud
    • Chris examines some of the security issues associated with multi-tenant architecture and virtualization
Check it out here.

/Hoff