Archive for April, 2009

The Vagaries Of Cloudcabulary: Why Public, Private, Internal & External Definitions Don’t Work…

April 5th, 2009

Updated again at 13:43 EST – Please see bottom of post

Hybrid, Public, Private, Internal and External.

The HPPIE model: you've heard these terms used to describe and define the various types of Cloud.

What's always disturbed me about these terms is that, used singly, they actually address scenarios that are orthogonal, and yet they're often used to compare and contrast one service/offering with another.

The short story: Hybrid, Public, and Private denote ownership and governance whilst Internal and External denote location.

The longer story: Hybrid, Public, Private, Internal and External seek to summarily describe no fewer than five different issues and to categorize a cloud service/offering into one dumbed-down term for convenience.  In terms of a Cloud service/offering, using one of the HPPIE labels actually attempts to address, in one word (see the sketch after this list):

  1. Who manages it
  2. Who owns it
  3. Where it’s located
  4. Who has access to it
  5. How it’s accessed
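
To make this concrete, here's a quick sketch (my illustration only, not a formal model) that treats each of these five questions as an independent attribute of an offering:

    from dataclasses import dataclass

    # Sketch: the five questions a single HPPIE label tries to answer,
    # modeled as independent attributes rather than one collapsed word.
    @dataclass
    class CloudOffering:
        managed_by: str     # 1. who manages it
        owned_by: str       # 2. who owns it
        location: str       # 3. where it's located ("internal"/"external")
        audience: str       # 4. who has access ("trusted"/"untrusted"/both)
        access_method: str  # 5. how it's accessed (browser, API, VPN, ...)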

That's a pretty tall order.  I know we're aiming for simplicity of description by using a label analogous to LAN, WAN, Intranet or Internet, but unfortunately what we're often describing here is evolving to be much more complex.

Don't get me wrong, I'm not aiming for precision but rather accuracy, and I don't find that these labels do a good enough job when used by themselves.

Further, you'll find most people using the service deployment models (Hybrid, Public, Private) in the absence of the service delivery models (SPI – SaaS/PaaS/IaaS), while at the same time intertwining the location of the asset (internal, external), usually relative to a perimeter firewall (more on this in another post).

This really lends itself to confusion.

I’m not looking to rename the HPPIE terms.  I am looking to use them more accurately.

Here’s a contentious example.  I maintain you can have an IaaS service that is Public and Internal.  WHAT!?  HOW!?

Let’s take a look at a summary table I built to think through use cases by looking at the three service deployment models (Hybrid, Public and Private):

The HPPIE Table

THIS TABLE IS DEPRECATED – PLEASE SEE UPDATE BELOW!

The blue separators in the table designate derivative service offerings and not just a simple and/or; they represent an actual branching of the offering.

Back to my contentious example, wherein I maintain you can have an IaaS offering which is Public and yet also Internal. Again: how?

Remember how I said "Hybrid, Public, and Private denote ownership and governance whilst Internal and External denote location?"  That location refers both to the physical location of the asset and to the logical location relative to an organization's management umbrella, which includes operations, security, compliance, etc.

Thus, if you look at a managed infrastructure service (name one) that utilizes Cloud Computing principles, there's no reason a third-party MSP could not deploy that service internally, on customer-premises equipment which the third party owns but operates and manages on behalf of an organization, delivering the scale and pay-by-use model of Cloud internally, with access from trusted OR untrusted sources. Is there?

Some might call it a perversion of the term "Public."  I highlight it to illustrate that "Public" is a crappy word for the example, because just as it's valid here, it's equally valid to suggest that Amazon's EC2 can also share the "Public" moniker, despite being External.

In the same light, one can easily derive examples of SaaS:Private:Internal offerings… You see my problem with these terms?
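
Expressed with the sketch above, the contentious examples line up like this (all values are illustrative):

    # Both of these are "Public" by ownership/governance, yet their
    # locations differ, which is exactly what one label can't express.
    managed_iaas = CloudOffering(
        managed_by="third-party MSP", owned_by="third-party MSP",
        location="internal", audience="trusted or untrusted",
        access_method="API")

    ec2 = CloudOffering(
        managed_by="Amazon", owned_by="Amazon",
        location="external", audience="untrusted",
        access_method="API or browser")

    # ...and a SaaS:Private:Internal offering is just as expressible:
    internal_saas = CloudOffering(
        managed_by="the organization", owned_by="the organization",
        location="internal", audience="trusted",
        access_method="browser")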

Moreover, the "consumer" focus of the traditional HPPIE models means that these broad terms generally imply a human operating a web browser to access a service/offering, and they do not take into account access via APIs or other programmatic interfaces.

This is a little goofy, too. I don't generally use a web browser (directly) to access Amazon's S3 Storage-as-a-Service offering, just as I don't use a web browser to make API calls in GoogleGears.  Other, non-interactive elements of the AppStack do that.
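
For example, pulling an object from S3 typically happens programmatically. Here's a minimal sketch using the boto library; the credentials, bucket and object names are placeholders:

    # Sketch: non-browser, programmatic access to S3 via boto.
    from boto.s3.connection import S3Connection

    conn = S3Connection("ACCESS_KEY_ID", "SECRET_ACCESS_KEY")
    bucket = conn.get_bucket("example-bucket")
    obj = bucket.get_key("example-object")
    obj.get_contents_to_filename("local-copy")  # no browser involved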

I don't expect people to stop using these dumbed-down definitions, but this is why it makes me nuts when people compare "Private" Cloud offerings with "Internal" ones.  It's like comparing apples and buffalo.

What I want is for people to at least stop treating Internal and External as Cloud models, and instead use them as parameters, as I have in the table above.

Does this make any sense to you?


Update: In a great set of discussions regarding this on Twitter with @jamesurquhart from Cisco and @zhenjl from VMware, @zhenjl came up with a really pointed solution to the issues surrounding the redefinition of "Public" Clouds and their ability to be deployed "internally."  His idea, which addresses the "third party managed" example I gave, is to add a new category/class called "Managed," which is essentially the example I highlighted in boldface above:

[Image: managed-clouds]

This means we would modify the table above to look more like this (updated again based on feedback on Twitter and in the comments). It was ultimately revised as part of the work I did for the Cloud Security Alliance, in alignment with the NIST model, abandoning the "Managed" section:

Revised Model

This preserves the notion of how people generally define "Public" clouds, but it also draws a critical distinction for managed Cloud services: those provided by third parties using infrastructure/services located on-premises. It also still allows for the notion of Private Clouds, which remain distinct.
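
As a sketch of the revised breakdown (my illustration, not the CSA's formal schema), deployment type stays a question of ownership and governance while location remains a separate parameter:

    from enum import Enum

    # Sketch: deployment denotes ownership/governance; location is a
    # parameter of an offering, not a model in its own right.
    class Deployment(Enum):
        PUBLIC = "public"
        PRIVATE = "private"
        MANAGED = "managed"  # third-party operated, possibly on-premises
        HYBRID = "hybrid"

    # The contentious example, restated without abusing "Public":
    example = {"deployment": Deployment.MANAGED,
               "delivery": "IaaS",
               "location": "internal"}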

Thoughts?

Categories: Cloud Computing, Cloud Security

The Cloud Is a Fickle Mistress: DDoS&M…

April 2nd, 2009

It’s interesting to see how people react when they are reminded that the “Cloud” still depends upon much of the same infrastructure and underlying protocols that we have been using for years.

BGP, DNS, VPNs, routers, switches, firewalls…

While it's fun to talk about new attack vectors and sexy exploits, it's the oldies but goodies that will come back to haunt us:

Simplexity

Building more and more of our business's ability to remain a going concern on infrastructure that was never designed to support it is a scary proposition.  We're certainly being afforded more opportunity to fix some of these problems as the technology improves, but it's a patchwork solution to an endemic problem, I'm afraid.  We've got two ways to look at Cloud:

  • Skipping over the problems we have and “fixing” crappy infrastructure and applications by simply adding mobility and orchestration to move around an issue, or
  • Actually starting to use Cloud as a forcing function to fundamentally change the way we think about, architect, deploy and manage our computing capabilities in a more resilient, reliable and secure fashion

If I were a betting man…

Remember that just because it’s in the “Cloud” doesn’t mean someone’s sprinkled magic invincibility dust on your AppStack…

That web service still has IP addresses and open sockets. It still gets transported over MANY levels of shared infrastructure, from the telcos to the DNS infrastructure… you're always at someone else's mercy.
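
Case in point: resolving a "cloud" endpoint is the same DNS lookup as anything else. A trivial sketch (the hostname is just an example):

    # Sketch: a "cloud" endpoint is still ordinary DNS + TCP underneath.
    import socket

    for entry in socket.getaddrinfo("s3.amazonaws.com", 443,
                                    proto=socket.IPPROTO_TCP):
        family, socktype, proto, canonname, sockaddr = entry
        print(sockaddr)  # plain IP addresses on shared infrastructure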

Dan Kaminsky has done a fabulous job reminding us of that.

A more pointed reminder of our dependency on the Same Old Stuff™ is the recent DDoS attack against Cloud provider GoGrid:

ONGOING DDoS ATTACK

Our network is currently the target of a large, distributed DDoS attack that began on Monday afternoon.   We took action all day yesterday to mitigate the impact of the attack, and its targets, so that we could restore service to GoGrid customers.  Things were stabilized by 4 PM PDT and most customer servers were back online, although some of you continued to experience intermittent loss in network connectivity.

This is an unfortunate thing.  It's also a good illustration of the sorts of things you ought to ask your Cloud service providers about.  With whom do they peer?  What is their bandwidth?  How many datacenters do they have, and where?  What DoS/DDoS countermeasures do they have in place?  Have they actually dealt with this before?  Do they drill disaster scenarios like this?

We’re told we shouldn’t have to worry about the underlying infrastructure with Cloud, that it’s abstracted and someone else’s problem to manage…until it’s not.

This is where engineering, architecture and security meet the road.  Your provider’s ability to sustain an attack like this is critical.  Further, how you’ve designed your BCP/DR contingency plans is pretty important, too.  Until we get true portability/interoperability between Cloud providers, it’s still up to you to figure out how to make this all work.  Remember that when you’re assuming those TCO calculations accurately reflect reality.

Big providers like eBay, Amazon, and Microsoft invest huge sums of money and manpower to ensure they are as survivable as they can be during attacks like this.  Do you?  Does your Cloud provider?  How many do you have?

Again, even Amazon goes down.  At this point, it’s largely been operational issues on their end and not the result of a massive attack. Imagine, however, if someday it is.  What would that mean to you?

As more and more of our applications and information move from inside our networks to beyond the firewall, exposed to a larger audience (or even co-mingled with others' data), the need for innovation and advancement in security is only going to skyrocket in order to deal with many of these problems.

/Hoff

Categories: Cloud Computing, Cloud Security