Archive for February, 2009

Berkeley RAD Lab Cloud Computing Paper: Above the Clouds or In the Sand?

February 19th, 2009 2 comments

I've waffled on how, or even if, I would write my critique of the Berkeley RAD Lab's paper titled "Above the Clouds: A Berkeley View of Cloud Computing."

I think I've had a hard time deciding where the authors have their heads, hence the title.

Those of you who know me are probably chuckling at the fact that I was a good boy and left off the potential third cranial location option…

Many people have written their respective reviews of the work, including James Urquhart, David Linthicum and Chuck Hollis, who all did a nice job summarizing various perspectives.

I decided to add my $0.02 because it occurred to me that despite several issues I have with the paper, two things really haven't been appropriately discussed:
  1. The audience for the paper
  2. Expectations of the reader 

The goals of the paper were fairly well spelled out and within context of what was written, the authors achieved many of them.

Given that it was described as a "view" of Cloud Computing and not the definitive work on the subject, I think perhaps the baby has been unfairly thrown out with the bath water even when balanced with the "danger" that the general public or press may treat it as gospel.

I think the reason there has been so much frothy reaction to this paper from the "Cloud community" is that, because it comes from the Electrical Engineering/Computer Science department of UC Berkeley, many readers expect a certain level of technical depth and a more holistic (dare I say empirical) model for analysis, and their expectations are set accordingly.

Most of the reviews that might be perceived as negative are coming from folks who are reasonably technical, of which I am one.

To that point and that of item #1 above, I don't feel that "we" are the intended audience for this paper and thus, to point #2 above, our expectations — despite the goals of the paper — were not met.

That being said, I do have issues with the paper: the authors' definition of cloud computing is unnecessarily obtuse, their refusal to discuss the differences between the de facto SPI model and its variants is annoying and short-sighted, and their dismissal of private clouds as irrelevant is quite disturbing.  The notion that Cloud Computing must be "external" to an enterprise and use the Internet as a transport is simply delusional.

Eschewing de facto models of reference because the authors could not agree amongst themselves on the differences between them — despite consensus in industry outside of academia and even models like the one I've been working on — comes across as myopic and insulated.  

Ultimately I think the biggest miss of the paper was the fact that they did not successfully answer "What is Cloud Computing and how is it different from previous paradigm shifts such as Software as a Service (SaaS)?"  In fact, I came away from the paper with the feeling that Cloud Computing is SaaS…

However, I found the coverage of the business drivers, economic issues and the top 10 obstacles to be very good and that people unfamiliar with Cloud Computing would come away with a better understanding — not necessarily complete — of the topic.

It was an interesting read that is complementary to much of the other work going on right now in the field.  I think we should treat it as such and move on.

/Hoff
Categories: Uncategorized

Incomplete Thought: Separating Virtualization From Cloud?

February 18th, 2009 18 comments

I was referenced in a CSO article recently titled "Four Questions On Google App Security." I wasn't interviewed for the story directly; Bill Brenner simply referenced our prior interviews and my skepticism about virtualization security and cloud security as a discussion point.

Google's response was interesting and a little tricky given how they immediately set about driving a wedge between virtualization and Cloud.  I think I understand why, but if the article featured someone like Amazon, I'm not convinced it would go the same way…

As I understand it, Google doesn't really leverage much in the way of virtualization (from the classical compute/hypervisor perspective) for their "cloud" offerings as compared to Amazon. That may be due in large part to the differences in models and classification — Amazon AWS is an IaaS play while Google Apps is a SaaS offering.

You can see why I made the abstraction layer in the cloud taxonomy/ontology model "optional."
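The "optional" bit is easy to make concrete. Here's a minimal, hypothetical Python sketch (the class and field names are mine, not from any formal taxonomy) in which the abstraction/hypervisor layer is simply nullable — present for an IaaS play, absent for a SaaS play:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CloudOffering:
    name: str
    model: str                         # "IaaS", "PaaS", or "SaaS"
    layers: List[str] = field(default_factory=list)
    abstraction: Optional[str] = None  # hypervisor layer, if any

# Illustrative instances only; the hypervisor attribution is an assumption.
aws_ec2 = CloudOffering("Amazon EC2", "IaaS",
                        ["facilities", "hardware", "network"],
                        abstraction="Xen")
google_apps = CloudOffering("Google Apps", "SaaS",
                            ["facilities", "hardware", "network"])

for o in (aws_ec2, google_apps):
    print(o.name, "-> virtualized:", o.abstraction is not None)
```

The same taxonomy still classifies both offerings; the abstraction layer is optional, not definitional.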

This post dovetails nicely with Lori MacVittie's article today titled "Dynamic Infrastructure: The Cloud Within the Cloud" wherein she highlights how the obfuscation of infrastructure isn't always a good thing. Given my role, what's in that cloudy bubble *does* matter.

So here's my incomplete thought — a question, really:

How many of you assume that virtualization is an integral part of cloud computing? From your perspective do you assume one includes the other?  Should you care?

Yes, it's intentionally vague.  Have at it.

/Hoff

First Oracle with “Unbreakable,” Now IBM “Guarantees Cloud Security”

February 17th, 2009 4 comments

I'm heading out in a few minutes for an all day talk, but I choked on my oatmeal when I read this:

In a CBR article titled "We Can Guarantee Cloud Security," Kristof Kloeckner, IBM's Cloud Computing CTO, was quoted at IBM's Pulse 2009 conference as he tried to "…ease worries over security in the cloud":

Despite all the hype surrounding cloud computing, the issue of security is one debate that will not go away. It is regularly flagged as one of the potential stumbling blocks to widespread cloud adoption.

He said: “We’ve developed some interesting technologies that allow the separation of applications and data on the same infrastructure. We guarantee the security through Tivoli Security and Identity Management and Authentication software, and we also ensure the separation of workloads through the separation of the virtual machines and also the separation of client data in a shared database.” Speaking to CBR after the press conference, Kloeckner went into more detail about IBM’s cloud security offering.

“Security is not essentially any different from securing any kind of open environment; you have to ensure that you know who accesses it and control their rights. We have security software that allows you to manage identities from an organisational model, from whoever is entitled to use a particular service. We can actually ensure that best practices are followed,” Kloeckner said.

Kloeckner added that most people do not realise just how vulnerable they really are. He said: “Most people, unless forced by regulations, usually treat security as a necessary evil. They say it’s very high on their list, but if you really scratch the service, it’s not obvious to me that best practices are followed.”

I wonder if this guarantee is backed by anything more than a "sorry" if something bad happens?

This will make for some very interesting discussion when I return today.

/Hoff


Categories: Cloud Computing, Cloud Security

Old MacDonald Had a (Virtual Server) Farm, I/O, I/O, Oh!

February 13th, 2009 4 comments

It's all about the I/O and your ability to shuffle packets…or see them in the first place…

In reading Neil MacDonald's first post under the Gartner-branded blog, titled "Virtualization Security Is Transformational — If the Legacy Security Vendors Would Stop Fighting It," I find myself nodding in violent agreement whilst also shaking my head in bewilderment.  Perhaps I missed the point, but I'm really confused.

Neil sets the stage by suggesting that "established" security vendors who offer solutions for non-virtualized environments simply "…don't get it" when it comes to realizing the shortcomings of their existing solutions in virtualized contexts and that they are "fighting" the encroachment of virtualization on their appliance sales:

Many are clinging to business models based on their overpriced hardware-based solutions and not offering virtualized versions of their solutions. They are afraid of the inevitable disruption (and potential cannibalization) that virtualization will create. However, you and I have real virtualization security needs today and smaller innovative startups have rushed in to fill the gap. And, yes, there are pricing discontinuities. A firewall appliance that costs $25,000 in a physical form can cost $2500 or less in a virtual form from startups like Altor Networks or Reflex Systems.

I'm very interested in which "established" vendors are supposedly clinging to their overpriced hardware-based solutions and avoiding virtualization besides niche players in niche markets that are hardware-bound.  

As far as I can tell, the top five vendors by revenue in the security space (those that sell hardware, not just software) are all actively supporting these environments — within the limitations of today's virtualization platforms — and are very much investing in the development of new solutions built for the unique requirements of virtual environments.


Neil is really comparing apples to muffler brackets.  He points out in his blog that physical appliances can offer multi-gigabit performance whereas software-based VAs cannot, and yet we're surprised that order-of-magnitude pricing differentials exist?  You get what you pay for.


As I pointed out in my Four Horsemen presentation (and as is alluded to in the remainder of Neil's post below), EVERY SINGLE VENDOR is currently hamstrung by the same integration and architectural limitations governing the current state of virtual appliance performance in the security space, including those he mentions such as Altor and Reflex.  They are all in a holding pattern.  I've written about that numerous times.

In fact, as I mentioned in my post titled "Visualization Through Virtualization," the majority of these new-fangled, virtualization-specific "security" tools are actually (now) more focused on visibility, management and change monitoring/control than on pure network-level security, because they cannot compete with hardware-based solutions from a performance and scalability perspective.

Here's where I do agree with Neil, based upon what I mention above: 

Feature-wise, the security protection services delivered are similar. But, there is a key difference — throughput. What the legacy security vendors forget is that there is still a role for dedicated hardware. There is no way you are going to get full multi-gigabit line speed deep-packet inspection and protocol decode for intrusion prevention from a virtual appliance. A next-generation data center will need both physical and virtualized security controls — ideally, from a vendor that can provide both. I’ll argue that the move to virtualize security controls will grow the overall use of security controls. 

So this actually explains the disparity in both approach and pricing that he alluded to above.  How does this represent vendors "fighting" virtualization?  I see it as hanging on for as long as possible to preserve and milk their investment in the physical appliances Neil says we'll still need while they perform the R&D on their virtualized versions.  They can't deploy the new solutions until the platform to support them exists!

The move to virtualize security controls reduces barriers to adoption. Rather than sprinkle a few physical appliances here and there based on network topology, we can now place controls when and where they are needed, including physical appliances as appropriate. In fact, the legacy vendors have a distinct advantage over virtualization security startups since you prefer a security solution that spans both your physical and virtual environments with consistent management.

Exactly.  So again, how is this "fighting" virtualization?  


Here's where we ignore reality again:

Over the past six months, I’ve seen signs of life from the legacy physical security vendors. However, some of the legacy physical security vendors have simply taken the code from their physical appliance and moved it into a virtual machine. This is like wrapping a green-screen terminal application with a web front end — it looks better, but the guts haven’t changed. In a data center where workloads move dynamically between physical servers and between data centers, it makes no sense to link security policy to static attributes such as TCP/IP addresses, MAC addresses or servers. 

First of all, what we're really talking about in the enterprise space is VMware, since given its market dominance, this is where the sweet spot is for security vendors.  This will change over time, but for now, it's VMware.


That being the case, from the moment VMsafe was announced/hinted at two years ago, 20+ security vendors — big and small — have been diligently working within the constructs of what VMware makes available to re-engineer their products to take advantage of the APIs coming in VMware's upcoming release.  This is no small feat.  Distributed virtual switching and the two-tier driver architecture with DVfilters mean re-engineering your products and approach.

Until VMware's next platform is released, every security vendor — big or small — is hamstrung by having to do exactly what Neil says: create a software instantiation of their hardware products, which is integration-limited for the reasons I've already stated.  What should vendors do?  Fire-sale their inventories and wait it out?

I ask again: how is this "fighting" virtualization?

The reason there hasn't been a lot of movement is that the entire industry is in a holding pattern. Pretending otherwise is absolutely ridiculous.  The obvious exception is Cisco, which has invested in and developed substantial solutions such as the Nexus 1000v and VN-Link (which are again awaiting the availability of VMware's next release).

Security policy in a virtualized environment must be tied to logical identities – like identities of VM workloads, identities of application flows and identities of users. When VMs move, policies need to move. This requires more than a mere port of an existing solution, it requires a new mindset.

Yep.  And most of them are adapting their products as best they can.  Many companies will follow the natural path of consolidation and wait to buy a startup in this space and integrate it…much like VMware did with BlueLane, for example.  Others will look to underlying enablers such as Cisco's VN-Link/Nexus 1000v and choose to integrate at the virtual networking layer there and/or in coordination with VMsafe.
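The "logical identity" point from the quote above can be sketched in a dozen lines: key the policy store on a stable workload identity (a made-up VM UUID here) rather than on IP or MAC, and the policy survives a migration untouched. A hypothetical Python illustration:

```python
# Hypothetical sketch: firewall policy keyed by a stable VM identity
# rather than by its current (and migratory) network attributes.
policies = {
    "vm-a1b2c3": {"allow_inbound": [443], "zone": "dmz"},
}

# Placement maps identity -> attributes that change on migration.
placement = {"vm-a1b2c3": {"ip": "10.0.1.5", "host": "esx01"}}

def policy_for(vm_id):
    """Resolve policy by identity; the current IP plays no part."""
    return policies[vm_id]

before = policy_for("vm-a1b2c3")
placement["vm-a1b2c3"] = {"ip": "10.9.9.7", "host": "esx02"}  # live migration
after = policy_for("vm-a1b2c3")
assert before is after  # identical policy despite new IP and host
```

Key the policy on the IP instead and a migration silently orphans it — which is exactly the "mere port of an existing solution" problem.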

The legacy vendors need to wake up. If they don’t offer robust virtualization security capabilities (and, yes, potentially cannibalize the sales of some of their hardware), another vendor will. With virtualization projects on the top of the list of IT initiatives for 2009, we can’t continue to limp along without protection. It’s time to vote with our wallets and make support of virtual environments a mandatory part of our security product evaluation and selection.

Absolutely!  And every vendor — big and small — that I've spoken to is absolutely keen on this concept and is actively engaged in developing solutions for these environments with their unique requirements in mind. Keep in mind that VMsafe is about more than just network visibility via the VMM; it also includes disk, memory and CPU. Most network-based appliances have never had this sort of access before (since they are NETWORK appliances), so OF COURSE products will have to be re-tooled.


Overall, I'm very confused by Neil's post as it seems quite contradictory and at odds with what I've personally been briefed on by vendors in the space and overlooks the huge left turns being made by vendors over the last 18 months who have been patiently waiting for VMsafe and other introspection capabilities of the underlying platforms.

I think the windshield needs cleaning on the combine harvester…

/Hoff

Categories: Virtualization

Cisco Is NOT Getting Into the Server Business…

February 13th, 2009 5 comments

Yes, yes. We've talked about this before here. Cisco is introducing a blade chassis that includes compute capabilities (hereafter referred to as a 'blade server').  It also includes networking, storage and virtualization all wrapped up in a tidy bundle.

So while it looks like a blade server (quack!) and walks like a blade server (quack! quack!), that doesn't mean it's going to be positioned, talked about or sold like a blade server (quack! quack! quack!)

What's my point?  What Cisco is building is just another building block of virtualized INFRASTRUCTURE. Necessary infrastructure to ensure control and relevance as their customers' networks morph.

What Cisco is building is the natural by-product of converged technologies, with an approach that deserves attention.  It *is* unified computing: a solution with integrated capabilities that customers would otherwise be responsible for piecing together themselves…and that's one of the biggest problems we have with disruptive innovation today: integration.

While the analysts worry about margin erosion and cannibalizing the ecosystem (which is inevitable as a result of both innovation and consolidation), this is a great move for Cisco, especially when you recognize that if they didn't do this, the internalization of network and storage layers within the virtualization platforms would otherwise cause them to lose relevance beyond dumb plumbing in virtualized and cloud environments.

Also, let us not forget that one of the beauties of having this "end-to-end" solution from a security perspective is the ability to leverage policy across not only the network, but the compute and storage realms also.  You can whine (and I have) about the quality of the security functionality offered by Cisco, but the coverage you're going to get with centralized policy that has affinity across the datacenter (and beyond) is going to be hard to beat.

(There, I said it…OMG, I'm becoming a fanboy!)

And as far as competency as a "server" vendor, c'mon. Firstly, you can't swing a dead cat without hitting a commoditized PC architecture that Joe's Crab Shack could market as a solution, and besides, that's what ODMs are for.  I'm sure we'll see just as much "buy and ally" as "build" as part of this process.

What's the difference between a blade chassis with Intel processors and integrated networking and a switch these days?  Not much.

So, what Cisco may lose in margin on the "server" sale, they will more than make up in the value people will pay for converged compute, network, storage, virtualization, management, VN-Link, the Nexus 1000v, security and the integrated one-stop shopping you'll get.  And if folks want to keep buying their HPs and IBMs, they have that choice, too.

QUACK!

/Hoff
Categories: Cisco, Cloud Computing, Cloud Security

Incomplete Thought: What Should Come First…Cloud Portability or Interoperability?

February 13th, 2009 6 comments

It seems that my incomplete thoughts are more popular with folks than the ones I take the time to think all the way through and conclude, so here's the next one…

Here it is:

There is a lot of effort being spent now on attempts to craft standards and definitions in order to provide interfaces which allow discrete Cloud elements and providers to interoperate. Should we not first focus our efforts on ensuring portability between Clouds of our atomic instances (however you wish to define them) and the metastructure* that enables them?

/Hoff

*Within this context I mean 'metastructure' to define not only the infrastructure but all the semantic configuration information and dynamic telemetry needed to support such.
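To make the portability question concrete, here's a hypothetical sketch of what a portable "atomic instance" descriptor might carry — the image reference plus the metastructure (configuration and telemetry hooks) needed to re-instantiate it elsewhere. Every field name here is invented for illustration:

```python
import json

# Hypothetical portable instance descriptor: image + metastructure.
descriptor = {
    "image": {"format": "ovf", "ref": "web-frontend-1.2"},
    "config": {"cpus": 2, "memory_mb": 2048,
               "firewall": {"inbound": [80, 443]}},
    "telemetry": {"health_check": "/status", "interval_s": 30},
}

exported = json.dumps(descriptor, indent=2)  # hand this to provider B...
imported = json.loads(exported)              # ...who rebuilds the instance
assert imported == descriptor                # lossless round trip
```

If we can't agree on (and losslessly round-trip) something like this first, interoperability APIs between providers have nothing portable to exchange.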
Categories: Cloud Computing, Cloud Security

Dear Mr. Oberlin: Here’s Your Sign…

February 11th, 2009 4 comments

No Good Deed Goes Unpunished…

I've had some fantastic conversations with folks over the last couple of weeks as we collaborated from the perspective of how a network and security professional might map/model/classify various elements of Cloud Computing.

I just spent several hours with folks at ShmooCon (a security conference) winding through the model with my peers getting excellent feedback.  

Prior to that, I'd had many people say that the collaboration has yielded a much simpler view of what the Cloud means to them, how to align the solution sets they already have, and where the gaps are in those they don't.

My goal was to share my thinking in a way which helps folks with a similar bent get a grasp on what this means to them.  I'm happy with the results.

And then….one day at Cloud Camp…

However, it seems I chose an unfortunate way of describing what I was doing in calling it a taxonomy/ontology, despite what I still feel is a clear definition of these words as they apply to the work.

I say unfortunate because I came across a post by Steve Oberlin, Cassatt's Chief Scientist, on his "Cloudology" blog titled "Cloud Burst" that strikes me as one of the most acerbic, condescending and pompous contributions to nothingness I have read in a long time.

Steve took 9 paragraphs and 7,814 characters to basically say that he doesn't like people using the words taxonomy or ontology to describe efforts to discuss and model Cloud Computing and that we're all idiots and have provided nothing of use.

The most egregiously offensive comment was one of his last points:

I do think some blame (a mild chastisement) is owed to anyone participating in the cloud taxonomy conversation that is not exercising appropriately-high levels of skepticism and insisting on well-defined and valid standards in their frameworks.  Taxonomies are thought-shaping tools and bad tools make for bad thinking.   One commenter on one of the many blogs echoing/amplifying the taxonomy conversation remarked that some of the diagrams were mere “marketecture” and others warned against special interests warping the framework to suit their own ends.  We should all be such critical thinkers.

What exactly in any of my efforts (since I'm not speaking for anyone else) — collaborating and opening up the discussion for unfettered review and critique — constitutes anything other than a high level of skepticism?  The reason I built the model in the first place was that I didn't feel the others accurately conveyed what was relevant and important from my perspective.  I was, gasp!, skeptical.

We definitely don't want to have discussions that might "shape thought."  That would be dangerous.  Shall we start burning books too?

From the Department of I've Had My Digits Trampled..

So what I extracted from Oberlin's whine is that we are all to be chided because somehow only he possesses the yardstick against which critical thought can be measured?  I loved this bit as he reviewed my contribution:

I might find more constructive criticism to offer, but the dearth of description and discussion of what it really means (beyond the blog’s comments, which were apparently truncated by TypePad) make the diagram something of a Rorschach test.  Anyone discussing it may be revealing more about themselves than what the concepts suggested by the diagram might actually mean.

Interestingly, over 60 other people have stooped low enough to add their criticism and input without me "directing" their interpretation so as not to be constraining, but again, somehow this is a bad thing.

So after sentencing to death all those poor electrons that go into rendering his rant about how the rest of us are pissing into the wind, what did Oberlin do to actually help clarify Cloud Computing?  What wisdom did he impart to set us all straight?  How did he contribute to the community effort — no matter how misdirected we may be — to make sense of all this madness?

Let me be much more concise than the 7,814 characters Oberlin needed and sum it up in 8:

NOTHING.

So it is with an appropriate level of reciprocity that I thank him for it accordingly.

/Hoff

P.S. Not to be outdone, William Vambenepe has decided to bestow upon Oberlin a level of credibility not due to his credentials or his conclusions, but because (and I quote) "...[he] just love[s] sites that don't feel the need to use decorative pictures. His doesn't have a single image file which means that even if he didn't have superb credentials (which he does) he'd get my respect by default."

Yup, we bottom feeders who have to resort to images really are only in it for the decoration. Nice, jackass.

Update: The reason for the strikethrough above — and my public apology here — is that William contacted me and clarified he was not referring to me and my pretty drawings (my words,) although within context it appeared like he was.  I apologize, William and instead of simply deleting it, I am admitting my error, apologizing and hanging it out to dry for all to see.  William is not a jackass. As is readily apparent, I am however. 😉

Categories: Cloud Computing, Cloud Security

Incomplete Thought: Support of IPv6 in Cloud Providers…

February 9th, 2009 7 comments

This is the first of my "incomplete thought" entries; thoughts too small for a really meaty blog post, but too big for Twitter.  OK wiseguy.  I know *most* of my thoughts are incomplete, but don't quash my artistic license, mkay?

Here it is:

How many of the cloud providers (IaaS, PaaS) support IPv6 natively, or support tunneling without breaking things like NAT and firewalls?  As part of all this Infrastructure 2.0 chewy goodness, from a networking (and security) perspective, it's pretty important.
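A crude first probe of the "native" half of that question is to check whether a provider's endpoint even publishes an IPv6 (AAAA) address — necessary, though nowhere near sufficient, evidence of native support. A small Python sketch:

```python
import socket

def has_ipv6_address(hostname):
    """Return True if the name resolves to at least one IPv6 address."""
    try:
        return len(socket.getaddrinfo(hostname, None, socket.AF_INET6)) > 0
    except socket.gaierror:
        return False

# Example (the result depends on the provider's DNS when you run it):
print(has_ipv6_address("ec2.amazonaws.com"))
```

Tunneling behavior and NAT/firewall breakage, of course, need real path testing, not just a DNS lookup.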

/Hoff
Categories: Cloud Computing, Cloud Security

How I Know The Cloud Ain’t Real…

February 4th, 2009 1 comment

You want to know how I know that The Cloud is all hot air and will never catch on?


…because I can't order it on Amazon.com and get free shipping with Prime.

FAIL!  FAIL, I say.

/Hoff

You Keep Calling Cloud Computing “Confusing, Over-Hyped & a Buzzword” & It Will Be…

February 3rd, 2009 6 comments

A word of unsolicited advice to those of us trying to help "sort out" Cloud Computing — myself included:

The more we lead off descriptions of Cloud Computing with "confusing," "over-hyped" and "a buzzword," the more people are going to start to believe us.  The press is going to start to believe us.  Our customers are going to start to believe us.  Pretty soon we won't be able to escape the gravity of our own message.

Granted, we mean well in our cautious and guarded admonishment, but it's starting to wear as thin as the claims of those who promote Cloud Computing as the second coming (when we all know full well that's Fiber Channel over Token Ring).

We don't all have to chant the same mantra and we don't have to preach rainbows and unicorns, but it's important to be accurate and balanced.

I, too, am waiting for the day Cloud Computing will wash my car, bring me a beer and make me a ham sandwich. Until that day, instead of standing around trying to look smart by telling everybody that Cloud Computing is nothing more than hot air, how about making a difference by not playing a game of bad-news telephone and adding something constructive?

There's value in Cloud Computing, so how about we move past the "confusing, over-hyped and buzzword" stage and get to work making it straightforward, realistic and meaningful instead.

/Hoff

Categories: Cloud Computing, Cloud Security