
Incomplete Thought: Cloudbursting Your Bubble – I call Bullshit…

April 5th, 2011

My wife is in the midst of an extended multi-phasic, multi-day delivery process of our fourth child.  In between bouts of her moaning, breathing and ultimately sleeping, I’m left to taunt people on Twitter and think about Cloud.

Reviewing my hot-button list of terms that are annoying me presently, I hit upon a favorite: Cloudbursting.

It occurred to me that this term provokes a visceral reaction that makes me want to punch kittens.  It’s used to describe a use case in which workloads that run first and foremost within the walled gardens of an enterprise magically burst forth into the public cloud when capacity runs short internally and is plentiful externally.
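In pseudocode, the pitch boils down to something like the sketch below.  To be clear, the scheduler, capacity figures and threshold are entirely hypothetical, included only to make the claimed use case concrete:

# Hypothetical sketch of the "cloudbursting" pitch: overflow workloads to
# public cloud once internal capacity runs out. Names, numbers and the
# placement logic are illustrative only; no real product or API is implied.

INTERNAL_CAPACITY_VMS = 500   # what the on-prem estate can host
BURST_THRESHOLD = 0.90        # start "bursting" at 90% utilization

def place_workloads(pending_vms, running_internal_vms):
    """Decide where each pending workload lands."""
    placements = []
    internal_used = running_internal_vms
    for vm in pending_vms:
        utilization = internal_used / INTERNAL_CAPACITY_VMS
        if utilization < BURST_THRESHOLD:
            placements.append((vm, "internal"))      # business as usual
            internal_used += 1
        else:
            placements.append((vm, "public_cloud"))  # the magical burst
    return placements

if __name__ == "__main__":
    pending = [f"app-{i}" for i in range(20)]
    for vm, target in place_workloads(pending, running_internal_vms=445):
        print(f"{vm} -> {target}")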

I call bullshit.

Now, allow me to qualify that statement.

Ben Kepes suggests that cloud bursting makes sense to an enterprise “Because you’ve spent a gazillion dollars on on-prem h/w that you want to continue using. BUT your workloads are spiky…” such that an enterprise would be focused on “…maximizing returns from on-prem. But sending excess capacity to the clouds.”  This implies the problem you’re trying to solve is one of scale.

I just don’t buy this.

Either you build a private cloud that gives you the scale you need in the first place, patterning your operational models after public cloud, or you design a solid plan to migrate, interconnect or extend platforms to the public [commodity] cloud — which isn’t bursting at all, but a complete migration of capacity.  What you don’t do is stop somewhere in the middle, with the same old crap internally and a bright, shiny public cloud you “burst things to” whenever you get your capacity knickers in a twist:

The investment and skill sets needed to reconcile two often diametrically opposed operational models don’t maximize returns; they bifurcate and diminish efficiencies and blur cost-allocation models, making the economics of both internal IT and public cloud look grotesquely inaccurate.

Christian Reilly suggested I had no legs to stand on making these arguments:

Fair enough, but…

Short of workloads such as HPC, where scale really is a major concern, if a large enterprise has gone through all of the issues relevant to running tier-1 applications in a public cloud, why on earth would you “burst” to the public cloud rather than execute on a strategy that has those workloads run there in the first place?

Christian came up with another ringer during this exchange, one that I wholeheartedly agree with:

Ultimately, I agree so strongly with this because of the architectural, operational and compliance complexity bound up in all the mechanics needed to make workloads interoperable, scalable, secure and manageable between an internal enterprise’s operational domain (cloud or otherwise) and the public cloud.

The (in)ability to replicate capabilities exactly across these two models means that gaps arise, gaps that unfairly amplify the immaturity of cloud for certain things and its stellar capabilities in others.  It’s no wonder people get confused.  Things like security, networking, application intelligence…

NOTE: I make a wholesale differentiation between a strategy that includes a structured hybrid cloud approach of controlled workload placement/execution and a purely overflow/capacity-driven movement of workloads.*

There are many workloads that simply won’t or can’t *natively* “cloudburst” to public cloud due to a lack of supporting packaging and infrastructure.**  Some of them are pretty important.  Some of them are operationally mission critical. What then?  Without an appropriate way of understanding the implications and complexity associated with this issue and getting there from here, we’re left with a strategy of “…leave those tier-1 apps to die on the vine while we greenfield migrate new apps to public cloud.”  That doesn’t sound particularly sexy, useful, efficient or cost-effective.

There are overlay solutions that can allow an enterprise to leverage utility computing services as an abstracted delivery platform and fluidly interconnect an enterprise with a public cloud, but one must understand what’s involved architecturally as part of that hybrid model, what the benefits are and where the penalties lie.  Public cloud needs the same rigor in its due diligence.
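To make “what’s involved” a bit more concrete, here is a minimal sketch of the kind of placement gate a hybrid model needs before anything gets to “burst” anywhere.  The checks mirror the gaps above (packaging, compliance, networking, security parity), but every field, name and predicate is hypothetical:

# A minimal, hypothetical sketch of the due diligence a hybrid placement
# decision needs before a workload is allowed to leave the internal domain.
# The Workload fields and checks are illustrative, not any vendor's API.

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    packaged_for_target: bool           # image/format convertible to the provider?
    data_classification: str            # e.g. "public", "internal", "regulated"
    needs_l2_adjacency: bool            # relies on networking we can't replicate?
    security_controls_replicated: bool  # IDS/IPS, logging, IAM parity, etc.

def can_burst(w: Workload):
    """Return (allowed, reason).  Every False here is a project, not a toggle."""
    if not w.packaged_for_target:
        return False, "no supported packaging/conversion path"
    if w.data_classification == "regulated":
        return False, "compliance scope does not extend to the provider"
    if w.needs_l2_adjacency:
        return False, "network architecture cannot be replicated externally"
    if not w.security_controls_replicated:
        return False, "security/monitoring capabilities do not match internal ones"
    return True, "eligible to run externally"

if __name__ == "__main__":
    tier1 = Workload("erp-db", packaged_for_target=False,
                     data_classification="regulated",
                     needs_l2_adjacency=True,
                     security_controls_replicated=False)
    print(tier1.name, can_burst(tier1))

Failing any one of those checks isn’t a configuration problem; it’s an architecture and compliance project, which is exactly why the penalties need to be understood up front.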

[update] My colleague James Urquhart summarized what I meant well by describing DC-to-DC (cloud or otherwise) workload execution as a spectrum: VM-centric package mobility at one end, a truly distributed application architecture at the other.  If you’re somewhere in the middle, things like cloudbursting get really hairy.  As we move from IaaS to PaaS, some of these issues may evaporate as the former (VMs) becomes less relevant and the latter (applications deployed directly to platforms) more prevalent.
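To illustrate the two ends of that spectrum (with descriptors that are purely hypothetical, mine rather than James’s): at one end the unit of mobility is an opaque VM image that drags its environment with it; at the other, the application is declared against a platform contract that has to exist on both sides.

# Hypothetical descriptors contrasting the two ends of the spectrum.
# Neither is a real product format; they just show what has to move
# (or be re-created) in each model.

# VM-centric package mobility: the unit is the whole machine, and every
# implicit dependency must exist, or be rebuilt, at the destination.
vm_centric = {
    "unit": "vm-image",
    "image": "erp-app-01.ova",   # opaque: OS + app + config baked in
    "implicit_dependencies": [
        "specific VLANs and firewall rules",
        "shared storage mounts",
        "directory/identity reachability",
        "monitoring and backup agents",
    ],
}

# Distributed application architecture: the unit is the application itself,
# declared against platform services that run on both sides.
distributed_app = {
    "unit": "application",
    "artifact": "erp-frontend-2.3.1",
    "platform_contract": ["queue", "object store", "sql service", "identity"],
    "instances": {"internal": 4, "public_cloud": "0..N on demand"},
}

In the first model the gaps described above travel with every image; in the second, the platform contract is what has to be kept equivalent on both sides.  Anywhere in between, you’re doing both jobs at once, which is why the middle gets hairy.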

Check out this zinger from JP Morgenthal, which conveys what I meant much better:

If your Tier-1 workloads can run in a public cloud and satisfy all your requirements, THAT’S where they should run in the first place!  You maximize your investment internally by scaling down and ruthlessly squeezing efficiency out of what you have as quickly as possible — writing those investments off the books.

That’s the point, innit?

Cloud bursting — today — is simply a marketing term.

Thoughts?

/Hoff

* This may be the point that requires more clarity, especially in the wake of examples raised on Twitter after I posted this, such as eBay and Netflix being held up as successful “cloudbursting” applications.  My response is that these fine companies hardly resemble a typical enterprise, and that they’re also investing in a model that fundamentally changes the way they operate.

** I should point out that I am referring to the use case of heterogeneous cloud platforms, such as VMware to AWS (using an import/conversion function and/or via VPC), versus a more homogeneous platform interlock, such as when the enterprise runs vSphere internally and looks to migrate VMs to a VMware vCloud-powered cloud provider using something like vCloud Director Connector.  Either way, the point still stands: if you can run a workload and satisfy your requirements outright on someone else’s stack, why do it on yours?
