
Incomplete Thought: Cloudbursting Your Bubble – I call Bullshit…

My wife is in the midst of an extended multi-phasic, multi-day delivery process of our fourth child.  In between bouts of her moaning, breathing and ultimately sleeping, I’m left to taunt people on Twitter and think about Cloud.

Reviewing my hot-button list of terms that are annoying me presently, I hit upon a favorite: Cloudbursting.

It occurred to me that this term brings up a visceral experience that makes me want to punch kittens.  It’s used by people to describe a use case in which workloads that run first and foremost within the walled gardens of an enterprise, magically burst forth into public cloud based upon a lack of capacity internally and a plethora of available capacity externally.
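Stripped of the marketing, the model this term implies reduces to a one-variable overflow policy. A minimal sketch in Python, purely illustrative (all names here are hypothetical, not any vendor's API):

```python
# The naive "cloudbursting" placement model: run a workload internally
# while capacity exists, magically overflow to public cloud otherwise.
# Illustrative only; real placement involves far more than free capacity.

def place_workload(required_units: int, internal_free_units: int) -> str:
    """Return where a workload lands under a pure capacity-overflow policy."""
    if required_units <= internal_free_units:
        return "internal"
    return "public-cloud"  # the magical "burst"
```

The argument that follows is precisely that this one-variable view (free capacity) ignores the operational, security and compliance deltas between the two environments.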

I call bullshit.

Now, allow me to qualify that statement.

Ben Kepes suggests that cloud bursting makes sense to an enterprise “Because you’ve spent a gazillion dollars on on-prem h/w that you want to continue using. BUT your workloads are spiky…” such that an enterprise would be focused on “…maximizing returns from on-prem. But sending excess capacity to the clouds.”  This implies the problem you’re trying to solve is one of scale.

I just don’t buy this.

Either you build a private cloud that gives you the scale you need in the first place, patterning your operational models after public cloud, and/or you design a solid plan to migrate, interconnect or extend platforms to the public [commodity] cloud using this model, therefore not bursting but completely migrating capacity. What you don't do is stop somewhere in the middle, with the same old crap internally and a bright, shiny public cloud you “burst things to” when you get your capacity knickers in a twist:

The investment and skillsets needed to reconcile two often diametrically-opposed operational models don’t maximize returns; they bifurcate and diminish efficiencies and blur cost-allocation models, making both internal IT and public cloud look grotesquely inaccurate.

Christian Reilly suggested I had no leg to stand on in making these arguments:

Fair enough, but…

Short of workloads such as HPC in which scale really is a major concern, if a large enterprise has gone through all of the issues relevant to running tier-1 applications in a public cloud, why on earth would you “burst” to the public cloud versus execute on a strategy that has those workloads run there in the first place?

Christian came up with another ringer during this exchange, one that I wholeheartedly agree with:

Ultimately, the reason I agree so strongly with this is because of the architectural, operational and compliance complexity associated with all the mechanics one needs to allow for interoperable, scaleable, secure and manageable workloads between an internal enterprise’s operational domain (cloud or otherwise) and the public cloud.

The (in)ability to replicate capabilities exactly across these two models means that gaps arise: gaps that unfairly amplify the immaturity of cloud for certain things and its stellar capabilities in others.  It’s no wonder people get confused.  Things like security, networking, application intelligence…

NOTE: I make a wholesale differentiation between a strategy that includes a structured hybrid cloud approach of controlled workload placement/execution versus a purely overflow/capacity movement of workloads.*

There are many workloads that simply won’t or can’t *natively* “cloudburst” to public cloud due to a lack of supporting packaging and infrastructure.**  Some of them are pretty important.  Some of them are operationally mission critical. What then?  Without an appropriate way of understanding the implications and complexity associated with this issue and getting there from here, we’re left with a strategy of “…leave those tier-1 apps to die on the vine while we greenfield migrate new apps to public cloud.”  That doesn’t sound particularly sexy, useful, efficient or cost-effective.

There are overlay solutions that can allow an enterprise to leverage utility computing services as an abstracted delivery platform and fluidly interconnect an enterprise with a public cloud, but one must understand what’s involved architecturally as part of that hybrid model, what the benefits are and where the penalties lie.  Public cloud needs the same rigor in its due diligence.

[update] My colleague James Urquhart summarized well what I meant by describing the difference in DC-DC (cloud or otherwise) workload execution as what I see as either end of a spectrum: VM-centric package mobility or adopting a truly distributed application architecture.  If you’re somewhere in the middle, things like cloudbursting get really hairy.  As we move from IaaS -> PaaS, some of these issues may evaporate as the former (VMs) becomes less relevant and the latter (applications deployed directly to platforms) more prevalent.

Check out this zinger from JP Morgenthal which much better conveys what I meant:

If your Tier-1 workloads can run in a public cloud and satisfy all your requirements, THAT’S where they should run in the first place!  You maximize your investment internally by scaling down and ruthlessly squeezing efficiency out of what you have as quickly as possible — writing those investments off the books.

That’s the point, innit?

Cloud bursting — today — is simply a marketing term.



* This may be the point that requires more clarity, especially in the wake of examples that were raised on Twitter after I posted this, such as using eBay and Netflix as examples of successful “cloudbursting” applications.  My response is that these fine companies hardly resemble a typical enterprise, and that they’re also investing in a model that fundamentally changes the way they operate.

** I should point out that I am referring to the use case of heterogeneous cloud platforms such as VMware to AWS (either using an import/conversion function and/or via VPC) versus a more homogeneous platform interlock such as when the enterprise runs vSphere internally and looks to migrate VMs over to a VMware vCloud-powered cloud provider using something like vCloud Director Connector, for example.  Either way, the point still stands: if you can run a workload and satisfy your requirements outright on someone else’s stack, why do it on yours?

  1. Mike Fratto
    April 5th, 2011 at 14:16 | #1

    I bet a lot of organizations get to that conclusion really, really quickly. Once you start getting into the dependencies, the whole concept of cloud bursting falls apart. Yeah, it's BS, and asserting it hurts vendors' credibility.

    I do think that it will be possible to cloudburst, but I give it 10 years, minimum, for cloud bursting to have a hope of being commonplace. There are too many applications being written today that are monolithic in nature. Too much under the covers needs to change, such as building applications that are devoid of dependencies (and I mean all dependencies) and designing applications that naturally and effectively shard databases in ways that simultaneously maintain coherence without requiring shoveling GB or TB of data over the WAN. And that doesn't even begin to address cultural change. It'll be a long, long ride.

  2. April 5th, 2011 at 15:34 | #2

    I think ultimately cloudbursting will be feasible technically across platforms in the long term; what I alluded to when I said "Cloud bursting — today — is simply a marketing term." At the point at which it will be feasible, one has to wonder if it will be relevant, especially given the crawl up the stack to PaaS…

  3. April 5th, 2011 at 17:48 | #3

    One place that cloud bursting might just work is where companies already have a grid for HPC stuff

  4. April 5th, 2011 at 18:23 | #4

    @Chris Swan

    Hey Chris, I don't disagree — in fact, that's what I meant (above) where I said:

    "Short of workloads such as HPC in which scale really is a major concern, if a large enterprise has gone through all of the issues relevant to running tier-1 applications in a public cloud, why on earth would you “burst” to the public cloud versus execute on a strategy that has those workloads run there in the first place."

    I think that's a perfectly reasonable use case.

  5. April 19th, 2011 at 05:47 | #5

    I've been saying "cloudbursting" is bullshit since I first heard the term — thanks for taking the time to eloquently explain why.

    The real world (meteorological) definition of a "cloudburst", courtesy Wikipedia, is apt:

    "A cloudburst is an extreme amount of precipitation, sometimes with hail and thunder, which normally lasts no longer than a few minutes but is capable of creating flood conditions."

    I'd argue it gives rise to a more appropriate definition of "cloudburst" in the context of cloud computing:

    "A cloudburst occurs under extreme load, which normally lasts no longer than a few minutes but is capable of causing severe outages."

  6. April 19th, 2011 at 06:00 | #6


    I think we are overcomplicating a matter that may turn to be an order of magnitude simpler than this.

    One scenario off the top of my head:

    Enterprise has a limited amount of test/dev resources. They NEED to keep local resources because some of the test/development requires using sensitive data, and for compliance they need to run "privately". They do not, however, always spend cycles on that infrastructure, so when they don't, other non-sensitive test/dev workloads can benefit from the spare capacity. When there is contention, you cloudburst the non-sensitive workloads. You can't get rid of the local capacity (it's needed for the sensitive workloads), nor can you buy infinite capacity to run everything (chances are it would be idle most of the time).

    I can think of others but I have a conf call in 1 minute….
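    The test/dev scenario above amounts to the overflow model with one extra constraint: sensitive workloads are pinned on-prem for compliance, and only non-sensitive work may burst under contention. A minimal sketch (illustrative Python, all names hypothetical):

```python
# Illustrative sketch of the compliance-constrained burst scenario:
# sensitive workloads must stay private (queueing if capacity is short),
# while non-sensitive ones may burst to public cloud under contention.

def place(sensitive: bool, required: int, local_free: int) -> str:
    """Return where a test/dev workload lands under this policy."""
    if sensitive:
        if required > local_free:
            return "queue-locally"  # compliance forbids bursting
        return "local"
    # Non-sensitive: use spare local capacity first, burst on contention.
    if required <= local_free:
        return "local"
    return "public-cloud"
```

    Even this toy version shows the point of the post: as soon as placement depends on anything beyond free capacity (here, a single compliance flag), "bursting" stops being a capacity decision and becomes a policy and architecture decision.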
