
Hack The Stack Or Go On a Bender With a Vendor?

September 24th, 2010

I have the privilege of being invited around the world to talk with (and more importantly) listen to some of the biggest governments, enterprises and service providers about their “journey to cloud computing.”

I feel a bit like Kwai Chang Caine from the old series Kung Fu at times; I wander about blind but full of self-assured answers to the questions I seek to ask, only to realize that asking them is more important than knowing the answers — and that’s the point.  Most people know the answers; they just don’t know how — or which — questions to ask.

Yes, it’s a Friday.  I always get a little philosophical on Fridays.

In the midst of all this buzz and churn, there’s a lot of talk but, depending upon the timezone and which dialect of IT is spoken, not necessarily a lot of compelling action.  Frankly, there’s a lot of analysis paralysis as companies turn inward to ask questions of themselves about what cloud computing does or does not mean to them. (Ed: This comment seemed to suggest to some that cloud adoption was stalled. Not what I meant. I’ll clarify by suggesting that there is brisk uptake in many areas, but it’s diversified, split between the many parallel paths I reference below: public and private deployments. It doesn’t mean it’s harmonious, however.)

There is, however, a recurring theme across geography, market segment, culture and technology adoption appetite: everyone is seriously weighing their options regarding where, how and with whom to invest in building a cloud computing infrastructure (and often platform) as-a-service strategy.  The two options, often discussed in parallel but ultimately bifurcated based upon the use cases explored, come down simply to this:

  1. Take any number of the available open core or open source cloud software stacks plus commodity hardware and essentially engineer your own Amazon (a rough sketch of what that looks like follows this list), or
  2. Use proprietary or closed source virtualization-née-cloud software stacks and high-end “enterprise” or “carrier-class” converged compute/network/storage fabrics, and ride the roadmap of the vendors.
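
To make option #1 a bit less abstract: several of the open stacks in this space (Eucalyptus being the canonical example) expose EC2-compatible APIs, so the same client code you would point at Amazon can drive a home-built cloud. Here is a minimal sketch, assuming a hypothetical Eucalyptus-style endpoint with placeholder credentials and image ID; illustrative only, not a deployment guide:

    # Minimal sketch: drive an EC2-API-compatible private cloud with stock boto,
    # exactly as you would drive AWS itself. The endpoint, port, path, image ID
    # and credentials below are hypothetical placeholders.
    import boto
    from boto.ec2.regioninfo import RegionInfo

    private_cloud = RegionInfo(name="my-private-cloud",
                               endpoint="cloud.internal.example.com")

    conn = boto.connect_ec2(aws_access_key_id="YOUR-ACCESS-KEY",
                            aws_secret_access_key="YOUR-SECRET-KEY",
                            is_secure=False,
                            region=private_cloud,
                            port=8773,                    # Eucalyptus-style default
                            path="/services/Eucalyptus")

    # Launch a small instance from a locally registered machine image.
    reservation = conn.run_instances("emi-12345678", instance_type="m1.small")
    print(reservation.instances[0].id)

The point isn’t the dozen lines of Python; it’s everything behind that endpoint, which is the part you sign up to engineer and operate yourself under option #1.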

One option means you expect to commit to an intense amount of engineering and development from a software perspective; the other means you expect to focus on integrating other companies’ solutions.  Depending upon geography, it’s very, very unclear to enterprises or service providers which route is the most cost-effective and risk-balanced when use cases, the viability of solution providers and the ultimate consumers of those use cases are conflated.

There is no one-size-fits-all solution.  There is no “THE Cloud.”

This realization is why most companies are spinning around, investigating the myriad options available to them, while the market tries to sort itself out, polarized toward one end of the spectrum or the other, or trying to squeeze out a happy balance somewhere in the middle.

The default position for many is to go with what they know and tactically “bolt on” new technology (in the absence of an actual long-term strategy) to revamp what they already have.

This is where the battle between “public” and “private” cloud rages — where, depending upon which side of the line you stand on, the former heralds the “new,” finally realized model of utility computing and the latter is seen as building upon virtualization and process automation to get more agile.  Both camps are realistically approaching a meet-in-the-middle strategy as frustration mounts, but it’s hard to get anyone to agree on what that actually is.  That’s why we have descriptions like “hybrid” or “virtual private” clouds.

The underlying focus of this discussion is, as one might imagine, economics.  What architects (note I didn’t say developers*) quickly arrive at is that this is very much a “squeezing the balloon” problem.  Both of these choices hold promise, and both generate copious amounts of iteration and passionate debate centered on topics like feature agility, compliance, liability, robustness, service levels, security, lock-in, utility and fungibility of the solutions.  But it always comes back to cost.

Hard costs are attractive targets that are easily understood and highly visible.  Soft costs are what kill you.  The models by which operational activity and flow-through — and the ultimate P&L accountability of IT — are accounted for are still black magic.
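
To make the hard-versus-soft-cost point concrete, here is a deliberately naive back-of-envelope comparison. Every figure in it is hypothetical and for illustration only; the punchline is that the answer swings on the soft-cost lines, which are exactly the numbers most shops cannot actually produce.

    # Deliberately naive 3-year comparison: build-and-own vs. rent-by-the-hour.
    # Every figure is a made-up placeholder; the point is that the soft-cost
    # lines (operations, integration, compliance, people) dominate the answer
    # and are precisely the numbers most shops can't produce with confidence.

    HOURS_PER_YEAR = 24 * 365
    YEARS = 3

    # Option A: own it (CapEx up front, plus soft costs forever)
    servers = 40
    capex_per_server = 6000.0           # hardware, fully amortized over 3 years
    dc_cost_per_server_year = 1500.0    # power, space, cooling
    own_soft_cost_per_year = 300000.0   # ops staff, tooling, audits: the black magic

    own = (servers * capex_per_server
           + servers * dc_cost_per_server_year * YEARS
           + own_soft_cost_per_year * YEARS)

    # Option B: rent it (OpEx per instance-hour, plus its own soft costs)
    instances = 40
    rate_per_hour = 0.34                # hypothetical on-demand rate
    rent_soft_cost_per_year = 300000.0  # integration, governance, re-training

    rent = (instances * rate_per_hour * HOURS_PER_YEAR * YEARS
            + rent_soft_cost_per_year * YEARS)

    print("own:  %12.0f" % own)   # -> 1,320,000 with these placeholders
    print("rent: %12.0f" % rent)  # -> 1,257,408 with these placeholders
    # Shave 25% off the "own" soft-cost line, or add 25% to the rented one,
    # and the winner flips without touching a single hardware or instance price.

That, in a toy script, is the whole “squeezing the balloon” problem: the visible numbers are easy, and the numbers that decide the outcome are the ones nobody can defend.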

The challenge is how those costs are ultimately modeled and accounted for, and how to appropriately manage risk. Nobody wants the IT equivalent of credit-default swaps, where investments are predicated on a house of cards and hand-waving; at the same time, nobody wants to be the guy whose obituary reads “didn’t get fired for buying IBM.”

Interestingly, the oft-cited simplicity of the “CapEx vs. OpEx” discussion isn’t so simple in hundred-year-old companies whose culture is predicated upon the existence of processes and procedures whose ebb and flow quite literally exist on the back of TPM reports.  You’d think, given the way many of these solutions are marketed — both #1 and #2 above — that we’ve reached some sort of capability/maturity-model inflection point where either is out of diapers.

If this were the case, these debates wouldn’t happen and I wouldn’t be writing this blog.  There are many, many tradeoffs to be made here. It’s not a simple exercise, no matter who it is you ask — vendors excluded 😉

Ultimately these discussions — and where these large companies and service providers with existing investment in all sorts of solutions (including previous generations of things now called cloud) are deciding to invest in the short term — come down to the following approaches to dealing with “rolling your own” or “integrating pre-packaged solutions”:

  1. Keep a watchful eye on the likes of mass-market commodity cloud providers such as Amazon and Google. Use them (enterprises) and/or emulate their capabilities (enterprises and service providers) in opportunistic, low-risk engagements that mitigate risk by placing only non-critical applications and information in these services.  Move for short-term success while couching wholesale swings in strategy in “pragmatic” or guarded optimism.
  2. Distract from the back-end fracas by focusing on the consumption models driven by the consumerization of IT that LOB and end users often define as cloud.  In other words, give people iPhones, use SaaS services that enrich user experience, don’t invest in any internal infrastructure to deliver services and call it a success while trying to figure out what all this really means, long term.
  3. Stand up pilot projects that allow dabbling in both approaches to see where the organizational, operational, cultural and technological landmines are buried.  Experiment with various vendors’ areas of expertise and functionality based upon the feature/compliance/cost see-saw (a throwaway scorecard sketch follows this list).
  4. Focus on core competencies and start building/deploying the first iterations of “infrastructure 2.0” with converged fabrics and vendor-allied, pre-validated hardware/software; vote with dollars on cloud stack adoption; contribute to the emergence/adoption of “standards” based upon use; and quite literally *hope* that common formats/packaging/protocols will deliver portability and, ultimately, interoperability across these deployment models.
  5. Drive down costs and push back by threatening proprietary hardware/software vendors with the “fact” that open core/open source solutions are more cost-effective/efficient and viable today, whilst trying not to flinch when they bring up item #2 and question where and how you should be investing your money and what your capabilities really are as they relate to development and support.  React to that statement by threatening to move all your apps atop someone else’s infrastructure. Try not to flinch again when you’re reminded that compliance, security, SLAs and legal requirements will prevent that.  Rinse, lather, repeat.
  6. Ride out the compliance, security, trust and chasm-crossing comfort gaps, hedging bets.
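
Aside on approach #3: in practice the feature/compliance/cost see-saw usually gets captured in some kind of weighted scorecard during the pilots. Here is a throwaway sketch of that idea; the criteria, weights and scores are all invented placeholders, not an assessment of any real stack or vendor.

    # Throwaway scorecard for pilot-project bake-offs: weight the criteria you
    # actually care about, score each candidate stack 1-5, and watch how hard
    # the ranking swings when the weights move. All names and numbers below
    # are invented placeholders.

    weights = {"features": 0.25, "compliance": 0.30, "cost": 0.30, "lock_in": 0.15}

    candidates = {
        "open-source-stack": {"features": 3, "compliance": 2, "cost": 5, "lock_in": 5},
        "vendor-integrated": {"features": 5, "compliance": 4, "cost": 2, "lock_in": 2},
        "public-cloud-only": {"features": 4, "compliance": 2, "cost": 4, "lock_in": 3},
    }

    def score(candidate):
        return sum(weights[c] * candidate[c] for c in weights)

    for name, scores in sorted(candidates.items(), key=lambda kv: -score(kv[1])):
        print("%-20s %.2f" % (name, score(scores)))
    # Shift 10 points of weight from "cost" to "compliance" and the order changes,
    # which is rather the point: the scorecard mostly documents your biases.

It won’t make the decision for you, but it does force the soft criteria onto the same page as the hard ones, which is half the battle.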

If you haven’t figured it out by now, it’s messy.

If I had to bet which will win, I’d put my money on…<carrier lost>

/Hoff

*Check out Bernard Golden’s really good post “The Truth About What Really Runs On Amazon” for some insight as to *who* and *what* is running in public clouds like AWS.  The developers are leading the charge.  Oftentimes they are disconnected from the processes I discuss above, but that’s another problem entirely, innit?

  1. September 24th, 2010 at 07:07 | #1

    Liked your post, as always. Was wondering if you could shed some light on this. From your travels and observations, are you seeing more of long-term strategic thinking about cloud (like "we have this cloud thing, how do we start using it") or are you seeing more of short-term project-oriented thinking (like "I have this new project, it could be done on-premises or in cloud, I am trying to pick the best option regardless of what my long-term strategy is going to be")? Looks to me like making a decision in the latter case is easier.

    OTOH, you probably meet companies on the biggish side, and I won't be surprised if they can't do any project-oriented thinking before doing long-term strategy first…

    • September 24th, 2010 at 07:18 | #2

      Dmitriy:

      Quick answer is: both. If that didn't come through in the post, I need to revisit it.

      Trying to figure out what the long term strategy is and yet very opportunistic on short term new greenfield applications.

  2. September 24th, 2010 at 11:34 | #3

    Cool post, nice ending.

    My feeling is that you will see some combination of #1 and #2 – where organizations want the reliability of "enterprise/carrier class" equipment but also want to reduce costs on the software side and go the open source route. Easier to customize, support costs are relatively lower, and you can introduce some IP into the stack.

    One of the perceived benefits of choosing the "pre-validated hardware/software" package is that a lot of the issues w/interop, performance, support, and scalability (to name a few) have already been worked out. Whether or not those perceived benefits outweigh the concerns (security, viability, lock-in, etc) I think is still unclear.

  3. September 26th, 2010 at 11:51 | #4

    Chris, good post.

    I like the approaches you suggest at the end. I just posted some commentary on cloud adoption in the Fortune 1000 and a poll on what percentage of Fortune 1000 companies have truly implemented a private (within their four walls) cloud infrastructure.

    http://bobolwig.wordpress.com/2010/09/26/has-any-

    I think there is a great deal of deliberation going on within IT organizations and a fair amount of "cloud-fusion" coming from the technology vendors proposing cloud solutions. As my post suggests, the technology vendors are only providing cloud building blocks. IT organizations still need to deal with the economics, manage risk and, ultimately, build and operate the infrastructure.

    Best Regards,

    Bob
