Hack The Stack Or Go On a Bender With a Vendor?
I have the privilege of being invited around the world to talk with (and, more importantly, listen to) some of the biggest governments, enterprises and service providers about their “journey to cloud computing.”
I feel a bit like Kwai Chang Caine from the old series Kung-Fu at times; I wander about blind but full of self-assured answers to the questions I seek to ask, only to realize that asking them is more important than knowing the answer — and that’s the point. Most people know the answers, they just don’t know how — or which — questions to ask.
Yes, it’s a Friday. I always get a little philosophical on Fridays.
In the midst of all this buzz and churn, there’s a lot of talk but depending upon the timezone and what dialect of IT is spoken, not necessarily a lot of compelling action. Frankly, there’s a lot of analysis paralysis as companies turn inward to ask questions of themselves about what cloud computing does or does not mean to them. (Ed: This comment seemed to suggest to some that cloud adoption was stalled. Not what I meant. I’ll clarify by suggesting that there is brisk uptake in many areas, but it’s diversified, split between many parallel paths I reference below; public and private deployments. It doesn’t mean it’s harmonious, however.)
There is, however, a recurring theme across geography, market segment, culture and technology adoption appetites; everyone is seriously weighing their options regarding where, how and with whom to make their investments in terms of building cloud computing infrastructure (and often platform) as-a-service strategy. The two options, often discussed in parallel but ultimately bifurcated based upon the use cases explored, come down simply to this:
- Take any number of available open core or open source software-driven cloud stacks, commodity hardware and essentially engineer your own Amazon, or
- Use proprietary or closed source virtualization-née-cloud software stacks, high-end “enterprise” or “carrier-class” converged compute/network/storage fabrics and ride the roadmap of the vendors
One option means you expect to commit to an intense amount of engineering and development from a software perspective; the other means you expect to focus on integration of other companies’ solutions. Depending upon geography, it’s very, very unclear to enterprises or service providers which is the most cost-effective and risk-balanced route when use-cases, viability of solution providers and the ultimate consumers of these use-cases are conflated.
There is no one-size-fits-all solution. There is no “THE Cloud.”
This realization is why most companies are spinning around, investigating the myriad of options they have available and the market is trying to sort itself out, polarized at one end of the spectrum or trying to squeeze out a happy balance somewhere in the middle.
The default position for many is to go with what they know and “bolt on” new technology tactically (in the absence of an actual long-term strategy) to revamp what they already have.
This is where the battle between “public” versus “private” cloud rages — where depending upon which side of the line you stand, the former heralds the “new” realized model of utility computing and the latter is seen as building upon virtualization and process automation to get more agile. Both are realistically approaching a meet-in-the-middle strategy as frustration mounts, but it’s hard to really get anyone to agree on what that is. That’s why we have descriptions like “hybrid” or “virtual private” clouds.
The underlying focus for this discussion is, as one might imagine, economics. What architects (note I didn’t say developers*) quickly arrive at is that this is very much a “squeezing the balloon problem.” Both of these choices hold promise and generally cause copious amounts of iteration and passionate debate centered on topics like feature agility, compliance, liability, robustness, service levels, security, lock-in, utility and fungibility of the solutions. But it always comes back to cost.
Hard costs are attractive targets that are easily understood and highly visible. Soft costs are what kill you. The models by which the activity and operational flow-through — and ultimate P&L accountability of IT — are still black magic.
The challenge is how those costs are ultimately modeled and accounted for and how to appropriately manage risk. Nobody wants the IT equivalent of credit-default swaps where investments are predicated on a house of cards and hand-waving and at the same time, nobody wants to be the guy whose obituary reads “didn’t get fired for buying IBM.”
Interestingly, the oft-cited simplicity of the “CapEx vs. OpEx” discussion isn’t so simple in hundred-year-old companies whose culture is predicated upon the existence of processes and procedures whose ebb and flow quite literally exist on the back of TPM reports. You’d think, from the way many of these solutions are marketed — both #1 and #2 above — that we’ve reached some sort of capability/maturity-model inflection point where either is out of diapers.
If this were the case, these debates wouldn’t happen and I wouldn’t be writing this blog. There are many, many tradeoffs to be made here. It’s not a simple exercise, no matter who it is you ask — vendors excluded.
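To show why the “CapEx vs. OpEx” conversation is less simple than the marketing suggests, here’s a minimal break-even sketch. Every number in it is a hypothetical assumption for illustration — not vendor pricing — and it deliberately ignores the soft costs (people, process, compliance) that, as noted above, are what actually kill you:

```python
# Illustrative only: all dollar figures below are hypothetical
# assumptions, not real vendor or cloud-provider pricing.

def breakeven_months(capex: float, own_monthly: float, rent_monthly: float) -> float:
    """Months until owning gear (capex up front + own_monthly to run it)
    costs less in total than renting equivalent capacity (rent_monthly)."""
    if rent_monthly <= own_monthly:
        # Renting is cheaper per month too, so owning never breaks even.
        return float("inf")
    return capex / (rent_monthly - own_monthly)

# Hypothetical: $250k of converged-fabric gear vs. $18k/mo of rented
# capacity, with $8k/mo to power, cool and staff the owned gear.
months = breakeven_months(capex=250_000, own_monthly=8_000, rent_monthly=18_000)
print(f"Break-even after {months:.0f} months")  # → Break-even after 25 months
```

The point of the sketch is the sensitivity, not the answer: nudge the operational run-rate assumption up by a few thousand dollars a month and the break-even horizon stretches past any hardware refresh cycle, which is exactly why these debates never resolve on hard costs alone.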
Ultimately these discussions — and where these large companies and service providers with existing investment in all sorts of solutions (including previous generations of things now called cloud) are deciding to invest in the short term — come down to the following approaches to dealing with “rolling your own” or “integrating pre-packaged solutions”:
- Keep a watchful eye on the likes of mass-market commodity cloud providers such as Amazon and Google. Use (enterprise) and/or emulate the capabilities (enterprise and service providers) of these companies in opportunistic and low-risk engagements which distribute/mitigate risk by targeting non-critical applications and information in these services. Move for short-term success while couching wholesale swings in strategy with “pragmatic” or guarded optimism.
- Distract from the back-end fracas by focusing on the consumption models driven by the consumerization of IT that LOB and end users often define as cloud. In other words, give people iPhones, use SaaS services that enrich user experience, don’t invest in any internal infrastructure to deliver services and call it a success while trying to figure out what all this really means, long term.
- Stand up pilot projects which allow dabbling in both approaches to see where the organizational, operational, cultural and technological landmines are buried. Experiment with various vendors’ areas of expertise and functionality based upon the feature/compliance/cost see-saw.
- Focus on core competencies and start building/deploying the first iterations of “infrastructure 2.0” with converged fabrics and vendor-allied pre-validated hardware/software, vote with dollars on cloud stack adoption, contribute to the emergence/adoption of “standards” based upon use and quite literally *hope* that common formats/packaging/protocols will arrive at portability and ultimately interoperability of these deployment models.
- Drive down costs and push back by threatening proprietary hardware/software vendors with the “fact” that open core/open source solutions are more cost-effective/efficient and viable today whilst trying not to flinch when they bring up item #2, questioning where and how you should be investing your money and what your capabilities really are as it relates to development and support. React to that statement by threatening to move all your apps atop someone else’s infrastructure. Try not to flinch again when you’re reminded that compliance, security, SLA’s and legal requirements will prevent that. Rinse, lather, repeat.
- Ride out the compliance, security, trust and chasm-crossing comfort gaps, hedging bets.
If you haven’t figured it out by now, it’s messy.
If I had to bet which will win, I’d put my money on…<carrier lost>
*Check out Bernard Golden’s really good post “The Truth About What Really Runs On Amazon” for some insight as to *who* and *what* is running in public clouds like AWS. The developers are leading the charge. Oftentimes they are disconnected from the processes I discuss above, but that’s another problem entirely, innit?