
Cloud Computing, Open* and the Integrator’s Dilemma

April 11th, 2011

My esteemed co-tormentor of Twitter, Christian Reilly (@reillyusa), did a fantastic job of describing the impact — or, more specifically, the potential lack thereof — of Facebook’s OpenCompute initiative on the typical enterprise as compared to the real target audience, service providers and the manufacturers of equipment for service providers:

…I genuinely believe that for traditional service providers who are making investments in new areas and offerings, XaaS providers, OEM hardware vendors and those with plans to become giants in the next generation(s) of Systems Integrators that the OpenCompute project is a huge step forward and will be a fantastic success story over the next few years as the community and its innovations grow and tangible benefits emerge.

I think Christian has it dead on; the trickle-down effect, with large service providers leveraging innovation in facility and compute construction to squeeze maximum cost efficiencies (based on power, density, cooling, and space) from their services, will be good for everyone, but it’s quite important to recognize why and how:

…consider that today’s public cloud services and co-location providers are today’s equivalent of commercial airlines, providing their own multi-tenant services, price structures and user experiences on top of just a handful of airframe and engine manufacturers. OpenCompute has the potential to influence the efficiency and effectiveness of those manufacturers by helping to contribute towards ideas and potentially standards that can be adopted across the industry.

Specific to the adoption of OpenCompute as an enterprise blueprint, he widened the bifurcation between “private clouds operated by service providers as public clouds” (my words) and “private clouds operated by enterprises for their own use” with a telling analog:

Bottom line? To today’s large corporate IT shops; those who either have, or will continue to operate on-premise or co-located “private cloud” environments, the excitement levels around the OpenCompute project (if anyone actually hears of it at all) will be all-too-familiarly low as sadly, to wake some of these sleeping giants, it will take more than a poke from the very same company whose website their IT teams are trying to prevent employees from accessing.

This is the point of departure for OpenCompute — it’s not framed for or designed for enterprise consumption.  In an altogether fascinating description of why Facebook open-sourced its data center design, the Huffington Post summarized it thus:

“[The Open Compute Project] really is a big deal because it constitutes a general shift in terms of how we look at technology as a competitive advantage,” O’Grady said. “For Facebook, the evidence is piling up that they don’t consider technology to be a competitive advantage. They view their competitive advantage in the marketplace to be their users.”

Here we see the general abstraction of technology in line with Nick Carr’s premise that “IT Doesn’t Matter”:

“Sharing its blueprints may gain Facebook not only free manpower, but cheaper equipment. The company’s bet, analysts say, is that giving away intellectual property will help it foster an ecosystem of competing vendors that will drive down the cost of parts.”

With that in mind, I am just as worried about the fate of OpenStack and how its enterprise versus service provider audiences will perceive it as they watch the mad scramble by tech companies to add value and get a seat at the table.

Each of these well-intentioned projects is curated by public cloud operators and technology vendors and is indirectly positioned for the benefit of enterprises, but not really meant for their consumption — at least not without putting enterprises right back where they were trying to escape from in the first place with cloud computing: the integrator’s dilemma.

If you look at the underlying premise of OpenStack — its modularity, flexibility and open design — what you get is the ability to craft a solution finely tuned to an operating environment of your own design. Integrate solutions into the stack as you see fit. Contribute code. Develop an ecosystem. Integrate, manage, maintain…

This is as much a problem as it is a solution for an enterprise. This is why, in many cases, enterprises choose a single vendor with a single neck to choke, avoiding the need to act as an integrator in the first place, or simply look to outsource to one or more public cloud providers and sidestep the issue altogether.

Chances are, most are realistically caught somewhere in the nether-regions between the two.

I wish to make it clear that I am very much a proponent of Open*, but I realize that the lack of direct enterprise involvement in standards bodies and “open” initiatives, along with a reluctance to share information and experience for fear of losing competitive advantage, is what drives enterprises to Closed* in the first place. They want to lessen their development and integration burdens, and the Lego/Erector-set approach in many ways scares conservative, risk-averse CxOs away from projects like this.

I think this is where we’ll see more of these “clouds in a box” paired with managed services to keep them humming, regardless of where they live. [See infrastructure solutions from Dell, VCE, HP, Oracle, etc., paired with “Cloud” distributions layered atop.]

Let’s hope we see enterprise success stories built on leveraging OpenCompute and OpenStack…it will be good for all of us.

/Hoff

Update: I just saw that my colleague, James Urquhart, wrote a blog post titled “Cloud disrupts, creates channel opportunities” in which he details the channel’s role in this integration challenge. Spot on.


Dear SaaS Vendors: If Cloud Is The Way Forward & Companies Shouldn’t Spend $ On Privately-Operated Infrastructure, When Are You Moving Yours To Amazon Web Services?

April 30th, 2010

We’re told repeatedly by Software as a Service (SaaS)* vendors that infrastructure is irrelevant, that CapEx spending is for fools and that Cloud Computing has fundamentally changed the way we will, forever, consume computing resources.

Why is it then that many of the largest SaaS providers on the planet (including firms like Salesforce.com, Twitter, Facebook, etc.) continue to build their software and choose to run it in their own datacenters on their own infrastructure? In fact, many of them are on a tear involving multi-hundred-million-dollar (read: infrastructure) private datacenter build-outs.

I mean, SaaS is all about the software and service delivery, right? IaaS/PaaS is the perfect vehicle for the delivery of scalable software, right? So why do you continue to try to convince *us* to move our software to you and yet *you* don’t/won’t/can’t move your software to someone else like AWS?

Hypocricloud: SaaS firms telling us we’re backwards for investing in infrastructure when they don’t eat the dog food they’re dispensing (AKA we’ll build private clouds and operate them, but tell you they’re a bad idea, in order to provide public cloud offerings to you…)

Quid pro quo, Agent Starling.

/Hoff

* I originally addressed this to Salesforce.com via Twitter in response to Peter Coffee’s blog here but repurposed the title to apply to SaaS vendors in general.
