
Archive for 2009

Hoff’s (Still) For Hire: There’s Only So Many Honey-Do’s I can Do’s…

April 15th, 2009

 

Update: Since I posted this in February, I’ve had some awesome opportunities arise but I haven’t yet secured my dream job, so I thought I’d repost this prior to the RSA Security show next week.

I’ll be keynoting at the America’s Growth Capital Information Security Conference as well as speaking numerous times at RSA.  You can reach me in any of the ways listed below.

The last two years have been a blast but all things must come to an end.

At the conclusion of March, I am moving on to newer pastures.  Where that is may be up to you.
I am exploring all options with a focus on traditional security roles including CISO/CSO, but I’d prefer architect/evangelist/CTO roles that focus more on virtualization and Cloud Computing security.  Start-ups, Up-Starts or large companies are all game.

If you’ve got an opportunity that you think we’d both be a match for, feel free to reach out.  

A dose of reality: If you’re not serious about envelope pushing, thought/industry leadership, world domination and unabashed enthusiasm sprinkled with rational pragmatism, I’m not your guy…

My LinkedIn profile is here.  My email is here.  You can reach my call router at +1.978.631.0302.  You can find me on Twitter here: @beaker

Thanks,

/Hoff

Categories: Career

Private Clouds: Even A Blind Squirrel Finds A Nut Once In A While

April 12th, 2009

Over the last month it’s been gratifying to watch the “mainstream” IT press provide more substantive coverage of the emergence and acceptance of Private Clouds after their earlier, relatively dismissive stance.

I think this has a lot to do with the stabilization of definitions and applications of Cloud Computing and its service variants, as well as the realities of Cloud adoption in large enterprises and the timing it involves.

To me, Private Clouds represent the natural progression toward wider-scale Cloud adoption for larger enterprises with sunk costs and investments in existing infrastructure, and they have always meant more than simply “Amazon-izing your Intranet.”  Private Clouds offer larger enterprises a logical, sustainable and intelligent path forward from the virtualization and automation initiatives they already have in play.

I think my definition a few months ago was still a little rough, but it gets the noodle churning:

Private clouds are about extending the enterprise to leverage infrastructure that makes use of cloud computing capabilities and is not (only) about internally locating the resources used to provide service.  It’s also not an all-or-nothing proposition.

It occurs to me that private clouds make a ton of sense as an enabler for enterprises who want to take advantage of cloud computing for any of the oft-cited reasons, but are loath to (or unable to) surrender their infrastructure and applications without sufficient control.  Private clouds mean that an enterprise can decide how, and how much, of the infrastructure can/should be maintained as a non-cloud operational concern versus how much can benefit from the cloud.

Private clouds make a ton of sense; they provide the economic benefits of outsourced, scalable infrastructure that does not require capital outlay; the needed control over that infrastructure combined with the ability to replicate existing topologies and platforms; and ultimately the portability of applications and workflow.  These capabilities may eliminate the re-writing and/or re-engineering of applications that is often required when moving to a typical IaaS (Infrastructure as a Service) player such as Amazon.

From a security perspective — which is very much my focus — private clouds provide me with a way of articulating and expressing the value of cloud computing while still enabling me to manage risk to an acceptable level as chartered by my mandate.

Here are some of the blog entries I’ve written on Private Clouds. I go into reasonable detail in my “Frogs Who Desired a King” Cloud Security presentation.  James Urquhart’s got some doozies, too.  Here’s a great one.  Chuck Hollis has been pretty vocal on the subject.

My Google Reader has no less than 10 articles on Private Clouds in the last day or so including an interesting one featuring GE’s initiative over the next three years.

I hope the dialog continues and we can continue to make headway in arriving at common language and set of use cases, but as I discovered a couple of weeks ago, in my post titled “The Vagaries Of Cloudcabulary: Why Public, Private, Internal & External Definitions Don’t Work…”, the definition of Private Cloud is the most variable of all and promotes the most contentious of debates:

[Image: the HPPIE definitions table from that post]

Private Clouds seem to validate the promise of what the real-time infrastructure/adaptive enterprise visions painted many years ago, with the potential for even more scale and control.  The intersection of virtualization, automation, Cloud and converged and unified computing is making sure of that.

/Hoff

Categories: Cloud Computing, Cloud Security

Does Cloud Infrastructure Matter? You Bet Your Ass(ets) It Does!

April 8th, 2009

James Urquhart wrote a great blog today titled “The new cloud infrastructure: do you care?” in which he says:

…if you are a consumer of cloud-based resources, the mantra has long been that you can simply deploy or consume your applications/services without any regard to the infrastructure on which they are being hosted. A very cool concept for an application developer, to be sure, but I think it’s a mistake to ignore what lies under the hood.

At the very least, the future of hardware ought to touch the inner geek in all of us.

What is happening in data center infrastructure is a complete rethinking of the architectures utilized to deliver online services, from the overall data center architectures all the way down to the very components that serve the “big four” elements of the data center: facilities, servers, storage and networking.

Amen!

While James’ post focused mostly on how the underlying compute platforms are changing such as his illustration with Cisco’s UCS, Rackable’s C2 and Google’s custom machines, this trend will expand up and down the infrastructure stack.

From a technologist’s or architect’s perspective, what powers the underlying Cloud infrastructure is really important.  As James alludes to, issues of interoperability can and will be impacted by the underlying platforms upon which the abstracted application resources sit.  This may sound contentious from the PaaS and SaaS perspective, but not so from that of IaaS; after all, the “I” in IaaS stands for infrastructure.

I made this point recently from a security perspective in my blog post titled “The Cloud Is a Fickle Mistress: DDoS&M…”  wherein I said:

We’re told we shouldn’t have to worry about the underlying infrastructure with Cloud, that it’s abstracted and someone else’s problem to manage…until it’s not.

…or here in Cloud Catastrophes (Cloudtastophes?) Caused by Clueless Caretakers?:

The abstraction of infrastructure and democratization of applications and data that Cloud Computing services can bring does not mean that all services are created equal.  It does not make our services or information more secure (or less for that matter.)  Just because a vendor brands themselves as a “Cloud” provider does not mean that “their” infrastructure is any more implicitly reliable, stable or resilient than traditional infrastructure or that proper enterprise architecture as it relates to people, process and technology is in place.  How the infrastructure is built and maintained is just as important as ever.

What we’ll also see is that even though we’re not supposed to care what our Cloud providers’ infrastructure is powered by and how, we absolutely will in the long term, and the vendors know it.  This is where people start to freak out about how standards and consolidation will kill innovation in the space, but it’s also where the realities of running a business come crashing down on early adopters.  Large enterprises will move to providers who can demonstrate that their services are solid, by way of co-branding with the reputation of the providers of infrastructure coupled with compliance with “standards.”

Remember the “Cisco Powered Network” program?  How about a “Cisco Powered Cloud?”  See how GoGrid advertises that their load balancers are F5?

In the long term, like the Capital One credit card commercials that challenge you to ask “What’s in your wallet?”, you can expect to start asking the same thing about your Cloud providers’ offerings.

So, depending on what you do and what you need, your choice of provider — and what sits under their hood — may matter a ton.

/Hoff

Categories: Cloud Computing, Cloud Security

Google’s Updated App Engine – “Secure” Data Connector: Your Firewall Means Nothing (Again)

April 8th, 2009

This will be a quickie.  

This is such a juicy topic and really merits a ton more than just a mention, but unfortunately, I’m out of time.

Google’s latest updates to the Google App Engine platform have all sorts of interesting functionality:

  • Access to firewalled data: grant policy-controlled access to your data behind the firewall.
  • Cron support: schedule tasks like report generation or DB clean-up at an interval of your choosing.
  • Database import: move GBs of data easily into your App Engine app. Matching export capabilities are coming soon, hopefully within a month.

To me, the most interesting is the first item above…Google Apps access to information behind corporate firewalls*

From a Cloud interoperability and integration perspective, this is fantastic.  From a security perspective, I am as intrigued and concerned as I am any time I hear “access internal data from an external service.”

The capability to gain access to internal data is provided by the Secure Data Connector.  You can find reasonably detailed information about it here.

Here’s how it works:

SDC forms an encrypted connection between your data and Google Apps. SDC lets you control who in your domain can access which resources using Google Apps.

SDC works with Google Apps to provide data connectivity and enable IT administrators to control the data and services that are accessible in Google Apps. With SDC, you can build private gadgets, spreadsheets, and applications that interact with your existing corporate systems.

The following illustration shows SDC connection components.

Secure Data Connector Components

The steps are:

  1. Google Apps forwards authorized data requests from users who are within the Google Apps domain to the Google tunnel protocol servers.
  2. The tunnel servers validate that a user is authorized to make the request to the specified resource. Google tunnel servers are connected by an encrypted tunnel to SDC, which runs within a company’s internal network.
  3. The tunnel protocol allows SDC to connect to a Google tunnel server, authenticate, and encrypt the data that flows across the Internet.
  4. SDC uses resource rules to validate if a user is authorized to make a request to a specified resource.
  5. An optional intranet firewall can be used to provide extra network security.
  6. SDC performs a network request to the specified resource or services.
  7. The service verifies the signed requests and if the user is authorized, returns the data.

From a security perspective, access control and confidentiality are provided by filters, resource rules, and SSL/TLS encrypted tunnels.  We’ll take this apart in detail (as time permits) later.
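To make that flow concrete, here’s a tiny sketch in Python.  To be clear, the rule structure and helper names below are my own invention for illustration; this is not Google’s actual SDC resource-rule schema or API:

```python
# Illustrative sketch only: models steps 4-7 above. The rule format here is
# hypothetical, NOT Google's actual SDC resource-rule schema.
from dataclasses import dataclass
from fnmatch import fnmatch

@dataclass
class ResourceRule:
    url_pattern: str    # internal resource(s) the rule covers
    allowed_users: set  # Google Apps identities permitted access

RULES = [
    ResourceRule("http://intranet.example.com/reports/*", {"alice@example.com"}),
    ResourceRule("http://crm.example.com/api/*",
                 {"alice@example.com", "bob@example.com"}),
]

def fetch_internal(url: str) -> str:
    # Stand-in for step 6: the network request to the specified resource.
    return "200 OK (payload from %s)" % url

def handle_tunnel_request(user: str, url: str) -> str:
    """Step 4: validate the user against resource rules; steps 6-7: fetch."""
    if any(fnmatch(url, rule.url_pattern) and user in rule.allowed_users
           for rule in RULES):
        return fetch_internal(url)
    return "403 Forbidden"

print(handle_tunnel_request("alice@example.com",
                            "http://intranet.example.com/reports/q1"))  # 200
print(handle_tunnel_request("mallory@example.com",
                            "http://crm.example.com/api/customers"))    # 403
```

The security of the whole arrangement hinges on how carefully those rules are authored and audited: one overly broad wildcard and your “internal” data is a lot less internal.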

In the meantime, here’s a link to the SDC Security guide for developers.

…and no, your firewall likely won’t help save you (again.)

At least I won’t be bored now.

/Hoff

* The database import/export is profound also. Craig Balding followed up with his OAuth-focused commentary here.

Categories: Cloud Computing, Cloud Security, Google

Pimping My Friends: One Of My Favorite NonCons – Troopers

April 8th, 2009

One of my favorite international security conferences is happening April 22nd/23rd in Munich, Germany. It’s run by my good friend Enno Rey and his team at ERNW:

TROOPERS09 – WHAT IS IT?
Troopers09 is an international IT-Security Conference on the 22nd and 23rd of April 2009 in Munich, Germany. This event is created for CISOs, ISOs, IT-Auditors, IT-Sec-Admins, IT-Sec Consultants and everyone who is involved with IT-Security on a professional basis. The goal is to share in-depth knowledge about the aspects of attacking and defending information technology infrastructure and applications. The featured presentations and demonstrations represent the latest discoveries and developments of the global hacking scene and will provide the audience with valuable practical know-how.

Troopers09 is hosted by ERNW GmbH, an independent IT-Security consultancy from Heidelberg, Germany. In the past years, speakers from ERNW were invited all around the world to present their latest IT-Sec research results and to share their knowledge within the global hacking community. With this global experience in mind ERNW decided to launch an international conference in Germany in 2008. After last year’s success of Troopers08 we’re thrilled to do it again. Once more it’s going to be an event unlike all other „Security Conferences“ we have seen in Germany so far: No product presentations, no marketing blabla, no bull*ht-bingo – just pure practical IT-Security. Real answers and practical benefits to meet today’s and tomorrow’s threats.

Troopers08 was a fantastic event, so I can only imagine that this year’s will be just as good if not better.

Check it out here.

/Hoff

Categories: Security Conferences

HyTrust: An Elegant Solution To a Messy Problem

April 6th, 2009

I had a pre-release briefing with the folks from HyTrust on Friday and was impressed with their solution.  I had previously met with the VCs within whose portfolio HyTrust sits, and they were bullish on the team and technology approach.  Here’s why.

“Security” solutions in virtualized environments are becoming less about “pure” security functions like firewalls and IDP and much more about increasing the management and visibility of virtualization and keeping pace with the velocity of change, configuration control and compliance.  I’ve talked about that a lot recently.

HyTrust approaches this problem in a very elegant manner. Their approach is based on the old adage “you cannot manage that which you cannot see.”  

In the case of VMware, there are numerous vectors for managing and configuring the platform, from the various host and platform management interfaces to the guests and virtual networking components.

There are many tools on the market which address these issues.  Reflex, Third Brigade and Catbird come to mind, with the last being the most similar.

The difference between HyTrust and their competitors is how they integrate their solution to provide visibility and protect the management network.  

HyTrust’s answer is to both physically and logically sit in front of the virtualization platform management network and actually proxy each configuration request, whether that’s an SSH session to the service console or a VirtualCenter configuration change through the GUI.

These requests are mapped to roles which are in turn authenticated against an enterprise’s Active Directory service, so fine-grained, role-based access to specific functions can be enforced via templates.  Further, since every request is proxied, logging is robust and can be mapped back directly to a single user.

The policy engine and templates appear quite easy to use given the demo I saw and the logging and reporting looks good.

Actions that violate policy can be allowed or denied, and can either be simply logged or even remediated should a violation occur.
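For illustration’s sake, here’s a minimal sketch of that proxy-and-authorize pattern in Python.  HyTrust hasn’t published their internals, so the role names, directory stub and template format below are all hypothetical:

```python
# Illustrative sketch only: not HyTrust's implementation. It models the
# pattern described above: every management request is proxied, mapped to a
# directory-backed role, checked against a policy template, and logged so
# each action ties back to a single named user.
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("mgmt-proxy")

# Hypothetical role templates: which operations each role may perform.
ROLE_TEMPLATES = {
    "vm_operator": {"power_on_vm", "power_off_vm"},
    "net_admin": {"power_on_vm", "modify_vswitch"},
}

# Stand-in for the Active Directory lookup (user -> role).
DIRECTORY = {"alice": "vm_operator", "bob": "net_admin"}

def proxy_request(user: str, operation: str, target: str) -> bool:
    """Authenticate the user, map to a role, enforce the template, and log."""
    role = DIRECTORY.get(user)
    allowed = role is not None and operation in ROLE_TEMPLATES.get(role, set())
    log.info("user=%s role=%s op=%s target=%s allowed=%s",
             user, role, operation, target, allowed)
    # A real appliance would forward the session to the management interface
    # if allowed, and block (or remediate) the action if not.
    return allowed

proxy_request("alice", "modify_vswitch", "esx01")  # denied, and logged
proxy_request("bob", "modify_vswitch", "esx01")    # allowed, and logged
```

The win of this design is that nothing touches the management plane except through the choke point, which is also exactly why the caveat below about single points of failure matters.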

This centralized approach is very elegant.  It has its downsides, of course, inasmuch as it becomes a single point of failure, so performance and high availability deserve close attention.

The HyTrust offering will be available as both a hardware appliance and a virtual appliance.  They will also release what they call a FREE “Community Edition,” which is a full-featured version but is limited to securing three VMware ESX hosts.

Check them out here.

/Hoff

Categories: Virtualization Security, VMware

The Vagaries Of Cloudcabulary: Why Public, Private, Internal & External Definitions Don’t Work…

April 5th, 2009

Updated again at 13:43 EST – Please see bottom of post

Hybrid, Public, Private, Internal and External.

The HPPIE model; you’ve heard these terms used to describe and define the various types of Cloud.

What’s always disturbed me about using these terms singularly is that, separately, they actually address scenarios that are orthogonal, and yet they are often used to compare and contrast one service/offering to another.

The short story: Hybrid, Public, and Private denote ownership and governance whilst Internal and External denote location.

The longer story: Hybrid, Public, Private, Internal and External seek to summarily describe no less than five different issues and categorize a cloud service/offering into one dumbed-down term for convenience.  In terms of a Cloud service/offering, using one of the HPPIE labels actually attempts to address in one word:

  1. Who manages it
  2. Who owns it
  3. Where it’s located
  4. Who has access to it
  5. How it’s accessed

That’s a pretty tall order.  I know we’re aiming for simplicity in description by using a label analogous to LAN, WAN, Intranet or Internet, but unfortunately what we’re often describing here is evolving to be much more complex.

Don’t get me wrong, I’m not aiming for precision, but rather accuracy.  I don’t find that these labels do a good enough job when used by themselves.
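To illustrate the point, here’s a quick sketch in Python that treats each of those five issues, plus the SPI delivery model, as an independent attribute rather than collapsing them into a single label.  The field names and values are mine, purely for illustration:

```python
# A sketch of the point above: one HPPIE label flattens at least five
# orthogonal attributes. Field names and values are illustrative only.
from dataclasses import dataclass

@dataclass
class CloudOffering:
    name: str
    delivery_model: str  # SPI: "SaaS" | "PaaS" | "IaaS"
    managed_by: str      # 1. who manages it
    owned_by: str        # 2. who owns it
    location: str        # 3. where it's located: "internal" | "external"
    accessible_by: str   # 4. who has access to it
    access_method: str   # 5. how it's accessed: browser, API, etc.

# One word ("Public") has to stand in for all of these at once:
ec2 = CloudOffering(
    name="Amazon EC2",
    delivery_model="IaaS",
    managed_by="Amazon",
    owned_by="Amazon",
    location="external",
    accessible_by="anyone with an account",
    access_method="API",
)
print(ec2)
```

Two offerings can share the “Public” label and still differ on nearly every one of these fields, which is exactly the problem.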

Further, you’ll find most people using the service deployment models (Hybrid, Public, Private) in the absence of the service delivery models (SPI – SaaS/PaaS/IaaS), while at the same time intertwining the location of the asset (internal, external), usually relative to a perimeter firewall (more on this in another post.)

This really lends itself to confusion.

I’m not looking to rename the HPPIE terms.  I am looking to use them more accurately.

Here’s a contentious example.  I maintain you can have an IaaS service that is Public and Internal.  WHAT!?  HOW!?

Let’s take a look at a summary table I built to think through use cases by looking at the three service deployment models (Hybrid, Public and Private):

The HPPIE Table

THIS TABLE IS DEPRECATED – PLEASE SEE UPDATE BELOW!

The blue separators in the table designate derivative service offerings and not just a simple and/or; they represent an actual branching of the offering.

Back to my contentious example, wherein I maintain you can have an IaaS offering which is Public and yet also Internal.  Again: how?

Remember how I said “Hybrid, Public, and Private denote ownership and governance whilst Internal and External denote location?”  That location refers both to the physical location of the asset and to the logical location relative to an organization’s management umbrella, which includes operations, security, compliance, etc.

Thus, if you look at a managed infrastructure service (name one) that utilizes Cloud Computing principles, there’s no reason a third-party MSP could not deploy said service internally, on customer-premises equipment which the third party owns but operates and manages on behalf of an organization, bringing the scale and pay-by-use model of Cloud internally, with access from trusted OR untrusted sources, is there?
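Expressed with the illustrative structure from the sketch above, that example might look like this (again, hypothetical names and values):

```python
# The "Public yet Internal" example above, in the earlier (illustrative)
# CloudOffering structure: third-party owned and managed, but on-premises.
managed_onprem = CloudOffering(
    name="third-party managed on-premise IaaS",
    delivery_model="IaaS",
    managed_by="third-party MSP",
    owned_by="third-party MSP",
    location="internal",  # deployed on customer premises
    accessible_by="trusted OR untrusted sources",
    access_method="browser or API",
)
```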

Some might call it a perversion of the term “Public.”  I highlight it to illustrate that “Public” is a crappy word for the example, because just as it’s valid in this example, it’s equally valid to suggest that Amazon’s EC2 can also share the “Public” moniker, despite being External.

In the same light, one can easily derive examples of SaaS:Private:Internal offerings…You see my problem with these terms?

Moreover, the “consumer” focus of the traditional HPPIE models means that using broad terms like these generally implies that people are describing access to a service/offering by a human operating a web browser, and does not take into account access to services/offerings via things like APIs or programmatic interfaces.

This is a little goofy, too.  I don’t generally use a web browser (directly) to access Amazon’s S3 Storage-as-a-Service offering, just like I don’t use a web browser to make API calls in Google Gears.  Other non-interactive elements of the AppStack do that.

I don’t expect people to stop using these dumbed-down definitions, but this is why it makes me nuts when people compare “Private” Cloud offerings with “Internal” ones.  It’s like comparing apples and buffalo.

What I want is for people to at least not include Internal and External as Cloud models, but rather use them as parameters like I have in the table above.

Does this make any sense to you?


Update: In a great set of discussions regarding this on Twitter with @jamesurquhart from Cisco and @zhenjl from VMware, @zhenjl came up with a really poignant solution to the issues surrounding the redefinition of Public Clouds and their ability to be deployed “internally.”  His idea, which addresses the “third party managed” example I gave, is to add a new category/class called “Managed,” which is essentially the example I highlighted in boldface above:

[Image: the proposed “Managed” cloud category]

This means that we would modify the table above to look more like this (updated again based on feedback on Twitter & comments).  It was ultimately revised as part of the work I did for the Cloud Security Alliance in alignment with the NIST model, abandoning the “Managed” section:

Revised Model

This preserves the notion of how people generally define “Public” clouds, but also carves out a critical distinction for what amounts to managed Cloud services provided by third parties using infrastructure/services located on-premises.  It also still allows for the notion of Private Clouds, which are distinct.

Thoughts?

Categories: Cloud Computing, Cloud Security

The Cloud Is a Fickle Mistress: DDoS&M…

April 2nd, 2009

It’s interesting to see how people react when they are reminded that the “Cloud” still depends upon much of the same infrastructure and underlying protocols that we have been using for years.

BGP, DNS, VPNs, routers, switches, firewalls…

While it’s fun to talk about new attack vectors and sexy exploits, it’s the oldies and goodies that will come back to haunt us:

Simplexity

Building more and more of our business’ ability to remain a going concern on infrastructure that was never designed to support it is a scary proposition.  We’re certainly being afforded more opportunity to fix some of these problems as the technology improves, but it’s a patching solution to an endemic problem, I’m afraid.  We’ve got two ways to look at Cloud:

  • Skipping over the problems we have and “fixing” crappy infrastructure and applications by simply adding mobility and orchestration to move around an issue, or
  • Actually starting to use Cloud as a forcing function to fundamentally change the way we think about, architect, deploy and manage our computing capabilities in a more resilient, reliable and secure fashion

If I were a betting man…

Remember that just because it’s in the “Cloud” doesn’t mean someone’s sprinkled magic invincibility dust on your AppStack…

That web service still has IP addresses and open sockets.  It still gets transported over MANY levels of shared infrastructure, from the telcos to the DNS infrastructure…you’re always at someone else’s mercy.

Dan Kaminsky has done a fabulous job reminding us of that.

A more poignant reminder of our dependency on the Same Old Stuff™ is the recent DDoS attack against Cloud provider GoGrid:

ONGOING DDoS ATTACK

Our network is currently the target of a large, distributed DDoS attack that began on Monday afternoon.   We took action all day yesterday to mitigate the impact of the attack, and its targets, so that we could restore service to GoGrid customers.  Things were stabilized by 4 PM PDT and most customer servers were back online, although some of you continued to experience intermittent loss in network connectivity.

This is an unfortunate thing.  It’s also a good illustration of the sorts of things you ought to ask your Cloud service providers about.  With whom do they peer?  What is their bandwidth?  How many datacenters do they have, and where?  What DoS/DDoS countermeasures do they have in place?  Have they actually dealt with this before?  Do they drill disaster scenarios like this?

We’re told we shouldn’t have to worry about the underlying infrastructure with Cloud, that it’s abstracted and someone else’s problem to manage…until it’s not.

This is where engineering, architecture and security meet the road.  Your provider’s ability to sustain an attack like this is critical.  Further, how you’ve designed your BCP/DR contingency plans is pretty important, too.  Until we get true portability/interoperability between Cloud providers, it’s still up to you to figure out how to make this all work.  Remember that when you’re assuming those TCO calculations accurately reflect reality.

Big providers like eBay, Amazon, and Microsoft invest huge sums of money and manpower to ensure they are as survivable as they can be during attacks like this.  Do you?  Does your Cloud provider?  How many providers do you have?

Again, even Amazon goes down.  At this point, it’s largely been operational issues on their end and not the result of a massive attack. Imagine, however, if someday it is.  What would that mean to you?

As more and more of our applications and information are moved from inside our networks to beyond the firewalls and exposed to a larger audience (or even co-mingled with others’ data), the need for innovation and advancement in security is only going to skyrocket as we start to deal with many of these problems.

/Hoff

Categories: Cloud Computing, Cloud Security

Introducing the Cloud Security Alliance

March 31st, 2009

I’m a founding member of, and serve as the technical advisor for, the Cloud Security Alliance (CSA).  This is an organization you may not have heard of yet, so I wanted to introduce you.

The more formal definition of the role and goals of the CSA appears below, but it’s most easily described as a member-driven forum for both providers and “consumers” of Cloud Computing services to discuss issues and opportunities for security in this emerging space, and to help craft awareness, guidance and best practices for secure Cloud adoption.  It’s not a standards body.  It’s not a secret cabal of industry-only players shuffling for position.

It’s a good mix of vendors, practitioners and interested parties who are concerned with framing the most pressing concerns related to Cloud security and working together to bring ideas to life on how we can address them. 

From the website, here’s the more formal definition:

The CSA is a non-profit organization formed to promote the use of best practices for providing security assurance within Cloud Computing, and provide education on the uses of Cloud Computing to help secure all other forms of computing.

The Cloud Security Alliance is comprised of many subject matter experts from a wide variety of disciplines, united in our objectives:

  • Promote a common level of understanding between the consumers and providers of cloud computing regarding the necessary security requirements and attestation of assurance.
  • Promote independent research into best practices for cloud computing security.
  • Launch awareness campaigns and educational programs on the appropriate uses of cloud computing and cloud security solutions.
  • Create consensus lists of issues and guidance for cloud security assurance.

The Cloud Security Alliance will be launched at the RSA Conference 2009 in San Francisco, April 20-24, 2009.

It’s clear that people will likely draw parallels between the CSA and the Open Cloud Manifesto given the recent announcement of the latter.  

The key difference between the two efforts relates to the CSA’s engagement and membership by both providers and consumers of Cloud services, and the organized non-profit structure of the CSA.  The groups are complementary in nature and goals.

You can see who is participating in the CSA now based upon the pre-release of the working draft of our initial whitepaper.  Full attribution of company affiliation will be posted as the website is updated:

 

Co-Founders

Nils Puhlmann
Jim Reavis

Founding Members and Contributors

Todd Barbee
Alan Boehme
Jon Callas
Sean Catlett
Shawn Chaput
Dave Cullinane
Ken Fauth
Pam Fusco
Francoise Gilbert
Christofer Hoff
Dennis Hurst
Michael Johnson
Shail Khiyara
Subra Kumaraswamy
Paul Kurtz
Mark Leary
Liam Lynch
Tim Mather
Scott Matsumoto
Luis Morales
Dave Morrow
Izak Mutlu
Jean Pawluk
George Reese
Jeff Reich
Jeffrey Ritter
Ward Spangenberg
Jeff Spivey
Michael Sutton
Lynn Terwoerds
Dave Tyson
John Viega
Dov Yoran
Josh Zachry

Founding Charter Companies

PGP, Qualys, Zscaler

If you’d like to get involved, here’s how:

Individuals

Individuals with an interest in cloud computing and expertise to help make it more secure receive a complimentary individual membership based on a minimum level of participation. If you are interested in becoming a member, apply to join our LinkedIn Group.

Affiliates

Not-for-profit associations and industry groups may form an affiliate partnership with the Cloud Security Alliance to collaborate on initiatives of mutual concern. Contact us at affiliates@cloudsecurityalliance.org for more information.

Corporate

Information on corporate memberships and sponsorship programs will be available soon. Contact info@cloudsecurityalliance.org for more information.

/Hoff


Meditating On the Manifesto: It’s Good To Be King…

March 29th, 2009

By now you’ve heard of ManifestoGate, no?  If not, click on that link and read all about it as James Urquhart does a nice job summarizing it all.

In the face of all of this controversy, tonight Reuven Cohen twittered that the opencloudmanifesto.org website was live.

So I moseyed over to take a look at the promised list of supporters of said manifesto, since I’ve been waiting for a definition of the “we” who developed/support it.

It’s a very interesting list.

There are lots of players. Some of them are just starting to bring their Cloud visions forward.

But clearly there are some noticeable absences, namely Google, Microsoft, Salesforce and Amazon — the four largest established Cloud players in the Cloudusphere.

I think it’s been said in so many words before, but let me make it perfectly clear why, despite the rhetoric both acute and fluffy from both sides, these four Cloud giants aren’t listed as supporters.

Here are the listed principles of the Open Cloud from the manifesto itself:

Of course, many clouds will continue to be different in a number of important ways, providing unique value for organizations. It is not our intention to define standards for every capability in the cloud and create a single homogeneous cloud environment. Rather, as cloud computing matures, there are several key principles that must be followed to ensure the cloud is open and delivers the choice, flexibility and agility organizations demand:

  1. Cloud providers must work together to ensure that the challenges to cloud adoption (security, integration, portability, interoperability, governance/management, metering/monitoring) are addressed through open collaboration and the appropriate use of standards.

  2. Cloud providers must not use their market position to lock customers into their particular platforms and limit their choice of providers.

  3. Cloud providers must use and adopt existing standards wherever appropriate. The IT industry has invested heavily in existing standards and standards organizations; there is no need to duplicate or reinvent them.

  4. When new standards (or adjustments to existing standards) are needed, we must be judicious and pragmatic to avoid creating too many standards. We must ensure that standards promote innovation and do not inhibit it.

  5. Any community effort around the open cloud should be driven by customer needs, not merely the technical needs of cloud providers, and should be tested or verified against real customer requirements.

  6. Cloud computing standards organizations, advocacy groups, and communities should work together and stay coordinated, making sure that efforts do not conflict or overlap.
Fact is, from a customer’s point of view, I find all of these principles agreeable, and despite it being called a manifesto, I could see using it as a nice set of discussion points with which to chat about my needs from the Cloud.  It’s interesting to note that, given the audience as stated in the manifesto, the only listed supporters are vendors and not “customers.”

I think the more discussion we have on the matter, the better.  Personally, I grok and support the principles herein.  I’m sure this point will be missed as I play devil’s advocate, but so be it.  

However, from the “nice theory, wrong universe” vendor’s point-of-view, why/how could I sign it?

See #2 above?  It relates to exactly the point made by James when he said “Those who have publicly stated that they won’t sign have the most to lose.”

Yes they do.  And the last time I looked, all four of them have notions of what the Cloud ought to be, and how and to what degree it ought to interoperate and with whom.

I certainly expect they will leverage every ounce of “lock-in” (er, enhanced customer experience through a tightly-coupled relationship) they can muster, and capitalize on the de facto versus de jure “standardization” that naturally occurs in a free market when you’re in the top 4.  Someone telling me I ought to sign a document to the contrary would likely not get offered a free coffee at the company cafe.

Trying to socialize (in every meaning of the word) goodness works wonders if you’re a kibbutz.  With billions up for grabs in a technology land-grab, not so much.

This is where the ever-hopeful consumer, the idealist integrator, and the vendor-realist personalities in me begin to battle.

Oh, you should hear the voices in my head…

/Hoff

Categories: Cloud Computing