
Inter-Cloud Rock, Paper, Scissors: Service Brokers, Semantic Web or APIs?

A very interesting philosophical and market-trajectory arms race is quietly ramping up while the rest of the world tries to piece together how the Kindle will kill Cloud Computing and how Twitter already has.

As @Jamesurquhart and I spend our time exploring the longer-term evolution of Cloud Computing, we end up in orbit around the notion of the Inter-Cloud (or Intercloud, or InterCloud).

Inter-Cloud represents one vision of how Clouds of many types will interoperate, federate and provide for workload portability, as well as how those that provide these services and those that consume them will interact.  You can see an interesting summary of these issues in a colleague’s post titled “From India to Intercloud.”

In the broadest sense, Cloud is being positioned in the long term to allow for true utility.  This means that, at a 30,000-foot view, consumers should be able to declare their business and technology requirements for workloads or application needs and TAMO! (then a miracle occurs) the workload or application presents itself operating somewhere that meets those needs, backed up by some form of attestation by the provider. Ultimately, I’d like to see a common way of auditing and validating those attestations.  Apropos of this discussion, I bring up the notion of an API 😉

This all seems like a deceptively simple scenario.  Realistically, it represents a monstrous challenge in execution.  To wit, in Reuven Cohen’s recent write-up (“The Inter-Cloud and the Cloud of Clouds”) he quotes Vint Cerf’s framing of the problem at hand:

“…each cloud is a system unto itself. There is no way to express the idea of exchanging information between distinct computing clouds because there is no way to express the idea of “another cloud.” Nor is there any way to describe the information that is to be exchanged. Moreover, if the information contained in one computing cloud is protected from access by any but authorized users, there is no way to express how that protection is provided and how information about it should be propagated to another cloud when the data is transferred.”

There’s a giant sucking sound coming from the Cloudosphere…

The market is essentially rotating around three ways of describing a solution to this problem:

  1. Consumers of service declare their requirements using some methodology for doing so, either directly to trusted, discrete service providers or via an intermediary, a “service broker.”  In the service broker case, it’s the broker’s job to take these declarations of service definition (service contracts) and translate them across subscribing service providers, each of whom may have its own proprietary interface.  This is starting to heat up: we already have players emerging in this space, and analyst groups are picking up interest (Yankee, Gartner).  It would be much better if there were an open and standardized way of ensuring that all providers used the same common interface and the same way of attesting to service contract satisfaction/compliance, which leads to…
  2. There’s the notion of the “semantic” exchange of information between Clouds positioned by folks like Sir Tim Berners-Lee (in reference to Cerf’s quote above): “…by semantically linking data, we are able to create “the missing part of the vocabulary needed to interconnect computing clouds. The semantics of data and of the actions one can take on the data, and the vocabulary in which these actions are expressed appear to constitute the beginning of an inter-cloud computing language.” Capitalizing on Berners-Lee’s definition of the Semantic Web as “a vision of information that is understandable by computers, so that they can perform more of the tedious work involved in finding, sharing and combining information on the web,” we see how this approach would play well into the service broker model as well.

  3. We’ve seen a lot of noise around using one or more APIs — open or proprietary — that allow for individual Cloud operation, management, assurance and governance, however nuanced those functions may be.  Open-sourced or not, and even with unifying management interfaces available such as libcloud, each Cloud vendor today sees its capability for management and streamlined operations as its first layer of competitive differentiation, and individual APIs — even when abstracted through service brokers — are a way to move offerings forward while working toward emerging open standards.
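To make the broker translation in option 1 concrete, here is a minimal sketch in Python. Everything in it (the contract fields, the hypothetical providers AcmeCloud and NimbusCloud, and their method names) is invented for illustration; a real broker would target actual provider APIs, or an abstraction layer such as libcloud.

```python
from dataclasses import dataclass

# A common service contract the consumer declares.
# Field names are illustrative, not from any real standard.
@dataclass
class ServiceContract:
    cpu_cores: int
    memory_gb: int
    geo_region: str

# Two hypothetical providers, each with its own proprietary interface.
class AcmeCloud:
    regions = {"us-east", "eu-west"}
    def launch_instance(self, vcpus, ram_mb, zone):
        return f"acme:{vcpus}x{ram_mb}MB@{zone}"

class NimbusCloud:
    regions = {"us-east", "ap-south"}
    def create_vm(self, spec):
        return f"nimbus:{spec['cores']}x{spec['mem_gb']}GB@{spec['region']}"

# The broker translates one contract into each provider's dialect and
# places the workload with a provider that can satisfy it.
class ServiceBroker:
    def __init__(self, providers):
        self.providers = providers

    def place(self, contract):
        for p in self.providers:
            if contract.geo_region not in p.regions:
                continue  # provider cannot satisfy the contract
            if isinstance(p, AcmeCloud):
                return p.launch_instance(contract.cpu_cores,
                                         contract.memory_gb * 1024,
                                         contract.geo_region)
            if isinstance(p, NimbusCloud):
                return p.create_vm({"cores": contract.cpu_cores,
                                    "mem_gb": contract.memory_gb,
                                    "region": contract.geo_region})
        raise LookupError("no provider satisfies the contract")

broker = ServiceBroker([AcmeCloud(), NimbusCloud()])
print(broker.place(ServiceContract(2, 4, "ap-south")))
# Only NimbusCloud serves ap-south, so the broker translates the
# contract into Nimbus's create_vm() dialect.
```

Note that the consumer only ever speaks the contract; the per-provider translation (and the attestation problem the post raises) is hidden entirely inside the broker.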

Honestly, my bet is that this arms race will net out such that we’ll end up with some combination of all three.

This isn’t as simple as it first sounded, especially when we throw in the definitional differences between workload portability and interoperability alluded to by all three approaches.

Add packaging elements such as OVF and the problem starts expanding into a very complex multi-dimensional issue very quickly.

Workload portability using common packaging formats (such as OVF) can be leaned upon to show how providers might deal with the “lock-in” argument (you can move from my competitor to me), but true interoperability is the real challenge here.
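To see why packaging helps the portability half of the story, here is a toy example that parses a stripped-down OVF-style descriptor using only Python’s standard library. The namespace is the real DMTF OVF 1.0 envelope namespace, but the descriptor itself is heavily simplified; a real OVF package also carries disk sections, virtual hardware resource items, and more.

```python
import xml.etree.ElementTree as ET

# DMTF OVF 1.x envelope namespace.
OVF = "http://schemas.dmtf.org/ovf/envelope/1"

# A deliberately minimal OVF-style descriptor for a single workload.
descriptor = f"""\
<Envelope xmlns="{OVF}" xmlns:ovf="{OVF}">
  <References>
    <File ovf:id="disk1" ovf:href="webapp-disk1.vmdk"/>
  </References>
  <VirtualSystem ovf:id="webapp">
    <Info>A portable web application workload</Info>
  </VirtualSystem>
</Envelope>
"""

root = ET.fromstring(descriptor)

# Any provider that understands the envelope can recover the workload
# identity and the disk images it must import -- that is the
# portability claim in a nutshell.
system = root.find(f"{{{OVF}}}VirtualSystem")
system_id = system.get(f"{{{OVF}}}id")
files = [f.get(f"{{{OVF}}}href") for f in root.iter(f"{{{OVF}}}File")]

print(system_id, files)
```

The point of the exercise: a common package format answers “can I carry this workload to another provider?” but says nothing about how two running clouds exchange state, policy or attestation, which is exactly the interoperability gap the post describes.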

Reuven said it very well: “...what the world needs is not yet another API to control the finer nuances of a physical or virtual infrastructure but instead a way for that infrastructure to communicate with other clouds around it regardless of what it is. The biggest hurdle to cloud interoperability appears to have very little to do with a willingness for cloud vendors to create open cloud API’s but instead the willingness to provide the ability for these clouds to effectively inter-operate with one another. More simply the capability to work alongside other cloud platforms in an open way.”

Here’s how I see Inter-Cloud playing out: in the short term we’ll need the innovators to push with their own APIs; in the mid-stream, service brokers will abstract those APIs on behalf of consumers; and in the long term we’ll arrive at a common, open and standardized way of solving the problem — a semantic capability that gives consumers the fluidity and agility to take advantage of whichever model works best for their particular needs.
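As a rough sketch of what “semantically linking data” across clouds might look like in that end state, here is a toy triple store in plain Python. The vocabulary terms (hostedOn, federatesWith, requires, enforces) are made up for illustration; a real inter-cloud vocabulary would be a standardized ontology along the lines of RDF/OWL, which is what Berners-Lee’s remarks point toward.

```python
# Tiny triple store illustrating the "inter-cloud vocabulary" idea:
# once data, policy, and relationships are expressed as
# subject-predicate-object statements, one cloud can reason about
# "another cloud" -- the very concept Cerf says we cannot yet express.
triples = {
    ("workload:crm", "hostedOn", "cloud:A"),
    ("workload:crm", "requires", "policy:encrypted-at-rest"),
    ("cloud:A", "federatesWith", "cloud:B"),
    ("cloud:A", "federatesWith", "cloud:C"),
    ("cloud:B", "enforces", "policy:encrypted-at-rest"),
}

def objects(subject, predicate):
    """All objects o such that (subject, predicate, o) is asserted."""
    return {o for s, p, o in triples if s == subject and p == predicate}

def migration_targets(workload, home):
    """Federated clouds that enforce every policy the workload requires."""
    required = objects(workload, "requires")
    return {c for c in objects(home, "federatesWith")
            if required <= objects(c, "enforces")}

print(migration_targets("workload:crm", "cloud:A"))
# cloud:C federates with cloud:A but asserts no encryption policy,
# so only cloud:B qualifies as a migration target.
```

Note how the protection question from Cerf’s quote falls out naturally: because the policy is a first-class statement rather than a hidden provider setting, it can be propagated and checked when the workload moves.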



  1. July 27th, 2009 at 07:10 | #1

    All I'm going to say is:-

    Paul Baran -> packet switching -> multiple network protocols (ipx / spx, decnet, sna etc) -> bridges (ipx/spx to tcp/ip) -> standardisation (tcp/ip as defacto market adopted standard regardless of standards body) -> standards bodies capitulate -> explosion in innovation.

    Do we really need to do the mess of protocols, bridges, imposed standard etc before we get on with just adopting the defacto standard as the standard?

  2. July 27th, 2009 at 07:28 | #2

    I think you're spot on. Right now all of the innovators have different APIs because they all have somewhat different technology behind the API, even if at the end of the day they all deliver "compute infrastructure". We're going to end up with a mix of all 3 if we move forward (and I really hope we do, and will try to help us get there), or we'll end up in "standardization paralysis" for years, all trying to agree.

    The closest historical parallel to this is probably in the networking space where every vendor had their own SNMP MIBs for configuration of routers/switches that at the end of the day all provided "networking infrastructure" — now all of the cloud providers have a "SOAP/ReST API" for configuration but each of them vary in specific functions/methods.

    The SNMP MIBs were never standardized — still to this day Cisco and Juniper both have different proprietary MIBs and then monitoring/management tool vendors have become "service brokers" allowing you to manage a mixed network with a single set of processes and tools.

    Regarding the comment from Simon Wardley: We're very early in the cloud computing life-cycle as a technology. Picking the "defacto standard" today and running with it forever would be akin to picking DECnet as the networking protocol, Twinax for cable infrastructure, etc., in the evolution of previous technologies because they were the early "defacto standards". We should continue to innovate and iterate to come up with a much better cloud than we have today.

    Bret Piatt / @bpiatt

    Rackspace Hosting

  3. Arthur
    July 27th, 2009 at 11:20 | #3

    I'm going to go with what Simon was saying but with a slightly different angle:

    EDI->Web Services->Inter-Cloud Data Interchange Services

  4. July 28th, 2009 at 03:45 | #4

    I'll come at the question from a linguistics/theory-of-communication angle, since effectively what we are talking about here is logical resources having a directed conversation of some sort. There clearly have to be some conventions about what the semantics are at a high level, but how the entities communicate has to be driven by context. Who you are speaking to clearly defines how you speak. In the same way we use 'domain-specific' syntax and semantics, I'd expect that we'll see the evolution of inter-cloud domains with standardized conventions (e.g. healthcare inter-clouds) before there is true consensus around interoperability. I agree with Simon's observation, but it seems like we have some time for the AWS/Google/vendor tussle to play out to see what API emerges to be the API that unites us all.

  5. July 28th, 2009 at 06:22 | #5

    Just because it could get rationalized and streamlined in such a way doesn't mean it will.

    Sure there are some technical considerations involved: APIs, catalogs (http://stage.vambenepe.com/archives/889), semweb ontologies (glad to see you mention this), etc…

    But in the end you have a bunch of companies elbowing one another for the most profitable position and this, more than any new interop specification, is what is going to define the landscape.

    Let's walk before we run. First give me a way to use my Sprint phone on the Verizon network. Then you show me how to dynamically broker my business apps, ok? 😉

  6. July 29th, 2009 at 08:45 | #6

    Brokers as a business seem to come up every time a new tech paradigm rolls out, and hey, let's face it –

    any problem can be solved by adding a layer of indirection.

    My guess though is that brokers must play a role like availability & perf (Akamai) or stronger identity (Ping).

    Also they may be buses not brokers…in any case I think brokers won't really exist in a standalone abstract context per se but rather in the context of the value they are adding.

  7. August 27th, 2009 at 07:19 | #7

    While there may be a role for service brokers, it's not obvious to me what the incentive would be to include a middle person in the equation. Vint makes a good point, Tim responded appropriately, but as you point out, crafting and approving standards doesn't a standard make; adoption does, and adoption is a far more complex process, including extension of 'false' standards (or attempts thereof) with market power, innovative new products that attract critical mass, and the energy output of customers given their choices (or lack thereof).

