
Private Clouds: Your Definition Sucks

I think we have a failure to communicate…or at least I do.

Tonight I was listening to David Linthicum’s podcast titled “The Harsh Realities Of Private Clouds,” in which he referenced and lauded Dimitry Sotnikov’s blog post titled “No Real Private Clouds Yet?”
I continue to scratch my head, not because of David’s statement that he has yet to find any “killer applications” for Private Clouds, but rather because of the continued unappetizing use of the following definition (quoting Dimitry) of a Private Cloud:

In a nutshell, private clouds are Amazon-like cost-effective and scalable infrastructures but run by companies themselves within their firewalls.

This seems to be in line with Gartner’s view of Private Clouds as well:

The future of corporate IT is in private clouds, flexible computing networks modeled after public providers such as Google and Amazon yet built and managed internally for each business’s users

My issue, again, is that of the referenced location and perimeter.  It’s like we’ve gone back to the ’80s, with our screened-subnet architectural Maginot Lines again!  “This is inside, that is outside.”

That makes absolutely zero sense given the ubiquity, mobility and transitivity of information and platforms today.  I understand the impetus to return to the mainframe in the sky, but c’mon…

For me, I’d take a much more logical and measured approach to this definition. I think there’s a step missing in the definitions above and how Private Clouds really ought to be described and transitioned to.

I think that the definitions above are too narrow and exclusionary when you consider that they omit solutions like GoGrid’s CloudCenter concept — extending your datacenter via VPN onto a cloud IaaS provider whose infrastructure is not yours, but which offers you parity (or acceptable similarity) to your native datacenter in platform, control, policy enforcement, compliance, security and support.
In this scenario, the differentiator between “public” and “private” is then simply a descriptor defining from whom and from where the information and applications running on that cloud may be accessed:

From the “Internet” = Public Cloud.  From the “Intranet” (via a VPN connection between the internal datacenter and the “outsourced” infrastructure) = Private Cloud.
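
To make that access-path rule concrete, here is a minimal sketch; the function and its inputs are my own illustration, not any vendor's API:

```python
# Toy illustration of the access-path definition above; not any vendor's API.
# "Private" here means the resources answer only to the VPN-extended intranet,
# regardless of whose datacenter the hardware sits in.

def cloud_flavor(internet_reachable: bool, vpn_extended_intranet: bool) -> str:
    """Classify a deployment by who can reach it, not by where it runs."""
    if internet_reachable:
        return "public cloud"
    if vpn_extended_intranet:
        return "private cloud"  # e.g., provider IaaS reachable only over the corporate VPN
    return "internal infrastructure"

# A GoGrid CloudCenter-style setup: provider-owned hardware, intranet-only access.
print(cloud_flavor(internet_reachable=False, vpn_extended_intranet=True))
# -> private cloud
```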
Check out James Urquhart’s thoughts along these lines in his post titled “The Argument For Private Clouds.”

Private clouds are about extending the enterprise to leverage infrastructure that makes use of cloud computing capabilities and is not (only) about internally locating the resources used to provide service.  It’s also not an all-or-nothing proposition.

It occurs to me that private clouds make a ton of sense as an enabler to enterprises who want to take advantage of cloud computing for any of the oft-cited reasons, but are loath to (or unable to) surrender their infrastructure and applications without sufficient control.

Private clouds mean that an enterprise can decide how much of its infrastructure can or should be maintained as a non-cloud operational concern versus how much can benefit from the cloud.
Private clouds make a ton of sense: they provide the economic benefits of outsourced, scalable infrastructure that requires no capital outlay; the needed control over that infrastructure, combined with the ability to replicate existing topologies and platforms; and ultimately the portability of applications and workflow.

These capabilities may eliminate the re-writing and/or re-engineering of applications that is often required when moving to a typical IaaS (Infrastructure as a Service) player such as Amazon.
From a security perspective — which is very much my focus — private clouds provide me with a way of articulating and expressing the value of cloud computing while still enabling me to manage risk to an acceptable level as chartered by my mandate.

So why wouldn’t a solution like GoGrid’s CloudCenter offering paired with CohesiveFT’s VPN Cubed and no direct “public” Internet originated access to my resources count as Private Cloud Computing?
I get all the benefits of elasticity, utility billing, storage, etc., don’t have to purchase the hardware, and I decide based upon risk what I am willing to yield to that infrastructure.
[Diagram: CohesiveFT clusters extended across datacenters]
David brought up the notion of proprietary vendor lock-in, and yet we see GoGrid has also open-sourced their CloudCenter API OpenSpec…
Clearly I’m mad because I simply don’t see why folks are painting Private Clouds into a corner only to say that we’re years away from recognizing their utility when in fact we have the technology, business need and capability to deliver them today.
/Hoff
  1. January 30th, 2009 at 19:55 | #1

    I'd rather Gartner et al. use "internal" instead of "private".
    Then.. "private" could mean hosted on an isolated/dedicated physical infrastructure. And "virtual private" could mean hosted on a shared/multi-tenant physical infrastructure but isolated in some virtual-y way.
    Then.. you could have "internal private", "internal virtual private", "external private", "external virtual private".
    Then.. "public" would simply be when it's not your cloud, but someone else's cloud who's services you are using.

  2. January 30th, 2009 at 20:12 | #2

    Yes, that makes a lot of sense. You'll notice in my first blog on the topic I made mention of the fact that:
    "…many of those involved in the debates subtley change the argument from discussing "private clouds" as a service model to instead focus on the location of the infrastructure used to provide service by using wording such as "internal clouds" or "in-house clouds." I believe these are mutually exclusive topics. "
    I like your combinatorial compromise of both logical/physical location and the provider of service and that last line sums it up way, way better than I did.
    Great job with the one-liner! 😉
    /Hoff

  3. January 30th, 2009 at 23:01 | #3

    Hmmm. Comparing public and private clouds to the Internet and Intranet…where have I seen that before? 😉
    Chris, as usual, you are right on the money with this stuff. My own definition of private cloud (with a little help from Chuck Hollis of EMC and others) is rapidly morphing, but now includes a critical term, "trust boundaries", as in:
    "An implementation of cloud architectures that allows enterprises to manage IT resources within virtual trust boundaries, rather than physical ones. The goal of a private cloud is to give enterprise IT and their customers the illusion that they are operating a cloud within their own data center, whether they are leveraging internal or external infrastructures, or a combination of the two. Provides traditional cloud attributes (elasticity, economics, flexibility, geographic relocation, etc.) with the control that IT needs (management, service delivery, security, etc.)."
    James

  4. January 31st, 2009 at 04:29 | #4

    Hoff,
    While I agree that, architecturally and from a model perspective, there is no difference between private and public clouds (I've said so many times), I think that from a legal and compliance perspective the distinction is necessary in order to recognize where demarcation lines of control begin and end.
    It's no different than ensuring that the physicality of hosted applications and services is noted – it's necessary for auditors and internal process folks to understand where that line exists in order to properly implement compliance with regulations as well as understand the risks associated with handing over control of infrastructure to someone else (and how that affects security, data, and access control).
    I agree with you that architecturally there is no difference, and from that point of view there's no need to differentiate, but when we're talking about the business, legal and security aspects of cloud computing – even hybrid models – we may have to be careful about pointing out what is and what is not under the control of the implementer.
    Lori

  5. January 31st, 2009 at 05:05 | #5

    The sad part is that it's ALWAYS been about trust boundaries, James. The problem is that we have not had, and still do not have, the ubiquity of policy and the ability to uniformly apply it. Despite the amorphous perimeter, people continue to draw lines in the sand between inside and outside in order to define and apply controls, because of the limitations of what the standards and technology currently deliver.
    We waffle between trying to play catch-up and investing our efforts in the network. When that doesn't work, we move to the host. When that doesn't work, we try to be information-centric. In between, we focus on the user; then we might move to the application.
    The smallest atomic measure of what is important to us and where it is located changes; we say "it's the information" but here we are again talking about images, VMs or VLANs or Intranets versus Internets…
    We continue to play policy ping-pong up and down the stack by trying to solve our problems at one layer, not recognizing that we need it all.
    If you read Lori's comment above, she basically says that, yes, it shouldn't matter where your information is located from an architectural perspective, but that "…it's necessary for auditors and internal process folks to understand where that line exists in order to properly implement compliance with regulations as well as understand the risks associated with handing over control of infrastructure to someone else (and how that affects security, data, and access control)."
    To me, when done properly, that "line" isn't physical…it can't be. Yet here we are talking about autonomic networks that can self-govern but I have to still distinguish between inside and out?
    I don't understand how we continue to push this vision of virtuality and then artificially limit ourselves to what we can/should/ought to do because the technology doesn't exist!? We know what we need, someone needs to damned well build it!
    /Hoff

  6. January 31st, 2009 at 05:06 | #6

    Lori:
    See my reply to James above. 😉
    Thanks,
    /Hoff

  7. January 31st, 2009 at 06:08 | #7

    OK, wow…um, sorry about that, kids…I didn't have my coffee before I started the comment rant above. The message holds, but the tone? A little grumpy, maybe?
    @james: Sorry about the lack of link love. That "intranet vs. internet" concept was definitely grafted from yours — I added the link/attribution.
    @Lori: I had the same schizophrenic reaction to your post that I have to mine — you have that split-personality problem too, it seems…all your "dynamic infrastructure, free info. love" posts clash with the "yeah, but we have to have those lines of demarcation" comments…DOES NOT COMPUTE! Although it really does, and I understand why you said it — for the same reasons that I do. 😉
    So, seriously, I am TOTALLY psyched about the Cloud because it's really pushing boundaries and getting people to talk and ACT to help solve some of these elemental issues.
    /Hoff

  8. January 31st, 2009 at 11:03 | #8

    Amen, brother Beaker, amen!
    I am working within Cisco to make the focus of ongoing cloud efforts begin with the words "trust" and "opportunity". "Trust" as the first thing enterprises need addressed before they even consider leveraging the economic benefits of cloud technology; "opportunity" as what exists for service providers (both "over the top" and carrier) who adopt technologies that address the former.
    To your point, I don't mind if location is *audited*. However, to define that compliance means that location is *fixed* is archaic, and will change in every case over the next five years. That being said, your observation about technology gaps is well taken, and will need to be aggressively addressed in parallel.
    James

  9. January 31st, 2009 at 11:07 | #9

    I was not in the least offended, but thanks for the link back.
    You are on a roll, dude. Keep up the good work.
    James

  10. mcsilvia
    January 31st, 2009 at 11:15 | #10

    The difference between private and public clouds, for those of us who process huge amounts of relatively static data (and need to do it inexpensively), is BANDWIDTH.
    The pipes to the cloud, wherever it may be, are EXPENSIVE — not Y2K expensive, but a cable modem isn't going to solve my problems.
    At the moment I can't use all the fabulous leftover processing power, because my problem is a storage problem, which quickly turns into a bandwidth problem.
    I have to move the data to manipulate it…and suddenly a 'private' cloud on my super-sized internal pipes is a nifty thing.

  11. January 31st, 2009 at 11:45 | #11

    So riddle me this…
    If it's "relatively static data" that needs to be processed and you can leverage the elasticity combined with lower cost of a private cloud, why wouldn't that (1) offset some of the bandwidth costs (which can also be mitigated easily enough these days) and (2) offset some of the storage costs and (3) allow you to essentially reorganize where you process your data in the first place?
    I'm not really talking about using "left over processing power" but rather ensuring that it gets done in the place most appropriate for it to get done in the first place. Static or not, problems like the one you describe have been solved (not that I claim to know your situation specifically.)
    It's not about "left over processing power" in my mind, it's about rightsizing the pool of resources and allocating them based upon many input factors: cost, bandwidth, latency, processing, storage, etc.
    See, this is the bit that's missing from the "cloud," and isn't there yet in most "virtualized environments," but it is exactly what real-time infrastructure (RTI) is designed to solve.
    The governance/provisioning/automation/orchestration intelligence that is available today, when used in conjunction with virtualization and private clouds, is phenomenal. They just haven't been knitted together yet.
    These are multi-dimensional problems that need to be assessed; you telling me that you don't have enough bandwidth and me telling you that's solvable are sort of silly point/counterpoints, because without instrumentation that can actually and intelligently gauge your entire compute/storage/networking capability/capacity based on policy and business logic, we're pissing in the wind.
    Go take a look at CIRBA as an example of what I mean.
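    Here's a toy sketch of that multi-factor allocation, by the way (every price and size below is invented purely for illustration):

    ```python
    # Purely illustrative placement model; all prices and sizes are invented.
    def monthly_cost(compute_hr, hours, egress_tb, egress_per_tb, storage_tb, storage_per_tb):
        """Total monthly cost of running a workload at a given location."""
        return compute_hr * hours + egress_tb * egress_per_tb + storage_tb * storage_per_tb

    candidates = {
        # location: (compute $/hr, hours, TB moved out, $/TB egress, TB stored, $/TB-month)
        "internal":      (0.25, 720,   0,   0.0, 500, 20.0),  # data is already local
        "private cloud": (0.10, 720, 500, 100.0, 500, 15.0),  # must haul the data over the pipe
    }

    for name, args in candidates.items():
        print(f"{name}: ${monthly_cost(*args):,.0f}/month")
    # Pick the minimum; the "right" location flips as the bandwidth, storage and
    # compute inputs change, which is why instrumentation matters.
    ```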
    /Hoff

  12. mcsilvia
    January 31st, 2009 at 13:22 | #12

    I don't disagree with your points at all, and I can't be more specific about what I do, so it is difficult to make an example.
    What I have internally could be called the beginnings of a 'private cloud'; it's trying to leverage a 'public' cloud that is my problem. Ultimately, you are right that in many cases the business rules and processes are what need to be reorganized.
    For me, the primary problem at the end of the day is that at the moment I can't afford the bandwidth I need. (You say easily mitigated; eh, not so much.) Let's say I 'process' over 500TB a month, and some of the 'processing' requires this 'data' to be copied or moved in an RTI manner.
    So I guess my point was that for some enterprises to take advantage of the 'public' cloud (for anything large) (and my leftover-power comment was a smartass crack /sorry), the price of the pipes to the intertubes will need to continue to fall. Kind of like how, this last spring, storage prices suddenly started to commoditize and fell through the floor.
    That you address the intelligence aspect I think is very important. You are correct that the brains haven't been connected to the virtual body yet, so to speak. Business logic is the king.
    I bring up the bandwidth issue because, as I review the massive amounts of excellent cloud data out there, I don't see that issue being actively addressed. Maybe it's a personal hiccup, but I find the broad or implied assumption that fat pipes are easy to be annoying.

  13. mcsilvia
    January 31st, 2009 at 13:24 | #13

    And I have to say: brilliant response. Without knowing the specifics, you hammered many a nail off the cuff.

  14. January 31st, 2009 at 13:38 | #14

    A cloud is a cloud is a cloud…
    John
    johnmwillis.com

  15. mcsilvia
    January 31st, 2009 at 14:54 | #15

    Yes, from the 50k view and even the 10k. But as a guy who just had his hands in a rack, pulling out a router and a switch to replace them: while the cloud is a cloud is a cloud, I happen to care which one I'm screwing with. The private and public cloud argument is one of organizational hierarchy. As the architect I don't care, as the designer I need to make note of it, as the implementer I need to know.

  16. January 31st, 2009 at 15:00 | #16

    "…I find the broad or implied assumption that fat pipes are easy to be annoying." <– Fair enough. I was trying not to generalize, but I guess I did.
    I didn't want to imply that the Cloud can sprinkle magic pixie dust on all your problems and *poof!* they'll all disappear…we both know that's not true. However, I did want to point out that there are a lot of variables in this mad equation, and sometimes we reduce things to the ridiculous to the point that we don't see the forest for the twigs.
    Good points all around.

  17. mcsilvia
    January 31st, 2009 at 18:08 | #17

    And the fat-pipe snipe wasn't directed at your comment; it just seems to be a commonly missed part of the equation. But then, maybe I'm arguing about 'electricity' now. Power and cooling are assumed in these equations and we don't discuss them; useless to point out potential savings. I'm usually down in the weeds too far anyway. Thanks for the insight.

  18. February 2nd, 2009 at 07:46 | #18

    Peeps…this is Business 101… Private/Internal Clouds just "ain't gonna happen." Imagine, if you will, that you are a CFO. You live your business day to day but plan for the future…you project growth and spending out for a certain future time period based on some mix of models, current data, each org's projections, the sales forecast and a little bit of witchcraft.
    GoGrid, Amazon, Joyent, FlexiScale, Rackspace (and everyone else I unintentionally forgot) are in the business of managing various levels of excess capacity (like an electric company does) and selling those resources. Computing capacity not used by Eli Lilly or Animoto today can be used by someone else, because there is an available, planned pile of excess capacity. Heck, think of it as inventory.
    Back to your dream as a CFO. In your "real side of the business" — let's say you manufacture things for a living — you want to minimize the carrying costs of inventory, so you implement things like JIT/TOC/TQM so that the amount of "time" that "money" is trapped in the product-creation process is as small as possible. You vary your cycle — down to the microsecond, especially if you're Wal-Mart. The last thing you want is a lot of raw materials stacking up in your warehouse, waiting to be used. The carrying costs are expensive, and money is trapped in those raw materials until they can be manufactured.
    Internal/Private clouds are the same thing. Sure, you can do the virtualization thing, which really saves you, the CFO, on power consumption — because that's VMware's ROI message to you, the CFO. Virtualization of existing and new resources makes a lot of sense, because it allows you to minimize the amount of unused computing capacity on blades you already own or might acquire in the near future.
    The power of cloud computing is in the elastic provisioning, not in virtualization per se. (Virtualization just makes it cheaper.) If one day Eli Lilly needs to let scientists fire off 500 instances to crunch numbers, that's possible up on EC2 or GoGrid (with permission) — but not in the enterprise. To provide for elasticity, you, the CFO, would have to have a decent business case for the value of having that excess capacity lying around — and frankly, it doesn't add up (see the rough numbers sketched after this comment).
    Which, to my final point, is why you, the CFO, make sure your company spends money ONLY on its core competencies and then finds high-service, low-cost providers for all other aspects.
    After all, you don't run your own power-generating station just in case Pink Floyd plays a gig at your Sales Kickoff and needs more power…do you? Didn't think so. You just buy it from X-Gas and Electric Company.
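    The inventory argument in rough numbers (every figure below is invented for illustration):

    ```python
    # Toy CFO math: own enough capacity for peak vs. own baseline and rent the burst.
    # All figures are invented for illustration.
    PEAK_SERVERS, BASELINE_SERVERS = 500, 50
    OWNED_COST_PER_SERVER_YR = 3000.0      # amortized hardware + power + space
    RENTED_COST_PER_SERVER_HR = 0.50
    BURST_HOURS_PER_YEAR = 0.02 * 8760     # peak demand actually exists ~2% of the year

    own_peak = PEAK_SERVERS * OWNED_COST_PER_SERVER_YR
    own_base_rent_burst = (BASELINE_SERVERS * OWNED_COST_PER_SERVER_YR
                           + (PEAK_SERVERS - BASELINE_SERVERS)
                           * RENTED_COST_PER_SERVER_HR * BURST_HOURS_PER_YEAR)

    print(f"own peak capacity:        ${own_peak:>9,.0f}/yr")             # $1,500,000/yr
    print(f"own baseline, rent burst: ${own_base_rent_burst:>9,.0f}/yr")  # ~$189,420/yr
    ```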

  19. February 2nd, 2009 at 08:13 | #19

    @Michael:
    Perhaps it's a lack of coffee, but I can't tell if you're agreeing with me and arguing against the crappy definition of private/internal clouds as presented by people like David Linthicum, or misunderstanding my point.
    You seem to be suggesting that others' definition of private clouds (using virtualization to provide cloudy-like services) doesn't make a lot of sense or won't catch on because it doesn't really solve the economically-driven impetus for Cloud in the first place. For the bulk of companies, I agree with this.
    However, you also seem to suggest that "Private Cloud" capabilities like those offered by GoGrid as I referenced are sustainable and make sense; this was my point. To me, the next step to the Cloud for realizing value in Cloud Computing is Private Clouds — based on my definition/example above.
    So, which is it?
    /Hoff

  20. February 2nd, 2009 at 08:40 | #20

    Yeah…good point, and maybe I had too much coffee… I agree with you…and I disagree with Gartner:

    "The future of corporate IT is in private clouds, flexible computing networks modeled after public providers such as Google and Amazon yet built and managed internally for each business's users"

    What the hell are they thinking? Have they abandoned the money part of the equation?

  21. February 2nd, 2009 at 08:48 | #21

    Oh thank God. I couldn't live if I thought you were disagreeing with me! 😉
    Yeah, I think the notion of "internal" clouds is a relevant stepping stone toward the ultimate extension to "private clouds" as I defined them, for companies that have a boatload of sunk costs in infrastructure and are already consolidating via virtualization.
    It's not an end play, however.
    RTI (real-time infrastructure) really will provide the capabilities for larger companies to make this happen. Smaller companies? Nah, they'll just jump straight in and go right to full-on Private Cloud for things like hub-and-spoke BCP and cloud bursting/hopping.
    BTW, me lovee Splunk. 😉

  22. February 2nd, 2009 at 11:50 | #22

    Wow, great discussion! I actually re-listened to the podcast on the plane right out to the Open Group show. There is a bit of miscommunication here in some areas, and certainly some areas of disagreement.
    Here are the points I was attempting to make, and will be making in the future:
    1. Private clouds are indeed needed; in fact, most of my clients are building them. The reasons are the ones I’ve addressed in my blogs and podcasts. We’re using many different tools and technologies; no two private clouds are the same.
    2. The area of private cloud computing will be a huge growth area in 2009 and 2010, based on the work I see coming in the door, and what other consultants and analysts are telling me.
    3. However, to the point I was making in the podcast, the use of “private cloud technology” is still evolving, and I don’t see a real “killer solution” out there…yet…but I’ll keep looking. This is no different than any other emerging area; it takes time for the definitions to jell, and this discussion is a good example of that. Nothing better or worse here.
    One thing that I will talk about in the next podcast is the use of “Amazon-like.” That was from the blog I was reading. I’m inclined to agree that private clouds have different patterns, and comparing them to public cloud offerings might not be the best approach. I’ll talk about this during the next podcast, and any of you are welcome to join me. Can’t be more fair than that.
    One of the reasons I started Blue Mountain Labs was to get ahead of the hype and find a good direction for cloud computing technology, and for those seeking to leverage it. I’ve been down this road many times with other areas of hyper-growth. This seems like more fun.
    Dave Linthicum

  23. February 11th, 2009 at 05:02 | #23

    Ok so how is your VPN link to an external datacenter different to what enterprises typically have in place today? They manage massive VPNs linking many sites, some of which have (or are) datacenters. It makes little difference whether the sites are run by (e.g. under the administrative control of) the organisation or by some 3rd party like IBM – if it's a single-tenant architecture then you're going to pay +/- the same and you're still going to have to engineer for peak loads and miss out on the economies of scale.
    Yes, virtualisation is naturally evolving (fairly quickly, given stiff competition), but adding an accounting layer does not make a cloud; nor does having multiple sites or outsourcing the management to a contractor. Strapping the 'cloud' moniker to this is underselling cloud, which is all about reducing complexity, leveraging economies of scale, securely sharing resources between tenants, 'infinite' on-demand scalability, tearing down perimeters, etc. – none of which is offered by your 'private cloud'.
    On the other hand, having a single sign on system based on say OpenID/OAuth and then being able to access any (authorised) resource from anywhere is (logically) 'private', even if Internet facing, and yet still derives the many benefits of cloud computing.
    In summary, let's call a spade a spade. If you're rolling out virtualised datacenters then don't kid yourself (or your clients) into thinking it's 'cloud computing'.
    Sam

  24. February 11th, 2009 at 05:38 | #24

    The VPN link from the corporate datacenter to the cloud provider's isn't different, for the most part. It can be controlled and managed by software rather than fixed hardware (as in CohesiveFT's VPN-Cubed case), but the issue isn't the VPN.
    This sentence, I argue, is really the stepping off point:
    "It makes little difference whether the sites are run by (e.g. under the administrative control of) the organisation or by some 3rd party like IBM – if it's a single-tenant architecture then you're going to pay +/- the same and you're still going to have to engineer for peak loads and miss out on the economies of scale."
    Firstly, let's work backwards… (1) Who said it's single-tenant? Many (most?) of the Cloud providers in the IaaS space are explicitly multi-tenant, so this makes ALL the difference in the world. (2) Based upon utility-based usage billing versus always-on, you can pay significantly less (or more).
    I've already said virtualization <> cloud, but it's certainly being leveraged to provide much of what we see today in terms of Cloud services, especially to provide multi-tenancy. If you go look at my taxonomy model, the "abstraction layer" is optional…but it's about more than adding an "accounting layer."
    In the private cloud example in this post, I'm not talking about outsourcing management; in fact, I was implying that management would still be done by the corporation itself, albeit on someone else's hardware.
    I think you missed the entire point of what I was illustrating or I did a really crappy job of defining it. Either way, "virtualized datacenters" <> "private clouds" by nature of my definition in the post.
    /Hoff
