Archive

Posts Tagged ‘Open Source’

AwkwardCloud: Here’s Hopin’ For Open

February 14th, 2012 3 comments

MAKING FRIENDS EVERYWHERE I GO…

There’s no way to write this without making it seem like I’m attacking the person whose words I am about to stare rudely at, squint and poke out my tongue.

No, it’s not @reillyusa, featured to the right.  But that expression about sums up my motivation.

Because this ugly game of “Words With Friends” is likely to be received as though I’m at odds with what represents the core marketing message of a company, I think I’m going to be voted off the island.

Wouldn’t be the first time.  Won’t be the last.  It’s not personal.  It’s just cloud, bro.

This week at Cloud Connect, @randybias announced that his company, Cloudscaling, is releasing a new suite of solutions branded under the marketing moniker of  “Open Cloud.”

I started to explore my allergy to some of these message snippets as they were strategically “leaked” last week in a most unfortunate Twitter exchange.  I promised I would wait until the actual launch to comment further.

This is my reaction to the website, press release and blog only.  I’ve not spoken to Randy.  This is simply my reaction to what is being placed in public.  It’s not someone else’s interpretation of what was said.  It’s straight from the Cloud Pony’s mouth. ;p

GET ON WITH IT THEN!

“Open Cloud” is described as a set of solutions for those looking to deploy clouds that provide “… better economics, greater flexibility, and less lock-in, while maintaining control and governance” than so-called Enterprise Clouds that are based on what Randy tags as more proprietary foundations.

The case is made that enterprises will really want to build two clouds: one to run legacy apps and one to run purpose-built cloud-ready applications.  I’d say that enterprises that have a strategy are likely looking forward to using clouds of both models…and probably a few more, such as SaaS and PaaS.

This is clearly a very targeted solution which looks to replicate AWS’ model for enterprises or SPs who are looking to exercise more control over the fate of their infrastructure.  How much runway this has against the onslaught of PaaS and SaaS remains to be seen.

I think it’s a reasonable bet there’s quite a bit of shelf life left on IaaS, and I wonder if we’ll see follow-on generations that focus on PaaS.

Yet I digress…

This is NOT going to be a rant about the core definition of “Open,” (that’s for Twitter) nor is this going to be one of those 40 pagers where I deconstruct an entire blog.  It would be fun, easy and rather useful, but I won’t.

No. Instead I will suggest that the use of the word “Open” in this press release is nothing more than opportunistic marketing, capitalizing on other recent uses of the Open* prefix such as “OpenCompute, OpenFlow, Open vSwitch, OpenStack, etc.” and is a direct shot across the bow of other companies that have released similar solutions in the recent past (Cloud.com, Piston, Nebula).

If we look at what makes up “Open Cloud,” we discover it is framed upon four key solution areas and supported by design blueprints, support and services:

  1. Open Hardware
  2. Open Networking
  3. Open APIs
  4. Open Source Software

I’m not going to debate the veracity or usefulness of some of these terms directly, but we’ll come back to them as a reference in a second, especially the notion of “open hardware.”

The one thing that really stuck in my craw was the manufactured criteria that somehow defined the so-called “litmus tests” associated with “Enterprise” versus “Open” clouds.

Randy suggests that if more than half of the items in the left-hand column apply to you, you’re using a cloud built with “enterprise computing technology,” whereas if the same holds true for the right-hand column, you’re using an “open” cloud:

So here’s the thing.  Can you explain to me what spinning up 1,000 VMs in less than 5 minutes has to do with being “open?”  Can you tell me what competing with AWS on price has to do with being “open?” Can you tell me how Hadoop performance has anything to do with being “open?”  Why does using two third-party companies’ management services define “open?”

Why on earth does the complexity or simplicity of networking stacks define “openness?”

Can you tell me how, if Cloudscaling’s “Open Cloud” uses certified gear from “name brand” vendors like Arista, it is in any way more “open” than an alternative solution using Cisco?

Can you tell me if “Open Cloud” is more “open” than Piston Cloud, which is also based upon OpenStack but also uses specific name-brand hardware to run?  If “Open Cloud” is “open” and utilizes open source, can I download all the source code?

These are simply manufactured constructs which do little service toward actually pointing out the real business value of the solution and instead cloak the wolf in the “open” sheep’s clothing.  It’s really unfortunate.

The end of my rant here is that co-opting the word “open” takes a perfectly reasonable approach, namely a company’s experience in building a well-sorted, (supposedly more) economical and supportable set of cloud solutions, and ruins it by letting its karma get run over by its dogma.

Instead of focusing on the merits of the solution as a capable building block for building plain better clouds, this reads like a manifesto which may very well turn people off.

Am I being unfair in calling this out?  I don’t think so.  Would some prefer a private conversation over a beer to discuss?  Most likely.  However, there’s a disconnect here, and it stems from pushing a message public and marketing a set of solutions that I hope will withstand the scrutiny of this A-hole with a blog.

Maybe I’m making a mountain out of a molehill…

Again, I’m not looking to pick on Cloudscaling.  I think the business model and the plan are solid, as is evidenced by their success to date.  I wish them nothing but success.

I just hope that what comes out the other end is a willingness to be “open” to a better adjective and a more useful set of criteria for defining the merits of the solution.

/Hoff


Cloud & Virtualization Stacks: Users Fear Lock-In, Ecosystem Fears Lock-Out…

June 7th, 2011 No comments
Cover of "Groundhog Day (15th Anniversary...

Cover via Amazon

I don’t think I’m verbalizing something very well…so I decided to write it down. I still don’t think I’m managing to make my point, but perhaps clarity will come by discussion.

Simon Wardley has written many, many times about the market dynamics and behaviors associated with the emergence and ultimate commoditization of things, but I’ve never quite managed to describe, to my own satisfaction, the dysfunctional co-dependency between consumer, leading vendor and ecosystem supporters.

Here’s an example…

There’s an uneasy tension that seems to often become nothing more than wink-and-nod-subtext in discussions relating to the various stacks offered by leading cloud and virtualization vendors.  Even those efforts with the “open” or “open source” descriptor bolted on for good measure.

It occurs to me that this can be attributed to many things; the business and licensing model of the solution provider, the ultimate “consumer” the offering targets, the area(s) of differentiation around technology, the maturity of the ecosystem, and the amount of self-integration versus vendor support required to successfully operationalize and maintain the solution.

More directly, the tension I refer to is the desire (or at least the oft-verbalized complaint) on the part of “consumers” of cloud and virtualization stacks not to be “locked in” to a single vendor, balanced against the odd juxtaposition — but not entirely unreasonable requirement — of simultaneously not being subject to the “integrator’s dilemma” and having to support it all themselves.

Stir in the ecosystem of ISVs and solution providers who orbit these stacks, adding value where they have the “permission” to do so, before the stack provider either obviates their existence by baking those features in directly or simply makes it increasingly difficult for them to roadmap given engineering dependencies they can’t control or count on.

I alluded to some of this in my blog titled Cloud Computing, Open* and the Integrator’s Dilemma wherein I mused:

I am just as worried about the fate of OpenStack and its enterprise versus service provider audience and how it’s being perceived as they watch the mad scramble by tech companies to add value and get a seat at the table.

Each of these well-intentioned projects are curated by public cloud operators and technology vendors and are indirectly positioned for the benefit of enterprises, but not really meant for their consumption — at least not if they don’t end up putting enterprises right back where they were trying to escape from in the first place with cloud computing: the integrator’s dilemma.

If you look at the underlying premise of OpenStack — its modularity, flexibility and open design — what you get is the ability to craft a solution finely tuned to an operating environment of your design. Integrate solutions into the stack as you see fit.  Contribute code.  Develop an ecosystem. Integrate, manage, maintain…

This is as much a problem as it is a solution for an enterprise.  This is why, in many cases, enterprises choose to use a single vendor with a single neck to choke in order to avoid having to act as an integrator in the first place, or simply look to outsource to one or more public cloud providers and avoid the problem altogether.

Chances are, most are realistically caught up somewhere in the nether-regions in between the two.

This all sounds so eerily familiar…


Incomplete Thought: Why We Need Open Source Security Solutions More Than Ever…

July 17th, 2010 1 comment

I don’t have time to write a big blog post and quite frankly, I don’t need to. Not on this topic.

I do, however, feel that it’s important to bring back into consciousness how very important open source security solutions are to us — at least those of us who actually expect to make an impact in our organizations and work toward making a dent in our security problem pile.

Why do open source solutions matter so much in our approach to dealing with securing the things that matter most to us?

It comes down to things we already know but are often paralyzed to do anything about:

  1. The threat curve and innovation of the attacker outpace those of the defender by orders of magnitude (duh)
  2. Disruptive technology and innovation dramatically impacts the operational, threat and risk modeling we have to deal with (duh duh)
  3. The security industry is not in the business of solving security problems that don’t have a profit motive/margin attached to them (ugh)

We can’t do much about #1 and #2 except be early adopters, be agile/dynamic and plan for change. I’ve written about this many times and built an entire series of presentations (Security and Disruptive Innovation) that Rich Mogull and I have taken to updating over the last few years.

We can do something about #3 and we can do it by continuing to invest in the development, deployment, support, and perhaps even the eventual commercialization of open source security solutions.

To be clear, it’s not that commercialization is required for success; often it just indicates that a solution has become mainstream and valued and that money *can* be made.

When you look at why most open source project creators bring a solution to market, it’s generally because the solution is not commercially available, it solves an immediate need and it’s contributed to by a community. These are all fantastic reasons to use, support, extend and contribute back to the open source movement — even if you don’t code, you can help by improving the roadmaps of these projects by making suggestions and promoting their use.

Open source security solutions deliver and they deliver quickly because the roadmaps and feature integration occur in an agile, meritocratic and vetted manner that oftentimes lacks polish but delivers immediate value — especially given their cost.

We’re stuck in a loop (or a Hamster Sine Wave of Pain) because the solutions to the problems we really need solved are not developed by the companies that are in the best position to develop them in a timely manner. Why? Because when these emerging solutions are evaluated, they live or die by one thing: TAM (total addressable market).

If there’s no big $$$ attached and someone can’t make the case within an organization that this is a strategic (read: revenue generating) big bet, the big companies wait for a small innovative startup to develop the technology (or an open source tool), see if it lives long enough for market demand to drive revenues, and then buy it…or sometimes develop a competitive solution.

Classical crossing the chasm/Moore stuff.

The problem here is that this cycle is broken horribly and we see perfectly awesome solutions die on the vine. Sometimes they come back to life years later, cyclically, when the pain gets big enough (and there’s money to be made) or when the “market” of products and companies consolidates, commoditizes and ultimately becomes a feature.

I’ve got hundreds of examples I can give of this phenomenon — and I bet you do, too.

That’s not to say we don’t have open-source-derived success stories (Snort, Metasploit, ClamAV, Nessus, OSSEC, etc.), but we just don’t have enough of them. Further, there are disruptions such as virtualization and cloud computing that fundamentally change the game; harnessed in conjunction with open source solutions, they can accelerate the delivery and velocity of solutions because of how far-reaching the platform shift can be.

I’ve also got dozens of awesome ideas that could/would fundamentally solve many attendant issues we have in security — but the timing, economics, culture, politics and readiness/appetite for adoption aren’t there commercially…but they can be via open source.

I’m going to start a series which identifies and highlights solutions that are either available as kernel-nugget technology or past-life approaches that I think can and should be taken on as open source projects that could fundamentally help our cause as a community.

Maybe someone can code/create open source solutions out of them that can help us all.  We should encourage this behavior.

We need it more than ever now.

/Hoff


Where Are the Network Virtual Appliances? Hobbled By the Virtual Network, That’s Where…

January 31st, 2010 15 comments

Allan Leinwand from GigaOm wrote a great article asking “Where are the network virtual appliances?” This was followed up by another excellent post by Rich Miller.

Allan sets up the discussion describing how we’ve typically plumbed disparate physical appliances into our network infrastructure to provide discrete network and security capabilities such as load balancers, VPNs, SSL termination, firewalls, etc.  He then goes on to describe the stunted evolution of virtual appliances:

To be sure, some networking devices and appliances are now available in virtual form.  Switches and routers have begun to move toward virtualization with VMware’s vSwitch, Cisco’s Nexus 1000v, the open source Open vSwitch and routers and firewalls running in various VMs from the company I helped found, Vyatta.  For load balancers, Citrix has released a version of its Netscaler VPX software that runs on top of its virtual machine, XenServer; and Zeus Systems has an application traffic controller that can be deployed as a virtual appliance on Amazon EC2, Joyent and other public clouds.

Ultimately, I think it prudent for discussion’s sake to separate routing, switching and load balancing (connectivity) from functions such as DLP, firewalls, and IDS/IPS (security), as lumping them together actually obscures the problem: the latter is completely dependent upon the capabilities and functionality of the former.  This is what Allan almost gets to when describing his lament with the virtual appliance ecosystem today:

Yet the fundamental problem remains: Most networking appliances are still stuck in physical hardware — hardware that may or may not be deployed where the applications need them, which means those applications and their associated VMs can be left with major gaps in their infrastructure needs. Without a full-featured and stateful firewall to protect an application, it’s susceptible to various Internet attacks.  A missing load balancer that operates at layers three through seven leaves a gap in the need to distribute load between multiple application servers. Meanwhile, the lack of an SSL accelerator to offload processing may lead to performance issues and without an IDS device present, malicious activities may occur.  Without some (or all) of these networking appliances available in a virtual environment, a VM may find itself constrained, unable to take full advantage of the possible economic benefits.

I’ve written about this many, many times. In fact, almost three years ago I created a presentation called “The Four Horsemen of the Virtualization Security Apocalypse,” which described in excruciating detail how network virtual appliances were a big ball of fail and would be for some time. I further suggested that many of the “best-of-breed” products would ultimately become “good enough” features in virtualization vendors’ hypervisor platforms.

Why?  Because there are some very real problems with virtualization (and Cloud) as it relates to connectivity and security:

  1. Most of the virtual network appliances, especially those “ported” from the versions that usually run on dedicated physical hardware (COTS or proprietary), do not provide feature, performance, scale or high-availability parity; most are hobbled or require per-platform customization or re-engineering in order to function.
  2. The resilience and high availability options from today’s off-the-shelf virtual connectivity do not pair well with the mobility and dynamism of de-coupled virtual machines; VMs are ultimately temporal and networks don’t like topological instability due to key components moving or disappearing.
  3. The performance and scale of virtual appliances still suffer when competing for I/O and resources on the same physical hosts as the guests they attempt to protect.
  4. Virtual connectivity is generally a function of the VMM (or a loadable module/domain therein). The architecture of the VMM has dramatic impact upon the architecture of the software designed to provide the connectivity and vice versa.
  5. Security solutions are incredibly topology sensitive.  Given the scenario in #1, when a VM moves or is distributed across the pooled infrastructure, unless the security capabilities are already present on the physical host or the connectivity and security layers share a control plane (or at least can exchange telemetry), things will simply break.
  6. Many virtualization (and especially cloud) platforms do not support protocols or topologies that many connectivity and security virtual appliances require to function (such as multicast for load balancing).
  7. It’s very difficult to mimic the in-line path requirements in virtual networking environments that would otherwise force traffic passing through the connectivity layers (layers 2 through 7) up through various policy-driven security layers (virtual appliances); a rough sketch of what that steering looks like follows this list.
  8. There is no common methodology to express what security requirements the connectivity fabrics should ensure are available prior to allowing a VM to spool up, let alone move.
  9. Virtualization vendors who provide solutions for the enterprise have rich networking capabilities natively as well as with third party connectivity partners, including VM and VMM introspection capabilities. As I wrote about here, mass-market Cloud providers such as Amazon Web Services or Rackspace Cloud have severely crippled networking.
  10. Virtualization and cloud vendors generally force many security vs. performance tradeoffs when implementing introspection capabilities in their platforms: third party code running in the kernel, scheduler prioritization issues, I/O limitations, etc.
  11. Much of the basic networking capability is being pushed lower into silicon (into the CPUs themselves), which makes virtual appliances even further removed from the guts that enable them.
  12. Physical appliances (in the enterprise) exist en masse.  Many of them provide highly scalable solutions to the specific functions that Allan refers to.  The need exists, given the limitations I describe above, to provide for integration/interaction between them, the VMM and any virtual appliances in order to offload certain functions as well as provide coverage between the physical and the logical.
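
To make #7 concrete, here’s a rough sketch of the flow-steering gymnastics required to force a single guest’s traffic through an in-line virtual appliance, assuming you even have an OpenFlow-capable virtual switch (Open vSwitch here) you’re allowed to program.  The bridge name and port numbers are made up for illustration; most of the platforms I’m describing don’t expose anything like this to you at all:

    import subprocess

    # Hypothetical Open vSwitch bridge and port numbers -- illustrative only.
    BRIDGE = "br0"
    GUEST_PORT = 5         # the guest VM's vNIC
    UPLINK_PORT = 1        # the physical uplink
    APPL_GUEST_SIDE = 10   # virtual appliance port facing the guest
    APPL_UPLINK_SIDE = 11  # virtual appliance port facing the uplink

    def add_flow(flow):
        """Install one OpenFlow rule on the bridge via ovs-ofctl."""
        subprocess.check_call(["ovs-ofctl", "add-flow", BRIDGE, flow])

    # Outbound: guest -> appliance -> uplink
    add_flow("priority=100,in_port=%d,actions=output:%d" % (GUEST_PORT, APPL_GUEST_SIDE))
    add_flow("priority=100,in_port=%d,actions=output:%d" % (APPL_UPLINK_SIDE, UPLINK_PORT))

    # Return path: uplink -> appliance -> guest
    add_flow("priority=100,in_port=%d,actions=output:%d" % (UPLINK_PORT, APPL_UPLINK_SIDE))
    add_flow("priority=100,in_port=%d,actions=output:%d" % (APPL_GUEST_SIDE, GUEST_PORT))

And that’s the easy part: the moment the guest or the appliance live-migrates to another host, every one of those rules is wrong, which is exactly the point of #5.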

What does this mean?  It means that ultimately, to ensure their own survival, virtualization and cloud providers will depend less upon virtual appliances and add more of the basic connectivity AND security capabilities into the VMMs themselves, as it’s the only way to guarantee performance, scalability and resilience and satisfy the security requirements of customers. There will be new generations of protocols, APIs and control planes that will emerge to provide for this capability, but this will drive the same old integration battles we’re supposed to be absolved from with virtualization and Cloud.

Connectivity and security vendors will offer virtual replicas of their physical appliances in order to gain a foothold in virtualized/cloud environments, intercept traffic (think basic traps/ACLs) and then interact with higher-performing physical appliance security service overlays or embedded line cards in service chassis.  This is especially true in enterprises, but it poses many challenges in software-only, mass-market cloud environments, where what you’ll continue to get is simply basic connectivity and security with limited networking functionality.  This implies more and more security will be pushed into the guest and application logic layers to deal with this disconnect.

This is exactly where we are today with Cloud providers like Amazon Web Services: basic ingress-only filtering with a very simplistic, limited and abstracted set of both connectivity and security capability.  See “Dear Public Cloud Providers: Please Make Your Networking Capabilities Suck Less. Kthxbye.”  Will they add more functionality?  Perhaps. The question is whether they can afford to do so while limiting the impact that connectivity and security variability/instability can bring to an environment.
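
For perspective, here’s roughly the sum total of the tenant-facing “security capability” the EC2 API exposes, sketched with the boto library (names and parameters approximate, not a drop-in script):

    import boto

    conn = boto.connect_ec2()  # credentials come from the environment/boto config

    # The entire network "security model" offered to the tenant: a named
    # group of *ingress* rules.  No egress policy, no topology choices,
    # no in-line inspection.
    sg = conn.create_security_group("web-tier", "allow inbound web only")
    sg.authorize(ip_protocol="tcp", from_port=80, to_port=80, cidr_ip="0.0.0.0/0")
    sg.authorize(ip_protocol="tcp", from_port=443, to_port=443, cidr_ip="0.0.0.0/0")

    # Everything beyond that (IDS/IPS, DLP, egress control, richer load
    # balancing) has to move into the guest or the application layer.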

That said, it’s certainly achievable, if you are willing and able to do so, to construct a completely software-based networking environment, but these environments require a complete approach and stack re-write, along with an operational expertise that will be hard to come by for those who have spent the last 20 years working in a different paradigm, and that’s a huge piece of this problem.

The connectivity layer — however well integrated into the virtualized and cloud environments it seems — continues to limit how and what the security layers can do, and will for some time, thus limiting the uptake of virtual network and security appliances.

Situation normal.

/Hoff
