
A Contentious Question: The Value Proposition & Target Market Of Virtual Networking Solutions?

September 28th, 2011

I have what I think is a simple question I’d like some feedback on:

Given the recent influx of virtual networking solutions, many of which are OpenFlow-based, what possible in-roads and value can they hope to offer in heavily virtualized enterprise environments wherein the virtual networking is owned and controlled by VMware?

Specifically, if the only third-party VMware virtual switch to date is Cisco’s and access to this platform is limited (if at all available) to startup players, how on Earth do BigSwitch, Nicira, vCider, etc. plan to insert themselves into an already contentious environment effectively doing mindshare and relevance battle with the likes of mainline infrastructure networking giants and VMware?

If your answer is “OpenFlow and OpenStack will enable this access,” I’ll follow with a question that asks how long a runway these startups have, hanging their shingle on relatively new (mainly open source) efforts of which the enterprise is not typically an early adopter.

I keep hearing notional references to the problems these startups hope to solve for the “Enterprise,” but just how (and who) do they think they’re going to get to consider their products at a level that gives them reasonable penetration?

Service providers, maybe?

Enterprises…?

It occurs to me that most of these startups are being built to be acquired by traditional networking vendors who will (or will not) adopt OpenFlow when significant enterprise dollars materialize in stacks that are not VMware-centric.

Not meaning to piss anyone off, but many of these startups’ business plans are shrouded in the mystical veil of “wait and see.”

So I do.

/Hoff

Ed: To be clear, this post isn’t about “OpenFlow” specifically (that’s only one of many protocols/approaches), but rather the penetration of a virtual networking solution into a “closed” platform environment dominated by a single vendor.

If you want a relevant analog, look at the wasteland that represents the virtual security startups that tried to enter this space (and even the larger vendors’ solutions) and how long this has taken/fared.

If you read the comments below, you’ll see people start to accidentally tease out the real answer to the question I was asking…about the value of these virtual networking solutions providers.  The funny part is that despite the lack of comments from most of the startups I mention, it took Brad Hedlund (from Cisco) to recognize why I wrote the post, which is the following:

“The *real* reason I wrote this piece was to illustrate that really, these virtual networking startups are really trying to invade the physical network in virtual sheep’s clothing…”

…in short, the problem space they’re trying to solve is actually in the physical network or, more specifically, in bridging the gap between the two.

  1. Kyle Mestery
    September 28th, 2011 at 18:02 | #1

    I think this is the sad reality for most of the companies you mentioned here. The reality with OpenFlow is that it's not enterprise-ready now, nor are enterprises ready to consume it. If and when that changes, the acquisition spree by the larger vendors can start. (Disclaimer: I work for Cisco, though I'm pretty sure no one takes my advice on acquisitions).

  2. September 28th, 2011 at 18:19 | #2

    Cool graphic though

  3. September 28th, 2011 at 18:21 | #3

    Why not ask the same question of the IaaS software start-ups as well? Many admit they were built to be acquired, and certainly you can see that was the aim of cloud.com (albeit achieved), but where are they going now?

    Is your point that it is bad to be built only to be acquired? Or that, like the IaaS startups, their product may not be enterprise relevant in any useful size for the foreseeable future?

    • RatSurv
      September 28th, 2011 at 18:35 | #4

      Thanks for the comment, James. My answer? The latter, DEFINITELY NOT the former. There are many successfully executed examples of that strategy (as you point out.)

      /Hoff

  4. Dave Walker
    September 28th, 2011 at 18:33 | #5

    There's a further subtle point about vSwitches, which raises the barrier to replacing them in environments which care about such things: separation assurance.

    One of the things I had cause to do a while back (about a year ago, so I'm most likely out of date) was delve through a ton of fine print to investigate assurance of traffic separation, particularly when it comes to VLANs. It turns out that, at least for Common Criteria, the vSwitch is the *only* networking device doing Layer 2 – real or virtual – which specifically has in its Target of Evaluation (as part of ESX, in the vSwitch's case) a statement which comes out as "where you have VLANs 1 and 2 configured on a vSwitch, data on VLAN 1 cannot be misdirected to VLAN 2 or vice versa, under any circumstances".

    (These days, I'd be unsurprised to find this in the ToE for Juniper's SRX series, but I admit I've not looked yet).

    So, taking out a vSwitch and replacing it with something else drops assurability of traffic separation in your virtual network, unless the "something else" has been suitably blessed by another authority deemed appropriate (NSA, CESG, etc). Outside of Public Sector, I'd expect Financial Services customers to consider this carefully…
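
Ed: for readers unfamiliar with the property Dave describes, here is a deliberately tiny Python model of VLAN-keyed forwarding. It illustrates the separation claim only; it says nothing about how ESX actually implements its vSwitch, and all port names are invented.

```python
# Toy model: forwarding is keyed strictly on the ingress port's VLAN, so a frame
# arriving on a VLAN 1 port can never be delivered to a VLAN 2 port.
class ToyVSwitch:
    def __init__(self):
        self.port_vlan = {}                      # port name -> access VLAN

    def add_port(self, port, vlan):
        self.port_vlan[port] = vlan

    def egress_candidates(self, ingress_port):
        vlan = self.port_vlan[ingress_port]
        # candidate egress ports are *only* those in the same VLAN
        return [p for p, v in self.port_vlan.items()
                if v == vlan and p != ingress_port]

sw = ToyVSwitch()
sw.add_port("vm1", vlan=1)
sw.add_port("vm2", vlan=1)
sw.add_port("vm3", vlan=2)
assert sw.egress_candidates("vm1") == ["vm2"]    # VLAN 1 traffic never reaches vm3 on VLAN 2
```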

  5. mfratto
    September 28th, 2011 at 18:38 | #6

    I don't think the interesting thing about OpenFlow lies in the current crop of controllers, and certainly not in switch hardware. I think where OpenFlow gets interesting (and this is regardless of hypervisor or even the presence of server virtualization) is *if* vendors that are used to being in the middle of traffic (load balancers, security stuff, QoS, etc.) use OpenFlow to influence the paths that flows take. It's going to be a while before anything useful comes from OpenFlow.

    But the potential is awesome!

    • RatSurv
      September 28th, 2011 at 18:46 | #7

      Mike:

      I responded by adding some clarity at the bottom of the post…this isn't really about OpenFlow…

  6. September 28th, 2011 at 19:05 | #8

    Are you asking the question in the context of L2 switching only or in general context of any non-hardware-provided networking functionality?

    I guess my answer to your question is in my question – there is potentially more to networking than L2 switching.

    Disclosure: I am lead engineer of CohesiveFT VPN-Cubed but I don't speak for CohesiveFT here.

  7. September 28th, 2011 at 19:21 | #9

    Hoff:

    You answered your own question in the clarifying edit you made, or at least one of a few possible answers. The virtual networking startups can't ignore the dominant virtualization vendor and the effect it has on their adoption, but they can look at the non-proprietary options that have a strong showing. Like you said, this question isn't specific to OpenFlow, but as an example: XenServer 6.0 just shipped with Open vSwitch as the default networking stack. It may not have the penetration of vSphere, but all of those startups just got one more angle to approach customers (Enterprise & Service Provider) through.

    (Disclaimer: I work for Citrix.)
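
Ed: as a concrete illustration of the angle Bob mentions, attaching a third-party controller to an Open vSwitch bridge is a couple of stock ovs-vsctl commands. This is a minimal sketch, not XenServer-specific guidance; the bridge name, NIC and controller address are placeholders.

```python
# Minimal sketch: point an Open vSwitch bridge at an external OpenFlow controller.
import subprocess

def ovs(args):
    cmd = ["ovs-vsctl"] + args
    print("+", " ".join(cmd))
    subprocess.check_call(cmd)

ovs(["--may-exist", "add-br", "br0"])                    # create the bridge if it isn't there
ovs(["--may-exist", "add-port", "br0", "eth0"])          # attach the physical uplink
ovs(["set-controller", "br0", "tcp:192.0.2.10:6633"])    # hand forwarding decisions to a controller
ovs(["show"])                                            # confirm the configuration
```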

  8. ERT
    September 28th, 2011 at 19:39 | #10

    Sometimes to understand what could happen, one needs to see how the status quo came to be.

    The timing of how the vSwitch and Nexus 1000V came about is key. Why is it that VMware decided to go all-in for 2+ years with Cisco instead of doing an ecosystem play a la the server OEM model? And did Cisco's $150M pre-IPO investment have any influence? Other vendors have tried to replicate the Nexus 1000V solution with VMware, and have not been able to…

    But the great thing about how the vSwitch came about, and how the N1kV came to be the only 3rd-party VDS in the market, is that it was scoped out four years before it was launched by the engineering folks on both sides (some of the same key folks who are already at Nicira). It worked because VMware needed Cisco to bring down the last bastion of resistance against 100% virtualization adoption (the network admin), and Cisco needed VMware to help build the case for the virtual switching that helped build the market for UCS. It was no accident that the N1kV was placed in only the highest-cost SKU of vSphere when it launched, and that VMware was the primary use case for early UCS shipments.

    While other vendors "might" have approached VMware with the ability/desire to execute on one of the above, the effort "could" be so significant to VMware that the investment of people and time (vs. funding) with a 3rd party would not have the same payoff.

    For the virtual startups: they need to go where VMware hits a limit… and the ones who can solve those limitations might have a chance. FWIW… my money is on Nicira.

    Disclaimer: Ex-VMware, part of the team that helped build and launch N1kV/UCS.

    • RatSurv
      September 28th, 2011 at 19:57 | #11

      Yes, excellent history…and that platform "lock out" (no matter the reason) simply means that the majority of the startups in this "market" don't have a rat's ass chance in a cheese factory of gaining access to build or integrate a solution into this environment, for the very reasons you stipulate.

      …which means that either they never intended to, their business model is flawed and predicated on the hype of open*, or I'm missing something…

      I'd tend to agree with your second-to-last comment that Nicira has an unfair advantage amongst them — but one wonders also how the theft of their source code will impact this since we don't know who's responsible for it…

      Again, this is an arms race. The only thing that counts here for the lifespan of these companies is revenue to offset expense — unless you simply expect to be acquired by a larger company who has the wherewithal to deal with the cost, bodies, marketing/sales and strategic alliance issues that come out of this sort of relationship that Cisco currently enjoys with VMware.

      My Disclaimer (again) is that I worked at Cisco in STBU alongside the SAVBU team and VMware and witnessed firsthand the very activity you describe.

      The giant sucking sound is the lack of comments from folks IN these companies who would otherwise argue I'm wrong, but don't.

  9. September 28th, 2011 at 20:17 | #12

    Everything you say here can be true; however, there's more to it than that. I think a key point that doesn't fit within the scenario you describe here is that with more *aaS being used, the problems of the network span vendors, providers and organizations. The problems faced by a single enterprise (or SP) running VMware on a network they control are big and important ones to solve. However, what alternatives exist for someone that has some EC2 (across regions), some on-prem, something at a co-lo, etc.? "Virtual network" is an overloaded term, for sure, but I've always tried to keep the virtual machine analogy as best I could. To me, a virtual network (just like a virtual machine) is "a network that I control, that runs on one that I don't control". So, with this as a working definition, I'll turn this around and ask: within the context of IaaS, who doesn't need this? More here from back in Jan.: http://blog.vcider.com/2011/01/virtual-networking

  10. Derick
    September 28th, 2011 at 21:05 | #13

    Working in the financial services industry, I am stunned that we still don't have a real audit-compliant virtual-networking solution from any vendor. OpenFlow has a big gaping hole it can walk right through here particularly with XenServer and Microsoft's Hypervisor.

    Single VLAN-tag separation on standard enterprise Ethernet switches will not pass audit. Different security tiers/zones need to be physically separate, with one exception: VRF, VSI, or S-VLAN separation will pass audit. This is because of how these technologies are implemented. There are separate RIB/FIB structures tied to these things that ensure the service is completely transparent to the tenant/subscriber in the control plane. For instance, each VSI can use the entire VLAN-ID range 1-4094. Each VSI has a completely separate MAC table and separate spanning-tree configuration.

    This is what makes Cisco's relationship with VMware shameful. Cisco is 100% aware of this. Yet we don't see support for MPLS (VRFs, VPLS), 802.1ah (MAC-in-MAC) or 802.1Q-in-Q (S-VLANs) in the vSwitch or Nexus 1000V. Cisco knows that enterprise networks are increasingly deploying MPLS in their infrastructure to natively extend logically segregated networks between data centers, yet they seem to forget this entirely when speaking of UCS, Nexus or the 1000V.

    On this point, I agree wholeheartedly with vCider: VLANs ARE NOT VIRTUAL NETWORKING. By extension, this also means port-groups are not virtual networking. Until VMware/Cisco implement at least one of these things (Q-in-Q being incredibly easy to implement), then it's not virtual networking. It's just not. Telling the customer to re-VLAN their environment or to just run separate physical cables for each tenant is unacceptable. There are ways to pass audit with single VLAN tags (see http://blinking-network.blogspot.com/2011/08/mult… ) but really… why not just implement Q-in-Q and reasonably secure separation in the vSwitch? A separate vSwitch for each tenant, mapped to an outer VLAN tag on shared 10G uplinks into the network? You could have the full range of VLANs for each tenant.

    This isn't about solving inter-host issues like VXLAN is supposed to do; this is about interfacing with the external world. VMware must realize that not everything in the world is a VM host.

    But wait… this is where OpenFlow could really be an ally and partner to VMware. VMware could get out of the business of failing to implement real networking in their host (let's face it… the vSwitch is awful… it's hard not to think a server/app developer made a freshman attempt at developing a networking component) and just support an OpenFlow v1.1 or greater compliant vSwitch. Let Big Switch, or Juniper, or anybody else that understands networking develop the controller. The controller will support all the things I mentioned above that VMware is absolutely clueless on, and at the same time *provide centralized management of all the vSwitches in the network.*

    Thank you, buh-bye… See you at the OF symposium on Oct 26th!

    Derick Winkworth
    CCIE/JNCIE
    "All energy flows according to the whims of the Great Magnet" -HST

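Ed: to make Derick's Q-in-Q suggestion concrete, here is a minimal Python sketch of 802.1ad double tagging: the provider pushes one outer S-tag per tenant vSwitch onto the shared uplink, and the tenant keeps the entire inner C-VLAN range to itself. MAC addresses, VLAN IDs and payload below are illustrative only.

```python
import struct

def mac(addr):
    return bytes([int(octet, 16) for octet in addr.split(":")])

def tci(vid, pcp=0, dei=0):
    # Tag Control Information: 3-bit priority, 1-bit drop-eligible, 12-bit VLAN ID
    return (pcp << 13) | (dei << 12) | (vid & 0x0FFF)

def qinq_frame(dst, src, s_vid, c_vid, ethertype, payload):
    """802.1ad double-tagged frame: outer S-tag (one per tenant), inner C-tag (tenant's own VLAN)."""
    frame = mac(dst) + mac(src)
    frame += struct.pack("!HH", 0x88A8, tci(s_vid))   # outer tag, S-TPID 0x88A8
    frame += struct.pack("!HH", 0x8100, tci(c_vid))   # inner tag, C-TPID 0x8100
    frame += struct.pack("!H", ethertype)
    return frame + payload

# Tenant A's vSwitch maps to outer VLAN 100 on the shared 10G uplink;
# inside that, the tenant is free to use any VLAN ID from 1 to 4094.
f = qinq_frame("00:00:5e:00:53:01", "00:00:5e:00:53:02",
               s_vid=100, c_vid=4094, ethertype=0x0800, payload=b"\x00" * 46)
```

The point of the exercise: separation rides on an outer tag the tenant never sees, which is the same transparency property that lets VRF/VSI/S-VLAN separation pass audit.
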
    • Kyle Mestery
      September 28th, 2011 at 21:20 | #14

      I find it hard to believe that VMware is willing to give up the control over networking that allowing OpenFlow controllers would entail. The only way VMware ever does OpenFlow in the vSwitch is if customers demand it. And as our gracious host would say, that giant sucking sound is the sound of no customers asking for OpenFlow on the vSwitch. It's possible this situation changes down the road when OpenFlow actually grows up, but in its current form, you're not going to see a lot of production networks, much less VMware networks, running with only OpenFlow controlling them.

      • Derick
        September 28th, 2011 at 22:34 | #15

        The giant sucking sound may not be customers demanding OpenFlow. It might end up being the sound of customers leaving to go to XenServer or something else because of nice products built on OpenFlow that solve these problems.

        Also, I'm not speaking of an entire network built on OpenFlow (though that is possible). In this case I am referring to a specific pain point that OpenFlow could address. This would be a nice, controlled way of getting OpenFlow into an existing environment: solve a problem that gets you in the door. It's better than trying to boil the ocean. This assumes, of course, that the OpenFlow controller you use has features allowing you to integrate with existing network infrastructure. I'm thinking of an OpenFlow "layer" starting at the vSwitch, extending out one or two hops, and then uplinking into an existing infrastructure via MPLS or Q-in-Q.

        OpenFlow v1.1 (or greater) supports MPLS and Q-in-Q and multiple forwarding tables. It has the right components for the kind of separation we need… Just need controllers, nodes, and vSwitches that do it.

        It might be a pipe dream. I don't give a shit, I like dreams that come from pipes.
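
Ed: a rough, purely conceptual Python model of the multi-table, push-action edge pipeline Derick is pointing at. Real OpenFlow 1.1 messages and actions look nothing like Python dicts, and the port numbers, VLAN IDs and action names here are invented for illustration.

```python
# Table 0 classifies tenant traffic by ingress port; table 1 pushes the uplink encapsulation.
FLOWS = {
    0: [
        {"match": {"in_port": 1}, "write": {"s_vid": 100}, "goto": 1},
        {"match": {"in_port": 2}, "write": {"s_vid": 200}, "goto": 1},
    ],
    1: [
        {"match": {}, "actions": ["push_vlan(0x88a8)", "set_vid(from s_vid)", "output(uplink)"]},
    ],
}

def matches(match, pkt):
    return all(pkt.get(field) == value for field, value in match.items())

def run_pipeline(pkt):
    table, metadata, actions = 0, {}, []
    while table is not None:
        entry = next(e for e in FLOWS[table] if matches(e["match"], pkt))
        metadata.update(entry.get("write", {}))
        actions += entry.get("actions", [])
        table = entry.get("goto")
    return metadata, actions

print(run_pipeline({"in_port": 1}))   # ({'s_vid': 100}, ['push_vlan(0x88a8)', ...])
```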

        • September 29th, 2011 at 03:54 | #16

          You're forgetting an important point – vSwitch-to-physical network integration. That could be solved with an end-to-end OpenFlow-based network (good luck with that for the next decade or so), an integration protocol (EVB is the only viable alternative in the IEEE stack or you could go down the MPLS/VPN or VPLS route), or a total separation, running virtual networks over IP (VXLAN, NVGRE, Amazon EC2).
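
Ed: the "virtual networks over IP" option Ivan lists works by wrapping the tenant's Ethernet frame in an outer IP/UDP packet. The sketch below only builds the 8-byte VXLAN header from the draft, to show where the 24-bit segment ID lives; the VNI value and inner frame are placeholders.

```python
import struct

def vxlan_encap(inner_frame, vni):
    """Prepend a VXLAN header to an inner Ethernet frame; the result then rides
    inside an ordinary outer UDP/IP/Ethernet packet between hypervisors."""
    flags = 0x08 << 24                                  # 'I' bit set: a valid VNI follows
    header = struct.pack("!II", flags, (vni & 0xFFFFFF) << 8)
    return header + inner_frame

# A 24-bit VNI gives ~16 million segments, versus 4094 usable 802.1Q VLAN IDs.
packet = vxlan_encap(inner_frame=b"\x00" * 64, vni=5001)
```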

  11. JosephGlanville
    September 28th, 2011 at 22:12 | #17

    The current solutions for networking of virtual machines are quite lacking in my opinion. Especially those relying on dot1q vlans etc for segregation. Trying to merely extend the physical network is ignoring the fact that the network itself needs to be virtualized, just as we have done to storage and compute.

    As awesome as OpenFlow is, I don't agree that it is the be-all and end-all solution to these problems.
    If you, for instance, have a vSwitch with OpenFlow support on a hardware OpenFlow network, all you have now is centralized configuration. The network itself still has the same inherent issues you have always dealt with at large scale: deciding to build a Layer 2 vs. Layer 3 network, issues with large forwarding tables, dot1q hopping and VLAN exhaustion, etc.

    The real solution lies in a fully virtualized, distributed Ethernet, with truly segregated domains that don't rely on any underlying hardware support bar something to encapsulate the Ethernet frames, be it Ethernet, InfiniBand, TCP/IP or even UDP-style packet protocols.
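
Ed: a back-of-envelope illustration of the forwarding-table pressure Joseph mentions; every number below is an assumption picked to make the arithmetic obvious, not a measurement.

```python
hosts, vms_per_host, vnics_per_vm = 1000, 40, 2

# Flat L2 fabric: every switch in the domain can end up learning every VM MAC.
flat_l2_entries = hosts * vms_per_host * vnics_per_vm    # 80,000 MAC entries per switch

# Encapsulated overlay: the physical fabric only learns the hypervisors' own addresses;
# per-tenant MAC tables live at the edge and are sized to each tenant alone.
overlay_core_entries = hosts                             # 1,000 entries per switch

print(flat_l2_entries, overlay_core_entries)
```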

    • September 29th, 2011 at 03:50 | #18

      You got it almost right, but the really scalable solution lies in a fully virtualized IP. Ethernet is just a transport medium like so many others. Don't get too attached to it; 99% of all apps running over Ethernet don't know that; they use IP.

      • JosephGlanville
        September 29th, 2011 at 04:52 | #19

        True and untrue. Pure Layer 3 networking deprives you of multicast, broadcast and other Layer 2 protocols that many HA/clustering solutions rely on, and these will continue to be of significance. The overhead of maintaining the Ethernet frame is inconsequential, so you may as well leave the abstraction in place rather than lobotomize the implementation needlessly.

        • Florian Otel
          September 29th, 2011 at 15:56 | #20

          I think Ivan is on the right track and your argument is flawed, for the following reason: the fact that many current HA/clustering solutions use 20+ y.o. designs and rely on Layer 2 mechanisms that worked nicely in a TestDev environment doesn't mean we need to keep bending over backwards — and introduce abominations like VXLAN, NVGRE or other L2-over-L(>2.5) overlay kludges/hacks (e.g. VPLS) — in an effort to "scale them up". Much less so (trying to) accommodate that sort of a dud in future designs.

          So yes, I'd say: let Ethernet be Layer 2 and do what it does — and have respect for what the old lady cannot do; and let the layers above be just that — above — and do what they were meant to do. And, most importantly, don't get swayed by the golden boys of today that drink too much of their own Kool-Aid and think that having 60% or so of the x86 virtualization space makes the world their oyster. Let them wake up to — and deal with — the harsh reality of their design limitations.

          Or, like I "kindly" put it when these discussions come up: "If you rely and/or require L2 adjacency a small, single L2 domain and 4094 freakin' VLANs is all you're ever gonna get. And now, get off my lawn !" 🙂

          • JosephGlanville
            September 29th, 2011 at 21:52 | #21

            Hmm. I think you have missed the importance of Layer 2. Say for instance I am building a high-performance in-RAM distributed database, like VoltDB. How can I possibly achieve low-latency lockless operation without the ability to commit transactions to multiple machines simultaneously? I can emulate it with Layer 3 unicast, but it's horribly inefficient, as I now send (number of nodes) times the amount of data I need to.

            Multicast and other Layer 2 protocols enable use cases such as these, zeroconf clustering etc.

            Ethernet isn't the best Layer 2 protocol, but Layer 2 protocols are NOT dead and never will be, for reasons similar to those outlined above.
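
Ed: Joseph's fan-out point in numbers; the replica count, transaction size and rate below are invented purely to show the multiplier.

```python
replicas   = 8          # nodes that must see every transaction
txn_bytes  = 1500       # one transaction record, roughly an MTU's worth
txns_per_s = 200_000

multicast_bytes_per_s = txn_bytes * txns_per_s             # sent onto the wire once
unicast_bytes_per_s   = txn_bytes * txns_per_s * replicas  # re-sent once per replica

print(multicast_bytes_per_s / 1e9, unicast_bytes_per_s / 1e9)   # 0.3 vs 2.4 GB/s
```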

  12. Derick Winkworth
    September 29th, 2011 at 18:47 | #22

    VPLS is not a kludge or hack. There are other reasons to use it, such as adequate separation for multi-tenant/multi-security-zone virtualization of the network. I'm not even thinking of HA and vMotion. I just want multiple segregated switched domains overlaid on my infrastructure, each one independent of the other, each one with the full VLAN-ID range and separate MAC tables. I'm not going to build a separate physical network for every tenant/security zone.

    Agree that HA and vMotion should be L3, though.

  13. September 29th, 2011 at 20:05 | #23

    You're right Hoff, the OpenFlow startups will be locked out of the vswitch layer of VMware deployments for quite some time. That's a painful reality.

    That said, there is nothing stopping these startups from building a better physical network in the very same environment, with single point of management being the primary pain point target. They can walk in the door saying: "Hey, we can build you an easier to manage network that has all the ease of implementation and single point of management like a Juniper QFabric, without all of the proprietary lock-in." "And because the special sauce is in software, you can buy cheaper hardware". Etc, etc.

    Once they have the physical network it only makes sense to extend the footprint into the virtual network.

    I know, I know, this post "isn't about OpenFlow" … but it kinda is.

    Cheers,
    Brad
    (Cisco)

    • RatSurv
      September 29th, 2011 at 20:20 | #24

      AHA! It's funny, Brad, that you hit the nail on the head, the one I was hoping one of the vendors from these companies would have hit.

      The *real* reason I wrote this piece was to illustrate that really, these virtual networking startups are really trying to invade the physical network in virtual sheep's clothing…

      …which, to your point, means that while this wasn't about OpenFlow specifically, it (sort of) *WAS* and is about new architectures that wed the physical and virtual, but not strictly in either the direct-embed or the overlay sense.

      Funny that this recognition came from someone who doesn't work for one of these companies. I don't understand why being honest about the business plan/GTM requires such obfuscation (assuming we're correct).

      Thanks,
      Hoff

      Sent from my iPhone

  14. Donny
    October 1st, 2011 at 21:18 | #25

    I find this interesting on a completely different level. As I work to design virtualized datacenters, there is a rising thread of the network becoming a "bus" for systems activity. That everything beyond the gateway of a given VM is untrusted and should be treated as such. This is driven by portability, elasticity, and availability. As each VM becomes an island performing a service as part of a greater matrix, the network becomes less complicated.

    The future of open networks may be one of the Autobahn (the old one). A dedicated high speed lane with minimal controls. Because a VM may be connected anywhere along the Autobahn, it is difficult to perform high level network administration and maintain performance, portability, and elasticity. But, if I treat the entire network as untrusted and only require low latency, high speed passage, the requirements are quite different.

    Please don't misunderstand, the traditional router, firewall, IDS remain. But behind these exist a single large fabric whose top priority is transport.

    From what I perceive, the companies referenced in this post failed due to lack of customer demand. Good enough was in the box from the chosen vendor. I believe products like OpenFlow will gain traction when the idea of network software being more important than network hardware is realized. Today, many virtualized environments care little about the underpinning server hardware. The abstraction makes hardware selection more about compatibility and performance than feature set. Similarly, the day is coming when we will be able to pick this "network software" due to features and this network hardware to run it on. I have to admit, it will be interesting to watch network personnel wrestle with the idea that true value comes from capability, not a label.

  15. Greg Ness
    November 9th, 2011 at 15:33 | #26

    OpenFlow and the emerging startup ecosystem are LT threats to the hardware-centric network vendors in the same way that VMware and its early ecosystem were once LT threats to server vendors tied to particular apps. VMware decoupled the one-app-one-server link in DevTest, then crossed into the production environment. It came down to TCO, productivity and efficiency.

    Today’s clouds have their share of network problems, outages, cascading failures etc. Automation of the network on a device by device basis is not nearly as compelling as a broader solution not tied to a specific app or piece of hardware. Me thinks that is where this is all going and a handful of the OpenFlow players, assisted by a handful of service providers or large enterprises looking for the Philosopher’s Stone will eventually force the same kinds of innovations in the network as have been introduced in the server infrastructure.
