
Performance Of 3rd Party Virtual Switches, Namely the Cisco Nexus 1000v…

One of the things I'm very much looking forward to with the release of the Cisco Nexus 1000v virtual switch for ESX is the publication of performance figures for the solution.

In my Four Horsemen presentation I highlight with interest the fact that in the physical world today we rely on dedicated, highly-optimized multi-core COTS or ASIC/FPGA-powered appliances to deliver consistent security performance in the multi-Gb/s range. 

These appliances generally deliver a single function (such as firewall, IPS, etc.) at line rate and are relatively easy to benchmark in terms of discrete performance or even when in-line with one another.

When you take the approach of virtualizing and consolidating complex networking and security functions such as virtual switches and virtual (security) appliances on the same host, competing for the same compute, memory and scheduling resources as the virtual machines you're trying to protect, it becomes much more difficult to forecast and predict performance…assuming you can actually get the traffic directed through these virtual bumps in the proper (stateful) order.

Recapping Horseman #2 (Pestilence): VMware's recently published performance results (grain of NaCl taken) for ESX 3.5 between two Linux virtual machines homed to the same virtual switch/VLAN/portgroup on a single host show throughput peaks of up to 2.5 Gb/s.  Certainly the performance at small packet sizes is significantly less, but let's pick the 64KB-64KB sampled result shown below as a use case:
[Image: VMware ESX 3.5 networking performance results]
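
If you want to sanity-check a number like that yourself, here's a rough sketch (in Python, driving netperf, which is what this class of VM-to-VM test is typically run with) of how such a run might be scripted.  The peer address, message sizes and run lengths below are purely illustrative and not VMware's actual methodology.

    # A rough sketch of scripting a VM-to-VM throughput test similar to the
    # one above.  Assumes netperf is installed in this guest and netserver is
    # running in the peer VM; the address and sizes below are hypothetical.
    import subprocess

    PEER_VM_IP = "192.168.100.12"      # hypothetical address of the second VM

    def run_tcp_stream(msg_bytes=65536, seconds=60):
        """Run one netperf TCP_STREAM test (64KB messages/buffers) and return its output."""
        cmd = [
            "netperf", "-H", PEER_VM_IP, "-t", "TCP_STREAM", "-l", str(seconds),
            "--", "-m", str(msg_bytes), "-s", str(msg_bytes), "-S", str(msg_bytes),
        ]
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    if __name__ == "__main__":
        # Repeat the run a few times so one scheduler hiccup doesn't skew the result.
        for _ in range(3):
            print(run_tcp_stream())
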
Given the performance we see above (internal-to-internal), it will be interesting to see how the retooling/extension of the networking functions to accommodate 3rd party vSwitches, DVS, APIs, etc. will affect performance and what overhead these functions impose on the overall system.  Specifically, it will be very interesting to see how VMware's vSwitch performance compares to Cisco's Nexus 1000v vSwitch in terms of "apples to apples" performance such as the test above.*

It will be even more interesting to see what happens when vNetwork (VMsafe) API calls are made in conjunction with vSwitch interaction, especially since the packet processing will include the tax of any third party fast path drivers and accompanying filters.  I wonder if specific benchmarking test standards will be designed for such comparisons?
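
To make that "tax" concrete, here's a toy back-of-the-envelope model (my own illustration, not anyone's published numbers): if every frame pays a fixed per-packet cost in a filter, that cost alone puts a ceiling on what a single core can push.

    # Toy model: per-frame filter cost vs. single-core throughput ceiling.
    # All costs and frame sizes below are hypothetical, for illustration only.
    def max_throughput_gbps(per_packet_us, frame_bytes):
        """Upper bound on throughput if each frame costs per_packet_us of CPU time."""
        packets_per_sec = 1e6 / per_packet_us
        return packets_per_sec * frame_bytes * 8 / 1e9

    for cost_us in (1.0, 2.0, 5.0):           # assumed per-frame filter costs
        for frame in (64, 1500, 9000):        # small, standard and jumbo frames
            print(f"{cost_us:4.1f} us/frame, {frame:5d}B frames -> "
                  f"{max_throughput_gbps(cost_us, frame):6.2f} Gb/s ceiling")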

Remember, both VMware's and Cisco's switching "modules" are software, even if they're running in the VMkernel, so capacity, scale and performance are a function of arbitrated access to hardware via the hypervisor and any hardware assist present in the underlying CPU.

What about it, Omar?  You have any preliminary figures (comparable to those above) that you can share with us on the 1000v that give us a hint as to performance?

/Hoff

* Further, if we measure performance that benchmarks traffic including physical NICs, it will be interesting to see what happens when we load a machine up with multiple 10 Gb/s Ethernet NICs at production loads trafficked by the vSwitches.
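
For a sense of scale, the arithmetic below (standard Ethernet math, nothing vendor-specific) shows the packet rates a vSwitch would have to sustain per 10 Gb/s NIC at line rate:

    # Packets-per-second at 10 Gb/s line rate for a few frame sizes
    # (ignoring preamble and inter-frame gap, so slightly optimistic).
    LINE_RATE_BPS = 10e9

    for frame_bytes in (64, 512, 1500):
        pps = LINE_RATE_BPS / (frame_bytes * 8)
        print(f"{frame_bytes:5d}B frames at 10 Gb/s ~ {pps / 1e6:5.2f} Mpps per NIC")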

  1. October 21st, 2008 at 21:19 | #1

    Hoff:
    These are some good points to bring up and I think it will be interesting to see how these different approaches pan out. As always, I think you will see different options with different trade-offs, which is one of the reasons we created VN-Link with both a hardware-based and a software-based option.
    Unfortunately, right now, I don't have anything to share. We will, however, be starting some formal testing shortly and I promise I'll share as soon as we have something to publish to give folks an idea of what to expect. As we get closer to actual shipping, we'll have some more formal docs that will give customers some guidance on performance and impact on the server.
    Omar Sultan
    Cisco Systems

  2. October 22nd, 2008 at 05:58 | #2

    Thanks, Omar…and you perfectly set up the follow-on post for me regarding the hardware version of the offering with the initiator, but I was hoping you'd have some #'s for the software version for us 😉
    I buy the trade-off scenario you mention. Depending upon performance/latency, I'd see where some might — assuming there's a big delta in performance between the hardware and software versions — go with one versus the other.
    It all depends on what those #'s look like, eh?
    It will be interesting to see how/if you integrate the equivalent of "cut-through" in the initiator so that VM-to-VM traffic doesn't have to ping-pong out of the VM, out the physical NIC, to the switch and back again…
    Looking forward to those performance #'s for both.
    Thanks!
    /Hoff
    (P.S. Hope your vacation was good…)
