Bypassing the Hypervisor For Performance & Network “Simplicity” = Bypassing Security?
As part of his coverage of Cisco’s UCS, Alessandro Perilli from virtualization.info highlighted this morning something I’ve spoken about many times since it first appeared as a one-slider at VMworld (latest, here), but about which we’ve not had a lot of details: the technology evolution of Cisco’s Nexus 1000v & VN-Link to the “Initiator:”
Chad Sakac, Vice President of VMware Technology Alliance at EMC, adds more details on his personal blog:
…[The Cisco] VN-Link can apply tags to ethernet frames – and is something Cisco and VMware submitted together to the IEEE to be added to the ethernet standards.
It allows ethernet frames to be tagged with additional information (VN tags) which means that the need for a vSwitch is eliminated. The vSwitch is required by definition as you have all these virtual adapters with virtual MAC addresses, and they have to leave the vSphere host on one (or at most a much smaller number) of ports/MACs. But, if you could somehow stretch that out to a physical switch, that would mean that the switch now has “awareness” of the VM’s attributes in network land – virtual adapters, ports and MAC addresses. The physical world is adapting to and gaining awareness of the virtual world…
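To make the tagging idea concrete, here’s a toy sketch of what inserting a VN-Tag-style header into an Ethernet frame looks like. This is a simplified, hypothetical layout for illustration only — the real VN-Tag format (which carries direction/looped bits and wider VIF identifiers) is defined in Cisco’s submission and the later IEEE work, not here. The point is simply that the frame itself names the source and destination virtual interfaces, so an external switch can switch per-VM:

```python
import struct

# EtherType associated with Cisco's VN-Tag; the 6-byte layout below is a
# deliberately simplified stand-in, NOT the real on-the-wire format.
VNTAG_ETHERTYPE = 0x8926

def add_vn_tag(frame: bytes, dst_vif: int, src_vif: int) -> bytes:
    """Insert a simplified VN-Tag right after the MAC addresses.

    With the virtual interface (VIF) IDs carried in the frame, the
    upstream physical switch can make per-VM forwarding decisions,
    which is what lets the hypervisor's vSwitch drop out of the path.
    """
    dst_mac, src_mac, rest = frame[:6], frame[6:12], frame[12:]
    tag = struct.pack("!HHH", VNTAG_ETHERTYPE, dst_vif, src_vif)
    return dst_mac + src_mac + tag + rest

# Minimal untagged frame: dst MAC, src MAC, EtherType 0x0800 (IPv4), payload.
frame = bytes(6) + bytes(6) + b"\x08\x00" + b"payload"
tagged = add_vn_tag(frame, dst_vif=42, src_vif=7)

assert tagged[12:14] == b"\x89\x26"                 # switch sees the tag first
assert struct.unpack("!H", tagged[14:16])[0] == 42  # destination VIF
```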
Bundle that with Scott Lowe’s interesting technical exploration of some additional elements of UCS as it relates to abstracting — or more specifically completely removing virtual networking from the hypervisor — and things start to get heated. I’ve spoken about this in my Four Horsemen presentation:
Today, in the VMware space, virtual machines are connected to a vSwitch because connecting them directly to a physical adapter just isn’t practical. Yes, there is VMDirectPath, but for VMDirectPath to really work it needs more robust hardware support. Otherwise, you lose useful features like VMotion. (Refer back to my VMworld 2008 session notes from TA2644.) So, we have to manage physical switches and virtual switches—that’s two layers of management and two layers of switching. Along comes the Cisco Nexus 1000V. The 1000V helps to centralize management but we still have two layers of switching.
That’s where the “Palo” adapter comes in. Using VMDirectPath “Gen 2” (again, refer to my TA2644 notes) and the various hardware technologies I listed and described above, we now gain the ability to attach VMs directly to the network adapter and eliminate the virtual switching layer entirely. Now we’ve both centralized the management and eliminated an entire layer of switching. And no matter how optimized the code may be, the fact that the hypervisor doesn’t have to handle packets means it has more cycles to do other things. In other words, there’s less hypervisor overhead. I think we can all agree that’s a good thing.
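Scott’s two-layers-vs-one point can be sketched as a toy model of the per-packet path. The stage names here are illustrative, not real VMware or Cisco APIs — the only claim is structural: with a vSwitch the hypervisor handles every packet, and with direct attachment that stage disappears:

```python
# Toy model of the two packet paths: names are illustrative only.

def vswitch_path(packet: str) -> list:
    # Two layers of switching: virtual (in the hypervisor), then physical.
    return [packet, "guest vNIC", "hypervisor vSwitch",
            "physical NIC", "upstream physical switch"]

def direct_path(packet: str) -> list:
    # VMDirectPath-style attachment: the VM talks to a slice of the
    # adapter directly; all switching happens once, upstream.
    return [packet, "guest vNIC", "adapter virtual function",
            "upstream physical switch"]

hops_soft = vswitch_path("pkt")
hops_direct = direct_path("pkt")

assert "hypervisor vSwitch" not in hops_direct  # one switching layer removed
assert len(hops_direct) < len(hops_soft)        # fewer per-packet stages
```

The removed stage is exactly where the hypervisor spends cycles per packet — which is the overhead argument — and, as the next section asks, it is also where inspection hooks live.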
So here’s what I am curious about. If we’re clawing back networking from the hosts and putting it back into the network, regardless of flow/VM affinity, AND we’re bypassing the VMM (where the dvfilter/fastpath drivers for VMsafe live), do we just lose all the introspection capabilities and the benefits of VMsafe that we’ve been waiting for? Does this basically leave us with having to shunt all traffic back out to the physical switches (and thus physical appliances) in order to secure traffic? Note, this doesn’t necessarily impact the other components of VMsafe (memory, CPU, disk, etc.) but the network portion, it would seem, is obviated.
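The concern reduces to this: a dvfilter-style hook only sees packets that traverse the hypervisor’s switching path. Here’s a conceptual sketch (this is not the VMsafe API, just the shape of the problem) showing why traffic on a direct hardware path is invisible to an in-hypervisor inspection hook:

```python
# Conceptual sketch only -- function names are hypothetical, not VMsafe calls.

inspected = []

def dvfilter_hook(packet: str) -> str:
    # Stand-in for in-hypervisor introspection (firewall, IDS, etc.).
    inspected.append(packet)
    return packet

def via_vswitch(packet: str) -> str:
    # Hypervisor switching path: the filter fires on every packet.
    return dvfilter_hook(packet)

def via_direct_path(packet: str) -> str:
    # Hardware path bypasses the VMM: no hook, no visibility.
    return packet

via_vswitch("pkt-A")
via_direct_path("pkt-B")

assert inspected == ["pkt-A"]  # the direct-path packet was never seen
```

If that’s an accurate picture, inspection has to happen somewhere else — on the adapter itself or back out at the physical switch — which is exactly the shunting question above.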
Are we trading off security once again for performance and “efficiency?” How much hypervisor overhead (as Scott alluded to) are we really talking about here for network I/O?
Anyone got any answers? Is there a simple answer to this, or if I use this option, do I just give up what I’ve been waiting two years for in VMsafe/vNetworking?