Return Of the Big, Honkin’ SuperNIC and Bait and (Virtual) Switch

I’m going to highlight a prediction I had on a forthcoming security
offering from yet-to-be-named security solution providers for
virtualized environments as well as something I overheard at RSA.

In the next few days, I’m going to be releasing my post on the
evolution of some really concerning performance and configuration
limitations of security solutions in virtualized environments and this
will make a lot more sense, but until then, grok this…

Here’s Item #1 – Return of the Big, Honkin’ NIC Card…

Remember back when 3Com released this little beauty?

3Com® 10/100 Secure Server NIC

Server IPSec and 3DES Encryption at Wire Speeds


The 3Com® 10/100 Secure Server NIC is custom-designed for servers that
demand high performance and end-to-end security. Its onboard security
processor works with Windows 2000 or XP to offload key processing
tasks, reducing the load imposed on the CPU.

It never really took off and has long since been discontinued, but
here’s where I reckon we’re going to see a rebirth (like bellbottoms)
of something similar from security vendors, either as a NIC or an
offload card sitting in the virtual host.

In a virtualized server, most of the emerging security solutions are
going to take the form of agents/applications running in VMs or as
virtual appliances in the host.  This is all going to run in
software, with limitations on memory, CPU and I/O.  Imagine every flow,
whether inter-host or intra-VM, having to bounce back and forth across
the vSwitch and through the security functions in software.
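To make the cost concrete, here's a toy sketch of the path I'm describing. All names are hypothetical (this isn't any vendor's API); it just counts vSwitch traversals per packet when an in-line security virtual appliance inspects traffic:

```python
# Toy model of the in-software inspection path: every hop is a memory
# copy and CPU work done in the host's software switching layer.

def hops_direct():
    """VM A -> vSwitch -> VM B: the path with no inspection."""
    return ["vmA", "vswitch", "vmB"]

def hops_with_security_va():
    """VM A -> vSwitch -> security VA (software DPI/FW/AV) -> vSwitch -> VM B."""
    return ["vmA", "vswitch", "securityVA", "vswitch", "vmB"]

direct = hops_direct()
inspected = hops_with_security_va()

# The in-line VA roughly doubles the vSwitch work per packet,
# and that tax is paid on every flow on the host.
print(len(direct), len(inspected))   # 3 5
print(inspected.count("vswitch"))    # 2
```

The exact hop count is made up, but the shape of the problem isn't: the security function sits in the data path, in software, competing for the same CPU and memory as the workloads it protects.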


Despite APIs like VMsafe, which allow for hooks on a per-VM basis to
"redirect" traffic to a VM/VA for disposition in software, imagine if
instead of just having IPSec on a NIC, we also had DPI, firewall, IDP,
AV and other security functions.

Rather than doing all of this stuff in software, the
agents/applications or virtual appliances could offload these functions
and allow the hardware to perform them on their behalf.  This could
take the form of FPGAs or custom silicon like Cavium’s multi-core
Octeon security processors.
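A minimal sketch of that offload model, with entirely hypothetical names (no real driver or card exposes this interface): the agent prefers the card for functions it supports and falls back to software for the rest.

```python
# Hypothetical offload dispatch: a security agent/VA hands work to an
# offload card (FPGA or security processor) when it can, and eats the
# software cost when it can't. Cost units are arbitrary and relative.

SOFTWARE_COST = 10
HARDWARE_COST = 1

class OffloadCard:
    """Stand-in for a NIC/offload card exposing fixed functions."""
    supported = {"ipsec", "dpi", "firewall"}

    def process(self, function, packet):
        if function not in self.supported:
            raise NotImplementedError(function)
        return HARDWARE_COST

class SecurityAgent:
    def __init__(self, card=None):
        self.card = card

    def inspect(self, function, packet):
        # Prefer the card; otherwise do the work in the VA's software.
        if self.card and function in self.card.supported:
            return self.card.process(function, packet)
        return SOFTWARE_COST

agent = SecurityAgent(card=OffloadCard())
costs = [agent.inspect(f, b"pkt") for f in ("ipsec", "dpi", "av")]
print(costs)  # [1, 1, 10] -- AV isn't offloaded, so it stays in software
```

The interesting design question is exactly the one the COTS argument glosses over: which functions end up in the `supported` set, and who decides.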

This is where the argument of "hey, all we need is COTS multicore hardware to scale" simply falls apart.

It’s not at all an original idea, as we’ve had offload/acceleration
cards in appliances/servers for a long time, but when the performance
and configuration limitations of virtual hosts arise, I predict we’ll
see these things crop up as a "solution" that is "new." 😉

Here’s item #2 – Bait and (Virtual) Switch

I’ve talked previously about virtualization platform providers like VMware ultimately providing a way of modularizing/isolating the vSwitch functionality in the VMM and allowing third parties to instantiate their own vSwitch instead. 

Further, I’ve written about how I/O virtualization is likely to change the way and where the virtual networking is performed. 

Intel is rumored (was this news at RSA? I can’t tell) to be taking
another approach: embedding the vSwitch functionality directly into
the underlying CPU chipsets.  This makes the vSwitch not so much ‘v’
(virtual) any longer.  You’ll have the network switching fabric and
functions in the CPU itself.

I’m sure that if Intel is considering this, then AMD would not be far behind.

Thus some version of an upcoming CPU would provide this capability
natively, interfacing with the NIC card (or the super NIC above) and
the VMM.  This brings up some really interesting questions, no?

More later.


  1. April 14th, 2008 at 17:06 | #1

    I dig the idea of what you're saying with #1, but think that moving to a fully hardware-offloaded model defeats a good bit of the benefits virtualization brings. I personally feel that we will eventually get to a point where there are only three pieces of primary hardware that handle offloading (bus aside): CPU (compute), ASIC (or something along those lines, even if it turns out to be back to CPU) for network, and TPM (security). Let everything else be managed by software. CPU runs the hypervisor, packets come through the ASIC, TPM is the "checks and balances"; those, IMO, are the three components that need bus speeds. Everything else can suffer in software.
    But you know I'm right there with you on #2: software switches are Real Bad(TM). Those should be the first to move into hardware, long before we start thinking about managing a hypervisor on the CPU.

  2. April 14th, 2008 at 17:48 | #2

    I think you're missing the point of all this, which is the absolutely ludicrous state that the VM/Virtual appliance model is introducing. Wait until you see my post in the next day or so. I swear, if you don't say "rot roh" at the end of it, I'll eat my (small and made of potatoes) hat.
    I spoke to Ptacek today for an article he's writing and he thought the points were interesting. 😉
    I'm not endorsing the idea of the Big Honkin' NIC card, I just know in my gut it's going to show up soon, unfortunately. The entire benefit model of virtualization becomes defeated thanks to the security models we have now — and VMsafe, while good and a life extension, isn't going to be the cure for cancer.
    The vSwitch moving into the CPU is going to pose yet another paradigm shift (ooooh, I hate saying that) that we're not ready for yet.
    I'm all about the TPM. I talk about it (and the final end state) in my VirtSec preso. I'm going to post the latest version once I deliver it in Germany at the end of next week.

  3. April 15th, 2008 at 14:40 | #3

    I look forward to your follow-on posts on this idea as well as the VirtSec preso. Will you also include a picture of you eating said potato hat? 😉

  4. April 15th, 2008 at 15:37 | #4

    The post is already up. See: http://rationalsecurity.typepad.com/blog/2008/04/
    I think it passes the mustard. If not, let me know. I'll go to the deli forthwith. 😉
