Archive for the ‘Cisco’ Category

Quick Bit: Virtual & Cloud Networking – Where It ISN’T Going…

May 26th, 2009

In my Four Horsemen presentation, I made reference to one of the challenges with how the networking function is being integrated into virtual environments.  I’ve gone on to highlight how this is exacerbated in Cloud networking, also.

Specifically, as it comes to understanding how the network plays in virtual and Cloud architectures, it’s not where the network *is* in the increasingly complex virtualized, converged and unified computing architectures, it’s where networking *isn’t.*

What do I mean by this?  Here’s a graphical representation that I built about a year ago.  It’s well out-of-date and overly-simplified, but you get the picture:

There’s networking at almost every substrate level — in the physical and virtual constructs.  In our never-ending quest to balance performance, agility, resiliency and security, we’re ending up with a trade-off I call simplexity: the most complex simplicity in networking we’ve ever seen.  I wrote about this in a blog post last year titled “The Network Is the Computer…(Is the Network, Is the Computer…).”

If you take a look at some of the more recent blips to appear on the virtual and Cloud networking radar, you’ll see examples such as:

This list is far from inclusive.  Yes, I know I’ve left off blade server manufacturers and other players like HP (ProCurve) and Juniper, etc., as well as ADC vendors like F5.  It’s not that I don’t appreciate their solutions; it’s just that I only have a couple of free cycles to write this, and the items above are at the top of my stack.

I plan on writing in more detail about the impact some of these technologies are having on next generation datacenters and Cloud deployments, as it’s a really interesting subject for me coming from my background at Crossbeam.

The most startling differences are in the approach of either putting the networking (and all its attendant capabilities) back in the hands of the network folks or allowing the server/virtual server admins to continue to leverage their foothold in the space and manage the network as a component of the converged and virtualized solution as a whole.

My friend @aneel (Twitter) summed it up really well this morning when comparing the Blade Network Technologies VMready offering and the Cisco Nexus 1000v:

huh.. where cisco uses nx1kv to put net control more in hands of net ppl, bnt uses vmready to put it further in server/virt admin hands

Looking at just the small sampling of solutions above, we see the diversity in integrated networking, external fabrics, converged fabrics (including storage) and add-on network processors.

It’s going to be a wild ride kids.  Buckle up.

/Hoff

The Forthcoming Citrix/Xen/KVM Virtual Networking Stack…What Does This Mean to VMware/Cisco 1000v?

May 8th, 2009

I was at Citrix Synergy/Virtualization Congress earlier this week and at the end of the day on Wednesday, Scott Lowe tweeted something interesting when he said:

In my mind, the biggest announcement that no one is talking about is virtual switching for XenServer. #CitrixSynergy

I had missed the announcements since I didn’t get to many of the sessions due to timing, so I sniffed around based on Scott’s hints and looked for some more meat.

I found that Chris Wolf covered the announcement nicely in his blog here but I wanted a little more detail, especially regarding approach, architecture and implementation.

Imagine my surprise when Alessandro Perilli and I sat down for a quick drink only to be joined by Simon Crosby and Ian Pratt.  Sometimes, membership has its privileges 😉

I asked Simon/Ian about the new virtual switch because I was very intrigued, and since I had direct access to the open source, it was good timing.

Now, not to be a spoil-sport, but there are details under frieNDA that I cannot disclose, so I’ll instead riff off of Chris’ commentary wherein he outlined the need for more integrated and robust virtual networking capabilities within or adjunct to the virtualization platforms:

Cisco had to know that it was only a matter of time before competition for the Nexus 1000V started to emerge, and it appears that a virtual switch that competes with the Nexus 1000V will come right on the heels of the 1000V release. There’s no question that we’ve needed better virtual infrastructure switch management, and an overwhelming number of Burton Group clients are very interested in this technology. Client interest has generally been driven by two factors:

  • Fully managed virtual switches would allow the organization’s networking group to regain control of the network infrastructure. Most network administrators have never been thrilled with having server administrators manage virtual switches.
  • Managed virtual switches provide more granular insight into virtual network traffic and better integration with the organization’s existing network and security management tools.

I don’t disagree with any of what Chris said, except that I do think that the word ‘compete’ is an interesting turn of phrase.

Just as the Cisco 1000v is a mostly proprietary (implementation of a) solution bound to VMware’s platform, the new Citrix/Xen/KVM virtual networking capabilities — while open sourced and free — are bound to Xen and KVM-based virtualization platforms, so it’s not really “competitive” because it’s not going to run in VMware environments. It is certainly a clear shot across the bow of VMware to address the 1000v, but there’s a tradeoff here as it comes to integration and functionality as well as the approach to what “networking” means in a virtualized construct.  More on that in a minute.

I’m going to take Chris’ next chunk out of order so I can describe the features we know about:

I’m expecting Citrix to offer more details of the open source Xen virtual switch in the near future, but in the mean time, here’s what I can tell you:

  • The virtual switch will be open source and initially compatible with both Xen- and KVM-based hypervisors
  • It will provide centralized network management
  • It will support advanced network management features such as Netflow, SPAN, RSPAN, and ERSPAN
  • It will initially be available as a plug-in to XenCenter
  • It will support security features such as ACLs and 802.1x

This all sounds like good stuff.  It brings the capabilities of virtual networking and how it’s managed to “proper” levels.  If you’re wondering how this is going to happen, you *cough* might want to take a look at OpenFlow…being able to enforce policies and do things similar to the 1000v with VMware’s vSphere, DVS and the up-coming VN-Link/VN-tag is the stuff I can’t talk about — even though it’s the most interesting.  Suffice it to say there are some very interesting opportunities here that do not require proprietary networking protocols that may or may not require uplifts or upgrades of routers/switches upstream.  ’nuff said. 😉
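For illustration only (this is a generic sketch of the OpenFlow-style model in Python, invented by me for the example; it is emphatically not the Citrix/Xen implementation or anything covered by the NDA bits above), here’s roughly what “centralized policy, distributed enforcement” boils down to:

```python
# Conceptual sketch only: centralized flow policy with per-host caching, loosely
# in the spirit of OpenFlow. Class and field names are invented for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowKey:
    src_mac: str
    dst_mac: str
    dst_port: int            # e.g. TCP/UDP destination port

class CentralController:
    """Policy is defined once, for every hypervisor's virtual switch."""
    def __init__(self):
        self.rules = []                        # list of (predicate, action) tuples

    def add_rule(self, predicate, action):
        self.rules.append((predicate, action))

    def decide(self, flow: FlowKey) -> str:
        for predicate, action in self.rules:
            if predicate(flow):
                return action
        return "drop"                          # default deny

class HostVSwitch:
    """Per-host switch: caches decisions, punts unknown flows to the controller."""
    def __init__(self, controller: CentralController):
        self.controller = controller
        self.flow_table = {}

    def forward(self, flow: FlowKey) -> str:
        if flow not in self.flow_table:
            self.flow_table[flow] = self.controller.decide(flow)
        return self.flow_table[flow]

# Permit HTTPS to one VM's vNIC; everything else is dropped. The same rule holds
# no matter which host (and thus which HostVSwitch) the VM lands on after a migration.
ctrl = CentralController()
ctrl.add_rule(lambda f: f.dst_mac == "00:16:3e:aa:bb:cc" and f.dst_port == 443, "allow")

host1 = HostVSwitch(ctrl)
print(host1.forward(FlowKey("00:16:3e:11:22:33", "00:16:3e:aa:bb:cc", 443)))  # allow
print(host1.forward(FlowKey("00:16:3e:11:22:33", "00:16:3e:aa:bb:cc", 22)))   # drop
```

The point of the toy: the policy lives in one place and follows the flow definition rather than the host, which is exactly the property the network folks want back.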

Now the next section is interesting, but in my opinion it’s a bit of a reach in certain sections:

For awhile I’ve held the belief that the traditional network access layer was going to move to the virtual infrastructure. A large number of physical network and security appliance vendors believe that too, and are building or currently offering products that can be deployed directly to the virtual infrastructure. So for Cisco, the Nexus 1000V was important because it a) gave its clients functionality they desperately craved, but also b) protected existing revenue streams associated with network access layer devices. Throw in an open source managed virtual switch, and it could be problematic for Cisco’s continued dominance of the network market. Sure, Cisco’s competitors can’t go at Cisco individually, but by collectively rallying around an open source managed virtual switch, they have a chance. In my opinion, it won’t be long before the Xen virtual switch can be run via software on the hypervisor and will run on firmware on SR-IOV-enabled network interfaces or converged network adapters (CNAs).


This is clearly a great move by Citrix. An open source virtual switch will allow a number of hardware OEMs to ship a robust virtual switch on their products, while also giving them the opportunity to add value to both their hardware devices (e.g., network adapters) and software management suites. Furthermore, an open source virtual switch that is shared by a large vendor community will enable organizations to deploy this virtual switch technology while avoiding vendor lock-in.

Firstly, I totally agree that it’s fantastic that this capability is coming to Xen/KVM platforms.  It’s a roadmap item that has been missing and was, quite honestly, going to happen one way or another.

You can expect that Microsoft will also need to respond to this at some point to allow for more integrated networking and security capabilities with Hyper-V.

However, let’s compare apples to apples here.

I think it’s interesting that Chris chose to toss in the “vendor lock-in” argument as it pertains to virtual networking and virtualization for the following reasons:

  • Most enterprise networking environments (from the routing & switching perspective) are provided by a single vendor.
  • Most enterprises choose a virtualization platform from a single vendor.

If you take those two things, then for an environment that has VMware and Cisco, that “lock-in” is a deliberate choice, not foisted upon them.

If an enterprise chooses to invest based upon functionality NOT available elsewhere due to a tight partnership between technology companies, it’s sort of goofy to suggest lock-in.  We call this adoption of innovation.  When you’re a competitor who is threatened because you don’t have the capability, you call it lock-in. ;(

This virtual switch announcement does nothing to address “lock-in” for customers who choose to run VMware with a virtual networking stack other than VMware’s or Cisco’s…see what I mean?  It doesn’t matter if the customer has Juniper switches or not in this case…until you can integrate an open source virtual switch into VMware the same way Cisco did with the 1000v (which is not trivial,) then we are where we are.

Of course the 1000v was a strategic decision by Cisco to help re-claim the access layer that was disappearing into the virtualized hosts and make Cisco more relevant in a virtualized environment.  It sets the stage, as I have mentioned, for the longer term advancements of the entire Nexus and NG datacenter switching/routing products including VN-Link/VN-Tag — with some features being proprietary and requiring Cisco hardware and others not.

I just don’t buy the argument that an open virtual switch “…could be problematic for Cisco’s continued dominance of the network market” when the longtime availability of open source networking products (including routers like Vyatta) hasn’t made much of a dent in the enterprise against Cisco.

Customers want “open enough” and solutions that are proven and time tested.  Even the 1000v is brand new.  We haven’t even finished letting the paint dry there yet!

Now, I will say that if IBM or HP want to stick their thumb in the pie and extend their networking reach into the host by integrating this new technology with their hardware network choices, it offers a good solution — so long as you don’t mind *cough* “lock-in” from the virtualization platform provider’s perspective (since VMware is clearly excluded — see how this is a silly argument?)

The final point about “security inspection” and comparing the ability to redirect flows at a kernel/network layer to a security VA/VM/appliance is only one small part of what VMware’s VMsafe does:

Citrix needed an answer to the Nexus 1000V and the advanced security inspection offered by VMsafe, and there’s no doubt they are on the right track with this announcement.

Certainly, it’s the first step toward better visibility and does not require API modification of the security virtual appliances/machines like VMware’s solution in its full-blown implementation does, but this isn’t full-blown VM introspection, either.

More to the point, it’s a way of ensuring a more direct method of gaining better visibility and control over networking in a virtualized environment.  Remember that VMsafe also includes the ability to provide interception and inspection of virtualized memory, disk and CPU execution as well as networking.  There are, as I have mentioned, Xen community projects to introduce VM introspection, however.

So yes, they’re on the right track indeed and will give people pause when evaluating which virtualization and network vendor to invest in, should there be a greenfield opportunity to do so.  If we’re dealing with environments that already have Cisco and VMware in place, not so much.

/Hoff

VMware’s Licensing – A “Slap In The Face For Cisco?” Hey Moe!

May 4th, 2009

I was just reading a post by Alessandro at virtualization.info in which he was discussing the availability of trial versions of Cisco’s Nexus 1000v virtual switch solution for VMware environments:

Starting May 21, we’ll see if the customers will really consider the Cisco virtual switch a must-have and will gladly pay the premium price to replace the basic VMware virtual switch they used for so many years now.  As usual in virtualization, it really depends on who’s your interlocutor inside the corporate. The guys at the security department may have a slightly different opinion on this product than the virtualization guys.

Clearly the Nexus 1000v is just the first in a series of technology and architectural elements that Cisco is introducing to integrate more tightly into virtualized and Cloud environments.  The realities of adoption of the 1000v come down to who is making the purchasing decisions, how virtualization is being addressed as an enterprise architecture issue,  how the organization is structured and what pain points might be felt from the current limitations associated with VMware’s vSwitch from both a technological and operational perspective.

Oh, it also depends on price, too 😉

Alessandro also alludes to some complaints about the pricing strategy, given that the underlying requirement for the 1000v, the vNetwork Distributed Switch, is also a for-pay item.  Without the vNDS, the 1000v no workee:

Some VMware customers are arguing that the current packaging and price may negatively impact the sales of Nexus 1000V, which becomes now much less attractive.

I don’t pretend to understand all the vagaries of the SKU and cost structures of VMware’s new vSphere, but I was intrigued by the following post from the vinternals blog titled “VMware slaps enterprise and Cisco in face, opens door for competitors”:

And finally, vNetwork Distributed Switch. This is where the slap in the face for Cisco is, because the word on the street is that no one even cares about this feature. It is merely seen as an enabler for the Cisco Nexus 1000V. But now, I have to not only pay $600 per socket for the distributed switch, but also pay Cisco for the 1000V!?!?! A large slice of Cisco’s potential market just evaporated. Enterprises have already jumped through the necessary security, audit and operational hoops to allow vSwitches and port groups to be used as standard in the production environment. Putting Cisco into the virtual networking stack is nowhere near a necessity. I wonder what Cisco are going to do now, start rubbishing VMware’s native vSwitches? That will go down well. Oh and yeh, looks like you pretty much have only 1 licensing option for Cisco’s Unified Computing System now. Guess that “20% reduction in capital expense” just flew out the window.

Boy, what a downer! Nobody cares about vNDS?  It’s “…merely seen as an enabler for the Cisco Nexus 1000V?” Evaporation of market? I think those statements are a tad melodramatic, short-sighted and miss the point.

The “necessary security, audit and operational hoops to allow vSwitches and port groups to be used as standard in the production environment” may have been jumped through, but they represent some serious issues at scale and I maintain that these hoops barely satisfy these requirements based on what’s available, not what is needed, especially in the long term.  The issues surrounding compliance, separation of duties, change control/management as well as consistent and stateful policy enforcement are huge problems that are being tolerated today, not solved.

The reality is that vNDS and the 1000v represent serious operational, organizational and technical shifts in the virtualization environment. These are foundational building blocks of a converged datacenter, not point-product cash cows being built to make a quick buck.   The adoption and integration are going to take time, as will vSphere upgrades in general.  Will people pay for them?  If they need more scalable, agile, and secure environments, they will.  Remember the Four Horsemen? vSphere and vNetworking go a long way toward giving enterprises more choice in solving these problems and vNDS/1000v are certainly pieces of this puzzle. The network simply must become more virtualization (and application and information-) aware in order to remain relevant.

However, I don’t disagree in general that  “…putting Cisco into the virtual networking stack is nowhere near a necessity,” for most enterprises, especially if they have very simple requirements for scale, mobility and security.  In environments that are designing their next evolution of datacenter architecture, the integration between Cisco, VMware and EMC is critical. Virtualization context, security and policy enforcement are pretty important things.  vNetworking/vNDS/1000v/VN-Link are all enablers.

Lastly, there is no need for Cisco to “…start rubbishing VMware’s native vSwitches” as the differences are pretty clear.  If customers see value in the solution, they will pay for it. I don’t disagree that the “premium” needs to be assessed and the market will dictate what that will be, but this doom and gloom is premature.

Time will tell if these bets pay off.  I am putting money on the fact that they will.

Don’t think that Cisco and VMware aren’t aware of how critical each is to the other; there’s no face slapping going on.

/Hoff

Bypassing the Hypervisor For Performance & Network “Simplicity” = Bypassing Security?

March 18th, 2009

As part of his coverage of Cisco’s UCS, Alessandro Perilli from virtualization.info highlighted this morning something I’ve spoken about many times since it was a one-slider at VMworld (latest, here) but that we’ve not had a lot of details about: the technology evolution of Cisco’s Nexus 1000v & VN-Link to the “Initiator:”

Chad Sakac, Vice President of VMware Technology Alliance at EMC, adds more details on his personal blog:

…[The Cisco] VN-Link can apply tags to ethernet frames –  and is something Cisco and VMware submitted together to the IEEE to be added to the ethernet standards.

It allows ethernet frames to be tagged with additional information (VN tags) that mean that the need for a vSwitch is eliminated.  The vSwitch is required by definition as you have all these virtual adapters with virtual MAC addresses, and they have to leave the vSphere host on one (or at most a much smaller number) of ports/MACs.  But, if you could somehow stretch that out to a physical switch, that would mean that the switch now has “awareness” of the VM’s attributes in network land – virtual adapters, ports and MAC addresses.  The physical world is adapting to and gaining awareness of the virtual world…
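To make the tagging idea concrete, here’s a back-of-the-napkin sketch in Python (purely illustrative; the vif_id fields are my own stand-ins, not the actual VN-Tag wire format) of what it means for a frame to carry virtual-interface metadata that an upstream physical switch can act upon:

```python
# Conceptual model only: an ethernet frame carrying virtual-interface metadata so
# the upstream physical switch can tell VM-level ports apart. The vif_id fields are
# illustrative assumptions, not the real VN-Tag wire format.
from dataclasses import dataclass

@dataclass
class EthernetFrame:
    src_mac: str
    dst_mac: str
    payload: bytes

@dataclass
class VNTaggedFrame(EthernetFrame):
    src_vif_id: int          # which virtual adapter (VM vNIC) sourced this frame
    dst_vif_id: int          # which virtual adapter it is destined for

def physical_port_policy(frame: VNTaggedFrame, vif_policies: dict) -> str:
    """The physical switch can now apply per-VM policy without a vSwitch in the host."""
    return vif_policies.get(frame.src_vif_id, "drop")

policies = {101: "allow", 102: "rate-limit"}
frame = VNTaggedFrame("00:50:56:aa:00:01", "00:50:56:aa:00:02", b"...",
                      src_vif_id=101, dst_vif_id=102)
print(physical_port_policy(frame, policies))   # allow
```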

 

Bundle that with Scott Lowe’s interesting technical exploration of some additional elements of UCS as it relates to abstracting — or more specifically completely removing virtual networking from the hypervisor — and things start to get heated.  I’ve spoken about this in my Four Horsemen presentation:

Today, in the VMware space, virtual machines are connected to a vSwitch because connecting them directly to a physical adapter just isn’t practical. Yes, there is VMDirectPath, but for VMDirectPath to really work it needs more robust hardware support. Otherwise, you lose useful features like VMotion. (Refer back to my VMworld 2008 session notes from TA2644.) So, we have to manage physical switches and virtual switches—that’s two layers of management and two layers of switching. Along comes the Cisco Nexus 1000V. The 1000V helps to centralize management but we still have two layers of switching.

That’s where the “Palo” adapter comes in. Using VMDirectPath “Gen 2” (again, refer to my TA2644 notes) and the various hardware technologies I listed and described above, we now gain the ability to attach VMs directly to the network adapter and eliminate the virtual switching layer entirely. Now we’ve both centralized the management and eliminated an entire layer of switching. And no matter how optimized the code may be, the fact that the hypervisor doesn’t have to handle packets means it has more cycles to do other things. In other words, there’s less hypervisor overhead. I think we can all agree that’s a good thing.

 

So here’s what I am curious about. If we’re clawing back networking from the hosts and putting it back into the network, regardless of flow/VM affinity, AND we’re bypassing the VMM (where the dvfilters/fastpath drivers live for VMsafe,) do we just lose all the introspection capabilities and the benefits of VMsafe that we’ve been waiting for?  Does this basically leave us with having to shunt all traffic back out to the physical switches (and thus physical appliances) in order to secure it?  Note that this doesn’t necessarily impact the other components of VMsafe (memory, CPU, disk, etc.) but the network portion, it would seem, is obviated.
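Reducing my question to a trivial sketch (my framing, not any vendor’s API): the attachment model determines which inspection points ever see a VM’s packets in the first place.

```python
# A trivial way to frame the visibility question (my own model, not any vendor's API):
# where the vNIC attaches determines which inspection points ever see its traffic.
INSPECTION_POINTS = {
    "vswitch_attached":   ["hypervisor dvfilter/fastpath", "physical switch/appliance"],
    "directpath_adapter": ["physical switch/appliance"],   # hypervisor layer is bypassed
}

def host_level_introspection(attachment: str) -> bool:
    return "hypervisor dvfilter/fastpath" in INSPECTION_POINTS[attachment]

for attachment in INSPECTION_POINTS:
    verdict = ("VMsafe-style network introspection possible"
               if host_level_introspection(attachment)
               else "traffic must hairpin out to physical security gear")
    print(f"{attachment}: {verdict}")
```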

Are we trading off security once again for performance and “efficiency?”  How much hypervisor overhead (as Scott alluded to) are we really talking about here for network I/O?

Anyone got any answers?  Is there a simple answer to this, or if I use this option, do I just give up what I’ve been waiting two years for in VMsafe/vNetworking?

/Hoff

Categories: Cisco, Virtualization, VMware

The UFC and UCS: Cisco Is Brock Lesnar

March 17th, 2009

My favorite sport is mixed martial arts (MMA.)

MMA is a combination of various arts and features athletes who come from a variety of backgrounds and combine many disciplines that they bring to the ring.

You’ve got wrestlers, boxers, kickboxers, muay thai practitioners, jiu jitsu artists, judoka, grapplers, freestyle fighters and even the odd karate kid.

Mixed martial artists are often better versed in one style/discipline than another given their strengths and background but as the sport has evolved, not being well-rounded means you run the risk of being overwhelmed when paired against an opponent who can knock you out, take you down, ground and pound you, submit you or wrestle/grind you into oblivion.  

The UFC (Ultimate Fighting Championship) is an organization which has driven the popularity and mainstream adoption of MMA as a recognizable and sanctioned sport and has given rise to some of the most notable MMA match-ups in recent history.

One of those match-ups included the introduction of Brock Lesnar — an extremely popular “professional” wrestler — who has made the  transition to MMA.  It should be noted that Brock Lesnar is an aberration of nature.  He is an absolute monster:  6’3″ and 276 pounds.  He is literally a wall of muscle, a veritable 800 pound gorilla.

In his first match, he was paired up against a veteran in MMA and former heavyweight champion, Frank Mir, who is an amazing grappler known for vicious submissions.  In fact, he submitted Lesnar with a nasty kneebar as Lesnar’s ground game had not yet evolved.  This is simply part of the process.  Lesnar’s second fight was against another veteran, Heath Herring, who he manhandled to victory.  Following the Herring fight, Lesnar went on to fight one of the legends of the sport and reigning heavyweight champion, Randy Couture.  

Lesnar’s skills had obviously progressed and he looked great against Couture and ultimately won by a TKO.

So what the hell does the UFC have to do with the Unified Computing System (UCS?)

Cisco UCS Components

 

Cisco is to UCS as Lesnar is to the UFC.

Everyone wrote Lesnar off after he entered the MMA world and especially after the first stumble against an industry veteran.

Imagine the surprise when his mass, athleticism, strength, intelligence and tenacity combined with a well-versed strategy paid off as he’s become an incredible force to be reckoned with in the MMA world as his skills progressed.  Oh, did I mention that he’s the World Heavyweight Champion now?

Cisco comes to the (datacenter) cage much as Lesnar did; an 800 pound gorilla incredibly well-versed in one  set of disciplines, looking to expand into others and become just as versatile and skilled in a remarkably short period of time.  Cisco comes to win, not compete. Yes, Lesnar stumbled in his first outing.  Now he’s the World Heavyweight Champion.  Cisco will have their hiccups, too.

The first elements of UCS have emerged.  The solution suite with the help of partners will refine the strategy and broaden the offerings into a much more well-rounded approach.  Some of Cisco’s competitors who are bristling at Cisco’s UCS vision/strategy are quick to criticize them and reduce UCS to simply an ill-executed move “…entering the server market.”  

I’ve stated my opinions on this short-sighted perspective:

Yes, yes. We’ve talked about this before here. Cisco is introducing a blade chassis that includes compute capabilities (heretofore referred to as a ‘blade server.’)  It also includes networking, storage and virtualization all wrapped up in a tidy bundle.

So while that looks like a blade server (quack!,) walks like a blade server (quack! quack!) that doesn’t mean it’s going to be positioned, talked about or sold like a blade server (quack! quack! quack!)
What’s my point?  What Cisco is building is just another building block of virtualized INFRASTRUCTURE. Necessary infrastructure to ensure control and relevance as their customers’ networks morph.

My point is that what Cisco is building is the natural by-product of converged technologies with an approach that deserves attention.  It *is* unified computing.  It’s a solution that includes integrated capabilities that otherwise customers would be responsible for piecing together themselves…and that’s one of the biggest problems we have with disruptive innovation today: integration.

 

The knee-jerk dismissals witnessed since yesterday by the competition downplaying the impact of UCS are very similar to how many people reacted to Lesnar wherein they suggested he was one dimensional and had no core competencies beyond wrestling, discounting his ability to rapidly improve and overwhelm the competition.  

Everyone seems to be focused on the 5100 — the “blade server” — and not the solution suite of which it is a single piece; a piece of a very innovative ecosystem, some Cisco, some not.  Don’t get lost in the “but it’s just a blade server and HP/IBM/Dell can do that” diatribe.  It’s the bigger picture that counts.

The 5100 is simply that — one very important piece of the evolving palette of tools which offer the promise of an integrated solution to a radically complex set of problems.

Is it complete?  Is it perfect?  Do we have all the details? Can they pull it off themselves?  The answer right now is a simple “No.”  But it doesn’t have to be.  It never has.

There’s a lot of work to do, but much like a training camp for MMA, that’s why you bring in the best partners with which to train and improve and ultimately you get to the next level.

All I know is that I’d hate to be in the Octagon with Cisco just like I would with Lesnar.

/Hoff

Cisco Is NOT Getting Into the Server Business…

February 13th, 2009

Yes, yes. We've talked about this before here. Cisco is introducing a blade chassis that includes compute capabilities (heretofore referred to as a 'blade server.')  It also includes networking, storage and virtualization all wrapped up in a tidy bundle.

So while that looks like a blade server (quack!,) walks like a blade server (quack! quack!) that doesn't mean it's going to be positioned, talked about or sold like a blade server (quack! quack! quack!)

What's my point?  What Cisco is building is just another building block of virtualized INFRASTRUCTURE. Necessary infrastructure to ensure control and relevance as their customers' networks morph.

My point is that what Cisco is building is the natural by-product of converged technologies with an approach that deserves attention.  It *is* unified computing.  It's a solution that includes integrated capabilities that otherwise customers would be responsible for piecing together themselves…and that's one of the biggest problems we have with disruptive innovation today: integration.

While the analysts worry about margin erosion and cannibalizing the ecosystem (which is inevitable as a result of both innovation and consolidation,) this is a great move for Cisco, especially when you recognize that if they didn't do this, the internalization of network and storage layers within the virtualization platforms  would otherwise cause them to lose relevance beyond dumb plumbing in virtualized and cloud environments.

Also, let us not forget that one of the beauties of having this "end-to-end" solution from a security perspective is the ability to leverage policy across not only the network, but the compute and storage realms also.  You can whine (and I have) about the quality of the security functionality offered by Cisco, but the coverage you're going to get with centralized policy that has affinity across the datacenter (and beyond,) is going to be hard to beat.

(There, I said it…OMG, I'm becoming a fanboy!)

And as far as competency as a "server" vendor, c'mon. Firstly, you can't swing a dead cat without hitting a commoditized PC architecture that Joe's Crab Shack could market as a solution and besides which, that's what ODM's are for.  I'm sure we'll see just as much "buy and ally" with the build as part of this process.

What's the difference between a blade chassis with intel line processors and integrated networking and a switch these days?  Not much.

So, what Cisco may lose in margin in the "server" sale, they will by far make up with the value people will pay for with converged compute, network, storage, virtualization, management, VN-Link, the Nexus 1000v, security and the integrated one-stop-shopping you'll get.  And if folks want to keep buying their HP's and IBM's, they have that choice, too.

QUACK!

/Hoff
Categories: Cisco, Cloud Computing, Cloud Security

Servers and Switches and VMM’s, Oh My! Cisco’s California “Server Switch”

December 21st, 2008

From the desk of Cisco's "Virtualization Wow!" Campaign: When is a switch with a server not a virtualization platform?  When it's a server with a switch as a virtualization platform, of course! 😉

I can't help but smile at the announcement that Cisco is bringing to market a blade-based chassis which bundles together Intel's Nehalem-based server processors, the Nexus 5000 switch, and VMware's virtualization and management platform.  From InformationWeek:

Cisco's system, code-named California, likely will be introduced in the spring, according to the sources. It will meld Cisco's Nexus 5000 switch that converges storage and data network traffic, blade servers that employ Intel Nehalem processors, and virtualization management with help from VMware.

This news was actually broken back in the beginning of December by virtualization.info but I shamefully missed it.  It looked like a bunch of others did, too.

This totally makes sense as virtualization has driven convergence across the compute, network and storage realms and has highlighted the fact that the provisioning, automation, and governance — up and down the stack — demands a unified approach for management and support.

For me, this is the natural come-about of what I wrote about in July of 2007 in a blog post titled "Cisco & VMware – The Revolution Will Be…Virtualized?":

This [convergence of network, compute and virtualization, Ed.] is interesting for sure and if you look at the way in which the demand for flexibility of software combined with generally-available COTS compute stacks and specific network processing where required, the notion that Cisco might partner with VMWare or a similar vendor such as SWSoft looks compelling.  Of course with functionality like KVM in the Linux kernel, there's no reason they have to buy or ally…

Certainly there are already elements of virtualization within Cisco's routing, switching and security infrastructure, but many might argue that it requires a refresh in order to meet the requirements of their customers.  It seems that their CEO does.

When I last blogged about Cisco's partnership with VMware and (what is now called) the Nexus 1000v/VN-Link, I made reference to the fact that I foresaw the extraction of the VM's from the servers and suggested that we would see VM's running in the Nexus switches themselves.  Cisco representatives ultimately put a stake in the sand and said this would never happen in the comments of that blog post.

Now we know what they meant and it makes even more sense.

So the bundling of the Nexus 5000* (with the initiator,) the upcoming protocol for VM-Flow affinity tagging, the integrated/converged compute and storage capabilities, and Intel's SR-IOV/MR-IOV/IOMMU technologies in the Nehalem, all supported by the advances with vNetworking/VN-Link makes this solution a force to be reckoned with.

Other vendors, especially those rooted in servers and networking such as HP and IBM, are likely to introduce their own solutions, but given the tight coupling of the partnership, investment and technology development between Intel, VMware and Cisco, this combo will be hard to beat. 

Folks will likely suggest that Cisco has no core competency in building, selling or supporting "servers," but given the channel and partnership with VMware — with virtualization abstracting that hardware-centric view in the first place — I'm not sure this really matters.  We'll have to see how accurate I am on this call.

Regardless of the semantic differences of where the CPU execution lives (based on my initial prediction,) all the things I've been talking about that seemed tangentially-related but destined to come together seem to have.  Further, here we see the resurgence (or at least the redefinition) of Big Iron, all over again…

Remember the Four Horsemen slides and the blog post (the network is the computer, is the network, is the…) where I dared you to figure out where "the network" wasn't in the stack?  This is an even more relevant question today. 

It's going to be a very interesting operational and organizational challenge from a security perspective when your server, storage, networking and virtualization platform all come from a single source.

California Dreamin'…

/Hoff

* Not that the 1000v ain't cool, but that little slide that only appeared once at VMworld about the 5000 and the initiator was just too subtly delicious not to be the real juice in the squeeze. The 1000v obviously has its place and will be a fantastic solution, but for folks who are looking for a one-stop shop for their datacenter blueprint designs heavily leveraging virtualization, this makes nothing but sense.

Categories: Cisco, Virtualization, VMware

Virtual Routing – The Anti-Matter of Network SECURITY…

December 16th, 2008

Here's a nod to Rich Miller who pointed over (route the node, not the packet) to a blog entry from Andreas Antonopoulos titled "Virtual Routing – The anti-matter of network routing."

The premise, as brought up by Doug Gourlay from Cisco at the C-Scape conference, was seemingly innocuous but quite cool:

"How about using netflow information to re-balance servers in a data center"

Routing: Controlling the flow of network traffic to an optimal path between two nodes

Virtual-Routing or Anti-Routing: VMotioning nodes (servers) to optimize the flow of traffic on the network.

Using netflow information, identify those nodes (virtual servers) that have the highest traffic "affinity" from a volume perspective (or some other desired metric, like desired latency etc) and move (VMotion, XenMotion) the nodes around to re-balance the network. For example, bring the virtual servers exchanging the most traffic to hosts on the same switch or even to the same host to minimize traffic crossing multiple switches. Create a whole-data-center mapping of traffic flows, solve for least switch hops per flow and re-map all the servers in the data center to optimize network traffic.

My first reaction was, yup, that makes a lot of sense from a network point of view, and given who made the comment, it does make sense. Then I choked on my own tongue as the security weenie in me started in on the throttling process, reminding me that while this is fantastic from an autonomics perspective, it's missing some serious input variables.

Latency of the "network" and VM spin-up aside, the dirty little secret is that what's being described here is a realistic and necessary component of real time (or adaptive) infrastructure.  We need to get ultimately to the point where within context, we have the ability to do this, but I want to remind folks that availability is only one leg of the stool.  We've got the other nasty bits to concern ourselves with, too.

Let's look at this from two perspectives: the network plumber and the security wonk.

From the network plumber's purview, this sounds like an awesome idea; do what is difficult in non-virtualized environments and dynamically adjust and reallocate the "location" of an asset (and thus flows to/from it) in the network based upon traffic patterns and arbitrary metrics.  Basically, optimize the network for the lowest latency and best performance or availability by moving VM's around and re-allocating them across the virtual switch fabric (nee DVS) rather than adjusting how the traffic gets to the static nodes.

It's a role reversal: the nodes become dynamic and the network becomes more static and compartmentalized.  Funny, huh?

The security wonk is unavailable for comment.  He's just suffered a coronary event.  Segmented network architecture based upon business policy, security, compliance and risk tolerances makes it very difficult to perform this level of automation via service governors today, especially when that segmentation reflects asset criticality, role or function as expressed as a function of (gulp) compliance, let's say.

Again, the concept works great in a flat network where asset grouping is, for the most part, irrelevant (hopefully governed by a policy asserting such) and what you're talking about is balancing compute with network and storage, but the moment you introduce security, compliance and risk management as factors into the decision fabric, things get very, very difficult.
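To see why the security wonk keeled over, here's a toy sketch of the rebalancing idea with a segmentation constraint bolted on. Every VM name, flow count and zone below is invented for illustration; a real service governor would obviously be far richer:

```python
# Toy sketch of NetFlow-driven rebalancing with a segmentation constraint bolted on.
# Every VM name, flow count and zone here is invented for illustration.
flows = {                                    # observed bytes between VM pairs
    ("web1", "app1"): 9_000_000,
    ("app1", "db1"):  7_500_000,
    ("web1", "db1"):    400_000,
}
placement = {"web1": "hostA", "app1": "hostB", "db1": "hostC"}
zone      = {"web1": "dmz",   "app1": "internal", "db1": "internal"}   # policy segments

def allowed_to_colocate(vm_a: str, vm_b: str) -> bool:
    # The part the pure "network plumber" view leaves out: security/compliance policy.
    return zone[vm_a] == zone[vm_b]

# Greedy pass: co-locate the chattiest pairs first, but only if policy permits the move.
for (a, b), _volume in sorted(flows.items(), key=lambda kv: kv[1], reverse=True):
    if placement[a] != placement[b] and allowed_to_colocate(a, b):
        placement[b] = placement[a]          # stand-in for a VMotion/XenMotion

print(placement)   # {'web1': 'hostA', 'app1': 'hostB', 'db1': 'hostB'}
```

In the toy, the chattiest pair (web1/app1) never gets co-located because policy says the DMZ and the internal zone don't share hosts; only app1 and db1 end up together. That's the collision between network-optimal and policy-permissible in a nutshell.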

Now, if you're Cisco and VMware, the models for how the security engines apply policy consistently across these fluid virtualized networks are starting to take shape, but what we're missing are the compacts or contracts that consistently define and enforce these policies no matter where the workloads move (and control *if* they can move) and how these requirements factor into the governance layer.

The standardization of governance approaches — even at the network layer — is lacking.  There are lots of discrete tools available, but the level of integration and the input streams and output telemetry are not complete.

If you take a look, as an example, at CIRBA's exceptional transformational analytics and capacity management solution, replete with their multi-dimensional array of business process, technical infrastructure and resource mapping, they have no input for risk assessment data, compliance or "security" as variables.

When you look at the utility brought forward by the dynamic, agile and flexible capabilities of virtualized infrastructure, it's hard not to extrapolate all the fantastic things we could do. 

Unfortunately, the crushing weight of what happens when we introduce security, compliance and risk management to the dance means we have a more sobering discussion about those realities.

Here's an example reduced to the ridiculous: we have an interesting time architecting networks to maximize throughput, reduce latency and maximize resilience in the face of what can happen with convergence issues and flapping when we have a "routing" problem.

Can you imagine what might happen when you start bouncing VM's around the network in response to maximizing efficiency while simultaneously making unavailable the very resources we seek to maximize the availability of based upon disassociated security policy violations?  Fun, eh?

While we're witnessing a phase shift in how we design and model our networks to support more dynamic resources and more templated networks, we can't continue to mention the benefits and simply assume we'll catch up on the magical policy side later.

So for me, Virtual Routing is the anti-matter of network SECURITY, not network routing…or maybe more succinctly, perhaps security doesn't matter at all?

/Hoff

Categories: Cisco, Virtualization, VMware

Arista Networks: Cloud Networking?

October 24th, 2008

Arista Networks is a company stocked with executives whose pedigrees read like a who's-who of the networking metaverse.  The CEO of Arista is none other than Jayshree Ullal, the former senior Vice President at Cisco responsible for their Data Center, Switching and Services business, and Andreas von Bechtolsheim from Sun/Granite/Cisco serves as Chief Development Officer and Chairman.

I set about to understand what business Arista was in and what problems they aim to solve, given their catchy (kitschy?) tagline of Cloud Networking™.

Arista makes 10GE switches utilizing a Linux-based OS they call EOS which provides high-performance networking. 

The EOS features a "…multi-process state sharing architecture that completely separates networking state from the processing itself. This enables fault recovery and incremental software updates on a fine-grain process basis without affecting the state of the system."

I read through the definition/criteria that describes Arista's Cloud Networking value proposition: scalability, low latency, guaranteed delivery, extensible management and self-healing resiliency.

These seem like a reasonable set of assertions but I don't see much of a difference between these requirements and the transformative requirements of internal enterprise networks today, especially with the adoption of virtualization and real time infrastructure. 

Pawing through their Cloud Networking Q&A, I was struck by the fact that the fundamental assumptions being made by Arista around the definition of Cloud Computing are very myopic and really seem to echo the immaturity of the definition of the "cloud" TODAY, based upon the industry bellwethers being offered up as examples of leaders in the "cloud" space.

Let's take a look at a couple of points that make me scratch my head:

Q1:     What is Cloud Computing?    
A1:     Cloud Computing is hosting applications and data in large centralized datacenters and accessing them from anywhere on the web, including wireless and mobile devices. Typically the applications and data is distributed to make them scalable and fault tolerant. This has been pioneered by applications such as Google Apps and Salesforce.com, but by now there are hundreds of services and applications that are available over the net, including platform services such as Amazon Elastic Cloud and Simple Storage Service.

That's a very narrow definition of cloud computing and seems to be rooted in examples of large, centrally-hosted providers today, such as those quoted.  This definition seems to be at odds with other cloud computing providers such as 3tera and others who rely on distributed computing resources that may or may not be centrally located.

Q4:     Is Enterprise Cloud Computing the same as Server Virtualization? 
A4:     They are not. Server Virtualization means running multiple virtualized operating systems on a single physical server using a Hypervisor, such as VMware, HyperV, or KVM/XVM .  Cloud computing is delivering scalable applications that run on a remote pool of servers and are available to users from anywhere. Basically all cloud computing applications today run directly on a physical server without the use of virtualization or Hypervisors. However, virtualization is a great building block for enterprise cloud computing environments that use dynamic resource allocation across a pool of servers.

While I don't disagree that consolidation through server virtualization is not the same thing as cloud computing, the statement that "basically all cloud computing applications today run directly on a physical server without the use of virtualization or Hypervisors" is simply untrue.

Q5:     What is Cloud Networking?  
A5:     Cloud Networking is the networking infrastructure required to support cloud computing, which requires fundamental improvement in network scalability, reliability, and latency beyond what traditional enterprise networks have offered.  In each of these dimension the needs of a cloud computing network are at least an order of magnitude greater than for traditional enterprise networks.

I don't see how that assertion has been formulated or substantiated.

I'm puzzled when I look at Arista's assertion that existing and emerging networking solutions from the likes of Cisco are not capable of providing these capabilities while they simultaneously seem to shrug off the convergence of storage and networking.  Perhaps they simply plan on supporting FCoE over 10GE to deal with this?

Further, ignoring the (initial) tighter coupling of networking with virtualization to become more virtualization-aware, with the likes of what we see from the Cisco/VMware partnership delivering VN-Link and the Nexus 1000v, leaves me shaking my head in bewilderment.

Moreover, with the oft-cited example of Amazon's cloud model as a reference case for Arista, they seem to ignore the fact that EC2 is based upon Xen and is now offering both virtualized Linux and Windows VM support for their app. stack.

It's unclear to me what problem they solve that distinguishes them from entrenched competitors/market leaders in the networking space unless the entire value proposition is really hinged on lower cost.  Further, I couldn't find much information on who funded (besides the angel round from von Bechtolsheim) Arista and I can't help but wonder if this is another Cisco "spin-in" that is actually underwritten by the Jolly Green Networking Giant.

If you've got any useful G2 on Arista (or you're from Arista and want to chat,) please do drop me a line…

/Hoff

Categories: Cisco, Cloud Computing, Virtualization

Performance Of 3rd Party Virtual Switches, Namely the Cisco Nexus 1000v…

October 20th, 2008

One of the things I'm very much looking forward to with the release of the Cisco Nexus 1000v virtual switch for ESX is seeing performance figures for the solution.

In my Four Horsemen presentation I highlight with interest the fact that in the physical world today we rely on dedicated, highly-optimized multi-core COTS or ASIC/FPGA-powered appliances to deliver consistent security performance in the multi-Gb/s range. 

These appliances generally deliver a single function (such as firewall, IPS, etc.) at line rate and are relatively easy to benchmark in terms of discrete performance or even when in-line with one another.

When you take the approach of virtualizing and consolidating complex networking and security functions such as virtual switches and virtual (security) appliances on the same host competing for the same compute, memory and scheduling resources as the virtual machines you're trying to protect, it becomes much more difficult to forecast and predict performance…assuming you can actually get the traffic directed through these virtual bumps in the proper (stateful) order.

Recapping Horsemen #2 (Pestilence,) VMware's recently published performance results (grain of NaCl taken) for ESX 3.5 between two Linux virtual machines homed to the same virtual switch/VLAN/portgroup in a host show throughput peaks of up to 2.5 Gb/s.  Certainly the performance at small packet sizes is significantly lower, but let's pick the 64KB-64KB sampled result shown below for a use case:
[Figure: VMware ESX 3.5 VM-to-VM throughput results]
Given the performance we see above (internal-to-internal), it will be interesting to see how the retooling/extension of the networking functions to accommodate 3rd party vSwitches, DVS, API's, etc. will affect performance and what overhead these functions impose on the overall system.  Specifically, it will be very interesting to see how VMware's vSwitch performance compares to Cisco's Nexus 1000v vSwitch in terms of "apples to apples" performance such as the test above.*
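While we wait for real numbers, anyone can eyeball the VM-to-VM case themselves with something as crude as the sketch below (single TCP stream, 64KB sends to mirror the test case above; this is nowhere near a proper benchmark and isn't tied to any particular vSwitch, it just produces a rough figure to compare between configurations):

```python
# Quick-and-dirty VM-to-VM throughput probe: single TCP stream, 64KB sends to mirror
# the 64KB test case above. Nowhere near a proper benchmark (no warm-up, no small-packet
# sweep, no CPU accounting); it only gives a rough figure to compare configurations.
import argparse, socket, time

CHUNK = 64 * 1024

def run_server(port: int) -> None:
    s = socket.socket()
    s.bind(("", port))
    s.listen(1)
    conn, _ = s.accept()
    while conn.recv(CHUNK):      # sink bytes until the client disconnects
        pass

def run_client(host: str, port: int, seconds: int) -> None:
    s = socket.create_connection((host, port))
    payload = b"\x00" * CHUNK
    sent, start = 0, time.time()
    while time.time() - start < seconds:
        s.sendall(payload)
        sent += len(payload)
    elapsed = time.time() - start
    print(f"{sent * 8 / elapsed / 1e9:.2f} Gb/s over {elapsed:.1f}s")

if __name__ == "__main__":
    p = argparse.ArgumentParser()
    p.add_argument("--server", action="store_true", help="run the receive side")
    p.add_argument("--client", metavar="HOST", help="run the send side against HOST")
    p.add_argument("--port", type=int, default=5201)
    p.add_argument("--time", type=int, default=10)
    args = p.parse_args()
    if args.server:
        run_server(args.port)
    elif args.client:
        run_client(args.client, args.port, args.time)
    else:
        p.error("specify --server or --client HOST")
```

Run it with --server inside one Linux VM, point --client at that VM's address from another VM on the same vSwitch/VLAN/portgroup, and then repeat the exercise on the alternate virtual switch.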

It will be even more interesting to see what happens when vNetwork API's (VMsafe) API calls are made in conjunction with vSwitch interaction, especially since the performance processing will include the tax of any third party fast path drivers and accompanying filters.  I wonder if specific benchmarking test standards will be designed for such comparison?

Remember, both VMware's and Cisco's switching "modules" are software — even if they're running in the VMkernel, so capacity, scale and performance are a function of arbitrated access to hardware via the hypervisor and any hardware-assist present in the underlying CPU.

What about it, Omar?  You have any preliminary figures (comparable to those above) that you can share with us on the 1000v that give us a hint as to performance?

/Hoff

* Further, if we measure performance that benchmarks traffic including physical NICs, it will be interesting to see what happens when we load a machine up with multiple 10Gb/s Ethernet NICs at production loads trafficked by the vSwitches.

Categories: Cisco, Virtualization, VMware