Archive

Archive for May, 2009

The Forthcoming Citrix/Xen/KVM Virtual Networking Stack…What Does This Mean to VMware/Cisco 1000v?

May 8th, 2009 8 comments

I was at Citrix Synergy/Virtualization Congress earlier this week and at the end of the day on Wednesday, Scott Lowe tweeted something interesting when he said:

In my mind, the biggest announcement that no one is talking about is virtual switching for XenServer. #CitrixSynergy

I had missed the announcements since I didn’t get to many of the sessions due to timing, so I sniffed around based on Scott’s hints and looked for some more meat.

I found that Chris Wolf covered the announcement nicely in his blog here but I wanted a little more detail, especially regarding approach, architecture and implementation.

Imagine my surprise when Alessandro Perilli and I sat down for a quick drink only to be joined by Simon Crosby and Ian Pratt.  Sometimes, membership has its privileges 😉

I asked Simon/Ian about the new virtual switch because I was very intrigued, and since I had direct access to the open source, it was good timing.

Now, not to be a spoil-sport, but there are details under FrieNDA that I cannot disclose, so I’ll instead riff off of Chris’ commentary wherein he outlined the need for more integrated and robust virtual networking capabilities within or adjunct to the virtualization platforms:

Cisco had to know that it was only a matter of time before competition for the Nexus 1000V started to emerge, and it appears that a virtual switch that competes with the Nexus 1000V will come right on the heels of the 1000V release. There’s no question that we’ve needed better virtual infrastructure switch management, and an overwhelming number of Burton Group clients are very interested in this technology. Client interest has generally been driven by two factors:

  • Fully managed virtual switches would allow the organization’s networking group to regain control of the network infrastructure. Most network administrators have never been thrilled with having server administrators manage virtual switches.
  • Managed virtual switches provide more granular insight into virtual network traffic and better integration with the organization’s existing network and security management tools

I don’t disagree with any of what Chris said, except that I do think that the word ‘compete’ is an interesting turn of phrase.

Just as the Cisco 1000v is a mostly proprietary (implementation of a) solution bound to VMware’s platform, the new Citrix/Xen/KVM virtual networking capabilities — while open sourced and free — are bound to Xen and KVM-based virtualization platforms, so it’s not really “competitive” because it’s not going to run in VMware environments. It is certainly a clear shot across the bow of VMware to address the 1000v, but there’s a tradeoff here as it comes to integration and functionality as well as the approach to what “networking” means in a virtualized construct.  More on that in a minute.

I’m going to take Chris’ next chunk out of order to describe the features we know about:

I’m expecting Citrix to offer more details of the open source Xen virtual switch in the near future, but in the mean time, here’s what I can tell you:

  • The virtual switch will be open source and initially compatible with both Xen- and KVM-based hypervisors
  • It will provide centralized network management
  • It will support advanced network management features such as Netflow, SPAN, RSPAN, and ERSPAN
  • It will initially be available as a plug-in to XenCenter
  • It will support security features such as ACLs and 802.1x

This all sounds like good stuff.  It brings the capabilities of virtual networking and how it’s managed to “proper” levels.  If you’re wondering how this is going to happen, you *cough* might want to take a look at OpenFlow…being able to enforce policies and do things similar to the 1000v with VMware’s vSphere, DVS and the up-coming VN-Link/VN-tag is the stuff I can’t talk about — even though it’s the most interesting.  Suffice it to say there are some very interesting opportunities here that do not require proprietary networking protocols that may or may not require uplifts or upgrades of routers/switches upstream.  ’nuff said. 😉
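To give a flavor of what centrally-managed virtual switching with NetFlow and SPAN-style features looks like in practice, here’s a purely illustrative sketch. The commands below reflect Open vSwitch-style tooling and are my assumption about the general shape of things, not anything disclosed from the announcement; the bridge and port names are hypothetical:

```shell
# Create a bridge and attach a guest's virtual interface (names are hypothetical)
ovs-vsctl add-br xenbr0
ovs-vsctl add-port xenbr0 vif1.0

# Point NetFlow records at a central collector
ovs-vsctl -- set Bridge xenbr0 netflow=@nf \
    -- --id=@nf create NetFlow targets=\"192.0.2.10:5566\" active-timeout=30

# Mirror all bridge traffic to a monitor port (SPAN-style)
ovs-vsctl -- set Bridge xenbr0 mirrors=@m \
    -- --id=@m create Mirror name=span0 select-all=true output-port=@p \
    -- --id=@p get Port vif1.0
```

These commands require a running switch daemon on the host, so treat this as a config fragment rather than something to paste blindly.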

Now the next section is interesting, but in my opinion is a bit of a reach in certain sections:

For awhile I’ve held the belief that the traditional network access layer was going to move to the virtual infrastructure. A large number of physical network and security appliance vendors believe that too, and are building or currently offering products that can be deployed directly to the virtual infrastructure. So for Cisco, the Nexus 1000V was important because it a) gave its clients functionality they desperately craved, but also b) protected existing revenue streams associated with network access layer devices. Throw in an open source managed virtual switch, and it could be problematic for Cisco’s continued dominance of the network market. Sure, Cisco’s competitors can’t go at Cisco individually, but by collectively rallying around an open source managed virtual switch, they have a chance. In my opinion, it won’t be long before the Xen virtual switch can be run via software on the hypervisor and will run on firmware on SR-IOV-enabled network interfaces or converged network adapters (CNAs).


This is clearly a great move by Citrix. An open source virtual switch will allow a number of hardware OEMs to ship a robust virtual switch on their products, while also giving them the opportunity to add value to both their hardware devices (e.g., network adapters) and software management suites. Furthermore, an open source virtual switch that is shared by a large vendor community will enable organizations to deploy this virtual switch technology while avoiding vendor lock-in.

Firstly, I totally agree that it’s fantastic that this capability is coming to Xen/KVM platforms.  It’s a roadmap item that has been missing and was, quite honestly, going to happen one way or another.

You can expect that Microsoft will also need to respond to this at some point to allow for more integrated networking and security capabilities with Hyper-V.

However, let’s compare apples to apples here.

I think it’s interesting that Chris chose to toss in the “vendor lock-in” argument as it pertains to virtual networking and virtualization for the following reasons:

  • Most enterprise networking environments (from the routing & switching perspective) are usually provided by a single vendor.
  • Most enterprises choose a virtualization platform from a single vendor.

If you take those two things, then for an environment that has VMware and Cisco, that “lock-in” is a deliberate choice, not foisted upon them.

If an enterprise chooses to invest based upon functionality NOT available elsewhere due to a tight partnership between technology companies, it’s sort of goofy to suggest lock-in.  We call this adoption of innovation.  When you’re a competitor who is threatened because you don’t have the capability, you call it lock-in. 😉

This virtual switch announcement does nothing to address “lock-in” for customers who choose to run VMware with a virtual networking stack other than VMware’s or Cisco’s…see what I mean?  It doesn’t matter if the customer has Juniper switches or not in this case…until you can integrate an open source virtual switch into VMware the same way Cisco did with the 1000v (which is not trivial,) then we are where we are.

Of course the 1000v was a strategic decision by Cisco to help re-claim the access layer that was disappearing into the virtualized hosts and make Cisco more relevant in a virtualized environment.  It sets the stage, as I have mentioned, for the longer term advancements of the entire Nexus and NG datacenter switching/routing products including the VN-Link/VN-Tag — with some features being proprietary and requiring Cisco hardware and others not.

I just don’t buy the argument that an open virtual switch “…could be problematic for Cisco’s continued dominance of the network market” when the longtime availability of open source networking products (including routers like Vyatta) hasn’t made much of a dent in the enterprise against Cisco.

Customers want “open enough” and solutions that are proven and time tested.  Even the 1000v is brand new.  We haven’t even finished letting the paint dry there yet!

Now, I will say that if IBM or HP want to stick their thumb in the pie and extend their networking reach into the host by integrating this new technology with their hardware network choices, it offers a good solution — so long as you don’t mind *cough* “lock-in” from the virtualization platform provider’s perspective (since VMware is clearly excluded — see how this is a silly argument?)

The final point about “security inspection,” which compares the ability to redirect flows at a kernel/network layer to a security VA/VM/appliance, covers only one small part of what VMware’s VMsafe does:

Citrix needed an answer to the Nexus 1000V and the advanced security inspection offered by VMsafe, and there’s no doubt they are on the right track with this announcement.

Certainly, it’s the first step toward better visibility and does not require API modification of the security virtual appliances/machines like VMware’s solution in its full-blown implementation does, but this isn’t full-blown VM introspection, either.

More to the point, it’s a way of ensuring a more direct method of gaining better visibility and control over networking in a virtualized environment.  Remember that VMsafe also includes the ability to provide interception and inspection of virtualized memory, disk, and CPU execution as well as networking.  There are, however, Xen community projects to introduce VM introspection, as I have mentioned.

So yes, they’re on the right track indeed and will give people pause when evaluating which virtualization and network vendor to invest in should there be a greenfield capability to do so.  If we’re dealing with environments that already have Cisco and VMware in place, not so much.

/Hoff

Cloud Security Will NOT Supplant Patching…Qualys Has Its Head Up Its SaaS

May 4th, 2009 4 comments

“Cloud Security Will Supplant Patching…”

What a sexy-sounding claim in this Network World piece which is titled with the opposite suggestion from the title of my blog post.  We will still need patching.  I agree, however, that how it’s delivered needs to change.

Before we get to the issues I have, I do want to point out that the article — despite its title —  is focused on the newest release of Qualys’ Laws of Vulnerability 2.0 report (pdf,) which is the latest version of the Half Lives of Vulnerability study that my friend Gerhard Eschelbeck started some years ago.

In the report, the new author, Qualys’ current CTO Wolfgang Kandek, delivers a really disappointing statistic:

In five years, the average time taken by companies to patch vulnerabilities had decreased by only one day, from 60 days to 59 days, at a time when the number of flaws and the speed at which they are being exploited has accelerated from weeks to, in some cases, days. During the same period, the number of IPs scanned on an anonymous basis by the company from its customer base had increased from 3 million to a statistically significant 80 million, with the number of vulnerabilities uncovered rocketing from 3 million to 680 million. Of the latter, 72 million were rated by Qualys as being of ‘critical’ severity.
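Doing the arithmetic on those figures (my back-of-the-envelope math, not Qualys’), the growth and the critical fraction work out like this:

```python
# Figures as quoted from the Laws of Vulnerability 2.0 report
ips_2004, ips_2009 = 3_000_000, 80_000_000
vulns_2004, vulns_2009 = 3_000_000, 680_000_000
critical_2009 = 72_000_000

print(f"IPs scanned grew ~{ips_2009 / ips_2004:.0f}x")               # ~27x
print(f"Vulnerabilities found grew ~{vulns_2009 / vulns_2004:.0f}x") # ~227x
print(f"Critical share: {critical_2009 / vulns_2009:.1%}")           # 10.6%

# The "half-life" framing: if remediation decays exponentially with a
# 30-day half-life (an illustrative number, not one from the report),
# the share of systems still exposed after t days is 0.5 ** (t / 30)
print(f"Still unpatched at day 60: {0.5 ** (60 / 30):.0%}")          # 25%
```

In other words, the vulnerability count grew nearly an order of magnitude faster than the scanned population did, while time-to-patch barely moved.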

That lack of progress is sobering, right? So far I’m intrigued, but then that article goes off the reservation by quoting Wolfgang as saying:

Taken together, the statistics suggested that a new solution would be needed in order to make further improvement with the only likely candidate on the horizon being cloud computing. “We believe that cloud security providers can be held to a higher standard in terms of security,” said Kandek. “Cloud vendors can come in and do a much better job.”  Unlike corporate admins for whom patching was a sometimes complex burden, in a cloud environment, patching applications would be more technically predictable – the small risk of ‘breaking’ an application after patching it would be nearly removed, he said.

Qualys has its head up its SaaS.  I mean that in the most polite of ways… 😉

Let me make a couple of important observations on the heels of those I’ve already made and an excellent one Lori MacVittie made today in her post titled “The Real Meaning Of Cloud Security Revealed”:

  1. I’d like a better definition of the context of “patching applications.”  I don’t know whether Kandek means applications in an enterprise, those hosted by a Cloud provider, or both.
  2. There’s a difference between providing security services via the Cloud versus securing Cloud and its application/data.  The quotes above mix the issues.  A “Cloud Security” provider like Qualys can absolutely provide excellent solutions to many of the problems we have today associated with point product deployments of security functions across the enterprise. Anti-spam and vulnerability management are excellent examples.  What that does not mean is that the applications that run in an enterprise can be delivered and deployed more “securely” thanks to the efforts of the same providers.
  3. To that point, the Cloud is not all SaaS-based.  Not every application is going to be or can be moved to a SaaS.  Patching legacy applications (or hosting them for that matter) can be extremely difficult.  Virtualization certainly comes into play here, but by definition, that’s an IaaS/PaaS opportunity, not a SaaS one.
  4. While SaaS providers who do “own the entire stack” are in a better position through consolidated multi-tenancy to transfer the responsibility of patching “their” infrastructure and application(s) on your behalf, it doesn’t really mean they do it any better on an application-by-application basis.  If a SaaS provider only has 1-2 apps to manage (with lots of customers) versus an enterprise with hundreds (and lots of customers,) the “quality” measurements as they relate to the management of defects (from any perspective) would likely look better were you the competent SaaS vendor mentioned in this article.  You can see my point here.
  5. If you add in PaaS and IaaS as opposed to simply SaaS (as managed by a third party,) then the statement that “…patching applications would be more technically predictable – the small risk of ‘breaking’ an application after patching it would be nearly removed” is false.

It’s really, really important to compare apples to apples here. Qualys is a fantastic company with a visionary leader in Philippe Courtot.  I was an early adopter of his SaaS service.  I was on his Customer Advisory Board.  However, as I pointed out to him at the Jericho event where I was a panelist, delivering a security function via the Cloud is not the same thing as securing it and SaaS is merely one piece of the puzzle.

I’ve written a couple of other posts about this topic as well.

/Hoff

Just What the Hell Is a Hoffacc[h]ino, Anyway?

May 4th, 2009 5 comments

You may have heard of it.

It’s quite possibly the fundamental underpinning of the entire security industry; a veritable life-source for over-worked security folk.  It’s apparently critical to the success of Cloud, as you can see from the picture to the right.

What is this mystical thing?  The Hoffacchino. Or, Hoffaccino, if you prefer.

You may hear it muttered and wonder “Just what the hell is a Hoffac[h]ino, anyway?”

Go to your local Starbucks and order the following:

The Hoffacc[h]ino

Venti Starbucks Doubleshot on ice. 6 shots, 3 Splenda (can sub sugar,) no classic (syrup,) breve (that’s 1/2 and 1/2 for those of you who don’t speak Starbucktalian.)

I cannot take responsibility for substitutions, because the recipe above took dozens of iterations to perfect the balance of acidity, sweetness, caffeine, creaminess and mouthfeel.

It’s like an americano over ice (without water to dilute) with splenda and 1/2 and 1/2, and it’s shaken which makes a big difference for some reason.

Now you know.

Everyone groans when they hear it.  Then they try it.  Then they’re hooked.

Sorry.

/Hoff

Categories: Jackassery

VMware’s Licensing – A “Slap In The Face For Cisco?” Hey Moe!

May 4th, 2009 2 comments

I was just reading a post by Alessandro at virtualization.info in which he was discussing the availability of trial versions of Cisco’s Nexus 1000v virtual switch solution for VMware environments:

Starting May 21, we’ll see if the customers will really consider the Cisco virtual switch a must-have and will gladly pay the premium price to replace the basic VMware virtual switch they used for so many years now.  As usual in virtualization, it really depends on who’s your interlocutor inside the corporate. The guys at the security department may have a slightly different opinion on this product than the virtualization guys.

Clearly the Nexus 1000v is just the first in a series of technology and architectural elements that Cisco is introducing to integrate more tightly into virtualized and Cloud environments.  The realities of adoption of the 1000v come down to who is making the purchasing decisions, how virtualization is being addressed as an enterprise architecture issue,  how the organization is structured and what pain points might be felt from the current limitations associated with VMware’s vSwitch from both a technological and operational perspective.

Oh, it also depends on price, too 😉

Alessandro also alludes to some complaints in pricing strategy regarding how the underlying requirement for the 1000v, the vNetwork Distributed switch, is also a for-pay item.  Without the vNDS, the 1000v no workee:

Some VMware customers are arguing that the current packaging and price may negatively impact the sales of Nexus 1000V, which becomes now much less attractive.

I don’t pretend to understand all the vagaries of the SKU and cost structures of VMware’s new vSphere, but I was intrigued by the following post from the vinternals blog titled “VMware slaps enterprise and Cisco in face, opens door for competitors”:

And finally, vNetwork Distributed Switch. This is where the slap in the face for Cisco is, because the word on the street is that no one even cares about this feature. It is merely seen as an enabler for the Cisco Nexus 1000V. But now, I have to not only pay $600 per socket for the distributed switch, but also pay Cisco for the 1000V!?!?! A large slice of Cisco’s potential market just evaporated. Enterprises have already jumped through the necessary security, audit and operational hoops to allow vSwitches and port groups to be used as standard in the production environment. Putting Cisco into the virtual networking stack is nowhere near a necessity. I wonder what Cisco are going to do now, start rubbishing VMware’s native vSwitches? That will go down well. Oh and yeh, looks like you pretty much have only 1 licensing option for Cisco’s Unified Computing System now. Guess that “20% reduction in capital expense” just flew out the window.

Boy, what a downer! Nobody cares about vNDS?  It’s “…merely seen as an enabler for the Cisco Nexus 1000V?” Evaporation of market? I think those statements are a tad melodramatic, short-sighted and miss the point.

The “necessary security, audit and operational hoops to allow vSwitches and port groups to be used as standard in the production environment” may have been jumped through, but they represent some serious issues at scale and I maintain that these hoops barely satisfy these requirements based on what’s available, not what is needed, especially in the long term.  The issues surrounding compliance, separation of duties, change control/management as well as consistent and stateful policy enforcement are huge problems that are being tolerated today, not solved.

The reality is that vNDS and the 1000v represent serious operational, organizational and technical shifts in the virtualization environment. These are foundational building blocks of a converged datacenter, not point-product cash cows being built to make a quick buck.   The adoption and integration are going to take time, as will vSphere upgrades in general.  Will people pay for them?  If they need more scalable, agile, and secure environments, they will.  Remember the Four Horsemen? vSphere and vNetworking go a long way toward giving enterprises more choice in solving these problems and vNDS/1000v are certainly pieces of this puzzle. The network simply must become more virtualization (and application and information-) aware in order to remain relevant.

However, I don’t disagree in general that “…putting Cisco into the virtual networking stack is nowhere near a necessity” for most enterprises, especially if they have very simple requirements for scale, mobility and security.  In environments that are designing their next evolution of datacenter architecture, the integration between Cisco, VMware, and EMC is critical. Virtualization context, security and policy enforcement are pretty important things.  vNetworking/VNDS/1000v/VN-Link are all enablers.

Lastly, there is also no need for Cisco to “…start rubbishing VMware’s native vSwitches” as the differences are pretty clear.  If customers see value in the solution, they will pay for it. I don’t disagree that the “premium” needs to be assessed and the market will dictate what that will be, but this doom and gloom is premature.

Time will tell if these bets pay off.  I am putting money on the fact that they will.

Don’t think that Cisco and VMware aren’t aware of how critical each is to the other, and there’s no face slapping going on.

/Hoff

See You At Virtualization Congress ’09 / Citrix Synergy In Vegas…

May 3rd, 2009 No comments

I’ll be at the Virtualization Congress ’09 / Citrix Synergy at the MGM Grand in Las Vegas for a couple of days this week.

I am presenting on Cloud Computing Security on May 6th at 11:30am-12:20pm – Mozart’s The Marriage of Figaro: The Complexity and Insecurity of the Cloud – VC105

This ought to be a funny presentation for about the first 5 minutes…you’ll see why 😉

I’m also on a panel with Dave Shackleford (Configuresoft) & Michael Berman (Catbird) moderated by the mastermind of all things virtualization, Alessandro Perilli, on May 6th at 5: Securing the Virtual Data Center (on Earth and on Clouds) – VC302

If you’re around, ping me via DM on Twitter (@beaker) or hit me up via email [choff @ packetfilter.com]

Of course, it’s entirely likely you’ll find Crosby and me chatting it up somewhere 😉

See you there!

/Hoff

Cloud Fiction: Say ‘Cloud’ Again. I Dare You, I Double Dare You…

May 1st, 2009 No comments

Overheard in the backroom of an audit meeting:

Brett: No, no, I just want you to know… I just want you to know how sorry we are that things got so fucked up with us and the Cloud thing. We got into this thing with the best intentions and I never…
Jules: [Jules shoots the man on the couch] I’m sorry, did I break your concentration? I didn’t mean to do that. Please, continue, you were saying something about best intentions. What’s the matter? Oh, you were finished! Well, allow me to retort. What do these Clouds look like?
Brett: Cloud, what?
Jules: What country are you from?
Brett: Cloud what? What? Wh – ?
Jules: “Cloud” ain’t no country I’ve ever heard of. They speak English in Cloud?
Brett: Cloud, what?
Jules: English, motherfucker, do you speak it?
Brett: Yes! Yes!
Jules: Then you know what I’m sayin’!
Brett: Yes!
Jules: Describe what the Cloud looks like!
Brett: Cloud what?
Jules: Say ‘Cloud, what’ again. Say ‘Cloud, what’ again, I dare you, I double dare you motherfucker, say Cloud one more Goddamn time!

Don’t be a square, Daddy-o.

Categories: Cloud Computing, Cloud Security

IBM Creates the “CloudBurst” Physical Appliance To Run a Virtual Appliance In a “Private Cloud!?”

May 1st, 2009 2 comments

Charles Babcock at InformationWeek wrote an article titled “IBM Launches Appliance For Private Cloud Computing” in which he details IBM’s plans to bundle VMware with their WebSphere Application Server on an x86 platform, stir in chargeback/billing capability, call it “Hypervisor Edition” and sell it as an “appliance” that runs in “Private Clouds” for $45,000.

Bundling hardware with a virtualization platform as an appliance isn’t a new concept as everyone including Cisco is doing that.  However, the notion of bundling hardware with a virtualization platform and a virtual appliance and then labeling THAT an appliance “to disperse those applications to the cloud” is an ironic twist of marketing.

Tarting it up and calling it a “Cloud appliance” (the WebSphere CloudBurst Appliance to be specific) that “…plugs into Private Clouds” is humorous:

IBM this week announced its WebSphere CloudBurst Appliance for deploying applications to a private cloud. IBM is the first major vendor to produce a cloud appliance for its customers, a sign of how the concepts of private cloud computing are getting a hearing in the deepest recesses of the enterprise.

Private clouds are scalable compute resources established in the enterprise data center that have been configured by IT to run a virtual machine upon demand. In some cases, business users are empowered to select an application and submit it as a virtualized workload to be run in the cloud.

The WebSphere Appliance stores and secures virtualized images of applications on a piece of IBM xSeries hardware that’s ready to be plugged into a private cloud, Tom Rosamilia, general manager of the applications and integration middleware division, said in an interview. That image will be cast in a VMware ESX Server file format for now; other hypervisor formats are likely to follow, he said. The WebSphere Application Server Hypervisor Edition is also preloaded on the appliance and can run the virtualized image upon demand. The Hypervisor Edition is also new and both it and the appliance will become available by the end of the second quarter.

Hypervisor Edition is a version of the WebSphere Application Server designed to run virtualized applications on IBM’s x86-based server series. The appliance with application server will be priced at $45,000, Rosamilia said.

Having an application ready to run on a hardware appliance represents a number of short cuts for the IT staff, Rosamilia said. Once an application is configured carefully to run with its operating system and middleware, that version of the application is “freeze dried with its best practices into a virtualized image,” or a complete instance of the application with the software on which it depends.

Additional instances of the application can be started up as needed from this freeze-dried image without danger of configuration error, Rosamilia noted. The application is a service, awaiting its call to run in a virtual machine while on the WebSphere appliance. When it is run, the appliance logs the resources use and who used them for chargeback purposes, one of the requirements for successful private cloud operation, according to private cloud proponents.

Rosamilia said enterprises that have applications that are already configured as a service or sets of services will find those applications fitting easily into a cloud infrastructure. An appliance approach makes it simple “to disperse those applications to the cloud” with a lower set of skills than IT currently needs to configure and deploy an application in the data center.

So now, for the first time ever, you can leverage virtualization to run a “freeze-dried” VM application/service on an x86 server appliance in the datacenter Private Cloud! Awesome. You heard it here second.

Is it any wonder people are confused by Private Clouds? Selling software disguised as a virtual machine, coupled to hardware, but abstracted by a hypervisor as a bundled “appliance” ISN’T Cloud Computing. It’s box pushing.

Not that I should be surprised.

<sigh>

/Hoff

Categories: Cloud Computing, Cloud Security