
Archive for April, 2008

Poetic Virtual Security

April 30th, 2008

I was at Starbucks with my four year old.  She was laying down the Dr. Seuss
with aplomb so I was inspired to dig deep and show her how the old man can
ebb and flow.

I swear to $deity that upon hearing this she rolled her eyes and said something like "Dad, you had me at ‘virtualization.’ "  At that point she quickly pointed to my iPhone and asked if I would purchase the latest Hannah Montana song on iTunes…<sigh>

You can see more of my poetic ramblings here (scroll down after the jump.)


When debating the future of secure virtualization
It’s wise to reflect on its very creation

Some say poor code is the reason it’s here
while others use doubt and (un)certainty’s fear

Economically speaking the V-word’s a boon
operationally, though, it showed up too soon

Duties, once separate, are now all a-blended
one moat, lots of castles — the model’s up-ended

Competency and skillsets come into play
Who owns the stack?  Well, that’s hard to say

Can an admin whose mad skillz focus on the OS,
really be trusted to manage this mess?

The virtual sysadmin owns the keys to the kingdom
but it’s hard to fix hosts when you can’t even ping ‘dem!

Operational silos have now become worse
since the virtual admins control all the purse

The network and security wonks try to fudge it
but switches and firewalls just don’t get budget

Security, network, storage, and host
if you push the wrong button it all becomes toast

Our current security solutions don’t cope
but the dealers keep pushing their VirtSec straight dope

I don’t want to come off like a VirtSec despiser,
but to protect our crown jewels it’s all HYPErvisor

Don’t worry my friends, no need to be scared
your whole infrastructure will be VMware’d

…or Xen’d, or sPath’d or perhaps Hyper-V’d
virtualization, I’m told, will solve everyone’s need

Organizational issues are really what matter
there’s no real need to make our vendors much fatter

Focus first on improving your present situation
like assessing your risk and host segmentation

Get a grip on the basics and work up from there
don’t give into the hype, doubt, confusion or fear

That’s it boys and girls till I rhyme once again
Stay happy, stay secure, and now…

EOM

Categories: Poetry, Virtualization

All Your Virtualized PCI Compliance Are Belong To Us…

April 29th, 2008

Another interesting example I use in my VirtSec presentations when discussing the challenges of what I describe as Phase 2 of virtualization — virtualizing critical applications and things like Internet-facing infrastructure in DMZ’s — is the notion of compliance failures based on existing and upcoming revisions to regulatory requirements.

Specifically, I use PCI/DSS to illustrate that in many cases, were one to take a highly-segmented and stratified "defense-in-depth" architecture that is today "PCI compliant" and virtualize it with presently available options, you’d likely find yourself out of compliance, given the current state of technology solutions and the auditing standards used to assess against them.

Then again, you might just pass with flying colors while being totally insecure.

Here’s a fantastic example from Eric Siebert over at the TechTarget Virtualization blog.  Check this out, it’s a doozie!

Having just survived another annual PCI compliance audit, I was again surprised that the strict standards for securing servers that must be followed contain nothing specific concerning virtual hosts and networks. Our auditor focused on guest virtual machines (VMs), ensuring they had up-to-date patches, locked-down security settings and current anti-virus definitions. But ironically, the host server that the virtual machines were running on went completely ignored. If the host server was compromised, it wouldn’t matter how secure the VMs were because they could be easily accessed. Host servers should always be securely locked down to protect the VMs which are running on them.

It seems that much of the IT industry has yet to react to the virtualization trend, having been slow in changing procedures to adjust to some of the unconventional concepts that virtualization introduces. When I told our auditor that the servers were virtual, the only thing he wanted to see was some documentation stating that the remote console sessions to the VMs were secure. It’s probably just a matter of time before specific requirements for virtual servers are introduced. In fact, a recent webinar takes up this issue of whether or not virtualized servers can be considered compliant, addressing section 2.2.1 of the PCI DSS which states, “Implement only one primary function per server”; that is to say, web servers, database servers and DNS should be implemented on separate servers. Virtual servers typically have many functions running on a single physical server, which would make them noncompliant.

So let’s assume that what Eric talks about in section 2.2.1 of PCI/DSS holds true.  That basically means two things: (1) PCI/DSS intimates that virtualization cannot provide the same level of security as non-virtualized infrastructure, and (2) you won’t be able to virtualize infrastructure governed by PCI/DSS if you expect to be compliant.
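To make the section 2.2.1 ambiguity concrete, here’s a toy sketch (hostnames, VM names and functions are all invented) of the check an auditor reading the requirement literally, treating the physical host as the "server," would effectively be running:

```python
# Hypothetical inventory: physical hosts and the primary function of each
# guest VM they carry. All names and data are invented for illustration.
inventory = {
    "esx-host-01": {"web-vm-1": "web", "db-vm-1": "database", "dns-vm-1": "dns"},
    "esx-host-02": {"web-vm-2": "web"},
}

def literal_221_findings(inventory):
    """Flag hosts that fail a literal reading of PCI DSS 2.2.1
    ('implement only one primary function per server') when the
    physical host, not the VM, is treated as the 'server'."""
    findings = []
    for host, vms in inventory.items():
        functions = sorted(set(vms.values()))
        if len(functions) > 1:
            findings.append((host, functions))
    return findings

for host, functions in literal_221_findings(inventory):
    print(f"{host}: multiple primary functions on one physical server: {functions}")
```

Read the VM as the "server" instead, and the same inventory passes; that definitional gap is exactly what the auditing standards have yet to resolve.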

Now, this goes toward the stuff Mogull and I were talking about in terms of assessing risk and using the notion of "zone defense" for asset segmentation in virtualized infrastructure. 

Here’s a snippet from my VirtSec preso on the point:

[Slide: risk-driven segmentation]
Further, as I mentioned in my post titled "Risky Business — The Next Audit Cycle: Bellweather Test for Critical Production Virtualized Infrastructure," this next audit cycle is going to be interesting for many companies…

Yippeee!

/Hoff

Categories: PCI, Virtualization

Clouding the Issue: Separating “Securing Virtualization” from “Virtualizing Security”

April 29th, 2008

My goal in the next couple of posts is to paint some little vignettes highlighting some of the more interesting points I raise in my presentation series "Virtualization: Floor Wax, Dessert Topping and the End Of Information Security As We Know It."

The first issue up for discussion is the need to recognize and separate two concerns which are unfortunately most often intertwined when companies are considering virtualization and its impact to their IT operations and security programs. 

My goal here is not to try and explain away every nuance of this slide or push a conclusion on anybody, but instead plant the seeds and set the premise for discussion’s sake.

[Slide: separating "securing virtualization" from "virtualizing security"]

The slide sums up the point reasonably well, but here’s the associated scaled-down narrative that accompanies it:

Companies need to approach these issues by assessing the risk associated with each separately, and then in juxtaposition.

Treating them as a single concern — as most do — leads to an unfortunate series of chicken-egg debates that usually do not address the things that really matter in the first place.

The point here is that while these concerns are very much related and both important, the order in which they are addressed is often critical.

Specifically, one can take an incredibly secure solution and yet still manage to deploy it in an incredibly insecure manner.  Even if the virtualization platform one chooses is (by some mythical standard) impervious to compromise (*cough*,) given specific configuration constraints, deviations from those constraints can lead to exposure.

If the manner in which virtualization platforms are configured, managed, monitored and secured after you’ve already deployed them is not consistent with the rigor and diligence we’ve applied to our non-virtualized infrastructure (and by observation it is not,) worrying about how secure or insecure your VMM platforms are is a waste of synaptic processes.

My experience has shown that most organizations have simply plowed ahead and accepted or ignored the risk associated with deploying virtualization platforms, accepting on blind faith the claims of virtualization vendors and assuming that the VMM providing the abstraction layer between hardware and software is at least as secure (if not more so) as a non-virtualized installation of the operating system.

This is usually done because the economic benefits of virtualization which are absolutely quantifiable far outweigh the perceived risks associated with virtualization which are not (or are at least difficult to produce.)

I’m unsure how exactly most companies are formally assessing risk against their virtualized environments, since many of them admit to not having a risk assessment methodology in place to do so.

It would seem that most folks simply look at the known vulnerabilities associated with a vendor’s VMM and the current threatscape and make a swag as to the resultant residual risk given any compensating controls that might be in place.  In many cases, however, the "risk" we’re debating is based upon threats and vulnerabilities that may not even exist, so we’re academically making judgment calls based on possibility versus probability.

Yikes.
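For illustration only, here’s the kind of back-of-the-napkin arithmetic hiding behind such a swag; every weight and score below is pulled from thin air, not from any real methodology:

```python
# A toy residual-risk "swag": likelihood and impact on a 0-1 scale,
# discounted by how much exposure the compensating controls are
# believed to remove. All numbers are invented for illustration.
def residual_risk(likelihood, impact, control_effectiveness):
    return likelihood * impact * (1.0 - control_effectiveness)

# A known, patchable VMM vulnerability with decent compensating controls...
known = residual_risk(likelihood=0.4, impact=0.8, control_effectiveness=0.7)

# ...versus a hyped attack nobody has demonstrated: possible, but how probable?
speculative = residual_risk(likelihood=0.05, impact=1.0, control_effectiveness=0.0)

print(f"known vulnerability, controls in place: {known:.3f}")
print(f"speculative threat, no controls:        {speculative:.3f}")
```

The trouble, of course, is that the likelihood term for a threat that may not exist is a guess dressed up as a number, which is the possibility-versus-probability problem in miniature.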

How many times have you entered into debate with *someone* in IT, security, audit or the business arguing about "securing virtualization" after someone’s seen a "Blue Pill" presentation, when in all honesty the company has already deployed hundreds of VM’s and still hasn’t segmented the network or built a risk assessment framework to quantify the business impact?

See what I mean?

/Hoff

Categories: Virtualization

Off Topic: Southwest Airlines Monitoring Twitter For Customer Service/Brand Protection

April 29th, 2008

Planes, Trains and Automobiles

My Southwest Airlines flight from New Hampshire to Philly yesterday sucked the big one.  Flying into Philly is always a gamble but yesterday I went all in and flew SWA for the first time instead of US Scareways.

My flight was supposed to take off at 5:20 PM.  It actually took off at around 7:45 PM.  Due to "weather," those of us in the bovine express class then endured 30 minutes of low-earth orbit in a holding pattern over PHL airspace, awaiting vector approach clearance to land.

Upon landing, we waited almost 30 minutes for our luggage, only to find that they had to go back for a second load since the first sweep wasn’t large enough to claim them all.  The baggage came…and went.  Mine wasn’t amongst them.  It was now 10:30pm.  At this point, one of my VP’s who was also traveling to the same locale wisely left.  Cue the violins.

I filed a claim next to a woman who was going apeshit over her drenched and soiled suitcases.  The migrant baggage helper person said that another flight was due in shortly (about 45 minutes) and I could wait to see if it was on that flight.  I made some remark about pitching a pup tent in baggage claim.  I could hear crickets chirping…

This was all friendly and helpful enough.  There was no reason to get medieval as the poor souls behind the counter can’t even track bags to tell if they landed — or so they say.  Upon filing my claim, I asked that my bag just be returned to NH or delivered to my hotel given the fact that I was staying only one night before returning home.  They would try the latter as the last run to "local" hotels was around midnight.

I was prepared for the old fake-finger-teeth-brushing and washcloth-the-armpits routine to get me through my meeting if need be.  Wow.

It was now almost 11pm.  I still had to collect my rental car and drive 45 minutes to my hotel.

As I was walking out, I saw a strange man return my bag to the carousel. I reckoned that if he took it, loaded it with explosives and put it back, that hopefully I would suffer a quick death.  No such luck.

I picked it up and wrung it out.  It was soaked.

I shrugged it off, got the rental and got to my hotel in one piece.

Corporate accounts payable, Nina speaking. Just a moment…

Of course I twittered the entire experience with my normal (lack of) withholding.  I didn’t address the tweet to @southwestair or anything, but I obviously mentioned them by name.

This morning I was quite amazed to see that someone (not something) from Southwest was monitoring Twitter feeds and responded to me.  I can tell it isn’t a bot because of the responses to the rather colloquial nature of some of my tweets.  Check it out:

[Screenshot: Southwest’s Twitter responses]

The plea to let them try again to earn my loyalty and prove that "Southwest=Awesomeness" came from a statement that "Southwest=Suckage."  😉

It’s pretty interesting that they have people monitoring Twitter for brand/reputation purposes — it comes across as a customer service effort, also.   I know it’s not as profound as some of the remarkable Twitter stories of late, but it was cool.

Cool and frightening at the same time.  So, thanks for the attention, SWA.  We’ll see how you do on my return flight today.

Anyone else have an experience such as this?

/Hoff

Update: The flight back was great.  It arrived early, to boot.  I have to say that my Southwest Twitter experience wasn’t just a single fire-and-forget incident, as "they" twittered back again to check up on me:

[Screenshot: Southwest’s follow-up tweet]

😉

Categories: Twitter

On Schneier, the RSA Conference’s Swan Song and the Rise Of the Non-Con…

April 26th, 2008

Bruce Schneier has artfully committed electrons to decay in an article he recently "penned" for Wired, in which he once again trumpets the impending death of Information Security as we know it and illustrates the changing why’s, how’s, when’s and who’s that define the security industry singularity that is sure to occur.

While I thoroughly enjoyed Bruce’s opinion on the matter and will address it in a follow-on post dedicated to the meme, the real gem that sparkled for me in this article was his use of the behemoth RSA Security conference as a bellwether for the security industry:


Last week was the RSA Conference, easily the largest information security conference in the world. More than 17,000 people descended on San Francisco’s Moscone Center to hear some of the more than 250 talks, attend I-didn’t-try-to-count parties, and try to evade over 350 exhibitors vying to sell them stuff.

Talk to the exhibitors, though, and the most common complaint is that the attendees aren’t buying.

It’s not the quality of the wares. The show floor is filled with new security products, new technologies, and new ideas. Many of these are products that will make the attendees’ companies more secure in all sorts of different ways. The problem is that most of the people attending the RSA Conference can’t understand what the products do or why they should buy them. So they don’t.

The RSA Conference won’t die, of course. Security is too important for that. There will still be new technologies, new products and new startups. But it will become inward-facing, slowly turning into an industry conference. It’ll be security companies selling to the companies who sell to corporate and home users — and will no longer be a 17,000-person user conference.

What attracted me to that last paragraph was a rather profound point, draped in subtlety, that I think Bruce missed.  It was reinforced by my recent experiences in Boston and Munich, which framed RSA, a conference that, quite honestly, I could hardly care less about ever attending again…

Specifically, I recently attended and spoke at both SourceBoston (in Boston) and Troopers08 (in Munich, Germany.)  These are boutique security conferences with attendee counts in the 200-person range.  They are intimate gatherings of a blended and balanced selection of security practitioners, academics, technologists, researchers and end-users who get together and communicate.

These events offer a glimpse into the future of what security conferences can and should provide: collaborative, open, educational, enlightening and fun events without the pretentiousness or edge of confabs trying too hard to be either too "professional" or too "alternative" in their appearance and nature.

Further, these events lack the marketing circle-jerk and vendor-centric detritus that Bruce alluded to.  What you get is a fantastic balance of high-level as well as in-the-weeds presentations on all manner of things security: politics, culture, technology, futurism, hacking, etc.  It’s an amazing balance with a refreshing change of pace.  People go to all the presentations because they know they are going to learn something.

These sorts of events have really been springing to life for years, yet we’ve seen them morph and become abstracted from the reason we attended them in the first place.  Some of them like BlackHat, DefCon, and ShmooCon have all "grown up" and lost that intimacy, becoming just another excuse to get together and socialize in one place with people you haven’t seen in a while. 

Some like HITB, CanSecWest, and ToorCon might appear too gritty or technical to attract a balanced crowd, and the expectation for presenters is the one-upmanship associated with an overly-sensationalized exploit or the next move in the fanboy-fanned flaming game of vendor 0day whack-a-mole.  Others are simply shows that are small or regional in nature that folks just don’t know about but remain spectacular in their lineups.

My challenge to you is to discover these shows — these "Non-Cons" as I call them.  They offer fantastic networking, collaborative and learning opportunities and you’ll be absolutely blown away with some of the big names presenting at them.

Don’t turn up your nose simply because of locale and use the excuse that you’re saving your budget for RSA or InfoSec.  When is the last time you actually *learned* anything at those shows?  It costs thousands to attend RSA.  Many of the Non-Cons cost a measly couple of hundred dollars.

Take a close look at where your favorite InfoSec folks are presenting.  If five of them happen to be converging on, say, Ohio <wink, wink> for 2-3 days at a security conference you’ve never heard of, it’s probably not because of the beaches…

/Hoff

Categories: Security Conferences

Travel: Off to Munich for Troopers08

April 21st, 2008


I’m off to Munich for the rest of this week to keynote day two of Troopers08, hosted by my friend Enno Rey and the team at ERNW.

My talk is titled "Virtualization: Floor Wax, Dessert Topping and the End of Information Security As We Know It."

I’m sure I’m going to get hassled because I didn’t finish my VRRP fuzzing parameters for SPIKE before the weenies @ ERNW did (OK, I have an excuse — I didn’t even start) but it’s bound to be a great conference and a good time.

I got this email from Enno yesterday.  He’s German and thus obviously quite serious about this:

For those interested:

a) there will be a 10K (kilometers) run in the morning of 04/23 and 04/24, at 7 AM each. no competing here, just get some fresh air (planned time: 60 minutes). We’ve not yet figured out the exact route, given it’s airport area there shouldn’t be too many hills or stuff.
If you want to run on 04/25 or have a "double round" one of the days, pls drop me personal note.

b) the hotel seems to have a decent gym. We asked them to have it open 24h during the con and they confirmed this.

The friggin’ beer capital of the Universe and he wants us to run 10Km in the morning.

Yeah, right.

I’m looking for a local Brazilian Jiu Jitsu academy, however…

Catch you all on the flipside…so long as the German customs officers don’t realize that MacOS X comes with NMAP which we all *know* is a hacking tool…<gulp!>

/Hoff

Categories: Travel

Ghost In the Machine: IBM’s New “Phantom” VirtSec Solution (?)

April 21st, 2008

I had another post-RSA press release show up in my mailbox today from IBM, again pitching their "…breakthrough research initiative from IBM X-Force and IBM Research, code-named "Phantom", which offers businesses a new means of securing virtualized server environments."

Besides the rumblings at RSA, I haven’t been briefed on this yet, but let’s explore what we have thus far, keeping in mind that this is described as an "initiative" and not a "product:"

At Phantom’s core is industry-leading network and host intrusion protection used to guard the virtual environment and the machines from the inside out. The new technology sits in a secure, isolated partition and integrates with the hypervisor – the layer of management software that coordinates calls between operating systems and computer hardware.

In this description, Phantom is confusingly framed more as a product/solution than as an initiative, and it gets a little fuzzy as to how this qualifies as integration with the hypervisor beyond just sitting on top of it.  Perhaps that’s one of the secrets-in-stealth that defines the breakthroughs mentioned above, or perhaps, sadly, yet another unfortunate translation from Klingon?

If one were to take a quick first pass, it sounds like they’ve taken their software-based IBM/ISS IPS solution and turned it into a virtual appliance (that would be the "secure, isolated partition") that runs alongside the VM’s in a physical host.  This is basically what every other vendor on the planet is currently doing.  Integration with SiteProtector and interaction with the hardware-based physical appliances would make sense, too.

Playing futurist, in terms of the more broadly-reaching "initiative" angle, it might leverage some of the research IBM has already done on their secure hypervisor (sHype) or more appropriately rHype (which I believe is Xen-based) as well as the many other virtualization efforts they’ve hatched to date.

If IBM were going to commercialize this into productized offerings, besides supporting their own hypervisor(s) and virtualization platforms/operating systems first, I’d guess they would aim for supporting VMware first since that’s where the dollars are.  Or not.

IBM’s Phantom initiative aims to create virtualization security technology to efficiently monitor and disrupt malicious communications between virtual machines without being compromised. 

In addition, full visibility of virtual hardware resources would allow Phantom to monitor the execution state of virtual machines, protecting them against both known and unknown threats before they occur.

Roger.  Protect intra-vm traffic.  And because they can protect "…against both known and unknown threats before they occur" it’s psychic to boot! 😉

It is also designed to increase the security posture of the hypervisor – a critical point of vulnerability; because once an attacker gains control of the hypervisor, they gain control of all of machines running on the virtualized platform. For the first time, the hypervisor, the gateway to the virtualized world and all that lays above it, can be locked down.

I’m interested in this part because, as most vendor pitches go, when one digs down deeper, what this really means is that *today*, if one can control traffic between the VM’s which transit the vSwitch, one can potentially prevent the compromise of a VM from becoming a launchpad for an attack on the hypervisor.

What’s confusing here is that despite the fact that most hypervisor platform providers consciously limit what is exposed (even in an abstracted state) by the hypervisor, vendors continue to insist that they are "integrated" with and will "lock down" the hypervisor itself.  We saw that in the dissection of the Catbird "HyperVisorShield" announcement I wrote about earlier.

Protecting the hypervisor today is really a by-product of protecting the VM’s.
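As a toy model of that by-product relationship (the VM names and flow policy below are hypothetical, not anything IBM has disclosed), an inline appliance on the vSwitch path boils down to a lookup like this: a compromised guest gets contained, and the hypervisor benefits only indirectly.

```python
# Hypothetical allowlist of intra-host, VM-to-VM flows that an inline
# security VA sitting on the vSwitch path would permit.
ALLOWED_FLOWS = {
    ("web-vm", "app-vm"),
    ("app-vm", "db-vm"),
}

def filter_flow(src_vm, dst_vm):
    """Police a flow between two VMs on the same physical host."""
    return "allow" if (src_vm, dst_vm) in ALLOWED_FLOWS else "drop"

print(filter_flow("web-vm", "app-vm"))  # expected tier-to-tier traffic
print(filter_flow("web-vm", "db-vm"))   # a web VM reaching straight for the DB
```

Nothing in that lookup touches the hypervisor itself; it only constrains what a popped VM can reach, which is the point being made above.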

Here’s another extract from additional coverage of Phantom:

Phantom is a joint effort between IBM’s X-Force threat analysis team and the company’s research division. It aims to lock down the hypervisor software that IBM systems use to manage virtual machines. "What we’re doing through Phantom is we’re implementing an IPS (intrusion prevention system)– an IPS that sits at the hypervisor layer," said Kris Lovejoy, director of strategy for IBM corporate security.

The researchers are also building tools that can lock down the hypervisor itself, Lovejoy added. "The hypervisor layer was built for optimum performance, not necessarily effective security," she said. "Our customers are just looking for assurance that their virtualized infrastructure is not going to be the single point of failure."

Aha!  See, vendors in their press releases continue to reference THE hypervisor in a singular, monolithic manner that seems to imply that their solutions will protect and lock down any and all hypervisors.  I know this point may not be lost on all people, but it’s become very difficult to figure out what many of these VirtSec products actually do and which platforms they support.

I think this last paragraph really intimates that in this case we’re talking about IBM’s hypervisor(s) — perhaps based upon sHype/rHype or other IBM virtualization platforms — at least at first.

I’m not knocking IBM or doubting their efforts as they’ve been at the virtualization game a long time and with the acquisition of ISS, they got a bunch of good talent and a decent product base.  I *am* just weary of claims that seem to apply research and "initiatives" in such broad strokes that it becomes difficult to sort the wheat from the chaff.

Looking forward to learning more about Phantom.

/Hoff

Categories: Virtualization

Truly the Biggest Thing At RSA…

April 18th, 2008

What was the biggest thing at RSA this year?

Information Centricity?  Been there, done that.
Security Innovation?  SO last Tuesday.
DLP?  Nope.
NAC? Nah-Uh.
GRC? Not so much.

The biggest thing at RSA this year was, of course, my conference badge:

[Photo: my RSA conference badge]

Categories: Jackassery

BeanSec! Tonight. Wednesday, April 16th – 6PM to ?

April 16th, 2008

Yo!  BeanSec! is once again upon us.  Wednesday, April 16th, 2008.

BeanSec! is an informal meetup of information security professionals, researchers and academics in the Greater Boston area that meets the third Wednesday of each month.

I say again, BeanSec! is hosted the third Wednesday of every month.  Add it to your calendar.

Come get your grub on.  Lots of good people show up.  Really.

Unlike other meetings, you will not be expected to pay dues, “join up”, present a zero-day exploit, or defend your dissertation to attend. Map to the Enormous Room in Cambridge.

Enormous Room: 567 Mass Ave, Cambridge 02139.  Look for the Elephant on the left door next to the Central Kitchen entrance.  Come upstairs. We sit on the left hand side…

Don’t worry about being "late" because most people just show up when they can.  6:30 is a good time to aim for.  We’ll try and save you a seat.  There is a parking garage across the street and 1 block down, or you can try the streets (or take the T).

In case you’re wondering, we’re getting about 30-40 people on average per BeanSec!  Weld, 0Day and I have been at this for just over a year and, without actually *doing* anything, it’s turned out swell.

We’ve had some really interesting people of note attend lately (I’m not going to tell you who…you’ll just have to come and find out.)  At around 9:00pm or so, the DJ shows up…as do the rather nice looking people from the Cambridge area, so if that’s your scene, you can geek out first and then get your thang on.

The food selection is basically high-end finger-food appetizers and the drinks are really good; an attentive staff and eclectic clientèle make the joint fun for people watching.  I’ll generally annoy you into participating somehow, even if it’s just fetching napkins. 😉

See you there.

/Hoff

Categories: BeanSec!

The Four Horsemen Of the Virtualization Security Apocalypse

April 15th, 2008

[For those of you directed here for my Blackhat 2008 presentation of the same name, the slides will be posted with a narrative shortly.  This was the post that was the impetus for my preso.

If you’d like to see a "mini-me" video version of the presentation given right after my talk, check it out from Dark Reading here.  You’ll also notice this is quite different than Ellen Messmer’s version of what I presented…]

I’ve written and re-written this post about 10 times, trying to make it simpler and more concise.  It stems from my initial post on the matter of performance implications in virtualized security environments here.

After a convo with Ptacek today discussing the same for a related article he’s writing, I think I’ve been able to boil it down to somewhere near its essence. It’s still complex and unwieldy, but it’s the best I can do for now.

Short of the notions I’ve discussed previously regarding instantiating the vSwitches into hardware and loading physical servers with accelerators and offloaders for security functions, there aren’t a lot of people talking about this impending set of challenges or the solutions in the short or long term.

This should be cause for alarm.

These issues are nasty.  Combined with the organizational issues of who actually owns and manages "security" in the virtualized context, this stuff makes me want to curl up in a fetal position.

So here they are, the nasty little surprises awaiting us all carried forth by the four horsemen of the virtualization security apocalypse named conquest, war, famine and death:

  • Virtualized Security Screws the Capacity Planning Pooch (Conquest)
  • The Network Is the Compu…oh, crap.  Never mind, it’s broken. (Death)
  • Episode 7: Revenge of the UTM.  Behold the vUTM! (War)
  • Spinning VM straw into budgetary gold (Famine)

In order to ameliorate these shortcomings, we’re going to have to see some seriously different approaches and rapid acceleration of solution roadmaps.  There are some startups as well as established players all jockeying to solve one or more of these problems, but they’re not going to tell you about them because, quite frankly, they are difficult to describe and may cause TPOW syndrome (Temporary Purchase Order Withholding.)

So here they are in all their splendor.  The gifts of the four horsemen, just in time to pour salt in your virtualized wounds:

  1. Virtualized Security Screws the Capacity Planning Pooch (Conquest)
    If we look at today’s most common implementation methodologies for deploying security in a virtualized environment, we end up recognizing that it comes down to two fundamental approaches: (a) install software/agents from the usual suspects in the VM’s or (b) deploy security functions as virtual appliances (VA) within the physical host.

    If we look at measuring performance overhead due to option (a) I wager we’d all have a reasonably easy time of measuring and calculating what the performance hit would be.  Further, monitoring is accomplished with the tools we have today. This is a per-VM impact that can be modeled across physical hosts and in response to overall system load. No real problem here.

    Now, if we look at option (b) which is the choice of almost all emerging solutions in the VirtSec space, the first horseman’s steed just took a crap on Main street. 

    For example, let’s say that we have one (or more — see #2 and #3 below) monolithic security VA whose job it is to secure all traffic to and from external sources to any VM in the physical host as well as all intra-VM traffic.

    You see the problem, right?  Setting aside the notion of how much memory/CPU to allocate to the VA so as not to drop packets due to overload, capacity planning completely depends upon the traffic levels, the number of VM’s on the system (which can be dynamic,) the way the virtual and physical networks are configured (also dynamic) as well as the efficiency of the software/OS combo in the VA.  Lest we forget access to system buses, hardware and the tax that comes with virtualizing these solutions.

    The very real chance exists of either overrunning the VA and dropping packets which will lead to retransmissions, etc. or simply losing valuable landscape to add VM’s because the "extra" CPU/memory you thought you had is now allocated to the security VA…

    Measuring security VA performance is a crapshoot, too.  Sure there’s VMmark, but methinks that we already have enough crap floating about in how vendors measure performance of physical appliances whose resources they control.  Can you imagine the first marketing campaigns that are sure to be launched for the first 10Gb/s virtual appliance…Oh my.
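The difference between options (a) and (b) above can be put into a back-of-the-envelope model.  This is a rough sketch with entirely made-up numbers (no vendor benchmarks implied): per-VM agents scale linearly and predictably, while a shared monolithic VA is a chokepoint whose headroom depends on aggregate traffic.

```python
# Back-of-the-envelope capacity model for the two approaches.
# All numbers are hypothetical illustrations, not measurements.

def agent_model(num_vms, per_vm_overhead_pct=5):
    """Option (a): in-guest software/agents.  Overhead scales linearly
    and predictably with the number of VMs on the physical host."""
    return num_vms * per_vm_overhead_pct

def shared_va_model(num_vms, traffic_per_vm_mbps, va_capacity_mbps=900):
    """Option (b): one monolithic security VA inspecting all external
    and intra-VM traffic.  The VA is a shared chokepoint: once the
    offered load exceeds its capacity, packets get queued or dropped."""
    offered_load = num_vms * traffic_per_vm_mbps
    utilization = offered_load / va_capacity_mbps
    dropped = max(0.0, offered_load - va_capacity_mbps)
    return utilization, dropped

# Ten VMs pushing 100 Mb/s each overruns a 900 Mb/s VA:
util, dropped = shared_va_model(10, 100)
print(f"VA utilization: {util:.0%}, excess load: {dropped:.0f} Mb/s")
```

Add a VM (or VMotion one in), and option (a) costs you one more predictable slice of overhead; option (b) silently eats into a shared budget nobody is measuring.
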

  2. The Network Is the Compu…oh, crap.  Never mind, it’s broken. (Death)
    Virtualization offers some fantastic benefits, not the least of which is the capability to provide for resilience and on-demand scalability/high-availability.  If a physical server is overloaded, one might automagically allow the VMotion of critical VM’s to other lighter-loaded physical hosts.  If a process/application/VM fails on one host, spin it back up somewhere else.  Great stuff.

    Except, we’ve got a real problem when we try to apply this dynamic portability to security applications running in VA’s.  Security applications are incredibly topology sensitive. For the most part, they expect the network configuration to remain static – interfaces, VLAN’s, MAC addies, routes, IP addresses of protected nodes, etc.  If you go moving security VA’s around, they may no longer be inline with the assets they protect!

    Further, the policies and ACL’s that govern the disposition of traffic don’t grok the move, either.

    But wait, there’s more!

    Replicating certain operating conditions within a virtualized environment is going to be tricky when the VirtServer admins have no idea what VRRP and the multicast MAC addies (that the security applications depend upon) are, or how they might affect load balancing firewall cluster members within the same physical host.  Multi-what?

    An example might be that you want to implement high availability load balancing for a "cluster" of firewall VA’s within a single physical host so that you don’t have to VMotion an entire server’s worth of VM’s over to another if the security VA which is inline fails (we can address HA/LB across two physical hosts later.)  It’s going to be really interesting trying to replicate in a virtualized construct what we’ve spent years gluing together in the physical world: vSwitch behavior, port groups, NIC teaming, etc.

    Lastly, I’m skipping ahead a little and treading on issue #3 below, but if one were to deploy multiple security VA’s within a single physical host to provide the desired functionality across protected VM’s, how does one ensure that traffic flow is appropriately delivered to the correct VA’s at the correct time with the correct disposition reflected up and downstream?

    There are some really difficult challenges to overcome when
    attempting to "combine" security functions in-line with one another.
    In fact, this concept is what gave birth to UTM — combining multiple
    security functions into a single platform to improve control
    effectiveness, simplify management and reduce cost.

    Most UTM vendors on the market either write their own security
    stacks and integrate them, take open source code and/or OEM additional
    technologies to present what is marketed as a single "engine" against
    which traffic is cracked once and inspected based upon intelligent
    classification.  Let’s just take that at face value…and with a
    healthy grain of salt.

    My last company, Crossbeam, took a different approach.  Crossbeam
    provides a network and (security) application virtualization platform (the
    X-Series security
    service switch) and allows an operator to combine a number of discrete
    third party ISV security solutions in software in specific serialized
    and parallelized processing order based upon policy. You pick the
    firewall, IPS, AV, AS, URL filter, WAF, etc. of your choosing and
    virtualize those combinations of functions across your network as a
    service layer.

    This is the same model I am trying to illustrate in the case of server virtualization with security VA’s except that the Crossbeam example utilizes an external proprietary chassis solution.

    Here’s an overly-simplified illustration of four security
    applications as deployed within an X-series: an IPS, IDS, firewall, web
    application firewall (WAF).  These applications are instantiated once
    in the system and virtualized across the network segments connected to
    them governed by policy:

    Trafficflow_3
    Note that for the purposes of simplicity I’m showing a symmetrical
    flow path from ingress to egress.

    Technically, egress flows could
    actually take a different path through other software stacks which
    makes the notion of "state" and how you define it (via the "network" or
    the "application") pretty darn important.  I’m also leaving out the
    complexity of VLAN configurations in this example.

    What’s interesting here is that each of these applications can often
    be configured from a network perspective as a layer 2 or layer 3
    "device," so how the networking is configured and expects to be
    presented with traffic, act on it, and potentially pass it on is really
    important.  Ensuring that flows and state are appropriately directed
    to the correct security function and presented in the correct
    "format" with low latency and high throughput is much easier said
    than done.

    Can you imagine trying to do this in a virtualized instance on a server across multiple security VA’s?  There’s really no control plane to effect this, no telemetry, and the vSwitch isn’t really designed as a fabric to provide much more than layer 2 connectivity.

    Fun for the entire family!  Kid tested, virtualization approved!
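To make the serialization problem above concrete, here’s a toy sketch of a policy-driven chain of discrete security functions.  The inspectors, chains and policies are all illustrative stand-ins (not any vendor’s API); the point is that order matters and that the egress path may legitimately differ from ingress — which is exactly why "state" gets hard.

```python
# Toy model of serialized security functions governed by policy.
# Inspector logic and chain contents are hypothetical illustrations.

INGRESS_CHAIN = ["firewall", "ips", "waf"]   # serialized inspection order
EGRESS_CHAIN = ["firewall", "ids"]           # asymmetric return path

# Stand-ins for real security stacks; each returns "pass" or "drop".
INSPECTORS = {
    "firewall": lambda p: "drop" if p["port"] not in (80, 443) else "pass",
    "ips":      lambda p: "drop" if "exploit" in p["payload"] else "pass",
    "waf":      lambda p: "drop" if "<script>" in p["payload"] else "pass",
    "ids":      lambda p: "pass",  # passive: observes, never drops
}

def traverse(chain, packet):
    """Pass a packet through each security function in order; any
    inline function in the chain may drop it."""
    for function in chain:
        if INSPECTORS[function](packet) == "drop":
            return f"dropped by {function}"
    return "forwarded"

print(traverse(INGRESS_CHAIN, {"port": 80, "payload": "GET /"}))
print(traverse(INGRESS_CHAIN, {"port": 80, "payload": "<script>"}))
```

Notice that a "payload" the IPS would catch on ingress sails through the egress chain untouched — the asymmetry is where state tracking breaks.  Doing even this much inside a vSwitch, with no control plane to steer flows between VA’s, is the problem.
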

  3. Episode 7: Revenge of the UTM.  Behold the vUTM! (War)
    "The farce is strong with this one…"  OK, so this is a dandy.  The models today that talk about VA installations position the deployment of a single security vendor’s VA solution.  What that means is that, combined with the issues raised in points (1) and (2) above, we’re expected to abandon the best-of-breed approach: instead of deploying a CHKP firewall VA, an ISS IDP VA, a McAfee Anti-malware VA, etc., we’ll just deploy a single vendor’s monolithic security stack to service the entire physical host?

    Does this model sound familiar?  See #2 above.  Well, either you’re going to do that and realize that your security ultimately sucks harder than a Dyson, or you’re going to do the nasty and start to deploy multiple vendors’ security VA’s in the same physical host.

    See the problem there?  Horseman #3 reminds you of the points already raised above.  You’re going to be adding security VA’s which takes away the capacity to add valuable VM’s dynamically:

    Virtsechost
    …and then you’re going to have to deal with the issues in #2 above.  Or, you’ll just settle for "good enough" and deploy what amounts to a single UTM VA and be done with it.  Until it runs out of steam or you get your butt handed to you on a plate when you’re pwned.

    You could plumb in a Crossbeam or even less complex single-vendor appliance solutions, but then you’re going to find yourself playing ping-pong with traffic in and out of each VM, through the physical NICs, and in/out of the appliances.  Latency is going to kill you.  Saturation of the pipe is going to kill you.  Your virtual server admin is going to kill you, especially since he won’t have the foggiest idea of what the hell you’re going on about.

    Further, if you’re thinking VMsafe’s going to save you trouble in either #2 or #3, it ain’t.  VMsafe sets its hooks on a per-VM basis and then redirects to a VA/VM within each physical host.  Its settings in the first release are quite coarse, and you can’t make API calls outside of the physical hosts, so the "redirects" to external appliances won’t work.  Even if they did, there’s no control plane to deal with the "serialization" I demonstrate above.

  4. Spinning VM straw into budgetary gold (Famine)
    By this point you probably recognize that you’re going to be deploying the same old security software/agents to each VM and then adding at least one VA to each physical host, and probably more.  Also, you’re likely not going to do away with the hardware-based versions of these appliances on the physical networks.

    That also means you’re going to be adding additional monitoring points on the network and who is going to do that?  The network team?  The security team?  The, gulp, virtual server admin team?

    What does this mean?  With all this consolidation, you’re going to end up spending MORE on security in a virtualized world instead of less.
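The arithmetic behind that claim is simple enough to sketch.  Every figure below is a hypothetical placeholder — plug in your own licensing and hardware numbers — but the structure holds: the agents and physical appliances don’t go away, so the security VA’s are pure additional spend.

```python
# Rough cost arithmetic behind the "spend MORE, not less" claim.
# All figures are made-up placeholders for illustration only.

hosts = 10           # physical hosts after consolidation
vms_per_host = 20
vas_per_host = 2     # security VAs added to each host
agent_cost = 50      # per-VM / per-server security agent license
va_cost = 5000       # per security virtual appliance
appliance_cost = 20000  # physical security appliances you keep anyway
appliances = 4

workloads = hosts * vms_per_host  # same workloads, before and after

# Before: an agent on each server, plus the physical appliances.
before = workloads * agent_cost + appliances * appliance_cost

# After: the agents and appliances stay -- the VAs are additive.
after = before + hosts * vas_per_host * va_cost

print(f"security spend before virtualization: ${before:,}")
print(f"security spend after virtualization:  ${after:,}")
```

Consolidation shrank the server count, but the security bill only grew — and that’s before counting the extra monitoring points and the people to watch them.
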

There’s lots of effort going on to force-fit entire existing markets of solutions in order to squeeze a little more life out of investments made in products, but expect some serious pain in the short term: you’re going to be dealing with all of this for the next couple of years, for sure.

I hope this has opened your eyes to some of the challenges we’re going to face moving forward.

Finally, let us solemnly remember that:

Killkitty
