Archive

Archive for the ‘Technology Review’ Category

All Your COTS Multi-Core CPU’s With Non-Optimized Security Software Are Belong To Us…

September 24th, 2007 3 comments

{No, I didn’t forget to spell-check the title.  Thanks for the 20+ emails on the topic, though.  For those of you not familiar with the etymology of "..all your base are belong to us," please see here…}

I’ve been in my fair share of "discussions" regarding the perceived value in the security realm of proprietary custom hardware versus that of COTS (Commercial Off The Shelf) multi-core processor-enabled platforms.

Most of these debates center on what can be described as a philosophical design divergence: the suggestion that, given the evolution and availability of multi-core Intel/AMD COTS processors, the need for proprietary hardware is moot.

Advocates of the COTS approach are usually pure software-only vendors who have no hardware acceleration capabilities of their own.  They often reminisce about the past industry failures of big-dollar, fixed-function ASIC-based products (while seeming to ignore re-purposable FPGA’s) to add weight to the theory that Moore’s Law is all one needs to make security software scream.

What’s really interesting and timely about this discussion is the notion of how commoditized the OEM/ODM appliance market has become when compared to the price/performance ratio offered by COTS hardware platforms such as Dell, HP, etc. 

Combine that with the notion of how encapsulated "virtual appliances" provided by the virtualization enablement strategies of folks like VMware will be the next great equalizer in the security industry, and it gets much more interesting…


There’s a Moore’s Law for hardware, but there’s also one for software…didja know?


Sadly, one of the most overlooked points in the multi-core argument is that exponentially faster hardware does not necessarily translate to exponentially improved software performance.  You may get a performance bump by forking multiple single-threaded instances of the software, or even by pinning them with CPU/core affinity (spawning networking off to one processor/core and, for example, firewalling to another), but that’s just masking the issue.
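To make the "masking" point concrete, here’s a minimal, hypothetical sketch of that approach on Linux (my illustration, not any vendor’s code): fork one single-threaded worker per core and pin each with sched_setaffinity().

```c
/* Sketch: fork one single-threaded worker per core and pin each to its own
 * CPU. Linux-specific (sched_setaffinity); the worker body is a placeholder. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

static void worker(int core)
{
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(core, &mask);

    /* Bind this (still single-threaded) process to one core. */
    if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
        perror("sched_setaffinity");
        exit(1);
    }

    /* Placeholder for the real work, e.g. one firewall instance per core. */
    printf("worker pid %d pinned to core %d\n", getpid(), core);
}

int main(void)
{
    long cores = sysconf(_SC_NPROCESSORS_ONLN);

    for (long c = 0; c < cores; c++) {
        pid_t pid = fork();
        if (pid == 0) {          /* child: becomes the pinned worker */
            worker((int)c);
            return 0;
        }
    }
    while (wait(NULL) > 0)       /* parent: reap the workers */
        ;
    return 0;
}
```

All the cores light up, but each worker is still bounded by what one thread on one core can do; that’s the Mini Cooper chassis problem described below.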

This is just like tossing a 426 Hemi in a Mini Cooper; your ability to use all that horsepower is limited by what torque/power you can get to the ground when the chassis isn’t designed to efficiently harness it.

For the record, as you’ll see below, I advocate a balanced approach: use proprietary, purpose-built hardware for network processing and offload/acceleration functions where appropriate and ride the Intel multi-core curve on compute stacks for general software execution with appropriately architected software crafted to take advantage of the multi-core technology.

Here’s a good example.

In my last position with Crossbeam, the X-Series platform modules relied on a combination of proprietary network processing featuring custom NPU’s, multi-core MIPS security processors and custom FPGA’s paired with Intel reference design multi-core compute blades (basically COTS-based Woodcrest boards) for running various combinations of security software from leading ISV’s.

What’s interesting about the bulk of the security software from these best-of-breed players you all know and love that runs on those Intel compute blades is that, even with two sockets and dual cores per socket, it is difficult to squeeze large performance gains out of the ISV’s software.

Why?  There are lots of reasons: kernel vs. user mode, optimization for specific hardware and kernels, zero-copy network drivers, memory and data plane architectures, and the like.  However, one of the most interesting contributors to this problem is the fact that many of the core components of these ISV’s software were written 5+ years ago.

While these applications were born as tenants on single- and dual-processor systems, it has become obvious that developers cannot depend upon increased processor clock speeds or the mere availability of multi-core sockets to accelerate their single-threaded applications.

To take advantage of the increase in hardware performance, developers must redesign their applications to run in a threaded environment, as multi-core CPU architectures feature two or more processor compute engines (cores) and provide fully parallelized, hyper-threaded execution of multiple software threads.
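By way of illustration (again, my own minimal sketch, not any ISV’s code), here’s what that redesign looks like in miniature with POSIX threads: split the work into slices and hand one slice to each core, instead of assuming a single faster core will carry it.

```c
/* Sketch: split a batch of work across one thread per core with pthreads.
 * The "work" here is a trivial checksum; a real security app would be
 * scanning buffers, matching signatures, etc. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define ITEMS 1000000

struct slice { const int *data; size_t len; long sum; };

static void *scan_slice(void *arg)
{
    struct slice *s = arg;
    for (size_t i = 0; i < s->len; i++)
        s->sum += s->data[i];        /* stand-in for per-item inspection */
    return NULL;
}

int main(void)
{
    long nthreads = sysconf(_SC_NPROCESSORS_ONLN);
    int *data = malloc(ITEMS * sizeof(int));
    for (size_t i = 0; i < ITEMS; i++)
        data[i] = (int)(i & 0xff);

    pthread_t *tid = malloc(nthreads * sizeof(pthread_t));
    struct slice *sl = malloc(nthreads * sizeof(struct slice));
    size_t chunk = ITEMS / nthreads;

    for (long t = 0; t < nthreads; t++) {
        sl[t].data = data + t * chunk;
        sl[t].len  = (t == nthreads - 1) ? ITEMS - t * chunk : chunk;
        sl[t].sum  = 0;
        pthread_create(&tid[t], NULL, scan_slice, &sl[t]);
    }

    long total = 0;
    for (long t = 0; t < nthreads; t++) {
        pthread_join(tid[t], NULL);
        total += sl[t].sum;
    }
    printf("checksum: %ld (computed on %ld threads)\n", total, nthreads);

    free(sl); free(tid); free(data);
    return 0;
}
```

The hard part in real security software isn’t the fork/join; it’s shared state (session tables, policy, inspection context) that has to be touched by every thread without serializing on locks.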


Enter the Impending Multi-Core Crisis


But there’s a wrinkle in the pairing of this mutually-affected hardware/software growth curve that demonstrates a potential crisis with multi-core evolution.  This crisis will affect the way in which developers evaluate how to move forward with both their software and the hardware it runs on.

This comes from the blog post titled "Multicore Crisis" from SmoothSpan’s Blog:

[Chart: CPU clock speeds over time, flattening out around 2002]

The Multicore Crisis has to do with a shift in the behavior of Moore’s Law.
The law basically says that we can expect to double the number of
transistors on a chip every 18-24 months.  For a long time, it meant
that clock speeds, and hence the ability of the chip to run the same program faster,
would also double along the same timeline.  This was a fabulous thing
for software makers and hardware makers alike.  Software makers could
write relatively bloated software (we’ve all complained at Microsoft
for that!) and be secure in the knowledge that by the time they
finished it and had it on the market for a short while, computers would
be twice as fast anyway.  Hardware makers loved it because with
machines getting so much faster so quickly people always had a good
reason to buy new hardware.

Alas this trend has ground to a halt!  It’s easy to see from the
chart above that relatively little progress has been made since the
curve flattens out around 2002.  Here we are 5 years later in 2007.
The 3GHz chips of 2002 should be running at about 24 GHz, but in fact,
Intel’s latest Core 2 Extreme is running at about 3 GHz.  Doh!  I hate
when this happens!  In fact, Intel made an announcement in 2003 that
they were moving away from trying to increase the clock speed and over
to adding more cores.  Four cores are available today, and soon there
will be 8, 16, 32, or more cores.

What does this mean?  First, Moore’s Law didn’t stop working.  We
are still getting twice as many transistors.  The Core 2 now includes 2
complete CPU’s for the price of one!  However, unless you have software
that’s capable of taking advantage of this, it will do you no good.  It
turns out there is precious little software that benefits if we look at
articles such as Jeff Atwood’s comparison of 4 core vs 2 core performance.  Blah!  Intel says that software has to start obeying Moore’s Law.
What they mean is software will have to radically change how it is
written to exploit all these new cores.  The software factories are
going to have to retool, in other words.

With more and more computing moving into the cloud on the twin
afterburners of SaaS and Web 2.0, we’re going to see more and more
centralized computing built on utility infrastructure using commodity
hardware.  That means we have to learn to use thousands of these
little cores.  Google did it, but only with some pretty radical new tooling.

This is fascinating stuff and may explain why many of the emerging appliances from leading network security vendors today that need optimized performance and packet processing do not depend solely on COTS multi-core server platforms. 

This is even the case with new solutions that have been written from the ground-up to take advantage of multi-core capabilities; they augment the products (much like the Crossbeam example above) with NPU’s, security processors and acceleration/offload engines.

If you don’t have acceleration hardware, as is the case for most pure software-only vendors, this means that a fundamental re-write is required in order to take advantage of all this horsepower.  Check out what Check Point has done with CoreXL, which is their "…multi-core acceleration technology [that] takes advantage of multi-core processors to provide high levels of security inspection by dynamically sharing the load across all cores of a CPU."

We’ll have to see how much more juice can be squeezed out of the software and core stacking as processor performance gains narrow (see above) relative to core density, and whether that can happen without a complete re-tooling of software stacks versus doing it in hardware.

Otherwise, combined with this smoothing/dipping of the Moore’s Law hardware curve, a failure to retool software will mean that proprietary processors may play an increasingly important role as the cycle replays.

Interesting, for sure.

/Hoff

 


Oh SNAP! VMware acquires Determina! Native Security Integration with the Hypervisor?

August 19th, 2007 12 comments

Hot on the heels of becoming gigagillionaires, the folks at VMware make my day with this.  Congrats to the folks @ Determina.

Methinks that for the virtualization world, it’s a very, very good thing.  A step in the right direction.

I’m going to prognosticate that this means that Citrix will buy Blue Lane or Virtual Iron next (see bottom of the post) since their acquisition of XenSource leaves them with the exact same problem that this acquisition for VMware tries to solve:

VMware Inc., the market leader in virtualization software, has acquired
Determina Inc., a Silicon Valley maker of host intrusion prevention
products.

…the security of virtualized
environments has been something of an unknown quantity due to the
complexity of the technology and the ways in which hypervisors interact
with the host OS. 
Determina’s technology is designed specifically to protect the OS
from malicious code, regardless of the origin of the attack, so it
would seem to be a sensible fit for VMware, analysts say.

In his analysis of the deal, Gartner’s MacDonald sounded many of
the same notes. "By potentially integrating Memory Firewall into the
ESX hypervisor, the hypervisor itself can provide an additional level
of protection against intrusions. We also believe the memory protection
will be extended to guest OSs as well: VMware’s extensive use of binary
emulation for virtualization puts the ESX hypervisor in an advantageous
position to exploit this style of protection," he wrote.

I’ve spoken a lot recently about how much I’ve been dreading the notion that security was doomed to repeat itself with the accelerated take-off of server virtualization, since we haven’t solved many of the most basic security problem classes.  Malicious code is getting more targeted and more intelligent, and when you combine an emerging market using hot technology without an appropriate level of security… 

Basically, my concerns have stemmed from the observation that if we can’t do a decent job protecting physically-separate yet interconnected network elements with all the security fu we have, what’s going to happen when the "…network is the computer" (or vice versa)?  Just search for "virtualization" via the Lijit Widget above for more posts on this…

Some options for securing virtualized guest OS’s in a VM are pretty straightforward:

  1. Continue to deploy layered virtualized security services across VLAN segments of which each VM is a member (via IPS’s, routers, switches, UTM devices…)
  2. Deploy software like Virtual Iron’s which looks like a third party vSwitch IPS on each VM
  3. Integrate something like Blue Lane’s ESX plugin-in which interacts with and at the VMM level
  4. As chipset level security improves, enable it
  5. Deploy HIPS as part of every guest OS.

Each of these approaches has its own set of pros and cons, and quite honestly, we’ll probably see people doing all five at the same time…layered defense-in-depth.  Ugh.

What was really annoying to me, however, is that it really seemed that in many cases, the VM solution providers were again expecting that we’d just be forced to bolt security ON TO our VM environments instead of BAKING IT IN.  This was looking like a sad reality.

I’ll get into details in another post about Determina’s solution, but I am encouraged by VMware’s acquisition of a security company which will be integrated into their underlying solution set.  I don’t think it’s a panacea, but quite honestly, the roadmap for solving these sorts of problems was blowing in the wind for VMware up until this point.

"Further, by
using the LiveShield capabilities, the ESX hypervisor could be used
‘introspectively’ to shield the hypervisor and guest OSs from attacks
on known vulnerabilities in situations where these have not yet been
patched. Both Determina technologies are fairly OS- and
application-neutral, providing VMware with an easy way to protect ESX
as well as Linux- and Windows-based guest OSs."

Quite honestly, I had hoped they would buy Blue Lane, since the ESX Hypervisor is now going to be a crowded space for them…

We’ll see how well this gets integrated, but I smiled when I read this.

Oh, and before anyone gets excited, I’m sure it’s going to be 100% undetectable! 😉

/Hoff

Network Security is Dead…It’s All About the Host.

May 28th, 2007 6 comments

No, not entirely, as it’s really about the data, but I had an epiphany last week.

I didn’t get any on me, but I was really excited about the — brace yourself — future of security in a meeting I had with Microsoft.  It reaffirmed my belief that while some low-hanging security fruit will be picked off by the network, the majority of the security value won’t be delivered by it.

I didn’t think I’d recognize just how much of it — in such a short time — would ultimately make its way back into the host (OS), and perhaps you didn’t either.

We started with centralized host-based computing, moved to client-server.  We’ve had Web1.0, are in the  beginnings of WebX.0 and I ultimately believe that we’re headed back to a centralized host-based paradigm now that the network transport is fast, reliable and cheap.

That means that a bunch of the stuff we use today to secure the "network" will gravitate back towards the host.  I’ve used Scott McNealy’s mantra as he intended it before in order to add some color to conversations, but I’m going to butcher it here.

While I agree that in abstract the "Network is the Computer," in order to secure it, you’re going to have to treat the "network" like an OS…hard to do.   That’s why I think more and more security will make its way back to the actual
"computer" instead.

So much of the large security vendors’ strategy involves an increased footprint back on the host.  It’s showing back up there today in the guise of AV, HIPS, configuration management, NAC and Extrusion Prevention, but it’s going to play a much, much loftier role as time goes on and the required level of interaction and interoperability increases.  Rather than put 10+ agents on a box, imagine if that stuff were already built in?

Heresy, I suppose.

I wager that the "you can’t trust the endpoint" and "all security will make its way into the switch" crowds will start yapping on this point, but before that happens, let me explain…

The Microsoft Factor

I was fortunate enough to sit down with some of the key players in Microsoft’s security team last week and engage in a lively bit of banter regarding both practical and esoteric elements of where security has been, is now and will be in the near future.

On the tail of Mr. Chambers’ Interop keynote, the discussion was all abuzz regarding collaboration and WebX.0 and the wonders that will come of the technology levers in the near future as well as the, ahem, security challenges that this new world order will bring.  I’ll cover that little gem in another blog entry.

Some of us wanted to curl up into a fetal position.  Others saw a chance to correct material defects in the way in which the intersection of networking and security has been approached.  I think the combination of the two is natural and healthy and ultimately quite predictable in these conversations.

I did a bit of both, honestly.

As you can guess, given who I was talking to, much of what was discussed found its way back to a host-centric view of security with a heavy anchoring in the continued evolution of producing more secure operating systems, more secure code, more secure protocols and strong authentication paired with encryption.

I expected to roll my eyes a lot and figured that our conversation would gravitate towards UAC, and that a bulk-helping of vapor functionality would be ladled generously into the dog-food bowls the Microsofties were eating from, dispensed with the normal disclaimers urging "…when it’s available one day."

I am really glad I was wrong, and it just goes to show you that it’s important to consider a balanced scorecard in all this; listen with two holes, talk with one…preferably the correct one 😉

I may be shot for saying this in the court of popular opinion, but I think Microsoft is really doing a fantastic job in their renewed efforts toward improving security.  It’s not perfect, but the security industry is such a fickle and bipolar mistress — if you’re not 100% you’re a zero.

After spending all this time urging people that the future of security will not be delivered in the network proper, I have not focused enough attention on the advancements that are indeed creeping their way into the OS’s toward a more secure future, and this inertia orthogonally reinforces my point.

Yes, I work for a company that provides network-centric security offerings.  Does this contradict the statement I just made?  I don’t think so, and neither did the folks from Microsoft.  There will always be a need to consolidate certain security functionality that does not fit within the context of the host — at least within an acceptable timeframe as the nature of security continues to evolve.  Read on.

The network will become transparent.  Why?

In this brave new world, mutually-authenticated and encrypted network communications won’t be visible to the majority of the plumbing that’s transporting them.  So, short of the specific shunts to the residual overlay solutions that will still be present in the network (controls that will not make their way to the host), the network isn’t going to add much security value at all.

The Jericho Effect

What I found interesting is that I’ve enjoyed similar discussions with the distinguished fellows of the Jericho Forum, wherein, after we’ve debated the merits of WHAT you might call it, the notion of HOW ("deperimeterization," "reperimeterization," or, my favorite, "radical externalization") weighs heavily on the evolution of security as we know it.

I have to admit that I’ve been a bit harsh on the Jericho boys before, but Paul Simmonds and I (or at least I did) came to the realization that my allergic reaction wasn’t to the concepts at hand, but rather the abrasive marketing of the message.  Live and learn.

Both sets of conversations basically see the pendulum effect of security in action in this oversimplification of what Jericho posits is the future of security and what Microsoft can deliver — today:

Take a host with a secured OS, connect it into any network using whatever means you find
appropriate, without regard for having to think about whether you’re on the "inside" or "outside." Communicate securely, access and exchange data in policy-defined "zones of trust" using open, secure, authenticated and encrypted protocols.
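For a rough flavor of what "open, secure, authenticated and encrypted" can look like in practice, here’s a minimal, hypothetical sketch of a mutually authenticated TLS client using OpenSSL (the host name and PEM file names are placeholders, and error handling is skeletal); the Vista demo described below delivers the same properties transparently at the IP layer with IPsec instead of in the application:

```c
/* Sketch: a mutually authenticated, encrypted client session with OpenSSL.
 * Host and PEM file names are placeholders; error handling is minimal.
 * This only illustrates the "authenticated + encrypted by default" idea. */
#include <openssl/bio.h>
#include <openssl/err.h>
#include <openssl/ssl.h>
#include <stdio.h>

int main(void)
{
    SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());

    /* Our own certificate and key, so the server can authenticate us... */
    SSL_CTX_use_certificate_file(ctx, "client-cert.pem", SSL_FILETYPE_PEM);
    SSL_CTX_use_PrivateKey_file(ctx, "client-key.pem", SSL_FILETYPE_PEM);

    /* ...and a trusted CA plus mandatory verification, so we authenticate it.
     * The server side is assumed to demand our certificate in turn. */
    SSL_CTX_load_verify_locations(ctx, "trusted-ca.pem", NULL);
    SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);

    BIO *bio = BIO_new_ssl_connect(ctx);
    BIO_set_conn_hostname(bio, "internal.example.com:443");

    /* With SSL_VERIFY_PEER the handshake fails if the peer's cert doesn't check out. */
    if (BIO_do_connect(bio) <= 0 || BIO_do_handshake(bio) <= 0) {
        ERR_print_errors_fp(stderr);
    } else {
        BIO_puts(bio, "GET / HTTP/1.0\r\n\r\n");
        printf("mutually authenticated, encrypted session established\n");
    }

    BIO_free_all(bio);
    SSL_CTX_free(ctx);
    return 0;
}
```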

If you’re interested in the non-butchered more specific elements of the Jericho Forum’s "10 Commandments," see here.

What I wasn’t expecting in marrying these two classes of conversation is that this future of security is much closer and notably much more possible than I readily expected…with a Microsoft OS, no less.  In fact, I got a demonstration of it.  It may seem like no big deal to some of you, but the underlying architectural enhancements to Microsoft’s Vista and Longhorn OS’s are a fantastic improvement on what we have had to put up with thus far.

One of the Microsoft guys fired up his laptop with a standard-issue off-the-shelf edition of Vista,  authenticated with his smartcard, transparently attached to the hotel’s open wireless network and then took me on a tour of some non-privileged internal Microsoft network resources.

Then he showed me some of the ad-hoc collaborative "People Near Me" peer2peer tools built into Vista — same sorts of functionality…transparent, collaborative and apparently quite secure (gasp!) all at the same time.

It was all mutually authenticated and encrypted and done so transparently to him.

He didn’t "do" anything; no VPN clients, no split-brain tunneling, no additional Active-X agents, no SSL or IPSec shims…it’s the integrated functionality provided by both IPv6 and IPSec in the NextGen IP stack present in Vista.

And in his words "it just works."   Yes it does.

He basically established connectivity and his machine reached out to a reachable read-only DC (after authentication and with encryption), which allowed him to transparently resolve "internal" vs. "external" resources.  Yes, the OS must still evolve to prevent exploits of the OS itself, but this too shall become better over time.

No, it obviously doesn’t address what happens if you’re using a Mac or Linux, but the pressure will be on to provide the same sort of transparent, collaborative and secure functionality across those OS’s, too.

Allow me my generalizations — I know that security isn’t fixed and that we still have problems, but think of this as a glass half-full, willya!?

One of the other benefits I got from this conversation is the reminder that as Vista and Longhorn default to IPv6 natively (they can do both v4 and v6 dynamically), as enterprises upgrade, the network hardware and software (and hence the existing security architecture) must also be able to support IPv6 natively.  It’s not just the government pushing v6; large enterprises are now standardizing on it, too.
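On the application side, staying agnostic to v4 vs. v6 is mostly a matter of resolving with getaddrinfo() and taking whichever family the network hands back.  A small, hypothetical sketch (host and port are placeholders):

```c
/* Sketch: an address-family-agnostic TCP connect. getaddrinfo() returns
 * IPv6 and/or IPv4 candidates and we take the first one that works, so the
 * same code runs unchanged on a dual-stack or v6-only network. */
#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int connect_any(const char *host, const char *port)
{
    struct addrinfo hints, *res, *p;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family   = AF_UNSPEC;     /* v4 or v6, whatever resolves */
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo(host, port, &hints, &res) != 0)
        return -1;

    int fd = -1;
    for (p = res; p != NULL; p = p->ai_next) {
        fd = socket(p->ai_family, p->ai_socktype, p->ai_protocol);
        if (fd < 0)
            continue;
        if (connect(fd, p->ai_addr, p->ai_addrlen) == 0)
            break;                      /* connected over v6 or v4 */
        close(fd);
        fd = -1;
    }
    freeaddrinfo(res);
    return fd;
}

int main(void)
{
    int fd = connect_any("www.example.com", "80");
    printf(fd >= 0 ? "connected\n" : "no usable address\n");
    if (fd >= 0)
        close(fd);
    return 0;
}
```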

Here are some excellent links describing the NextGen IP stack in Vista, the native support for IPSec (goodbye, VPN market) and IPv6 support.

Funny how people keep talking about Google being a threat to Microsoft.  I think that the network giants like Cisco might have their hands full with Microsoft…look at how each of them is maneuvering.

/Hoff
{ Typing this on my Mac…staring @ a Vista Box I’m waiting to open to install within Parallels 😉 }

Network Intelligence is an Oxymoron & The Myth of Security Packet Cracking

May 21st, 2007 No comments

[Live from Interop’s Data Center Summit]

Jon Oltsik crafted an interesting post today regarding the bifurcation of opinion on where the “intelligence” ought to sit in a networked world: baked into the routers and switches or overlaid using general-purpose compute engines that ride Moore’s curve.

I think that I’ve made it pretty clear where I stand.  I submit that you should keep the network dumb, fast, reliable and resilient and add intelligence (such as security) via flexible and extensible service layers that scale in terms of both speed and choice.

You should get to define and pick what best of breed means to you and add/remove services at the speed of your business, not the speed of an ASIC spin or an acquisition of technology that is in line with neither the pace and evolution of threat and vulnerability classes nor the speed of an agile business. 

The focal point of his post, however, was to suggest that the real issue is the fact that all of this intelligence requires exposure to the data streams which means that each component that comprises it needs to crack the packet before processing.   Jon suggests that you ought to crack the packet once and then do interesting things to the flows.  He calls this COPM (crack once, process many) and suggests that it yields efficiencies — of what, he did not say, but I will assume he means latency and efficacy.

So, here’s my contentious point that I explain below:

Cracking the packet really doesn’t contribute much to the overall latency equation anymore thanks to high-speed hardware, but the processing sure as heck does!  So whether you crack once or many times doesn’t really matter; what you do with the packet does.

Now, on to the explanation…

I think that it’s fair to say that many of the underlying mechanics of security are commoditizing, so things like anti-virus, IDS, firewalling, etc. can be done without a lot of specialization – leveraging prior art is quick and easy, and thus companies can broaden their product portfolios by just adding a feature to an existing product.

Companies can do this because of the agility that software provides, not hardware.  Hardware can give you economies of scale as it relates to overall speed (for certain things) but generally not flexibility. 

However, software has its own Moore’s curve of sorts, and I maintain that unfortunately its lifecycle, much like what we’re hearing @ Interop regarding CPU’s, does actually have a shelf life and a point of diminishing returns for reasons that you’re probably not thinking about…more on this from Interop later.

Jon describes the stew of security componentry and what he expects to see @ Interop this week:

I expect network intelligence to be the dominant theme at this week’s Interop show in Las Vegas. It may be subtle but its definitely there. Security companies will talk about cracking packets to identify threats, encrypt bits, or block data leakage. The WAN optimization crowd will discuss manipulating protocols and caching files, Application layer guys crow about XML parsing, XSLT transformation, and business logic. It’s all about stuffing networking gear with fat microprocessors to perform one task or another.

That’s a lot of stuff tied to a lot of competing religious beliefs about how to do it all as Jon rightly demonstrates and ultimately highlights a nasty issue:

The problem now is that we are cracking packets all over the place. You can’t send an e-mail, IM, or ping a router without some type of intelligent manipulation along the way.

<nod>  Whether it’s in the network, bolted on via an appliance or done on the hosts, this is and will always be true.  Here’s the really interesting next step:

I predict that the next big wave in this evolution will be known as COPM for "Crack once, process many." In this model, IP packets are stopped and inspected and then all kinds of security, acceleration, and application logic actions occur. Seems like a more efficient model to me.

To do this, it basically means that this sort of solution requires Proxy (transparent or terminating) functionality.  Now, the challenge is that whilst “cracking the packets” is relatively easy and cheap even at 10G line rates due to hardware, the processing is really, really hard to do well across the spectrum of processing requirements if you care about things such as quality, efficacy, and latency and is “expensive” in all of those categories.
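To put the "crack once, process many" idea and my counterpoint in concrete terms, here’s a toy sketch (offsets and inspection stages are purely illustrative, not any vendor’s code): the header "cracking" is a handful of fixed-cost field extractions, while the "process many" stages are where the real cycles, and the real quality/efficacy/latency trade-offs, live.

```c
/* Toy sketch of "crack once, process many": parse the headers once into a
 * struct, then run every inspection stage against that parsed view.
 * Offsets assume a plain Ethernet/IPv4/TCP frame with no options; the
 * stages are placeholders, not any product's logic. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct parsed_pkt {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    const uint8_t *payload;
    size_t payload_len;
};

/* "Crack" the packet once: cheap, fixed-cost header parsing. */
static int crack(const uint8_t *frame, size_t len, struct parsed_pkt *p)
{
    if (len < 14 + 20 + 20)
        return -1;                          /* too short for eth+ip+tcp */
    const uint8_t *ip  = frame + 14;
    const uint8_t *tcp = ip + 20;
    memcpy(&p->src_ip, ip + 12, 4);
    memcpy(&p->dst_ip, ip + 16, 4);
    p->src_port = (uint16_t)(tcp[0] << 8 | tcp[1]);
    p->dst_port = (uint16_t)(tcp[2] << 8 | tcp[3]);
    p->payload = tcp + 20;
    p->payload_len = len - (14 + 20 + 20);
    return 0;
}

static int contains(const uint8_t *buf, size_t len, const char *needle)
{
    size_t n = strlen(needle);
    for (size_t i = 0; n > 0 && i + n <= len; i++)
        if (memcmp(buf + i, needle, n) == 0)
            return 1;
    return 0;
}

/* The "process many" part: each stage sees the same parsed view.
 * This is where latency and efficacy are won or lost. */
static int firewall_stage(const struct parsed_pkt *p) { return p->dst_port == 443; }
static int ips_stage(const struct parsed_pkt *p) { return !contains(p->payload, p->payload_len, "evil"); }
static int dlp_stage(const struct parsed_pkt *p) { return !contains(p->payload, p->payload_len, "SSN:"); }

int inspect(const uint8_t *frame, size_t len)
{
    struct parsed_pkt p;
    if (crack(frame, len, &p) != 0)
        return 0;                           /* drop what we can't parse */
    return firewall_stage(&p) && ips_stage(&p) && dlp_stage(&p);
}

int main(void)
{
    uint8_t frame[14 + 20 + 20 + 16] = {0};
    frame[14 + 20 + 2] = 0x01; frame[14 + 20 + 3] = 0xbb;   /* dst port 443 */
    memcpy(frame + 14 + 20 + 20, "hello world", 11);
    printf("verdict: %s\n", inspect(frame, sizeof(frame)) ? "pass" : "drop");
    return 0;
}
```

Cracking once is the cheap, fixed-cost part; every stage you add multiplies the per-packet work, which is exactly where the quality, efficacy and latency arguments live.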

The intelligence of deciding what to process and how once you’ve cracked the packets is critical. 

This is where embedding this stuff into the network is a lousy idea. 

How can a single vendor possibly provide anything more than “good enough” security in a platform never designed to solve this sort of problem whilst simultaneously trying to balance delivery and security at line rate? 

This will require a paradigm shift for the networking folks.  It will mean either starting from scratch and integrating high-speed networking with general-purpose compute blades; re-purposing a chassis (like, say, a Cat65K), stuffing it with nothing but security cards and grafting it onto the switches; or stacking appliances (big or small, single form factor or in blades) and grafting them onto the switches once again.  And by the way, simply adding networking cards to a blade server isn’t an effective solution, either.  "Regular" applications (and especially SOA/Web 2.0 apps) aren’t particularly topology-sensitive.  Security "applications," on the other hand, are wholly dependent upon and integrated with the topologies into which they are plumbed.

It’s the hamster wheel of pain.

Or, you can get one of these which offers all the competency, agility, performance, resilience and availability of a specialized networking component combined with an open, agile and flexible operating and virtualized compute architecture that scales with parity based on Intel chipsets and Moore’s law.

What this gives you is an ecosystem of loosely-coupled BoB security services through which a flow, once cracked, can be intelligently passed in any order and ruthlessly manipulated as governed by policy, ultimately dependent upon making decisions on how and what to do to a packet/flow based upon content in context.

The consolidation of best-of-breed security functionality delivered in a converged architecture yields efficiencies that are spread not only across the domains of scale, performance, availability and security but also across the traditional economic scopes of CapEx and OpEx.

Cracking packets, bah!  That’s so last Tuesday.

/Hoff

NAC is a Feature not a Market…

March 30th, 2007 7 comments

I’m picking on NAC in the title of this entry because it will drive Alan Shimel ape-shit, and NAC has become the most over-hyped hooplah next to Britney’s hair shaving/rehab incident…besides, the pundits come a-flockin’ when the NAC blood is in the water…

Speaking of chumming for big fish, love ’em or hate ’em, Gartner’s Hype Cycles do a good job of allowing
one to visualize where and when a specific technology appears, lives
and dies
as a function of time, adoption rate and utility.

We’ve recently seen a lot of activity in the security space that I
would personally describe as natural evolution along the continuum,
but is often instead described by others as market "consolidation" due to
saturation. 

I’m not sure they are the same thing, but really, I don’t care to argue that point.  It’s boring.  I think that anyone arguing either side is probably right.  That means that Lindstrom would disagree with both. 

What I do want to do is summarize a couple of points regarding some of
this "evolution" because I use my blog as a virtual jot pad against which
I can measure my own consistency of thought and opinion.  That and the
chicks dig it.

Without my usual PhD Doctoral thesis brevity, here are just a few
network security technologies I reckon are already doomed to succeed as
features and not markets — those technologies that will, within the
next 24 months, be absorbed into other delivery mechanisms that
incorporate multiple technologies into a platform for virtualized
security service layers:

  1. Network Admission Control
  2. Network Access Control
  3. XML Security Gateways
  4. Web Application Firewalls
  5. NBAD for the purpose of DoS/DDoS
  6. Content Security Accelerators
  7. Network-based Vulnerability Assessment Toolsets
  8. Database Security Gateways
  9. Patch Management (Virtual or otherwise)
  10. Hypervisor-based virtual NIDS/NIPS tools
  11. Single Sign-on
  12. Intellectual Property Leakage/Extrusion Prevention

…there are lots more.  Components like gateway AV, FW, VPN, SSL
accelerators, IDS/IPS, etc. are already settling to the bottom of UTM
suites as table stakes.  Many other functions are moving to SaaS
models.  These are just the ones that occurred to me without much
thought.

Now, I’m not suggesting that Uncle Art is right and there will be no
stand-alone security vendors in three years, but I do think some of this
stuff is being absorbed into the bedrock that will form the next 5
years of evolutionary activity.

Of course, some folks will argue that all of the above will just all be
absorbed into the "network" (which means routers and switches.)  Switch
or multi-function device…doesn’t matter.  The "smoosh" is what I’m
after, not what color it is when it happens.

What’d I miss?

/Hoff

(Written from SFO Airport sitting @ Peet’s Coffee.  Drinking a two-shot extra large iced coffee)

Another Virtualized Solution for VM Security…

March 19th, 2007 10 comments

I got an email reminder from my buddy Grant Bourzikas today pointing me to another virtualized security solution for servers from Reflex Security called Reflex VSA.  VSA stands for Virtual Security Appliance and the premise appears to be that you deploy this software within each guest VM and it provides what looks a lot like host-based intrusion prevention functionality per VM.

The functionality is defined thusly:

Reflex VSA solves the problem that traditional network security such as IPS and firewall appliances currently cannot solve: detecting and preventing attacks within a virtual server.  Because Reflex VSA runs as a virtualized application inside the virtualized environment, it can detect and mitigate threats between virtual hosts and networks.

Reflex VSA Features:
        • Access firewall for permission enforcement for intra-host and external network communication
        • Intrusion Prevention with inline blocking and filtering for virtualized networks
        • Anomaly, signature, and rate-based threat detection capability
        • Network Discovery to discover and map all virtual machines and applications
        • Reflex Command Center, providing a centralized configuration and management console, comprehensive reporting tools, and real-time event aggregation and correlation

[Diagram: Reflex VSA deployment]
It does not appear to wrap around or plug-in to the HyperVisor natively, so I’m a little confused as to the difference between deploying VSA and whatever HIPS/NIPS agent a customer might already have deployed on "physical" server instantiations.

Blue Lane’s product addresses this at the HyperVisor layer and it would be interesting to me to have the pundits/experts argue the pros/cons of each approach. {Ed. This is incorrect.  Blue Lane’s product runs as a VM/virtual appliance also.  With the exposure of the hypervisor/virtual switches via API, products like Blue Lane and Reflex could take advantage of it to become more flexible, effective and higher-performing.}

I’m surprised most of the other "security configuration management" folks haven’t already re-branded their agents as being "Virtualization Compliant" to attack this nascent marketspace. < :rolleyes here: >

It’s good to see that folks are at least owning up to the fact that intra-VM communications via virtual switches are going to drive a spin on risk models, detection and mitigation tools and techniques.  This is what I was getting at in this blog entry here.

I would enjoy speaking to someone from Reflex to understand their positioning and differentiation better, but isn’t this just HIPS per VM?  How’s that different than firewall, AV, etc. per VM?

/Hoff