The Downside of All-in-one Assumptions…

I read with some interest a recent Network Computing web posting by Don MacVittie titled "The Downside of All-in-One Security." In this post, Don makes some comments that I don’t entirely agree with, so since I can’t sleep, I thought I’d perform an autopsy to rationalize my discomfort.

I’ve posted before regarding Don’s commentary on UTM (that older story is basically identical to the one I’m commenting on today) in which he said:

Just to be entertaining, I’ll start by pointing out that most readers I talk to wouldn’t consider a UTM at this time. That doesn’t mean most organizations wouldn’t, there’s a limit to the number I can stay in regular touch with and still get my job done, but it does say something about the market.

All I can say is that I don’t know how many readers Don talks to, but the overall UTM market to which he refers can’t be the same UTM market which IDC defines as being set to grow to $2.4 billion in 2009, a 47.9 percent CAGR from 2004-2009. Conversely, the traditional firewall and VPN appliance market is predicted to decline to $1.3 billion by 2009, a negative CAGR of 4.8%.
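For a sense of scale, it’s easy to back out what those IDC growth rates imply about the 2004 starting points. The quick sketch below takes only the 2009 figures and CAGRs quoted above as inputs; the implied 2004 base values are derived for illustration, not figures from IDC itself.

```python
# Back-of-the-envelope check of the IDC projections cited above.
# Only the 2009 figures and the CAGRs come from the post; the 2004
# base values are derived here for illustration.

def implied_base(future_value, cagr, years):
    """Invert FV = base * (1 + cagr) ** years to recover the base."""
    return future_value / (1 + cagr) ** years

# UTM: $2.4B in 2009 at a 47.9% CAGR over 2004-2009 (5 years)
utm_2004 = implied_base(2.4, 0.479, 5)

# Traditional firewall/VPN: $1.3B in 2009 at a -4.8% CAGR
fw_vpn_2004 = implied_base(1.3, -0.048, 5)

print(f"Implied 2004 UTM market:    ${utm_2004:.2f}B")
print(f"Implied 2004 FW/VPN market: ${fw_vpn_2004:.2f}B")
```

In other words, a market growing from roughly a third of a billion dollars toward $2.4 billion while the traditional segment shrinks — which is the point: the money is moving toward UTM.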

The reality is that UTM players (whether perimeter or Enterprise/Service Provider class UTM) continue to post impressive numbers supporting this growth — and customers are purchasing these solutions.  Perhaps they don’t purchase "UTM" devices but rather "multi-function security appliances?" 🙂 

I’m just sayin’…

Don leads off with:

Unified Threat Management (UTM) products combine multiple security functions, such as firewall, content inspection and antivirus, into a single appliance. The assumption is UTM reduces management hassles by reducing the hardware in your security infrastructure … but you know what happens when you assume.

No real problems thus far.  My response to the interrogative posited by the last portion of Don’s intro is: "Yes, sometimes when you assume, it turns out you are correct."  More on that in a moment…


You can slow the spread of security appliances by collapsing many devices into one, but most organizations struggle to manage the applications themselves, not the hardware that runs them.

Bzzzzzzzzttttt.  The first half of that sentence describes an absolutely valid, non-assumptive benefit to those deploying UTM.  The latter half makes a rather sizeable assumption, one I’d like substantiated, please.

If we’re talking about security appliances, today there’s little separation between the application and the hardware that runs it.  That’s the whole idea behind appliances.

In many cases, these appliances use embedded software or an RTOS in silicon, or very tightly couple the functional and performance foundations of the solution to the combined hardware and software.

I can’t rationalize someone not worrying about the "hardware," especially when they deploy things like HA clusters or a large number of branch office installations. 

You mean to tell me that in large enterprises (you notice that Don forces me to assume what market he’s referring to because he’s generalizing here…) that managing 200+ firewall appliances (hardware) is not a struggle?  Don talks about the application as an issue.  What about the operating system?  Patches?  Alerts/alarms?  Logs?  It’s hard enough to do that with one appliance.  Try 200.  Or 1000!

Content inspection, antivirus and firewall are all generally controlled by different crowds in the enterprise, which means some arm-wrestling to determine who maintains the UTM solution.

This may be an accurate assumption in a large enterprise, but in a small company (SME/SMB) it’s incredibly likely that the folks managing the CI, AV and firewall *are* the same people/person.  Chances are it’s Bob in accounting!


Then there’s bundling. Some vendors support best-of-breed security apps, giving you a wider choice. However, each application has to crack packets individually–which affects performance.

So there’s another assumptive generalization: that somehow taking traffic and vectoring it off at high speed/low latency to processing functions highly tuned for specific tasks is going to worsen performance.  Now, I know that Don didn’t say it would worsen performance; he said it "…affect(s) performance," but we all know what Don meant — even if we have to assume. 😉

Look, this is an over-reaching and generalized argument and the reality is that even "integrated" solutions today perform replay and iterative inspection that requires multiple packet visitations with "individual packet cracking" — they just happen to do it in parallel — either monolithically in one security stack or via separate applications.  Architecturally, there are benefits to this approach.

Don’t throw the baby out with the bath water…

How do you think stand-alone, non-in-line IDS/IPS works in conjunction with firewalls today in non-UTM environments?  The firewall gets the packet, as does the IDS/IPS via a SPAN port, a load balancer, etc.  They crack the packets independently, but in the case of IDS, it doesn’t "affect" the firewall’s performance one bit.  Using this analogy, in an integrated UTM appliance, this example holds water, too.
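The parallel-inspection model described above can be sketched in a few lines. This is a toy illustration only — the engine names, the policy logic and the dispatcher are all hypothetical, not any vendor’s actual implementation — but it shows the key point: each engine cracks its own copy of the packet independently, so one engine’s inspection doesn’t gate another’s.

```python
# Toy sketch of parallel "individual packet cracking": the same packet
# is fanned out to every inspection engine concurrently, and the
# verdicts are correlated on-box afterward. All names and policies
# here are hypothetical illustrations.

from concurrent.futures import ThreadPoolExecutor

def firewall(packet):
    # Hypothetical 5-tuple policy: deny telnet, allow everything else.
    return "deny" if packet["dst_port"] == 23 else "allow"

def ids(packet):
    # Hypothetical signature match against the payload.
    return "alert" if b"attack" in packet["payload"] else "clean"

def inspect(packet, engines):
    """Hand each engine its own copy of the packet, in parallel,
    then gather the verdicts for on-box correlation."""
    with ThreadPoolExecutor() as pool:
        futures = [(e.__name__, pool.submit(e, dict(packet))) for e in engines]
        return {name: fut.result() for name, fut in futures}

pkt = {"dst_port": 80, "payload": b"GET / HTTP/1.1"}
print(inspect(pkt, [firewall, ids]))
```

Note that the IDS verdict never blocks the firewall path — each engine runs against its own copy, which is exactly the SPAN-port analogy made above, just collapsed into one box.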

Furthermore, in a UTM approach the correlation for disposition is usually done on the same box, not via an external SIEM…further saving the poor user from having to deploy yet another appliance.  Assuming, of course, that this is a problem in the first place. 😉

I’d like some proof points and empirical data that clearly demonstrate this assumption regarding performance.  And don’t hide behind the wording; the implication here is that you get "worse" performance.  With today’s numbers from dual-CPU/multi-core processors, huge buses, NPUs and dedicated hardware assist, this set of assumptions is flawed.

Other vendors tweak performance by tightly integrating apps, but you’re stuck with the software they’ve chosen or developed.

…and then there are those vendors that tweak performance by tightly integrating the apps and allow the customer to define what is best-of-breed without being "stuck with the software [the vendor has] chosen or developed."  You get choice and performance.  To assume otherwise is to not perform due diligence on the solutions available today.  If you need to guess who I am talking about…


For now, the single platform model isn’t right for enterprises large enough to have a security staff.

Firstly, this statement is just plain wrong.  It *may* be right if you’re talking about deploying a $500 perimeter UTM appliance (or a bunch of them) in the core of a large enterprise, but nobody would do that.  This argument is completely off course when you’re talking about Enterprise-class UTM solutions.

In fact, if you choose the right architecture, assuming the statement above regarding separate administrative domains is correct, you can have the AV people manage the AV, the firewall folks manage the firewalls, etc. and do so in a very reliable, high speed and secure consolidated/virtualized fashion from a UTM architecture such as this.
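The separate-administrative-domains point is easy to model. The sketch below is a hypothetical illustration (the class and method names are mine, not any product’s API), assuming a consolidated platform that scopes each admin team to its own security function:

```python
# Hypothetical sketch of per-function administrative domains on one
# consolidated UTM platform: the AV team manages only AV policy, the
# firewall team only firewall policy, and so on. Names are illustrative.

class UtmPlatform:
    def __init__(self):
        self.policies = {"firewall": {}, "antivirus": {}, "content": {}}
        self.admins = {}  # admin team -> set of functions it may manage

    def grant(self, admin, function):
        """Scope an admin team to one security function."""
        self.admins.setdefault(admin, set()).add(function)

    def set_policy(self, admin, function, key, value):
        """Apply a policy change, enforcing the administrative domain."""
        if function not in self.admins.get(admin, set()):
            raise PermissionError(f"{admin} may not manage {function}")
        self.policies[function][key] = value

utm = UtmPlatform()
utm.grant("av_team", "antivirus")
utm.grant("fw_team", "firewall")
utm.set_policy("av_team", "antivirus", "engine_update", "hourly")
# utm.set_policy("av_team", "firewall", ...) would raise PermissionError
```

One box, separate domains of control — which is all the "different crowds" objection actually requires.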

That said, the sprawl created by existing infrastructure can’t go on forever–there is a limit to the number of security-only ports you can throw into the network. UTM will come eventually–just not today.

So, we agree again…security sprawl cannot continue.  It’s an overwhelming issue for both those who need "good enough" security as well as those who need best-of-breed. 

However, your last statement leaves me scratching my head in confused disbelief, so I’ll just respond thusly:

UTM isn’t "coming," it’s already arrived.  It’s been here for years without the fancy title.  The same issues faced in the datacenter in general are the same facing the microcosm of the security space — from space, power, and cooling to administration, virtualization and consolidation — and UTM helps solve these challenges.  UTM is here TODAY, and to assume anything otherwise is a foolish position.

My $0.02 (not assuming inflation)

/Chris
