The Downside of All-in-one Assumptions…

July 16th, 2006

I read with some interest a recent Network Computing web posting by Don MacVittie  titled "The Downside of All-in-One Security."  In this post, Don makes some comments that I don’t entirely agree with, so since I can’t sleep, I thought I’d perform an autopsy to rationalize my discomfort.

I’ve posted before regarding Don’s commentary on UTM (that older story is essentially the same story as the one I’m commenting on today) in which he said:

Just to be entertaining, I’ll start by pointing out that most readers I talk to wouldn’t consider a UTM at this time. That doesn’t mean most organizations wouldn’t, there’s a limit to the number I can stay in regular touch with and still get my job done, but it does say something about the market.

All I can say is that I don’t know how many readers Don talks to, but the overall UTM market to which he refers can’t be the same UTM market which IDC defines as being set to grow to $2.4 billion in 2009, a 47.9 percent CAGR from 2004-2009.  Conversely, the traditional firewall and VPN appliance market is predicted to decline to $1.3 billion by 2009, a negative CAGR of 4.8%.
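
For anyone who wants to sanity-check those projections, here’s a quick back-of-the-envelope in Python. The 2004 base-year figures are implied by the quoted CAGRs, not taken from the IDC report itself:

    # Work backwards from a projected value and a CAGR to the implied base-year value.
    def implied_base(future_value, cagr, years):
        return future_value / ((1 + cagr) ** years)

    utm_2004 = implied_base(2.4e9, 0.479, 5)      # UTM: roughly $0.34B implied for 2004
    fw_vpn_2004 = implied_base(1.3e9, -0.048, 5)  # firewall/VPN: roughly $1.66B implied for 2004

    print(f"Implied 2004 UTM market:          ${utm_2004 / 1e9:.2f}B")
    print(f"Implied 2004 firewall/VPN market: ${fw_vpn_2004 / 1e9:.2f}B")

In other words, IDC is saying a roughly $340 million market grows to $2.4 billion over five years while the $1.7 billion traditional firewall/VPN segment shrinks, which is why I have a hard time squaring "most readers wouldn’t consider a UTM" with the numbers.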

The reality is that UTM players (whether perimeter or Enterprise/Service Provider class UTM) continue to post impressive numbers supporting this growth — and customers are purchasing these solutions.  Perhaps they don’t purchase "UTM" devices but rather "multi-function security appliances?" 🙂

I’m just sayin’…

Don leads off with:


Unified Threat Management (UTM) products combine multiple security functions, such as firewall, content inspection and antivirus, into a single appliance. The assumption is UTM reduces management hassles by reducing the hardware in your security infrastructure … but you know what happens when you assume.

No real problems thus far.  My response to the interrogative posited by the last portion of Don’s intro is: "Yes, sometimes when you assume, it turns out you are correct."  More on that in a moment…


You can slow the spread of security appliances by collapsing many devices into one, but most organizations struggle to manage the applications themselves, not the hardware that runs them.

Bzzzzzzzzttttt.  The first half of that sentence is absolutely a valid, non-assumptive benefit to those deploying UTM.  The latter half makes a rather sizeable assumption, one I’d like substantiated, please.

If we’re talking about security appliances, today there’s little separation between the application and the hardware that runs it.  That’s the whole idea behind appliances.

In many cases, these appliances use embedded software or an RTOS in silicon, or otherwise tie the functional and performance foundations of the solution tightly to the specific combination of hardware and software.

I can’t rationalize someone not worrying about the "hardware," especially when they deploy things like HA clusters or a large number of branch office installations. 

You mean to tell me that in large enterprises (you notice that Don forces me to assume what market he’s referring to because he’s generalizing here…) that managing 200+ firewall appliances (hardware) is not a struggle?  Don talks about the application as an issue.  What about the operating system?  Patches?  Alerts/alarms?  Logs?  It’s hard enough to do that with one appliance.  Try 200.  Or 1000!

Content inspection, antivirus and firewall are all generally controlled by different crowds in the enterprise, which means some arm-wrestling to determine who maintains the UTM solution.

This may be an accurate assumption in a large enterprise, but in a small company (SME/SMB) it’s incredibly likely that the folks managing the CI, AV and firewall *are* the same people/person.  Chances are it’s Bob in accounting!


Then there’s bundling. Some vendors support best-of-breed security apps, giving you a wider choice. However, each application has to crack packets individually–which affects performance.

So there’s another assumptive generalization that somehow taking traffic and vectoring it off at high speed/low latency to processing functions highly tuned for specific tasks is going to worsen performance.  Now I know that Don didn’t say it would worsen performance, he said it "…affect(s) performance," but we all know what Don meant — even if we have to assume. 😉

Look, this is an over-reaching and generalized argument and the reality is that even "integrated" solutions today perform replay and iterative inspection that requires multiple packet visitations with "individual packet cracking" — they just happen to do it in parallel — either monolithically in one security stack or via separate applications.  Architecturally, there are benefits to this approach.

Don’t throw the baby out with the bath water…

How do you think stand-alone non-in-line IDS/IPS works in conjunction with firewalls today in non-UTM environments?  The firewall gets the packet as does the IDS/IPS via a SPAN port, a load balancer, etc…they crack the packets independently, but in the case of IDS, it doesn’t "affect" the firewall’s performance one bit.  Using this analogy, in an integrated UTM appliance, this example holds water, too.
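
If it helps, here’s a toy sketch of that out-of-band model in Python. It’s purely illustrative (the packet dicts and helper functions are stand-ins I made up, not any product’s API), but it shows why deep inspection of a mirrored copy adds nothing to the latency of the inline forwarding path:

    import queue
    import threading

    mirror = queue.Queue()  # stands in for the SPAN port / tap feeding the IDS

    def policy_allows(packet):
        # Placeholder firewall rule: drop anything flagged as bad, forward the rest.
        return not packet.get("bad", False)

    def inspect(packet):
        # Placeholder deep inspection: real sensors do signature/anomaly analysis here.
        if packet.get("bad"):
            print("IDS alert:", packet)

    def firewall_path(packet):
        # Inline path: makes its forward/deny decision without waiting on the IDS.
        mirror.put(dict(packet))      # hand a copy off to the out-of-band inspector
        return policy_allows(packet)  # inline latency is independent of inspection depth

    def ids_worker():
        # Out-of-band path: cracks its own copy of every packet at its own pace.
        while True:
            packet = mirror.get()
            inspect(packet)
            mirror.task_done()

    threading.Thread(target=ids_worker, daemon=True).start()

    for pkt in ({"src": "10.1.1.5", "bad": False}, {"src": "192.0.2.13", "bad": True}):
        print("forwarded" if firewall_path(pkt) else "dropped", pkt["src"])

    mirror.join()  # demo only: let the analyzer finish with its copies before exiting

The same logic holds when the firewall and the inspection engines live in the same UTM chassis; the copies are simply handed off across a backplane instead of a SPAN port.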

Furthermore, in a UTM approach the correlation for disposition is usually done on the same box, not via an external SIEM…further saving the poor user from having to deploy yet another appliance.  Assuming, of course, that this is a problem in the first place. 😉

I’d like some proof points and empirical data that clearly demonstrate this assumption regarding performance.  And don’t hide behind the wording.  The implication here is that you get "worse" performance.  With today’s numbers from dual-CPU/multi-core processors, huge busses, NPUs and dedicated hardware assist, this set of assumptions is flawed.

Other vendors tweak performance by tightly integrating apps, but you’re stuck with the software they’ve chosen or developed.

…and then there are those vendors that tweak performance by tightly integrating the apps and allow the customer to define what is best-of-breed without being "stuck with the software [the vendor has] chosen or developed."  You get choice and performance.  To assume otherwise is to not have done your due diligence on the solutions available today.  If you need to guess who I am talking about…


For now, the single platform model isn’t right for enterprises large enough to have a security staff.

Firstly, this statement is just plain wrong.  It *may* be right if you’re talking about deploying a $500 perimeter UTM appliance (or a bunch of them) in the core of a large enterprise, but nobody would do that.  This argument is completely off course when you’re talking about Enterprise-class UTM solutions.

In fact, if you choose the right architecture, assuming the statement above regarding separate administrative domains is correct, you can have the AV people manage the AV, the firewall folks manage the firewalls, etc. and do so in a very reliable, high speed and secure consolidated/virtualized fashion from a UTM architecture such as this.
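
A minimal sketch of what I mean, assuming the virtualized-UTM model above (the context names and role mapping are mine, purely for illustration, not any vendor’s configuration syntax):

    # Each security function runs as its own virtual context on consolidated hardware,
    # and each administrative domain is delegated only the contexts it owns.
    virtual_contexts = {
        "fw-core":  {"function": "firewall",           "admins": {"firewall-team"}},
        "av-mail":  {"function": "antivirus",          "admins": {"av-team"}},
        "ci-proxy": {"function": "content-inspection", "admins": {"content-team"}},
    }

    def can_manage(group, context):
        # Simple RBAC check: a group may only touch the contexts delegated to it.
        return group in virtual_contexts[context]["admins"]

    assert can_manage("firewall-team", "fw-core")
    assert not can_manage("firewall-team", "av-mail")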

That said, the sprawl created by existing infrastructure can’t go on forever–there is a limit to the number of security-only ports you can throw into the network. UTM will come eventually–just not today.

So, we agree again…security sprawl cannot continue.  It’s an overwhelming issue both for those who need "good enough" security and for those who need best-of-breed.

However, your last statement leaves me scratching my head in confused disbelief, so I’ll just respond thusly:

UTM isn’t "coming," it’s already arrived.  It’s been here for years without the fancy title.  The issues faced in the datacenter in general are the same ones facing the microcosm of the security space — from space, power and cooling to administration, virtualization and consolidation — and UTM helps solve these challenges.  UTM is here TODAY, and to assume anything otherwise is a foolish position.

My $0.02 (not assuming inflation)

/Chris

IDS/IPS – Finger Lickin’ Good!

June 13th, 2006

[Much like Colonel Sanders’ secret recipe, the evolution of "pure" IPS is becoming an interesting combo bucket of body parts — all punctuated, of course, by a secret blend of 11 herbs and spices…]

So, the usual suspects are at it again and I find myself generally agreeing with the two wise men, Alan Shimel and Mike Rothman.  If that makes me a security sycophant, so be it.  I’m not sure, but I think these two guys (and Michael Farnum) are the only ones who read my steaming pile of blogginess — and of course Alex Neihaus, who is really madly in rapture with my prose… 😉

Both Alan and Mike are discussing the relative evolution from IDS/IPS into "something else." 

Alan references a specific evolution from IDS/IPS to UTM — an even more extensible version of the traditional perimeter UTM play — with the addition of post-admission NAC capabilities.  Interesting.

The interesting thing here is that NAC typically isn’t done "at the perimeter" — unless we’re talking about the need to validate access via VPN — so I think this is a nod to the fact that there is, indeed, a convergence of thinking that demonstrates the movement of "perimeter UTM" towards the Enterprise UTM deployments that companies are choosing to purchase in order to manage risk.

Alan seems to be alluding to the fact that these Enterprises are considering internal deployments of IPS with NAC capabilities.  I think that is a swell idea.  I also think he’s right.  NAC is one of about 5-6 key, critical applications that are a natural fit for anything that is supposed to provide Unified Threat Management…that’s what UTM stands for, after all.

Mike alludes to the reasonable assertion that IDS/IPS vendors are only riding the wave preceding the massive ark building that will result in survival of the fittest, where the definition of "fit" is based upon what the customer wants (this week):

Of course the IDS/IPS vendors are going there because customers want them to. Only the big of the big can afford to support all sorts of different functions on different boxes with different management (see No mas box). The great unwashed want the IDS/IPS built into something bigger and simpler.

True enough.  Agreed.  However, there are vendors — big players — such as Cisco and Juniper that won’t use the term UTM because it implies that their IDS and IPS products, stacked with additional functions, are in fact turkeys (following up on the poultry analogies) and that there is guilt by association suggesting that UTM is still considered a low-end solution.  The ASP of most UTM products is around the $1,500 range, so why fight for scraps?

So that leads me to the point I’ve made before wherein I contrast the differences in approach and the ultimate evolution of UTM:

Historically, UTM is defined as an approach to network security in which multiple logically complementary security applications, such as firewall, intrusion detection and antivirus, are deployed together on a single device. This reduces operational complexity while protecting the network from blended threats.

For large networks where security requirements are much broader and complex, the definition expands from the device to the architectural level. In these networks, UTM is a “security services layer” within the greater network architecture. This maintains the operational simplicity of UTM, while enabling the scalable and intelligent delivery of security services based on the requirements of the business and network. It also enables enterprises and service providers to adapt to new threats without having to add additional security infrastructure.

My point here is that just as firewalls added IDS and ultimately became IPS, IPS has had Anti-X added to it and become UTM — Perimeter UTM, that is.  The thing missing there is the flexibility and extensibility of these platforms to support more functions and features.

However, as both Mike and Alan point out, UTM is also evolving into architectures that allow virtualized security service layers to be deployed from more scalable platforms across the network.  The next logical evolution has already begun.

When I go out on the road to speak and address large audiences of folks who manage security, most relay that they simply do not trust IPS devices with automated full blocking turned on.  Why?  Because they lack context.  While integrated VA/VM and passive/active scanning add to the data collected, is that really actionable intelligence?  Can these devices really make reasonable judgements as to the righteousness of the data they see?

Not without BA functionality, they can’t.  And I don’t mean today’s NBA (a la Gartner: Network Behavior Analysis) or NBAD (a la Arbor/Mazu: Network Behavioral Anomaly Detection) technology, either. 

[Put on your pads, boys, ‘cos here we go…]

NBA(D) as it exists today is nothing more than a network troubleshooting and utilization tool, NOT a security function — at least not in its current form and not given the data it collects today.  Telling me about flows across my network IS, I admit, mildly interesting, but without the fast-packet cracking capabilities to send flow data *including* content, it’s not very worthwhile (yes, I know that newer versions of NetFlow will supposedly do this, but at what cost to the routers/switches that will have to perform this content inspection?).

NBA(D) today takes xFlow and looks at traffic patterns/protocol usage, etc. to determine if, within the scope of limited payload analysis, something "bad" has occurred.

That’s nice, but then what?  I think that’s half the picture.  Someone please correct me, but today NetFlow comes primarily from routers and switches; when do firewalls start sending NetFlow data to these standalone BA units?  Don’t you need that information, in conjunction with the exports from routers/switches at a minimum, to make even the least substantiated decision on what disposition to enact?
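
To make the point about how thin that data is, here’s a minimal sketch of the kind of pattern/volume check you can actually do on flow records alone. The field names, baseline and threshold are assumptions of mine for illustration, not any vendor’s xFlow schema:

    from collections import defaultdict

    # Roughly what NetFlow-style exports give you: a 5-tuple plus counters, no content.
    flows = [
        {"src": "10.1.1.5", "dst": "192.0.2.80", "dport": 443, "proto": "tcp", "bytes": 12_000},
        {"src": "10.1.1.5", "dst": "192.0.2.25", "dport": 25,  "proto": "tcp", "bytes": 90_000_000},
    ]

    # Per-host volume baseline, learned elsewhere (assumed flat here for the sketch).
    baseline_bytes = defaultdict(lambda: 1_000_000)

    def looks_anomalous(flow):
        # Flag a flow whose volume wildly exceeds the source host's learned baseline.
        return flow["bytes"] > 10 * baseline_bytes[flow["src"]]

    for flow in flows:
        if looks_anomalous(flow):
            # ...and then what?  Without content, "this host pushed a lot of bytes at a
            # mail server" is about as specific as the verdict gets.
            print("anomaly:", flow["src"], "->", flow["dst"], flow["bytes"], "bytes")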

ISS has partnered with Arbor (good move, actually) in order to take this first step towards integration — in their world it’s IPS+BA.  Lots of other vendors — like SourceFire — are also developing BA functionality to shore up the IPS products — truth be told, they’re becoming UTM solutions, even if they don’t want to call their products by this name.

Optenet (runs on the Crossbeam) uses BA functionality to provide the engine and/or shore up the accuracy for most of their UTM functions (including IPS) — I think we’ll see more UTM companies doing this.  I am sure of that (hint, hint.)

The dirty little secret is that despite the fact that IDS is supposedly dead, we see (as do many of the vendors — they just won’t tell you so) most people purchasing IPS solutions and putting them in IDS mode…there’s a good use of money!
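
For what it’s worth, the whole difference between "IPS mode" and running the same box as a glorified IDS comes down to whether a verdict is ever enforced. A toy illustration (mine, not any product’s configuration syntax):

    def handle(packet, signatures, inline_blocking=False):
        # Evaluate the packet against every signature and alert on any match.
        for sig_name, matches in signatures.items():
            if matches(packet):
                print("alert:", sig_name)
                if inline_blocking:
                    return "drop"   # IPS mode: the verdict is actually enforced
        return "forward"            # IDS mode: alert logged, traffic flows anyway

    signatures = {"suspicious-port": lambda p: p.get("dport") == 31337}
    print(handle({"dport": 31337}, signatures))                        # detect-only: forward
    print(handle({"dport": 31337}, signatures, inline_blocking=True))  # inline blocking: drop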

I think the answer lies in the evolution from the turkeys, chickens and buzzards above to the eagle-eyed Enterprise UTM architectures of tomorrow — the integrated, consolidated and virtualized combination of UTM with NAC and NBA(D) — all operating in a harmonious array of security goodness.

Add VA/VM, Virtual patching, and the ability to control how data is created, accessed, manipulated and transported, and then we’ll be cooking with gas!  Finger lickin’ good.

But what the hell do I know — I’m a DoDo…actually, since I grew up in New Zealand, I suppose that really makes me a Kiwi.   Go figure.