
Archive for June, 2006

Even M(o)ore on Purpose-built UTM Hardware

June 8th, 2006

Alan Shimel made some interesting points today regarding what he described as the impending collision between off-the-shelf, high-powered, general-purpose compute platforms and supplemental "content security hardware acceleration" technologies such as those made by Sensory Networks — and the ultimate lack of a sustainable value proposition for these offload systems:

I can foresee a time in the not too distant future where a quad-core, quad-processor box with PCI Express buses and globs of RAM delivers some eye-popping performance. When it does, the Sensory Networks of the world are in trouble. Yes, there will always be room at the top of the market for the Ferrari types who demand a specialized HW box for their best-of-breed applications.

Like Alan, I think these multi-processor, multi-core systems with fast buses and large RAM banks will deliver an amazing price/performance point for applications such as security — and more specifically, for multi-function security applications such as those used within UTM offerings.  For systems that architecturally rely on multi-packet cracking capability to inspect traffic and execute a set of functional security dispositions, the faster you can effect this, the better.  Point taken.

One interesting point, however, is that boards like Sensory’s are really deployed as "benign traffic accelerators," not as catch-all filters.  As traffic enters a box equipped with one of these cards, the system’s high throughput potential enables a policy-based decision either to send the traffic in question to the Sensory card for inspection or to pass it through uninspected (accelerate it as benign — sort of like a cut-through or fast-path).  That "routing" function is done in software, so the faster you can make that decision, the better your "goodput" will be.
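Just to illustrate the shape of that decision (and only the shape; the field names, policy, and offload interface below are completely made up, not Sensory's or anyone's actual API), a minimal sketch in Python:

```python
# Sketch of a "benign traffic accelerator" dispatch: policy decides, per flow,
# whether the payload earns deep inspection on the offload card or rides the
# software fast-path uninspected. All names and fields are hypothetical.

CONTENT_PORTS = {25, 80, 110}  # content-bearing services worth cracking open

def needs_inspection(flow):
    """Policy check: only content-bearing, externally sourced flows get inspected."""
    return flow["dst_port"] in CONTENT_PORTS and not flow["src_internal"]

def dispatch(flow, payload, offload_card):
    if needs_inspection(flow):
        return offload_card.inspect(payload)  # slower path: hardware pattern matching
    return "pass"                             # fast path: accelerated as benign (cut-through)
```

The point is simply that the accept/offload fork lives in software, so the speed of that fork bounds your goodput no matter how fast the card behind it is.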

Will the advantage of making this decision and offloading to a card like Sensory’s be eclipsed by the uptick in system CPU speeds, multiple cores and lots of RAM?  That depends on one very critical element and its timing — the uptick in network connectivity speeds and feeds.  Feed the box with one or more GigE interfaces, and the answer is probably "yes."

Feed it with a couple of 10GigE interfaces, however, and the answer may not be so obvious, even with big, fat buses.  The timing and nature of the pattern/expression matching is very important here.  Doing line-rate inspection focused on content (not just headers) is a difficult proposition to accomplish without adding latency.  Doing it within context is even harder, so you don’t dump good traffic based on a false positive/negative.

So, along these lines, the one departure point for consideration is that the FPGAs in cards like Sensory’s are amazingly well tuned to provide massively parallel expression/pattern matching with the flexibility of software and the performance benefits of an ASIC.  Furthermore, the ability to parallelize these operations and feed them into a large hamster wheel designed to perform these activities not only at high speed but with high accuracy *is* attractive.
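As a rough software analogue (nothing more; the signatures below are toy inventions), the win of a parallel matching engine is that every pattern gets evaluated in a single pass over the bytes, instead of one scan per signature:

```python
import re

# Toy multi-pattern matcher: fold many signatures into one compiled alternation
# so a single pass over the payload tests all of them at once, a (very) loose
# software analogue of a parallel FPGA matching engine. Signatures are made up.
SIGNATURES = {
    "cmd_exec": rb"/bin/sh",
    "sql_meta": rb"(?i:union\s+select)",
    "nop_sled": rb"\x90{16,}",
}
COMBINED = re.compile(b"|".join(
    b"(?P<" + name.encode() + b">" + pattern + b")"
    for name, pattern in SIGNATURES.items()
))

def match_all(payload: bytes):
    """Names of every signature that fires, found in one scan of the payload."""
    return {m.lastgroup for m in COMBINED.finditer(payload)}

print(match_all(b"GET /?q=1 UNION SELECT passwd FROM users"))  # -> {'sql_meta'}
```

The hardware does this across thousands of expressions simultaneously; the software version above merely shows why one pass beats N passes as the signature set grows.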

The algorithms used in these subsystems are optimized to deliver a combination of scale and accuracy that isn’t easy to duplicate by just throwing cycles or memory at the problem, because "performance" in this kind of pattern matching is as much about accuracy as it is about throughput.  Being faster doesn’t equate to being better.

These decisions rely on associative exposures to expressions that are not necessarily orthogonal in nature (an orthogonal classification is one in which no item is a member of more than one group — that is, the classifications are mutually exclusive.  Thanks, Wikipedia!)  Depending upon what you’re looking for and where you find it, you could have multiple classifications and matches — you need to decide (and quickly) whether it’s "bad" or "good" and how the results relate to one another.

What I mean is that within context, you could have multiple matches that seem unrelated, so flows may require iterative inspection (of the entire byte-stream or an offset) based upon "what" you’re looking for and what you find when you do — and then be re-subjected to inspection somewhere else in the byte-stream.
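Hand-wavy pseudocode of the problem (classes, offsets and verdict logic all invented for illustration): the disposition is a function of the whole set of matches, and one match can send you back into the byte-stream for another look.

```python
# Hypothetical illustration: matches are (class, offset) pairs, and the classes
# are NOT mutually exclusive. Disposition is computed over the whole set, and
# one match can trigger re-inspection at another offset in the byte-stream.

def disposition(matches):
    classes = {cls for cls, _offset in matches}
    if {"encoded_payload", "shellcode"} <= classes:
        return "drop"        # two weak, non-orthogonal signals add up to one strong verdict
    if "shellcode" in classes:
        return "reinspect"   # go back and look elsewhere in the byte-stream/context
    return "pass"

print(disposition({("encoded_payload", 112), ("shellcode", 480)}))  # -> drop
```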

Depending upon how well you’ve architected your security software to distribute, dedicate and virtualize these sorts of functions across multiple processors and cores in a general-purpose hardware solution, you might decide that having purpose-built hardware as an assist — providing context and accuracy while the main CPU(s) do what they do best — is a good thing.

Switching gears…

All that being said, signature-only inspection is dead.  If in the near future you don’t have behavioral analysis/behavioral anomaly capabilities to provide context in addition to (and in parallel with) signature matching, all the cycles in the world aren’t going to help…and looking at headers and NetFlow data alone ain’t going to cut it.  We’re going to see some very intensive packet-cracking/payload and protocol BA functions rise to the surface shortly.  The algorithms and hardware required to take multi-dimensional problem spaces and convert them down into two dimensions (anomaly/not an anomaly) will pose an additional challenge for general-purpose platforms.  Just look at all the IPS vendors who traditionally provide signature matching scurrying to add NBA/NBAD.  It will happen in the UTM world, too.
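For a feel of what "converting down into two dimensions" means (features, baseline and threshold are all invented here, and real BA engines are enormously more sophisticated), a deliberately crude sketch:

```python
from statistics import mean, stdev

# Crude behavioral-anomaly sketch: learn per-feature baselines from flows deemed
# "normal," then collapse a new flow's multi-dimensional feature vector into a
# single anomaly/not-an-anomaly verdict. Everything here is illustrative only.
FEATURES = ("bytes_per_sec", "pkts_per_sec", "distinct_dst_ports")

def learn_baseline(normal_flows):
    """Per-feature (mean, stdev) over a training set of normal flows."""
    return {f: (mean(vals), stdev(vals))
            for f in FEATURES
            for vals in [[flow[f] for flow in normal_flows]]}

def is_anomaly(flow, baseline, z_cutoff=3.0):
    # Max z-score across dimensions: one sufficiently weird dimension flags the flow.
    return max(abs(flow[f] - mu) / (sd or 1.0)
               for f, (mu, sd) in baseline.items()) > z_cutoff
```

Even this toy makes the cost visible: you have to keep state per flow and score every dimension continuously, which is a very different workload than stateless signature matching.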

This isn’t just a high-end problem, either.  I’m sure someone’s going to say "the SMB doesn’t need or can’t afford BA or massively parallel pattern matching" and that "good enough is good enough" in terms of security for them — but from a pure security perspective, I disagree.  Need and afford are two different issues.

Using the summary argument regarding Moore’s Law: as the performance of systems rises and the cost asymptotically approaches zero, accuracy and context become the criteria for purchase.  But as I pointed out, speed does not necessarily equal accuracy.

I think you’ll continue to see excellent high-performance/low-cost general-purpose platforms delivering innovative software-driven solutions, assisted by flexible, scalable and high-performance subsystems designed to provide functional superiority via offload in one or more areas.

/Chris

UTM messaging is broken – Perimeter vs. Enterprise UTM – Film @ 11

June 8th, 2006

I need to spend two minutes further introducing the concept of Enterprise-class UTM.  I’ll post in greater detail as a follow-on in the next day or so.  I just got back from the Gartner show, so my head hurts and it’s 1am here in Beantown.  This (the blog entry below) was an interesting, if somewhat incomplete, beginning to this thought process.

Don McVittie over on the Network Computing Security Blog had some really interesting things to say about the need for UTM, its general ripeness and maturity, and the operationalization of UTM technology and architecture within the context of how it is defined, considered and deployed today.

What he illustrated is exactly where "traditional" SMB-focused, perimeter-deployed UTM messaging breaks down today.  You’d be nuts to try and deploy one of these referenced UTM appliances at the core of a large enterprise.  You’re not supposed to.  There’s simply no comparison between what you’d deploy at the core for UTM versus what you’d deploy at a remote/branch office.

That’s what Enterprise-class UTM is for.  The main idea: for a small company, UTM is simply a box with a set number of applications or security functions, composed in various ways and leveraged to "do things" to traffic as it passes through the bumps in the security stack.

In large enterprises and service providers, however, the concept of the "box" has to extend to an *architecture* whose primary attributes are flexibility, resilience and performance.

I think most people don’t hear that, because the marketing of UTM has eclipsed the engineering realities of management, operationalization and deployment.

Historically, UTM is defined as an approach to network security in which multiple logically complementary security applications, such as firewall, intrusion detection and antivirus, are deployed together on a single device. This reduces operational complexity while protecting the network from blended threats.

For large networks where security requirements are much broader and complex, the definition expands from the device to the architectural level. In these networks, UTM is a “security services layer” within the greater network architecture. This maintains the operational simplicity of UTM, while enabling the scalable and intelligent delivery of security services based on the requirements of the business and network. It also enables enterprises and service providers to adapt to new threats without having to add additional security infrastructure.

You need a really capable and competent switching platform optimized for virtualized service delivery to pull this off.  That’s what the Crossbeam X80 Security Services Switch is for:

You plumb the X-Series into the switching infrastructure as an overlay and provide service where and when you need to manage risk, implementing policies that subject all flows matching rule criteria to specific combinations of security service layers (firewall, IDS, AV, URL filtering, etc.).  No forklifts, no fundamental departures from how you manage or maintain the network or the security layer(s) defending it.  Enterprise UTM provides transparency, high performance, high availability, best-of-breed virtualized security services, and simplified deployment and management…
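If it helps, here's the back-of-napkin version of that policy abstraction (rule criteria mapped to an ordered chain of virtualized services; the syntax and service names are mine for illustration, not Crossbeam's actual policy language):

```python
# Napkin sketch of a security-services-layer policy: flows matching a rule's
# criteria get steered through an ordered chain of virtualized services.
# Rule format and service names are invented purely for illustration.
POLICY = [
    ({"dst_port": 80, "zone": "dmz"},      ["firewall", "ids", "url_filter"]),
    ({"dst_port": 25, "zone": "internal"}, ["firewall", "antivirus", "antispam"]),
]

def service_chain(flow):
    """First-match policy lookup: which services does this flow traverse, in order?"""
    for criteria, chain in POLICY:
        if all(flow.get(key) == value for key, value in criteria.items()):
            return chain
    return ["firewall"]  # default disposition: bare firewalling only

print(service_chain({"dst_port": 80, "zone": "dmz", "src": "10.1.1.5"}))
# -> ['firewall', 'ids', 'url_filter']
```

The design point is that the chain is a property of the policy, not of the wiring: changing which services a class of traffic traverses is a rule edit, not a forklift.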

UTM for large networks is designed to provide solutions that deliver the key components required for a UTM security services layer:

  •     high performance and high availability
  •     best-of-breed applications
  •     intelligent and virtualized service delivery

This enables customers to create an intelligent security services layer that delivers the right protection for any part of the network in accordance with evolving threats and business objectives. This layer is managed as a single consolidated system, thus delivering the operational and cost benefits of UTM while radically improving the overall security posture of the network.

More on the architecture that enables this in a follow-on post.  We’ll discuss the traditional embedded vs. overlay appliance model versus the X-Series perspective, as well as the C-Series.

Look, we’ll go through the technical details in a follow-on post.  Bear with me.

Chris

Would you buy UTM from a guy with an IUD? (Read on…)

June 6th, 2006

[Editor’s Note: I really found myself getting suckered into the dark side of this debate when it turned into a BS marketing spin-fest and personal-bashing session from the other party.  I shouldn’t have responded to those elements, but I did.  I’m guilty.  It was stupid.  Stupid, but strangely satisfying.  Much like tequila.  I still got zero answers to any of the points I raised, so you decide for yourself…]

Looks like Alex Neihaus, Astaro’s VP of Marketing, can’t be bothered to address technical criticism or answer questions debating the utility of the approach behind Astaro’s new "virtual UTM appliance," so he feels it necessary to hang his "shingle" out in public and launch personal attacks rather than address the issue at hand.  Well, he is in marketing, so why would I expect anything different?

Since my comments back to him may not actually make it up on his site, I figure I’ll respond to them here; they won’t be word for word because I forgot to copy/paste what I sent him.

Jungian synchronicity always blows me away. I was just reading about "intermittent explosive disorder" this morning. It’s apparently severely undiagnosed.

Except at Crossbeam. Apparently, Christofer Hoff, their brand-spankin’ new "chief security strategist" (aka "we want a Scoble of our own") is deeply worried about virtualization and the impact on Crossbeam. Ergo, a demonstration of IED in the blogosphere via a post on his personal blog.

Funny.  I was just reading about rectal encephalic inversion and, in a Freudian twist of fate, Alex’s slip is showing.

As I let him know, I’ve been at Crossbeam for 8 months, and before that I was a customer for almost 3 years, deploying their products in the real world, not preaching about them from behind a web interface.  Prior to that, I ran an MSSP/security consultancy for 9 years, did two other startups, raised venture capital, and built the global managed security service for a worldwide service provider on 4 continents serving 152 countries.  Yet I digress…

But let me say we don’t mind the heat; in fact, we appreciate it. But next time, Chris, why not post on the Crossbeam site? Why not change your bio on your blog to indicate your new role? And before you impugn Richard Stiennon’s credibility, why not earn some in your new role? I am, of course, fair game, but to wail on Richard isn’t cricket.

Crossbeam doesn’t have blog capabilities yet.  When we do, it will cross-post.  I don’t try to hide what I do or who I do it for at all.  Where exactly on the Scrapture blog does it say that you are the VP of Marketing for Astaro, Alex?

Also, this ain’t my first ride on a tuna boat, pal.  While I haven’t had the privilege of peddling FTP software for a living, I know a thing or two about security — designing, deploying, managing and using it.  As I mentioned in the comments I sent you, everyone’s a dog on the Internet, Alex, and you’re pissing on the wrong hydrant.

In terms of Richard, I know him.  I talk to him reasonably often.  We’re a client.  Nice reach.  I wasn’t impugning his honor, I was pointing out that the quote you used didn’t have a damned thing to do with Astaro.

I do appreciate you taking umbrage at the "ASIC-free" phrase in the press release. I put that in to see if it would raise any neck hair. It’s the crux of the issue.

You apparently think that hardware is the answer. I know it isn’t.

Firstly, Crossbeam doesn’t depend on ASICs in our products, so your assumption that I was bristling at the comment because I need to be defensive about ASICs is as wrong as your assertion/innuendo that ASICs actually make things go slower.

More importantly, your assertion that I think hardware is the answer is, again, dead wrong.  If you knew anything about Crossbeam, you’d recognize that the secret sauce in our solution is the software, not the hardware.  The hardware is nice, but it’s not the answer.

There’s always a more powerful engine. Always a more powerful subsystem. Always better, always cheaper. Businesses built on mainframes can be profitable, but never ubiquitous in the face of commoditized hardware. IBM learned this in the 1990’s; Crossbeam will learn it shortly. After you’ve sold the Fortune 500 five-hundred units, you’ll inevitably be stuck for growth. You’ll cast about for the broad middle.

Ummm, you know squat-all about Crossbeam — that much is obvious.  We’ve sold many more than 500 units, and our customer base is split 50/50 between ISPs/MOs/telcos and the Fortune 2000 — doubling revenues year after year for 6 years in a space supposedly owned by Cisco and Juniper is a testament to the ridiculousness of your statement.  I’m shivering in anticipation of our impending doom…by a bunch of VMware images running on a DL380, no less.

That broad middle will be using commoditized hardware with integrated, easy-to-use security solutions.

Hey!  We agree about something.  Again, if you knew anything about our product, our roadmap or our technology, you’d recognize this.

You’ll talk about "enterprise ready" to people who want the UTM equivalent of a fax machine.

Buhahahaha!  A fax just arrived for you.  It’s titled "It’s sure as hell easier to have a high-end solution and scale down than it is to have a low-end solution and scale up!"  Sound familiar?  VMware ain’t it, bubba.

Wail all you want how unfortunate it is that UTM is associated with SMB (I agree that’s wrong, wrong, wrong). But the answer to UTM ubiquity isn’t gonna come from the high end.

Sorry. And don’t let that IED problem get you down.

UTM is associated with low-end perimeter solutions that don’t scale and require forklifts due to the marginalization of commoditized hardware.  When you have a solution that actually scales, can sit in a network for 6 years without forklifts, and is in place at the biggest networks on the planet, step up.

Otherwise, do me a favor and respond in kind, technically, to my points regarding manageability, security and scalability…or have Gert or Markus (Astaro’s CSA and CTO) do it; at least then we can have a debate about something meaningful.

 

/Chris

The world’s first “UTM Virtual Appliance”?

June 5th, 2006

Blipping through the many blogs I read daily, I came across an interesting announcement from Astaro in which they advertise the "…world’s first UTM virtual appliance."  Intrigued, I clicked over here for a peek.

Before you read further, you should know that I really like Astaro’s products.  I think that for the SMB market, their appliances and solutions are fantastic.  That being said, the word "virtualization" means a lot of things to a lot of people — there are some liberties taken by Astaro that we’re going to need to analyze before this sort of thing can really be taken seriously.  More on that later.

I’m nominating this announcement for an Emmy, because it’s the best use of humor in a commercial that I have seen in a LONG time.  I mean really…blade-server solutions with full-on clustering and virtualized/grid computing management layers, complete with virtualized storage, have a hard time providing this sort of service level reliably.  You mean to tell me that MSSPs who have SLAs and make their lunch money providing security as a service are going to build a business on this malarkey?

Somebody call Crossbeam’s global MSSP/ISP/MO customers who provide services in the cloud to hundreds of thousands or millions of their customers and tell them they can ask for a refund, because all you need is a couple of DL380s, VMware, ASG and a set of really big huevos to get all the performance, scalability, reliability, high availability and resiliency you need.

Ah, crap.  I’m just such a cynical bastard. 

Here’s the distillation:

  1. Take Astaro’s Security Gateway product (a very nicely-done, hardened, Linux-based offering with re-packaged and optimized open-source and OEM’d components)
  2. Create a VM (virtual machine) image that can run under VMware Player, VMware Workstation, VMware Server or VMware ESX
  3. Run it on a sufficiently-powered hardware platform
  4. Presto-change-o!  You’ve got a virtualized security appliance!

It’s a nice concept, but again it further magnifies the narrowly focused scope of how UTM is perceived today — a mid-market, perimeter solution where "good enough" is good enough.  It’s going to marginalize the value of what true enterprise- and provider-class UTM brings to the table by suggesting that you can just cobble together a bunch of VMs and some extra hardware and whammo!  You’ve got mail!  This is the very definition of scrapture!

However, it seems as though the logic at Astaro goes something like this:

"If one Astaro gateway is "good enough," then running LOTS of virtual Astaro gateways is "even gooder!"  AND you can run hundreds of ’em on the same machine (see rediculous quote section below.)

The marketing folks over at Astaro are off to a wonderful June, as they really put in the OT to milk this one for all it’s worth.  Let’s get one thing straight: there’s a real big difference between innovation and improvisation.  I’ll let you figure out what my opinion of this is.

Firstly, this concept is hardly new in the security space.  Sure, it may be the first "UTM" product offered in a VM, but StillSecure has been providing free downloads of its Strata Guard IPS product this way for months — you download the VMware image and poof!  Instant IPS.

Secondly, I’m really interested in what controls one would have to put in place to secure the host operating system running all of these VMs.  I mean, when you run Astaro’s hardened appliance, that’s all taken care of for you.  What happens when Johnny SysAdmin boots up VMware Server on Windows 2K3 and loads 40 instances of his "secure" firewall?  Okay, maybe he uses Linux.  Same question.  What happens when you need to patch said OS and it blows VMware sky-high?

Thirdly, how exactly do you provide for CPU/memory/IO arbitration when running all these VMware instances, and how would an enterprise leverage this virtual mass of UTM "appliances" without load-balancing capabilities?  What about high availability?

Fourthly, what happens to all of these VM UTM instances when the host OS takes a giant crap? 

Fifthly, the sheer number of scrapturelicious quotes in this press release is flan-friggin-tastic:

Astaro Security Gateway for VMware allows customers to flexibly run Astaro Security Gateway software on a VMware infrastructure. Many hundreds or thousands of Astaro Security Gateways can be virtualized in this way, each delivering the network protection and cleaning for which Astaro is famous.

…ummmm…I can only assume you meant on a hundred or a thousand boxes?

Major benefits for users include simpler deployment in large and complex environments, better hardware allocation and reduced hardware expenditures because physical computers can run multiple virtual appliances. And because Astaro’s unified threat management is ASIC-free, performance when running in a virtual machine is maximized.

How do you actually plumb one of these things into a network?  How do you configure multi-link trunking utilizing VLANs across the host OS up to the VM instances?  This is simpler, how?  Oh, that’s right…it’s PERIMETER UTM.

And then there’s the fact that because it runs on generic PCs under a VM, you can ignore the potentially crappy performance, and we don’t need no stinkin’ ASICs — they only get in the way.  That’s right, ASICs make security applications run SLOWER!

“The ability to virtualize gateway security services opens up major new capabilities for managed service providers (MSPs) to deliver air-tight security services to small- and medium-size business customers,” said Richard Stiennon, founder, IT-Harvest Group. “MSPs can leverage their hardware investment while providing dedicated security services to end-user customers, resulting in superior security and manageability.”

Rich, I gotta ask…did you actually say this in regard to Astaro’s VM announcement, or about security virtualization in general?  Since there’s ZERO reference to Astaro in this quote, I can only assume the latter.  If so, your honor is restored.  If not, you’re buyin’ the beer at Gartner, buddy, because just in case you haven’t heard, IDS is dead 😉

Look, I like the fact that you can take their product, try it out in a low-cost, controlled test and see if you want to buy it.  Great idea.  Suggesting that the MSSPs of the world are going to run out in droves, buy a DL380 and solve the North American continent’s security woes is, much like my analogy, ridiculous.

Folks like Fortinet, Juniper and Cisco are gunning for the low-end market, and you can bet your bottom scrapture that they have the will, the money and the at-bats to recognize that there is a lot of money to be made by providing virtualized security services — either in the cloud via MSSPs or in the enterprise.

But don’t worry because they have those annoying things called ASICs, so I’m sure Astaro will be fine. 

Virtually yours,

/Chris

Why Perimeter UTM breaks Defense in Depth

June 4th, 2006

I was reading Mike Rothman’s latest blog entry regarding defense in depth and got to thinking about how much I agree with his position — and how that would seemingly put me at odds with what I do for a living when put within the context of Unified Threat Management (UTM) solutions, the all-in-one "god boxes" that represent the silver bullet for security woes 😉

Specifically, I am responsible for crafting and evangelizing the security strategy for a company that one might (erroneously) expect to trumpet that you can solve all security ills with a single UTM device instead of a layered approach via traditional defense in depth.  I need to clear that up, because there’s enough septic marketing material out there today that I can’t bear to wade through it any longer.  And yes, I work for a company that sells stuff, so take this with whatever amount of NaCl you desire…

There is a huge problem with how UTM is perceived today.  UTM — and in this case the classical application thereof — is really defined as a perimeter play, where boxes that were once firewalls became firewalls+IDS, then firewalls+IDS+IPS, and have now become firewalls+IDP+anti-everything.  IDC’s basic definition of a UTM appliance is one that offers firewall, IDP and anti-virus in a single box.  There are literally tons of options here, but the reality is that almost all of these "solutions" actually fly in the face of defense in depth, because the consumer of these devices is missing one critical option: flexibility.

Flexibility regarding the choice of technology.  Flexibility regarding the choice of vendor.  Flexibility of choice regarding extensibility of functionality.

Traditional (perimeter) UTM is, in almost all cases, a single vendor’s version of the truth.  Lots of layers — but in one box from one vendor!  That’s not defense in depth, that’s defense in breadth.

That may be fine for the corner liquor store with DSL Internet access, but what enterprises really need are layers of appropriate defense from multiple best in breed vendors deployed via a consolidated and virtualized architecture that allows for safer, simpler networks.

On the surface, UTM seems like a fine idea — cram as many layers of security "stuff" as you can into a box and sprinkle these boxes around the network to manage threats and vulnerabilities.  Using a single device (albeit many of them) suggests fewer moving parts from a management perspective: instead of putting 5 separate single-function boxes in-line with each other to protect your most important assets, now you only need one.  This is a dream for a small company where the firewall admin, network engineer and help desk also happen to be the HR director.

This approach is usually encountered at the "perimeter," which today is usually defined as the demarcated physical/logical ingress/egress dividing the "inside" from the "outside."  While there may well be a singular perimeter in small enterprises and home networks, I think it fair to say that in most large organizations, there are multiple perimeters — each surrounding mini "cores" that contain the most important assets segmented by asset criticality, role, function or policy-driven boundaries.

Call it what you will: de-perimeterization, re-perimeterization, dynamic perimeterization…just don’t call it late for supper.

In this case, I maintain that the perimeter isn’t "going away," it’s multiplying — though the diameter is decreasing.  As it does, security practitioners have two choices for dealing with these segmented, mini-perimeterized mini-cores:

  1. Consolidate & Virtualize
  2. Box Stack/Sprinkle

These two options seem simple enough, and when you pull back the covers, there are a couple of options you have to reconcile in either scenario:

  1. Single-vendor embedded security in the network infrastructure (routers/switches)
  2. Single-vendor overlay security via single-function, single-application devices
  3. Single-vendor overlay security via single application/multi-function UTM
  4. Multi-vendor embedded security in the network infrastructure (routers/switches)
  5. Multi-vendor overlay security via single application/multi-function security applications
  6. Multi-vendor overlay security via Best-of-breed security applications

When pairing a deployment option from the first set with a design methodology from the second, evaluating the solution leads to many challenges: balancing a single vendor’s version of the truth (in one or more boxes) against many cooks in the kitchen, in combination with either "good enough" or best-in-breed.  Tough decisions.

"Good enough" or "best in breed." One box or lots.  The decision should be made based upon risk, but unfortunately that seems to be a four letter word to most of the world; we ought to be managing risk, but instead, we manage threats and vulnerabilities.  It’s no wonder we are where we are…

For a taste test, go ahead and ask your network switch vendor or your favorite UTM appliance maker where their solutions to these very real business-focused security challenges exist in their UTM delivery platforms today:

  • Web Application security
  • Database security
  • XML/WS/SOA security
  • VoIP security

They don’t have any.  Probably never will.  Why?  Because selling perimeter UTM isn’t about best in breed.  It’s about selling boxes.  What happens when you need to deploy these functions or others in combination with FW, IDP, AV, Anti-spam, Anti-Spyware, URL Filtering, etc…you guessed it, another box.  Defense in depth?  Sure, but at what cost?

When defense in depth lends itself to management nightmares due to security sprawl, there is a very realistic side effect that could introduce more operational risk into the business than the risk the controls were supposed to mitigate.  It’s a technology circle-jerk.

What we need is option #1 from the first set paired with option #6 from the second — consolidated and virtualized security service layers from multiple vendors in a robust platform.  This pairing needs to provide a consolidated solution set that is infrastructure agnostic and as much a competent network platform as it is an application/service delivery platform.

This needs to be done in a manner in which one can flexibly define, deploy and manage exactly the right combination of security functions as a security "service layer" deployed exactly where you need it in a network to manage risk — where the risk justifies the cost. And by cost I mean both CapEx and OpEx. 

Of course, this solution would require immense flexibility, high availability, scalability, resiliency, high performance, and ease of management.  It would need to be able to elevate the definition of the ill-fated "Perimeter UTM" to scale to the demands of "Enterprise UTM" and "Provider UTM."  This solution would need to provide true defense in depth…providing the leverage to differentiate between "network security," "information security," and "information survivability."

Guess what?  For enterprises and service providers, where the definition morphs based upon what the customer defines as best in breed, defense in depth becomes either a horrific nightmare of management and cost or a crappy deployment of good-enough security that constantly requires forklifts, doesn’t scale, doesn’t make us more secure, is a sinkhole for productivity and doesn’t align to the business.  Gee, I’ll take two.

We’re in this predicament because an execution-driven security reference architecture to allow businesses to deploy defense in depth in a manner consistent with managing risk has not been widely available, or at least not widely known.

I chuckled when people called me nuts for declaring that the "core" was nothing more than a routing abstraction and that we were doomed if we kept designing security based upon what the layout of wiring closets looked like (core – distribution – access).  I further chuckle (guffaw?) now that rational, common-sense risk management logic is given some fancy name that seems to indicate it’s a new invention.  Now, as the micro-cores propagate, the perimeters multiply, the threat vectors and attack surfaces form cross-hatches dimensionally and we gaze out across the vast landscape of security devices which make up defense in depth, perhaps UTM can be evaluated within the context it deserves.

On the one hand, I’m not going to try and turn this into a commercial for my product, as there are other forums for that; on the other, I won’t pretend that I don’t have a dog in this hunt, because I do.  I was a real-world customer using this architecture for almost 3 years before I took up this crusade.  Managing the security of $25 billion of other people’s money demands really good people, processes and technology.  I had the first two and found the third in Crossbeam.

The reality is that nobody does what Crossbeam does and if you need the proof points to substantiate that, go to the largest telco’s, service providers, mobile operators and enterprises on the planet and check out what’s stacked next to the Juniper routers and Cisco switches…you’ll likely find one of these.

Real UTM.  Real defense in depth.  Really.

“Back From the Bleak Blog Brink of Nothingness…”

June 3rd, 2006

Yesterday I met up with Alan Shimel, StillSecure’s Chief Strategy Officer.  I’ve known about StillSecure’s excellent products for some time now, but I frequently read Alan’s blog and decided that we should meet.

Alan’s a fascinating guy, the sort of fellow that one becomes instantly comfortable with.  You can tell he’s been through the security sausage grinder and come out decently unscathed, but with the wisdom, patience and distilled kindness that this sort of experience brings.

At lunch we had one of those conversations that became animated enough that the verbal game of ping-pong encompassing the collective Tourette’s-style outbursts of our past lives caused both of us to interject comment after comment — even while stuffing our faces with Italian food 😉

It was truly excellent to sit down with someone who really gets it and isn’t afraid to (agree to) disagree.  We seem to share a great many views that run the gamut of the security panorama, and it was great to meet someone else from the blogosphere like Alan with whom I could bond intellectually.  I’ve met some other fantastic opinionati like Mike Rothman and Pete Lindstrom under similar circumstances…you should most definitely read their blogs.

After meeting with Alan, I became inspired to retire my previously neglected blog-bortion and commit to a full-frontal assault using TypePad, which provides a much better canvas for this sort of thing.

I’ll move a couple of entries over just for continuity’s sake.

Off to the Gartner IT Security Summit in DC from the 6th to the 8th…Crossbeam is sponsoring another amazing evening social event in conjunction with our buddies from SourceFire.  Email/phone me for details.

Looking forward to more bloggage.

Chris

Categories: General Rants & Raves

Better Security Earns Credit – A Piece I wrote for Optimize Magazine

June 3rd, 2006

Here’s a piece I wrote for Optimize a few months ago.

Linky

Constant threats to our business have changed the way we prioritize security and risk management at WesCorp, the largest corporate credit union in the United States with $25 billion in assets and $650 million in annual revenue.

As chief information security officer (CISO) and director of enterprise security services, my role is to embed security into WesCorp’s operations. The company’s goal is to use rational information risk management to help solve business problems, provide secure business operations, and protect our clients’ data.

We’ve developed a business-focused "reduction of risk on investment" approach. Because it’s difficult to consistently attach a specific monetary value to information assets and to assess an ROI for security initiatives, we focus on reducing risk exposure and avoiding costs by implementing the appropriate security measures.

To effectively prioritize our risks, WesCorp aligns with the company’s strategic initiatives. It’s crucial to clearly understand what’s important from a critical operational-impact viewpoint. This must be done from both technical and business perspectives.

WesCorp uses the Octave framework, developed by the Carnegie Mellon Software Engineering Institute, to facilitate our information risk-management process. Specifically, risk is defined, prioritized, and managed based on the synergistic flow of data, including risk assessment, business continuity, vulnerability management, threat analytics, and regulatory-compliance initiatives. These elements provide meaningful data that lets the company understand where it may be vulnerable, what mitigating controls are in place, and its overall risk and security posture. This approach lets us effectively communicate to management, regulators, and customers how we manage risk across the enterprise.

Three recent security initiatives illustrate how we’ve reduced risk through better network and security life-cycle management.

For some time, we’ve all been warned that the network perimeter is dead because of the increasing number of access points for mobile workers, vendor collaborations, and business partners. We suggest that the perimeter is, in fact, multiplying, though the diameter of the perimeter is collapsing. As technology gains additional footholds throughout the enterprise, thousands of firewall-like solutions are needed to patrol and monitor access points. The challenge is to provide network security while allowing the free flow of information and, therefore, business as usual. The tactical security implementations necessary for a growing network have traditionally been expensive and difficult to manage.

Our strategy involves segmenting the internal network into multiple networks grouped by asset criticality, role, and function. This provides quarantine and containment to prevent the spread of attacks. By layering virtual security services on the network infrastructure, we can efficiently mitigate vulnerabilities while guaranteeing firewall, intrusion detection and prevention, virus protection, caching, and proxy services. This network-security approach is aligned with how the business units are structured. Instead of deploying 30 separate devices, we’ve consolidated our hardware platforms into a single solution with the help of Crossbeam Systems Inc. and other vendors to recoup $1.2 million in savings.

Another security initiative involves vulnerability management. Because vigilance is necessary to identify and isolate threats in the enterprise, assigning vulnerability-management and remediation activities can slow the ability to act defensively and decisively, thereby increasing risk. We’ve set up intelligence tools to identify direct attacks in near-real time using streamlined processes.

Using a risk-management and threat-analytics solution from Skybox Security Inc., we set up a virtualized representation of the enterprise and incorporated business-impact analysis and risk-assessment metrics into our overall vulnerability-management approach.

Finally, while we developed strategies for managing data access and reducing business risk, our concerns turned to what happens to data after it’s accessed. We needed to focus on providing real-time, ongoing database management, specifically, to understand and monitor privileges, system and user behavior, metadata integrity, and the types of content accessed.

With the help of IPLocks Inc., we can assess the risk to critical data warehouses across our enterprise, and integrate security life-cycle process improvements from the bottom up. This allows for greater effectiveness in curtailing abuse, fraud, and potential breaches.

Projects also must provide efficiency improvements or defensive-positioning capabilities against competitors or market forces, or demonstrate that they enable a business unit to achieve goals that contribute to the success of the mission. Senior-level sponsorship is key, as well.

WesCorp has an executive-chartered operational risk-management committee comprising senior staff from across all lines of business, including the CIO, as well as representatives from our internal audit and enterprise security-services teams. The committee provides oversight and governance for our initiatives and allows for clear definitions and actionable execution of our security and risk-management efforts.

I report up through the VP of IT to the CIO, who ultimately reports to the chief operating officer/CFO. I also have dotted-line relationships to various executive committees and councils, enabling our security and risk-management framework to be executed unencumbered.

Compliance is a big driver of all our security and risk efforts. WesCorp, though not a public company, is heavily regulated like financial-services companies. We strive to demonstrate our compliance and communicate the effectiveness of our actions. Unlike many financial-services companies, however, we view regulatory compliance as a functional byproduct of our risk-management efforts; a properly defined and executed strategy goes beyond compliance and implements business improvements. We can use the best practices of compliance requirements as guidelines to estimate how well we’re managing our tasks.

Critical to our overall security and risk-management strategy is effective communication with business units. The model we’ve adopted calls for an integrated team approach between the traditionally separate IT and security functions. Because we’re mutually invested in each other’s successes, we have a much easier time reengineering our business processes and implementing technology. We also have unique business-relationship managers who facilitate smooth communication between the business units and IT.

Security is evolving from a technology function to a core business function because enterprises realize that a focus on the execution of business goals means survival. Those that don’t have such a focus will see a further erosion of their credibility and relevance. Risk management requires common sense and protecting the right things for the right reasons; it demands basic business knowledge and sound judgment. Focusing solely on technology is myopic and dangerous. Businesses that successfully manage risk are willing to think like an entrepreneur and manage people, processes, and technology to a leveraged advantage to reduce risk.

The security breaches at ChoicePoint and Lexis-Nexis have reinforced the relevance, necessity, and effectiveness of our security and risk-management efforts. These catalytic events have galvanized us to evaluate our program and raise awareness globally across all lines of business. People who might otherwise not be in touch with risk-management programs can quickly reassess and determine that security is fundamental to business.

By integrating security and risk directly into business processes, we gain a competitive advantage. Because the business decides what our priorities are or should be, the strategies we champion are automatically aligned with the business as a whole. It’s a common-sense approach that affords uncommon comfort and security in an increasingly at-risk business world.

Categories: Risk Management

Year One of SOX yields stand-down of enterprise information security departments?

June 3rd, 2006

From the department of really scary trends…

In a move reminiscent of the spin-down of Y2K, a trend has emerged over the last 6 months in which the economics of, and reflexively reactive response to, SOX have left an unmistakably sour taste in the mouths of the corporations down whose throats SOX was thrust.

The costs billed by consulting companies to create SOX compliance programs are astounding. Millions of dollars have been burned on yet another grudge "insurance" purchase that still does very little toward actually making things more secure.

Sadly, now that the "hard work" has been slogged through, the relevancy and survivability of corporate information security departments have been called into question with more granular focus by those who hawk the bottom line. Some companies are contemplating taking their public companies private because the burden of "compliance" costs more than the supposed risk these programs mitigate.

…and we’re left holding the bag like bad guys.

I know of some huge Fortune X companies in several verticals that have all but spun down to minimal staff in the enterprise information security space; layoffs from top security management down to SOC staffers have occurred as a turn to outsourcing/off-shoring seems more fiscally favorable.

This is not the result of overall downsizing initiatives — this is the result of specific and targeted RIFs based on an assumed lack of need for these positions now that SOX is "over."

Further to that, where mid-2003 saw general network spending and budgets reduced while security budgets soared, 2005 has produced a return to investing in the network side of the house, where management has bought the ad on page 3 of numerous trade mags claiming that networks will "self-heal."

Perhaps we’ll see a new piece from Carr on why IT SECURITY doesn’t matter…

It just goes to show that if you’re a tactical band-aid to a strategic problem, you’ll just come off in the wash.

Categories: Risk Management