Got Rational Security?

June 14th, 2006 No comments

I love Google.  I found this whilst browsing this morning:
[Image: Fgotrational_1]

Categories: General Rants & Raves Tags:

IDS/IPS – Finger Lickin’ Good!

June 13th, 2006 6 comments

[Image: Colonelsanders]
[Much like Colonel Sanders' secret recipe, the evolution of "pure" IPS is becoming an interesting combo bucket of body parts — all punctuated, of course, by a secret blend of 11 herbs and spices…]

So, the usual suspects are at it again and I find myself generally agreeing with the two wise men, Alan Shimel and Mike Rothman.  If that makes me a security sycophant, so be it.  I'm not sure, but I think these two guys (and Michael Farnum) are the only ones who read my steaming pile of blogginess — and of course Alex Neihaus, who is really madly in rapture with my prose… 😉

Both Alan and Mike are discussing the relative evolution from IDS/IPS into "something else." 

Alan references a specific evolution from IDS/IPS to UTM — an even more extensible version of the traditional perimeter UTM play — with the addition of post-admission NAC capabilities.  Interesting.

The interesting thing here is that NAC typically isn't done "at the perimeter" — unless we're talking about the need to validate access via VPN — so I think that this is a nod towards the fact that there is, indeed, a convergence of thinking that demonstrates the movement of "perimeter UTM" towards the Enterprise UTM deployments that companies are choosing to purchase in order to manage risk.

Alan seems to be alluding to the fact that these enterprises are considering internal deployments of IPS with NAC capabilities.  I think that is a swell idea.  I also think he's right.  NAC, along with about 5-6 other key, critical applications, is a natural fit for anything that's supposed to provide Unified Threat Management…that's what UTM stands for, after all.

Mike alludes to the reasonable assertion that IDS/IPS vendors are only riding the wave preceding the massive ark building that will result in survival of the fittest, where the definition of "fit" is based upon what the customer wants (this week):

Of course the IDS/IPS vendors are going there because customers want
them to. Only the big of the big can afford to support all sorts of
different functions on different boxes with different management (see No mas box). The great unwashed want the IDS/IPS built into something bigger and simpler.

True enough.  Agreed.  However, there are vendors — big players — such as Cisco and Juniper that won't use the term UTM because it implies that their IDS and IPS products, stacked with additional functions, are in fact turkeys (to continue the poultry analogy), and carries a guilt by association with what is still considered a low-end solution.  The ASP of most UTM products is in the $1500 range, so why fight for scraps?

So that leads me to the point I’ve made before wherein I contrast the differences in approach and the ultimate evolution of UTM:

Historically, UTM is defined as an approach to network security in
which multiple logically complementary security applications, such as
firewall, intrusion detection and antivirus, are deployed together on a
single device. This reduces operational complexity while protecting the
network from blended threats.

For large networks where security requirements are much broader and more
complex, the definition expands from the device to the architectural
level. In these networks, UTM is a “security services layer” within the
greater network architecture. This maintains the operational simplicity
of UTM, while enabling the scalable and intelligent delivery of
security services based on the requirements of the business and
network. It also enables enterprises and service providers to adapt to
new threats without having to add additional security infrastructure.

My point here is that just as firewalls added IDS and ultimately became IPS, IPS has added Anti-X and become UTM — but Perimeter UTM.   The thing missing there is the flexibility and extensibility of these platforms to support more functions and features.

However, as both Mike and Alan point out, UTM is also evolving into architectures that allow for virtualized
security service layers to be deployed from more scalable platforms
across the network.  The next logical evolution has already begun.

When I go out on the road to speak and address large audiences of folks who manage security, most relay that they simply do not trust IPS devices with automated full blocking turned on.  Why?  Because they lack context.  While integrated VA/VM and passive/active scanning adds to the data collected, is that really actionable intelligence?  Can these devices really make reasonable judgments as to the righteousness of the data they see?

Not without BA functionality, they can’t.  And I don’t mean today’s NBA (a la Gartner: Network Behavior Analysis) or NBAD (a la Arbor/Mazu: Network Behavioral Anomaly Detection) technology, either. 

[Put on your pads, boys, ‘cos here we go…]

NBA(D) as it exists today is nothing more than a network troubleshooting and utilization tool, NOT a security function — at least not in its current form and not given the data it collects today.  Telling me about flows across my network IS, I admit, mildly interesting, but without the fast-packet cracking capabilities to send flow data *including* content, it's not very worthwhile (yes, I know that newer versions of NetFlow will supposedly do this, but at what cost to the routers/switches that will have to perform this content inspection?)

NBA(D) today takes xFlow and looks at traffic patterns/protocol usage, etc. to determine if, within the scope of limited payload analysis, something "bad" has occurred.

That's nice, but then what?  I think that's half the picture.  Someone please correct me, but today NetFlow comes primarily from routers and switches; when do firewalls start sending NetFlow data to these standalone BA units?  Don't you need that information in conjunction with the exports from routers/switches at a minimum to make the least substantiated decision on what disposition to enact?
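To make that concrete, here's a minimal sketch (in Python, with made-up field names and thresholds rather than any vendor's actual schema or API) of the kind of coarse, flow-level check an NBA(D) product can perform against exported NetFlow/xFlow records.  Note what's missing: nothing below ever touches payload, which is exactly the context problem I'm griping about.

```python
# Hypothetical sketch of flow-level anomaly detection over NetFlow/xFlow-style
# records. Field names ("src_ip", "dst_port", "bytes") and the threshold are
# illustrative assumptions, not any product's actual schema.
from collections import defaultdict

def build_baseline(flows):
    """Learn average bytes-per-flow for each (source host, destination port) pair."""
    totals, counts = defaultdict(int), defaultdict(int)
    for f in flows:
        key = (f["src_ip"], f["dst_port"])
        totals[key] += f["bytes"]
        counts[key] += 1
    return {k: totals[k] / counts[k] for k in totals}

def flag_anomalies(flows, baseline, multiplier=10):
    """Flag flows to never-before-seen services or with wildly abnormal volume."""
    alerts = []
    for f in flows:
        key = (f["src_ip"], f["dst_port"])
        average = baseline.get(key)
        if average is None:
            alerts.append(("new-service", f))        # port never seen for this host
        elif f["bytes"] > multiplier * average:
            alerts.append(("volume-anomaly", f))     # large deviation from baseline
    return alerts

# Everything above works purely on headers and counters -- who talked to whom,
# over which port, and how much -- with no visibility into content.
```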

ISS has partnered with Arbor (good move, actually) in order to take this first step towards integration — in their world it’s IPS+BA.  Lots of other vendors — like SourceFire — are also developing BA functionality to shore up the IPS products — truth be told, they’re becoming UTM solutions, even if they don’t want to call their products by this name.

Optenet (runs on the Crossbeam) uses BA functionality to provide the engine and/or shore up the accuracy for most of their UTM functions (including IPS) — I think we’ll see more UTM companies doing this.  I am sure of that (hint, hint.)

The dirty little secret is that despite the fact that IDS is supposedly dead, we see (as do many of the vendors — they just won’t tell you so) most people purchasing IPS solutions and putting them in IDS mode…there’s a good use of money!

I think the answer lies in the evolution from the turkeys, chickens and buzzards above to the eagle-eyed Enterprise UTM architectures of tomorrow — the integrated, consolidated and virtualized combination of UTM with NAC and NBA(D) — all operating in a harmonious array of security goodness.

Add VA/VM, Virtual patching, and the ability to control how data is created, accessed, manipulated and transported, and then we’ll be cooking with gas!  Finger lickin’ good.

But what the hell do I know — I’m a DoDo…actually, since I grew up in New Zealand, I suppose that really makes me a Kiwi.   Go figure.

Full Drive Encryption on Laptops – Time for all of us to “nut up or shut up!”

June 11th, 2006 7 comments

[Image: Laptopmitm275300]
…or "He who liveth in glass houses should either learn to throw small stones or investeth in glass insurance…lots and lots of glass insurance. I, by the way, have lots and lots of glass insurance ;)"

Given all of the recently disclosed privacy/identity breaches which have been demonstrated as a result of stolen laptops inappropriately containing confidential data, we’ve had an exponential increase in posts in the security blogosphere in regards to this matter.

This is to be expected.  This is what we do.  It’s the desperate housewives complex. 😉

These posts come from the many security experts, analysts, pundits and IT professionals bemoaning the obviously poor application of policies, procedures, technology and standards that would "prevent" this sort of thing from happening and calling for the heads of those responsible…not only of those who perpetrated the crime, but also of those who made the crime possible: the monkey who put the data on the laptop in the first place.

So, since most of us who are "security experts" or IT professionals almost always utilize laptops in our lines of work, I ask you to honestly respond in comments below to the following question:

What whole-disk encryption solution utilizing two-factor authentication do you use to prevent an exposure of data should your laptop fall into the wrong hands?  You *do* use a whole-disk encryption solution utilizing two-factor authentication to secure the data on your laptop…don’t you?

Be honest. If you don’t use a solution like this then please don’t post another thing on this topic condemning anyone else.  Ever.

Sure, you may say that you don’t keep confidential information on your laptop and that’s great.  However, if you’ve got email and you’re involved in a company as a security/IT person (or management or even as a general user,) that argument’s already in the bullshit hopper.

If you say that you use encryption for specifically identified "confidential" files and information but still use a web-browser or any Office product on a Windows platform,  for example, please reference the aforementioned bovine excrement container.  It’s filling up fast, eh?

See where this is going?  If we, the keepers of the gate, don’t implement this sort of solution and we still gabble on about how crappy these errant users are, how irresponsible their bosses, how aware we should make and liable we should hold their Board of Directors, the government, etc…

I’ll ask you the same question about that USB thumb drive you have hanging on your keychain, too.

Don't be a hypocrite…encrypt yo shizzle.

If you don’t already, stop telling everyone else what lousy humans they are for not doing this and instead focus on getting something like this, or at a minimum, this.

/Chris

Unified Risk Management – Towards a Business-Driven Information Survivability Architecture

June 10th, 2006 No comments

This is Part I of a two-part series on a topic for which I coined the phrase "Unified Risk Management."
The second part of this paper will be out shortly.   You can download this paper as a .PDF from here.

NOTE: This is a little long for a blog post, but it should make for an interesting read.

Abstract

Managing risk is fast becoming a lost art. As the pace of technology’s evolution and
adoption overtakes our ability to assess and manage its impact on the business,
the overrun has created massive governance and operational gaps resulting in
exposure and misalignment. This has
caused organizations to lose focus on the things that matter most: the
survivability and ultimate growth of the business.

Overwhelmed with the escalation of increasingly complex
threats, the alarming ubiquity of vulnerable systems and the constant onslaught
of rapidly evolving exploits, security practitioners are forced to choose the
unending grind of tactical practices – focused on deploying and managing
security infrastructure –  over the
strategic art of managing and institutionalizing risk-driven architecture as a business
process.

In order to understand the nature of this problem and its
resolution we have separated this discussion into two separate papers:

· In Part One (this paper), we analyze the gap between
pure technology-focused information security infrastructure and
business-driven, risk-focused information survivability
architectures.

· In Part Two (a second paper), we show how this
gap is bridged using sound risk management practices in conjunction with best
of breed consolidated Unified Threat Management (UTM) solutions as the
technology foundation of a consolidated risk management model. We will also
show how governance organizations, business stakeholders, network and security
teams can harmonize their efforts to produce a true business protection and
enablement strategy that delivers security as an on-demand service layer at the
speed of business. This is a process we
call Unified Risk Management or URM.

The Way Things Are

Today’s constantly expanding chain of technically-complex security
point solutions do not necessarily reduce or effectively manage risk; they
mitigate threats and vulnerabilities in the form of products produced by
vendors to solve specific technical problems but without context for the assets
which they are tasked to protect and at a cost that may outweigh the protected
assets’ value.

But how does one go about defining and measuring risk?

Spire Security’s Pete Lindstrom best defines being able to
measure and manage risk by first describing what it is not:

· Risk is not static; it is dynamic and fluctuates
constantly with potentially high degrees of variation.

· Risk is not about the possibility that something
bad could happen; it is about the probability that it might happen.

· Risk is not some pie-in-the-sky academic
exercise; you have all of the necessary information available to you today.

· Risk is not a vague, ambiguous concept; it is a
continuum along which you can plot many levels of tolerance and aversion.

It is clear that based upon research available today, most
organizations experience difficulty aligning threats, vulnerabilities and
controls to derive the security posture of the organization (defined as
acceptable or not by the business itself.) In fact, much of what is referred to as risk management today is
actually just complex math in disguise indicating an even more complex
extrapolation of meaningless data that drives technology purchases and
deployments based upon fear, uncertainty and doubt. Nothing sells security like a breach or new worm.

As such, security practitioners are typically forced into
polarizing decision cycles based almost exclusively on threat and vulnerability
management and not a holistic risk management approach to deploying security as
a service. They are distracted by the
market battles to claim the right to the throne of Network Security Supremacy
to the point where the equipment and methodology used to fight the war has
become more attractive than the battle itself.

In most cases, these security products are positioned as
being either integrated into the network infrastructure such as routers or
switches or bolted onto it in the form of single vendor security suite
appliances. These products typically do
not collaborate, interoperate, communicate or coordinate their defensive
activities with solutions not of a like kind.

Realistically, there is room for everyone at the
table. Network vendors see an
opportunity to continue to leverage their hold on market share by adding value
in the form of security while pure-play security vendors continue to innovate
and bring new products and solutions to market that address acute needs that
the other parties cannot. Both are
needed but for different reasons.

Neither of the extremes represents an ultimate answer. Meeting in the middle is the best answer with
an open, extensible, and scalable network security reference architecture that
integrates as a network switch with all of the diversity and functionality
delivered by on demand best of breed security functions.

As the battle rages, multiple layers of overlapping
proprietary technologies are being pressed into service against risks which are
often not quantified, threats that are not recognized and attempt to defend
against vulnerabilities which within context may have little recognized
business impact.

In many cases, these solutions are marketed as new
technology when in fact they exist as re-badged products with additional
functions cobbled together onto outdated or commoditized hardware and software
platforms, polished up and marketed as UTM or adaptive security solutions.

It is important to make clear the definition of UTM within
the context of the mainstream security solution space offered by most vendors
today. UTM solutions are those which provide an aggregate of security
functionality comprised of at least network firewall, network intrusion
detection and prevention, and
gateway anti-virus. UTM solutions are
often extended to offer additional functionality such as VPN, URL filtering,
and anti-spam capabilities with a recognized benefit of squeezing as much
functionality from a single product offering in order to maximize the
investment and minimize the number of arterial insertion points throughout the
network.

Most of the UTM solutions on the market today provide a
single management interface which governs the overall operation of many
obfuscated moving parts which deliver the functionality advertised above.

In many cases, however, there are numerous operational and
functional compromises made when deploying typical single application/multiple
function appliances or embedded security extensions applied to routers and
switches. These compromises range from
poor performance to an inability to scale based on emerging functionality or
performance requirements. The result is what some hope is “good enough” and
implies a tradeoff favoring cost over security.

Unfortunately, this model of “good enough” security is
proving itself not good enough as these solutions can lead to cost and
management complexities that become a larger problem than the perceived threat
and vulnerabilities the solutions were designed to mitigate in the first place.

So what to do? Focus
on risk!

Prudent risk management strategy dictates that the best
method of securing an organization’s most critical assets is the rational
application of policy, technology and processes where ultimately the risk
justifies the cost.

It is within this context that the definition of
information survivability demands an introduction as it bears directly on the
risk management processes described in this paper. In their paper titled “Information
Survivability: Required Shifts in Perspective,” Allen and Sledge introduce the
concept of information survivability as a discipline which is defined as “…the
capability of a system to fulfill its mission, in a timely manner, in the
presence of attacks, failures, or accidents.”

They further juxtapose information survivability against
information security by illustrating that information security “…takes a
technology centric point of view, with each technology solving a specific set
of issues and concerns that are generally separate and distinct from one
another. Survivability takes a broader,
more enterprise-wide point of view looking at solutions that are more pervasive
than point-solution oriented.”

Information survivability thus combines elements of
business impact, continuity, contingency and disaster recovery planning with
the more narrowly-focused and technical information security practices, thereby
elevating the combined foundational elements to an enterprise-wide risk
management concern.

From this perspective, risk management is not just about
the latest threat. It is not just about
the latest vulnerability or its exploit. It is about how, within the context of the continued operation of the
business and even while under duress, the organization’s mission-critical
functions will be sustained and the most important data will be appropriately
protected.

The language of risk

One obvious illustration of this risk gap is how
disconnected today’s enterprise security and networking staffs remain even when
their business interests should be so closely aligned. Worse yet is the resultant misalignment of
both teams with the enterprises’ mission and appetite for risk.

As an example, while risk analysis is conducted on one side
of the house with little understanding of the network and all its moving parts,
a sprinkling of network and security appliances is strung together on
the other side of the house with little understanding of how these solutions
will affect risk or whether they align to the objectives of the business
at all.

To prove this point, ask your network team if they know
what the OCTAVE or COBIT frameworks are and how current operational security
practices map to either of them. Then, ask the security team if they know how MPLS
VRF, BGP route reflectors or the spanning tree protocol function at the network
level and how these technologies might affect the enterprise’s risk posture. 

Then, ask representative business stakeholders if they can
articulate how the answers given by either of the parties clearly maps to their
revenue goals for the year and how their regulatory compliance requirements may
be affected. Where are the metrics to
support any assertion?

Thus, while both parties seek to serve the business with a
common goal of balancing security with connectivity neither speaks a common
language that can be used to articulate the motivation, governance or value of
each other’s actions to the business.

At the level of network security integration, can either
team describe the mapping of asset-based risk categories across the enterprise
to the network infrastructure? Can they tell you tomorrow what the new gaps are
at each risk category level and provide a quantifiable risk measurement across the
enterprise of the most critical assets in a matter of minutes?
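As a purely illustrative example, the following sketch shows what a simple, automatable roll-up of risk by asset-based category could look like; the asset data, weights and scoring formula are assumptions made for the illustration, not a prescribed model.

```python
# Illustrative only: a toy roll-up of risk by asset-based category.
# The asset records, criticality scale and scoring formula are assumptions.
assets = [
    {"name": "crm-db",   "category": "regulated-data", "criticality": 5, "exposure": 0.6, "open_vulns": 4},
    {"name": "intranet", "category": "internal",       "criticality": 2, "exposure": 0.2, "open_vulns": 7},
    {"name": "web-tier", "category": "dmz",            "criticality": 4, "exposure": 0.9, "open_vulns": 2},
]

def asset_risk(asset):
    # Simple product of business criticality, network exposure and open vulnerabilities.
    return asset["criticality"] * asset["exposure"] * asset["open_vulns"]

by_category = {}
for asset in assets:
    by_category[asset["category"]] = by_category.get(asset["category"], 0.0) + asset_risk(asset)

for category, score in sorted(by_category.items(), key=lambda item: -item[1]):
    print(f"{category:15s} risk score: {score:6.1f}")
```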

This illustration defines the problem at hand: how do we
make sure that we deliver exactly what the business requires to protect the
most critical assets, in a manner fitting the risk profile of the organization,
and no more?

Interestingly, from an economic point of view, the failure
to create a tightly integrated risk management ecosystem results almost by
definition in a completely inefficient and ineffective solution. Without risk
management basics such as asset and data classification and zoned network
segmentation by asset class, the network has the very real potential to actually
be over-defended at risk boundaries and thus drive costs and complexity much
higher than they need to be.

Consequently, most, if not all, security controls and
prescribed protective technologies are applied somewhat indiscriminately across
the enterprise as a whole. Either too
much security is applied or many of the features of the solution are disabled
since they are not needed. Where is the
return on investment there? Do you need
URL filtering in a DMZ? Do you need
SOA/XML schema enforcement applied across user desktops? No. So
why deploy complex blanketed security technology where it is neither needed nor
justified?

For example, since all assets and the data they contain are
not created equal, it is safe to assume that the impact to the business caused
by something “bad” happening to any two assets of different criticality would
also not be equal. If this is an
accepted corollary, does it make sense to deploy solutions that provide
indiscriminate protective umbrellas over assets that may not need any
protection at all?

In many cases, this issue also plays out in a different
direction as security architectures are constrained based on the deployment of
the physical wiring closets and switch and router infrastructures. Here, the ability or willingness to add
point-solution devices one after the other in-line between key network arteries,
incrementally add specialized security blades into core network components or
even forklift switching and routing infrastructure to provide for “integrated
security” is hideously problematic.

In these cases, overly-complex solutions consist of devices
sprinkled in every wiring closet because there will probably be a
representative computing resource of every risk category in that area of the
network.

Here we are being asked to change the network to fit the
security model rather than the other way around. If the network was built to accommodate the
applications and data that traverse it, should we not be just as nimble, agile
and accommodating in our ability to defend it?

Referring back to the definition of risk management, the
prudent answer is to understand exactly where you are at risk, why, the
business impact, and exactly what is needed from a control perspective to
appropriately manage the risk. In some
cases the choice may be to assert no control at all based upon the lack of
business impact to the organization.

One might ask if the situation is not better than it was
five years ago. The answer to this question is unclear – the effects of the
more visible and noisy threats such as script kiddies have been greatly
mitigated. On the other hand, the emergence of below-the-radar,
surgically-focused, financially motivated cyber-criminals has exposed business
assets and data more than ever. The net effect is that we are not, in fact,
safer than we were because we focus only on threats and vulnerabilities and not
risk.

Security is in the network…or is it in the appliance over
there?

Let us look for a moment at how technology visions spiral
out of control when decoupled from risk and viewed from a technology-centric perspective. The most
blatant example is the promise of security embedded in the network or
all-in-one single vendor appliances.

On the one hand, we are promised a technically-enlightened,
self-defending network that is resilient to attack, repels intruders,
self-heals when infected and delivers security as a service as applications and
data move about fluidly pursuant to policies enforced across every platform and
network denizen.

We also are told to expect intelligent networks that offer
solution heterogeneity irrespective of operating system or access modality,
technology agnosticism, and completely integrated identity management as a way
to evolve from being data rich but information poor, providing autonomic
response when bad things happen.

Purveyors of routing and switching products plan to branch
out from the port density penetration
foothold they currently enjoy to deliver end-to-end security functionality
embedded into the very fabric of the machinery meant to move bits with the
security, reliability and speed it deserves and which the business demands.

At the other end of the spectrum, vendors who offer
single-sourced, proprietary security suites utilizing integrated functions by
way of appliances integrated into the network suggest that they will provide
the architecture of the future.

They both suggest they will provide host-based agents that
provide immune system-like responses to attempted “infection” and will take
their orders from a central networked “nervous system” that coordinates the
activities of the various security “organs” across the zones of trust defined
by policy.

They propose the evolution of the network into a sentient
platform for the delivery of business in all its forms, aware of and able to
interact with and control the applications and data which travel over it.

Data, voice, video and mobility with all of the challenges
posed by the ubiquity of access methodologies – and of course security – are to
be provided by the network platform as the launch pad for every conceivable
level of service. The network will take the place of complex business logic
such as Extract/Transform/Load (ETL) layers and it will deliver applications
directly and commit and retrieve data dynamically and ultimately replace tiers
of highly-specialized functions and infrastructure that exist today.

All the while, as revolutionary technology and
architectures such as web services emerge, new standards compete for relevancy
and the constant demand for increased speeds and feeds continue to evolve, the
network will have to magically scale both in performance and functionality to
absorb this change while the transparency of applications, data and access
modality blurs.

These vendors claim that security will simply be subsumed
by the “network” as a function of the delivery of the service since the
applications and data will be provided by a network platform completely aware
of that which traverses its paths. It
will be able to apply clearly articulated business processes and eliminate
complex security problems by mitigating threats and vulnerabilities before they
exploit an attack surface.

These solutions are to be “open,” and allow for
collaboration across the enterprise, protecting heterogeneous elements up and
down the stack in a cooperative defense against impact to the delivery of
applications and data.

These solutions promise to be more nimble and will be
engineered to provide adaptive security capabilities in software with hardware
assist in order to keep pace with exponential increases in requirements. These solutions will allow for quick and easy
update as threats and vulnerabilities evolve. They will provide more deployment flexibility and allow for greater
coverage and value for the security dollar as policy-driven security is applied
across the enterprise.

What’s Wrong with These Answers? Mr. Fox, meet Ms. Chicken

Today’s favorite analogy for security is offered in direct
comparison to the human immune system. The immune system of modern man is indeed a remarkable operation. It is there, inside each human being, where
individual organs function independently, innocuously and in an autonomic
fashion. When employed in a coordinated fashion as a consolidated and
cooperative system, these organs are able to fight infection by adapting and
often become more resistant to attack and infection over time.

Networks and networked systems, it is promised, will
provide this same capability to self-defend and recover from infection. Networks of the future are being described as
being able to self-diagnose and self-prescribe antigens to cure their ills, all
the while delivering applications and data transparently and securely to those
who desire it.

It is clear, however, that unfortunately there are
infections that humans do not recover from. The immune system is sometimes overwhelmed by attack from invaders that
adapt faster than it can. Pathogens
spread before detection and activate in an overwhelming fashion before anything
can be done to turn the tide of infection. Mutations occur that were unexpected, unforeseen and previously
unknown. The body is used against itself
as the defense systems attack both attacker and healthy tissue and the patient
is ultimately overcome. These illnesses
are terminal with no cure.

Potent drugs, experimental treatments and radical medical
intervention may certainly extend or prolong life for a short time, but the
victims still die. Their immune systems
fail.

If this analogy is to be realistically adopted as the basis
for information survivability and risk management best practices, then anything
worse than a bad case of the sniffles could potentially cause networks – and
businesses — to wither and die if a more reasonable and measured approach is
not taken regarding what is expendable should the worst occur. Lose a limb or lose a life? What is more important? The autonomic system
can’t make that decision.

These glimpses into the future are still a narrowly-focused
technology endeavor without the intelligence necessary to make business
decisions outside of the context of bits and bytes. Moreover, the deeper and
deeper information security is pushed down into the stack, the less and less
survivable our assets and businesses will become because the security system
cannot operate independently of the organ it is protecting.

Applying indiscriminate and sometimes unnecessary layers of
security is the wrong thing to do. It
adds complexity, drives costs, and makes manageability and transparency second
class citizens.

In both cases, these promises will simply add layer upon
layer of complexity and drive away business transparency and the due care
required to maintain it further and further from those who have the expertise
to manage it. The reality is that either
path will require a subscription to a single vendor’s version of the truth. Despite claims to the contrary, innovation,
collaboration and integration will be subject to that vendor’s interpretation
of the solution space. Core
competencies will be stretched unreasonably and ultimately something will give.

Furthermore, these vendors suggest that they will provide
ubiquitous security across heterogeneous infrastructure by deploying what can
only be described as homogenous security solutions. How can that be? What possible motivation would one vendor
have to protect the infrastructure of his fiercest competitor?

In this case, monoculture parallels also apply to security
and infrastructure the same way in which they do to networked devices and
operating systems. Either of the examples referenced can potentially introduce
operational risk associated with targeted attacks against a single-vendor
sourced infrastructure that provides both the delivery and security for the
data and applications that traverse it. We have already seen recent malicious attacks surgically designed and
targeted to do just this.

What we need is perfectly described by Evan Kaplan of
Aventail who champions the notion of a “dumb” network connectivity layer with
high speed, low latency, high resiliency, predictable throughput and
reliability and an "intelligence" layer which can deliver value-added services
via open, agile and extensible solutions.

In terms of UTM, based upon a sound risk management model,
this would provide exactly the required best of breed security value with
maximum coverage exactly where needed, when needed and at a cost that can be
measured, allocated and applied to most appropriately manage risk.

We pose the question of whether proprietary vendor-driven
threat and vulnerability focused technology solutions truly offer answers to
business problems and if this approach really makes us more secure. More importantly, we call into question the
ability for these offerings to holistically manage risk. We argue they do not and inherently
cannot.

The Solution: Unified Risk Management utilizing Unified
Threat Management

A holistic paradigm for managing risk is possible. This
model is not necessarily new, but the manner in which it is executed is. Best-of-breed, consolidated UTM provides this
execution capability. It applies
solutions from vendors whose core competencies provide the best solution to the
problem at hand. It can be linked
directly to asset and information criticality.

It offers the battle-hardened lessons and wisdom of those
who have practiced before us and adds to their work all of the benefits that
innovation, remarkable technology and the pragmatic application of common sense
brings to the table. The foundation is
already here. It does not require years
of prognostication, massive infrastructure forklifts or clairvoyant bets made
on leveraging futures. It is available
today.

This methodology, which we call Unified Risk Management
(URM), is enabled by applying a well-defined framework of risk management
practices to an open, agile, innovative and collaborative best-of-breed UTM
solution set combined in open delivery platforms which optimize the
effectiveness of deployments in complex network environments.

These tools are combined with common sense and the
extraordinary creativity and practical brilliance of leading-edge risk
management practitioners who have put these tools to work across organizational
boundaries in original and highly effective ways.

This is the true meaning of thought leadership in the high
technology world: customers and vendors working hand-in-hand to create
breakthrough capabilities without expensive equipment forklifts and without the
associated brow-beating from self-professed prophetic visionaries who
pontificate from on high about how we have all been doing this wrong and how
a completely new upgraded infrastructure designed to sell more boxes and Ethernet
ports is required in order to succeed.

URM is all about common sense. It is about protecting the right things for
the right reasons with the right tools at the right price. It is not a marketecture. It is not a fancy sales pitch. It is the logical evolution and extension of
Unified Threat Management within context.

It is about providing choice from best-of-breed offerings
and proven guidance in order to navigate the multitude of well-intentioned
frameworks and come away with a roadmap that allows for true risk management
irrespective of the logo on the front of the machinery providing the heavy
lifting. It is, quite literally, about
“thinking outside of the box.”

URM combines risk management – asset management, risk
assessment, business impact analysis, exposure risk analytics, vulnerability
management, automated remediation –  and
the virtualization of UTM security solutions as a business process into a tight
feedback loop that allows for the precise management of risk. It iteratively feeds into and out of
reference models like the "Four Disciplines of Security Management" from Spire
Security's Pete Lindstrom, which include elements such as:

· Trust Management

· Identity Management

· Vulnerability Management

· Threat Management

This system creates a continuously iterative and highly
responsive intelligent ecosystem linked directly to the business value of the
protected assets and data.
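A skeletal example of such a feedback loop is sketched below; every input and policy action is a stand-in used only to illustrate the shape of the process, not a real asset-management, scanning or UTM policy interface.

```python
# Hypothetical skeleton of the URM feedback loop: classify, assess, and push
# the result back into the protective (UTM) layer for each asset's zone.
# The scanner and policy callables are stand-ins, not real product APIs.
def unified_risk_cycle(assets, scanner, adjust_policy):
    for asset in assets:
        criticality = asset["criticality"]          # from asset/data classification
        findings = scanner(asset)                   # vulnerability management results
        risk = criticality * len(findings)          # deliberately naive risk model
        adjust_policy(asset["zone"], risk)          # tighten/relax the service chain

# Example wiring with trivial stand-ins:
assets = [{"name": "billing-db", "zone": "pci", "criticality": 5}]
unified_risk_cycle(
    assets,
    scanner=lambda a: ["finding-1", "finding-2"],                    # pretend scan output
    adjust_policy=lambda zone, risk: print(f"zone={zone} risk={risk}"),
)
```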

This information provides rational and defensible metrics
that show value and the reduction of risk on investment and, by communicating
effectively in business terms, are intelligible and visible to all levels of the
management hierarchy, from the compliance auditor to the security and network
technicians to the chief executive officer.

This re-invigorated investment in the practical art of risk
management holds revolutionary promise for solving many of today’s business
problems which are sadly mislabeled as information security issues.

Risk management is not rocket science, but it does take innovation,
commitment, creativity, time, the reasonable and measured application of
appropriate business-driven policy, excellent technology and the rational
application of common sense.

This tightly integrated ecosystem consists of solutions
that embody best practices in risk management. It consists of tightly-coupled
and consolidated layers of UTM-based information survivability architectures
that can apply the results of the analytics and management toolsets to
business-driven risk boundaries in minutes. It collapses the complexity of existing architectures dramatically and
applies a holistic policy driven risk posture that meets the security appetite of
the business and it does so while preserving existing investments in routing
and switching infrastructure that serves the business well.

Conclusion: On To the Recipe

In this first part of our two-part series, we have tried to
define the basis for looking at network security architectures and risk
management in an integrated way.  Key to
this understanding is a move away from processes in which disparate appliances
are thrown at threats and vulnerabilities without a rationalized linkage to the
global risk profile of the infrastructure.

In the second paper of the series we will demonstrate
exactly how the lightweight processes that form the foundation of Unified Risk
Management can be implemented and applied to a UTM architecture to create a
highly responsive, real-time enterprise fully aware of the risks to its
business and able to respond on a continual basis in accordance with the ever-changing
risk profile of its critical data, applications and assets.

Categories: Risk Management Tags:

Even M(o)ore on Purpose-built UTM Hardware

June 8th, 2006 2 comments

Eniac4
Alan Shimel made some interesting points today in regards to what he described as the impending collision between off the shelf, high-powered, general-purpose compute platforms and supplemental "content security hardware acceleration" technologies such as those made by Sensory Networks — and the ultimate lack of a sustainable value proposition for these offload systems:

I can foresee a time in the not to distant future, where a quad core,
quad proccessor box with PCI Express buses and globs of RAM deliver
some eye-popping performance.  When it does, the Sensory Networks of
the world are in trouble.  Yes there will always be room at the top of
the market for the Ferrari types who demand a specialized HW box for
their best-of-breed applications.

Like Alan, I think these multi-processor, multi-core systems with fast buses and large RAM banks will deliver an amazing price/performance point for applications such as security — and more specifically, multi-function security applications such as those used within UTM offerings.  For those systems that architecturally rely on multi-packet cracking capability to inspect and execute a set of functional security dispositions, the faster you can effect this, the better.  Point taken.

One interesting point, however, is that boards like Sensory’s are really deployed as "benign traffic accelerators" not as catch-all filters — as traffic enters a box equipped with one of these cards, the system’s high throughput potential enables a decision based on policy to send the traffic in question to the Sensory card for inspection or pass it through uninspected (accelerate it as benign — sort of like a cut-through or fast-path.)  That "routing" function is done in software, so the faster you can get that decision made, the better your "goodput" will be.
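Here's a minimal sketch of that software-side dispatch decision; the policy table and flow fields are made up for illustration and aren't Sensory's (or anyone else's) actual interface.

```python
# Sketch of the per-flow dispatch described above: either fast-path traffic as
# benign or hand it to the inspection engine (e.g. an offload card).
# The policy table and flow fields are illustrative assumptions.
FAST_PATH_POLICY = {
    # (protocol, dst_port): True means "send to deep inspection"
    ("udp", 53): False,    # pretend policy pre-clears DNS
    ("tcp", 80): True,     # HTTP always gets content inspection
}

def dispatch(flow, inspect, forward):
    """Route one flow either to content inspection or straight through."""
    needs_inspection = FAST_PATH_POLICY.get(
        (flow["proto"], flow["dst_port"]), True    # default: inspect the unknown
    )
    if needs_inspection:
        return inspect(flow)      # offload to the pattern-matching engine
    return forward(flow)          # accelerated, uninspected "goodput"

# The faster this decision happens in software, the less the inspection
# engine's latency matters for traffic that never needed inspecting.
```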

Will this differential in the ability to make this decision and offload to a card like Sensory's be eclipsed by the uptick in system CPU speed, multiple cores and lots of RAM?  That depends on one very critical element and its timing — the uptick in network connectivity speeds and feeds.  Feed the box with one or more GigE interfaces, and the probability of the answer being "yes" is probably higher.

Feed it with a couple of 10GigE interfaces, the answer may not be so obvious, even with big, fat buses.  The timing and nature of the pattern/expression matching is very important here.  Doing line rate inspection focused on content (not just header) is a difficult proposition to accomplish without adding latency.  Doing it within context is even harder so you don’t dump good traffic based on a false positive/negative.

So, along these lines, the one departure point for consideration is that the FPGAs in cards like Sensory's are amazingly well tuned to provide massively parallel expression/pattern matching capabilities with the flexibility of software and the performance benefits of an ASIC.  Furthermore, the ability to parallelize these operations and feed them into a large hamster wheel designed to perform these activities not only at high speed but with high accuracy *is* attractive.

The algorithms used in these subsystems are optimized to deliver a combination of scale and accuracy that is not necessarily easy to duplicate by just throwing cycles or memory at the problem, as the "performance" of the required pattern matching is as much about accuracy as it is about throughput.  Being faster doesn't equate to being better.

These decisions rely on associative exposures to expressions that are not necessarily orthogonal in nature (an orthogonal classification is one in which no item is a member of
more than one group, that is, the classifications are mutually
exclusive — thanks Wikipedia!)  Depending upon what you're looking for and where you find it, you could have multiple classifications and matches — you need to decide (and quickly) if it's "bad" or "good" and how the results relate to one another.

What I mean is that within context, you could have multiple matches that seem unrelated so flows may require iterative
inspection (of the entire byte-stream or offset) based upon "what" you’re looking for and what you find when
you do — and then be re-subjected to inspection somewhere else in the
byte-stream.
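A toy example of what non-orthogonal matching looks like in practice is below; the patterns and the decision logic are invented for illustration, but the point stands: one payload can land in several classifications, and the disposition has to weigh the combination.

```python
# Illustrative sketch: multiple, non-mutually-exclusive (non-orthogonal) pattern
# matches over one byte stream, combined into a single disposition.
# The patterns and the decision logic are made up for the example.
import re

PATTERNS = {
    "shell-metachars": re.compile(rb"[;|`]"),
    "sql-keyword":     re.compile(rb"(?i)union\s+select"),
    "long-request":    re.compile(rb"GET\s+\S{512,}"),
}

def classify(payload: bytes):
    """Return every pattern that matched; a payload can belong to several classes."""
    return {name for name, rx in PATTERNS.items() if rx.search(payload)}

def disposition(payload: bytes) -> str:
    matches = classify(payload)
    if not matches:
        return "pass"
    # The classes aren't mutually exclusive, so the decision weighs the
    # combination rather than any single hit in isolation.
    if {"sql-keyword", "long-request"} <= matches:
        return "block"
    return "inspect-further"   # e.g. re-scan a later offset of the same stream
```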

Depending upon how well you have architected the software to distribute/dedicate/virtualize these sorts of functions across multi-processors and multi-cores in a general purpose hardware solution driven by your security software, you might decide that having purpose-built hardware as an assist is a good thing to do to provide context and accuracy and let the main CPU(s) do what they do best.

Switching gears…

All that being said, signature-only based inspection is dead.  If in the near future you don’t have behavioral analysis/behavioral anomaly capabilities to help provide context in addition to (and in parallel with) signature matching, all the cycles in the world aren’t going to help…and looking at headers and netflow data alone ain’t going to cut it.  We’re going to see some very intensive packet-cracking/payload and protocol BA functions rise to the surface shortly.  The algorithms and hardware required to take multi-dimensional problem spaces and convert them down into two dimensions (anomaly/not an anomaly) will pose an additional challenge for general-purpose platforms.  Just look at all the IPS vendors who traditionally provide signature matching scurry to add NBA/NBAD.  It will happen in the UTM world, too.

This isn’t just a high end problem, either.  I am sure that someone’s going to say "the SMB doesn’t need or can’t afford BA or massively parallel pattern matching," and "good enough is good enough" in terms of security for them — but from a pure security perspective I disagree.  Need and afford are two different issues.

Using the summary argument regarding Moore's law, as the performance of systems rises and the cost asymptotically approaches zero, accuracy and context become the criteria for purchase.  But as I pointed out, speed does not necessarily equal accuracy.

I think you'll continue to see excellent high-performance/low-cost general purpose platforms providing innovative software-driven solutions, assisted by flexible, scalable and high-performance subsystems designed to provide functional superiority via offload in one or more areas.

/Chris

UTM messaging is broken – Perimeter vs. Enterprise UTM – Film @ 11

June 8th, 2006 No comments

I need to spend 2 minutes further introducing the concept of Enterprise-class UTM.  I'll post in greater detail as a follow-on in the next day or so.  I just got back from the Gartner show, so my head hurts and it's 1am here in Beantown.  This (blog entry below) was an interesting, if somewhat incomplete, beginning to this thought process.

Don McVittie over on the Network Computing Security Blog had some really interesting things to say about the need for, general ripeness and maturity of UTM and the operationalization of UTM technology and architecture within the context of how it is defined, considered and deployed today.

What he illustrated is exactly where the breakdown in "traditional" SMB-based, perimeter-deployed UTM messaging is today.  You'd be nuts to try and deploy one of these referenced UTM appliances at the core of a large enterprise.  You're not supposed to.  There's simply no comparison between what you'd deploy at the core for UTM versus what you'd deploy at a remote/branch office.

That's what Enterprise-class UTM is for.  The main idea here is that for a small company, UTM is simply a box with a set number of applications or security functions, composed in various ways and leveraged to "do things" to traffic as it passes through the bumps in the security stack.

In large enterprises and service providers, however, the concept of the "box" has to extend to an *architecture* whose primary attributes are flexibility, resilience and performance.

I think that most people don’t hear that, as the marketing of UTM has eclipsed the engineering realities of management, operationalization and deployment based upon what most people think of as UTM.

Historically, UTM is defined as an approach to network security in which multiple logically complementary security applications, such as firewall, intrusion detection and antivirus, are deployed together on a single device. This reduces operational complexity while protecting the network from blended threats.

For large networks where security requirements are much broader and more complex, the definition expands from the device to the architectural level. In these networks, UTM is a “security services layer” within the greater network architecture. This maintains the operational simplicity of UTM, while enabling the scalable and intelligent delivery of security services based on the requirements of the business and network. It also enables enterprises and service providers to adapt to new threats without having to add additional security infrastructure.

You need a really capable and competent switching platform optimized for virtualized service delivery to pull this off.  That's what this is for — the Crossbeam X80 Security Services Switch.

You plumb the X-Series into the switching infrastructure as an overlay and provide service where and when you need to manage risk by implementing policies that subject all flows matching criteria within the rules to specific security service layer combinations (firewall, IDS, AV, URL, etc.).  No forklifts, no fundamental departures from how you manage or maintain the network or the security layer(s) defending it.  Enterprise UTM provides transparency, high performance, high availability, best-of-breed virtualized security services, and simplified deployment and management…
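To give a feel for what "flows matching criteria get a specific combination of service layers" means, here's a tiny hypothetical rendering in Python; it's an illustration of the concept only, not Crossbeam's actual policy syntax or API.

```python
# Hypothetical illustration of mapping matched flows to virtualized security
# service chains. Zones, ports and service names are assumptions for the sketch;
# this is not Crossbeam's actual configuration model.
SERVICE_CHAINS = [
    # (match criteria,                     ordered security services applied)
    ({"zone": "dmz", "dst_port": 80},      ["firewall", "ids", "url-filter"]),
    ({"zone": "internal", "dst_port": 25}, ["firewall", "anti-virus", "anti-spam"]),
    ({"zone": "any"},                      ["firewall"]),    # catch-all default
]

def services_for(flow):
    """Return the service chain of the first rule whose criteria all match the flow."""
    for criteria, chain in SERVICE_CHAINS:
        if all(value == "any" or flow.get(key) == value for key, value in criteria.items()):
            return chain
    return []

# e.g. services_for({"zone": "dmz", "dst_port": 80}) -> ["firewall", "ids", "url-filter"]
```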

UTM for large networks is designed to provide solutions that deliver the key components required for a UTM security services layer:

  •     high performance and high availability
  •     best-of-breed applications
  •     intelligent and virtualized service delivery

This enables customers to create an intelligent security services layer that delivers the right protection for any part of the network in accordance with evolving threats and business objectives. This layer is managed as a single consolidated system, thus delivering the operational and cost benefits of UTM while radically improving the overall security posture of the network.

More on the architecture which enables this in a follow-on post.  We’ll discuss the traditional embedded vs. overlay appliance model versus the X-Series perspective as well as the C-series.

Look, we’ll go through the technical details in a follow-on post.  Bear with me.

Chris

Would you buy UTM from a guy with an IUD? (Read on…)

June 6th, 2006 4 comments

[Editor's Note: I really found myself getting suckered into the dark side of this debate when it turned into a BS marketing spin fest and personal bashing session from the other party.  I shouldn't have responded to those elements, but I did.  I'm guilty.  It was stupid.  Stupid, but strangely satisfying.  Much like tequila.  I still got zero answers to any of the points I raised, so you decide for yourself…]

Looks like Alex Neihaus, Astaro's VP of Marketing, can't be bothered to address technical criticism or answer questions debating the utility of the approach for Astaro's new "virtual UTM appliance," so he feels it necessary to hang his "shingle" out in public and launch personal attacks rather than address the issue at hand.  Well, he is in marketing, so why would I expect anything different?

Since my comments back to him may not actually make it up on his site, I figure I'll respond to them here; they won't be word for word because I forgot to copy/paste the response I sent him.

Jungian synchronicity always blows me away. I was just reading about "intermittent explosive disorder" this morning. It’s apparently severely undiagnosed.

Except at Crossbeam. Apparently, Christofer Hoff, their brand-spankin’ new "chief security strategist" (aka
"we want a Scoble of our own") is deeply worried about virtualization
and the impact on Crossbeam. Ergo, a demonstration of IED in the
blogosphere via a post on his personal blog.

Funny.  I was just reading about  rectal encephalic inversion and in a Freudian twist of fate, Alex’s slip is showing.

As I let him know, I’ve been at Crossbeam for 8 months and before that, I was a customer for almost 3 years deploying their products in the real world, not preaching about them from behind a web interface.  Prior to that I ran an MSSP/Security Consultancy for 9 years, did two other startups, raised venture capital, and built the global managed security service for a worldwide service provider on 4 continents serving 152 countries.  Yet I digress…

But let me say we don’t mind the heat, in fact, we appreciate it.
But next time, Chris, why not post on the Crossbeam site? Why not
change your bio on your blog to indicate your new role? And before you
impugn Richard Stiennon’s credibility, why not earn some in your new
role? I am, of course, fair game, but to wail on Richard isn’t cricket.

Crossbeam doesn’t have blog capabilities yet.  When we do, it will cross-post.  I don’t try to hide what I do or who I do it for at all.  Where exactly on the Scrapture blog does it say that you are the VP of Marketing for Astaro, Alex?

Also, this ain’t my first ride on a tuna boat, pal.  While I haven’t had the privilege of peddling FTP software for a living, I know a thing or two about security — designing, deploying, managing and using it.  As I mentioned in the comments I sent you, everyone’s a dog on the Internet, Alex, and you’re pissing on the wrong hydrant.

In terms of Richard, I know him.  I talk to him reasonably often.  We’re a client.  Nice reach.  I wasn’t impugning his honor, I was pointing out that the quote you used didn’t have a damned thing to do with Astaro.

I do appreciate you taking umbrage at the "ASIC-free" phrase in the
press release. I put that in to see if it would raise any neck hair.
It’s the crux of the issue.

You apparently think that hardware is the answer. I know it isn’t.

Firstly, Crossbeam doesn't depend on ASICs in our products, so your assumption that I was bristling at the comment because I need to be defensive about ASICs is as wrong as your assertion/innuendo that ASICs actually make things go slower.

More importantly, your assertion that I think hardware is the answer is, again, dead wrong.  If you knew anything about Crossbeam, you’d recognize that the secret sauce in our solution is the software, not the hardware.  The hardware is nice, but it’s not the answer.

There’s always a more powerful engine. Always a more powerful
subsystem. Always better, always cheaper. Businesses built on
mainframes can be profitable, but never ubiquitous in the face of
commoditized hardware. IBM learned this in the 1990’s; Crossbeam will
learn it shortly. After you’ve sold the Fortune 500 five-hundred units,
you’ll inevitably be stuck for growth. You’ll cast about for the broad
middle.

Ummm, you know squat-all about Crossbeam — that much is obvious.  We've sold many more than 500 units and our customer base is split 50/50 between ISPs/MOs/Telcos and the Fortune 2000 — doubling revenues year after year for 6 years in a space that is supposedly owned by Cisco and Juniper is a testament to the ridiculousness of your statement.   I'm shivering in anticipation of our impending doom…by a bunch of VMWare images running on a DL380, no less.

That broad middle will be using commoditized hardware with
integrated, easy-to-use security solutions.

Hey!  We agree about something.  Again, if you knew anything about our product, our roadmap or our technology, you'd recognize this.

You’ll talk about
"enterprise ready" to people who want the UTM equivalent of a fax
machine.

Buhahahaha!  A fax just arrived for you.  It’s titled "It’s sure as hell easier to have a high-end solution and scale down than it is to have a low-end solution and scale up!"  Sound familiar?  VMWare ain’t it, bubba. 

Wail all you want how unfortunate it is that UTM is associated with SMB (I agree that’s wrong, wrong, wrong). But the answer to UTM ubiquity isn’t gonna come from the high end.

Sorry. And don’t let that IED problem get you down.

UTM is associated with low-end perimeter solutions that don’t scale and require forklifts due to the marginalization of commoditized hardware.  When you have a solution that actually scales, can sit in a network for 6 years without forklifts, and is in place at the biggest networks on the planet, step up.

Otherwise, do me a favor and respond in kind technically to my points regarding manageability, security, and scalability…or have Gert or Markus (Astaro’s CSA and CTO) do it, so at least we can have a debate about something meaningful.

 

/Chris

The world’s first “UTM Virtual Appliance”?

June 5th, 2006 5 comments

Blipping through the many blogs I read daily, I came across an interesting announcement from Astaro in which they advertise the "…world’s first UTM virtual appliance."  Intrigued, I clicked over here for a peek.

Before you read forward, you should know that I really like Astaro’s products.  I think that for SMB markets, their appliances and solutions are fantastic.  That being said, the word "virtualization" means a lot of things to a lot of people — there are some liberties taken  by Astaro that we’re going to need to analyze before this sort of thing can really be taken seriously.  More on that later.

I’m nominating this announcement for an Emmy because it’s the best use of humor in a commercial that I have seen in a LONG time.  I mean really…blade server solutions with full-on clustering and virtualized/grid computing management layers complete with virtualized storage have a hard time providing this sort of service level reliably.  You mean to tell me that MSSPs who have SLAs and make their lunch money providing security as a service are going to build a business on this malarkey?

Somebody call Crossbeam’s global MSSP/ISP/MO customers who provide services in the cloud to hundreds of thousands/millions of their customers and tell them they can ask for a refund, because all they need is a couple of DL380s, VMware, ASG and a set of really big huevos to get all the performance, scalability, reliability, high availability and resiliency they need.

Ah, crap.  I’m just such a cynical bastard. 

Here’s the distillation:

  1. Take Astaro’s Security Gateway product (a very nicely-done hardened Linux-based offering with re-packaged and optimized open source and OEM’d components)
  2. Create a VM (virtual machine) image that can be run under VMware Player, VMware Workstation, VMware Server or VMware ESX
  3. Run it on a sufficiently-powered hardware platform
  4. Presto-change-o!  You’ve got a virtualized security appliance!

It’s a nice concept, but again it further magnifies the narrowly focused scope of how UTM is perceived today — a mid-market, perimeter solution where "good enough" is good enough.  It’s going to marginalize the value of what true enterprise and provider class UTM brings to the table by suggesting that you can just cobble together a bunch of VM’s, some extra hardware and whammo!  You’ve got mail!  This is the very definition of scrapture!

However, it seems as though the logic at Astaro goes something like this:

"If one Astaro gateway is "good enough," then running LOTS of virtual Astaro gateways is "even gooder!"  AND you can run hundreds of ’em on the same machine (see rediculous quote section below.)

The marketing folks over at Astaro are off to a wonderful June as they really put in the OT to milk this one for all it’s worth.  Let’s get one thing straight, there’s a real big difference between innovation and improvisation.  I’ll let you figure out what my opinion of this is.

Firstly, this concept in the security space is hardly new.  Sure, it may be the first "UTM" product to be offered in a VM, but StillSecure has been providing free downloads of its Strata Guard IPS product this way for months — you download the VMware image and poof!  Instant IPS.

Secondly, I’m really interested in what controls one would have to put in place to secure the host operating system that is running all of these VMs.  I mean, when you run Astaro’s hardened appliance, that’s all taken care of for you.  What happens when Johnny SysAdmin boots up VMware Server on Windows 2K3 and loads 40 instances of his "secure" firewall?  Okay, maybe he uses Linux.  Same question.  What happens when you need to patch said OS and it blows VMware sky-high?

Thirdly, how exactly do you provide for CPU/memory/IO arbitration when running all these VMware instances, and how would an Enterprise leverage this virtual mass of UTM "appliances" without load balancing capabilities?  What about high availability?

Fourthly, what happens to all of these VM UTM instances when the host OS takes a giant crap? 

Fifthly, the sheer number of scrapturelicious quotes in this press release is flan-friggin-tastic:

Astaro Security Gateway for VMware allows customers to flexibly run
Astaro Security Gateway software on a VMware infrastructure. Many
hundreds or thousands of Astaro Security Gateways can be virtualized in
this way,
each delivering the network protection and cleaning for which
Astaro is famous.

…ummmm…I can only assume you meant on a hundred or a thousand boxes?
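
Humor me with some back-of-the-envelope math.  Here’s a quick sketch (Python, purely illustrative — every per-instance figure below is an assumption I’m making for argument’s sake, not anything out of Astaro’s datasheets) of what "hundreds or thousands" of gateway VMs looks like against the resources of a single commodity host:

  # Rough capacity math for stacking virtual UTM gateways on one host.
  # Every per-instance figure below is an assumption for illustration only --
  # these are NOT vendor specifications.

  host_cores = 4            # a dual-socket, dual-core DL380-class box (circa 2006)
  host_ram_gb = 16
  host_nic_gbps = 2         # two bonded GigE uplinks

  vm_cpu_cores = 0.5        # fraction of a core each gateway VM needs under load (assumed)
  vm_ram_gb = 0.5           # RAM per gateway VM (assumed)
  vm_gbps = 0.1             # traffic each gateway VM is expected to inspect (assumed)

  limits = {
      "CPU": host_cores / vm_cpu_cores,
      "RAM": host_ram_gb / vm_ram_gb,
      "Network": host_nic_gbps / vm_gbps,
  }

  for resource, count in limits.items():
      print(f"{resource:8s} caps you at ~{int(count)} gateway instances")

  bottleneck = min(limits, key=limits.get)
  print(f"\nRealistic ceiling: ~{int(min(limits.values()))} instances per host "
        f"(bound by {bottleneck}) -- before hypervisor overhead, I/O contention, "
        "load balancing or failover even enter the picture.")

Plug in whatever numbers you like; one of CPU, RAM or the NICs taps out long before the instance count gets anywhere near "hundreds" on a single box.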

Major benefits for users include simpler deployment in large and
complex environments,
better hardware allocation and reduced hardware
expenditures because physical computers can run multiple virtual
appliances. And because Astaro’s unified threat management is
ASIC-free, performance when running in a virtual machine is maximized.

How do you actually plumb one of these things into a network?  How do you configure multi-link trunking utilizing VLANs across the host OS up to the VM instances?  This is simpler, how?  Oh, that’s right…it’s PERIMETER UTM.
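
For anyone who thinks the answer is "just bridge it," here’s a rough sketch of the host-side plumbing involved — purely illustrative Python that spits out the commands you’d run on a generic Linux host with the 8021q module and bridge-utils loaded; the interface names and VLAN IDs are made up for the example:

  # Illustrative only: generate the host-side plumbing needed to trunk VLANs
  # up to a handful of gateway VMs on a generic Linux host.  Interface names,
  # VLAN IDs and bridge names are invented; the point is the amount of
  # per-VLAN, per-VM wiring this "simple" approach actually requires.

  trunk_if = "eth0"            # physical NIC carrying the 802.1q trunk (assumed)
  vlans = [10, 20, 30]         # one VLAN per protected segment (assumed)

  commands = []
  for vlan in vlans:
      sub_if = f"{trunk_if}.{vlan}"
      bridge = f"br{vlan}"
      commands += [
          f"vconfig add {trunk_if} {vlan}",        # create the tagged sub-interface
          f"brctl addbr {bridge}",                 # one bridge per VLAN...
          f"brctl addif {bridge} {sub_if}",        # ...attached to that sub-interface
          f"ip link set {sub_if} up",
          f"ip link set {bridge} up",
          # each gateway VM then needs its virtual NIC mapped to this bridge
          # in its VM configuration -- repeat for every VM on the host.
      ]

  print("\n".join(commands))

Now repeat that for every VLAN, every VM and every host, keep it all consistent through OS patches and reboots, and tell me again how this is "simpler deployment in large and complex environments."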

And then there’s the fact that because it runs on generic PCs under a VM, you can ignore the potentially crappy performance and we don’t need no stinkin’ ASICs — they only get in the way.  That’s right, ASICs make security applications run SLOWER!

“The ability to virtualize gateway security services opens up major new
capabilities for managed service providers (MSPs) to deliver air-tight
security services to small- and medium-size business customers,” said
Richard Stiennon, founder, IT-Harvest Group. “MSPs can leverage their
hardware investment while providing dedicated security services to
end-user customers, resulting in superior security and manageability.”

Rich, I gotta ask…did you actually say this in regard to Astaro’s VM announcement or security virtualization in general?  Since there’s ZERO reference to Astaro in this quote, I can only assume the latter.  If so, your honor is restored.  If not, you’re buyin’ the beer at Gartner, buddy, because just in case you haven’t heard, IDS is dead 😉

Look, I like the fact that you can take their product, try it out under a low-cost controlled test and see if you want to buy it.  Great idea.  Suggesting that the MSSPs of the world are going to run out in droves, buy a DL380 and solve the North American continent’s security woes is, much like my analogy, ridiculous.

Folks like Fortinet, Juniper and Cisco are gunning for the low-end market and you can bet your bottom scrapture that they have the will, money and at-bats to recognize that there is a lot of money to be made by providing virtualized security services — either in the cloud via MSSPs or in the Enterprise.

But don’t worry because they have those annoying things called ASICs, so I’m sure Astaro will be fine. 

Virtually yours,

/Chris

Why Perimeter UTM breaks Defense in Depth

June 4th, 2006 3 comments

I was reading Mike Rothman’s latest blog entry regarding defense in depth and got thinking about how much I agree with his position and how that would seemingly put me at odds given what I do for a living when put within the context of Unified Threat Management (UTM) solutions — the all-in-one "god boxes" that represent the silver bullet for security woes 😉

Specifically, I am responsible for crafting and evangelizing the security strategy for a company that one might (erroneously) expect to trumpet that one can solve all security ills by using a single UTM device instead of a layered approach via traditional defense in depth.  I need to clear that up because there’s enough septic marketing material out there today that I can’t bear to wade through it any longer.  And yes, I work for a company that sells stuff, so you should take this with whatever amount of NaCl you so desire…

There is a huge problem with how UTM is perceived today.  UTM — and in this case the classical application thereof — is really defined as a perimeter play where boxes that were once firewalls, then became firewalls+IDS, then firewalls+IDS+IPS, have now become boxes that are firewalls+IDP+anti-everything.  IDC’s basic definition of a UTM appliance is one that offers firewall, IDP and anti-virus in a single box.  There are literally tons of options here, but the reality is that almost all of these "solutions" actually fly in the face of defense in depth because the consumer of these devices is missing one critical option: flexibility.

Flexibility regarding the choice of technology.  Flexibility regarding the choice of vendor.  Flexibility of choice regarding extensibility of functionality.

Traditional (perimeter) UTM is, in almost all cases, a single vendor’s version of the truth.
Lots of layers — but in one box from one vendor!  That’s not defense in depth,
that’s defense in breadth.

That may be fine for the corner liquor store with DSL Internet access, but what enterprises really need are layers of appropriate defense from multiple best in breed vendors deployed via a consolidated and virtualized architecture that allows for safer, simpler networks.

On the surface, UTM seems like a fine idea — cram as many layers of security "stuff" into a box that you can sprinkle around the network in order to manage threats and vulnerabilities.  Using a single device (albeit many of them) would suggest that you have less moving parts from a management perspective and instead of putting 5 separate single-function boxes in-line with each other to protect your most important assets, now you only need one.  This is a dream for a small company where the firewall admin, network engineer and help desk also happens to be the HR director.

This approach is usually encountered at the "perimeter," which today is usually defined as the demarcated physical/logical ingress/egress dividing the "inside" from the "outside."  While there may well be a singular perimeter in small enterprises and home networks, I think it fair to say that in most large organizations, there are multiple perimeters — each surrounding mini "cores" that contain the most important assets segmented by asset criticality, role, function or policy-driven boundaries.

Call it what you will: de-perimeterization, re-perimeterization, dynamic perimeterization…just don’t call it late for supper.

In this case, I maintain that the perimeter isn’t "going away," it’s multiplying — though the diameter is decreasing.  As it does, security practitioners have two choices to deal with segmented and mini-perimeterized mini-cores:

  1. Consolidate & Virtualize
  2. Box Stack/Sprinkle

These two options seem simple enough, and when you pull back the covers, there are a couple of options you have to reconcile in either scenario:

  1. Single-vendor embedded security in the network infrastructure (routers/switches)
  2. Single-vendor overlay security via single-function, single-application devices
  3. Single-vendor overlay security via single application/multi-function UTM
  4. Multi-vendor embedded security in the network infrastructure (routers/switches)
  5. Multi-vendor overlay security via single application/multi-function security applications
  6. Multi-vendor overlay security via Best-of-breed security applications

When pairing the deployment option from the first set with a design methodology from the second, evaluating the solution leads to many challenges; balancing a single-vendor’s version of the truth (in one or more boxes) against many cooks in the kitchen in combination with either "good enough" or best in breed.  Tough decisions.

"Good enough" or "best in breed." One box or lots.  The decision should be made based upon risk, but unfortunately that seems to be a four letter word to most of the world; we ought to be managing risk, but instead, we manage threats and vulnerabilities.  It’s no wonder we are where we are…

For a taste test, go ahead and ask your network switch vendor or your favorite UTM appliance maker where their solutions to these very real business-focused security challenges exist in their UTM delivery platforms today:

  • Web Application security
  • Database security
  • XML/WS/SOA security
  • VoIP security

They don’t have any.  Probably never will.  Why?  Because selling perimeter UTM isn’t about best in breed.  It’s about selling boxes.  What happens when you need to deploy these functions or others in combination with FW, IDP, AV, Anti-spam, Anti-Spyware, URL Filtering, etc…you guessed it, another box.  Defense in depth?  Sure, but at what cost?

When defense in depth lends itself to management nightmares due to security sprawl, there is a very realistic side effect that could introduce more operational risk into the business than the risk the very controls were supposed to mitigate.  It’s a technology circle-jerk.

What we need is option #1 from the first set paired with option #6 from the second — consolidated and virtualized security service layers from multiple vendors in a robust platform.  This pairing needs to provide a consolidated solution set that is infrastructure agnostic and as much a competent network platform as it is an application/service delivery platform.

This needs to be done in a manner in which one can flexibly define, deploy and manage exactly the right combination of security functions as a security "service layer" deployed exactly where you need it in a network to manage risk — where the risk justifies the cost. And by cost I mean both CapEx and OpEx. 
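
If it helps to make the "security service layer" notion concrete, here’s a toy sketch — nothing more than illustrative Python with made-up segment names, functions and mappings, not a description of any shipping product — of what defining a layer per mini-core might look like:

  # Toy model of risk-driven "security service layers": each protected segment
  # gets exactly the stack of best-of-breed functions its risk justifies.
  # Every segment name and function listed here is a placeholder.

  service_layers = {
      "internet_edge": ["firewall", "ids_ips", "anti-virus", "anti-spam"],
      "web_tier":      ["firewall", "web_app_firewall", "xml_security"],
      "database_core": ["firewall", "database_security"],
      "voip_zone":     ["firewall", "voip_security"],
  }

  for segment, functions in service_layers.items():
      print(f"{segment:14s} -> {', '.join(functions)}")

The syntax is irrelevant; the point is that the combination of functions at each mini-core is driven by the risk you’re managing there, not by whatever happened to be soldered into the box.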

Of course, this solution would require immense flexibility, high availability, scalability, resiliency, high performance, and ease of management.  It would need to be able to elevate the definition of the ill-fated "Perimeter UTM" to scale to the demands of "Enterprise UTM" and "Provider UTM."  This solution would need to provide true defense in depth…providing the leverage to differentiate between "network security," "information security," and "information survivability."

Guess what?  For enterprises and service providers where the definition morphs based upon what the customer defines as best in breed, defense in depth either becomes a horrific nightmare of management and cost or a crappy deployment of good enough security that constantly requires forklifts, doesn’t scale, doesn’t make us more secure, is a sinkhole for productivity and doesn’t align to the business.  Gee, I’ll take two.

We’re in this predicament because an execution-driven security reference architecture to allow businesses to deploy defense in depth in a manner consistent with managing risk has not been widely available, or at least not widely known.

I chuckled when people called me nuts for declaring that the "core"
was nothing more than a routing abstraction and that we were doomed if
we kept designing security based upon what the layout of wiring closets
looked like (core – distribution – access.)  I further chuckle
(guffaw?) now that rational, common sense risk management logic is
given some fancy name that seems to indicate it’s a new invention.  Now
as the micro-cores propagate, the perimeters multiply, the threat
vectors and attack surfaces form cross-hatches dimensionally and we
gaze out across the vast landscape of security devices which make up
defense in depth, perhaps UTM can be evaluated within the context it
deserves.

On the one hand, I’m not going to try and turn this into a commercial for my product as there are other forums for that, but on the other I won’t pretend that I don’t have a dog in this hunt, because I do.  I was a real-world customer using this architecture for almost 3  years before I took up this crusade.  Managing the security of $25 Billion of other people’s money demands
really good people, processes and technology.  I had the first two and
found the third in Crossbeam.

The reality is that nobody does what Crossbeam does and if you need the proof points to substantiate that, go to the largest telco’s, service providers, mobile operators and enterprises on the planet and check out what’s stacked next to the Juniper routers and Cisco switches…you’ll likely find one of these.

Real UTM.  Real defense in depth.  Really.

“Back From the Bleak Blog Brink of Nothingness…”

June 3rd, 2006 No comments

Yesterday I met up with Alan Shimel, StillSecure’s
Chief Strategy Officer.  I’ve known about StillSecure’s excellent
products for some time now, but I frequently read Alan’s blog and
decided that we should meet. 

Alan’s a fascinating guy, the sort of fellow that one becomes
instantly comfortable with.  You can tell he’s been through the
security sausage grinder and come out decently unscathed but with
wisdom, patience and a distilled kindness that this sort of experience
brings.

At lunch we had one of those conversations that became animated
enough that the verbal game of ping-pong encompassing the collective
Tourette’s-style outbursts of our past lives caused both of us to interject
comment after comment — even when stuffing our faces with Italian food
😉

It was truly excellent being able to sit down with someone who
really gets it and isn’t afraid to (agree to) disagree.  We seem to
share a great many views on perspectives that run the gamut of the
security panorama and it was great to meet another someone from the
blogosphere like Alan with whom I could bond intellectually.  I’ve met
some other fantastic opinionati like Mike Rothman and Pete Lindstrom under similar circumstances…you should most definitely read their blogs.

After meeting with Alan, I became inspired to retire my previous
neglected blog-bortion and commit to a full-frontal assault using
TypePad which provides a much better canvas for this sort of thing.

I’ll move a couple of entries over just for continuity’s sake.

Off to the Gartner IT Security Summit in DC from the 6th to the
8th…Crossbeam is sponsoring another amazing evening social event in
conjunction with our buddies from SourceFire.  eMail/phone me for
details.

Looking forward to more bloggage.

Chris

Categories: General Rants & Raves Tags: