Archive

Archive for June, 2006

Need a fake name, address, social security or credit card number?

June 29th, 2006 2 comments

I don’t know exactly how I stumbled across this, but I found a website that purports to offer a "public service" by providing a fake identity generator complete with social security and credit card numbers.  According to its FAQ, the utility of this "service" is:

There are a ton of uses for this service. Here are a few examples:

  • "Generate excellent test data quickly and cheaply" DB2 News & Tips
  • Persons living outside of the U.S. can use this information to gain
    access to websites that do not support their country’s addresses.
  • Use fake information when filling out forms to avoid giving out personal information.
  • Generate a false identity to use as your pseudonym on the internet.
    This allows you to keep your real life and your internet life seperate.
  • Get ideas for names to use for characters in a book or story.
  • Generated credit cards can be used to test basic
    client-/server-side validation techniques without accidently processing
    a real card.
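
To be fair, that last bullet is the one semi-legitimate use case in the bunch: most checkout forms validate card numbers with nothing more than a Luhn checksum, which is why made-up-but-well-formed test numbers are handy.  A minimal sketch of the check (my own illustration, nothing to do with their site):

    def luhn_valid(number: str) -> bool:
        """Return True if the digit string passes the Luhn checksum."""
        digits = [int(d) for d in number if d.isdigit()]
        checksum = 0
        for i, d in enumerate(reversed(digits)):
            if i % 2 == 1:          # double every second digit from the right
                d *= 2
                if d > 9:
                    d -= 9          # same as summing the digits of the product
            checksum += d
        return checksum % 10 == 0

    print(luhn_valid("4111 1111 1111 1111"))  # True  (the well-known Visa test number)
    print(luhn_valid("4111 1111 1111 1112"))  # False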

How about one more?  Give illegal immigrants, people fraudulently obtaining employment, criminals, identity thieves, and miscreants yet another avenue to more easily do things they shouldn’t.  You can even order in bulk, with SOCIAL SECURITY NUMBERS.

I suppose that by linking to this site I am attracting even more attention to it, but I just can’t understand how Corban Works, whose website says they are "…dedicated to creating family-friendly websites" and makes references to the LDS (Mormon) church, thinks this is a good idea?

[Editor’s note: I removed this link because my stats/hit counter for this post was going crazy — seems every scumbag on Earth looking for hits on "fake social security numbers" and the like from Google was pulling this entry up.  I don’t want to make it any easier for these idiots to do what they do.]

UTM is dead! Long live UTM! (or, Who let the dogs out?)

June 28th, 2006 1 comment

One of the things I spend a lot of time doing these days is talking to analysts – both market and financial – regarding the very definition of UTM, what it means to vendors and customers, and the overall impact that UTM has on the approach to security taken by the SMB contingent, large enterprises and service providers.

The short of it: it means a LOT of things to a LOT of different people.  That’s potentially
great if you’re a vendor selling re-branded UTM kit that used to be a
firewall/IDS/IPS because it allows for a certain amount of latitude and
agility in positioning your product, but it can also backfire when you
don’t have a sound strategy and you try to be everything to everyone.

It also sucks if you’re a customer because you have to put the hip
waders on in order to determine if UTM is something you should care
about, integrate into your strategy and potentially purchase.

I’ve written before about how UTM messaging is broken, and that there are TIERS of product offerings that are truly differentiated.  Ultimately, UTM breaks down into two strata: Perimeter UTM and Enterprise/Service Provider UTM.

For the sake of brevity, here’s the rundown introducing the differences:

…That’s what Enterprise-class UTM is for.  The main idea here is that for a small company, UTM (perimeter UTM) is simply a box with a set number of applications or security functions, composed in various ways and leveraged to provide the ability to "do things" to traffic as it passes through the bumps in the security stack.

In large enterprises and service providers, however, the concept of the "box" has to extend to an *architecture* whose primary attributes are flexibility, resilience and performance.

I think that most people don’t hear that, because the marketing of UTM has eclipsed the engineering realities of management, operationalization and deployment based upon what most people think of as UTM.

Historically, UTM is defined as an approach to network security in
which multiple logically complementary security applications, such as
firewall, intrusion detection and antivirus, are deployed together on a
single device. This reduces operational complexity while protecting the
network from blended threats.

For large networks where security requirements are much broader and
complex, the definition expands from the device to the architectural
level. In these networks, UTM is a “security services layer” within the
greater network architecture. This maintains the operational simplicity
of UTM, while enabling the scalable and intelligent delivery of
security services based on the requirements of the business and
network. It also enables enterprises and service providers to adapt to
new threats without having to add additional security infrastructure.
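
If the "security services layer" idea sounds abstract, a purely illustrative toy of my own (with made-up zone and service names) is a policy that maps each asset class or zone to the chain of virtualized security services its traffic should traverse, rather than bolting every function onto one box everywhere:

    # Toy rendering of a "security services layer" - zone and service names are invented.
    service_layer = {
        "internet_edge":    ["firewall", "ips", "anti-virus", "anti-spam"],
        "partner_extranet": ["firewall", "ips", "url_filtering"],
        "internal_pci":     ["firewall", "ids", "db_activity_monitor"],
        "dev_lab":          ["firewall"],   # low-value assets get the minimum
    }

    def services_for(zone: str) -> list:
        """Return the ordered service chain a flow entering this zone should traverse."""
        return service_layer.get(zone, ["firewall"])   # unknown zones still get a firewall

    print(services_for("internal_pci"))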

Today, Richard Stiennon (of "IDS is dead" fame) blogged
some very interesting comments ultimately asking if "..your UTM [is] a
Mutt?"  It’s an interesting comment on the UTM market as a whole where
ultimately he gets around to shoring up his question/statement by
referencing Symantec’s exit from the hardware market.

I’d say that most UTM offerings are mutts because that’s
exactly what perimeter UTM delivers — a mashup of every neighborhood
stray that happened to end up humping the same piece of hardware.  Ew.

That’s why, unless you want to be king of the pound, sporting papers which testify to your pedigree and heritage is really important.
You’re not going to win best of show looking like the sappy little
poodle-chihuahua-dingo-thing featured above.

In his scribble, Richard makes the following statement, which I addressed exactly in the comment above:

I have a problem with the idea of Universal Threat Management
appliances.  Leaving aside the horrible terminology (Who wants to
manage threats? Don’t you want to block them and forget about them?)
the question that I always ask is: If best-of-breed is the standard for
large enterprises why would it be good practice for a smaller entity to
lump a lot of security functions such as firewall, email gateway, spam
filter, anti-virus, anti-spyware, IDS, IPS, and vulnerability
management all in one under-powered device?

Firstly, the ‘U’ in UTM stands for "Unified," not "Universal."  However, I *totally* agree with Richard that managing (T)hreats and vulnerabilities is the WRONG approach, and UTM has become this catch-all for the petty evolution of any device that continues to lump ad hoc security functions onto an existing platform and call it something else.  That’s perimeter UTM.

So, instead of managing threats, we should be managing risk.  Call me psychic, but that’s exactly what I wrote about here when I introduced the concept of Unified Risk Management (URM).

URM provides a way of closing the gap between
pure technology-focused information security infrastructure and
business-driven, risk-focused information survivability
architectures and does so by using sound risk management practices in conjunction with best
of breed consolidated Unified Threat Management (UTM) solutions as the
technology foundation of a consolidated risk management model.

Moving on, I’m not sure that, given where we are in today’s compute cycles, it’s fair to generalize that the companies Richard mentions such as Astaro, Fortinet, or Watchguard are actually "under-powered," but one could certainly argue that extensibility, flexibility and scalability are constrained by the practical limits of the underlying machinery and its ability to perform, and that clumping lots of these individual boxes together isn’t really a manageable solution.

That being said, I also wrote about this issue here whereby
I make the case that for the Enterprise and service provider markets,
commoditized general purpose boxes will not and cannot scale to
effectively meet the business and risk management requirements — even
with offload cards that plug into big, fat buses.

The reality is that like anything you do when you investigate
technology, concepts or strategy, you should map your business
requirements against the company’s appetite for risk and determine what
architecture (I didn’t say platform specifically) best fits the
resulting outcome.

If "good enough" security is good enough, you have lots of UTM
choices.  If, however, what you need is a balanced defense-in-depth
strategy invested in best-of-breed (based upon your business
requirements) which allows you to deploy security as a service layer in
an extremely high-performance, scalable, extensible, flexible and
highly-available way, may I suggest the following: (blatant plug, I
know!)

Finally, Symantec exiting the hardware business is a fine thing because all it really does is reinforce the fact that software companies should produce good software and do what they do best.

What they (and others, mind you) realize is that unifying hardware and software in a compelling way is hard to do if you want to really offer differentiation and value.  Sure, you can continue to deploy on commoditized hardware if what you want to do is serve an overly-crowded market with margins lower than dust, but why?

Richard goes on to talk about how Symantec is focusing on a more lucrative market: services.  This, in my opinion, is a fantastic idea:

Evidently Symantec is more interested in software and services going
forward. I think they may be on to something.  If the appeal of
mixed-breed, easy to manage security appliances is so great for small
businesses maybe managed security services are set to take off.

Alan Shimel responded with a follow-on perspective to Mike Rothman’s post in which he said:

If big companies want best-of-breed, why should smaller companies
settle for less than that?  It just doesn’t make sense to me.  Mike Rothman
, in his big is small theory, says that customers are willing to put up
with less than best of breed by getting it all from one big vendor.
But some of the "pile them high" UTM’s are not big companies.  Astaro,
Fortinet, Barracuda are not exactly Cisco, Symantec or McAfee. However,
they are all grabbing market share with UTM’s that do not offer best of
breed applications.

This simply comes down to economics (see the "good enough" comment above): smaller companies may want an enterprise-class UTM product, but that doesn’t mean they’ll pay for one.  Doing battle in the SMB UTM space is brutal — don’t let the big, bold numbers impress you that much.  When you’re dealing with ASPs in the $500 range, even with margins in the 40-50% bracket, you’ve got to sell a BOATLOAD of boxes to make money — then there’s the cost of all those administrative assistants-cum-network security administrators who call your support center, further burdening the bottom line.

That dovetails right into the argument regarding managed services and security in the cloud — these really are beginning to take off, so this move by Symantec is the right thing to do.  Let the folks who can deliver best-of-breed hardware running your best-in-breed software do that, and you can have your customers pay you to manage it.  In the case of Crossbeam, we don’t market/sell to the SMB, as they are our customers’ customers…namely, our enterprise and service-provider UTM offerings are deployed in a completely different space than the folks you mention above.

In this case, we win either way: either a large enterprise buys our solutions directly or they sub-out to an MSSP/ISP that uses our solution to deploy their services.  Meanwhile, the perimeter/SMB UTM vendors fight for scraps in the pound waiting to be put down because nobody claims them 😉

We’ll cover the hot topic of security outsourcing here shortly.

/Chris

Ode to a suppressant. Or, “Why a colocation facility parked in the ocean still needs fire extinguishers…”

June 26th, 2006 3 comments

It just goes to show you that even on an old anti-aircraft gunnery tower cum colocation facility squatting squarely in the middle of the ocean, you still need to master the basics of risk management — or at least buy insurance…I swear this was on the CISSP exam.

I remember reading about HavenCo a couple of years ago, when the debates about the offshore hosting and colocation of, er, interesting commercial interests were raging.

HavenCo is (well, was) an Internet-connected hosting and colocation facility located on (in) the Principality of Sealand, which prides itself on being known as the world’s smallest sovereign territory.  I thought that claim actually belonged to Cleveland, Ohio.  Oh well.

As it plays out, HavenCo is perched atop the structure that comprises said principality, located 6 miles off the coast of Britain.  It was previously known as "Roughs Tower," an island fortress (anti-aircraft battery tower, actually) created in World War II by the British and ultimately "…surrendered/abandoned to the jurisdiction of the High Seas."  You can read about Sealand.  It’s a really trippy concept.  Read the history and fast forward to the tenants who are the featured element of this story…

If you’re interested in the guts of the place, check this out.  At least you know they have a toilet.

From HavenCo’s FAQ, you can clearly see that they pride themselves on providing the utmost service for their customers:

What makes HavenCo the best secure colocation facility?

  • Unsurpassed physical security from the world, including government
    subpoenas and search and seizures of equipment and data.
  • Redundancy and Reliability
  • Quality – 3 milliseconds from The City of London.
  • Tamper resistance – Our standard machines come with encrypted disk for
    user data partitions. We will deploy FIPS 140-1 Level 4 coprocessors, the
    highest security anyone has ever achieved, and offsite unlock codes.

It seems that the only thing missing was a fire extinguisher as HavenCo apparently burst into flame yesterday when a generator caught fire.  From EADT:

A FORMER wartime fortress which is now a self-proclaimed independent state has been left devastated after a fierce blaze tore through the structure.

The so-called Principality of Sealand, seven miles off the coast of Felixstowe and Harwich, was evacuated at lunchtime yesterday after a generator caught fire.

Thames Coastguard, Harwich RNLI lifeboat, Felixstowe Coastguard rescue teams, firefighting tug Brightwell, the RAF rescue helicopter from Wattisham and 15 Suffolk based firefighters from the National Maritime Incident Response Group (MIRG) were all called into action to tackle the blaze.

One man, believed to be a security guard, was airlifted from the scene and taken to Ipswich Hospital with smoke inhalation but no one else was on the Second World War gun emplacement.

“There have been a number of explosions on board as the fire has engulfed gas bottles and batteries. Only one person was on Sealand at the time, whom we understand to be a watchman whose job was to maintain the generators and equipment.

Horrible, really.  Especially when you realize that the royal family don’t appear to think that fire insurance is a necessary risk management utility.

I seem to recall reading stories of a nitrogen-filled data center intended either to provide anti-aging capabilities for the inhabitants (Sealand’s "rulers" are royals, after all…and we know how strange they can be) or to suppress fire due to the absence of oxygen.

Oh well, dashed are my hopes of starting my own off-shore casino.  Perhaps I should consider speculative real estate.  Seems Sealand’s having a fire sale.

/Chris


If news of more data breach floats your boat…

June 26th, 2006 No comments

U.S. Navy: Data Breach Affects 28,000

It looks like we’re going to get one of these a day at this point.  Here’s the latest breach-du-jour.  I guess someone thought that our military veterans were hogging the limelight, so active-duty personnel (and their families, no less) get their turn now.  From eWeek:

Five spreadsheet files with personal data on approximately 28,000 sailors and family members were found on an open Web site, the U.S. Navy announced June 23. 

The personal data included the name, birth date and social security
number on several Navy members and dependents. The Navy said it was
notified on June 22 of the breach and is working to identify and notify
the individuals affected.

"There is no evidence that any of the data has been used illegally.
However, individuals are encouraged to carefully monitor their bank
accounts, credit card accounts and other financial transactions," the
Navy said in a statement.

Sad.

Why are people so shocked re: privacy breaches?

June 25th, 2006 4 comments

This is getting more and more laughable by the minute.  From Dark Reading:

JUNE 22, 2006 | Another
day, another security breach: In the last 48 hours, Visa, Wachovia,
Equifax, and the U.S. Department of Agriculture have joined a growing
list of major companies and government agencies to disclose they’ve
been hit by sensitive — and embarrassing — security breaches.

The organizations now are scrambling to assist customers and
employees whose personal information was either stolen or compromised
in recent weeks. They join AIG, ING, and the Department of Veterans
Affairs, all of which have disclosed major losses of sensitive data in
the last few weeks.

Each of the incidents came to light well after the fact.

Disclaimer: I am *not* suggesting that anyone should make light of or otherwise shrug off these sorts of events.  I am disgusted and concerned just like anyone else with the alarming rate of breach and data loss notifications in the last month, but you’re not really surprised, are you?  There, I’ve said it.

If anyone has any real expectation of privacy or security (two different things) when your data is in the hands of *any* third party, you are guaranteed to be sorely disappointed one day.  I fully expect that no matter what I do, some amount of my personal information will be obtained, misappropriated and potentially misused in my lifetime.  I fully expect that any company I work for will ultimately have this problem, also.  I do what I can to take some amount of personal responsibility for this admission (and its consequences) but to me, it’s a done deal.  Get over it.

The Shimster (my bud, Alan Shimel) also wrote about some of this here and here.

Am I giving up and rolling over dead?  No.  At the same time, I am facing the realities of the overly-connected world in which we live and, more so, the position in which I choose to live it.  It isn’t with my head in the sand or in some other dark cavity, but rather scanning the horizon for the next opportunity to do something about the problem.

Anyone who has been on the inside of protecting the critical assets of an Enterprise knows that it isn’t "if" you’re going to have a problem with data or assets showing up somewhere they shouldn’t (or that you did not anticipate) but rather "when" … and hope to (insert deity here) it isn’t on your watch.

Sad but true.  We’ve seen corporations with every capability at their disposal show up on the front page because they didn’t/couldn’t/wouldn’t put in place the necessary controls to prevent these sorts of things from occurring…and here’s the dirty little secret: there is nothing they can do to completely prevent these sorts of things from occurring.

Today we focus on "network security" or "information security" instead of "information defensibility" or "information survivability," and this is a tragic mistake: we’re focusing on threats and vulnerabilities instead of RISK, which is a losing proposition because of these little annoyances called human beings and those other little annoyances they (we) use called computers.

Change control doesn’t work.  Data classification doesn’t work (*see below).  Policies don’t work.  In the "real world" of IM, encrypted back channels, USB drives, telecommuting, web-based storage, VPNs, mobile phones, etc., all it takes is one monkey to do the wrong thing even in the right context and it all comes tumbling down.

I was recently told that security is absolute.  Relatively speaking, of course, and that back in the day, we had secure networks.  That said nothing, of course, about the monkeys using them.

Now, I agree that we could go back to the centralized computing model with MAC/RBAC, dumb networks, draconian security measures and no iPods, but we all know that the global economy depends upon people being able to break/bend the rules in order to "innovate" and move business along the continuum and causing me not to put that confidential customer data on my laptop so I can work on it at home over the weekend would impact the business…

The reality is that no amount of compliance initiatives, technology, policies or procedures is going to prevent this sort of thing from happening completely, so the best we can do is try as hard as we can as security professionals to put a stake in the ground, start managing risk knowing we’re going to have our asses handed to us on a platter one day, and do our best to minimize the impact it will have.  But PLEASE don’t act surprised when it happens.

Outraged, annoyed, concerned, angered and vengeful, yes.  Surprised?  Not so much.

Until common sense comes packaged in an appliance, prepare for the worst!

/Chris

P.S. Unofficially, only 3 out of the 50 security professionals I contacted who *do* have some form of confidential information on their laptops (device configs, sample code, internal communications, etc.) actually utilize any form of whole disk encryption.  None use two-factor authentication to provide the keys in conjunction with a strong password.  See here for the skinny as to why this is relevant.

*Data classification doesn’t work because there’s no way to enforce classification uniformly in the first place.  For example, how many people have seen documents stamped "confidential" or "Top Secret" somewhere other than where these sorts of data should reside?  Does MS Word or Outlook force you to "classify" your documents/emails before you store/print/send them?  Does the network have an innate capability to prevent the "routing" of data across segments/hosts?  What happens when you cut/paste data from one form to another?

I am very well aware of many types of solutions that provide some of these capabilities, but it needs to be said that they fail (short of being deployed at arterial junctions such as the perimeter) because:

  1. They usually expect to be able to see all data.  Unlikely, because anyone who has a large network with computers connected to it knows this is impossible (OK, improbable).
  2. They want to be pointed at the data and classify it so it can be recognized.  Unlikely because if you knew where all the data was, you’d probably be able to control/limit its distribution.
  3. They expect that data will be in some form that triggers an event based upon the discovery of its existence or movement.  Unlikely because of encryption (which is supposed to save us all, remember? 😉) and the fact that people are devious little shits.  (The naive scanner sketched just after this list illustrates the point.)
  4. What happens when I take a picture of it on my screen with my cameraphone, send it out-of-band and it shows up on a blog?
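
To make point #3 concrete, here is the sort of naive pattern-matching these products lean on (a toy of my own, not any vendor’s engine); the moment the data is encoded, encrypted or simply reformatted, it sails right past:

    import base64
    import re

    SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

    def looks_sensitive(blob: str) -> bool:
        """Naive content inspection: flag anything resembling a U.S. SSN."""
        return bool(SSN_PATTERN.search(blob))

    record = "SSN: 123-45-6789"
    print(looks_sensitive(record))                                       # True  - caught
    print(looks_sensitive(record.replace("-", " ")))                     # False - same digits, different separator
    print(looks_sensitive(base64.b64encode(record.encode()).decode()))   # False - same data, merely encoded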

Rather, we should exercise some prudent risk management strategies, hope to whomever that those boring security awareness trainings inflict some amount of guilt and hope for the best.

But seriously, authenticating access *to* any data (no matter where it exists) and then being able to provide some form of access control, monitoring and non-repudiation is a much more worthwhile endeavor, IMHO.
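
As a sketch of what that might look like (my own toy illustration, not a product pitch): every read of a classified object goes through a single gate that checks an ACL and appends a tamper-evident audit record, which buys you access control, monitoring and a measure of non-repudiation without having to chase the data around the network:

    import hashlib
    import json
    import time

    ACL = {"q2_forecast.xls": {"alice", "bob"}}   # hypothetical object and its authorized readers
    audit_log = []

    def read_object(user: str, name: str) -> str:
        allowed = user in ACL.get(name, set())
        record = {"ts": time.time(), "user": user, "object": name, "allowed": allowed}
        # Chain each record to the previous one so after-the-fact tampering is detectable.
        prev = audit_log[-1]["digest"] if audit_log else ""
        record["digest"] = hashlib.sha256(
            (prev + json.dumps(record, sort_keys=True)).encode()).hexdigest()
        audit_log.append(record)
        if not allowed:
            raise PermissionError(f"{user} may not read {name}")
        return f"<contents of {name}>"

    print(read_object("alice", "q2_forecast.xls"))   # allowed, and logged either way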

Otherwise, this exercise is like herding cats.  It’s a general waste of time because it doesn’t make you any more "secure."

I’m getting more cynical by the (breach) minute…BTW, Michael Farnum just wrote about this very topic…

People Positing Pooh-Poohing Pre-emptive Patching Practices Please Provide Practical Proof…

June 18th, 2006 2 comments

I was reading Rothman’s latest post on Security Incite regarding patching and I am left a little confused about his position.  Despite his estimation of a high score on the “boredometer scale” as it relates to the media’s handling of the patching frenzy (I *do* agree with that), I think he’s a little sideways on the issue.  At least now we can say that we don’t always agree.

Mike writes:

I
hate Patch Tuesday. It’s become more of a media circus than anything useful
nowadays. So instead of focusing on what needs to be done, most security
administrators need to focus on what needs to be patched. Or not. And that
takes up more time because in reality, existing defenses reduce (if not
eliminate) the impact of many of the vulnerabilities being patched. Maybe it’s
just my ADD showing, in that these discussions are just not interesting
anymore. If you do the right stuff, then there shouldn’t be this crazy urgency
to patch – you are protected via other defenses. But the lemmings need
something to write about, so there you have it.

One lemming, reporting for duty, sir!

Specifically, Mike’s opinion seems to suggest that basically people who “…do
the right stuff” don’t need to patch because “…in reality, existing defenses
reduce (if not eliminate) the impact of many of the vulnerabilities being
patched.”

Since Mike’s always the champion of the little people, I’ll refer him to the fact that perhaps not everyone has all the “…existing defenses” to rely upon – or better yet, keeps them up to date (you know, sort of like patching – but for security appliances!)  In fact, I’m going to argue that despite everyone’s best efforts, currently, stealthy little zero-day Trojan buggery does a damn good job of getting through these defenses, despite the vendor hype to the contrary.

Emerging technology will make these sorts of vulnerabilities less
susceptible to exploit, but that’s going to mean a whole lot of evolution on
the part of both the network and the host layer security solutions; there are a LOT of solutions out there now and not ONE of them actually works well in the real world.

I still maintain that relying on the hosts (the things you are protecting – and worried about) to auto-ameliorate is a dumb idea.  It’s akin to why I think we’re going to have to spend just as much time defending the “self-defending network” as we do today with our poorly-defended ones.

I’m going to tippytoe out on the ledge here because I have a feeling that my
response to Mike’s enormous generalization will leave him with just as huge of a hole to bury
me in, but so be it.  I think he was in a hurry to go on vacation, so please cut him some slack! 😉

Specifically, many of the latest critical patches were released to counter exploits targeted at generic desktop applications such as Excel, Powerpoint and Internet Explorer; things that users rely on every day to perform their job duties at work.

You don’t have to click
on links or open attachments for these beauties to blow up, you just open a
document from “your” IT department over the "trusted" network drive map that was infected by a rogue scanning worm
which deposited Trojans across your enterprise and BOOM! No such thing as “trust but verify” in the
real world, I’m afraid. 

By the way, this little beauty came into your network through a USB drive that someone used to bring their work from home back to the office…sound familiar?

Yep, we can close that hole down with more layers of security software — or better yet, epoxy the USB slots closed! 😉

OK, OK, I’m generalizing, too.  I know it, but everyone else does it …

I don’t know what the “right stuff” is, but if it includes using the
Internet, Word, Powerpoint or Excel, short of additional layers of host-based
security, it’s going to be difficult to defend against those sorts of
vulnerabilities without some form of patching (in combination with reasonable amounts of security — driven by RISK.)

Suggesting that people will do the right thing is noble – laughable, but
noble. 

I’ve heard the CTOs from several security companies during talks at computer security tradeshows brag that they don’t use AV on their desktop computers, always “do the right thing(s),” and have never been compromised.

I think that’s a swell idea – a little contradictory and stupid if you sell
AV software – but swell nonetheless.  I
wish I was as attentive as these guys, but sometimes doing the right thing
means you actually have to know the difference between “right” and “wrong” as
it relates to the inner workings of rootkit installations.   If these experts don’t do the "right thing" based upon
what we hear every day (patch your systems, keep your AV up to date,
run anti-spyware, etc…) what makes you think Aunty Em is going to
listen?

I’ll admit, I know a thing or two about computers and security.  I try to do the “right thing” and I’ve been
lucky in that I have never had any desktop machine I’ve owned compromised.  But it takes lots of technology, work,
diligence, discipline, knowledge and common sense.  That’s a lot of layers. Rot Roh.

Changing gears a little…

It gets even more interesting when we see statistics that uncover the fact that 1 out of 4 Microsoft flaws is discovered by vulnerability bounty hunters – professionals paid to discover flaws!  That means we’re going to see more and more of these vulnerabilities discovered because it’s good for business.  Then will come the immediate exploits and the immediate patches.

Speaking of which, now that Microsoft is at the “Forefront” of the security
space with their desktop security offerings, they will get to charge you for a
product that protects against vulnerabilities in the operating system that you
purchased – from them! Sweet! That is one bad-ass business model.

We’re going to have to keep patching.  Get over it.

/Chris

Got Rational Security?

June 14th, 2006 No comments

I love Google.  I found this whilst browsing this morning:
[image]


IDS/IPS – Finger Lickin’ Good!

June 13th, 2006 6 comments

[Much like Colonel Sanders’ secret recipe, the evolution of "pure" IPS is becoming an interesting combo bucket of body parts — all punctuated, of course, by a secret blend of 11 herbs and spices…]

So, the usual suspects are at it again and I find myself generally agreeing with the two wise men, Alan Shimel and Mike Rothman.  If that makes me a security sycophant, so be it.  I’m not sure, but I think these two guys (and Michael Farnum) are the only ones who read my steaming pile of blogginess — and of course Alex Neihaus, who is really madly in rapture with my prose… 😉

Both Alan and Mike are discussing the relative evolution from IDS/IPS into "something else." 

Alan references a specific evolution from IDS/IPS to UTM — an even more extensible version of the traditional perimeter UTM play — with the addition of post-admission NAC capabilities.  Interesting.

The interesting thing here is that NAC typically isn’t done "at the perimeter" — unless we’re talking about the need to validate access via VPN, so I think that this is a nod towards the fact that there is, indeed, a convergence of thinking that demonstrates the movement of "perimeter UTM" towards the Enterprise UTM deployments that companies are choosing to purchase in order to manage risk.

Alan seems to be alluding to the fact that these Enterprises are considering internal deployments of IPS with NAC capabilities.  I think that is a swell idea.  I also think he’s right.  NAC and about 5-6 other key, critical applications are a natural fit for anything supposed to provide Unified Threat Management…that’s what UTM stands for, after all.

Mike alludes to the reasonable assertion that IDS/IPS vendors are only riding the wave preceding the massive ark building that will result in survival of the fittest, where the definition of "fit" is based upon what the customer wants (this week):

Of course the IDS/IPS vendors are going there because customers want
them to. Only the big of the big can afford to support all sorts of
different functions on different boxes with different management (see No mas box). The great unwashed want the IDS/IPS built into something bigger and simpler.

True enough.  Agreed.  However, there are vendors — big players — such as Cisco and Juniper that won’t use the term UTM because it implies that their IDS and IPS products, stacked with additional functions, are in fact turkeys (following up with the poultry analogies) and that there exists a guilt by association suggesting that UTM is still considered a low-end solution.  The ASP of most UTM products is around the $1500 range, so why fight for scraps?

So that leads me to the point I’ve made before wherein I contrast the differences in approach and the ultimate evolution of UTM:

Historically, UTM is defined as an approach to network security in
which multiple logically complementary security applications, such as
firewall, intrusion detection and antivirus, are deployed together on a
single device. This reduces operational complexity while protecting the
network from blended threats.

For large networks where security requirements are much broader and
complex, the definition expands from the device to the architectural
level. In these networks, UTM is a “security services layer” within the
greater network architecture. This maintains the operational simplicity
of UTM, while enabling the scalable and intelligent delivery of
security services based on the requirements of the business and
network. It also enables enterprises and service providers to adapt to
new threats without having to add additional security infrastructure.

My point here is that just as firewalls added IDS and ultimately became IPS, IPS has had Anti-X added to it and become UTM — but Perimeter UTM.  The thing missing there is the flexibility and extensibility of these platforms to support more functions and features.

However, as both Mike and Alan point out, UTM is also evolving into architectures that allow for virtualized security service layers to be deployed from more scalable platforms across the network.  The next logical evolution has already begun.

When I go out on the road to speak and address large audiences of folks who manage security, most relay that they simply do not trust IPS devices with automated full blocking turned on.  Why?  Because the devices lack context.  While integrated VA/VM and passive/active scanning adds to the data collected, is that really actionable intelligence?  Can these devices really make reasonable judgements as to the righteousness of the data they see?

Not without BA functionality, they can’t.  And I don’t mean today’s NBA (a la Gartner: Network Behavior Analysis) or NBAD (a la Arbor/Mazu: Network Behavioral Anomaly Detection) technology, either. 

[Put on your pads, boys, ‘cos here we go…]

NBA(D) as it exists today is nothing more than a network troubleshooting and utilization tool, NOT a security function — at least not in its current form and not given the data it collects today.  Telling me about flows across my network IS, I admit, mildly interesting, but without the fast-packet cracking capabilities to send flow data *including* content, it’s not very worthwhile (yes, I know that newer versions of NetFlow will supposedly do this, but at what cost to the routers/switches that will have to perform this content inspection?)

NBA(D) today takes xFlow and looks at traffic patterns/protocol usage, etc. to determine if, within the scope of limited payload analysis, something "bad" has occurred.
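
By way of illustration (a toy of my own, not how any particular NBA(D) product works), the math largely boils down to baselining flow metadata and flagging statistical outliers; notice that there is no payload anywhere in the picture:

    from statistics import mean, stdev

    # Per-host baseline built from flow exports (bytes per flow) - metadata only, no content.
    baseline = [12000, 11500, 13100, 12700]          # "normal" flows observed last week
    mu, sigma = mean(baseline), stdev(baseline)

    def is_anomalous(nbytes: int, threshold: float = 3.0) -> bool:
        """Flag a flow whose byte count sits more than `threshold` sigmas off the baseline."""
        return abs(nbytes - mu) > threshold * sigma

    print(is_anomalous(12600))    # False - looks like business as usual
    print(is_anomalous(950000))   # True  - something big left the building, but *what*? No idea.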

That’s nice, but then what?  I think that’s half the picture.  Someone please correct me, but today NetFlow comes primarily from routers and switches; when do firewalls start sending NetFlow data to these standalone BA units?  Don’t you need that information, in conjunction with the exports from routers/switches at a minimum, to make even the least substantiated decision on what disposition to enact?

ISS has partnered with Arbor (good move, actually) in order to take this first step towards integration — in their world it’s IPS+BA.  Lots of other vendors — like SourceFire — are also developing BA functionality to shore up the IPS products — truth be told, they’re becoming UTM solutions, even if they don’t want to call their products by this name.

Optenet (runs on the Crossbeam) uses BA functionality to provide the engine and/or shore up the accuracy for most of their UTM functions (including IPS) — I think we’ll see more UTM companies doing this.  I am sure of that (hint, hint.)

The dirty little secret is that despite the fact that IDS is supposedly dead, we see (as do many of the vendors — they just won’t tell you so) most people purchasing IPS solutions and putting them in IDS mode…there’s a good use of money!

I think the answer lies in the evolution from the turkeys, chickens and buzzards above to the eagle-eyed Enterprise UTM architectures of tomorrow — the integrated, consolidated and virtualized combination of UTM with NAC and NBA(D) — all operating in a harmonious array of security goodness.

Add VA/VM, Virtual patching, and the ability to control how data is created, accessed, manipulated and transported, and then we’ll be cooking with gas!  Finger lickin’ good.

But what the hell do I know — I’m a DoDo…actually, since I grew up in New Zealand, I suppose that really makes me a Kiwi.   Go figure.

Full Drive Encryption on Laptops – Time for all of us to “nut up or shut up!”

June 11th, 2006 7 comments

…or "He who liveth in glass houses should either learn to throw small stones or investeth in glass insurance…lots and lots of glass insurance. I, by the way, have lots and lots of glass insurance ;)"

Given all of the recently disclosed privacy/identity breaches resulting from stolen laptops that inappropriately contained confidential data, we’ve had an exponential increase in posts in the security blogosphere regarding this matter.

This is to be expected.  This is what we do.  It’s the desperate housewives complex. 😉

These posts come from the many security experts, analysts, pundits and IT professionals bemoaning the obvious poor application of policies, procedures, technology and standards that would "prevent" this sort of thing from happening and calling for heads to roll: not only those of the very people who perpetrated the crime, but also those responsible for making the crime possible; namely, the monkey who put the data on the laptop in the first place.

So, since most of us who are "security experts" or IT professionals almost always utilize laptops in our lines of work, I ask you to honestly respond in comments below to the following question:

What whole-disk encryption solution utilizing two-factor authentication do you use to prevent an exposure of data should your laptop fall into the wrong hands?  You *do* use a whole-disk encryption solution utilizing two-factor authentication to secure the data on your laptop…don’t you?

Be honest. If you don’t use a solution like this then please don’t post another thing on this topic condemning anyone else.  Ever.

Sure, you may say that you don’t keep confidential information on your laptop and that’s great.  However, if you’ve got email and you’re involved in a company as a security/IT person (or management, or even as a general user), that argument’s already in the bullshit hopper.

If you say that you use encryption for specifically identified "confidential" files and information but still use a web-browser or any Office product on a Windows platform,  for example, please reference the aforementioned bovine excrement container.  It’s filling up fast, eh?

See where this is going?  If we, the keepers of the gate, don’t implement this sort of solution and we still gabble on about how crappy these errant users are, how irresponsible their bosses, how aware we should make and liable we should hold their Board of Directors, the government, etc…

I’ll ask you the same question about that USB thumb drive you have hanging on your keychain, too.

Don’t be a hypocrite…encrypt yo shizzle.

If you don’t already, stop telling everyone else what lousy humans they are for not doing this and instead focus on getting something like this, or at a minimum, this.
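
For the curious, the "two-factor" part conceptually means the volume key is never derived from the passphrase alone; it is also bound to a secret held on the token or smart card.  A toy sketch of the idea (my own illustration, not how any particular FDE product actually implements it):

    import hashlib
    import os

    def derive_volume_key(passphrase: str, token_secret: bytes, salt: bytes) -> bytes:
        """Stretch the passphrase, then bind it to the token's secret (both are required)."""
        stretched = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)
        return hashlib.sha256(stretched + token_secret).digest()

    salt = os.urandom(16)           # stored on disk; not secret
    token_secret = os.urandom(32)   # lives only on the token/smart card
    key = derive_volume_key("a strong passphrase goes here", token_secret, salt)
    print(key.hex())                # a stolen laptop plus a guessed password still isn't enough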

/Chris

Unified RISK Management – Towards a Business-Driven Information Survivability Architecture

June 10th, 2006 No comments

This is Part I of a two-part series on a topic for which I coined the phrase "Unified Risk Management."  The second part of this paper will be out shortly.  You can download this paper as a .PDF from here.

NOTE: This is a little long for a blog post, but it should make for an interesting read.

Abstract

Managing risk is fast becoming a lost art. As the pace of technology’s evolution and
adoption overtakes our ability to assess and manage its impact on the business,
the overrun has created massive governance and operational gaps resulting in
exposure and misalignment. This has
caused organizations to lose focus on the things that matter most: the
survivability and ultimate growth of the business.

Overwhelmed with the escalation of increasingly complex
threats, the alarming ubiquity of vulnerable systems and the constant onslaught
of rapidly evolving exploits, security practitioners are forced to choose the
unending grind of tactical practices – focused on deploying and managing
security infrastructure –  over the
strategic art of managing and institutionalizing risk-driven architecture as a business
process.

In order to understand the nature of this problem and its
resolution we have separated this discussion into two separate papers:

· In Part One (this paper), we analyze the gap between
pure technology-focused information security infrastructure and
business-driven, risk-focused information survivability
architectures.

· In Part Two (a second paper), we show how this
gap is bridged using sound risk management practices in conjunction with best
of breed consolidated Unified Threat Management (UTM) solutions as the
technology foundation of a consolidated risk management model. We will also
show how governance organizations, business stakeholders, network and security
teams can harmonize their efforts to produce a true business protection and
enablement strategy that delivers security as an on-demand service layer at the
speed of business. This is a process we
call Unified Risk Management or URM.

The Way Things Are

Today’s constantly expanding chain of technically-complex security point solutions does not necessarily reduce or effectively manage risk; these products mitigate threats and vulnerabilities by solving specific technical problems, but they do so without context for the assets they are tasked to protect and at a cost that may outweigh the protected assets’ value.

But how does one go about defining and measuring risk?

Spire Security’s Pete Lindstrom best defines being able to
measure and manage risk by first describing what it is not:

· Risk is not static; it is dynamic and fluctuates
constantly with potentially high degrees of variation.

· Risk is not about the possibility that something
bad could happen; it is about the probability that it might happen.

· Risk is not some pie-in-the-sky academic
exercise; you have all of the necessary information available to you today.

· Risk is not a vague, ambiguous concept; it is a
continuum along which you can plot many levels of tolerance and aversion.

It is clear that based upon research available today, most
organizations experience difficulty aligning threats, vulnerabilities and
controls to derive the security posture of the organization (defined as acceptable or not by the business itself). In fact, much of what is referred to as risk management today is
actually just complex math in disguise indicating an even more complex
extrapolation of meaningless data that drives technology purchases and
deployments based upon fear, uncertainty and doubt. Nothing sells security like a breach or new worm.

As such, security practitioners are typically forced into
polarizing decision cycles based almost exclusively on threat and vulnerability
management and not a holistic risk management approach to deploying security as
a service. They are distracted by the
market battles to claim the right to the throne of Network Security Supremacy
to the point where the equipment and methodology used to fight the war has
become more attractive than the battle itself.

In most cases, these security products are positioned as
being either integrated into the network infrastructure such as routers or
switches or bolted onto it in the form of single vendor security suite
appliances. These products typically do
not collaborate, interoperate, communicate or coordinate their defensive
activities with solutions not of a like kind.

Realistically, there is room for everyone at the
table. Network vendors see an
opportunity to continue to leverage their hold on market share by adding value
in the form of security while pure-play security vendors continue to innovate
and bring new products and solutions to market that address acute needs that
the other parties cannot. Both are
needed but for different reasons.

Neither of the extremes represents an ultimate answer. Meeting in the middle is the best answer with
an open, extensible, and scalable network security reference architecture that
integrates as a network switch with all of the diversity and functionality
delivered by on demand best of breed security functions.

As the battle rages, multiple layers of overlapping proprietary technologies are being pressed into service against risks which are often not quantified and threats that are not recognized, attempting to defend against vulnerabilities which, within context, may have little recognized business impact.

In many cases, these solutions are marketed as new
technology when in fact they exist as re-badged products with additional
functions cobbled together onto outdated or commoditized hardware and software
platforms, polished up and marketed as UTM or adaptive security solutions.

It is important to make clear the definition of UTM within
the context of the mainstream security solution space offered by most vendors
today. UTM solutions are those which provide an aggregate of security
functionality comprised of at least network firewall, network intrusion
detection and prevention, and
gateway anti-virus. UTM solutions are
often extended to offer additional functionality such as VPN, URL filtering,
and anti-spam capabilities with a recognized benefit of squeezing as much
functionality from a single product offering in order to maximize the
investment and minimize the number of arterial insertion points throughout the
network.

Most of the UTM solutions on the market today provide a
single management interface which governs the overall operation of many
obfuscated moving parts which deliver the functionality advertised above.

In many cases, however, there are numerous operational and
functional compromises made when deploying typical single application/multiple
function appliances or embedded security extensions applied to routers and
switches. These compromises range from
poor performance to an inability to scale based on emerging functionality or
performance requirements. The result is what some hope is “good enough” and
implies a tradeoff favoring cost over security.

Unfortunately, this model of “good enough” security is
proving itself not good enough as these solutions can lead to cost and
management complexities that become a larger problem than the perceived threat
and vulnerabilities the solutions were designed to mitigate in the first place.

So what to do? Focus
on risk!

Prudent risk management strategy dictates that the best
method of securing an organization’s most critical assets is the rational
application of policy, technology and processes where ultimately the risk
justifies the cost.

It is within this context that the definition of
information survivability demands an introduction as it bears directly on the
risk management processes described in this paper. In their paper titled “Information
Survivability: Required Shifts in Perspective,” Allen and Sledge introduce the
concept of information survivability as a discipline which is defined as “…the
capability of a system to fulfill its mission, in a timely manner, in the
presence of attacks, failures, or accidents.”

They further juxtapose information survivability against
information security by illustrating that information security “…takes a
technology centric point of view, with each technology solving a specific set
of issues and concerns that are generally separate and distinct from one
another. Survivability takes a broader,
more enterprise-wide point of view looking at solutions that are more pervasive
than point-solution oriented.”

Information survivability thus combines elements of
business impact, continuity, contingency and disaster recovery planning with
the more narrowly-focused and technical information security practices, thereby
elevating the combined foundational elements to an enterprise-wide risk
management concern.

From this perspective, risk management is not just about
the latest threat. It is not just about
the latest vulnerability or its exploit. It is about how, within the context of the continued operation of the
business and even while under duress, the organization’s mission-critical
functions will be sustained and the most important data will be appropriately
protected.

The language of risk

One obvious illustration of this risk gap is how disconnected today’s enterprise security and networking staffs remain, even when their business interests should be so very closely aligned. Worse yet is the resultant misalignment of both teams with the enterprise’s mission and appetite for risk.

As an example, while risk analysis is conducted on one side of the house with little understanding of the network and all its moving parts, a sprinkling of network and security appliances is strung together on the other side of the house with little understanding of how these solutions will affect risk or whether they align to the objectives that matter to the business at all.

To prove this point, ask your network team if they know what the OCTAVE or COBIT frameworks are and how current operational security practices map to either of them. Then, ask the security team if they know how MPLS VRFs, BGP route reflectors or the spanning tree protocol function at the network level and how these technologies might affect the enterprise’s risk posture.

Then, ask representative business stakeholders if they can articulate how the answers given by either of the parties clearly map to their revenue goals for the year and how their regulatory compliance requirements may be affected. Where are the metrics to support any assertion?

Thus, while both parties seek to serve the business with a
common goal of balancing security with connectivity neither speaks a common
language that can be used to articulate the motivation, governance or value of
each other’s actions to the business.

At the level of network security integration, can either
team describe the mapping of asset-based risk categories across the enterprise
to the network infrastructure? Can they tell you tomorrow what the new gaps are
at each risk category level and provide a quantifiable risk measurement across the
enterprise of the most critical assets in a matter of minutes?

This illustration defines the problem at hand: how do we make sure that we deliver exactly what the business requires to protect the most critical assets, in a manner fitting the risk profile of the organization, and no more?
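
As a point of reference, the quantifiable risk measurement referred to above need not be exotic. Even the textbook annualized loss expectancy arithmetic (ALE = SLE x ARO), applied per asset class with honest inputs, is a reasonable starting point. The following toy illustration uses invented asset classes, values and rates:

    # Textbook annualized-loss arithmetic per asset class; every number below is invented.
    asset_classes = {
        # class: (single_loss_expectancy_usd, annualized_rate_of_occurrence)
        "customer_db": (2_000_000, 0.10),
        "public_web":  (   50_000, 2.00),
        "dev_lab":     (   10_000, 0.50),
    }

    ranked = sorted(asset_classes.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
    for name, (sle, aro) in ranked:
        print(f"{name:12s}  ALE = ${sle * aro:,.0f}/yr")   # spend where the ALE actually is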

Interestingly, from an economic point of view, the failure
to create a tightly integrated risk management ecosystem results almost by
definition in a completely inefficient and ineffective solution. Without risk
management basics such as asset and data classification and zoned network
segmentation by asset class, the network has the very real potential to actually
be over-defended at risk boundaries and thus drive costs and complexity much
higher than they need to be.

Consequently, most, if not all, security controls and
prescribed protective technologies are applied somewhat indiscriminately across
the enterprise as a whole. Either too
much security is applied or many of the features of the solution are disabled
since they are not needed. Where is the
return on investment there? Do you need
URL filtering in a DMZ? Do you need
SOA/XML schema enforcement applied across user desktops? No. So
why deploy complex blanketed security technology where it is neither needed nor
justified?

For example, since all assets and the data they contain are not created equal, it is safe to assume that the impact to the business caused by something “bad” happening to any two assets of different criticality would also not be equal. If this is an accepted corollary, does it make sense to deploy solutions that provide indiscriminate protective umbrellas over assets that may not need any protection at all?

In many cases, this issue also plays out in a different direction, as security architectures are constrained by the deployment of the physical wiring closets and switch and router infrastructures. Here, the ability or willingness to add point solution devices in-line, one after the other, between key network arteries, to incrementally add specialized security blades into core network components, or even to forklift switching and routing infrastructure to provide for “integrated security” is hideously problematic.

In these cases, overly-complex solutions consist of devices
sprinkled in every wiring closet because there will probably be a
representative computing resource of every risk category in that area of the
network.

Here we are being asked to change the network to fit the
security model rather than the other way around. If the network was built to accommodate the
applications and data that traverse it, should we not be just as nimble, agile
and accommodating in our ability to defend it?

Referring back to the definition of risk management, the
prudent answer is to understand exactly where you are at risk, why, the
business impact, and exactly what is needed from a control perspective to
appropriately manage the risk. In some
cases the choice may be to assert no control at all based upon the lack of
business impact to the organization.

One might ask if the situation is not better than it was
five years ago. The answer to this question is unclear – the effects of the
more visible and noisy threats such as script kiddies have been greatly
mitigated. On the other hand, the emergence of below-the-radar,
surgically-focused, financially motivated cyber-criminals has exposed business
assets and data more than ever. The net effect is that we are not, in fact,
safer than we were because we focus only on threats and vulnerabilities and not
risk.

Security is in the network…or is it in the appliance over
there?

Let us look for a moment at how technology visions spiral
out of control when decoupled from risk in a technology centric perspective. The most
blatant example is the promise of security embedded in the network or
all-in-one single vendor appliances.

On the one hand, we are promised a technically-enlightened,
self-defending network that is resilient to attack, repels intruders,
self-heals when infected and delivers security as a service as applications and
data move about fluidly pursuant to policies enforced across every platform and
network denizen.

We also are told to expect intelligent networks that offer
solution heterogeneity irrespective of operating system or access modality,
technology agnosticism, and completely integrated identity management as a way
to evolve from being data rich but information poor, providing autonomic
response when bad things happen.

Purveyors of routing and switching products plan to branch
out from the port density penetration
foothold they currently enjoy to deliver end-to-end security functionality
embedded into the very fabric of the machinery meant to move bits with the
security, reliability and speed it deserves and which the business demands.

At the other end of the spectrum, vendors who offer
single-sourced, proprietary security suites utilizing integrated functions by
way of appliances integrated into the network suggest that they will provide
the architecture of the future.

They both suggest they will provide host-based agents that
provide immune system-like responses to attempted “infection” and will take
their orders from a central networked “nervous system” that coordinates the
activities of the various security “organs” across the zones of trust defined
by policy.

They propose the evolution of the network into a sentient
platform for the delivery of business in all its forms, aware of and able to
interact with and control the applications and data which travel over it.

Data, voice, video and mobility – with all of the challenges posed by the ubiquity of access methodologies and, of course, security – are to be provided by the network platform as the launch pad for every conceivable level of service. The network will take the place of complex business logic such as Extract/Transform/Load (ETL) layers; it will deliver applications directly, commit and retrieve data dynamically, and ultimately replace the tiers of highly specialized functions and infrastructure that exist today.

All the while, as revolutionary technologies and architectures such as web services emerge, new standards compete for relevancy and the constant demand for increased speeds and feeds continues to grow, the network will have to magically scale in both performance and functionality to absorb this change while the transparency of applications, data and access modality blurs.

These vendors claim that security will simply be subsumed by the “network” as a function of service delivery, since the applications and data will be provided by a network platform completely aware of that which traverses its paths. It will be able to apply clearly articulated business processes and eliminate complex security problems by mitigating threats and vulnerabilities before an attack surface can be exploited.

These solutions are to be “open,” and allow for
collaboration across the enterprise, protecting heterogeneous elements up and
down the stack in a cooperative defense against impact to the delivery of
applications and data.

These solutions promise to be more nimble and will be
engineered to provide adaptive security capabilities in software with hardware
assist in order to keep pace with exponential increases in requirements. These solutions will allow for quick and easy
update as threats and vulnerabilities evolve. They will provide more deployment flexibility and allow for greater
coverage and value for the security dollar as policy-driven security is applied
across the enterprise.

What’s Wrong with These Answers? Mr. Fox, meet Ms. Chicken

Today’s favorite analogy for security is a direct comparison to the human immune system. The immune system of modern man is indeed a remarkable operation: inside each human being, individual organs function independently, innocuously and in an autonomic fashion. When employed in a coordinated fashion as a consolidated and cooperative system, these organs are able to fight infection by adapting, often becoming more resistant to attack and infection over time.

Networks and networked systems, it is promised, will
provide this same capability to self-defend and recover from infection. Networks of the future are being described as
being able to self-diagnose and self-prescribe antigens to cure their ills, all
the while delivering applications and data transparently and securely to those
who desire it.

It is clear, however, that unfortunately there are
infections that humans do not recover from. The immune system is sometimes overwhelmed by attack from invaders that
adapt faster than it can. Pathogens
spread before detection and activate in an overwhelming fashion before anything
can be done to turn the tide of infection. Mutations occur that were unexpected, unforeseen and previously
unknown. The body is used against itself
as the defense systems attack both attacker and healthy tissue and the patient
is ultimately overcome. These illnesses
are terminal with no cure.

Potent drugs, experimental treatments and radical medical
intervention may certainly extend or prolong life for a short time, but the
victims still die. Their immune systems
fail.

If this analogy is to be realistically adopted as the basis
for information survivability and risk management best practices, then anything
worse than a bad case of the sniffles could potentially cause networks – and
businesses – to wither and die if a more reasonable and measured approach is
not taken regarding what is expendable should the worst occur. Lose a limb or lose a life? What is more important? The autonomic system
can’t make that decision.

These glimpses into the future are still a narrowly focused technology endeavor without the intelligence necessary to make business decisions outside of the context of bits and bytes. Moreover, the deeper information security is pushed down into the stack, the less survivable our assets and businesses become, because the security system cannot operate independently of the organ it is protecting.

Applying indiscriminate and sometimes unnecessary layers of security is the wrong thing to do. It adds complexity, drives up costs, and makes manageability and transparency second-class citizens.

In both cases, these promises will simply add layer upon layer of complexity, pushing business transparency – and the due care required to maintain it – further and further from those who have the expertise to manage it. The reality is that either path will require a subscription to a single vendor’s version of the truth. Despite claims to the contrary, innovation, collaboration and integration will be subject to that vendor’s interpretation of the solution space. Core competencies will be stretched unreasonably and ultimately something will give.

Furthermore, these vendors suggest that they will provide ubiquitous security across heterogeneous infrastructure by deploying what can only be described as homogeneous security solutions. How can that be? What possible motivation would one vendor have to protect the infrastructure of its fiercest competitor?

In this case, monoculture parallels apply to security and infrastructure in the same way they do to networked devices and operating systems. Either of the examples referenced can introduce operational risk associated with targeted attacks against a single-vendor-sourced infrastructure that provides both the delivery of, and the security for, the data and applications that traverse it. We have already seen recent malicious attacks surgically designed and targeted to do just this.

What we need is perfectly described by Evan Kaplan of Aventail, who champions the notion of a “dumb” network connectivity layer – high speed, low latency, high resiliency, predictable throughput and reliability – and an “intelligence” layer that delivers value-added services via open, agile and extensible solutions.

In terms of UTM, based upon a sound risk management model,
this would provide exactly the required best of breed security value with
maximum coverage exactly where needed, when needed and at a cost that can be
measured, allocated and applied to most appropriately manage risk.

We pose the question of whether proprietary, vendor-driven, threat- and vulnerability-focused technology solutions truly offer answers to business problems and whether this approach really makes us more secure. More importantly, we call into question the ability of these offerings to holistically manage risk. We argue that they do not and inherently cannot.

The Solution: Unified Risk Management utilizing Unified
Threat Management

A holistic paradigm for managing risk is possible. This
model is not necessarily new, but the manner in which it is executed is. Best-of-breed, consolidated UTM provides this
execution capability. It applies
solutions from vendors whose core competencies provide the best solution to the
problem at hand. It can be linked
directly to asset and information criticality.

It offers the battle-hardened lessons and wisdom of those
who have practiced before us and adds to their work all of the benefits that
innovation, remarkable technology and the pragmatic application of common sense
brings to the table. The foundation is
already here. It does not require years
of prognostication, massive infrastructure forklifts or clairvoyant bets made
on leveraging futures. It is available
today.

This methodology, which we call Unified Risk Management
(URM), is enabled by applying a well-defined framework of risk management
practices to an open, agile, innovative and collaborative best-of-breed UTM
solution set combined in open delivery platforms which optimize the
effectiveness of deployments in complex network environments.

These tools are combined with common sense and the
extraordinary creativity and practical brilliance of leading-edge risk
management practitioners who have put these tools to work across organizational
boundaries in original and highly effective ways.

This is the true meaning of thought leadership in the high technology world: customers and vendors working hand-in-hand to create breakthrough capabilities without expensive equipment forklifts and without the associated browbeating from self-professed prophetic visionaries who pontificate from on high about how we have all been doing this wrong and how a completely new, upgraded infrastructure designed to sell more boxes and Ethernet ports is required in order to succeed.

URM is all about common sense. It is about protecting the right things for
the right reasons with the right tools at the right price. It is not a marketecture. It is not a fancy sales pitch. It is the logical evolution and extension of
Unified Threat Management within context.

It is about providing choice from best-of-breed offerings
and proven guidance in order to navigate the multitude of well-intentioned
frameworks and come away with a roadmap that allows for true risk management
irrespective of the logo on the front of the machinery providing the heavy
lifting. It is, quite literally, about
“thinking outside of the box.”

URM combines risk management – asset management, risk assessment, business impact analysis, exposure risk analytics, vulnerability management and automated remediation – with the virtualization of UTM security solutions as a business process in a tight feedback loop that allows for the precise management of risk. It iteratively feeds into and out of reference models such as the “Four Disciplines of Security Management” from Pete Lindstrom of Spire Security, which comprise:

· Trust Management

· Identity Management

· Vulnerability Management

· Threat Management

This system creates a continuously iterative and highly
responsive intelligent ecosystem linked directly to the business value of the
protected assets and data.
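As a rough illustration of that feedback loop, consider the hypothetical sketch below. It models only the vulnerability management and threat management disciplines from the list above, and the asset data, scoring and policy “push” are assumptions made for illustration rather than a specification of URM.

    # Hypothetical sketch of one pass of a URM-style feedback loop: re-score
    # assets by business value and exposure, then feed the result back to the
    # UTM layer as a policy decision. Only two of the four disciplines
    # (vulnerability and threat management) are modeled, for brevity.

    from dataclasses import dataclass

    @dataclass
    class Asset:
        name: str
        business_value: int          # relative value of the asset to the business
        vulnerability_score: float   # 0.0 (clean) .. 1.0 (badly exposed)
        threat_level: float          # 0.0 (quiet) .. 1.0 (actively targeted)

    def risk(asset: Asset) -> float:
        """Toy risk score: business value weighted by exposure and threat activity."""
        return asset.business_value * asset.vulnerability_score * asset.threat_level

    def urm_pass(assets: list, risk_threshold: float) -> None:
        """Assess each asset and decide whether to push a tighter UTM policy."""
        for asset in sorted(assets, key=risk, reverse=True):
            score = risk(asset)
            if score >= risk_threshold:
                # Placeholder for pushing a tightened policy or remediation job.
                print(f"tighten controls on {asset.name} (risk {score:.1f})")
            else:
                print(f"no change for {asset.name} (risk {score:.1f})")

    urm_pass(
        [Asset("billing database", 100, 0.4, 0.7),
         Asset("marketing site", 10, 0.9, 0.5)],
        risk_threshold=20.0,
    )

Run iteratively, with the scores refreshed from asset management, vulnerability management and threat intelligence feeds, this is the kind of loop described above, tied to business value rather than to raw alert counts.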

This information provides rational and defensible metrics that show value and the reduction of risk on investment. Because it communicates effectively in business terms, it is intelligible and visible to every level of the management hierarchy, from the compliance auditor to the security and network technicians to the chief executive officer.
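As a hypothetical example of such a metric, one common (and admittedly simplistic) formulation compares the reduction in expected annual loss against the cost of the control. The figures below are invented for illustration and are not drawn from this paper.

    # Hypothetical example of expressing risk reduction on investment.
    expected_annual_loss_before = 400_000   # exposure before the control, in dollars
    expected_annual_loss_after = 100_000    # exposure after the control, in dollars
    annual_control_cost = 75_000            # yearly cost of the control

    risk_reduction = expected_annual_loss_before - expected_annual_loss_after
    return_on_investment = (risk_reduction - annual_control_cost) / annual_control_cost

    print(f"Annual risk reduction: ${risk_reduction:,}")
    print(f"Return on the security investment: {return_on_investment:.0%}")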

This re-invigorated investment in the practical art of risk
management holds revolutionary promise for solving many of today’s business
problems which are sadly mislabeled as information security issues.

Risk management is not rocket science, but it does take innovation,
commitment, creativity, time, the reasonable and measured application of
appropriate business-driven policy, excellent technology and the rational
application of common sense.

This tightly integrated ecosystem consists of solutions that embody best practices in risk management. It comprises tightly-coupled and consolidated layers of UTM-based information survivability architectures that can apply the results of the analytics and management toolsets to business-driven risk boundaries in minutes. It collapses the complexity of existing architectures dramatically and applies a holistic, policy-driven risk posture that meets the security appetite of the business, and it does so while preserving existing investments in the routing and switching infrastructure that serves the business well.

Conclusion: On To the Recipe

In this first part of our two-part series, we have tried to
define the basis for looking at network security architectures and risk
management in an integrated way.  Key to
this understanding is a move away from processes in which disparate appliances
are thrown at threats and vulnerabilities without a rationalized linkage to the
global risk profile of the infrastructure.

In the second paper of the series we will demonstrate
exactly how the lightweight processes that form the foundation of Unified Risk
Management can be implemented and applied to a UTM architecture to create a
highly responsive, real-time enterprise fully aware of the risks to its
business and able to respond on a continual basis in accordance with the ever-changing
risk profile of its critical data, applications and assets.

Categories: Risk Management Tags: