Network Intelligence is an Oxymoron & The Myth of Security Packet Cracking

May 21st, 2007 No comments

[Live from Interop’s Data Center Summit]

Jon Oltsik crafted an interesting post today regarding the bifurcation of opinion on where the “intelligence” ought to sit in a networked world: baked into the routers and switches or overlaid using general-purpose compute engines that ride Moore’s curve.

I think that I’ve made it pretty clear where I stand.   I submit that you should keep the network dumb, fast, reliable and resilient and add intelligence (such as security) via flexible and extensible service layers that scale in terms of both speed and choice.

You should get to define and pick what best of breed means to you and add/remove services at the speed of your business, not the speed of an ASIC spin or an acquisition of technology that is in line with neither the pace and evolution of classes of threats and vulnerabilities nor the speed of an agile business. 

The focal point of his post, however, was to suggest that the real issue is the fact that all of this intelligence requires exposure to the data streams, which means that each component providing it needs to crack the packet before processing.   Jon suggests that you ought to crack the packet once and then do interesting things to the flows.  He calls this COPM (crack once, process many) and suggests that it yields efficiencies — of what, he did not say, but I will assume he means latency and efficacy.

So, here’s my contentious point that I explain below:

Cracking the packet really doesn’t contribute much to the overall latency equation anymore thanks to high-speed hardware, but the processing sure as heck does!  So whether you crack once or many times, it doesn’t really matter; what you do with the packet does.

Now, on to the explanation…

I think that it’s fair to say that many of the underlying mechanics of security are commoditizing so things like anti-virus, IDS, firewalling, etc. can be done without a lot of specialization – leveraging prior art is quick and easy and thus companies can broaden their product portfolios by just adding a feature to an existing product.

Companies can do this because of the agility that software provides, not hardware.  Hardware can give you economies of scale as it relates to overall speed (for certain things) but generally not flexibility. 

However, software has its own Moore’s curve of sorts, and I maintain that unfortunately its lifecycle, much like what we’re hearing @ Interop regarding CPUs, does actually have a shelf life and a point of diminishing returns for reasons that you’re probably not thinking about…more on this from Interop later.

Jon describes the stew of security componentry and what he expects to see @ Interop this week:

I expect network intelligence to be the dominant theme at this week’s Interop show in Las Vegas. It may be subtle but it’s definitely there. Security companies will talk about cracking packets to identify threats, encrypt bits, or block data leakage. The WAN optimization crowd will discuss manipulating protocols and caching files. Application layer guys crow about XML parsing, XSLT transformation, and business logic. It’s all about stuffing networking gear with fat microprocessors to perform one task or another.

That’s a lot of stuff tied to a lot of competing religious beliefs about how to do it all as Jon rightly demonstrates and ultimately highlights a nasty issue:

The problem now is that we are cracking packets all over the place. You can’t send an e-mail, IM, or ping a router without some type of intelligent manipulation along the way.

<nod>  Whether it’s in the network, bolted on via an appliance or done on the hosts, this is and will always be true.  Here’s the really interesting next step:

I predict that the next big wave in this evolution will be known as COPM for "Crack once, process many." In this model, IP packets are stopped and inspected and then all kinds of security, acceleration, and application logic actions occur. Seems like a more efficient model to me.

To do this, it basically means that this sort of solution requires proxy (transparent or terminating) functionality.  Now, the challenge is that whilst “cracking the packets” is relatively easy and cheap even at 10G line rates due to hardware, the processing is really, really hard to do well across the spectrum of processing requirements if you care about things such as quality, efficacy, and latency, and it is “expensive” in all of those categories.

The intelligence of deciding what to process and how once you’ve cracked the packets is critical. 

This is where embedding this stuff into the network is a lousy idea. 

How can a single vendor possibly provide anything more than “good enough” security in a platform never designed to solve this sort of problem whilst simultaneously trying to balance delivery and security at line rate? 

This will require a paradigm shift for the networking folks that will mean either starting from scratch and integrating high-speed networking with general-purpose compute blades, re-purposing a chassis (like, say, a Cat65K) and stuffing it with nothing but security cards grafted onto the switches, or stacking appliances (big or small, single form factor or blades) and grafting them onto the switches once again.   And by the way, simply adding networking cards to a blade server isn’t an effective solution, either.  "Regular" applications (and esp. SOA/Web 2.0 apps) aren’t particularly topology-sensitive.  Security "applications," on the other hand, are wholly dependent upon and integrated with the topologies into which they are plumbed.

It’s the hamster wheel of pain.

Or, you can get one of these which offers all the competency, agility, performance, resilience and availability of a specialized networking component combined with an open, agile and flexible operating and virtualized compute architecture that scales with parity based on Intel chipsets and Moore’s law.

What this gives you is an ecosystem of loosely-coupled best-of-breed (BoB) security services through which a flow, once cracked, can be intelligently passed in any order and ruthlessly manipulated as governed by policy – ultimately making decisions on how and what to do to a packet/flow based upon content in context.
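
To make that service-chain idea concrete, here's a minimal sketch (my own illustration with invented service names and policy structure, not any vendor's actual implementation) of a terminating proxy that cracks a flow once and then dispatches it through an ordered set of loosely-coupled security services selected by policy:

```python
# Hypothetical "crack once, process many" service chain.
# All names (FlowContext, the policy table, the service functions) are illustrative only.

from dataclasses import dataclass, field

@dataclass
class FlowContext:
    """Parsed ("cracked") flow metadata plus payload, built exactly once."""
    src: str
    dst: str
    app_proto: str
    payload: bytes
    verdicts: dict = field(default_factory=dict)

def anti_virus(ctx):
    ctx.verdicts["av"] = b"EICAR" not in ctx.payload       # toy signature check
    return ctx.verdicts["av"]

def dlp(ctx):
    ctx.verdicts["dlp"] = b"SSN:" not in ctx.payload       # toy leakage check
    return ctx.verdicts["dlp"]

def ips(ctx):
    ctx.verdicts["ips"] = len(ctx.payload) < 65536         # toy anomaly check
    return ctx.verdicts["ips"]

# Policy decides which services run, and in what order, per application protocol.
POLICY = {
    "http": [ips, anti_virus, dlp],
    "smtp": [anti_virus, dlp],
}

def process(ctx: FlowContext) -> bool:
    """Crack once (ctx is already parsed), then run the policy-selected chain."""
    for service in POLICY.get(ctx.app_proto, []):
        if not service(ctx):
            return False          # first failing service drops the flow
    return True                   # all services passed; forward the flow

if __name__ == "__main__":
    flow = FlowContext("10.0.0.5", "192.168.1.10", "http", b"GET / HTTP/1.1")
    print("forward" if process(flow) else "drop", flow.verdicts)
```

The point of the toy is that the single parse up front is the cheap part; the cost lives inside the service functions, which is exactly why what you do with the packet matters more than how many times you crack it.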

The consolidation of best-of-breed security functionality delivered in a converged architecture yields efficiencies that are spread not only across the domains of scale, performance, availability and security but also across the traditional economic scopes of CapEx and OpEx.

Cracking packets, bah!  That’s so last Tuesday.

/Hoff

Off to Interop Las Vegas and Palo Alto Next Week…

May 16th, 2007 No comments

Off to Interop next week.  I’ll be there from Sunday (Data Center Summit) through Wednesday mid-morning.  If you’re going to be there, let’s grab a beer and chat.

I’ll be in Palo Alto on the 23rd, flying back to Boston on the 24th.

/Hoff

(…and in an advanced planning compendium, during May-June, I’ll be in Orlando, D.C., Dallas, New York, Atlanta, and some chunk of Europe for Crossbeam’s Next Generation Product  Launch activities)

Categories: Travel Tags:

Should Vendors Mitigate All Vulnerabilities Immediately?

May 15th, 2007 1 comment

I read an interesting piece by Roger Grimes @ InfoWorld wherein he described the situation of a vendor who was not willing to patch an unsupported version of software even though it was vulnerable and shown to be (remotely) exploitable.

Rather, the vendor suggested that using some other means (such as blocking the offending access port) was the most appropriate course of action to mitigate the threat.

What’s interesting about the article is not that the vendor is refusing to patch older unsupported code, but that ultimately Roger suggests that irrespective of severity, vendors should immediately patch ANY exploitable vulnerability — with or without public disclosure.

A reader who obviously works for a software vendor commented back with a reply that got Roger thinking and it did for me, also.   The reader suggests that they don’t patch lower severity vulnerabilities immediately (they actually "sit on them" until a customer raises a concern) but instead focus on the higher-severity discoveries:

The reader wrote to say that his company often sits on security bugs until they are publicly announced or until at least one customer complaint is made. Before you start disagreeing with this policy, hear out the rest of his argument.

“Our company spends significantly to root out security issues," says the reader. "We train all our programmers in secure coding, and we follow the basic tenets of secure programming design and management. When bugs are reported, we fix them. Any significant security bug that is likely to be high risk or widely used is also immediately fixed. But if we internally find a low- or medium-risk security bug, we often sit on the bug until it is reported publicly. We still research the bug and come up with tentative solutions, but we don’t patch the problem.”

In the best of worlds, I’d agree with Roger — vendors should patch all vulnerabilities as quickly as possible once discovered, irrespective of whether or not the vulnerability or exploit is made public.  The world would be much better — assuming of course that the end-user could actually mitigate the vulnerability by applying the patch in the first place.

Let’s play devil’s advocate for a minute…

Back here on planet Earth, vendors prioritize vulnerabilities and allocate resources to mitigate them in much the same way that consumers choose to apply the resulting patches: most look at the severity of a vulnerability, start from the highest severity and work their way down.  That’s just the reality of my observation.   

So, for the bulk of these consumers, is the vendor’s response out of line?  It seems in total alignment.

As a counterpoint to my own discussion here, I’d suggest that using prudent risk management best practice, one would protect those assets that matter most.  Sometimes this means that one would mitigate a Sev3 (medium) vulnerability over a Sev5 (highest) based upon risk exposure…this is where solutions like Skybox come into play.  Vendors can’t attach a weight to an asset; all they can do is assess the impact that an exploitable vulnerability might have on their product…
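
To illustrate that trade-off with a toy calculation (my own scoring scheme for the sake of argument, not Skybox's actual model), weighting severity by asset criticality and exposure can flip the priority order:

```python
# Toy risk-prioritization example: risk = severity x asset criticality x exposure.
# The numbers and weighting are invented purely to show how a Sev3 can outrank a Sev5.

vulns = [
    {"id": "CVE-A", "severity": 5, "asset_criticality": 1, "exposure": 0.2},  # Sev5, low-value internal host
    {"id": "CVE-B", "severity": 3, "asset_criticality": 5, "exposure": 0.9},  # Sev3, critical internet-facing host
]

def risk(v):
    return v["severity"] * v["asset_criticality"] * v["exposure"]

for v in sorted(vulns, key=risk, reverse=True):
    print(v["id"], round(risk(v), 1))

# Prints CVE-B (13.5) before CVE-A (1.0): the "medium" wins on risk exposure.
```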

The reader’s last comment caps it off neatly with a challenge:

“Industry pundits such as yourself often say that it benefits customers more when a company closes all known security holes, but in my 25 years in the industry, I haven’t seen that to be true. In fact I’ve seen the exact opposite. And before you reply, I haven’t seen an official study that says otherwise. Until you can provide me with a research paper, everything you say in reply is just your opinion. With all this said, once the hole is publicly announced, or becomes high-risk, we close it. And we close it fast because we already knew about it, coded a solution, and tested it.”

I’m not sure I need an official study to respond to this point, but I’d be interested to know whether such a thing exists.  Gerhard Eschelbeck has been studying vulnerabilities and their half-lives for some time.  I’d be interested to see how this plays out.

So, read the gentleman’s posts; in some cases his comments are understandable and in others they’re hard to swallow…it definitely depends upon which side of the fence (if not both) you stand on.  All vendors are ultimately consumers in one form or another…

Thoughts?

/Hoff

BeanSec! 9 – May 16th – 6PM to ?

May 14th, 2007 5 comments

Yo!  BeanSec! 9 is upon us.

BeanSec! is an informal meetup of information security professionals, researchers and academics in the Greater Boston area that meets the third Wednesday of each month.  When I’m able to attend (and that’s most of the time) I buy the booze and appetizers.  It’s how we roll.

Come get your grub on.  Lots of good people show up.  Really.

Unlike other meetings, you will not be expected to pay dues, “join up”, present a zero-day exploit, or defend your dissertation to attend.
Map to the Enormous Room in Cambridge.

Enormous Room: 567 Mass Ave, Cambridge 02139

Categories: BeanSec! Tags:

Security: “Built-in, Overlay or Something More Radical?”

May 10th, 2007 No comments

I was reading Joseph Tardo’s (Nevis Networks) new Illuminations blog and found the topic of his latest post, "Built-in, Overlay or Something More Radical?", regarding the possible future of network security quite interesting.

Joseph (may I call you Joseph?) recaps the topic of a research draft from Stanford funded by the "Stanford Clean Slate Design for the Internet" project that discusses an approach to network security called SANE.   The notion of SANE (AKA Ethane) is a policy-driven security services layer that utilizes intelligent centrally-located services to replace many of the underlying functions provided by routers, switches and security products today:

Ethane is a new architecture for enterprise networks which provides a powerful yet simple management model and strong security guarantees.  Ethane allows network managers to define a single, network-wide, fine-grain policy, and then enforces it at every switch.  Ethane policy is defined over human-friendly names (such as "bob", "payroll-server", or "http-proxy") and dictates who can talk to who and in which manner.  For example, a policy rule may specify that all guest users who have not authenticated can only use HTTP and that all of their traffic must traverse a local web proxy.

Ethane has a number of salient properties difficult to achieve with network technologies today.  First, the global security policy is enforced at each switch in a manner that is resistant to spoofing.  Second, all packets on an Ethane network can be attributed back to the sending host and the physical location in which the packet entered the network.  In fact, packets collected in the past can also be attributed to the sending host at the time the packets were sent — a feature that can be used to aid in auditing and forensics.  Finally, all the functionality within Ethane is provided by very simple hardware switches.

The trick behind the Ethane design is that all complex functionality, including routing, naming, policy declaration and security checks, is performed by a central controller (rather than in the switches as is done today).  Each flow on the network must first get permission from the controller, which verifies that the communication is permissible by the network policy.  If the controller allows a flow, it computes a route for the flow to take, and adds an entry for that flow in each of the switches along the path.

With all complex function subsumed by the controller, switches in Ethane are reduced to managed flow tables whose entries can only be populated by the controller (which it does after each successful permission check).  This allows a very simple design for Ethane switches using only SRAM (no power-hungry TCAMs) and a little bit of logic.

I like many of the concepts here, but I’m really wrestling with the scaling concerns that arise when I forecast the literal bottlenecking of admission/access control proposed therein.
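
For illustration, here is a minimal toy sketch of the model described above (my own rendering, not the Stanford code; every name, binding and policy rule in it is invented). Note that every new flow requires a round trip to the controller for a permission check, which is exactly where my bottleneck concern lives:

```python
# Toy Ethane-style controller: name-based policy, dumb switches, central permission checks.

# Policy is written over human-friendly names, not addresses.
POLICY = [
    # (source, destination, allowed protocols)
    ("bob",         "payroll-server", {"https"}),
    ("guest-users", "http-proxy",     {"http"}),
]

# Bindings the controller learns at authentication time (invented for the example).
NAME_BINDINGS = {"10.1.1.7": "bob", "10.9.9.9": "guest-users", "10.2.2.2": "payroll-server"}

class Switch:
    """A 'dumb' switch: just a flow table the controller populates."""
    def __init__(self):
        self.flow_table = {}

def request_flow(path_switches, src_ip, dst_ip, proto):
    """Every NEW flow must ask the controller for permission (the potential bottleneck)."""
    src, dst = NAME_BINDINGS.get(src_ip), NAME_BINDINGS.get(dst_ip)
    for rule_src, rule_dst, protos in POLICY:
        if (src, dst) == (rule_src, rule_dst) and proto in protos:
            # Permitted: install an entry in each switch along the (toy) path.
            for sw in path_switches:
                sw.flow_table[(src_ip, dst_ip, proto)] = "forward"
            return True
    return False  # default deny

path = [Switch(), Switch()]
print(request_flow(path, "10.1.1.7", "10.2.2.2", "https"))  # True  -> entries installed
print(request_flow(path, "10.9.9.9", "10.2.2.2", "https"))  # False -> denied by policy
```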

Furthermore, and more importantly, while SANE speaks to being able to define who "Bob" is and what infrastructure makes up the "payroll server," this solution seems to provide no way of enforcing policy based on content in the context of the data flowing across it.  Integrating access control with the pseudonymity offered by folding identity management into policy enforcement is only half the battle.

The security solutions of the future must evolve to divine and control not only vectors of transport but also the content and relative access that the content itself defines dynamically.

I’m going to suggest that by bastardizing one of the Jericho Forum’s commandments for my own selfish use, the network/security layer of the future must ultimately respect and effect disposition of content based upon the following rule (independent of the network/host):

Access to data should be controlled by security attributes of the data itself.

  • Attributes can be held within the data (DRM/Metadata) or could be a separate system.
  • Access / security could be implemented by encryption.
  • Some data may have “public, non-confidential” attributes.
  • Access and access rights have a temporal component. 

 

Deviating somewhat from Jericho’s actual meaning, I am intimating that somehow, somewhere, data must be classified and must self-describe the policies that govern how it is published and consumed; ultimately, this security metadata can then be used by the central policy enforcement mechanisms to describe who is allowed to access the data, from where, and where it is allowed to go.
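
As a crude illustration of what that might look like (my own sketch; the attribute names, labels and rules are all invented, and a real implementation would lean on DRM, metadata services and/or encryption as the commandment suggests), a policy decision keyed off the data's own security attributes rather than the network topology could be as simple as:

```python
# Toy data-centric access check: the decision is driven by attributes carried with
# (or bound to) the data itself, including a temporal component. All labels invented.

from datetime import datetime, timezone

document = {
    "name": "q2-payroll.xlsx",
    "attributes": {
        "classification": "confidential",
        "allowed_roles": {"payroll", "finance"},
        "allowed_egress": {"intranet"},
        "expires": datetime(2007, 12, 31, tzinfo=timezone.utc),  # access rights expire
    },
}

def may_access(doc, subject_role, egress_zone, now=None):
    attrs = doc["attributes"]
    now = now or datetime.now(timezone.utc)
    if attrs["classification"] == "public":
        return True                                   # public, non-confidential data
    return (subject_role in attrs["allowed_roles"]
            and egress_zone in attrs["allowed_egress"]
            and now < attrs["expires"])               # temporal check

check_time = datetime(2007, 6, 1, tzinfo=timezone.utc)
print(may_access(document, "payroll", "intranet", now=check_time))    # True
print(may_access(document, "marketing", "internet", now=check_time))  # False
```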

…Back to the topic at hand, SANE:

As Joseph alluded, SANE would require replacing (or not using much of the functionality of) currently-deployed routers, switches and security kit.  I’ll let your imagination address the obvious challenges with this design.

Without delving deeply, I’ll use Joseph’s categorization of “interesting-but-impractical.”

/Hoff

The Last Word on Schneier’s “Why Security Shouldn’t Matter” Post…

May 10th, 2007 No comments

All this brouhaha over Schneier’s commentary in Wired regarding the existence of and need for IT Security is addressed brilliantly by Paul McNamara here.  Read it and let Bruce get back to posting about bombs, the government and giant squids, won’t you?

Anyone else who took the bait (as Bruce designed, obviously) and actually attempted to argue against what were admittedly unarguable, circuitous and rhetorical sets of disjointed constructs paid service and tribute to the process as designed.  There’s one born every minute.  Yes, this is a candidate for the "Captain Obvious Award," and Bruce is no dummy, but obviously some of us who read this stuff and treat everything as a literal next-action need to chill.

Obviously Bruce has made a career from IT Security — and he recently sold his company to another that hopes to do the same, so accept the piece for what it is: a provocation to challenge the status quo and improve Technorati ratings 😉

This piece was meant to agitate us, as was Art Coviello’s address at RSA wherein he stated that the security industry will cease to exist in 3 years.

Thinking about this stuff is good for business — in all senses.

/Hoff

Categories: General Rants & Raves Tags:

Liability of Reverse Engineering Security Vulnerability Research?

May 8th, 2007 5 comments

(Ed.: Wow, some really great comments came out of this question.  I did a crappy job framing the query but there exists a cohesiveness to both the comments and private emails I have received that shows there is confusion in both terminology and execution of reverse engineering. 

I suppose the entire issue of reverse engineering legality can just be washed away by what appeared to me as logical and I stated in the first place — there is no implied violation of an EULA or IP if one didn’t agree to it in the first place (duh!) but I wanted to make sure that my supposition was correct.]

I have a question that hopefully someone can answer for me in a straightforward manner.  It  popped into my mind yesterday in an unrelated matter and perhaps it’s one of those obvious questions, but I’m not convinced I’ve ever seen an obvious answer.

If I, as an individual or as a representative of a company that performs vulnerability research and assurance, engage in reverse engineering of a product that is covered by patent/IP protection and/or EULAs that expressly forbid reverse engineering, how would I deflect liability for violating these tenets if I disclose that I have indeed engaged in reverse engineering?

HID and Cisco have both shown that when backed into a corner, they will litigate and the researcher and/or company is forced to either back down or defend (usually the former).  (Ed.: Poor examples, as these do not really fall into the same camp as the example I give below.)

Do you folks who do this for a living (or own/manage a company that does) simply count on the understanding that if one can show "purity" of non-malicious motivation that nothing bad will occur?

It’s painfully clear that the slippery slope of full-disclosure plays into this, but help me understand how the principle of the act (finding a vulnerability and telling the company/world about it) outweighs the liability involved.

Do people argue that if you don’t purchase the equipment you’re not covered under the EULA?  I’m trying to rationalize this.  How does one side-step the law in these cases without playing Russian Roulette?

Here’s an example of what I mean.  If you watch this video, the researchers that demonstrated the Cisco NAC attack @ Black Hat clearly articulate the methods they used to reverse engineer Cisco’s products.

I’m not looking for a debate on the up/downside of full disclosure, but more specifically the mechanics of the process used to identify that a vulnerability exists in the first place — especially if reverse engineering is used.

Perhaps this is a naive question or an uncomfortable one to answer, but I’m really interested.

Thanks,

/Hoff

Cisco as a Bellwether…where’s all the commentary?

May 7th, 2007 4 comments

(Ed.: I wanted to clarify that issues external to security vulnerabilities and advanced technology most definitely caused the impact and commentary noted here — global economic dynamics notwithstanding, I’m just surprised at the lack of chatter around the ol’ Blogosphere on this)

From the "I meant to comment on this last week" Department…

A couple of weeks ago, analyst reports announced that Cisco was indicating a general slow-down of their enterprise business and they were placing pressure on the service provider business units to make up the difference.  Furthermore, deep discounts to the channel and partners were crafted in order to incentivize  Q2 customer purchases:

Cisco is headed for a disappointing quarter, according to a cautionary research note issued Monday from a research analyst, reports Barron’s Online.

Samuel Wilson, an analyst at JMP Securities writes that the slow down in U.S. enterprise business during Cisco’s fiscal second quarter has continued into its current quarter, according to Barron’s.

According to the Barron’s story: "Wilson writes that ‘according to resellers, top Cisco sales staff have recently expressed concerns about making their April quarter numbers.’ He says that the company has apparently increased ‘partner-focused incentives’ designed to shift business in from the July quarter. ‘Based on the past three months, many resellers now believe that U.S. enterprises have begun to delay discretionary spending above and beyond normal seasonality typical of the [calendar] first quarter.’"

Wilson also wrote that Cisco has cut headcount and expenses in its enterprise switching business unit. He forecasts Cisco’s fiscal third quarter revenue to be $38.1 billion, down from the consensus estimates of $39.4 billion, according to Barron’s.

Given how Cisco is a bellwether stock for not only IT but in many cases an indicator of overall enterprise spend trends, why isn’t there more concern in the air?  Maybe it’s just rumor and innuendo, but when analysts start issuing research notes about Mr. Chambers’ neighborhood, they’re usually pretty conservative.

Rothman practically needed a Wet-Nap when he commented on Cisco’s Q1 announcement (Cisco Takes it to the Next Level) but nary a word from the "All things including the kitchen sink will go into a Cat65K" camp on this news?  What, no gleeful prognostication on rebounds or doom?

Interestingly, from here, Goldman advises to buy ahead of Q3 announcement:

We believe that management will put concerns around slower U.S. large cap tech spending to rest. It represents only 13% of sales and we believe is seeing indications of a rebound. We believe management is likely to reaffirm positive longer-term trends in emerging markets, new technologies and the impact of video on networks as key drivers of sustained double-digit top-line growth.

We’ll see.  Focusing on all the advanced technology projects and not focusing on core competencies can bite a company — even Cisco — when they least expect it.  Couple that with the continued vulnerabilities in their security products (another one today across Pix/ASA) and I’d say folks might start talking…

I wonder how the security products have weathered through all this?

…but that’s just me.  Lash away, boys.

/Hoff

Categories: Cisco, Information Security Tags:

Clean Pipes – Less Sewerage or More Potable Water?

May 6th, 2007 2 comments

Jeff Bardin over on the CSO blog pitched an interesting stake in the ground when he posited "Connectivity As A Utility: Where are My Clean Pipes?"

Specifically, Jeff expects that his (corporate?) Internet service functions in the same manner as his telephone service via something similar to a "do not call list."  Basically, he opts out by placing himself on the no-call list and telemarketers cease to call. Others might liken it to turning on a tap and getting clean, potable water; you pay for a utility and expect it to be usable.  All of it.

Many telecommunications providers want to charge you for having clean pipes, deploying a suite of DDoS services that you have to buy to enhance your security posture.  Protection of last mile bandwidth is very key to network availability as well as confidentiality and integrity. If I am subscribing for a full T1, shouldn’t I get the full T1 as part of the price and not just a segment of the T1? Why do I have to pay for the spam, probes, scans, and malicious activity that my telecommunications service provider should prevent at 3 miles out versus my having to subscribe to another service to attain clean pipes at my doorstep?

I think that most people would agree with the concept of clean pipes in principle.  I can’t think of any other utility where the service levels delivered are taken with such a lackadaisical best effort approach and where the consumer can almost always expect that some amount (if not the majority) of the utility is unusable. 

Over the last year, I’ve met with many of the largest ISPs, MSSPs, telcos and mobile operators on the planet and all are in some phase of deploying some sort of clean pipes variant.  Gartner even predicts that a large amount of security will move "into the cloud."

In terms of adoption, EMEA is leaps and bounds ahead of the US and APAC in these sorts of services and will continue to be.  The relative oligopolies associated with smaller nation states allows for much more agile and flexible service definition and roll-outs — no less complex, mind you.  It’s incredible to see just how disparate and divergent the gap is between what consumers (SME/SMB/Mobile as well as large enterprise) are offered in EMEA as opposed to the good-ol’ U S of A.

However, the stark reality is that the implementation of clean pipes by your service provider(s) comes down to a balance of two issues: efficacy and economics, with each varying dramatically with the market being served; the large enterprise’s expectations and requirements look very, very different from the SME/SMB.

Let’s take a look at both of these elements.

ECONOMICS

If you ask most service providers about so-called clean pipes up to a year ago, you could expect to get an answer that was based upon a "selfish" initiative aimed at stopping wasteful bandwidth usage upstream in the service provider’s network, not really protecting the consumer. 

The main focus here is really on DDoS and viri/worm propagation.  Today, the closest you’ll come to "clean pipes" is usually some combination of the following services deployed both (still) at the customer premises as well as somewhere upstream:

  • DoS/DDoS
  • Anti-Virus
  • Anti-Spam
  • URL Filtering/Parental Controls
  • Managed Firewall/IDS/IPS

What is interesting about these services is that they basically define the same functions you can now get in those small little UTM boxes that consolidate security functionality at the "perimeter."  The capital cost of these devices and the operational levies associated with their upkeep are pretty close in the SME/SMB, and when you balance what you get in "good enough" services for this market against the overall availability of these "in the cloud" offerings, UTM makes more sense for many in the near term.

For the large enterprise, the story is different.  Outsourcing some level of security to an MSSP (or perhaps even the entire operation) or moving some amount upstream is a matter of core competence: it lets internal teams focus on the things that matter most while the low-hanging fruit is filtered out and monitored by someone else.  I describe that as filtering out the lumps.  Some enormous companies have outsourced not only their security functions but their entire IT operations and data center assets in this manner.  It’s not pretty, but it works.

I’m not sure they are any more secure than they were before, however.  The risk simply was transferred whilst the tolerance/appetite for it didn’t change at all.  Puzzling.

Is it really wrong to think that companies (you’ll notice I said companies, not "people" in the general sense) should pay for clean pipes?  I don’t think it is.  The reality is that for non-commercial subscribers such as home users, broadband or mobile users, some amount of bandwidth hygiene should be free — the potable water approach.

I think, however, that should a company which expects elevated service levels and commensurate guarantees of such, want more secure connectivity, they can expect to ante up.  Why?  Because the investment required to deliver this sort of service costs a LOT of money — both to spin up and to instantiate over time.  You’re going to have to pay for that somewhere.

I very much like Jeff’s statistics:

We stop on average for our organization nearly 600 million malicious emails per year at our doorstep averaging 2.8 gigabytes of garbage per day. You add it up and we are looking at nearly a terabyte of malicious email we have to stop. Now add in probes and scans against HTTP and HTTPS sites and the number continues to skyrocket.

Again, even though Jeff’s organization isn’t small by any means, the stuff he’s complaining about here is really the low-hanging fruit.  It doesn’t make a dent against the targeted, malicious and financially-impacting security threats that really demand a level of service no service provider will be able to deliver without a huge cost premium.

I won’t bore you with the details, but the level of high-availability, resilience, performance, manageability, and provisioning required to deliver even this sort of service is enormous.  Most vendors simply can’t do it and most service providers are slow to invest in proprietary solutions that won’t scale economically with the operational models in place.

Interestingly, vendors such as McAfee even as recently as 2005 announced with much fanfare that they were going to deliver technology, services and a united consortium of participating service providers with the following lofty clean pipe goals (besides selling more product, that is):

The initiative is one part of a major product and services push from McAfee, which is developing its next generation of carrier-grade security appliances and ramping up its enterprise security offerings with NAC and secure content management product releases planned for the first half of next year, said Vatsal Sonecha, vice president of market development and strategic alliances at McAfee, in Santa Clara, Calif.

Clean Pipes will be a major expansion of McAfee’s managed services offerings. The company will sell managed intrusion prevention; secure content management; vulnerability management; malware protection, including anti-virus, anti-spam and anti-spyware services; and mobile device security, Sonecha said.

McAfee is working with Cable and Wireless PLC, British Telecommunications PLC (British Telecom), Telefónica SA and China Network Communications (China Netcom) to tailor its offerings through an invitation-only group it calls the Clean Pipes Consortium.

http://www.eweek.com/article2/0,1895,1855188,00.asp

Look at all those services!  What have they delivered as a service in the cloud or clean pipes?  Nada. 

The chassis-based products which were to deliver these services never materialized and neither did the services.  Why?  Because it’s really damned hard to do correctly.  Just ask Inkra, Nexi, CoSine, etc.  Or you can ask me.  The difference is, we’re still in business and they’re not.  It’s interesting to note that every one of those "consortium members" with the exception of Cable and Wireless are Crossbeam customers.  Go figure.

EFFICACY

Once the provider starts filtering at the ingress/egress, one must trust that the things being filtered won’t have an impact on performance — or confidentiality, integrity and availability.  Truth be told, as simple as it seems, it’s not just about raw bandwidth.  Service levels must be maintained and the moment something that is expected doesn’t make its way down the pipe, someone will be screaming bloody murder for "slightly clean" pipes.

Ask me how I know.  I’ve lived through inconsistent application of policies, non-logged protocol filtering, dropped traffic and asymmetric issues introduced by on-prem and in-the-cloud MSSP offerings.  Once the filtering moves past your prem. as a customer, your visibility does too.  Those fancy dashboards don’t do a damned bit of good, either.  Ever consider the forensic impact?

Today, if you asked a service provider what constitutes their approach to clean pipes, most will refer you back to the same list I referenced above:

  • DoS/DDoS
  • Anti-Virus
  • Anti-Spam
  • URL Filtering/Parental Controls
  • Managed Firewall/IDS/IPS

The problem is that most of these solutions are disparate point products run by different business units at different parts of the network.  Most are still aimed at the perimeter service — it’s just that the perimeter has moved outward a notch in the belt.

Look, for the SME/SMB (or mobile user), "good enough" is, for the most part, good enough.  Having an upstream provider filter out a bunch of spam and viri is a good thing, and most firewall rules in place in the SME/SMB block everything but a few inbound ports to DMZ hosts (if there are any) and allow everything from the inside to go out.  Not very complicated, and it doesn’t take a rocket scientist to see how, from the perspective of what is at risk, this service doesn’t pay off handsomely.

For the large enterprise, I’d say that if you are going to expect that operational service levels will be met, think again.  What happens when you introduce web services, SOA and heavy XML onto externally-exposed network stubs?  What happens when Web2/3/4.x technologies demand more and more security layers deployed alongside the mechanics and messaging of the service?

You can expect problems, and the lack of transparency will be an issue in all but the simplest of cases.

Think your third party due diligence requirements are heady now?  Wait until this little transference of risk gets analyzed when something bad happens — and it will.  Oh how quickly the pendulum will swing back to managing this stuff in-house again.

This model doesn’t scale and it doesn’t address the underlying deficiencies in the most critical elements of the chain: applications, databases and end-point threats such as co-opted clients as unwilling botnet participants.

But to Jeff’s point, if he didn’t have to spend money on the small stuff above, he could probably spend it elsewhere where he needs it most.

I think services in the cloud/clean pipes makes a lot of sense.  I’d sure as hell like to invest less in commoditizing functions at the perimeter and on my desktop.  I’m just not sure we’re going to get there anytime soon.

/Hoff

*Image Credit: CleanPipes


Unified Risk Management (URM) and the Secure Architecture Blueprint

May 6th, 2007 5 comments

Gunnar once again hits home with an excellent post defining what he calls the Security Architecture Blueprint (SAB):

The purpose of the security architecture blueprint is to bring focus to the key areas of concern for the enterprise, highlighting decision criteria and context for each domain. Since security is a system property it can be difficult for Enterprise Security groups to separate the disparate concerns that exist at different system layers and to understand their role in the system as a whole. This blueprint provides a framework for understanding disparate design and process considerations; to organize architecture and actions toward improving enterprise security.

[Figure: Security Architecture Blueprint roadmap]

I appreciated the graphical representation of the security architecture blueprint as it provides some striking parallels to the diagram that I created about a year ago to demonstrate a similar concept that I call the Unified Risk Management (URM) framework.

(Ed.: URM focuses on business-driven information survivability architectures that describes as much risk tolerance as it does risk management.)

Here are both the textual and graphical representations of URM: 

Managing risk is fast becoming a lost art.  As the pace of technology’s evolution and adoption overtakes our ability to assess and manage its impact on the business, the overrun has created massive governance and operational gaps resulting in exposure and misalignment.  This has caused organizations to lose focus on the things that matter most: the survivability and ultimate growth of the business.

Overwhelmed with the escalation of increasingly complex threats, the alarming ubiquity of vulnerable systems and the constant onslaught of rapidly evolving exploits, security practitioners are ultimately forced to choose between the unending grind of tactical practices focused on deploying and managing security infrastructure versus the strategic art of managing and institutionalizing risk-driven architecture as a business process.

URM illustrates the gap between pure technology-focused information security infrastructure and business-driven, risk-focused information survivability architectures, and shows how this gap is bridged using sound risk management practices in conjunction with best of breed consolidated Unified Threat Management (UTM) solutions as the technology anchor tenant in a consolidated risk management model.

URM demonstrates how governance organizations, business stakeholders, network and security teams can harmonize their efforts to produce a true business protection and enablement strategy utilizing best of breed consolidated UTM solutions as a core component to effectively arrive at managing risk and delivering security as an on-demand service layer at the speed of business.  This is a process we call Unified Risk Management or URM.

[Figure: Unified Risk Management (URM) model]

(Updated on 5/8/07 with updates to URM Model)

The point of URM is to provide a holistic framework against which one may measure and effectively manage risk.  Each one of the blocks above has a set of sub-components that breaks out the specifics of each section.  Further, my thinking on URM became the foundation of my exploration of the Security Services Oriented Architecture (SSOA) model. 

You might also want to check out Skybox Security’s Security Risk Management (SRM) Blueprint.

Thanks again to Gunnar as I see some gaps that I have to think about based upon what I read in his SAB document.

/Hoff