Archive for the ‘Unified Threat Management (UTM)’ Category

No DNS Disclosure Debacle Here: Stiennon Pens the Funniest Thing I’ve Read in 2008…

July 22nd, 2008

Hat tip to Rothman for this.

I don’t know if Stiennon is off his meds or simply needed to re-post something from 2001 to meet an editorial quota, but his Network World article titled "The Most Important Networking Trend of 2008" ties thus far with the "Evolution of Dance" as my vote for most entertaining Internet content.

Richard’s epiphany goes something like this:

  • Multifunction network devices that have the ability to "route" traffic and combine security capabilities are the ‘next big thing’
  • If a company offers a multifunction network device that has the ability to "route" traffic and combine security capabilities but has the misfortune of using Linux as the operating system, it will "…forever be pigeon-holed as SMB solutions, not ready for enterprise prime time."

  • The Wall Street Journal issued "… the year’s most important article on networking" in an article titled "New Routers Catch the Eyes of IT Departments" which validates the heretofore undiscovered trend of convergence and commoditization!
     
  • "Real" network security players such as Cisco, Juniper and Redback are building solutions to this incredible new trend and because of the badge on the box, will be considered ready for "…enterprise prime time."
     
  • The WSJ article talks about the Cisco ASR1000 router as the ultimate representation of this new breed of converged "network security" device.
     
  • Strangely, Stiennon seems to have missed the fact that the operating system (IOS-XE) that the ASR1000 is based on is, um, Linux.  You know, that operating system that dictates that this poor product will "…forever be pigeon-holed as SMB solutions, not ready for enterprise prime time."

Oh, crap!  Somebody better tell Cisco!

So despite the fact that the Cisco ASR1000 is positioned as an edge device, just like those crazy solutions called UTM devices, it seems we’re all missing something, because somehow a converged edge device now counts as being able to provide a "secure network fabric?"

In closing, allow me to highlight the cherry on top of Stiennon’s security sundae:   

Have you ever noticed how industry "experts" tend to get stuck in a rut and continue to see everything through the same lens despite major shifts in markets and technology?

Yes, Richard, I do believe I have noticed this…

Funny stuff!

/Hoff

New Fortinet Patents May Spell Nasty Trouble For UTM Vendors, Virtualization Vendors, App. Delivery Vendors, Routing/Switching Vendors…

June 23rd, 2008

Check out the update below…

Were I in the UTM business, I’d be engaging the reality distortion field and speed-dialing my patent attorneys at this point.

Fortinet has recently had some very interesting patents granted by the PTO.

Integrated network and application security, together with virtualization technologies, offer a powerful and synergistic approach for defending against an increasingly dangerous cyber-criminal environment. In combination with its extensive patent-pending applications and patents already granted, Fortinet’s newest patents address critical technologies that enable comprehensive network protection:

  • U.S. Patent #7,333,430 – Systems and Methods for Passing Network Traffic Data – directed to efficiently processing network traffic data to facilitate policy enforcement, including content scanning, source/destination verification, virus scanning, content detection and intrusion detection;

  • U.S. Patent #7,340,535 – System and Method for Controlling Routing in a Virtual Router System – directed to controlling the routing of network data, and providing efficient configuration of routing functionality and optimized use of available resources by applying functions to data packets in a virtual environment;

  • U.S. Patent #7,376,125 – Service Processing Switch – directed to providing IP services and IP packet processing in a virtual router-based system using IP flow caches, virtual routing engines, virtual services engines and advanced security engines;

  • U.S. Patent #7,389,358 – Distributed Virtual System to Support Managed, Network-based Services – directed to a virtual routing system, which includes processing elements to manage and optimize IP traffic, useful for service provider switching functions at Internet point-of-presence (POP) locations.

These patents could have a profound impact on vendors who offer "integrated security" by allowing for virtualized application of network security policy, and they could easily be enforced outside of the typically-defined UTM offerings as well.

I’m quite certain Cisco and Juniper are taking note, as should anyone in the business of offering virtualized routing/switching combined with security — that’s certainly a broad swath, eh?

On a wider note, I’ve actually been quite impressed with the IP portfolio that Fortinet has been assembling over the last couple of years.  If you’ve been paying attention, you will notice (for example) that they have scooped up much of the remaining CoSine IP and recently acquired IPlocks’ database security portfolio.

If I were them, the next thing I’d look to do (and would have done a while ago) is scoop up a Web Application Firewall/Proxy vendor…

I trust you can figure out why…why not hazard a guess in the comments?

/Hoff

Updated:  It occurred to me that this may be much more far-reaching than just UTM vendors; this could affect folks like Crossbeam, Check Point, StillSecure, Cisco, Juniper, Secure Computing, f5…basically anyone who sells a product that mixes the application of security policy with virtualized routing/switching capabilities…

How about those ASAs or FWSMs?  How about those load balancers with VIPs?

Come to mention it, what of VMware?  How about the fact that in combining virtual networking with VMsafe, you’ve basically got what amounts to coverage by the first two patents:

U.S. Patent #7,333,430 – Systems and Methods for Passing Network Traffic Data – directed to efficiently processing network traffic data to facilitate policy enforcement, including content scanning, source/destination verification, virus scanning, content detection and intrusion detection;

U.S. Patent #7,340,535 – System and Method for Controlling Routing in a Virtual Router System – directed to controlling the routing of network data, and providing efficient configuration of routing functionality and optimized use of available resources by applying functions to data packets in a virtual environment;

Whoopsie.

Now, I’m not a lawyer, I just play one on teh Interwebs.

Complexity: The Enemy of Security? Or, If It Ain’t Fixed, Don’t Break It…

December 12th, 2007

When all you have is a hammer, everything looks like a nail…

A couple of days ago, I was concerned (here) that I had missed Don Weber’s point (here) regarding how he thinks solutions like UTM that consolidate multiple security functions into a single solution increase complexity and increase risk.

I was interested in more detail regarding Don’s premise for his argument, so I asked him for some substantiating background information before I responded:

The question I have for Don is simple: how is it that you’ve arrived at the conclusion that the consolidation and convergence of security functionality from multiple discrete products into a single-sourced solution adds "complexity" and leads to "increased risk?"

Can you empirically demonstrate this by giving us an example of where a single-function security device that became a multiple-function security product caused this complete combination of events to occur:

  1. Product complexity increased
  2. Led to a vulnerability that was exploitable, and
  3. Increased "risk" based upon business impact and exposure

Don was kind enough to respond to my request with a rather lengthy post titled "The Perimeter Is Dead — Let’s Make It More Complex."  I knew that I wouldn’t get the example I wanted, but I did get what I expected.  I started to write a very detailed response but stopped when I realized a couple of important things in reading his post as well as many of the comments:

  • It’s clear that many folks simply don’t understand the underlying internal operating principles and architectures of security products on the market, and frankly for the most part they really shouldn’t have to.  However, if you’re going to start debating security architecture and engineering implementation of security software and hardware, it’s somewhat unreasonable to start generalizing and creating bad analogs about things you clearly don’t have experience with. 
     
  • Believe it or not, most security companies that create bespoke security solutions do actually hire competent product management and engineering staff with the discipline, processes and practices that result in just a *little* bit more than copy/paste integration of software.  There are always exceptions, but if this were SOP, how many of them would still be in business?
     
  • The FUD that vendors are accused of spreading to supposedly motivate consumers to purchase their products is sometimes outdone by the sheer lack of knowledge illustrated by the regurgitated drivel that is offered by people suggesting why these same products are not worthy of purchase. 

    In markets that have TAMs of $4+ Billion, either we’re all incompetent lemmings (to be argued elsewhere) or there are some compelling reasons for these products.  Sometimes it’s not solely security, for sure, but people don’t purchase security products with the expectations of being less secure with products that are more complex and put them more at risk.  Silliness.
     

  • I find it odd that the people who maintain that they must have diversity in their security solution providers gag when I ask them for proof that they have invested in multiple switch and router vendors across their entire enterprise, that they deliberately deploy critical computing assets on disparate operating systems and that they have redundancy for all critical assets in their enterprise…including themselves. 
     
  • It doesn’t make a lot of sense arguing about the utility, efficacy, usability and viability of a product with someone who has never actually implemented the solution they are arguing about and instead compares proprietary security products with a breadboard approach to creating a FrankenWall of non-integrated open source software on a common un-hardened Linux distro.
     
  • Using words like complexity and risk within a theoretical context that has no empirical data offered to back it up short of a "gut reaction" and some vulnerability advisories in generally-available open source software lacks relevancy and is a waste of electrons.

I have proof points, ROI studies, security assessment results to the code level, and former customer case studies that demonstrate that some of the most paranoid companies on the planet see fit to purchase millions of dollars worth of supposedly "complex risk-increasing" solutions like these…I can tell you that they’re not all lemmings.

Again, not all of those bullets are directed at Don specifically, but I sense we’re really just going to talk past one another on this point, and the emails I’m getting trying to privately debate it are agitating, to say the least.

Your beer’s waiting, but expect an arm wrestle before you get to take the first sip.

/Hoff

Consolidating Controls Causes Chaos and Certain Complexity?

December 10th, 2007

Don Weber wrote a post last week describing his thoughts on the consolidation of [security] controls and followed it up with another today titled "Quit Complicating our Controls – UTM Remix" in which he suggests that the consolidation of controls delivers an end-state of additional "complexity" and "higher risk":

Of course I can see why people desire to integrate the technologies. 

  • It is more cost effective to have two or more technologies on one piece of hardware.
  • You only have to manage one box.
  • The controls can augment each other more effectively and efficiently (according to the advertising on the box).
  • Firewalls usually represent a choke point to external and potentially hostile environments.
  • Vendors can market it as the Silver Bullet (no relation to Gary McGraw’s podcast) of controls.
  • “The next-generation firewall will have greater blocking and visibility into types of protocols,” says Greg Young, research vice president for Gartner.
  • etc.

Well, I have a problem with all of this. Why are we making our controls more complex? Complexity leads to vulnerabilities. Vulnerabilities lead to exploits. Exploits lead to compromises. Compromises lead to loss.

…and:

Don’t get me wrong. I am all for developing new technologies that will allow organizations to analyze their traffic so that they get a better picture of what is traversing and exiting their networks. I just think they will be more effective if they are deployed so that they augment each other’s control measures instead of threatening them by increasing the risk through complexity. Controls should reduce risk, not increase it.

Don’s posts have touched on a myriad of topics I have very strong opinions on: complex simplicity, ("magical") risk, UTM and application firewalls.  I don’t agree with Don’s statements regarding any of them. That’s probably why he called me out.

The question I have for Don is simple: how is it that you’ve arrived at the conclusion that the consolidation and convergence of security functionality from multiple discrete products into a single-sourced solution adds "complexity" and leads to "increased risk?"

Can you empirically demonstrate this by giving us an example of where a single-function security device that became a multiple-function security product caused this complete combination of events to occur:

  1. Product complexity increased
  2. Led to a vulnerability that was exploitable, and
  3. Increased "risk" based upon business impact and exposure

I’m being open-minded here; rather than try to address every corner case, I am eager to understand more of the background of Don’s position so I might respond accordingly.

/Hoff

The 4th Generation of Security Devices = UTM + Routing & Switching or New Labels = Perfuming a Pig?

June 22nd, 2007

That’s it.  I’ve had it.  Again.  There’s no way I’d ever make it as a Marketeer.  <sigh>

I almost wasn’t going to write anything about this particular topic because my response can (and probably should) easily be perceived, and retorted against, as a pissy little marketing match between competitors.  Chu don’t like it, Chu don’t gotta read it, capice?

Sue me for telling the truth. {strike that, as someone probably will}

However, this sort of blatant exhalation of so-called revolutionary security product and architectural advances disguised as prophecy is just so, well, recockulous, that I can’t stand it.

I found it funny that the Anti-Hoff (Stiennon) managed to slip another patented advertising editorial Captain Obvious press piece in SC Magazine regarding what can only be described as the natural evolution of network security products that plug into — but are not natively — routing or switching architectures.

I don’t really mind that, but to suggest that somehow this is an original concept is just disingenuous.

Besides trying to wean Fortinet away from classification as a UTM vendor (which Richard clearly hates to be associated with) by suggesting that UTM should be renamed "Flexible Security Platform," he does a fine job of asserting that a "geologic shift" (I can only assume he means tectonic) is coming soon in the so-called fourth generation of security products.

Of course, he’s completely ignoring the fact that the solution he describes is and has already been deployed for years…but since tectonic shifts usually take millions of years to culminate in something noticeably remarkable, I can understand his confusion.

As you’ll see below, calling these products "Flexible Security Platforms" or "Unified Network Platforms" is merely an arbitrary and ill-conceived hand-waving exercise in an attempt to differentiate in a crowded market.  Open source or COTS, ASIC/FPGA or multi-core Intel…that’s just the packaging and delivery mechanism.  You can tart it up all you want with fancy marketing…

It’s not new, it’s not revolutionary (because it’s already been done) and it sure as hell ain’t the second coming.  I’ll say it again, it’s been here for years.  I personally bought it and deployed it as a customer almost 4 years ago…if you haven’t figured out what I’m talking about yet, read on.

Here’s how C.O. describes what the company I work for has been doing for six years, and what he intimates Fortinet will provide that nobody else can:

We are rapidly approaching the advent of the fourth generation security platform. This is a device that can do all of the security functions that are lumped in to UTM but are also excellent network devices at layers two and three. They act as a switch and a router. They supplant traditional network devices while providing security at all levels. Their inherent architectural flexibility makes them easy to fit into existing environments and even make some things possible that were never possible before. For instance a large enterprise with several business units could deploy these advanced networking/security devices at the core and assign virtual security domains to each business unit while performing content filtering and firewalling between each virtual domain, thus segmenting the business units and maximizing the investment in core security devices.

One geologic shift that will occur thanks to the advent of these fourth generation security platforms is that networking vendors will be playing catch up, trying to patch more and more security functions into their under-powered devices or complicating their go to market message with a plethora of boxes while the security platform vendors will quickly and easily add networking functionality to their devices.

Fourth generation network security platforms will evolve beyond stand alone security appliances to encompass routing and switching as well. This new generation of devices will impact the networking industry [as] it scrambles to acquire the expertise in security and shift their business model from commodity switching and routing to value add networking and protection capabilities.

Let’s see…combine high-speed network processing whose routing/switching architecture was designed by the same engineers who designed Bay/Wellfleet’s core routers, add in a multi-core Intel processing/compute layer which utilizes virtualized, load-balanced security applications as a service layer that can be overlaid across a fast, reliable, resilient and highly-available network transport, and what do you get?

This:

Up to 32 GigE or 64 10/100 switching ports and 40 Intel cores in a single chassis today…and in Q3’07 you’ll also have the combination of our NextGen network processors which will provide up to 8x10GigE and 40xGigE with 64 MIPS Network Security cores combined with the same 40 Intel cores in the same chassis.

By the way, I consider routing and switching to be just table stakes, not market differentiators; in products like the one described above, this is just basic expected functionality.

Furthermore, in this so-called next generation of "security switches," the customer should be able to run both open source as well as best-in-breed COTS security applications on the platform and not constrain the user to a single vendor’s version of the truth running proprietary software.

—–

But wait, it only gets better…what I found equally as hysterical is the notion that Captain Obvious now has a sidekick!  It seems Alan Shimel has signed on as Richard’s Boy Wonder.  Alan’s suggesting that again, the magic bullet is Cobia and that because he can run a routing daemon and his appliance has more than a couple of ports, it’s a router and a switch as well as a multi-function UTM UNP swiss army knife of security & networking goodness — and he was the first to do it!  Holy marketing-schizzle Batman! 

I don’t need to re-hash this.  I blogged about it here before.

You can dress Newt Gingrich up as a chick but it doesn’t mean I want to make out with him…

This is cheap, cheap, cheap marketing on both your parts and don’t believe for a minute that customers don’t see right through it; perfuming pigs is not revolutionary, it’s called product marketing.

/Hoff

Profiling Data At the Network Layer and Controlling Its Movement Is a Bad Thing?

June 3rd, 2007

I’ve been watching what appears to be a car crash in slow motion, and for some strange reason I feel some affinity and/or responsibility for what is unfolding in the debate between Rory and Rob.

What motivated me to comment on this ongoing exploration of data-centric security was Rory’s last post, in which he appears to refer to some points I raised in my original post but remains bent on the idea that the crux of my concept was tied to DRM:

So .. am I anti-security? Nope I’m extremely pro-security. My feeling is however that the best way to implement security is in ways which it’s invisable to users. Every time you make ordinary business people think about security (eg, usernames/passwords) they try their darndest to bypass those requirements.

That’s fine and I agree.  The concept of ADAPT is completely transparent to "users."  This doesn’t obviate the need for someone to be responsible for characterizing what is important and relevant to the business in terms of "assets/data," attaching weight/value to them, and setting some policies regarding how to mitigate impact and ultimately risk.

Personally I’m a great fan of network segregation and defence in depth at the network layer. I think that devices like the ones crossbeam produce are very useful in coming up with risk profiles, on a network by network basis rather than a data basis and managing traffic in that way. The reason for this is that then the segregation and protections can be applied without the intervention of end-users and without them (hopefully) having to know about what security is in place.

So I think you’re still missing my point.  The example I gave of the X-Series using ADAPT takes a combination of best-of-breed security software components such as FW, IDP, WAF, XML, AV, etc. and provides you with segregation as you describe.  HOWEVER, the (r)evolutionary delta here is that ADAPT profiling of content, driven by policies invisible to the user at the network layer, allows one to make security decisions on content in context and to control how data moves.

So to use the phrase that I’ve seen in other blogs on this subject, I think that the "zones of trust" are a great idea, but the zone’s shouldn’t be based on the data that flows over them, but the user/machine that are used. It’s the idea of tagging all that data with the right tags and controlling it’s flow that bugs me.

…and thus it’s obvious that I completely and utterly disagree with this statement.  Without tying some sort of identity (pseudonymity) to the user/machine AND combining it with identifying the channels (applications) and the content (payload) you simply cannot make an informed decision as to the legitimacy of the movement/delivery of this data.

I used the case of being able to utilize client-side tagging as an extension to ADAPT, NOT as a dependency.  Go back and re-read the post; it’s a network-based transparent tagging process that attaches the tag to the traffic as it moves around the network.  I don’t understand why that would bug you?

So that’s where my points in the previous post came from, and I still reckon their correct. Data tagging and parsing relies on the existance of standards and their uptake in the first instance and then users *actually using them* and personally I think that’s not going to happen in general companies and therefore is not the best place to be focusing security effort…

Please explain this to me?  What standards need to exist in order to tag data — unless of course you’re talking about the heterogeneous exchange and integration of tagging data at the client side across platforms?  Not so if you do it at the network layer WITHIN the context of the architecture I outlined; the clients, switches, routers, etc. don’t need to know a thing about the process as it’s done transparently.

I wasn’t arguing that this is the end-all-be-all of data-centric security, but it’s worth exploring without deadweighting it to the negative baggage of DRM and the existing DLP/Extrusion Prevention technologies and methodologies that currently exist.

ADAPT is doable and real; stay tuned.

/Hoff

For Data to Survive, It Must ADAPT…

June 1st, 2007


Now that I’ve annoyed you by suggesting that network security will over time become irrelevant given lost visibility due to advances in OS protocol transport and operation, allow me to give you another nudge towards the edge and further reinforce my theories with some additionally practical data-centric security perspectives.

If any form of network-centric security solution is to succeed in adding value over time, the mechanics of applying policy and effecting disposition on flows as they traverse the network must be made on content in context.  That means we must get to a point where we can make “security” decisions based upon information and its “value” and classification as it moves about.

It’s not good enough to only make decisions on how flows/data should be characterized and acted on with the criteria being focused on the 5-tuple (header), signature-driven profiling or even behavioral analysis that doesn’t characterize the content in context of where it’s coming from, where it’s going and who (machine, virtual machine and “user”) or what (application, service) intends to access and consume it.

In the best of worlds, we like to be able to classify data before it makes its way through the IP stack and enters the network, and use this metadata as an attached descriptor of the ‘type’ of content that this data represents.  We could do this as the data is created by applications (thick or thin, rich or basic) either using the application itself or by using an agent (client-side) that profiles the data prior to storage or transmission.

Since I’m on my Jericho Forum kick lately, here’s how they describe how data ought to be controlled:

Access to data should be controlled by security attributes of the data itself.

  • Attributes can be held within the data (DRM/Metadata) or could be a separate system.
  • Access / security could be implemented by encryption.
  • Some data may have “public, non-confidential” attributes.
  • Access and access rights have a temporal component.

You would probably need client-side software to provide this functionality.  As an example, we do this today with email compliance solutions that have primitive versions of this sort of capability, forcing users to declare the classification of an email before they can hit the send button, or with the document info that can be created when one authors a Word document.

There are a bunch of ERM/DRM solutions in play today that are bandied about and sold as “compliance” solutions, but their value goes much deeper than that.  IP Leakage/Extrusion prevention systems (with or without client-side tie-ins) try to do similar things also.

Ideally, this metadata would be used as a fixed descriptor of the content that permanently attaches itself and follows that content around so it can be used to decide what content should be “routed” based upon policy.
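As a purely illustrative aside, here is a minimal Python sketch of that “fixed descriptor” idea: a classification label attached to content at creation time and read back later so something downstream can route on it.  The wrapper format, field names and example values are invented for this sketch; a real system would use the file format’s native metadata or an ERM/DRM product.

    # Hedged sketch of the "fixed descriptor" idea: a classification label is
    # attached to content when it is authored and read back downstream. The
    # wrapper format and field names are invented for illustration only.

    import json
    from datetime import datetime, timezone

    def wrap_with_descriptor(content: bytes, classification: str, author: str) -> bytes:
        descriptor = {
            "classification": classification,   # e.g. "Confidential", "HIPAA"
            "author": author,
            "created": datetime.now(timezone.utc).isoformat(),
        }
        header = json.dumps(descriptor).encode()
        # Length-prefixed JSON header prepended to the content blob.
        return len(header).to_bytes(4, "big") + header + content

    def read_descriptor(blob: bytes) -> dict:
        hdr_len = int.from_bytes(blob[:4], "big")
        return json.loads(blob[4:4 + hdr_len])

    blob = wrap_with_descriptor(b"Q3 salary data ...", "Confidential", "hr@example.com")
    print(read_descriptor(blob)["classification"])      # -> Confidential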

If we’re not able to use this file-oriented static metadata, we’d like then for the “network” (or something in/on it) to be able to dynamically profile content at wirespeed and characterize the data as it moves around the network from origin to destination in the same way.

So, this is where Applied Data & Application Policy Tagging (ADAPT) comes in.  ADAPT is an approach that can make use of existing and new technology to profile and characterize content (by using content matching, signatures, regular expressions and behavioral analysis in hardware or software) and then apply policy-driven information “routing” functionality as flows traverse the network, either by using 802.1 q-in-q VLAN tags (the open approach) or by applying a proprietary ADAPT tag-header as a descriptor to each flow as it moves around the network.

Think of it like a VLAN tag that describes the data within the packet/flow, defined however you see fit.

The ADAPT tag/VLAN is user defined and can use any taxonomy that best suits the types of content that are interesting; one might use asset classifications such as “confidential” or taxonomies such as “HIPAA” or “PCI” to describe what is contained in the flows.  One could combine and/or stack the tags, too.  The tag maps to one of these arbitrary categories, which could be fed by interpreting metadata attached to the data itself (if in file form) or dynamically by on-the-fly profiling at the network level.

As data moves across the network and across what we call boundaries (zones) of trust, the policy tags are parsed and disposition effected based upon the language governing the rules.  If you use the “open” version based on q-in-q VLANs, you have something on the order of 4096 VLAN IDs to choose from…more than enough to accommodate most asset classifications and still leave room for normal VLAN usage.  Enforcing the ACLs can be done by pretty much ANY modern switch that supports q-in-q, very quickly.

Just like an ACL for IP addresses or VLAN policies, ADAPT does the same thing for content routing, but using VLAN IDs (or the proprietary ADAPT header) to enforce it.
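To make the mapping concrete, here is a minimal, purely illustrative Python sketch of the idea: an arbitrary classification taxonomy mapped onto q-in-q outer VLAN IDs, with a toy stand-in for the content engine that assigns (stackable) tags to a flow.  The labels, VLAN numbers and matching logic are assumptions made for this sketch, not the actual ADAPT or X-Series implementation.

    # Purely illustrative sketch (not the actual ADAPT/X-Series implementation):
    # an arbitrary, user-defined taxonomy mapped onto q-in-q outer VLAN IDs,
    # plus a toy stand-in for the wirespeed content engine that assigns tags.

    ADAPT_TAGS = {                  # hypothetical labels -> outer VLAN IDs (1-4094 usable)
        "PUBLIC": 100,
        "CONFIDENTIAL": 200,
        "HIPAA": 300,
        "PCI": 400,
    }

    def classify_flow(payload: bytes) -> list:
        """Toy classifier: return the taxonomy labels the payload matches."""
        labels = []
        if b"SSN:" in payload or b"patient" in payload:
            labels.append("HIPAA")
        if b"PAN=" in payload:
            labels.append("PCI")
        return labels or ["PUBLIC"]

    def tag_flow(payload: bytes) -> list:
        """Return the (stackable) outer VLAN IDs to prepend to the flow."""
        return [ADAPT_TAGS[label] for label in classify_flow(payload)]

    print(tag_flow(b"patient record, SSN: 000-00-0000"))   # -> [300]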

To enable this sort of functionality, every switch/router in the network would need to be either q-in-q capable (which most switches are these days) or ADAPT-enabled (which would be difficult, since you’d need every network vendor to support the protocols).  Alternatively, you could use an overlay UTM security services switch sitting on top of the network plumbing, through which all traffic moving from one zone to another would be subject to the ADAPT policy since each flow has to go through said device.

Since the only device that needs to be ADAPT-aware is this UTM security services switch (see the example below), you can let the network do what it does best and utilize this solution to enforce the policy for you across these boundary transitions.  Said UTM security services switch needs an extremely high-speed content security engine that is able to characterize the data at wirespeed and add a tag to the frame as it moves through the switching fabric and is processed, prior to popping out onto the network.

Clearly this switch would have to have coverage across every network segment.  It wouldn’t work well in virtualized server environments or any topology where zoned traffic is not subject to transit through the UTM switch.

I’m going to be self-serving here and demonstrate this “theoretical” solution using a Crossbeam X80 UTM security services switch plumbed into a very fast, reliable, and resilient L2/L3 Cisco infrastructure.  It just so happens to have a wire-speed content security engine installed in it.  The reason the X-Series can do this is because once the flow enters its switching fabric, I own the ultimate packet/frame/cell format and can prepend any header functionality I like onto the structure to determine how it gets “routed.”

Take the example below where the X80 is connected to the layer-3 switches using 802.1q VLAN trunked interfaces.  I’ve made this an intentionally simple network using VLANs and L3 routing; you could envision a much more complex segmentation and routing environment, obviously.

This network is chopped up into 4 VLAN segments:

  1. General Clients (VLAN A)
  2. Finance & Accounting Clients (VLAN B)
  3. Financial Servers (VLAN C)
  4. HR Servers (VLAN D)

Each of the clients/servers in the respective VLANs default-routes out to an IP address belonging to the firewall cluster, which is proffered by the firewall application modules providing service in the X80.

Thus, to get from one VLAN to another, one must pass through the X80 and be profiled by this content security engine and whatever additional UTM services are installed in the chassis (such as firewall, IDP, AV, etc.).

Let’s say then that a user in VLAN A (General Clients) attempts to access one or more resources in VLAN D (HR Servers).

Using solely IP addresses and/or L2 VLANs, let’s say the firewall and IPS policies allow this behavior as the clients in that VLAN have a legitimate need to access the HR Intranet server.  However, let’s say that this user tries to access data that exists on the HR Intranet server but contains personally identifiable information that falls under the governance/compliance mandates of HIPAA.

Let us further suggest that the ADAPT policy states the following:

Rule   Source              Destination         ADAPT Descriptor        Action
==============================================================================
1      VLAN A (IP.1.1)     VLAN D (IP.3.1)     HIPAA, Confidential     Deny
2      VLAN B (IP.2.1)     VLAN C (IP.4.1)     PCI                     Allow

Using Rule 1 above, as the client makes the request, he transits from VLAN A to VLAN D.  The reply containing the requested information is profiled by the content security engine, which is able to characterize the data as containing information that matches our definition of either “HIPAA” or “Confidential” (purely arbitrary for the sake of this example).

This could be done by reading the metadata if it exists as an attachment to the content’s file structure, in cooperation with an extrusion prevention application running in the chassis, or in the case of ad-hoc web-based applications/services, done dynamically.

According to the ADAPT policy above, this data would then be either silently dropped, depending upon what “deny” means, or perhaps the user would be redirected to a webpage that informs them of a policy violation.

Rule 2 above would allow authorized IPs in VLAN B to access PCI-classified data on servers in VLAN C.
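For illustration only, here is a minimal Python sketch of how a boundary device might evaluate the two example rules above against a profiled flow; the Rule structure, the first-match/default-allow behavior and the label names are assumptions made for this sketch, not the actual X-Series policy engine.

    # Illustrative sketch only: first-match evaluation of the two example ADAPT
    # rules above at the boundary device. The Rule structure, label names and
    # default-allow behavior are assumptions made for this sketch.

    from dataclasses import dataclass

    @dataclass
    class Rule:
        src_vlan: str
        dst_vlan: str
        descriptors: set        # ADAPT labels the rule matches on
        action: str             # "deny" or "allow"

    POLICY = [
        Rule("VLAN A", "VLAN D", {"HIPAA", "Confidential"}, "deny"),   # Rule 1
        Rule("VLAN B", "VLAN C", {"PCI"}, "allow"),                    # Rule 2
    ]

    def disposition(src_vlan: str, dst_vlan: str, flow_labels: set) -> str:
        """First matching rule wins; permit by default if no ADAPT rule applies."""
        for rule in POLICY:
            if (rule.src_vlan == src_vlan and rule.dst_vlan == dst_vlan
                    and rule.descriptors & flow_labels):
                return rule.action
        return "allow"

    # The VLAN A -> VLAN D session whose reply is profiled as HIPAA hits Rule 1.
    print(disposition("VLAN A", "VLAN D", {"HIPAA"}))   # -> deny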

You can imagine how one could integrate IAM and extend the policies to include pseudonymity/identity as a function of access, also.  Or, one could profile the requesting application (browser, for example) to define whether or not this is an authorized application.  You could extend the actions to lots of stuff, too.

In fact, I alluded to it in the first paragraph, but if we back up a step and look at where consolidation of functions/services are being driven with virtualization, one could also use the principles of ADAPT to extend the ACL functionality that exists in switching environments to control/segment/zone access to/from virtual machines (VMs) of different asset/data/classification/security zones.

What this translates to is a workflow/policy instantiation that would use the same logic to prevent VM1 from communicating with VM2 if there were a “zone” mismatch; as we add data classification in context, you could have various levels of granularity that define access based not only on the VM but on the VM and the data trafficked by it.

Furthermore, assuming this service was deployed internally and you could establish a trusted CA with certs that would support transparent MITM SSL decrypts, you could do this (with appropriate scale) with encrypted traffic also.

This is data-centric security that uses the network when needed, the host when it can and the notion of both static and dynamic network-borne data classification to enforce policy in real-time.

/Hoff

Comments/Blogs on this entry you might be interested in but have no trackbacks set:

MCWResearch Blog

Rob Newby’s Blog

Alex Hutton’s Blog

Security Retentive Blog

My IPS (and FW, WAF, XML, DBF, URL, AV, AS) *IS* Bigger Than Yours Is…

May 23rd, 2007

Interop has been great thus far.  One of the most visible themes of this year’s show is (not surprisingly) the hyped emergence of 10Gb/s Ethernet.  10G isn’t new, but the market is now ripe with products supporting it: routers, switches, servers and, of course, security kit.

With this uptick in connectivity as well as the corresponding float in compute power thanks to Mr. Moore AND some nifty evolution of very fast, low latency, reasonably accurate deep packet inspection (including behavioral technology,) the marketing wars have begun on who has the biggest, baddest toys on the block.

Whenever this discussion arises, without question the notion of "carrier class" gets bandied about in order to essentially qualify a product as being able to withstand enormous amounts of traffic load without imposing latency. 

One of the most compelling reasons for these big pieces of iron (which are ultimately a means to an end to run software, afterall) is the service provider/carrier/mobile operator market which certainly has its fair share of challenges in terms of not only scale and performance but also security.

I blogged a couple of weeks ago regarding the resurgence of what can be described as "clean pipes" wherein a service provider applies some technology that gets rid of the big lumps upstream of the customer premises in order to deliver more sanitary network transport.

What’s interesting about clean pipes is that much of what security providers talk about today is only actually a small amount of what is actually needed.  Security providers, most notably IPS vendors, anchor the entire strategy of clean pipes around "threat protection" that appears somewhat one dimensional.

This normally means getting rid of what is generically referred to today as "malware," arresting worm propagation and quashing DoS/DDoS attacks.  It doesn’t speak at all to the need for things that aren’t purely "security" in nature such as parental controls (URL filtering,) anti-spam, P2P, etc.  It appears that in the strictest definition, these aren’t threats?

So, this week we’ve seen the following announcements:

  • ISS announces their new appliance that offers 6Gb/s of IPS
  • McAfee announces their new appliance that offers 10Gb/s of IPS

The trumpets sounded and the heavens parted as these products were announced touting threat protection via IPS at levels supposedly never approached before.  More appliances.  Lots of interfaces.  Big numbers.  Yet to be seen in action.  Also, to be clear a 2U rackmount appliance that is not DC powered and non-NEBS certified isn’t normally called "Carrier-Class."

I find these announcements interesting because even with our existing products (which run ISS and Sourcefire’s IDS/IPS software, by the way) we can deliver 8Gb/s of firewall and IPS today and have been able to for some time.

Lisa Vaas over @ eWeek just covered the ISS and McAfee announcements and she was nice enough to talk about our products and positioning.  One super-critical difference is that along with high throughput and low latency you get to actually CHOOSE which IPS you want to run — ISS, Sourcefire and shortly Check Point’s IPS-1.

You can then combine that with firewall, AV, AS, URL filtering, web app. and database firewalls and XML security gateways in the same chassis to name a few other functions — all best of breed from top-tier players — and this is what we call Enterprise and Provider-Class UTM folks.

Holistically approaching threat management across the entire spectrum is really important along with the speeds and feeds, and we’ve all seen what happens when more and more functionality is added to the feature stack — you turn a feature on and you pay for it performance-wise somewhere else.  It’s robbing Peter to pay Paul.  The processing requirements necessary to do IPS at 10G line rates are different when you add AV to the mix.

The next steps will be interesting and we’ll have to see how the switch and overlay vendors rev up to make their move to have the biggest on the block.  Hey, what ever did happen to that 3Com M160?

Then there’s that little company called Cisco…

{Ed: Oops.  I made a boo-boo and talked about some stuff I shouldn’t have.  You didn’t notice, did you?  Ah, the perils of the intersection of Corporate Blvd. and Personal Way!  Lesson learned. 😉 }

 

Clean Pipes – Less Sewerage or More Potable Water?

May 6th, 2007

Jeff Bardin over on the CSO blog pitched an interesting stake in the ground when he posited "Connectivity As A Utility: Where are My Clean Pipes?"

Specifically, Jeff expects that his (corporate?) Internet service functions in the same manner as his telephone service via something similar to a "do not call list."  Basically, he opts out by placing himself on the no-call list and telemarketers cease to call. Others might liken it to turning on a tap and getting clean, potable water; you pay for a utility and expect it to be usable.  All of it.

Many telecommunications providers want to charge you for having clean pipes, deploying a suite of DDoS services that you have to buy to enhance your security posture.  Protection of last mile bandwidth is very key to network availability as well as confidentiality and integrity. If I am subscribing for a full T1, shouldn’t I get the full T1 as part of the price and not just a segment of the T1? Why do I have to pay for the spam, probes, scans, and malicious activity that my telecommunications service provider should prevent at 3 miles out versus my having to subscribe to another service to attain clean pipes at my doorstep?

I think that most people would agree with the concept of clean pipes in principle.  I can’t think of any other utility where the service levels delivered are taken with such a lackadaisical best effort approach and where the consumer can almost always expect that some amount (if not the majority) of the utility is unusable. 

Over the last year, I’ve met with many of the largest ISPs, MSSPs, telcos and mobile operators on the planet, and all are in some phase of deploying some sort of clean pipes variant.  Gartner even predicts that a large amount of security will move "into the cloud."

In terms of adoption, EMEA is leaps and bounds ahead of the US and APAC in these sorts of services and will continue to be.  The relative oligopolies associated with smaller nation states allow for much more agile and flexible service definition and roll-outs — no less complex, mind you.  It’s incredible to see just how wide the gap is between what consumers (SME/SMB/mobile as well as large enterprise) are offered in EMEA as opposed to the good-ol’ U S of A.

However, the stark reality is that the implementation of clean pipes by your service provider(s) comes down to a balance of two issues: efficacy and economics, with each varying dramatically with the market being served; the large enterprise’s expectations and requirements look very, very different from the SME/SMB.

Let’s take a look at both of these elements.

ECONOMICS

If you asked most service providers about so-called clean pipes up to a year ago, you could expect to get an answer based upon a "selfish" initiative aimed at stopping wasteful bandwidth usage upstream in the service provider’s network, not really protecting the consumer. 

The main focus here is really on DDoS and viri/worm propagation.  Today, the closest you’ll come to "clean pipes" is usually some combination of the following services deployed both (still) at the customer premises as well as somewhere upstream:

  • DoS/DDoS
  • Anti-Virus
  • Anti-Spam
  • URL Filtering/Parental Controls
  • Managed Firewall/IDS/IPS

What is interesting about these services is that they basically define the same functions you can now get in those small little UTM boxes that consolidate security functionality at the "perimeter."  For the SME/SMB, the capital cost of these devices and the operational levies associated with their upkeep come out pretty close to the cost of the equivalent upstream services, and when you balance what you get in "good enough" services for this market against the overall availability of these "in the cloud" offerings, UTM makes more sense for many in the near term.

For the large enterprise, the story is different.  Outsourcing some level of security to an MSSP (or perhaps even the entire operation), or moving some amount upstream, is a matter of core competence: letting internal teams focus on the things that matter most while the low-hanging fruit is filtered out and monitored by someone else.  I describe that as filtering out the lumps.  Some enormous companies have outsourced not only their security functions but their entire IT operations and data center assets in this manner.  It’s not pretty, but it works.

I’m not sure they are any more secure than they were before, however.  The risk simply was transferred whilst the tolerance/appetite for it didn’t change at all.  Puzzling.

Is it really wrong to think that companies (you’ll notice I said companies, not "people" in the general sense) should pay for clean pipes?  I don’t think it is.  The reality is that for non-commercial subscribers such as home users, broadband or mobile users, some amount of bandwidth hygiene should be free — the potable water approach.

I think, however, that should a company which expects elevated service levels and commensurate guarantees of such, want more secure connectivity, they can expect to ante up.  Why?  Because the investment required to deliver this sort of service costs a LOT of money — both to spin up and to instantiate over time.  You’re going to have to pay for that somewhere.

I very much like Jeff’s statistics:

We stop on average for our organization nearly 600 million malicious emails per year at our doorstep averaging 2.8 gigabytes of garbage per day. You add it up and we are looking at nearly a terabyte of malicious email we have to stop. Now add in probes and scans against HTTP and HTTPS sites and the number continues to skyrocket.

Again, even though Jeff’s organization isn’t small by any means, the stuff he’s complaining about here is really the low-hanging fruit.  It doesn’t make a dent in the targeted, malicious and financially-impacting security threats that really demand a level of service no service provider will be able to deliver without a huge cost premium.

I won’t bore you with the details, but the level of high-availability, resilience, performance, manageability, and provisioning required to deliver even this sort of service is enormous.  Most vendors simply can’t do it and most service providers are slow to invest in proprietary solutions that won’t scale economically with the operational models in place.

Interestingly, vendors such as McAfee even as recently as 2005 announced with much fanfare that they were going to deliver technology, services and a united consortium of participating service providers with the following lofty clean pipe goals (besides selling more product, that is):

The initiative is one part of a major product and services push from McAfee, which is developing its next generation of carrier-grade security appliances and ramping up its enterprise security offerings with NAC and secure content management product releases planned for the first half of next year, said Vatsal Sonecha, vice president of market development and strategic alliances at McAfee, in Santa Clara, Calif.

Clean Pipes will be a major expansion of McAfee’s managed services offerings. The company will sell managed intrusion prevention; secure content management; vulnerability management; malware protection, including anti-virus, anti-spam and anti-spyware services; and mobile device security, Sonecha said.

McAfee is working with Cable and Wireless PLC, British Telecommunications PLC (British Telecom), Telefónica SA and China Network Communications (China Netcom) to tailor its offerings through an invitation-only group it calls the Clean Pipes Consortium.

http://www.eweek.com/article2/0,1895,1855188,00.asp

Look at all those services!  What have they delivered as a service in the cloud or clean pipes?  Nada. 

The chassis-based products which were to deliver these services never materialized and neither did the services.  Why?  Because it’s really damned hard to do correctly.  Just ask Inkra, Nexi, CoSine, etc.  Or you can ask me.  The difference is, we’re still in business and they’re not.  It’s interesting to note that every one of those "consortium members" with the exception of Cable and Wireless are Crossbeam customers.  Go figure.

EFFICACY

Once the provider starts filtering at the ingress/egress, one must trust that the things being filtered won’t have an impact on performance — or confidentiality, integrity and availability.  Truth be told, as simple as it seems, it’s not just about raw bandwidth.  Service levels must be maintained and the moment something that is expected doesn’t make its way down the pipe, someone will be screaming bloody murder for "slightly clean" pipes.

Ask me how I know.  I’ve lived through inconsistent application of policies, non-logged protocol filtering, dropped traffic and asymmetric issues introduced by on-prem and in-the-cloud MSSP offerings.  Once the filtering moves past your prem. as a customer, your visibility does too.  Those fancy dashboards don’t do a damned bit of good, either.  Ever consider the forensic impact?

Today, if you asked a service provider what constitutes their approach to clean pipes, most will refer you back to the same list I referenced above:

  • DoS/DDoS
  • Anti-Virus
  • Anti-Spam
  • URL Filtering/Parental Controls
  • Managed Firewall/IDS/IPS

The problem is that most of these solutions are disparate point products run by different business units at different parts of the network.  Most are still aimed at the perimeter service — it’s just that the perimeter has moved outward a notch in the belt.

Look, for the SME/SMB (or mobile user), "good enough" is, for the most part, good enough.  Having an upstream provider filter out a bunch of spam and viri is a good thing, and most firewall rules in place in the SME/SMB block everything but a few inbound ports to DMZ hosts (if there are any) and allow everything from the inside to go out.  Not very complicated, and it doesn’t take a rocket scientist to see that, from the perspective of what is at risk, this service doesn’t pay off handsomely.

For the large enterprise, I’d say that if you are going to expect that operational service levels will be met, think again.  What happens when you introduce web services, SOA and heavy XML onto externally-exposed network stubs?  What happens when Web2/3/4.x technologies demand more and more security layers deployed alongside the mechanics and messaging of the service?

You can expect problems, and the lack of transparency will be an issue in all but the simplest of cases.

Think your third party due diligence requirements are heady now?  Wait until this little transference of risk gets analyzed when something bad happens — and it will.  Oh how quickly the pendulum will swing back to managing this stuff in-house again.

This model doesn’t scale and it doesn’t address the underlying deficiencies in the most critical elements of the chain: applications, databases and end-point threats such as co-opted clients as unwilling botnet participants.

But to Jeff’s point, if he didn’t have to spend money on the small stuff above, he could probably spend it elsewhere where he needs it most.

I think services in the cloud/clean pipes makes a lot of sense.  I’d sure as hell like to invest less in commoditizing functions at the perimeter and on my desktop.  I’m just not sure we’re going to get there anytime soon.

/Hoff



Unified Risk Management (URM) and the Secure Architecture Blueprint

May 6th, 2007

Gunnar once again hits home with an excellent post defining what he calls the Security Architecture Blueprint (SAB):

The purpose of the security architecture blueprint is to bring focus to the key areas of concern for the enterprise, highlighting decision criteria and context for each domain. Since security is a system property it can be difficult for Enterprise Security groups to separate the disparate concerns that exist at different system layers and to understand their role in the system as a whole. This blueprint provides a framework for understanding disparate design and process considerations; to organize architecture and actions toward improving enterprise security.

(Diagram: Security Architecture Blueprint)

I appreciated the graphical representation of the Security Architecture Blueprint, as it bears some striking parallels to the diagram I created about a year ago to demonstrate a similar concept that I call the Unified Risk Management (URM) framework.

(Ed.: URM focuses on business-driven information survivability architectures and describes risk tolerance as much as it does risk management.)

Here are both the textual and graphical representations of URM: 

Managing risk is fast becoming a lost art.  As the pace of technology’s evolution and adoption overtakes our ability to assess and manage its impact on the business, the overrun has created massive governance and operational gaps resulting in exposure and misalignment.  This has caused organizations to lose focus on the things that matter most: the survivability and ultimate growth of the business.

Overwhelmed with the escalation of increasingly complex threats, the alarming ubiquity of vulnerable systems and the constant onslaught of rapidly evolving exploits, security practitioners are ultimately forced to choose between the unending grind of tactical practices focused on deploying and managing security infrastructure versus the strategic art of managing and institutionalizing risk-driven architecture as a business process.

URM illustrates the gap between pure technology-focused information security infrastructure and business-driven, risk-focused information survivability architectures, and shows how this gap is bridged using sound risk management practices in conjunction with best of breed consolidated Unified Threat Management (UTM) solutions as the technology anchor tenant in a consolidated risk management model.

URM demonstrates how governance organizations, business stakeholders, network and security teams can harmonize their efforts to produce a true business protection and enablement strategy utilizing best of breed consolidated UTM solutions as a core component to effectively arrive at managing risk and delivering security as an on-demand service layer at the speed of business.  This is a process we call Unified Risk Management or URM.

(Diagram: Unified Risk Management model)

(Updated on 5/8/07 with updates to URM Model)

The point of URM is to provide a holistic framework against which one may measure and effectively manage risk.  Each one of the blocks above has a set of sub-components that breaks out the specifics of each section.  Further, my thinking on URM became the foundation of my exploration of the Security Services Oriented Architecture (SSOA) model. 

You might also want to check out Skybox Security’s Security Risk Management (SRM) Blueprint.

Thanks again to Gunnar as I see some gaps that I have to think about based upon what I read in his SAB document.

/Hoff