Archive for the ‘Security Innovation & Imagination’ Category

Security Application Instrumentation: Reinventing the Wheel?

June 19th, 2007 No comments

Two of my favorite bloggers engaged in a trackback love-fest lately on the topic of building security into applications; specifically, enabling applications as a service delivery function to be able to innately detect, respond to and report attacks.

Richard Bejtlich wrote a piece called Security Application Instrumentation and Gunnar Peterson chimed in with Building Coordinated Response In – Learning from the Anasazis.  As usual, these are two extremely well-written pieces that arrive at a well-constructed conclusion: we need a standard methodology and protocol for this reporting.  I think that this exquisitely important point will be missed by most of the security industry — specifically vendors.

While security vendors’ hearts are in the right place (stop laughing), the "security is the center of the universe" approach to telemetry and instrumentation will continue to fall on deaf ears because there are no widely-adopted, standard ways of reporting across platforms, operating systems and applications that truly integrate into a balanced scorecard/dashboard demonstrating security’s contribution to service availability across the enterprise.   I know what you’re thinking…"Oh God, he’s going to talk about metrics!  Ack!"  No.  That’s Andy’s job and he does it much better than I do.

This mess is exactly why the SIEM market emerged: to clean up the cesspool of log dumps that spew forth from devices that are, by all approximations, utterly unaware of the rest of the ecosystem in which they participate.  Take all these crappy log dumps via Syslog and SNMP (which can still be proprietary), normalize where possible, correlate "stuff" and communicate that something "bad" or "abnormal" has occurred.
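As a toy illustration of that normalize-and-correlate step, here’s a minimal Python sketch.  The raw messages, regex patterns, schema fields and device names are all invented for the example — they are not real product formats:

```python
import re
from collections import defaultdict
from datetime import datetime, timezone

# Two hypothetical raw messages from different devices; real formats vary
# wildly, which is exactly the problem.
RAW_EVENTS = [
    "<134>Jun 19 04:12:07 fw01 %ASA-4-106023: Deny tcp src outside:203.0.113.9/4123 dst dmz:10.0.0.5/22",
    "<86>Jun 19 04:12:09 ids02 ALERT sig=2010935 src=203.0.113.9 dst=10.0.0.5 msg='SSH brute force'",
]

# Toy per-device patterns standing in for a SIEM's parser library.
PATTERNS = [
    (re.compile(r"(?P<host>\S+) %ASA-\d-(?P<sig>\d+): (?P<action>\w+) (?P<proto>\w+) "
                r"src \S+:(?P<src>[\d.]+)/\d+ dst \S+:(?P<dst>[\d.]+)"), "firewall"),
    (re.compile(r"(?P<host>\S+) ALERT sig=(?P<sig>\d+) src=(?P<src>[\d.]+) "
                r"dst=(?P<dst>[\d.]+) msg='(?P<msg>[^']*)'"), "ids"),
]

def normalize(raw):
    """Map a raw syslog line onto one common schema, or None if unrecognized."""
    for pattern, device_type in PATTERNS:
        m = pattern.search(raw)
        if m:
            f = m.groupdict()
            return {
                "observed_at": datetime.now(timezone.utc).isoformat(),
                "device_type": device_type,
                "reporter": f.get("host"),
                "src_ip": f.get("src"),
                "dst_ip": f.get("dst"),
                "signature": f.get("sig"),
                "summary": f.get("msg") or f.get("action"),
            }
    return None

events = [e for e in (normalize(r) for r in RAW_EVENTS) if e]

# Naive "correlation": notice that two unrelated devices reported the same flow.
by_flow = defaultdict(list)
for e in events:
    by_flow[(e["src_ip"], e["dst_ip"])].append(e["device_type"])
```

Even this toy shows the pain: every device needs its own hand-built parser, and nothing in the normalized output says anything about what the flow means to the business.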

How does that communicate what this really means to the business, its ability to function, deliver service and ultimately the impact on risk posture?  It doesn’t, because security reporting is the little kid wearing a dunce hat standing in the corner because it doesn’t play well with others.

Gunnar stated this well:

Coordinated detection and response is the logical conclusion to defense in depth security architecture. I think the reason that we have standards for authentication, authorization, and encryption is because these are the things that people typically focus on at design time. Monitoring and auditing are seen as runtime operational activities, but if there were standards-based ways to communicate security information and events, then there would be an opportunity for the tooling and processes to improve, which is ultimately what we need.

So, is the call for "security application instrumentation" doomed to fail because we in the security industry will try to reinvent the wheel with proprietary solutions, suggesting that the current toolsets and frameworks available as part of a much larger enterprise management and reporting strategy are not enough?

Bejtlich remarked that the mechanisms which report application state must be built into the application itself and must report more than just performance:

Today we need to talk about applications defending themselves. When they are under attack they need to tell us, and when they are abused, subverted, or breached they would ideally also tell us.

I would like to see the next innovation be security application instrumentation, where you devise your application to report not only performance and fault logging, but also security and compliance logging. Ideally the application will be self-defending as well, perhaps offering less vulnerability exposure as attacks increase (being aware of DoS conditions of course).

I would agree, but I get the feeling that unless we integrate this telemetry and its output metrics into response systems whose primary role is to report on delivery and service levels — of which "security" is a huge factor — the relevance of this data within the visible single pane of glass of enterprise management is lost.
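To make the idea concrete, here’s a minimal Python sketch of what such instrumentation might look like inside an application: one structured channel carrying performance, fault and security events in a common schema.  Everything here — the event fields, the `emit`/`instrumented` helpers, the authorization check — is hypothetical illustration, not any vendor’s actual API:

```python
import json
import logging
import time
from functools import wraps

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("app.telemetry")
EVENTS = []  # kept in memory here so the example is self-contained

def emit(event_type, **fields):
    # One structured channel for performance, fault AND security events,
    # so the management layer sees a single schema.
    record = {"event": event_type, **fields}
    EVENTS.append(record)
    log.info(json.dumps(record))
    return record

def instrumented(fn):
    # Wrap a service entry point: report latency the way an APM tool would,
    # and report security-relevant outcomes (e.g. authorization failures)
    # through the very same channel.
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        try:
            result = fn(*args, **kwargs)
            emit("perf", op=fn.__name__, outcome="ok",
                 ms=round((time.monotonic() - start) * 1000, 2))
            return result
        except PermissionError as exc:
            emit("security", op=fn.__name__, outcome="denied", detail=str(exc))
            raise
    return wrapper

@instrumented
def read_record(user, record_id):
    # Hypothetical authorization check, purely for illustration.
    if user != "alice":
        raise PermissionError(f"user {user} may not read record {record_id}")
    return {"id": record_id, "owner": user}

read_record("alice", 42)
try:
    read_record("mallory", 42)
except PermissionError:
    pass  # the denial was already reported on the telemetry channel
```

The point isn’t the ten lines of decorator; it’s that the security event and the performance event land in the same stream, in the same schema, where an enterprise management tool could consume both.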

So, rather than reinvent the wheel and incrementally "innovate," why don’t we take something like the Open Group’s Application Response Measurement (ARM) standard, make sure we subscribe to a telemetry/instrumentation format that speaks to the real issues and enable these systems to massage our output in terms of the language of business (risk?) and work to extend what is already a well-defined and accepted enterprise response management toolset to include security?

To wit:

The Application Response Measurement (ARM) standard describes a common method for integrating enterprise applications as manageable entities. The ARM standard allows users to extend their enterprise management tools directly to applications creating a comprehensive end-to-end management capability that includes measuring application availability, application performance, application usage, and end-to-end transaction response time.

Or how about something like EMC’s Smarts:

Maximize availability and performance of mission-critical IT resources—and the business services they support. EMC Smarts software provides powerful solutions for managing complex infrastructures end-to-end, across technologies, and from the network to the business level. With EMC Smarts innovative technology you can:

  • Model components and their relationships across networks, applications, and storage to understand effect on services.
  • Analyze data from multiple sources to pinpoint root cause problems—automatically, and in real time.
  • Automate discovery, modeling, analysis, workflow, and updates for dramatically lower cost of ownership.


…add security into these and you’ve got a winner.   

There are already industry standards (or at least huge market momentum) around intelligent, automated IT infrastructure, resource management and service-level reporting.  We should get behind a standard that elevates the perspective of how security contributes to service delivery (and dare I say risk management) instead of trying to reinvent the wheel…unless you happen to like the Hamster Wheel of Pain…


Really, There’s More to Security than Admission/Access Control…

June 16th, 2007 2 comments

Dr. Joseph Tardo over at the Nevis Networks Illuminations blog composed a reasonably well-balanced commentary regarding one or more of my posts in which I was waxing philosophical about my beliefs regarding keeping the network plumbing dumb and overlaying security as a flexible, agile, open and extensible services layer.

It’s clear he doesn’t think this way, but I welcome the discourse.  So let me make something clear:

Realistically, and especially in non-segmented flat networks, I think there are certain low-level security functions that will do well being served up by switching infrastructure as security functionality commoditizes, but I’m not yet quite sure how or where I draw the line between utility and intelligence.  I do, however, think that NAC is one of those utility services.

I’m also unconvinced that access-grade, wiring closet switches are architected to scale in functionality, efficacy or performance to provide any more value or differentiation, beyond port density, than the normal bolt-on appliances which continue to cause massive operational and capital expenditure due to continued forklifts over time.  Companies like Nevis and Consentry quietly admit this too, which is why they have both "secure switches" AND appliances that sit on top of the network…

Joseph suggested he was entering into a religious battle in which he summarized many of the approaches to security that I have blogged about previously and I pointed out to him on his blog that this is exactly why I practice polytheism 😉 :

In case you aren’t following the religious wars going on in the security blogs and elsewhere, let me bring you up to date.

It goes like this. If you are in the client software business, then security has to be done in the endpoints and the network is just dumb “plumbing,” or rather, it might as well be because you can’t assume anything about it. If you sell appliances that sit here and there in the network, the network sprouts two layers, with the “plumbing” part separated from the “intelligence.” Makes sense, I guess. But if you sell switches and routers then the intelligence must be integrated in with the infrastructure. Now I get it. Or maybe I’m missing the point, what if you sell both appliances and infrastructure?

I believe that we’re currently forced to deploy defense in depth due to the shortcomings of today’s solutions.  I believe the "network" will not and cannot deliver all the security required.  I believe we’re going to have to invest more in secure operating systems and protocols.  I further believe that we need to be data-centric in our application of security.  I do not believe in single-point product "appliances" that are fundamentally functionally handicapped.  I do believe in a delivery mechanism that provides security that matters across the network.

Again, the most important difference between what I believe and what Joseph points out above is that the normal class of "appliances" he’s trying to suggest I advocate simply aren’t what I advocate at all.  In fact, one might surprisingly confuse the solutions I do support as "infrastructure" — they look like high-powered switches with a virtualized blade architecture integrated into the solution.

It’s not an access switch, it’s not a single function appliance and it’s not a blade server and it doesn’t suffer from the closed proprietary single vendor’s version of the truth.  To answer the question, if you sell and expect to produce both secure appliances and infrastructure, one of them will come up short.   There are alternatives, however.

So why leave your endpoints, the ones that have all those vulnerabilities that created the security industry in the first place, to be hit on by bots, “guests,” and anyone else that wants to? I don’t know about you, but I would want both something on the endpoint, knowing it won’t be 100% but better than nothing, and also something in the network to stop the nasty stuff, preferably before it even got in.

I have nothing to disagree with in the paragraph above — short of the example of mixing active network defense with admission/access control in the same sentence; I think that’s confusing two points.   Back to the religious debate as Joseph drops back to the "Nevis is going to replace all switches in the wiring closet" approach to security via network admission/access control:

Now, let’s talk about getting on the network. If the switches are just dumb plumbing they will blindly let anyone on, friend or foe, so you at least need to beef up the dumb plumbing with admission enforcement points. And you want to put malware sensors where they can be effective, ideally close to entry points, to minimize the risk of having the network infrastructure taken down. So, where do you want to put the intelligence, close to the entry enforcement points or someplace further in the bowels of the network where the dumb plumbing might have plugged-and-played a path around your expensive intelligent appliance?

That really depends upon what you’re trying to protect; the end point, the network or the resources connected to it.  Also, I won’t/can’t argue about wanting to apply access/filtering (sounds like IPS in the above example) controls closest to the client at the network layer.  Good design philosophy.   However, depending upon how segmented your network is, the types, value and criticality of the hosts in these virtual/physical domains, one may choose to isolate by zone or VLAN and not invest in yet another switch replacement at the access layer.

If the appliance is to be effective, it has to sit at a choke point and really be an enforcement point. And it has to have some smarts of its own. Like the secure switch that we make.

Again, that depends upon your definition of enforcement and applicability.  I’d agree that in flat networks, you’d like to do it at the port/host level, though replacing access switches to do so is usually not feasible in large networks given investments in switching architectures.  Typical fixed configuration appliances overlaid don’t scale, either.

Furthermore, depending upon your definition of what an enforcement zone and its corresponding diameter is (port, VLAN, IP subnet), you may not care.  So putting that "appliance" in place may not be as foreboding as you wager, especially if it overlays across these boundaries satisfactorily.

We will see how long it takes before these new-fangled switch vendors that used to be SSL VPNs, then became IPS appliances, and have now "evolved" into NAC solutions become whatever the next buzzword/technology of tomorrow represents…especially now with Cisco’s revitalized technology refresh for "secure" access switches in the wiring closets.  Caymas, Array, and Vernier (amongst many) are perfect examples.

When it comes down to it, in the markets Crossbeam serves — and especially the largest enterprises — they are happy with their switches; they just want the best security choice on top, provided in a consolidated, agile and scalable architecture.



McAfee’s Bolin suggests Virtual “Sentinel” Security Model for Uncompromised Security

June 15th, 2007 2 comments

Christopher Bolin, McAfee’s EVP & CTO, blogged an interesting perspective on utilizing virtualization technology to instantiate security functionality in an accompanying VM on a host to protect one or more VMs on the same host.  This approach differs from the standard approach of placing host-based security controls on each VM and from the intra-VS IPS models from companies such as Reflex (blogged about that here.)

I want to flesh out some of these concepts with a little more meat attached.

He defines the concept of running security software alongside the operating system it is protecting as "the sentinel":

In this approach, the security software itself resides in its own virtual machine outside and parallel to the system it is meant to protect, which could be another virtual machine running an operating system such as Windows. This enables the security technology to look omnisciently into the subject OS and its operation and take appropriate action when malware or anomalous behavior is detected.

Understood so far…with some caveats, below.

The security software would run in an uncompromised environment monitoring in real-time, and could avoid being disabled, detected or deceived (or make the bad guys work a lot harder.)

While this supposedly uncompromised/uncompromisable OS could exist, how are you going to ensure that the underlying "routing" traffic flow control actually forces the traffic through the Sentinel VM in the first place? If the house of cards rests on this design element, we’ll be waiting a while…and adding latency.  See below.

This kind of security is not necessarily a one-to-one relationship between sentinel and OSs. One physical machine can run several virtual machines, so one virtual sentinel could watch and service many virtual machines.
I think this is a potentially valid and interesting alternative to deploying more and more host-based security products (which seems odd coming from McAfee) or additional physical appliances, but there are a couple of issues with this premise, some of which Bolin points out, others I’ll focus on here:

  1. Unlike other applications which run in a VM and just require a TCP/IP stack, security applications are extremely topology sensitive.  The ability to integrate sentinels in a VM environment with other applications/VM’s at layer 2 is extremely difficult, especially if these security applications are to act "in-line." 

    Virtualizing transport while maintaining topology context is difficult and when you need to then virtualize the policies based upon this topology, it gets worse.  Blade servers have this problem; they have integrated basic switch/load balancing modules, but implementing policy-driven "serialization" and "parallelization" (which is what we call it @ Crossbeam) is very, very hard.

  2. The notion that the sentinel can "…look omnisciently into the subject OS and its operation and take appropriate action when malware or anomalous behavior is detected" from a network perspective is confusing.  If you’re outside the VM/Hypervisor, I don’t understand the feasibility of this approach.  This is where Blue Lane’s VirtualShield ESX plug-in kicks ass — it plugs into the Hypervisor and protects not only directed traffic to the VM but also intra-VM traffic with behavioral detection, not just signatures.

  3. Resource allocation of the sentinel security control as a VM poses a threat vector inasmuch as one could overwhelm/DoS the sentinel VM, compromising the security/availability of the entire system; the controls protecting the VMs are competing for the same virtualized resources as the VMs they protect.
  4. As Bolin rightfully suggests, a vulnerability in the VM/VMM/Chipsets could introduce a serious set of modeling problems.

I maintain that securing virtualization by virtualizing security is nascent at best, but as Bolin rightfully demonstrates, there are many innovative approaches being discussed to address these new technologies.


Evan Kaplan and Co. (Aventail) Take the Next Step

June 12th, 2007 2 comments

So Aventail’s being acquired by SonicWall?  I wish Evan Kaplan and his team well and trust that SonicWall will do their best to integrate the best of Aventail’s technology into their own.  It’s interesting that this news pops up today because I was just thinking about Aventail’s CEO today as part of a retrospective of security over the last 10+ years.

I’ve always admired Evan Kaplan’s messaging from afar and a couple of months ago I got to speak with him for an hour or so.  For someone who has put his stake in the ground for the last 11 years as a leader in the SSL VPN market, you might be surprised to know that Evan’s perspective on the world of networking and security isn’t limited to "tunnel vision" as one might expect.

One of my favorite examples of Evan’s beliefs is this article in Network World back in 2005 that resonated so very much with me then and still does today.  The title of the piece is "Smart Networks are Not a Good Investment" and was a "face off" feature between Evan and Cisco’s Rob Redford.

Evan’s point here — and what resonates at the core of what I believe should happen to security — is that the "network" ought to be separated into two strata, the "plumbing" (routers, switches, etc.) and intelligent "service layers" (security being one of them.)

Evan calls these layers "connectivity" and "intelligence."

The plumbing should be fast, resilient, reliable and robust, providing the connectivity; the service layers should be agile, open, interoperable, flexible and focused on delivering service as a core competency.

Networking vendors who want to leverage the footprint they already have in port density and extend their stranglehold with a single vendor’s version of the truth obviously disagree with this approach.  So do those who ultimately suggest that "good enough" is good enough.

Evan bangs the drum:

Network intelligence as promoted by the large network vendors is the Star Wars defense system of our time – monolithic, vulnerable and inherently unreliable. Proponents of smart networks want to extend their hegemony by incorporating application performance and security into a unified, super-intelligent infrastructure. They want to integrate everything into the network and embed security into every node. In theory, you would then have centralized control and strong perimeter defense.

Yup.  As I blogged recently, "Network Intelligence is an Oxymoron."  The port puppets will have you believe that you can put all this intelligence in the routers and switches and solve all the problems these platforms were never designed to solve, whilst simultaneously scaling performance and features against skyrocketing throughput requirements, extreme latency thresholds, emerging technologies and an avalanche of compounding threats and vulnerabilities…all from one vendor, of course.

While on the surface this sounds reasonable, a deeper look reveals that this kind of approach presents significant risk for users and service providers. It runs counter to the clear trends in network communication, such as today’s radical growth in broadband and wireless networks, and increased virtualization of corporate networks through use of public infrastructure. As a result of these trends, much network traffic is accessing corporate data centers from public networks rather than the private LAN, and the boundaries of the enterprise are expanding. Companies must grow by embracing these trends and fully leveraging public infrastructure and the power of the Internet.

Exactly.  Look at BT’s 21CN network architecture as a clear and unequivocal demonstration of this strategy; a fantastic high-performance, resilient and reliable foundational transport coupled with an open, agile, flexible and equally high-performance and scalable security service layer.  If BT is investing 18 billion pounds of their money in a strategy like this and doesn’t reckon it can rely on "embedded" security, why would you?

vendors are right in recognizing and trying to address the two fundamental challenges of network communications: application performance and security. However, they are wrong in believing the best way to address these concerns is to integrate application performance and security into the underlying network.

The alternative is to avoid building increasing intelligence into the physical network, which I call the connectivity plane, and building it instead into a higher-level plane I call the intelligence plane.

The connectivity plane covers end-to-end network connectivity in its broadest sense, leveraging IPv4 and eventually IPv6. This plane’s characteristics are packet-level performance and high availability. It is inherently insecure but incredibly resilient. The connectivity plane should be kept highly controlled and standardized, because it is heavy to manage and expensive to build and update. It should also be kept dumb, with change happening slowly.

He’s on the money here again.  Let the network evolve at its own pace using standards-based technology and allow innovation to deliver service at the higher levels.  The network evolves much more slowly, at a pace that demands stability.  The experientially-focused intelligence layer needs to be much more nimble and agile, taking advantage of opportunities and responding to rapidly emerging technologies and threats/vulnerabilities.

Look at how quickly solutions like DLP and NAC have stormed onto the market.  If we had to wait for Cisco to get their butt in gear and deliver solutions that actually work as an embedded function within the "network," we’d be out of business by now.

I don’t have the time to write it again, but the security implications of having the fox guarding the henhouse by embedding security into the "fabric" is scary.  Just look at the number of security vulnerabilities Cisco has had in their routing, switching, and security products in the last 6 months.  Guess what happens when they’re all one?   I sum it up here and here as examples.

Conversely, the intelligence plane is application centric and policy driven, and is an overlay to the connectivity plane. The intelligence plane is where you build relationships, security and policy, because it is flexible and cost effective. This plane is network independent, multi-vendor and adaptive, delivering applications and performance across a variety of environments, systems, users and devices. The intelligence plane allows you to extend the enterprise boundary using readily available public infrastructure. Many service and product vendors offer products that address the core issues of security and performance on the intelligence plane.

Connectivity vendors should focus their efforts on building faster, easier to manage and more reliable networks. Smart networks are good for vendors, not customers.

Wiser words have not been spoken…except by me agreeing with them, of course 😉  Not too shabby for an SSL VPN vendor way back in 2005.

Evan, I do hope you won’t disappear and will continue to be an outspoken advocate of flushing the plumbing…best of luck to you and your team as you integrate into SonicWall.


Redux: Liability of Security Vulnerability Research…The End is Nigh!

June 10th, 2007 3 comments

I posited the potential risks of vulnerability research in this blog entry here.   Specifically I asked about reverse engineering and implications related to IP law/trademark/copyright, but the focus was ultimately on the liabilities of the researchers engaging in such activities.

Admittedly I’m not a lawyer and my understanding of some of the legal and ethical dynamics is amateur at best, but what was very interesting to me was the breadth of the replies from both the on- and off-line responses to my request for opinions on the matter.

I was contacted by white, gray and blackhats regarding this meme and the results were divergent across legal, political and ideological lines.

KJH (Kelly Jackson Higgins — hey, Kel!) from Dark Reading recently posted an interesting collateral piece titled "Laws Threaten Security Researchers" in which she outlines the results of a CSI working group chartered to investigate and explore the implications that existing and pending legislation would have on vulnerability research and those who conduct it.  Folks like Jeremiah Grossman (who comments on this very story, here) and Billy Hoffman participate on this panel.

What is interesting is the contrast in commentary between how folks responded to my post versus these comments based upon the CSI working group’s findings:

In the report, some Web researchers say that even if they find a bug accidentally on a site, they are hesitant to disclose it to the Website’s owner for fear of prosecution. "This opinion grew stronger the more they learned during dialogue with working group members from the Department of Justice," the report says.

I believe we’ve all seen the results of some overly-litigious responses on behalf of companies against whom disclosures related to their products or services have been released — for good or bad.

Ask someone like Dave Maynor if the pain is ultimately worth it.  Depending upon your disposition, your mileage may vary. 

That revelation is unnerving to Jeremiah Grossman, CTO and founder of WhiteHat Security and a member of the working group. "That means only people that are on the side of the consumer are being silenced for fear of prosecution," and not the bad guys.

"[Web] researchers are terrified about what they can and can’t do, and whether they’ll face jail or fines," says Sara Peters, CSI editor and author of the report. "Having the perspective of legal people and law enforcement has been incredibly valuable. [And] this is more complicated than we thought."

This sort of response didn’t come across that way at all from folks who both privately or publicly responded to my blog; most responses were just the opposite, stated with somewhat of a sense of entitlement and immunity.   I expect to query those same folks again on the topic. 

Check this out:

The report discusses several methods of Web research, such as gathering information off-site about a Website or via social engineering; testing for cross-site scripting by sending HTML mail from the site to the researcher’s own Webmail account; purposely causing errors on the site; and conducting port scans and vulnerability scans.

Interestingly, DOJ representatives say that using just one of these methods might not be enough for a solid case against a [good or bad] hacker. It would take several of these activities, as well as evidence that the researcher tried to "cover his tracks," they say. And other factors — such as whether the researcher discloses a vulnerability, writes an exploit, or tries to sell the bug — may factor in as well, according to the report.

Full disclosure and to whom you disclose it and when could mean the difference between time in the spotlight or time in the pokey!


For Data to Survive, It Must ADAPT…

June 1st, 2007 2 comments


Now that I’ve annoyed you by suggesting that network security will over time become irrelevant given lost visibility due to advances in OS protocol transport and operation, allow me to give you another nudge towards the edge and further reinforce my theories with some additionally practical data-centric security perspectives.

If any form of network-centric security solution is to succeed in adding value over time, the mechanics of applying policy and effecting disposition on flows as they traverse the network must be made on content in context.  That means we must get to a point where we can make “security” decisions based upon information and its “value” and classification as it moves about.

It’s not good enough to only make decisions on how flows/data should be characterized and acted on with criteria focused on the 5-tuple (header), signature-driven profiling, or even behavioral analysis that doesn’t characterize the content in the context of where it’s coming from, where it’s going and who (machine, virtual machine and “user”) or what (application, service) intends to access and consume it.
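To illustrate the difference, here’s a small hypothetical Python sketch contrasting a 5-tuple-only rule with one that also weighs content classification and the who/what context.  All field names and the sample policy are invented for the example:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    # The classic 5-tuple, i.e. everything a header-only policy can see.
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    proto: str

@dataclass(frozen=True)
class Context:
    # The additional dimensions argued for above; all fields hypothetical.
    content_class: str   # e.g. "public", "confidential"
    user: str            # who intends to consume the data
    application: str     # what intends to consume the data

def header_only_decision(flow):
    # A 5-tuple rule: anything over TCP/443 is allowed, full stop.
    return flow.proto == "tcp" and flow.dst_port == 443

def content_in_context_decision(flow, ctx):
    # The same flow may be allowed or denied depending on what it carries
    # and who/what is moving it.
    if not header_only_decision(flow):
        return False
    if ctx.content_class == "confidential":
        return ctx.user in {"alice"} and ctx.application == "crm"
    return True

flow = Flow("10.0.0.5", "198.51.100.7", 51200, 443, "tcp")
```

The header-only rule gives one answer per 5-tuple; the content-in-context rule can give different answers for the very same flow, which is the whole point.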

In the best of worlds, we’d like to be able to classify data before it makes its way through the IP stack and enters the network, and use this metadata as an attached descriptor of the ‘type’ of content that this data represents.  We could do this as the data is created by applications (thick or thin, rich or basic), either using the application itself or by using an agent (client-side) that profiles the data prior to storage or transmission.

Since I’m on my Jericho Forum kick lately, here’s how they describe how data ought to be controlled:

Access to data should be controlled by security attributes of the data itself.

  • Attributes can be held within the data (DRM/Metadata) or could be a separate system.
  • Access / security could be implemented by encryption.
  • Some data may have “public, non-confidential” attributes.
  • Access and access rights have a temporal component.
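A minimal Python sketch of this idea, with all class names and fields invented for illustration: the access decision is computed from security attributes carried by the data itself, including the temporal component the last bullet calls out.

```python
from dataclasses import dataclass
from datetime import datetime, timezone, timedelta

@dataclass(frozen=True)
class SecurityAttributes:
    # Attributes travel with the data itself (Jericho-style),
    # not with the network path it happens to take.
    classification: str          # e.g. "public", "confidential"
    allowed_roles: frozenset
    not_before: datetime
    not_after: datetime          # the temporal component of access rights

@dataclass(frozen=True)
class TaggedDocument:
    body: str
    attrs: SecurityAttributes

def may_access(doc, role, when=None):
    # The decision is made from the data's own attributes,
    # wherever the data happens to be.
    when = when or datetime.now(timezone.utc)
    a = doc.attrs
    if a.classification == "public":
        return True
    return role in a.allowed_roles and a.not_before <= when <= a.not_after

now = datetime.now(timezone.utc)
doc = TaggedDocument(
    body="Q3 forecast",
    attrs=SecurityAttributes(
        classification="confidential",
        allowed_roles=frozenset({"finance"}),
        not_before=now - timedelta(days=1),
        not_after=now + timedelta(days=30),
    ),
)
```

In a real deployment the attributes would be protected (e.g. bound to the data via encryption or a rights-management service) rather than sitting in a plain dataclass, but the decision logic is the same.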

You would probably need client-side software to provide this functionality.  As an example, we do this today with email compliance solutions that have primitive versions of this sort of capability, forcing users to declare the classification of an email before they can hit the send button, or with the document info that can be created when one authors a Word document.

There are a bunch of ERM/DRM solutions in play today that are bandied about and sold as “compliance” solutions, but their value goes much deeper than that.  IP leakage/extrusion prevention systems (with or without client-side tie-ins) try to do similar things also.

Ideally, this metadata would be used as a fixed descriptor of the content that permanently attaches itself and follows that content around so it can be used to decide what content should be “routed” based upon policy.

If we’re not able to use this file-oriented static metadata, we’d like then for the “network” (or something in/on it) to be able to dynamically profile content at wirespeed and characterize the data as it moves around the network from origin to destination in the same way.

So, this is where Applied Data & Application Policy Tagging (ADAPT) comes in.  ADAPT is an approach that can make use of existing and new technology to profile and characterize content (by using content matching, signatures, regular expressions and behavioral analysis in hardware or software) and then apply policy-driven information “routing” functionality as flows traverse the network, either by using 802.1 q-in-q VLAN tags (the open approach) or by applying a proprietary ADAPT tag-header as a descriptor to each flow as it moves around the network.

Think of it like a VLAN tag that describes the data within the packet/flow, defined however you see fit:

The ADAPT tag/VLAN is user-defined and can use any taxonomy that best suits the types of content of interest; one might use asset classifications such as “confidential” or taxonomies such as “HIPAA” or “PCI” to describe what is contained in the flows.  One could combine and/or stack the tags, too.  The tag maps to one of these arbitrary categories, which could be fed by interpreting metadata attached to the data itself (if in file form) or dynamically by on-the-fly profiling at the network level.
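A minimal sketch of that mapping; every category name and VLAN ID value below is invented for illustration, not part of any standard:

```python
# Hypothetical mapping of an ADAPT classification taxonomy onto 802.1Q
# q-in-q VLAN tag IDs (the "open" approach described above). The
# categories and ID numbers here are illustrative only.
ADAPT_TAGS = {
    "confidential": 100,
    "hipaa":        101,
    "pci":          102,
    "public":       103,
}

def tags_for_categories(categories):
    """Resolve content categories to stackable VLAN tag IDs."""
    unknown = [c for c in categories if c.lower() not in ADAPT_TAGS]
    if unknown:
        raise ValueError(f"no ADAPT tag defined for: {unknown}")
    # q-in-q permits stacked tags, so content matching several
    # categories can carry more than one (outer + inner).
    return [ADAPT_TAGS[c.lower()] for c in categories]

print(tags_for_categories(["HIPAA", "Confidential"]))  # [101, 100]
```

Whether the categories come from file metadata or on-the-fly profiling, the result is the same: a small integer tag the switching fabric can act on at wirespeed.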

As data moves across the network and across what we call boundaries (zones) of trust, the policy tags are parsed and disposition is effected based upon the language governing the rules.  If you use the “open” version with q-in-q VLANs, you have something on the order of 4096 VLAN IDs to choose from…more than enough to accommodate most asset classifications and still leave room for normal VLAN usage.  Enforcing the ACLs can be done, very quickly, by pretty much ANY modern switch that supports q-in-q.

Just like an ACL for IP addresses or VLAN policies, ADAPT does the same thing for content routing, but uses VLAN IDs (or the proprietary ADAPT header) to enforce it.

To enable this sort of functionality, every switch/router in the network would need to be either q-in-q capable (which most switches are these days) or ADAPT-enabled (which would be difficult, since you’d need every network vendor to support the protocol).  Alternatively, you could use an overlay UTM security services switch sitting on top of the network plumbing, through which all traffic moving from one zone to another would be subject to the ADAPT policy, since each flow has to go through said device.

Since the only device that needs to be ADAPT-aware is this UTM security services switch (see the example below), you can let the network do what it does best and use this solution to enforce the policy for you across these boundary transitions.  Said UTM security services switch needs an extremely high-speed content security engine that can characterize the data at wirespeed and add a tag to the frame as it moves through the switching fabric, before the frame pops back out onto the network.

Clearly this switch would have to have coverage across every network segment.  It wouldn’t work well in virtualized server environments or any topology where zoned traffic is not subject to transit through the UTM switch.

I’m going to be self-serving here and demonstrate this “theoretical” solution using a Crossbeam X80 UTM security services switch plumbed into a very fast, reliable, and resilient L2/L3 Cisco infrastructure.  It just so happens to have a wire-speed content security engine installed in it.  The reason the X-Series can do this is because once the flow enters its switching fabric, I own the ultimate packet/frame/cell format and can prepend any header functionality I like onto the structure to determine how it gets “routed.”

Take the example below where the X80 is connected to the layer-3 switches using 802.1q VLAN trunked interfaces.  I’ve made this an intentionally simple network using VLANs and L3 routing; you could envision a much more complex segmentation and routing environment, obviously.

This network is chopped up into 4 VLAN segments:

  1. General Clients (VLAN A)
  2. Finance & Accounting Clients (VLAN B)
  3. Financial Servers (VLAN C)
  4. HR Servers (VLAN D)

Each of the clients/servers in the respective VLANs default-routes out to a firewall cluster IP address proffered by the firewall application modules providing service in the X80.

Thus, to get from one VLAN to another, one must pass through the X80 and be profiled by the content security engine and whatever additional UTM services are installed in the chassis (such as firewall, IDP, AV, etc.)

Let’s say then that a user in VLAN A (General Clients) attempts to access one or more resources in VLAN D (HR Servers).

Using solely IP addresses and/or L2 VLANs, let’s say the firewall and IPS policies allow this behavior, as the clients in that VLAN have a legitimate need to access the HR intranet server.  However, let’s say that this user tries to access data on the HR intranet server that contains personally identifiable information falling under the governance/compliance mandates of HIPAA.

Let us further suggest that the ADAPT policy states the following:

Rule   Source             Destination        ADAPT Descriptor       Action
1      VLAN A (IP.1.1)    VLAN D (IP.3.1)    HIPAA, Confidential    Deny
2      VLAN B (IP.2.1)    VLAN C (IP.4.1)    PCI                    Allow

Using rule 1 above, the client makes the request and transits from VLAN A to VLAN D.  The reply containing the requested information is profiled by the content security engine, which characterizes the data as matching our definition of either “HIPAA” or “Confidential” (purely arbitrary for the sake of this example).

This could be done by reading the metadata if it exists as an attachment to the content’s file structure, in cooperation with an extrusion prevention application running in the chassis, or in the case of ad-hoc web-based applications/services, done dynamically.

According to the ADAPT policy above, this data would then be silently dropped (depending upon what “deny” means), or perhaps the user would be redirected to a webpage that informs them of a policy violation.

Rule 2 above would allow authorized IPs in the specified VLANs to access PCI-classified data.
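The first-match evaluation implied by the walkthrough above might be sketched as follows; the zone names, descriptors and the default action are assumptions for illustration, not a specification:

```python
# Illustrative ADAPT policy table: first matching rule wins, like an ACL.
POLICY = [
    # (source zone, destination zone, descriptors, action)
    ("VLAN A", "VLAN D", {"hipaa", "confidential"}, "deny"),
    ("VLAN B", "VLAN C", {"pci"},                   "allow"),
]

def disposition(src_zone, dst_zone, flow_descriptors, default="allow"):
    """Return the action for a flow whose profiled content matched flow_descriptors."""
    for src, dst, descriptors, action in POLICY:
        # A rule fires when the flow crosses the named zones and its
        # content matches ANY of the rule's descriptors ("HIPAA,
        # Confidential" reads as an OR in the example).
        if (src, dst) == (src_zone, dst_zone) and descriptors & flow_descriptors:
            return action
    return default

print(disposition("VLAN A", "VLAN D", {"hipaa"}))  # deny
print(disposition("VLAN B", "VLAN C", {"pci"}))    # allow
```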

You can imagine how one could integrate IAM and extend the policies to include pseudonymity/identity as a function of access, also.  Or, one could profile the requesting application (browser, for example) to define whether or not this is an authorized application.  You could extend the actions to lots of stuff, too.

In fact, I alluded to it in the first paragraph, but if we back up a step and look at where consolidation of functions/services are being driven with virtualization, one could also use the principles of ADAPT to extend the ACL functionality that exists in switching environments to control/segment/zone access to/from virtual machines (VMs) of different asset/data/classification/security zones.

What this translates to is a workflow/policy instantiation that would use the same logic to prevent VM1 from communicating with VM2 if there was a “zone” mis-match; as we add data classification in context, you could have various levels of granularity that defines access based not only on VM but VM and data trafficked by them.
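A hedged sketch of that zone-mismatch idea, with an invented VM inventory and zone labels:

```python
# Hypothetical zone map: which security/classification zone each VM sits in.
VM_ZONES = {"vm1": "pci", "vm2": "general", "vm3": "pci"}

def vm_traffic_allowed(src_vm, dst_vm):
    """Permit VM-to-VM traffic only when both sit in the same zone."""
    return VM_ZONES[src_vm] == VM_ZONES[dst_vm]

print(vm_traffic_allowed("vm1", "vm2"))  # False: zone mismatch
print(vm_traffic_allowed("vm1", "vm3"))  # True: both in the PCI zone
```

Adding data classification in context would turn the simple zone comparison into a policy lookup keyed on both the VMs and the descriptors of the data they traffic.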

Furthermore, assuming this service was deployed internally and you could establish a trusted CA with certs that would support transparent MITM SSL decrypts, you could do this (with appropriate scale) with encrypted traffic also.

This is data-centric security that uses the network when needed, the host when it can and the notion of both static and dynamic network-borne data classification to enforce policy in real-time.


[Comments/Blogs on this entry you might be interested in but have no trackbacks set:

MCWResearch Blog

Rob Newby’s Blog

Alex Hutton’s Blog

Security Retentive Blog]

Network Intelligence is an Oxymoron & The Myth of Security Packet Cracking

May 21st, 2007 No comments

[Live from Interop’s Data Center Summit]

Jon Oltsik crafted an interesting post today regarding the bifurcation of opinion on where the “intelligence” ought to sit in a networked world: baked into the routers and switches or overlaid using general-purpose compute engines that ride Moore’s curve.

I think that I’ve made it pretty clear where I stand.   I submit that you should keep the network dumb, fast, reliable and resilient and add intelligence (such as security) via flexible and extensible service layers that scale both in terms of speed but also choice.

You should get to define and pick what best of breed means to you and add/remove services at the speed of your business, not the speed of an ASIC spin or an acquisition of technology that is in line with neither the pace and evolution of classes of threats and vulnerabilities nor the speed of an agile business.

The focal point of his post, however, was to suggest that the real issue is the fact that all of this intelligence requires exposure to the data streams which means that each component that comprises it needs to crack the packet before processing.   Jon suggests that you ought to crack the packet once and then do interesting things to the flows.  He calls this COPM (crack once, process many) and suggests that it yields efficiencies — of what, he did not say, but I will assume he means latency and efficacy.

So, here’s my contentious point that I explain below:

Cracking the packet really doesn’t contribute much to the overall latency equation anymore thanks to high-speed hardware, but the processing sure as heck does!  So whether you crack once or many times, it doesn’t really matter, what you do with the packet does.

Now, on to the explanation…

I think that it’s fair to say that many of the underlying mechanics of security are commoditizing so things like anti-virus, IDS, firewalling, etc. can be done without a lot of specialization – leveraging prior art is quick and easy and thus companies can broaden their product portfolios by just adding a feature to an existing product.

Companies can do this because of the agility that software provides, not hardware.  Hardware can give you scales of economy as it relates to overall speed (for certain things) but generally not flexibility. 

However, software has its own Moore’s curve of sorts, and I maintain that unfortunately its lifecycle, much like what we’re hearing @ Interop regarding CPUs, does actually have a shelf life and a point of diminishing returns for reasons that you’re probably not thinking about…more on this from Interop later.

Jon describes the stew of security componentry and what he expects to see @ Interop this week:

I expect network intelligence to be the dominant theme at this week’s Interop show in Las Vegas. It may be subtle but its definitely there. Security companies will talk about cracking packets to identify threats, encrypt bits, or block data leakage. The WAN optimization crowd will discuss manipulating protocols and caching files, Application layer guys crow about XML parsing, XSLT transformation, and business logic. It’s all about stuffing networking gear with fat microprocessors to perform one task or another.

That’s a lot of stuff tied to a lot of competing religious beliefs about how to do it all as Jon rightly demonstrates and ultimately highlights a nasty issue:

The problem now is that we are cracking packets all over the place. You can’t send an e-mail, IM, or ping a router without some type of intelligent manipulation along the way.

<nod>  Whether it’s in the network, bolted on via an appliance or done on the hosts, this is and will always be true.  Here’s the really interesting next step:

I predict that the next bit wave in this evolution will be known as COPM for "Crack once, process many." In this model, IP packets are stopped and inspected and then all kinds of security, acceleration, and application logic actions occur. Seems like a more efficient model to me.

To do this, it basically means that this sort of solution requires proxy (transparent or terminating) functionality.  Now, the challenge is that whilst “cracking the packets” is relatively easy and cheap even at 10G line rates thanks to hardware, the processing is really, really hard to do well across the spectrum of processing requirements if you care about things such as quality, efficacy, and latency, and is “expensive” in all of those categories.
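To make the crack-once/process-many distinction concrete, here is a toy pipeline (the service names and flow fields are invented); the single parse is trivial next to the per-service work, which is exactly the point:

```python
def parse_packet(raw: bytes) -> dict:
    """The single 'crack': decode the packet once into a flow record."""
    # A real proxy decodes L2-L7 here; this stand-in just wraps the bytes.
    return {"payload": raw, "verdicts": []}

# Each service operates on the already-parsed flow record.
def firewall(flow): flow["verdicts"].append("fw:pass")
def ips(flow):      flow["verdicts"].append("ips:clean")
def dlp(flow):      flow["verdicts"].append("dlp:no-leak")

SERVICES = [firewall, ips, dlp]  # ordering is policy-driven

def process(raw: bytes) -> list:
    flow = parse_packet(raw)     # crack once...
    for service in SERVICES:     # ...process many
        service(flow)
    return flow["verdicts"]

print(process(b"GET / HTTP/1.1"))  # ['fw:pass', 'ips:clean', 'dlp:no-leak']
```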

The intelligence of deciding what to process and how once you’ve cracked the packets is critical. 

This is where embedding this stuff into the network is a lousy idea. 

How can a single vendor possibly provide anything more than “good enough” security in a platform never designed to solve this sort of problem whilst simultaneously trying to balance delivery and security at line rate? 

This will require a paradigm shift for the networking folks: either starting from scratch and integrating high-speed networking with general-purpose compute blades; re-purposing a chassis (like, say, a Cat65K), stuffing it with nothing but security cards and grafting it onto the switches; or stacking appliances (big or small, single form factor or in blades) and grafting them onto the switches once again.   And by the way, simply adding networking cards to a blade server isn’t an effective solution, either.  "Regular" applications (and esp. SOA/Web 2.0 apps) aren’t particularly topology-sensitive.  Security "applications," on the other hand, are wholly dependent upon and integrated with the topologies into which they are plumbed.

It’s the hamster wheel of pain.

Or, you can get one of these which offers all the competency, agility, performance, resilience and availability of a specialized networking component combined with an open, agile and flexible operating and virtualized compute architecture that scales with parity based on Intel chipsets and Moore’s law.

What this gives you is an ecosystem of loosely-coupled best-of-breed security services through which a flow, once cracked, can be intelligently passed in any order and ruthlessly manipulated as governed by policy, ultimately making decisions on how and what to do to a packet/flow based upon content in context.

The consolidation of best-of-breed security functionality delivered in a converged architecture yields efficiencies that are spread across the domains of scale, performance, availability and security, but also across the traditional economic scopes of CapEx and OpEx.

Cracking packets, bah!  That’s so last Tuesday.


Security: “Built-in, Overlay or Something More Radical?”

May 10th, 2007 No comments

I was reading Joseph Tardo’s (Nevis Networks) new Illuminations blog and found the topic of his latest post, "Built-in, Overlay or Something More Radical?", regarding the possible future of network security quite interesting.

Joseph (may I call you Joseph?) recaps the topic of a research draft from Stanford funded by the "Stanford Clean Slate Design for the Internet" project that discusses an approach to network security called SANE.   The notion of SANE (AKA Ethane) is a policy-driven security services layer that utilizes intelligent centrally-located services to replace many of the underlying functions provided by routers, switches and security products today:

Ethane is a new architecture for enterprise networks which provides a powerful yet simple management model and strong security guarantees.  Ethane allows network managers to define a single, network-wide, fine-grain policy, and then enforces it at every switch.  Ethane policy is defined over human-friendly names (such as "bob", "payroll-server", or "http-proxy") and dictates who can talk to who and in which manner.  For example, a policy rule may specify that all guest users who have not authenticated can only use HTTP and that all of their traffic must traverse a local web proxy.

Ethane has a number of salient properties difficult to achieve with network technologies today.  First, the global security policy is enforced at each switch in a manner that is resistant to spoofing.  Second, all packets on an Ethane network can be attributed back to the sending host and the physical location in which the packet entered the network.  In fact, packets collected in the past can also be attributed to the sending host at the time the packets were sent, a feature that can be used to aid in auditing and forensics.  Finally, all the functionality within Ethane is provided by very simple hardware switches.

The trick behind the Ethane design is that all complex functionality, including routing, naming, policy declaration and security checks, is performed by a central controller (rather than in the switches as is done today).  Each flow on the network must first get permission from the controller, which verifies that the communication is permissible by the network policy.  If the controller allows a flow, it computes a route for the flow to take, and adds an entry for that flow in each of the switches along the path.

With all complex function subsumed by the controller, switches in Ethane are reduced to managed flow tables whose entries can only be populated by the controller (which it does after each successful permission check).  This allows a very simple design for Ethane switches using only SRAM (no power-hungry TCAMs) and a little bit of logic.
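As a rough illustration of the controller-centric admission the draft describes (the policy entries, path and topology below are all invented, not taken from the research):

```python
# Sketch of the Ethane model quoted above: dumb switches hold only flow
# tables; a central controller checks policy per flow and, if permitted,
# installs an entry at every switch on the computed path.
POLICY = {("bob", "payroll-server"): True}          # who may talk to whom
PATH   = {("bob", "payroll-server"): ["sw1", "sw2"]}

switch_tables = {"sw1": set(), "sw2": set()}        # flow tables start empty

def controller_admit(src, dst):
    """First packet of a new flow is punted to the controller for a decision."""
    if not POLICY.get((src, dst), False):
        return False                                # flow denied by policy
    for sw in PATH[(src, dst)]:                     # permitted: populate the
        switch_tables[sw].add((src, dst))           # flow table on each hop
    return True

print(controller_admit("bob", "payroll-server"))    # True
print(controller_admit("alice", "payroll-server"))  # False
```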


I like many of the concepts here, but I’m really wrestling with the scaling concerns that arise when I forecast the literal bottlenecking of admission/access control proposed therein.

Furthermore, and more importantly, while SANE speaks to being able to define who "Bob"  is and what infrastructure makes up the "payroll server,"  this solution seems to provide no way of enforcing policy based on content in context of the data flowing across it.  Integrating access control with the pseudonymity offered by integrating identity management into policy enforcement is only half the battle.

The security solutions of the future must evolve to divine and control not only vectors of transport but also the content and relative access that the content itself defines dynamically.

I’m going to suggest that by bastardizing one of the Jericho Forum’s commandments for my own selfish use, the network/security layer of the future must ultimately respect and effect disposition of content based upon the following rule (independent of the network/host):

Access to data should be controlled by security attributes of the data itself.

  • Attributes can be held within the data (DRM/Metadata) or could be a separate system.
  • Access / security could be implemented by encryption.
  • Some data may have “public, non-confidential” attributes.
  • Access and access rights have a temporal component. 


Deviating somewhat from Jericho’s actual meaning, I am intimating that somehow, somewhere, data must be classified and self-describe the policies that govern how it is published and consumed and ultimately this security metadata can then be used by the central policy enforcement mechanisms to describe who is allowed to access the data, from where, and where it is allowed to go.

…Back to the topic at hand, SANE:

As Joseph alluded, SANE would require replacing (or not using much of the functionality of) currently-deployed routers, switches and security kit.  I’ll let your imagination address the obvious challenges with this design.

Without delving deeply, I’ll use Joseph’s categorization of “interesting-but-impractical.”


NAC is a Feature not a Market…

March 30th, 2007 7 comments

I’m picking on NAC in the title of this entry because it will drive Alan Shimel ape-shit, and NAC has become the most over-hyped hooplah next to Britney’s hair-shaving/rehab incident…besides, the pundits come a-flockin’ when the NAC blood is in the water…

Speaking of chumming for big fish, love ’em or hate ’em, Gartner’s Hype Cycles do a good job of allowing one to visualize where and when a specific technology appears, lives and dies as a function of time, adoption rate and utility.

We’ve recently seen a lot of activity in the security space that I would personally describe as natural evolution along the continuum, but which is often instead described by others as market "consolidation."

I’m not sure they are the same thing, but really, I don’t care to argue that point.  It’s boring.  I think that anyone arguing either side is probably right.  That means that Lindstrom would disagree with both.

What I do want to do is summarize a couple of points regarding some of this "evolution" because I use my blog as a virtual jot pad against which I can measure my own consistency of thought and opinion.  That and the chicks dig it.

Without my usual PhD Doctoral thesis brevity, here are just a few network security technologies I reckon are already doomed to succeed as features and not markets — those technologies that will, within the next 24 months, be absorbed into other delivery mechanisms that incorporate multiple technologies into a platform for virtualized security service layers:

  1. Network Admission Control
  2. Network Access Control
  3. XML Security Gateways
  4. Web Application Firewalls
  5. NBAD for the purpose of DoS/DDoS
  6. Content Security Accelerators
  7. Network-based Vulnerability Assessment Toolsets
  8. Database Security Gateways
  9. Patch Management (Virtual or otherwise)
  10. Hypervisor-based virtual NIDS/NIPS tools
  11. Single Sign-on
  12. Intellectual Property Leakage/Extrusion Prevention

…there are lots more.  Components like gateway AV, FW, VPN, SSL accelerators, IDS/IPS, etc. are already settling to the bottom of UTM suites as table stakes.  Many other functions are moving to SaaS models.  These are just the ones that occurred to me without much thought.
Now, I’m not suggesting that Uncle Art is right and there will be no stand-alone security vendors in three years, but I do think some of this stuff is being absorbed into the bedrock that will form the next 5 years of evolutionary activity.

Of course, some folks will argue that all of the above will just be absorbed into the "network" (which means routers and switches.)  Switch or multi-function device…doesn’t matter.  The "smoosh" is what I’m after, not what color it is when it happens.

What’d I miss?


(Written from SFO Airport sitting @ Peet’s Coffee.  Drinking a two-shot extra large iced coffee)

The semantics of UTM messaging: Snake Oil and Pissing Matches

March 14th, 2007 No comments
Those of you who know me realize that no matter where I go, who I work for or who’s buying me drinks, I am going to passionately say what I believe at the expense of sometimes being perceived as a bit of a pot-stirrer. 

I’m far from being impartial on many topics — I don’t believe that anyone is truly impartial about anything —  but at the same time, I have an open mind and will gladly listen to points raised in response to anything I say.  I may not agree with it, but I’ll also tell you why. 

What I have zero patience for, however, is when I get twisted semantic marketing spin responses.  It makes me grumpy.  That’s probably why Rothman, Shimmy and I get along so well.

Some of you might remember grudge match #1 between me and Alex Niehaus, the former VP of Marketing for Astaro (coincidence?)  This might become grudge match #2.  People will undoubtedly roll their eyes and dismiss this as vendors sniping at one another.  So be it.  Please see paragraphs #1 and 2 above. 

My recent interchange with Richard Stiennon is an extension of arguments we’ve been having for a year or so from when Richard was still an independent analyst.  He is now employed as the Chief Marketing Officer at Fortinet. 

Our disagreements have intensified for what can only be described as obvious reasons, but I’m starting to get as perturbed as I did with Alex Niehaus when the marketing sewage obfuscates the real issues with hand-waving and hyperbole. 

I called Richard out recently for what I believed to be complete doubletalk on his stance on UTM and he responded here in a comment.  Comments get buried so I want to bring this back up to the top of the stack for all to see.  Don’t mistake this as a personal attack against Richard, but a dissection of what Richard says.  I think it’s just gobbledygook.

To be honest, I think it took a lot of guts to respond, but his answer makes my head spin as much as Anna Nicole Smith in a cheesecake factory.  Yes, I know she’s dead, but she loved cheesecake and I’m pressed for an analogy.

The beauty of blogging is that the instant you say something, it becomes a record of "fact."  That can be good or bad depending upon what you say. 

I will begin to respond to Richard’s retort wherein he first summarily states:

Here is where I stand. I hate the huge bucket that UTM has become.  Absolutely every form of gateway security can be lumped in to this category that IDC invented. We discussed this at RSA on the panel that Mr. Rothman so graciously hosted.

I also assume that this means Richard hates the bit buckets that Firewall, IPS, NAC, VA/VM, and Patch Management (as examples) have become, too?   This trend is the natural by-product of marketers and strategists scrambling to find a place to hang their hat in a very crowded space.  So what.

UTM is about solving applied sets of business problems.  You can call it what you like, but the only reason marketeers either love or hate UTM usually depends upon where they sit in the rankings.  This intrigues me, Richard, because (as you mention further on) Fortinet pays to be a part of IDC’s UTM Tracker, and they rank Fortinet as #1 in at least one of the product price ranges, so someone at Fortinet seems to think UTM is a decent market to hang a shingle on.

Hate it or not, Fortinet is a UTM vendor, just like Crossbeam.  Both companies hang their shingles on this market because it’s established and tracked.

When trying to classify a market you look for common traits and, even better, common buying patterns, to help lump vendors or products in to a category. But for Crossbeam, Fortinet, and Astaro to be lumped together has always struck me as a sign that the UTM "market" was not going to work.

You’re right.  Lumping Crossbeam with Fortinet and Astaro is the wrong thing to do.  😉

Arguing the viability of a market which has tremendous coverage and validated presence seems a little odd.  Crafting a true strategy of differentiation as to how you’re different in that market is a good thing, however.

I much prefer the Gartner view (as I would) of Security Platforms. These are devices that are able to apply security policies using a bunch of different methods and they can loosely be thrown on to a grid…

So what you’re saying is that you like the nebulous and ill-defined blob that is Gartner’s view, don’t like IDC, but you’ll gladly pay for their services to declare you #1 in a market you don’t respect?

Now, yes, I did join a company that IDC considers to be a major UTM player- leading in volume shipments in those parts of 2006 that they are reporting. But, I was an independent analyst and I NEVER classified Fortinet as a UTM play.

You mean besides when you said:

"By all accounts the so called UTM market is doing very well with players like Fortinet, Barracuda, Sonicwall, Astaro, and Watchguard, evidently seeing considerable success" 

Just in case you’re interested, you can find that quote here.   There are many, many other examples of you saying this, by the way.  Podcasts, blog entries, etc.

Also, are you suggesting that Fortinet does not consider itself a UTM player?  Someone better tell the Marketing department.  Look at one of your news pages on your website.  Say, this one, for example: 10 articles have UTM in the title, and your own Mr. Okamoto, the vice-president of Fortinet Japan, says "The UTM market was pioneered by us," explaining how Fortinet created the UTM category and drove the initial popularity of UTM solutions with SMBs…

Heck, in the 24 categories for the security market that I maintained I did not even track UTMs. As I tracked Fortinet over the years I considered them a security platform vendor and one that just happened to be executing on my vision for the network security space.

Yes, I understand how much you dislike IDC.  Can you kindly show reference to where you previously commented on how Fortinet was executing on your vision for Secure Network Fabric?  I can show you where you did for Crossbeam — it was at our Sales Meeting two years ago where you presented.  I can even upload the slide presentation if you like.

As you know Chris I have always been a big fan of Crossbeam and in the interest of full disclosure, Crossbeam was a client while I was a Gartner analyst and my second client when I launched my own firm. Great people and a great product.

Richard, I’m not really looking for the renewal of your Crossbeam Fan Club membership…really.

Crossbeam is the security platform of choice for running legacy security apps.

Oh, now it’s on!  I’m fixin’ to get "Old Testament" on you!

Just so we’re clear, ISV applications that run on Crossbeam such as XML gateways, web-application firewalls, database firewalls and next generation network converged security services such as session border controllers are all UTM "legacy applications!?" 

So besides an ASIC for AV, what "new" non-legacy apps does Fortinet bring to the table?  I mean now.  From the Fortinet homepage, please demonstrate how Firewall, IPS, VPN, Web filtering and Antispam represent novel new applications.

It must suck to have to craft a story around boat-anchor ASICs that can’t extend past AV offload.  That means you have to rely on software and innovation in that space.  Cobbling together a bunch of "legacy" applications with a nice GUI doesn’t necessarily represent innovation and "next generation." 

Now let’s address the concept of running multiple security defenses on one security platform. Let’s take three such functions: Firewalling, VPN, and IPS. Thanks to Checkpoint, firewalls and VPN are frequently bundled together. It has become the norm, although in the early days these were separate boxes. Now, you can either take a Snort implementation and bolt it on to your firewall in such a way that a signature can trigger a temporary block command ala Checkpoint and a bunch of other so-called IPS devices, or you can create a deep packet inspection capable firewall that can apply policies like: No Worm Traffic. To do the latter you have to start from scratch. You need new technology and several vendors do this pretty well.

It’s clear you have a very deluded interesting perspective on security applications.  The "innovation" you’re suggesting is what has classically been described as the natural evolution of converging marketspaces.  That over-played Snort analogy is crap.  The old "signature" vs. "anomaly detection" argument paired with "deep packet inspection" is tired.  Fortinet doesn’t really do anything that anyone else can’t/doesn’t already do.  Except for violating the GPL, that is.

I suppose now that Check Point has acquired NFR, their technology is crap, too?  Marcus would be proud.

So, given a new way to firewall (payload inspection instead of stateful inspection) what enterprise would choose *not* to use IPS capability in their firewall and use a separate device behind the firewall? See the trouble? A legacy firewall is NO LONGER BEST OF BREED! The best of breed firewall can do IPS.

Oh come on, Richard.  First of all, the answer to your question is that many, many large enterprises and service providers utilize a layered defense and place an IPS before or after their firewall.  Some have requirements for firewall/IDS/IPS pairs from different vendors.  Others require defense in depth and do not trust the competence of a solutions provider that claims to "do it all."

Best of breed is what the customer defines as best of breed.  Just to be clear, would you consider Fortinet to be best of breed?

If you use a Crossbeam, by the way, it’s not a separate device, and you’re not limited to placing the firewall or IPS "in front of" or "behind" one another.  You can virtualize placement wherever you desire.  Also, in many large enterprises, using IPSs and firewalls from separate vendors is not only good practice but also required.

How does Fortinet accomplish that?

Your "payload inspection" is leveraging a bunch of OSS-based functionality paired with an ASIC that is used for AV — you know, signatures — with heuristics and a nice GUI.  Whilst the Cosine IP Fortinet acquired represents some very interesting technology for provisioning and such, it ain’t in your boxes.

You’re really trying to pick a fight with me about Check Point when you choose to also ignore the fact that we run up to 15 other applications such as SourceFire and ISS on the same platform?  We all know you dislike Check Point.  Get over it.

I have spent eight of the last 12 weeks on the road meeting our large enterprise clients in the Americas, Asia, and EMEA. None of them shop comparatively for UTM appliances. Every single customer was shopping for firewall upgrades, SSL VPN, spam or virus filtering inline, etc.

Really?  So since you don’t have separate products to address these (Fortinet sells UTM, after all), that means you had nothing to offer them?  Convergence is driving UTM adoption.  You can call it what you want, but you’re whitewashing to prove a flawed theorem.

During the sales process they realize the benefit of combined functionality that comes with the ability to process payloads and invariably sign up for more than just a single security function. Does that mean UTM is gaining traction in the enterprise? To me the answer is no. It means that the enterprise is looking for advanced security platforms that can deliver better security at lower capex and opex.
…and what the heck is the difference between that and UTM, exactly?  People don’t buy IPS; they buy network-level protection to defend against attack.  IPS is just the product category, as is UTM. 

I would lay off the Bourbon, Chris. Try a snifter of my 16 yr old Lagavulin that I picked up in London this Friday. It will help to mellow you out.

I don’t like Scotch, Richard.  It leaves a bad taste in my mouth…sort of like your response 😉