Archive

Archive for June, 2007

Off to NorCal & Utah this Week /UK & Milan, Italy next week.

June 19th, 2007 No comments

I’m traveling to NorCal and Utah this week for customer visits, then I’m off to the UK and on to speak at the I4 forum in Milan, Italy next week.

I’ll be thinking of you all over a bowl of penne and lovely Barolo.  Ciao!

If you’re in the area, ping me.

/Hoff

Categories: Travel Tags:

Bye Bye, SPI (Dynamics…)

June 19th, 2007 1 comment

I think this one has been forecasted about eleventy-billion times already, but HP is acquiring SPI Dynamics.

I think it’s clear why, and HP, just like IBM, is going to use this to drive service revenue.  I wonder when HP’s acquisition of an MSSP will take place to compete with BT’s efforts with Counterpane & INS, etc.?

Well, that leaves only 600+ security companies in the security consolidation dating pool…

/Hoff

Categories: Web Application Security Tags:

Security Application Instrumentation: Reinventing the Wheel?

June 19th, 2007 No comments

Two of my favorite bloggers engaged in a trackback love-fest lately on the topic of building security into applications; specifically, enabling applications as a service delivery function to be able to innately detect, respond to and report attacks.

Richard Bejtlich wrote a piece called Security Application Instrumentation and Gunnar Peterson chimed in with Building Coordinated Response In – Learning from the Anasazis.  As usual, these are two extremely well-written pieces and they arrive at a well-constructed conclusion that we need a standard methodology and protocol for this reporting.  I think that this exquisitely important point will be missed by most of the security industry — specifically vendors.

While security vendors’ hearts are in the right place (stop laughing), the "security is the center of the universe" approach to telemetry and instrumentation will continue to fall on deaf ears because there are no widely-adopted standard ways of reporting across platforms, operating systems and applications that truly integrate into a balanced scorecard/dashboard demonstrating security’s contribution to service availability across the enterprise.   I know what you’re thinking…"Oh God, he’s going to talk about metrics!  Ack!"  No.  That’s Andy’s job and he does it much better than I.

This mess is exactly why the SEIM market emerged to clean up the cesspool of log dumps that spew forth from devices that are, by all approximation, utterly unaware of the rest of the ecosystem in which they participate.  Take all these crappy log dumps via Syslog and SNMP (which can still be proprietary), normalize if possible, correlate "stuff" and communicate that something "bad" or "abnormal" has occurred.
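For illustration only, here’s a minimal sketch of the kind of normalize-and-correlate step a SEIM performs; the log formats, field names and thresholds are all hypothetical, not any particular vendor’s:

    import re
    from collections import defaultdict
    from datetime import datetime, timedelta

    # Hypothetical, simplified patterns for two proprietary log formats.
    PATTERNS = {
        "fw":  re.compile(r"(?P<ts>\S+ \S+) DENY src=(?P<src>\S+) dst=(?P<dst>\S+)"),
        "ids": re.compile(r"(?P<ts>\S+ \S+) ALERT sig=(?P<sig>\S+) src=(?P<src>\S+)"),
    }

    def normalize(source, line):
        """Map a raw log line into a common event schema; return None if unparsable."""
        m = PATTERNS[source].match(line)
        if not m:
            return None
        f = m.groupdict()
        return {
            "time": datetime.strptime(f["ts"], "%Y-%m-%d %H:%M:%S"),
            "source": source,
            "src_ip": f["src"],
            "kind": "deny" if source == "fw" else "alert",
        }

    def correlate(events, window=timedelta(minutes=5), threshold=10):
        """Yield (src_ip, count) whenever one source IP piles up events across devices."""
        recent = defaultdict(list)
        for e in sorted(events, key=lambda e: e["time"]):
            recent[e["src_ip"]].append(e)
            # Keep only events still inside the correlation window.
            recent[e["src_ip"]] = [x for x in recent[e["src_ip"]] if e["time"] - x["time"] <= window]
            if len(recent[e["src_ip"]]) >= threshold:
                yield e["src_ip"], len(recent[e["src_ip"]])

Note that even after the correlation step, the output is still just "something abnormal is coming from this IP."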

How does that communicate what this really means to the business, its ability to function, deliver service and ultimately the impact on risk posture?  It doesn’t, because security reporting is the little kid wearing a dunce hat standing in the corner because it doesn’t play well with others.

Gunnar stated this well:

Coordinated detection and response is the logical conclusion to defense in depth security architecture. I think the reason that we have standards for authentication, authorization, and encryption is because these are the things that people typically focus on at design time. Monitoring and auditing are seen as runtime operational activities, but if there were standards based ways to communicate security information and events, then there would be an opportunity for the tooling and processes to improve, which is ultimately what we need.

So, is the call for "security application instrumentation" doomed to fail because we in the security industry will try to reinvent the wheel with proprietary solutions and suggest that the current toolsets and frameworks which are available as part of a much larger enterprise management and reporting strategy are not enough?

Bejtlich remarked that mechanisms which report application state must be built into the application and must report more than just performance:

Today we need to talk about applications defending themselves. When they are under attack they need to tell us, and when they are abused, subverted, or breached they would ideally also tell us.

I would like to see the next innovation be security application instrumentation, where you devise your application to report not only performance and fault logging, but also security and compliance logging. Ideally the application will be self-defending as well, perhaps offering less vulnerability exposure as attacks increase (being aware of DoS conditions of course).
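As a thought experiment, here’s a rough sketch of what that kind of instrumentation might look like inside an application: security/compliance events emitted on their own channel, and the application reducing its own exposure as attack pressure grows.  The event fields, thresholds and back-off behavior below are all hypothetical, not anything Bejtlich prescribes:

    import json
    import logging
    import time

    security_log = logging.getLogger("app.security")  # distinct channel from perf/fault logs

    class SelfDefendingGate:
        """Toy gate: reports auth failures as security events and raises its own
        barrier (progressive request delays) as attack pressure grows."""

        def __init__(self, base_delay=0.0, failure_threshold=20):
            self.failures = 0
            self.base_delay = base_delay
            self.failure_threshold = failure_threshold

        def record_failure(self, user, src_ip):
            self.failures += 1
            # Security/compliance logging alongside performance and fault logging.
            security_log.warning(json.dumps({
                "event": "auth_failure",
                "user": user,
                "src_ip": src_ip,
                "failure_count": self.failures,
            }))

        def pre_request_delay(self):
            # Offer less exposure as attacks increase, but cap the delay so the
            # defense itself can't be turned into a denial of service.
            if self.failures < self.failure_threshold:
                return self.base_delay
            return min(2.0, self.base_delay + 0.05 * (self.failures - self.failure_threshold))

    # Usage: the application calls record_failure() on bad logins and sleeps
    # for pre_request_delay() before handling the next attempt.
    gate = SelfDefendingGate()
    gate.record_failure("alice", "203.0.113.7")
    time.sleep(gate.pre_request_delay())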

I would agree, but I get the feeling that without integrating this telemetry and the output metrics and folding it into response systems whose primary role is to talk about delivery and service levels — of which "security" is a huge factor — the relevance of this data within the visible single pane of glass of enterprise management is lost.

So, rather than reinvent the wheel and incrementally "innovate," why don’t we take something like the Open Group’s Application Response Measurement (ARM) standard, make sure we subscribe to a telemetry/instrumentation format that speaks to the real issues and enable these systems to massage our output in terms of the language of business (risk?) and work to extend what is already a well-defined and accepted enterprise response management toolset to include security?

To wit:

The Application Response Measurement (ARM) standard describes a common method for integrating enterprise applications as manageable entities. The ARM standard allows users to extend their enterprise management tools directly to applications creating a comprehensive end-to-end management capability that includes measuring application availability, application performance, application usage, and end-to-end transaction response time.
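To ground that suggestion, here’s a sketch of the idea of extending ARM-style transaction measurement with security context.  It deliberately does not use the real ARM C/Java bindings; every class, method and field name below is hypothetical and only illustrates folding security/compliance data into the same record that already carries availability and response time:

    import time
    import uuid
    from dataclasses import dataclass, field

    @dataclass
    class TransactionRecord:
        """ARM-style record: availability/response-time data plus a security extension."""
        name: str
        correlator: str = field(default_factory=lambda: uuid.uuid4().hex)
        status: str = "good"                 # ARM-like good/failed/aborted outcome
        response_time: float = 0.0           # seconds, end-to-end
        security_context: dict = field(default_factory=dict)  # the proposed extension

    class ArmLikeAgent:
        def __init__(self, sink):
            self.sink = sink                 # stands in for the management backend

        def start(self, name):
            return time.monotonic(), TransactionRecord(name=name)

        def stop(self, txn, status="good", **security_context):
            t0, record = txn
            record.response_time = time.monotonic() - t0
            record.status = status
            record.security_context = security_context
            self.sink.append(record)

    # Usage: the same instrumentation point carries performance *and* security data.
    sink = []
    agent = ArmLikeAgent(sink)
    txn = agent.start("transfer_funds")
    agent.stop(txn, status="failed", authz_decision="denied", policy_ref="hypothetical-sox-42")
    print(sink[0])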

Or how about something like EMC’s Smarts:

Maximize availability and performance of mission-critical IT resources—and the business services they support. EMC Smarts software provides powerful solutions for managing complex infrastructures end-to-end, across technologies, and from the network to the business level. With EMC Smarts innovative technology you can:

  • Model components and their relationships across networks, applications, and storage to understand effect on services.
  • Analyze data from multiple sources to pinpoint root cause problems—automatically, and in real time.
  • Automate discovery, modeling, analysis, workflow, and updates for dramatically lower cost of ownership.

            

…add security into these and you’ve got a winner.   

There are already industry standards (or at least huge market momentum) around intelligent automated IT infrastructure, resource management and service level reporting.  We should get behind a standard that elevates the perspective of how security contributes to service delivery (and dare I say risk management) instead of trying to reinvent the wheel…unless you happen to like the Hamster Wheel of Pain…

/Hoff

Does Centralized Data Governance Equal Centralized Data?

June 17th, 2007 4 comments

I’ve been trying to construct a palette of blog entries over the last few months which communicates the need for a holistic network, host and data-centric approach to information security and information survivability architectures. 

I’ve been paying close attention to the dynamics of the DLP/CMF market/feature positioning as well as what’s going on in enterprise information architecture with the continued emergence of WebX.0 and SOA.

That’s why I found this Computerworld article written by Jay Cline very interesting as it focused on the need for a centralized data governance function within an organization in order to manage risk associated with coping with the information management lifecycle (which includes security and survivability.)  The article went on to also discuss how the roles within the organization, namely the CIO/CTO, will also evolve in parallel.

The three primary indicators for this evolution were summarized as:

1. Convergence of information risk functions
2. Escalating risk of information compliance
3. Fundamental role of information.

Nothing terribly earth-shattering here, but the exclamation point of this article is that enabling a centralized data governance organization takes a (gasp!) tricky combination of people, process and technology:

"How does this all add up? Let me connect the dots: Data must soon become centralized,
its use must be strictly controlled within legal parameters, and information must drive the
business model. Companies that don’t put a single, C-level person in charge of making
this happen will face two brutal realities: lawsuits driving up costs and eroding trust in the
company, and competitive upstarts stealing revenues through more nimble use of centralized
information."

Let’s deconstruct this a little because I totally get the essence of what is proposed, but there are some realities that must be discussed.  Working backwards:

  • I agree that data and its use must be strictly controlled within legal parameters.
  • I agree that a single, C-level person needs to be accountable for the data lifecycle.
  • However, whilst I don’t disagree that it would be fantastic to centralize data, I think it’s a nice theory in the wrong universe.

Interestingly, Richard Bejtlich focused his response to the article on this very notion, but I can’t get past a couple of issues, some of them technical and some of them business-related.

There’s a confusing mish-mash alluded to in Richard’s blog of "second home" data repositories that maintain copies of data and somehow also magically enforce data control and protection schemes outside of the repository while simultaneously allowing the flexibility of data creation "locally."  The competing theme for me is that centralization of data is really irrelevant — it’s convenient — but what you really need is the (and you’ll excuse the lazy use of a politically-charged term) "DRM" functionality to work irrespective of where the data is created, stored, or used.

Centralized storage is good (and selfishly so for someone like Richard) for performing forensics and auditing, but it’s not necessarily technically or fiscally efficient and doesn’t necessarily align to an agile business model.

The timeframe for the evolution of this data centralization was not really established, but we don’t have the most difficult part licked yet — the application of either the accompanying metadata describing the information assets we wish to protect OR the ability to uniformly classify and enforce its creation, distribution, utilization and destruction.

Now we’re supposed to also be able to magically centralize all our data, too?  I know that large organizations have embraced the notion of data warehousing, but it’s not the underlying data stores I’m truly worried about, it’s the combination of data from multiple silos within the data warehouses that concerns me and its distribution to multi-dimensional analytic consumers. 

You may be able to protect a DB’s table, row, column or a file, but how do you apply a policy to a distributed ETL function across multiple datasets and paths?

ATAMO?  (And Then A Miracle Occurs) 
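To make the ETL question concrete, here’s a toy sketch of classification metadata riding along through a simple join.  The schemas, labels and the "strictest label wins" inheritance rule are all hypothetical; labeling one derived row is easy, but enforcing that label across every path, tool and analytic consumer downstream is where the miracle is currently required:

    # Toy ETL: rows carry a classification label and the join must decide what
    # the derived rows inherit. All schemas and labels here are hypothetical.
    ORDER = ["public", "internal", "confidential", "restricted"]

    def strictest(*labels):
        return max(labels, key=ORDER.index)

    customers = [{"cust_id": 1, "name": "Acme", "label": "internal"}]        # CRM silo
    payments = [{"cust_id": 1, "card_last4": "4242", "label": "restricted"}] # finance silo

    def etl_join(customers, payments):
        by_id = {c["cust_id"]: c for c in customers}
        for p in payments:
            c = by_id.get(p["cust_id"])
            if c is None:
                continue
            # The derived row inherits the strictest parent label; enforcing that
            # label across every downstream path and consumer is the hard part.
            yield {
                "name": c["name"],
                "card_last4": p["card_last4"],
                "label": strictest(c["label"], p["label"]),
            }

    print(list(etl_join(customers, payments)))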

What I find intriguing about this article is that this so-described pendulum effect of data centralization (data warehousing, BI/DI) and resource centralization (data center virtualization, WAN optimization/caching, thin client computing) seems to be on a direct collision course with the way in which applications and data are being distributed with Web2.0/Service Oriented architectures and delivery underpinnings such as rich(er) client-side technologies like mash-ups and AJAX…

So what I don’t get is how one balances centralizing data when today’s emerging infrastructure and information architectures are constructed to do just the opposite: distribute data, processing and data re-use/transformation across the Enterprise?  We’ve already let the data genie out of the bottle and now we’re trying to cram it back in?
(*please see below for a perfect illustration)

I ask this again within the scope of deploying a centralized data governance organization and its associated technology and processes within an agile business environment. 

/Hoff

P.S. I expect that a certain analyst friend of mine will be emailing me in T-Minus 10, 9…

*Here’s a perfect illustration of the futility of centrally storing "data."  Click on the image and notice the second bullet item…:

[Image: Google Gears announcement]

Really, There’s More to Security than Admission/Access Control…

June 16th, 2007 2 comments

Dr. Joseph Tardo over at the Nevis Networks Illuminations blog composed a reasonably well-balanced commentary regarding one or more of my posts in which I was waxing philosophically about my beliefs regarding keeping the network plumbing dumb and overlaying security as a flexible, agile, open and extensible services layer.

It’s clear he doesn’t think this way, but I welcome the discourse.  So let me make something clear:

Realistically, and especially in non-segmented flat networks, I think there are certain low-level security functions that will do well by being served up by switching infrastructure as security functionality commoditizes, but for the most part I’m not quite sure yet how or where I draw the line between utility and intelligence.  I do, however, think that NAC is one of those utility services.

I’m also unconvinced that access-grade, wiring closet switches are architected to scale in functionality, efficacy or performance, or to provide any more value or differentiation beyond port density than the normal bolt-on appliances which continue to cause massive operational and capital expenditure due to continued forklifts over time.  Companies like Nevis and Consentry quietly admit this too, which is why they have both "secure switches" AND appliances that sit on top of the network…

Joseph suggested he was entering into a religious battle in which he summarized many of the approaches to security that I have blogged about previously and I pointed out to him on his blog that this is exactly why I practice polytheism 😉 :

In case you aren’t following the religious wars going on in the security blogs and elsewhere, let me bring you up to date.

It goes like this. If you are in the client software business, then security has to be done in the endpoints and the network is just dumb “plumbing,” or rather, it might as well be because you can’t assume anything about it. If you sell appliances that sit here and there in the network, the network sprouts two layers, with the “plumbing” part separated from the “intelligence.” Makes sense, I guess. But if you sell switches and routers then the intelligence must be integrated in with the infrastructure. Now I get it. Or maybe I’m missing the point, what if you sell both appliances and infrastructure?

I believe that we’re currently forced to deploy defense in depth due to the shortcomings of solutions today.  I believe the "network" will not and cannot deliver all the security required.  I believe we’re going to have to invest more in secure operating systems and protocols.  I further believe that we need to be data-centric in our application of security.  I do not believe in single-point product "appliances" that are fundamentally functionally handicapped.  As a delivery mechanism for security that matters across the network, this is what I believe in.

Again, the most important difference between what I believe and what Joseph points out above is that the normal class of "appliances" he’s trying to suggest I advocate simply aren’t what I advocate at all.  In fact, one might surprisingly confuse the solutions I do support as "infrastructure" — they look like high-powered switches with a virtualized blade architecture integrated into the solution.

It’s not an access switch, it’s not a single function appliance and it’s not a blade server and it doesn’t suffer from the closed proprietary single vendor’s version of the truth.  To answer the question, if you sell and expect to produce both secure appliances and infrastructure, one of them will come up short.   There are alternatives, however.

So why leave your endpoints, the ones that have all those vulnerabilities that created the security industry in the first place, to be hit on by bots, “guests,” and anyone else that wants to? I don’t know about you, but I would want both something on the endpoint, knowing it won’t be 100% but better than nothing, and also something in the network to stop the nasty stuff, preferably before it even got in.

I have nothing to disagree with in the paragraph above — short of the example of mixing active network defense with admission/access control in the same sentence; I think that’s confusing two points.   Back to the religious debate as Joseph drops back to the "Nevis is going to replace all switches in the wiring closet" approach to security via network admission/access control:

Now, let’s talk about getting on the network. If the switches are just dumb plumbing they will blindly let anyone on, friend or foe, so you at least need to beef up the dumb plumbing with admission enforcement points. And you want to put malware sensors where they can be effective, ideally close to entry points, to minimize the risk of having the network infrastructure taken down. So, where do you want to put the intelligence, close to the entry enforcement points or someplace further in the bowels of the network where the dumb plumbing might have plugged-and-played a path around your expensive intelligent appliance?

That really depends upon what you’re trying to protect; the end point, the network or the resources connected to it.  Also, I won’t/can’t argue about wanting to apply access/filtering (sounds like IPS in the above example) controls closest to the client at the network layer.  Good design philosophy.   However, depending upon how segmented your network is, the types, value and criticality of the hosts in these virtual/physical domains, one may choose to isolate by zone or VLAN and not invest in yet another switch replacement at the access layer.

If the appliance is to be effective, it has to sit at a choke point and really be an enforcement point. And it has to have some smarts of its own. Like the secure switch that we make.

Again, that depends upon your definition of enforcement and applicability.  I’d agree that in flat networks, you’d like to do it at the port/host level, though replacing access switches to do so is usually not feasible in large networks given investments in switching architectures.  Typical fixed configuration appliances overlaid don’t scale, either.

Furthermore, depending upon your definition of what an enforcement zone and its corresponding diameter is (port, VLAN, IP Subnet) you may not care.  So putting that "appliance" in place may not be as foreboding as you wager, especially if it overlays across these boundaries satisfactorily.

We will see how long before these new-fangled switch vendors that used to be SSL VPNs, then became IPS appliances and have now "evolved" into NAC solutions, will become whatever the next buzzword/technology of tomorrow represents…especially now with Cisco’s revitalized technology refresh for "secure" access switches in the wiring closets.  Caymas, Array, and Vernier (amongst many) are perfect examples.

When it comes down to it, in the markets Crossbeam serves — and especially the largest enterprises — they are happy with their switches, they just want the best security choice on top of it provided in a consolidated, agile and scalable architecture to support it.

Amen.

/Hoff

Rothman Speaks!

June 15th, 2007 4 comments

…You need Flash to see this…

Firefox also seems to have intermittent trouble rendering this.

If you only see 1/2 of Mr. Rothman, right-click and select "show all"
and then click on the little purple play icon.  IE has no issue.

Turn the volume WAY up…I had to whisper this @ work.

/Hoff

Categories: Uncategorized Tags:

McAfee’s Bolin suggests Virtual “Sentinel” Security Model for Uncompromised Security

June 15th, 2007 2 comments

Christopher Bolin, McAfee’s EVP & CTO blogged an interesting perspective on utilizing virtualization technology to instantiate security functionality in an accompanying VM on a host to protect one or more VM’s on the same host.  This approach differs from the standard approach of placing host-based security controls on each VM or the intra-VS IPS models from companies such as Reflex (blogged about that here.)

I want to flesh out some of these concepts with a little more meat attached.

He defines the concept of running security software alongside the operating system it is protecting as "the sentinel":

In this approach, the security software itself resides in its own virtual machine outside and parallel to the system it is meant to protect, which could be another virtual machine running an operating system such as Windows. This enables the security technology to look omnisciently into the subject OS and its operation and take appropriate action when malware or anomalous behavior is detected.

Understood so far…with some caveats, below.

The security software would run in an uncompromised environment monitoring in real-time, and could avoid being disabled, detected or deceived (or make the bad guys work a lot harder.)

While this supposedly uncompromised/uncompromisable OS could exist, how are you going to ensure that the underlying "routing" traffic flow control actually forces the traffic through the Sentinel VM in the first place? If the house of cards rests on this design element, we’ll be waiting a while…and adding latency.  See below.

This kind of security is not necessarily a one-to-one relationship between sentinel and OSs. One physical machine can run several virtual machines, so one virtual sentinel could watch and service many virtual machines.

I think this is a potentially valid and interesting alternative to deploying more and more host-based security products (which seems odd coming from McAfee) or additional physical appliances, but there are a couple of issues with this premise, some of which Bolin points out, others I’ll focus on here:

  1. Unlike other applications which run in a VM and just require a TCP/IP stack, security applications are extremely topology sensitive.  The ability to integrate sentinels in a VM environment with other applications/VM’s at layer 2 is extremely difficult, especially if these security applications are to act "in-line." 

    Virtualizing transport while maintaining topology context is difficult and when you need to then virtualize the policies based upon this topology, it gets worse.  Blade servers have this problem; they have integrated basic switch/load balancing modules, but implementing policy-driven "serialization" and "parallelization" (which is what we call it @ Crossbeam) is very, very hard.

  2. The notion that the sentinel can "…look omnisciently into the subject OS and its operation and take appropriate action when malware or anomalous behavior is detected" from a network perspective is confusing.  If you’re outside the VM/Hypervisor, I don’t understand the feasibility of this approach.  This is where Blue Lane’s VirtualShield ESX plug-in kicks ass — it plugs into the Hypervisor and protects not only directed traffic to the VM but also intra-VM traffic with behavioral detection, not just signatures.

  3. Resource allocation of the sentinel security control as a VM poses a threat vector inasmuch as one could overwhelm/DoS the Sentinel VM and the security/availability of the entire system could be compromised; the controls protecting the VMs are competing for the same virtualized resources as the workloads they are meant to protect (see the sketch after this list.)
  4. As Bolin rightfully suggests, a vulnerability in the VM/VMM/Chipsets could introduce a serious set of modeling problems.
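To put a rough number on the resource-contention concern in point 3, here’s a back-of-the-envelope sketch; the capacity figure, rates and VM names are invented purely for illustration:

    # Toy model of point 3: one sentinel VM with fixed inspection capacity shared
    # by every guest it protects. The figures are made up purely for illustration.
    SENTINEL_CAPACITY_PPS = 100_000          # packets/sec the sentinel can inspect

    def inspected_share(offered_pps):
        """Scale each guest's inspected rate down when total demand exceeds capacity."""
        total = sum(offered_pps.values())
        scale = min(1.0, SENTINEL_CAPACITY_PPS / total)
        return {vm: rate * scale for vm, rate in offered_pps.items()}

    normal = {"web_vm": 30_000, "db_vm": 20_000, "mail_vm": 10_000}
    under_flood = dict(normal, web_vm=500_000)   # DoS aimed at a single guest

    print(inspected_share(normal))        # everyone fully inspected
    print(inspected_share(under_flood))   # db_vm and mail_vm lose most coverage too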

I maintain that securing virtualization by virtualizing security is nascent at best, but as Bolin rightfully demonstrates, there are many innovative approaches being discussed to address these new technologies.

/Hoff

CSI Working Group on Web Security Research Law Concludes…Nothing

June 14th, 2007 1 comment

In May I blogged what I thought was an interesting question regarding the legality and liability of reverse engineering in security vulnerability research.  That discussion focused on the reverse engineering and vulnerability research of hardware and software products that were performed locally.

I continued with a follow-on discussion and extended the topic to include security vulnerability research from the web-based perspective in which I was interested to see how different the opinions on the legality and liability were from many of the top security researchers as it relates to the local versus remote vulnerability research and disclosure perspectives.

As part of the last post, I made reference to a working group organized by CSI whose focus and charter were to discuss web security research law.  This group is made up of some really smart people and I was looking forward to the conclusions reached by them on the topic and what might be done to potentially solve the obvious mounting problems associated with vulnerability research and disclosure.

The first report of this group was published yesterday. 

Unfortunately, the conclusions of the working group are an indictment of the sad state of affairs related to the security space and further underscore the sense of utter hopelessness many in the security community experience.

What the group concluded after 14 extremely interesting and well-written pages was absolutely nothing:

The meeting of minds that took place over the past two months advanced the group’s collective knowledge on the issue of Web security research law.  Yet if one assumed that the discussion advanced the group’s collective understanding of this issue, one might be mistaken.

Informative though the work was, it raised more questions than answers.  In the pursuit of clarity, we found, instead, turbidity.

Thus it follows, that there are many opportunities for further thought, further discussion, further research and further stirring up of murky depths.  In the short term, the working group has plans to pursue the following endeavors:

  • Creating disclosure policy guidelines — both to help site owners write disclosure policies, and for security researchers to understand them.
  • Creating guidelines for creating a "dummy" site.
  • Creating a more complete matrix of Web vulnerability research methods, written with the purpose of helping attorneys, lawmakers and law enforcement officers understand the varying degrees of invasiveness.

Jeremiah Grossman, a friend and one of the working group members summarized the report and concluded with the following: "…maybe within the next 3-5 years as more incidents like TJX occur, we’ll have both remedies."  Swell.

Please don’t misunderstand my cynical tone and disappointment as a reflection on any of the folks who participated in this working group — many of whom I know and respect.  It is, however, sadly another example of the hamster wheel of pain we’re all on when the best and brightest we have can’t draw meaningful conclusions against issues such as this.

I was really hoping we’d be further down the path towards getting our arms around the problem so we could present meaningful solutions that would make a dent in the space.  Unfortunately, I think where we are is the collective shoulder shrug shrine of cynicism perched perilously on the cliff overlooking the chasm of despair which drops off into the trough of disillusionment.

Gartner needs a magic quadrant for hopelessness. <sigh>  I feel better now, thanks.

/Hoff

Evan Kaplan and Co. (Aventail) Take the Next Step

June 12th, 2007 2 comments

So Aventail’s being acquired by SonicWall?  I wish Evan Kaplan and his team well and trust that SonicWall will do their best to integrate the best of Aventail’s technology into their own.  It’s interesting that this news pops up today because I was just thinking about Aventail’s CEO today as part of a retrospective of security over the last 10+ years.

I’ve always admired Evan Kaplan’s messaging from afar and a couple of months ago I got to speak with him for an hour or so.  For someone who has put his stake in the ground for the last 11 years as a leader in the SSL VPN market, you might be surprised to know that Evan’s perspective on the world of networking and security isn’t limited to "tunnel vision" as one might expect.

One of my favorite examples of Evan’s beliefs is this article in Network World back in 2005 that resonated so very much with me then and still does today.  The title of the piece is "Smart Networks are Not a Good Investment" and was a "face off" feature between Evan and Cisco’s Rob Redford.

Evan’s point here — and what resonates at the core of what I believe should happen to security — is that the "network" ought to be separated into two strata, the "plumbing" (routers, switches, etc.) and intelligent "service layers" (security being one of them.)

Evan calls these layers "connectivity" and "intelligence."

The plumbing should be fast, resilient, reliable, and robust providing the connectivity and the service layers should be agile, open, interoperable, flexible and focused on delivering service as a core competency.   

Networking vendors who want to leverage the footprint they already have in port density and extend their stranglehold with a single vendor version of the truth obviously disagree with this approach.  So do those who ultimately suggest that "good enough" is good enough.

Evan bangs the drum:

Network intelligence as promoted by the large network vendors is the Star Wars defense system of our time – monolithic, vulnerable and inherently unreliable. Proponents of smart networks want to extend their hegemony by incorporating application performance and security into a unified, super-intelligent infrastructure. They want to integrate everything into the network and embed security into every node. In theory, you would then have centralized control and strong perimeter defense.

Yup.  As I blogged recently, "Network Intelligence is an Oxymoron."  The port puppets will have you believe that you can put all this intelligence in the routers and switches and solve all the problems these platforms were never designed to solve whilst simultaneously scaling performance and features against skyrocketing throughput requirements, extreme latency thresholds, emerging technologies and an avalanche of compounding threats and vulnerabilities…all from one vendor, of course.

While on the surface this sounds reasonable, a deeper look reveals that this kind of approach presents significant risk for users and service providers. It runs counter to the clear trends in network communication, such as today’s radical growth in broadband and wireless networks, and increased virtualization of corporate networks through use of public infrastructure. As a result of these trends, much network traffic is accessing corporate data centers from public networks rather than the private LAN, and the boundaries of the enterprise are expanding. Companies must grow by embracing these trends and fully leveraging public infrastructure and the power of the Internet.

Exactly.  Look at BT’s 21CN network architecture as a clear and unequivocal demonstration of this strategy; a fantastic high-performance, resilient and reliable foundational transport coupled with an open, agile, flexible and equally high-performance and scalable security service layer.  If BT is investing 18 billion pounds of their money in a strategy like this and they don’t reckon they can rely on "embedded" security, why would you?

Network vendors are right in recognizing and trying to address the two fundamental challenges of network communications: application performance and security. However, they are wrong in believing the best way to address these concerns is to integrate application performance and security into the underlying network.

The alternative is to avoid building increasing intelligence into the physical network, which I call the connectivity plane, and building it instead into a higher-level plane I call the intelligence plane.
                     

The connectivity plane covers end-to-end network connectivity in its broadest sense, leveraging IPv4 and eventually IPv6. This plane’s characteristics are packet-level performance and high availability. It is inherently insecure but incredibly resilient. The connectivity plane should be kept highly controlled and standardized, because it is heavy to manage and expensive to build and update. It should also be kept dumb, with change happening slowly.

He’s on the money here again.  Let the network evolve at its pace using standards-based technology and allow innovation to deliver service at the higher levels.  The network evolves much more slowly and at a pace that demands stability.  The experientially-focused intelligence layer needs to be much more nimble and agile, taking advantage of the opportunism and the requirements to respond to rapidly emerging technologies and threats/vulnerabilities.

Look at how quickly solutions like DLP and NAC have stormed onto the market.  If we had to wait for Cisco to get their butt in gear and deliver solutions that actually work as an embedded function within the "network," we’d be out of business by now.

I don’t have the time to write it again, but the security implications of having the fox guarding the henhouse by embedding security into the "fabric" is scary.  Just look at the number of security vulnerabilities Cisco has had in their routing, switching, and security products in the last 6 months.  Guess what happens when they’re all one?   I sum it up here and here as examples.

Conversely, the intelligence plane is application centric and policy driven, and is an overlay to the connectivity plane. The intelligence plane is where you build relationships, security and policy, because it is flexible and cost effective. This plane is network independent, multi-vendor and adaptive, delivering applications and performance across a variety of environments, systems, users and devices. The intelligence plane allows you to extend the enterprise boundary using readily available public infrastructure. Many service and product vendors offer products that address the core issues of security and performance on the intelligence plane.

Connectivity vendors should focus their efforts on building faster, easier to manage and more reliable networks. Smart networks are good for vendors, not customers.

Wiser words have not been spoken…except by me agreeing with them, of course 😉  Not too shabby for an SSL VPN vendor way back in 2005.

Evan, I do hope you won’t disappear and will continue to be an outspoken advocate of flushing the plumbing…best of luck to you and your team as you integrate into SonicWall.

/Hoff

Congratulations to Richard Bejtlich – Good for him, bad for us…

June 11th, 2007 2 comments

Congratulations go to Richard Bejtlich as he accepts a position as GE’s Director of Incident Response.  It’s a bittersweet moment: while GE gains an amazing new employee, the public loses one of our best champions, a fantastic teacher, a great wealth of monitoring Tao knowledge and a prolific blogger.

While I am sure Richard won’t stop all elements of what he does, he’s going to have his hands full.  I always privately wondered how he maintained that schedule.  Mine is crazy, his is pure lunacy.

I am grateful that I’m scheduled to attend one of Richard’s last classes — the TCP/IP Weapons School @ Blackhat.  I’ve attended Richard’s classes before and they are excellent.  Get ’em while you still can.

Again, congratulations, Richard.

/Hoff

Categories: Uncategorized Tags: