It’s a sNACdown! Cage Match between Captain Obvious and Me, El Rational.
CAUTION: I use the words "Nostradramatic prescience" in this blog posting. Anyone easily offended by such poetic buggery should stop reading now. You have been forewarned.
That’s it. I’ve had it. I’ve taken some semi-humorous jabs at Mr. Stiennon before, but my contempt for what is just self-serving PFD (Pure F’ing Drivel) has hit an all-time high. This is an out-and-out smackdown. I make no bones about it.
Richard is at it again. It seems that stating the obvious and taking credit for it has become an art form.
Richard expects to be congratulated for his prophetic statements, which are basically a told-you-so to any monkey dumb enough to rely on Network Admission Control (see below) as his/her only security defense. Furthermore, he has the gall to suggest that by obfuscating the bulk of the arguments made in contradiction of his point, he wins by default and he’s owed some sort of ass-kissing:
And for my fellow bloggers who I rarely call out using my own blog: are you ready to retract your "founded on quicksand" statements and admit that you were wrong and Stiennon was right once again?
Firstly, there’s a REASON you "rarely call out" other people on your blog, Richard. It has something to do with a lack of frequency of actually being right, or more importantly others being wrong.
I mean the rest of us poor ig’nant blogger folk just cower in the shadows of your earth-shattering predictions for 2007: Cybercrime is on the rise, identity theft is a terrible problem, attacks against financial services companies will increase and folks will upload illegal videos to YouTube.
I’m sure the throngs of those who rise up against Captain Obvious are already sending their apology Hallmarks. I’ll make sure to pre-send those congratulatory balloons now so I can save on shipping, eh?
Secondly, suggesting that others are wrong when you only present 1/10th of the debate is like watching two monkeys screw a football. It’s messy, usually ends up with one chimp having all the fun and nobody will end up wanting to play ball again with the "winner." Congratulations, champ.
What the heck am I talking about? Way back when, a bunch of us had a debate concerning the utility of NAC. More specifically, we had a debate about the utility, efficacy and value of NAC as part of an overall security strategy. The debate actually started between Richard and Alan Shimel.
I waded in because I found them both to be right and both to be wrong. What I suggested is that NAC by ITSELF is not effective and must be deployed as part of a well-structured layered defense. I went so far as to suggest that Richard’s ideas that the network ‘fabric’ could also do this by itself were also flawed. Interestingly, we all agreed that trusting the end-point ALONE to report on its state and gain admission to the network was a flawed idea.
Basically, I suggested that securing one’s assets came down to common sense, the appropriate use of layered defense in both the infrastructure and on top of it and utilizing NAC when and how appropriate. You know, rational security.
The interesting thing to come out of that debate is that to Richard, the acronym "NAC" appeared to mean only Network ADMISSION Control. Even more specifically, it meant Cisco’s version of Network ADMISSION Control. Listen to the Podcast. Read the blogs. It’s completely one-dimensional and unrealistic to lump every single NAC product together and compare it to Cisco’s. He did this intentionally so as to prove an equally one-dimensional point. Everyone already knows that pre-admission control is nothing you solely rely on for assured secure connectivity.
To the rest of us who participated in that debate, NAC meant not only Network ADMISSION Control, but also Network ACCESS Control…and not just Cisco’s, which we all concluded pretty much sucked monkey butt. The problem is that Richard’s assessment of (C)NAC is so myopic that he renders any argument concerning NAC (in either sense) down to a single basal point that nobody actually made.
It goes something like this and was recorded thusly by his lordship himself from up on high on a tablet somewhere. Richard’s "First Law of Network Security":
Thou shalt not trust an end point to report its own state
Well, no shit. Really!? Isn’t it more important to not necessarily trust that the state reported is accurate but take the status with a grain of salt and use it as a component of assessing the fitness of a host to participate as a citizen of the network? Trust but verify?
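To make "trust but verify" concrete, here’s a minimal sketch of what that admission logic looks like. Everything in it is invented for illustration (`PostureReport`, `independent_scan`, `admission_decision` are not any vendor’s actual NAC API): the endpoint’s self-report is treated as one untrusted signal and cross-checked against an out-of-band observation before any admit decision.

```python
# Illustrative sketch only: these names are hypothetical, not a real NAC API.
from dataclasses import dataclass


@dataclass
class PostureReport:
    """State the endpoint claims about itself (untrusted input)."""
    av_running: bool
    patched: bool


def independent_scan(host: str) -> dict:
    """Out-of-band verification (e.g., a network vulnerability scan).
    Stubbed here; a real check would probe the host directly."""
    return {"av_running": True, "patched": True}


def admission_decision(report: PostureReport, host: str) -> str:
    """Trust but verify: the self-report is one component of fitness,
    never the only one. Disagreement with the independent check means
    the endpoint lied (or is compromised), so it gets quarantined."""
    observed = independent_scan(host)
    claims = {"av_running": report.av_running, "patched": report.patched}
    if claims != observed:           # self-report doesn't match reality
        return "quarantine"
    if all(observed.values()):       # verified AND compliant
        return "admit"
    return "quarantine"              # verified, but not compliant


print(admission_decision(PostureReport(av_running=True, patched=True), "10.0.0.5"))  # → admit
```

The point of the sketch: the endpoint that honestly reports a compliant state gets in, while an endpoint whose claims diverge from what the network itself observes gets quarantined regardless of what it says about itself.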
Are there any other famous new laws of yours I should know about? Maybe like:
Thou shalt not use default passwords
Thou shalt not click on hyperlinks in emails
Thou shalt not use eBanking apps on shared computers in Chinese Internet Cafes
Thou shalt not deploy IDS’ and not monitor them
Thou shalt not use "any any any allow" firewall/ACL rules
Thou shalt not allow SMTP relaying
Thou shalt not use the handle hornyhussy in the #FirewallAdminSingles IRC channel
{By the way, I think using the phrase ‘…shalt not’ is actually a double-negative?} [Ed: No, it’s not]
Today Richard blew his own horn to try and reinforce his Nostradramatic prescience when he commented on how presenters at Blackhat further demonstrated that you can spoof an end-point’s compliance reporting to the interrogator in Cisco’s NAC product, using a toolkit created to do just that.
Oh, the horror! You mean Malware might actually fake an endpoint into thinking it’s not compromised or spoof the compliance in the first place!? What a novel idea. Not. Welcome to the world of amorphous polymorphic malware. Been there, done that, bought the T-Shirt. AV has been dealing with this for quite a while. It ain’t new. Bound to happen again.
Does it make NAC useless? Nope. Does it mean that we need greater levels of integrity checking and further in-depth validation of state? Yep. ‘Nuff said.
Let me give you Hoff’s "First Law of Network Security" Blogging:
Thou shalt not post drivel bait, Troll.
It’s not as sexy sounding as yours, but it’s immutable, non-negotiable and 100% free of trans-fatty acids.
/Hoff
(Written from the lobby of the Westford Regency Hotel. Drinking…nothing, unfortunately.)
Thomas and I were barking at each other regarding something last night and today he left a salient and thought-provoking comment that provided a very concise, pragmatic and objective summation of the embedded vs. overlay security quagmire:
I couldn’t agree more. Most of the security components today, including those that run in our little security ecosystem, really don’t intercommunicate. There is no shared understanding of telemetry or instrumentation and there’s certainly little or no correlation of threats, vulnerabilities, risk or disposition.
The problem is bad inasmuch as even best-of-breed solutions usually require box sprawl and stacking and don’t necessarily provide for a more secure posture, especially within context of another of Thomas’ interesting posts on defense in depth/mesh…
That’s changing, however. Our latest generation of NPMs (Network Processing Modules) allow discrete security ISV’s (which run on intelligently load-balanced Application Processor Modules — Intel blades in the same chassis) to interact with and control the network hardware through defined API’s — this provides the first step in that common telemetry such that while application A doesn’t need to know about the specifics of application B, they can functionally interact based upon the common output of disposition and/or classification of flows between them.
Later, they’ll be able to perhaps control each other through the same set of API’s.
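To illustrate the kind of interaction I mean, here’s a conceptual sketch. The real NPM/APM API is proprietary, so every name below (`Disposition`, `FirewallApp`, `IpsApp`, `fabric_dispatch`) is invented for illustration: application A and application B know nothing about each other’s internals and cooperate only through a shared vocabulary of flow dispositions.

```python
# Conceptual sketch of apps interacting via a common disposition vocabulary.
# All class and function names here are hypothetical, not a vendor API.
from enum import Enum


class Disposition(Enum):
    FORWARD = "forward"
    DROP = "drop"
    INSPECT = "inspect"


class FirewallApp:
    """Application A: classifies flows; knows nothing about app B."""
    def classify(self, flow: dict) -> Disposition:
        # Toy policy: drop telnet outright, hand everything else off for inspection.
        return Disposition.DROP if flow["dst_port"] == 23 else Disposition.INSPECT


class IpsApp:
    """Application B: consumes only the common disposition, not A's internals."""
    def handle(self, flow: dict, disp: Disposition) -> Disposition:
        if disp is not Disposition.INSPECT:
            return disp                        # already decided upstream
        bad = b"\x90\x90" in flow["payload"]   # toy signature match
        return Disposition.DROP if bad else Disposition.FORWARD


def fabric_dispatch(flow: dict) -> Disposition:
    """The 'fabric' chains apps purely through the shared disposition output."""
    disp = FirewallApp().classify(flow)
    return IpsApp().handle(flow, disp)


print(fabric_dispatch({"dst_port": 80, "payload": b"GET /"}).value)  # → forward
```

The design point is that neither app imports the other; swapping the IPS for a different vendor’s engine changes nothing upstream, because the only contract between them is the classification/disposition of the flow.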
So, I don’t think we’re going to solve the interoperability issue completely anytime soon; we’re not going to go from 0 to 100% overnight. But I think that the consolidation of these functions into smaller footprints that allow for intelligent traffic classification and disposition is a good first step.
I don’t expect Thomas to agree or even resonate with my statements below, but I found his explanation of the problem space to be dead on. Here’s my explanation of an incremental step towards solving some of the bigger classes of problems in that space which I believe hinges on consolidation of security functionality first and foremost.
The three options for reducing this footprint are as follows:
Option 1: Security embedded in the network infrastructure itself (the single-vendor "fabric")
Pros: Supposedly fewer boxes, better communication between components and good coverage given the fact that the security stuff is in the infrastructure. One vendor from which you get your infrastructure and your protection. Correlation across the network "fabric" will ultimately allow for near-time zoning and quarantine. Single management pane across the Enterprise for availability and security. Did I mention the platform is already there?
Cons: You rely on a single vendor’s version of the truth and you get closer to a monoculture wherein the safeguards protecting the network put at risk the very assets they seek to protect because there is no separation of "church and state." Also, the expertise and coverage, as well as the agility for product development based upon evolving threats, are hampered by the many moving parts in this machine. Utility vs. security? Utility wins. Good enough vs. best of breed? Probably somewhere in between.
Option 2: Consolidated security appliances (the UTM approach)
Pros: Reduced footprint, consolidated functionality, single management pane across multiple security functions within the box. Usually excels in one specific area like AV and can add "good enough" functionality as the needs arise. Software moves up and down the scalability stack depending upon performance needed.
Cons: You again rely on a single vendor’s version of the truth. These boxes tend to want to replace switching infrastructure. Many of these platforms utilize ASICs to accelerate certain functions with the bulk of functionality residing in pure software with limited application or network-level intelligence. You pay the price in terms of performance and scale given the architectures of these boxes which do not easily allow for the addition of new classes of solutions to thwart new threats. Not really routers/switches.
Option 3: Open, blade-based security services switches running best-of-breed software
Pros: The customer defines best of breed and can rapidly add new security functionality at a speed that keeps pace with the threats the customer needs to mitigate. Utilizing a scalable and high-performance switching architecture combined with all the benefits of an open blade-based security application/appliance delivery mechanism gives the best of all worlds: self-healing, highly resilient, high performance and highly available, utilizing a hardened Linux OS across load-balanced, virtualized security applications running on optimized hardware.
Cons: Currently based upon proprietary (even though Intel reference design) hardware for the application processing, while also utilizing a proprietary network switching fabric and load balancing. Can only offer software as quickly as it can be adapted and tested on the platform. No ASICs means small-packet performance @ 64-byte zero loss isn’t as high as ASIC-based packet-forwarding engines. No single pane of management.
I think that option #3 is a damned good start towards solving the consolidation issues whilst balancing the need to overlay synergistically with the network infrastructure. You’re not locked into a single vendor’s version of the truth, and although the hardware may be "proprietary," the operating system and choice of software are not. You can choose from COTS, Open Source or write your own, all in a scalable platform that is just as much a collapsed switching/routing platform as it is a consolidated blade server.
I think it has the best chance of evolving to solve more classes of problems than the other two at a rate and level of cost-effectiveness balanced with higher efficacy due to best of breed.
This, of course, depends upon how high the level of integration is between the apps — or at least their dispositions. We’re working very, very hard on that.
At any rate, Thomas ended with:
I like NAT. I think this is Paul Francis. The IETF has been hijacked by aliens, actually, and I’m getting a new tattoo: