
Archive for the ‘Patch Management’ Category

Patching the (Hypervisor) Platform: How Do You Manage Risk?

April 12th, 2010 7 comments

Hi. Me again.

In 2008 I wrote a blog titled “Patching the Cloud” which I followed up with material examples in 2009 in another titled “Redux: Patching the Cloud.”

These blogs focused mainly on virtualization-powered IaaS/PaaS offerings, and whilst they targeted “Cloud Computing,” they applied equally to the heavily virtualized enterprise.  To that point, I wrote another in 2008 titled “On Patch Tuesdays For Virtualization Platforms.”

The operational impacts of managing change control, vulnerability management and threat mitigation have always intrigued me, especially at scale.

I was reminded this morning of the importance of the question posed above as VMware released a series of security advisories detailing ten vulnerabilities across many products, some of which are remotely exploitable. While security vulnerabilities in hypervisors are not new, it’s unclear to me how many heavily-virtualized enterprises or Cloud providers actually deal with what it means to patch this critical layer of infrastructure.

Once virtualized, we expect/assume that VMs and the guest OSes within them should operate with functional equivalence when compared to non-virtualized instances. We have, however, seen that this is not the case. It’s rare, but it happens that OSes and applications, once virtualized, suffer from issues that cause faults in the underlying virtualization platform itself.

So here’s the $64,000 question – feel free to answer anonymously:

While virtualization is meant to effectively isolate the hardware from the resources atop it, the VMM/Hypervisor itself maintains a delicate position arbitrating this abstraction.  When the VMM/Hypervisor needs patching, how do you regression test the impact across all your VM images (across test/dev, production, etc.)?  More importantly, how are you assessing/measuring compound risk across shared/multi-tenant environments with respect to patching and its impact?
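
To make that regression question a little more concrete, here’s a purely hypothetical sketch (in Python) of one way someone might attack it — the staging host, image names and stubbed harness below are made up, not anything from this post or any vendor: enumerate every reference VM image, boot each one against a staging host already running the patched VMM, and let each guest’s own smoke tests vote before the patch ever touches production.

STAGING_HYPERVISOR = "patched-staging-host"  # hypothetical host already running the new VMM build
VM_IMAGES = ["web-tier.ova", "db-tier.ova", "build-agent.ova"]  # made-up image inventory

def smoke_test(image, host):
    """Boot the image on the patched host and run that guest's own test suite.

    Stubbed here; a real harness would clone the image onto `host`, power it on,
    and execute the application-level tests the image's owner maintains.
    """
    return True  # placeholder result

failing = [img for img in VM_IMAGES if not smoke_test(img, STAGING_HYPERVISOR)]
print("images blocking the hypervisor patch:", failing or "none")

It doesn’t answer the multi-tenant compound-risk question, of course — it just turns “did the patch break my images?” into something you can measure per image.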

/Hoff

P.S. It occurs to me — after writing last night’s blog on ‘high assurance (read: TPM-enabled)’ virtualization/cloud environments with respect to change control — that the reference images for trusted launch environments would be impacted by patches like this. How are we going to scale this from a management perspective?


Redux: Patching the Cloud

September 23rd, 2009 3 comments

Back in 2008 I wrote a piece titled “Patching the Cloud” in which I highlighted the issues associated with the black box ubiquity of Cloud and what that means to patching/upgrading processes:

Your application is sitting atop an operating system and underlying infrastructure that is managed by the cloud operator.  This “datacenter OS” may not be virtualized or could actually be sitting atop a hypervisor which is integrated into the operating system (Xen, Hyper-V, KVM) or perhaps reliant upon a third party solution such as VMware.  The notion of cloud implies shared infrastructure and hosting platforms, although it does not imply virtualization.

A patch affecting any one of the infrastructure elements could cause a ripple effect on your hosted applications.  Without understanding the underlying infrastructure dependencies in this model, how does one assess risk and determine what any patch might do up or down the stack?  How does an enterprise that has no insight into the “black box” model of the cloud operator, setup a dev/test/staging environment that acceptably mimics the operating environment?

What happens when the underlying CloudOS gets patched (or needs to be) and blows your applications/VMs sky-high (in the PaaS/IaaS models?)

How does one negotiate the process for determining when and how a patch is deployed?  Where does the cloud operator draw the line?   If the cloud fabric is democratized across constituent enterprise customers, however isolated, how does a cloud provider ensure consistent distributed service?  If an application can be dynamically provisioned anywhere in the fabric, consistency of the platform is critical.

I followed this up with a practical example when Microsoft’s Azure services experienced a hiccup due to this very thing.  We see wholesale changes that can be instantiated on a whim by Cloud providers that could alter service functionality and service availability such as this one from Google (Published Google Documents to appear in Google search) — have you thought this through?

So now as we witness ISP’s starting to build Cloud service offerings from common Cloud OS platforms and espouse the portability of workloads (*ahem* VM’s) from “internal” Clouds to Cloud Providers — and potentially multiple Cloud providers — what happens when the enterprise is at v3.1 of Cloud OS, ISP A is at version 2.1a and ISP B is at v2.9? Portability is a cruel mistress.

Pair that little nugget with the fact that even “global” Cloud providers such as Amazon Web Services have not maintained parity in terms of functionality/services across their regions*. The US has long had features/functions that the European region has not.  Today, in fact, AWS announced bringing infrastructure capabilities to parity for things like elastic load balancing and auto-scale…

It’s important to understand what happens when we squeeze the balloon.

/Hoff

*corrected – I originally said “availability zones” which was in error as pointed out by Shlomo in the comments. Thanks!

NAC is a Feature not a Market…

March 30th, 2007 7 comments

I’m picking on NAC in the title of this entry because it will drive Alan Shimel ape-shit and NAC has become the most over-hyped hoopla next to Britney’s hair shaving/rehab incident…besides, the pundits come a-flockin’ when the NAC blood is in the water…

Speaking of chumming for big fish, love ’em or hate ’em, Gartner’s Hype Cycles do a good job of allowing one to visualize where and when a specific technology appears, lives and dies as a function of time, adoption rate and utility.

We’ve recently seen a lot of activity in the security space that I would personally describe as natural evolution along the continuum, but is often instead described by others as market "consolidation" due to saturation.

I’m not sure they are the same thing, but really, I don’t care to argue that point.  It’s boring.  I think that anyone arguing either side is probably right.  That means that Lindstrom would disagree with both.

What I do want to do is summarize a couple of points regarding some of this "evolution" because I use my blog as a virtual jot pad against which I can measure my own consistency of thought and opinion.  That and the chicks dig it.

Without my usual PhD thesis brevity, here are just a few network security technologies I reckon are already doomed to succeed as features and not markets — those technologies that will, within the next 24 months, be absorbed into other delivery mechanisms that incorporate multiple technologies into a platform for virtualized security service layers:

  1. Network Admission Control
  2. Network Access Control
  3. XML Security Gateways
  4. Web Application Firewalls
  5. NBAD for the purpose of DoS/DDoS
  6. Content Security Accelerators
  7. Network-based Vulnerability Assessment Toolsets
  8. Database Security Gateways
  9. Patch Management (Virtual or otherwise)
  10. Hypervisor-based virtual NIDS/NIPS tools
  11. Single Sign-on
  12. Intellectual Property Leakage/Extrusion Prevention

…there are lots more.  Components like gateway AV, FW, VPN, SSL accelerators, IDS/IPS, etc. are already settling to the bottom of UTM suites as table stakes.  Many other functions are moving to SaaS models.  These are just the ones that occurred to me without much thought.

Now, I’m not suggesting that Uncle Art is right and there will be no stand-alone security vendors in three years, but I do think some of this stuff is being absorbed into the bedrock that will form the next 5 years of evolutionary activity.

Of course, some folks will argue that all of the above will all just be absorbed into the "network" (which means routers and switches.)  Switch or multi-function device…doesn’t matter.  The "smoosh" is what I’m after, not what color it is when it happens.

What’d I miss?

/Hoff

(Written from SFO Airport sitting @ Peet’s Coffee.  Drinking a two-shot extra large iced coffee)

Does the word ‘Matasano’ mean ‘allergic to innovation’ in Lithuanian?

September 27th, 2006 2 comments

(On the advice of someone MUCH smarter than Ptacek or me [my wife], I removed the use of the F/S-bombs in this post.)

Holy crap.  Thomas Ptacek kicked me square in the nuts with his post here in regards to my commentary about Blue Lane’s PatchPoint.

I’m really at a loss for words.  I don’t really care to start a blog war of words with someone like Thomas Ptacek who is eleventy-billion times smarter than I’ll ever hope to be, but I have to admit, his post is the most stupid frigging illustration of derivative label-focused stubbornness I have ever witnessed.  For chrissakes, he’s challenging tech with marketing slides?  He’s starting to sound like Marcus Ranum.

Thomas, your assertions about PatchPoint (a product you’ve never seen in person) are inaccurate.  Your side-swipe bitch-slap commentary about my motivation is offensive.  Your obvious dislike for IPS is noted — and misdirected.  This is boring.  You assail a product and THEN invite the vendor to respond?  Dude, you’re a vendor, too.  Challenging a technology approach is one thing, but calling into question my integrity and motivation?  Back the hell up.

I just got back from an awesome gathering @ BeanSec!2 and Bourbon6 — so despite the fact that I’m going to hate myself (and this post) in the morning, I have to tell you that 4 of the people that read your post asked "what the hell?"  Did I piss in your corn flakes inadvertently?

Let me just cut to the chase:

1) I worked with Blue Lane as a customer @ my last job while they were still in stealth.  That’s why the "start date" is before the "live date"
2) When they went live, I bought their product.  The first, in fact.  It worked aces for me.
3) Call it an IPS.  Call it a salad dressing.  I could care less.  It works.  It solves a business problem.
4) I have ZERO interest in their company other than I think it solves said BUSINESS problem.
5) This *is* third party patching because they apply a "patch" which mitigates the exploit related to the vulnerability.  They "patch" the defect.
6) Your comment answers your own question:

You see what they did there? The box takes in shellcode, and then, by “emulating the functionality of a patch”, spits out valid traffic. Wow. That’s amazing. Now, somebody please tell me why that’s any improvement over taking in shellcode, and then, by “emulating the functionality of an attack signature”, spitting out nothing?

…ummm, hello!  An IPS BLOCKS traffic as you illustrate…That’s all. 

What if the dumb IPS today kills a valid $50M wire transaction because someone typed 10 more bytes than they should have in a comment field?  Should we truncate the extra 10 bytes or dump the entire transaction?

IPSs would dump the entire transaction because of an arbitrary, inexact instantiation of a flawed and rigid "policy."  That’s diametrically opposed to what security SHOULD do.

[Note: I recognize that is a poor example because it doesn’t really align with what a ‘patch’ would do — perhaps this comment invites the IPS comparison because of its signature-like action?  I’ll come up with a better example and post it in another entry]

Blue Lane does what a security product should: allow good traffic through and make specifically-identified bad traffic good enough.  IPSs don’t do that.  They are stupid, deny-driven technology.  They illustrate all that is wrong with how security is deployed today.  If we agree on that, great!  You seem to hate IPS.  So do I.  Blue Lane is not an IPS.  You illustrated that yourself.

Blue Lane is not an IPS because PatchPoint does exactly what a patched system would do if it received a malicious packet…it doesn’t toss the entire thing; it takes the good and weeds the bad but allows the request to be processed.  For example, if MS-06-10000 is a patch that mitigates a buffer overflow in a particular application/port — where anything over 1024 bytes can cause execution of arbitrary code — by truncating/removing anything over 1024 bytes, why is this a bad thing to do @ the network layer?
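
To make that distinction concrete, here’s a minimal, made-up illustration (in Python) — not Blue Lane’s code; the 1024-byte limit and the field names are hypothetical.  The point is simply that an IPS-style control dumps the whole transaction while a network "patch" fixes only the offending field and lets the rest through:

MAX_FIELD_LEN = 1024  # limit the (hypothetical) host patch would enforce

def ips_style(request):
    """Signature-style behavior: any oversized field kills the entire request."""
    if any(len(value) > MAX_FIELD_LEN for value in request.values()):
        return None  # session reset -- the $50M wire transfer goes with it
    return request

def network_patch_style(request):
    """Patch-emulation behavior: trim the offending field, pass the rest along."""
    return {field: value[:MAX_FIELD_LEN] for field, value in request.items()}

wire_transfer = {"amount": "50000000", "comment": "x" * 1034}  # 10 bytes too long
print(ips_style(wire_transfer))            # None: transaction dumped
print(network_patch_style(wire_transfer))  # comment truncated, transfer proceeds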

This *IS* a third party patch because within 12 hours (based upon an SLA) they provide a "patch" that mitigates the exploit of a vulnerability and protects the servers behind the appliance WITHOUT touching the host.

When the vendor issues the real patch, Blue Lane will allow you to flexibly continue to "network patch" with their solution or apply the vendor’s.  It gives you time to defend against a potential attack without destroying your critical machines by prematurely deploying patches on the host without the benefit of a controlled regression test.

You’re a smart guy.  Don’t assail the product in theory without trying it.  Your technical comparisons to the IPS model are flawed from a business and operational perspective and I think that it sucks that you’ve taken such a narrow-minded perspective on this matter.

Look,  I purchased their product  whilst at my last job.  I’d do it again today.  I have ZERO personal interest in this company or its products other than to say it really is a great solution in the security arsenal today.  That said, I’m going to approach them to get their app. on my platform because it is a fantastic solution to a big problem.

The VC that called me about this today seems to think so, too.

Sorry dude, but I really don’t think you get it this time.  You’re still eleventy-billion times smarter than I am, but you’re also wrong.  Also, until you actually meet me, don’t ever call into question my honor, integrity or motivation…I’d never do that to you (or anyone else) so have at least a modicum of respect, eh?

You’re still going to advertise BeanSec! 3, right?

Hoff

Third Party Patching — Why Virtual Patch Emulation is the Host-est with the Most-est…

September 27th, 2006 3 comments

All this hubbub about third party patching is enough to make one cross-eyed…(read on for the ironic analog)

I’ve written about this twice before…once last month here and the original post from my prior blog written over a year ago!  It’s a different approach (that inevitably and incorrectly gets called an IPS) to solving the patching dilemma — by not touching the host but instead performing virtualized patch emulation in real-time via the network.

Specifically I make reference to a product and service from Blue Lane Technologies (the PatchPoint gateway) which so very elegantly provides a layer of protection that is a NETWORK-BASED third party patching solution.

You don’t have to touch the host — no ridiculous rush to apply patches that might introduce more operational risk in the hurry to deploy them than the risk posed by the likelihood of the vulnerability actually being exploited…

You can deploy the virtual (third party) patch and THEN execute your rational and controlled approach towards regression testing those servers you’re about to add software to…

Rather than re-hash the obvious and get Alan Shimel designing book covers to attack my post like he did with Ross Brown from eEye (very cool, Shimmy!) you can just read the premise based upon the link above in the first sentence.

I don’t own any Blue Lane stock but I did happen to buy one of the first of their magical boxes 2 years ago and it saved my ass on many an occasion.  Patch Tuesday became a non-event (when combined with the use of Skybox’s amazing risk management toolset…another post.)

Keep your mitts off my servers….

NAC Attack: Why NAC doesn’t work and SN(i)F doesn’t, either…alone

August 7th, 2006 12 comments

I have to admit that when I read Alan Shimel’s blog entry whereby he calls out Richard Stiennon on his blog entries titled "Don’t Bother with NAC" and "Network Admission Control is a Blind Alley," I started licking my chops as I couldn’t wait to jump in and take some time to throw fuel on the fire.  Of course, Mike Rothman already did that, but my goal in life is to be more inflammatory than he is, so let the dousing begin!

I’ve actually been meaning to ask for clarification on some points from both of these fellas, so no better time than the present. 

Now, given the fact that I know all of the usual suspects in this debate, it’s odd that I’m actually not siding with any of them.  In fact, I sit squarely in the middle because I think that in the same breath both Richard and Alan are as wrong as they are right.  Mike is always right (in his own mind) but rather than suggest there was a KO here, let’s review the tape and analyze the count before we go to the scorecards for a decision.

This bout is under the jurisdiction of and sanctioned by the Nevada Gaming Commission and brought to you by FUD — the official supplier of facts on the Internet. 😉

Tale of the tape:

Richard Stiennon:

  1. Richard Stiennon highlighted some of Ofir Arkin’s NAC "weaknesses" presented at Black Hat and suggests that NAC is a waste of time based upon not only these technical deficiencies but also that the problems NAC seeks to solve are already ameliorated by proper patching, as machines "…that are patched do not get infected."
  2. Somewhat confusingly, he follows on from the previous statement with the assertion that "The fear of the zero-day worm or virus has proved ungrounded. And besides, if it is zero-day, then having the latest DAT file from Symantec does you no good."
  3. Richard suggests that integrating host-based and network-based security is a bad idea and that the right thing to do is based upon "de-coupling network and host-based security. Rather than require them to work together let them work alone."
  4. Ultimately he expects that the rest of the problems will be fixed with a paradigm which he has called Secure Network Fabric.
  5. Richard says the right solution is the concept he calls "Secure Network Fabric (SNF)" wherein "…network security solutions will not require switches, routers, laptops, servers, and vendors to work in concert with each other" but rather "…relies most heavily on a switched network architecture [which] usually involve[s] core switches as well as access switches."
  6. SNF relies on VLANs that "…would be used to provide granularity down to the device-level where needed. The switch enforces policy based on layer 2 and 3 information. It is directed by the NetFlow based behavior monitoring system."
  7. Richard has talked about the need for integrating intelligent IDP (Intrusion Detection and Prevention) systems coupled with NBA/NBAD (Network Behavioral Analysis/Network Behavioral Anomaly Detection) and switching fabric for quite some time, and this integration is key to SNF functionality.
  8. Furthermore, Richard maintains that relying on the endpoint to report its health back to the network, and the mechanisms designed to allow admission, is a bad idea and unnecessary.
  9. Richard maintains that there is a difference between "Admission Control" and "Access Control" and sums it up thusly: "To keep it simple just remember: Access Control, good. Admission Control, bad."

Alan Shimel:

  1. Alan freely admits that there are some technical "issues" with NAC such as implementation concerns with DHCP, static IP addresses, spoofed MAC addresses, NAT, etc.
  2. Alan points out that Richard did not mention that Ofir Arkin also suggests that utilizing a NAC solution based upon 802.1x is actually a robust solution.
  3. Alan alludes to the fact that people deploy NAC for various reasons and (quoting from a prior article) "…an important point is that NAC is not really geared towards stopping the determined hacker, but rather the inadvertent polluter."  Hence I believe he’s saying that 802.1x is the right NAC solution to use if you can, as it solves both problems, but that if the latter is not your reason for deploying NAC, then the other issues are not as important.
  4. Alan points to the fact that many of Richard’s references are quite dated (such as the comments describing the 2003 acquisition of TippingPoint by 3Com as "recent") and that ultimately SNF is a soapbox upon which Richard can preach his dislike of NAC based upon "…trusting the endpoint to report its health."
  5. De-coupling network and host-based endpoint security is a bad idea, according to Alan, because you miss context and introduce/reinforce the information silos that exist today rather than allow for coordinated, consolidated and correlated security decisions to be made.
  6. Alan wishes to purchase Pot (unless it’s something better) from Richard and go for a long walk on the shore because Mr. Stiennon has "the good stuff" in his opinion since Richard intimates that patching and configuration management have worked well and that zero-day attacks are a non-entity.
  7. Alan suggests that the technology "…we used to call behavior based IPS’s" which will pass "…to the switch to enforce policy" is ultimately "another failed technology" and that the vendors Richard cites in his BAIPS example (Arbor, Mazu, etc.) are all "…struggling in search of a solution for the technology they have developed."
  8. Alan maintains that the SNF "dream" lacks the ability to deliver any time soon because by de-coupling host and network security, you are hamstrung by the lack of "…context, analytics and network performance."
  9. Finally, Alan is unclear on the difference between Network Access Control (good) and Network Admission Control (bad.)

So again, I maintain that they are both right and both wrong.  I am the Switzerland of NAC/SNF! 

I’ll tell you why — not in any particular order or with a particular slant…

(the bout has now turned from a boxing contest to a three-man Mixed-Martial-Arts Octagon cage-match!  I predict a first round submission via tap-out):

  1. Firstly, endpoint and network security MUST be deployed together and ultimately communicate with one another to effect the most visible, transparent, collaborative and robust defense-in-depth security strategy available.  Nobody said it’s a 50/50 percentage, but having one without the other is silly.  There are things on endpoints that you can’t discover over a network.  Not having some intelligence about the endpoint means the network cannot possibly determine as accurately the "intent" of the packets spewing from it.
  2. Network Admission Control solutions don’t necessarily blindly trust the endpoint — whether agent or agentless, the NAC controller takes not only what the host "reports" but also how it "responds" to active probes of its state.  While virtualization and covert rootkits have the capability to potentially hide from these probes, suggesting that an endpoint passes these tests does not mean that the endpoint is no longer subject to any other control on the network…
  3. Once Network Admission Control is accomplished, Network Access Control can be applied (continuously) based upon policy and behavioral analysis/behavioral anomaly detection (a rough sketch of this two-stage flow follows this list).
  4. Patching doesn’t work — not because the verb is dysfunctional — but because the people who are responsible for implementing it are.  So long as these systems are not completely automated, we’re at risk.
  5. Zero day exploits are not overblown — they’re getting more surgically targeted and the remediation cycles are too long.  Network-based solutions alone cannot protect against anomalies that are not protocol or exploit/vulnerability signature driven…if the traffic patterns are not abnormal, the flags are OK and the content seemingly benign going to something "normal," it *will* hit the target.
  6. You’ll be surprised just how immature many of the largest networks on this planet are in terms of segmentation via VLANs and internal security…FLAT, FLAT, FLAT.  Scary stuff.  If you think the network "understands" the data that is transported over it or can magically determine what is more important by asset/risk relevance, I too would like some of that stuff you’re smoking.
  7. Relying on the SNF concept wherein the "switch enforces policy based on layer 2 and 3 information" and "is directed by the NetFlow based behavior monitoring system" is wholly shortsighted.  Firstly, layer 2/3 information is incredibly limited since most of the attacks today are application-level attacks and NOT L2/L3, and NetFlow data (even v9) is grossly coarse and doesn’t provide the context needed to effectively determine these sorts of incursions.  That’s why NetFlow today is mostly used in DDoS activities — because you see egregiously spiked usage and/or traffic patterns.  It’s a colander, not a sieve.
  8. Most vendors today are indeed combining IDP functionality with NBA/D to give a much deeper and contextual awareness across the network.  More importantly, big players such as ISS and Cisco include endpoint security and NAC (both "versions") to more granularly define, isolate and ameliorate attacks.  It’s not perfect, but it’s getting better.
  9. Advances in BA/NBA/NBAD are coming (they’re here, actually) and it will produce profound new ways of managing context and actionable intelligence when combined with optimized compute and forwarding engines which are emerging at the same time.   They will begin, when paired with network-wide correlation tools, to solve the holy trinity of issues: context, analytics and performance.
  10. Furthermore, companies such as ISS and StillSecure (Alan’s company) have partnered with switch vendors to actually do just what Richard suggests in concept.  Interestingly enough, despite the new moniker, the SNF concept is not new — Cisco’s SDN (albeit without NAC) heavily leverages the concepts described above from an embedded security perspective, and overlay security vendors such as ISS and Crossbeam also have solutions (in POC or available) in this area.
  11. Let’s be honest, just like the BA vendors Alan described, NAC is in just the same position — "struggling in search of a solution for the technology they have developed."  There are many reasons for deploying NAC: Pre and Post-inspection/Quarantine/Remediation, and there are numerous ways of doing it: agent-based/agentless/in-line/out-of-band… The scary thing is with so many vendors jumping in here and the 800 pound Gorilla (Cisco) even having trouble figuring it out, how long before NAC becomes a feature and not a market?  Spending a bunch of money on a solution (even without potential forklifts) to not "… stop the determined hacker, but rather the inadvertent polluter" seems a little odd to me.  Sure it’s part of a well defined network strategy, but it ain’t all that yet, either.
  12. With Cisco’s CSO saying things like "The concept of having devices join a network in which they are posture-assessed and given access to the network in a granular way is still in its infancy" and even StillSecure’s CTO (Mitchell Ashley) saying "…but I think those interested in NAC today are really trying to avoid infection spread by an unsuspecting network user rather than a knowledgeable intruder" it’s difficult to see how NAC can be considered a core element of a network security strategy WITHOUT something like SNF.
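
Since items 2 and 3 above describe the mechanics only in the abstract, here’s a deliberately toy sketch (in Python) of the admission-then-access idea — none of this is Cisco’s, StillSecure’s or anyone else’s implementation, and the patch levels, thresholds and telemetry are all invented:

REQUIRED_PATCH_LEVEL = "2006-08"

def admit(reported_level, probed_level):
    """Admission control: don't blindly trust the endpoint's self-report -- verify it."""
    return reported_level == REQUIRED_PATCH_LEVEL and probed_level == REQUIRED_PATCH_LEVEL

def allow_traffic(flows_per_minute, baseline):
    """Access control: re-evaluated continuously after the host is admitted."""
    return flows_per_minute <= baseline * 3  # anomaly threshold is arbitrary

# A host that claims it is patched but fails the active probe never gets admitted;
# a host that is admitted can still be cut off later if its behavior goes sideways.
print(admit("2006-08", "2006-07"))     # False -- claim and probe disagree
print(allow_traffic(40, baseline=50))  # True for now; re-run on every interval

That’s obviously a cartoon, but it’s the separation Richard and Alan are both circling: a one-time admission decision versus a continuous access decision informed by behavior.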

So, they’re both right and both wrong.  Oh, and by the way, Enterprise and Provider-class UTM solutions are combining ALL of this in a unified security service layer… FW, IDP, Anti-X, NBA(D) and SNF-like foundations.

[Tap before it breaks, boys!]

We need NAC and we need SNF and I’ve got the answer:

  1. Take some of Richard’s "good stuff"
  2. Puff, puff, pass
  3. Combine some of Alan’s NAC with Richard’s SNF
  4. Stir
  5. Bake for 30 minutes and you have one F’N (good) SNAC (it’s an anagram, get it!)

There you have it.

Chris

Retrospect: The “Morning After Pill for Patch Tuesday”

August 1st, 2006 1 comment

I dredged this post up from my prior blog because I think it’s interesting and relevant today given the current state of vulnerability management and patching.  I wrote this entry almost a year ago on 9/15/05, and I thought I’d bring it back to life here.

The post describes an alternative to the frenzied activities of Patch Tuesday and dovetails on what has become the raging debate of third party patches.  That being said, I don’t think there’s any excuse for crappy code, but today’s realities mean that despite tools like firewalls (even app. level,) NBAD, and IPS, we still have to patch — one way or another.

My motto is work smarter, not harder and I think Blue Lane’s product allows one to do just that.  It helps you manage risk by buying time — and in some cases by patching what is otherwise unpatchable.

I want to bring to light an issue that was raised by Mike Rothman: is the "patch proxy" concept fab or fad?  Market or function?  One could argue that ultimately IPS vendors will provide this functionality in their products.  However, today, IPS products do one of two things:  Allow or Drop. 

I’m not going to argue the "market or function" question, because that’s not the point of this post.

Blue Lane’s PatchPoint provides in-stream remediation at the network layer — the same remediation you’d get on the actual host but without having to touch it until you’ve regression tested the box.  So the product isn’t solely "Intrusion Detection" nor is it solely "Intrusion Prevention" — I like to call it "Intrusion Correction."

When I wrote this, I was one of Blue Lane’s first customers and their technology really proved itself quite well and worked very nicely as a L2 insertion with the rest of the layers of security we already had in place.  It allowed us to make rational decisions about how, when and why we patched so as not to incur more risk through hasty patching than the exploit itself might pose should it hit.

Unfortunately I can’t pull the comments section that went with it as I had a great exchange with Dominick White on why this wasn’t just like "…every other IPS product."  Honestly, in hindsight I would go back and clarify a few points, but perhaps the post in its original form will provoke just as many comments/questions as it did before.

Chris

September 15, 2005

About two years ago as I was malcontently slumped over another batch of vulnerabilities which required patches to remediate, it occurred to me that even in light of good vulnerability management tools to prioritize vulnerabilities and patching efforts as well as tools to deploy them, the fact that I had to do either in a short period of time, well, stunk.

Patch too early without proper regression testing and business impact analysis and you can blow an asset sky high. Downtime resulting from "Patches Gone Wild" can result in more risk than potentially not patching at all, depending upon whether the exploit is in the wild and the criticality of the vulnerability.

It was then that a VC contact turned me on to a company (who was still in stealth mode at the time) – Blue Lane Technologies – who proposed a better way to patch.

Namely, instead of patching the servers reactively without testing, why not patch the "network" instead and apply the same countermeasures to the streams as the patches do to the servers?

Assuming all other things such as latency and resiliency are even, this would be an excellent foothold in the battle to actually patch less while still patching rationally! You would buy yourself the time to test and plan and THEN deploy the actual server patch on your own schedule.

Guess what?  It works.  Very well.

We started testing a couple of months ago and threw all sorts of nastiness at the solution. It sits in-line in front of servers (with NIC-card failover capabilities, thankyouverymuch) and works by applying ActiveFix patches to the network streams in real time. We took a test box behind the protected interface and had multiple VMware instances of various Microsoft OS’s and applications running. We hit it with all sorts of fun VA scanners, exploit tools and the like. Of course, the "boxes" were owned. We especially liked toying with Metasploit since it allowed us to really play with payloads.

We "applied" the
patches to the machine instances behind the PatchPoint Gateway. Zip.
Nada. We couldn’t exploit a damned thing. It was impressive.

"Ah,"
you say, "but any old NIPS/HIPS/AV/Firewall can do that!" Er, not so,
Sparky. The notion here is that rather than simply dump an entire
session, the actual active streams are "corrected" allowing good
traffic to flow while preventing "bad" traffic from getting through —
on a per flow basis. It doesn’t just send a RST and that $50M wire
transfer to /dev/null, it actually allows legitimate business traffic
to continue unimpeded.

The approach is alarmingly, well, so 20 years ago!  Remember application proxy firewalls?

Well, if you think about how an FTP proxy works, one defines which "good" commands may be executed and anything else is merely ignored. A user could connect via FTP and type "delete" to his heart’s content, but if "delete" was not allowed, the proxy would simply discard the requested command and never pass it on to the server. Your session was still up, you could continue to use FTP, you just could not "delete."

Makes sense, no?
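
Here’s a toy rendering of that proxy analogy (in Python) — not an actual proxy or anything Blue Lane ships, and the allow-list is invented.  Commands on the list get forwarded; anything else is quietly discarded, but the session itself stays up:

ALLOWED_VERBS = {"USER", "PASS", "LIST", "RETR", "QUIT"}  # hypothetical policy

def filter_command(line):
    """Return the line to forward to the server, or None to drop just this command."""
    verb = line.strip().split(" ", 1)[0].upper()
    return line if verb in ALLOWED_VERBS else None

for cmd in ["LIST /uploads", "DELE secrets.txt", "RETR report.pdf"]:
    # DELE is FTP's "delete" verb -- not on the allow-list, so only it gets dropped
    forwarded = filter_command(cmd)
    print(cmd, "->", forwarded if forwarded else "(discarded; session continues)")

Same principle as the virtual patch: keep the conversation alive, drop only the part the policy says is bad.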

If, for example, Microsoft’s MS05-1,000,000 patch for, say, IIS was designed to remediate a buffer overflow vulnerability by truncating POSTs to 1024 bytes, then that’s exactly what Blue Lane’s PatchPoint would do. If it *does* (on the odd chance) do something nasty to your application, you can simply "un-apply" the patch, which takes about 10 seconds, and you’re in no worse shape than you were in the first place…

It’s an excellent solution that further reduces the risks associated with patching, at a price point that is easily justified given both the soft-cost avoidance associated with patch deployment and the very real costs of potential downtime associated with patch installation failures.

Other manufacturers are rumored to be offering this virtual patching capability in their "deus ex machina" solutions, but I dare say that I have yet to see a more flexible, accurate and simple deployment than Blue Lane’s. In fact, I’ve yet to see anyone promise to deliver the fixes in the timeframe that Blue Lane does.

I give it the "It Kicks Ass" Award of the week.

See Blue Lane Technologies for a far better explanation than this one.

The product provides (now) virtual patching for Microsoft, Oracle, Sun, Unix and Linux.  Sure would be nice not to have to apply 65 patches to your most critical Oracle databases every quarter…

I am certain that elements of this entry will require additional explanation and I hope we can discuss them here as this is really a very well-focused product offering that does a great job without boiling the ocean.

Chris
