Archive

Archive for the ‘Vulnerability Assessment / Vulnerability Management’ Category

CSI Working Group on Web Security Research Law Concludes…Nothing

June 14th, 2007 1 comment

In May I blogged what I thought was an interesting question regarding the legality and liability of reverse engineering in security vulnerability research.  That discussion focused on the reverse engineering and vulnerability research of hardware and software products that were performed locally.

I continued with a follow-on discussion and extended the topic to include security vulnerability research from the web-based perspective in which I was interested to see how different the opinions on the legality and liability were from many of the top security researchers as it relates to the local versus remote vulnerability research and disclosure perspectives.

As part of the last post, I made reference to a working group organized by CSI whose focus and charter were to discuss web security research law.  This group is made up of some really smart people and I was looking forward to the conclusions reached by them on the topic and what might be done to potentially solve the obvious mounting problems associated with vulnerability research and disclosure.

The first report of this group was published yesterday. 

Unfortunately, the conclusions of the working group are an indictment of the sad state of affairs related to the security space and further underscore the sense of utter hopelessness many in the security community experience.

What the group concluded after 14 extremely interesting and well-written pages was absolutely nothing:

The meeting of minds that took place over the past two months advanced the group’s collective knowledge on the issue of Web security research law.  Yet if one assumed that the discussion advanced the group’s collective understanding of this issue, one might be mistaken.

Informative though the work was, it raised more questions than answers.  In the pursuit of clarity, we found, instead, turbidity.

Thus it follows, that there are many opportunities for further thought, further discussion, further research and further stirring up of murky depths.  In the short term, the working group has plans to pursue the following endeavors:

  • Creating disclosure policy guidelines — both to help site owners write disclosure policies, and for security researchers to understand them.
  • Creating guidelines for creating a "dummy" site.
  • Creating a more complete matrix of Web vulnerability research methods, written with the purpose of helping attorneys, lawmakers and law enforcement officers understand the varying degrees of invasiveness.

Jeremiah Grossman, a friend and one of the working group members, summarized the report and concluded with the following: "…maybe within the next 3-5 years as more incidents like TJX occur, we’ll have both remedies."  Swell.

Please don’t misunderstand my cynical tone and disappointment as a reflection on any of the folks who participated in this working group — many of whom I know and respect.  It is, however, sadly another example of the hamster wheel of pain we’re all on when the best and brightest we have can’t draw meaningful conclusions against issues such as this.

I was really hoping we’d be further down the path towards getting our arms around the problem so we could present meaningful solutions that would make a dent in the space.  Unfortunately, I think where we are is the collective shoulder-shrug shrine of cynicism perched perilously on the cliff overlooking the chasm of despair which drops off into the trough of disillusionment.

Gartner needs a magic quadrant for hopelessness. <sigh>  I feel better now, thanks.

/Hoff

Should Vendors Mitigate All Vulnerabilities Immediately?

May 15th, 2007 1 comment

I read an interesting piece by Roger Grimes @ InfoWorld wherein he described the situation of a vendor who was not willing to patch an unsupported version of software even though it was vulnerable and shown to be (remotely) exploitable.

Rather, the vendor suggested that using some other means (such as blocking the offending access port) was the most appropriate course of action to mitigate the threat.

What’s interesting about the article is not that the vendor is refusing to patch older unsupported code, but that ultimately Roger suggests that irrespective of severity, vendors should immediately patch ANY exploitable vulnerability — with or without public disclosure.

A reader who obviously works for a software vendor commented back with a reply that got Roger thinking, and it got me thinking, too.   The reader suggests that they don’t patch lower-severity vulnerabilities immediately (they actually "sit on them" until a customer raises a concern) but instead focus on the higher-severity discoveries:

The reader wrote to say that his company often sits on security bugs until they are publicly announced or until at least one customer complaint is made. Before you start disagreeing with this policy, hear out the rest of his argument.

“Our company spends significantly to root out security issues," says the reader. "We train all our programmers in secure coding, and we follow the basic tenets of secure programming design and management. When bugs are reported, we fix them. Any significant security bug that is likely to be high risk or widely used is also immediately fixed. But if we internally find a low- or medium-risk security bug, we often sit on the bug until it is reported publicly. We still research the bug and come up with tentative solutions, but we don’t patch the problem.”

In the best of worlds, I’d agree with Roger — vendors should patch all vulnerabilities as quickly as possible once discovered, irrespective of whether or not the vulnerability or exploit is made public.  The world would be much better — assuming of course that the end-user could actually mitigate the vulnerability by applying the patch in the first place.

Let’s play devil’s advocate for a minute…

Back here on planet Earth, vendors approach the prioritization of mitigating vulnerabilities, and the resource allocation to do so, not unlike the way consumers choose to apply the resulting patches: most look at the severity of a vulnerability, start from the highest severity and make their way down.  That’s just the reality of my observation.

So, for the bulk of these consumers, is the vendor’s response out of line?  It seems in total alignment.

As a counterpoint to my own discussion here, I’d suggest that using prudent risk management best practice, one would protect those assets that matter most.  Sometimes this means that one would mitigate a Sev3 (medium) vulnerability over a Sev5 (highest) based upon risk exposure…this is where solutions like Skybox come into play.  Vendors can’t attach a weight to an asset; all they can do is assess the impact that an exploitable vulnerability might have on their product…
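To make the distinction concrete, here’s a toy sketch of how a risk-weighted ordering can invert a pure severity-based one. The CVE IDs, asset values and exposure factors are all made up for illustration; they don’t come from any product or dataset:

```python
# Toy sketch: risk-weighted patch prioritization vs. pure severity ordering.
# All numbers below are hypothetical.

vulns = [
    {"id": "CVE-A", "severity": 5, "asset_value": 1, "exposed": 0.1},
    {"id": "CVE-B", "severity": 3, "asset_value": 9, "exposed": 0.9},
    {"id": "CVE-C", "severity": 4, "asset_value": 5, "exposed": 0.5},
]

def risk(v):
    # risk = severity x business value of the asset x likelihood of exposure
    return v["severity"] * v["asset_value"] * v["exposed"]

by_severity = sorted(vulns, key=lambda v: v["severity"], reverse=True)
by_risk = sorted(vulns, key=risk, reverse=True)

print([v["id"] for v in by_severity])  # ['CVE-A', 'CVE-C', 'CVE-B']
print([v["id"] for v in by_risk])      # ['CVE-B', 'CVE-C', 'CVE-A']
```

The Sev3 on a crown-jewel, highly exposed asset jumps to the front of the queue, while the Sev5 on a low-value, barely exposed box drops to the back; that’s the whole argument in three lines of arithmetic.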

The reader’s last comment caps it off neatly with a challenge:

“Industry pundits such as yourself often say that it benefits customers more when a company closes all known security holes, but in my 25 years in the industry, I haven’t seen that to be true. In fact I’ve seen the exact opposite. And before you reply, I haven’t seen an official study that says otherwise. Until you can provide me with a research paper, everything you say in reply is just your opinion. With all this said, once the hole is publicly announced, or becomes high-risk, we close it. And we close it fast because we already knew about it, coded a solution, and tested it.”

I’m not sure I need an official study to respond to this point, but I’d be interested to know if such a thing exists.  Gerhard Eschelbeck has been studying vulnerabilities and their half-lives for some time.  I’d be interested to see how this plays out.
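For the curious, the half-life idea can be sketched as simple exponential decay in the fraction of systems still unpatched. The 30-day figure below is purely illustrative and not a claim about Eschelbeck’s actual findings:

```python
# Toy sketch of a vulnerability "half-life": the fraction of systems still
# unpatched roughly halves every half_life days. The 30-day default is a
# hypothetical illustration, not measured data.

def unpatched_fraction(days, half_life=30.0):
    return 0.5 ** (days / half_life)

for d in (0, 30, 60, 90):
    print(d, unpatched_fraction(d))  # 1.0, 0.5, 0.25, 0.125
```

The point of the model: even without a formal study, a measurable decay curve is exactly the kind of evidence that could answer the reader’s challenge one way or the other.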

So, read the gentleman’s posts; in some cases his comments are understandable and in others they’re hard to swallow…this definitely depends upon which side (if not both) of the fence you stand on.  All vendors are ultimately consumers in one form or another…

Thoughts?

/Hoff

Liability of Reverse Engineering Security Vulnerability Research?

May 8th, 2007 5 comments

(Ed.: Wow, some really great comments came out of this question.  I did a crappy job framing the query, but there exists a cohesiveness to both the comments and private emails I have received that shows there is confusion in both the terminology and execution of reverse engineering.

I suppose the entire issue of reverse engineering legality can just be washed away by what appeared logical to me and what I stated in the first place — there is no implied violation of an EULA or IP if one didn’t agree to it in the first place (duh!) — but I wanted to make sure that my supposition was correct.]

I have a question that hopefully someone can answer for me in a straightforward manner.  It  popped into my mind yesterday in an unrelated matter and perhaps it’s one of those obvious questions, but I’m not convinced I’ve ever seen an obvious answer.

If I, as an individual or as a representative of a company that performs vulnerability research and assurance, engage in reverse engineering of a product that is covered by patent/IP protection and/or EULAs that expressly forbid reverse engineering, how would I deflect liability for violating these tenets if I disclose that I have indeed engaged in reverse engineering?

HID and Cisco have both shown that when backed into a corner, they will litigate, and the researcher and/or company is forced to either back down or defend (usually the former.) (Ed.: Poor examples, as these do not really fall into the same camp as the example I give below.)

Do you folks who do this for a living (or own/manage a company that does) simply count on the understanding that if one can show "purity" of non-malicious motivation that nothing bad will occur?

It’s painfully clear that the slippery slope of full disclosure plays into this, but help me understand how the principle of the act (finding a vulnerability and telling the company/world about it) outweighs the liability involved.

Do people argue that if you don’t purchase the equipment you’re not covered under the EULA?  I’m trying to rationalize this.  How does one side-step the law in these cases without playing Russian Roulette?

Here’s an example of what I mean.  If you watch this video, the researchers that demonstrated the Cisco NAC attack @ Black Hat clearly articulate the methods they used to reverse engineer Cisco’s products.

I’m not looking for a debate on the up/downside of full disclosure, but more specifically the mechanics of the process used to identify that a vulnerability exists in the first place — especially if reverse engineering is used.

Perhaps this is a naive question or an uncomfortable one to answer, but I’m really interested.

Thanks,

/Hoff

Another Virtualized Solution for VM Security…

March 19th, 2007 10 comments

I got an email reminder from my buddy Grant Bourzikas today pointing me to another virtualized security solution for servers from Reflex Security called Reflex VSA.  VSA stands for Virtual Security Appliance and the premise appears to be that you deploy this software within each guest VM and it provides what looks a lot like host-based intrusion prevention functionality per VM.

The functionality is defined thusly:

Reflex VSA solves the problem that traditional network security such as IPS and firewall appliances currently cannot solve: detecting and preventing attacks within a virtual server. Because Reflex VSA runs as a virtualized application inside the virtualized environment, it can detect and mitigate threats between virtual hosts and networks.

Reflex VSA Features:

  • Access firewall for permission enforcement for intra-host and external network communication
  • Intrusion Prevention with inline blocking and filtering for virtualized networks
  • Anomaly, signature, and rate-based threat detection capability
  • Network Discovery to discover and map all virtual machines and applications
  • Reflex Command Center, providing a centralized configuration and management console, comprehensive reporting tools, and real-time event aggregation and correlation

It does not appear to wrap around or plug into the HyperVisor natively, so I’m a little confused as to the difference between deploying VSA and whatever HIPS/NIPS agent a customer might already have deployed on "physical" server instantiations.

Blue Lane’s product addresses this at the HyperVisor layer and it would be interesting to me to have the pundits/experts argue the pros/cons of each approach. {Ed. This is incorrect.  Blue Lane’s product runs as a VM/virtual appliance also.  With the exposure via API of the hypervisor/virtual switches, products like Blue Lane and Reflex would take advantage to be more flexible, effective and higher performing.}

I’m surprised most of the other "security configuration management" folks haven’t already re-branded their agents as being "Virtualization Compliant" to attack this nascent marketspace. < :rolleyes here: >

It’s good to see that folks are at least owning up to the fact that intra-VM communications via virtual switches are going to drive a spin on risk models, detection and mitigation tools and techniques.  This is what I was getting at in this blog entry here.

I would enjoy speaking to someone from Reflex to understand their positioning and differentiation better, but isn’t this just HIPS per VM?  How’s that different than firewall, AV, etc. per VM?

/Hoff

Blue Lane VirtualShield for VMWare – Here we go…

March 19th, 2007 1 comment

Greg Ness from Blue Lane and I have known each other for a while now, and ever since I purchased Blue Lane’s first release of products a few years ago (when I was on the "other" side as a *gasp* customer) I have admired and have taken some blog-derived punishment for my position on Blue Lane’s technology.

I have zero interest in Blue Lane other than the fact that I dig their technology and products and think it solves some serious business problems elegantly and efficiently with a security efficacy that is worth its weight in gold.

Vulnerability shielding (or patch emulation…) is a provocative subject and I’ve gone ’round and ’round with many a fine folk online wherein the debate normally dissolves into the intricacies of IPS vs. vulnerability shielding versus the fact that the solutions solve a business problem in a unique way that works and is cost effective.

That’s what a security product SHOULD do.  Yet I digress.

So, back to Greg @ Blue Lane…he let me know a few weeks ago about Blue Lane’s VirtualShield offering for VMWare environments.  VirtualShield is the first commercial product that I know of that specifically tackles problems that everyone knows exist in VM environments but that, until now, everyone has sat around twirling their thumbs about.

In fact, I alluded to some of these issues in this blog entry regarding the perceived "dangers" of virtualization a few weeks ago.

In short, VirtualShield is designed to protect guest VM’s running under a VMWare ESX environment in the following manner (and I quote):

  • Protects virtualized servers regardless of physical location or patch-level;
  • Provides up-to-date protection with no configuration changes and no agent installation on each virtual machine;
  • Eliminates remote threats without blocking legitimate application requests or requiring server reboots; and
  • Delivers appropriate protection for specific applications without requiring any manual tuning.

VS basically sits on top of the HyperVisor and performs a similar set of functionality as the PatchPoint solution does for non-VM systems.

Specifically, VirtualShield discovers the virtual servers running on a server and profiles the VM’s, the application(s), ports and protocols utilized to build and provision the specific OS and application protections (vulnerability shielding) required to protect the VM.

I think the next section is really the key element of VirtualShield:

As traffic flows through VirtualShield inside the hypervisor, individual sessions are decoded and monitored for vulnerable conditions. When necessary, VirtualShield can replicate the function of a software security patch by applying a corrective action directly within the network stream, protecting the downstream virtual server.

As new security patches are released by software application vendors, VirtualShield automatically downloads the appropriate inline patches from Blue Lane. Updates may be applied dynamically without requiring any reboots or reconfigurations of the virtual servers, the hypervisor, or VirtualShield.

While one might suggest that vulnerability shielding is not new and in some cases certain functionality can be parlayed by firewalls, IPS, AV, etc., I maintain that the manner and model in which Blue Lane elegantly executes this compensating control is unique and effective.

If you’re running a virtualized server environment under VMWare’s ESX architecture, check out VirtualShield…right after you listen to the virtualization podcast with yours truly from RSA.

/Hoff

My Take on the future of Vulnerability Management

March 1st, 2007 No comments

I’ve followed Alan Shimel’s musings on the future of vulnerability assessment (VA) and found myself nodding along for the most part about where Alan sees the VA market and technology heading:

    "Over the past year, many have asked what is next for VA.  I think we are seeing the answer.  The answer is VA is morphing into security configuration management."

Alan preceded this conclusion by illustrating the progression VA has taken over the lifecycle of offerings, wherein pure "scanning" VA toolsets evolved through integration into vulnerability management (VM) suites that included reporting, remediation and integration, and ultimately into a compliance measurement mechanism.

So Alan alluded that ultimately VA/VM is really a risk management play, and I wholeheartedly agree with this.  However, I am confused as to how broadly the definition of "security configuration management (SCM)" spreads its arms under the fold of the definition of "risk management;" it seems to me that SCM is a subset of an overall risk management framework and not vice versa. Perhaps this is already clear enough to folks reading his post, but it wasn’t to me.

So, to the punchline:

My vision for where VA is going aligns with Alan’s except it doesn’t end (or even next-step to) configuration management.  It leapfrogs directly to security risk management (SRM.)  It’s also already available in products such as Skybox and RedSeal.  (Disclosure: I am on Skybox’s Customer Advisory Board.)

Before you dismiss this as an ad for Skybox, please realize that I’ve purchased and implemented this solution in conjunction with the other tools I describe.  It represents an incredible tool and methodology that provided a level of transparency and accuracy that allowed me to communicate and make decisions that were totally aligned to the most important elements within my business, which is exactly what security should do.

Skybox is the best-kept secret in the risk manager’s arsenal.  It’s an amazing product that solves some very difficult business problems that stymie security professionals due to their inability to truly communicate (in real time) the risk posture of their organization.  What do I focus on first?

SRM is defined thusly (pasted from Skybox’s website because it’s the perfect definition that I couldn’t improve upon):

IT Security Risk Management is the complete process of understanding threats, prioritizing vulnerabilities, limiting damage from potential attacks, and understanding the impact of proposed changes or patches on the target systems.
  – IT SRM Solution for Vulnerability Management, Gartner, 2005

Security Risk Management collects network infrastructure and security configurations, evaluates vulnerability scan results, maps dependencies among security devices and incorporates the business value of critical assets. SRM calculates all possible access paths, and highlights vulnerabilities that can be exploited by internal and external attackers as well as malicious worms.

By using Security Risk Management, the information overload associated with thousands of network security policies, control devices and vulnerability scans can be demystified and automated. This is accomplished by prioritizing tens of thousands of vulnerabilities into just the few that should be mitigated in order to prevent cyber attacks. The benefit is a more secure network, higher operational efficiency and reduced IT workload.

That being said, starting some 3 years ago I saw where VA/VM was headed and it was down a rathole that provided very little actionable intelligence in terms of managing risk because the VA/VM tools knew nothing of the definition or value of the assets against which the VA was performed. 

We got 600-page reports of vulnerability dumps with massive amounts of false positives.  While the technology has improved, the underlying "evolution" of VA is occurring only because the information it conveys is not valuable if you want to make an informed decision.  If you manage purely threats and vulnerabilities, you’ll be patching forever.

Qualys, Foundstone (now McAfee) and nCircle (for example) all started to evolve their products by attaching qualitative or quantitative weightings to IT assets (or groups of them), which certainly allowed folks to dashboard the relative impact a vulnerability would have should it be exploited.

The problem is that these impact or "risk" statements (and the ensuing compliance reporting) were still disconnected from the linked dependencies and cascading failure modalities that occur when these assets are interconnected via a complex network.   These tools measure compliance and impact one vulnerability at a time and within a vulnerability diameter of a single host.  They don’t have the context of the network, actors, skill sets or hierarchical infrastructure dependencies.  Throw in dynamic routing and numerous controls and network components in between them and these models proved unrealistic and unreliable.

These tools also assume that somehow you’re able to apply the results of a risk assessment (RA) and translate the groups of assets to a singular "asset" against which impact can be measured based upon the existence of a vulnerability but not necessarily the potential for exploit — VA tools have no concept of whether a control is in place that mitigates the risk.  You also have to understand and map the sub-components of impact against known elements such as confidentiality, integrity and availability.

That’s exactly where Skybox and SRM comes in. [From their materials]

The four step process of SRM is a continuous, consistent, automated and repeatable framework:

  • Model the IT environment
  • Simulate access scenarios and attack paths
  • Analyze network connectivity, business risk and regulatory compliance
  • Plan optimal mitigation strategies and safe network changes
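The simulation step above can be sketched as simple graph reachability: model permitted connectivity as a directed graph, then intersect the set of hosts an attacker can actually reach with the set of vulnerable hosts. The topology and host names below are entirely hypothetical, but they show why a vulnerability on an unreachable box matters less than one on an exposed path:

```python
# Toy sketch of attack-path analysis: which vulnerable hosts can an
# attacker actually reach? Topology and host names are made up.
from collections import deque

access = {                      # edges = firewall-permitted connectivity
    "internet": ["dmz-web"],
    "dmz-web": ["app-server"],
    "app-server": ["db-server"],
    "hr-desktop": ["db-server"],
}
vulnerable = {"dmz-web", "db-server", "hr-desktop"}

def reachable(start):
    # breadth-first search over the permitted-access graph
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in access.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Only vulnerabilities on hosts reachable from the attacker's entry point
# actually matter for prioritization:
exploitable = vulnerable & reachable("internet")
print(sorted(exploitable))  # ['db-server', 'dmz-web']
```

Note that hr-desktop is vulnerable but drops out of the priority list because nothing from the internet can reach it; that’s the context a standalone VA scan can’t give you.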

What this means is that you can take all that nifty VA/VM data, understand and model the network and the steady-state risk posture of your organization, perform "what-if’s" and ultimately understand what a change will do to your environment from not only a pure threat/vulnerability perspective, but also a risk/impact one. The accuracy of your output depends on how good your risk input data is and how up-to-date the network and vulnerability results are.  Automate this and it’s as close as you can get to "real-time."  Let’s call it "near-time."

You can communicate these results at any level up and down the stack of your organization and truly show compliance as it matters not only to the spirit of a regulation or law, but also to your business (and sometimes those are different things.)  It slots into configuration management/security configuration management programs and interfaces with CMM models and quality and governance management frameworks such as CobiT and ITIL.

This, in my opinion, is where VA is headed — as a vital component of an intelligent risk management portfolio that is called Security Risk Management.

/Hoff

Does the word ‘Matasano’ mean ‘allergic to innovation’ in Lithuanian?

September 27th, 2006 2 comments

(On the advice of someone MUCH smarter than Ptacek or me [my wife] I removed the use of the F/S-bombs in this post.)

Holy crap.  Thomas Ptacek kicked me square in the nuts with his post here in regards to my commentary about Blue Lane’s PatchPoint.

I’m really at a loss for words.  I don’t really care to start a blog war of words with someone like Thomas Ptacek, who is eleventy-billion times smarter than I’ll ever hope to be, but I have to admit, his post is the most stupid frigging illustration of derivative label-focused stubbornness I have ever witnessed.  For chrissakes, he’s challenging tech with marketing slides?  He’s starting to sound like Marcus Ranum.

Thomas, your assertions about PatchPoint (a product you’ve never seen in person) are inaccurate.  Your side-swipe bitch-slap commentary about my motivation is offensive.  Your obvious dislike for IPS is noted — and misdirected.  This is boring.  You assail a product and THEN invite the vendor to respond?  Dude, you’re a vendor, too.  Challenging a technology approach is one thing, but calling into question my integrity and motivation?  Back the hell up.

I just got back from an awesome gathering @ BeanSec!2 and Bourbon6 — so despite the fact that I’m going to hate myself (and this post) in the morning, I have to tell you that 4 of the people that read your post asked "what the hell?"  Did I piss in your corn flakes inadvertently?

Let me just cut to the chase:

1) I worked with Blue Lane as a customer @ my last job while they were still in stealth.  That’s why the "start date" is before the "live date".
2) When they went live, I bought their product.  The first, in fact.  It worked aces for me.
3) Call it an IPS.  Call it a salad dressing.  I couldn’t care less.  It works.  It solves a business problem.
4) I have ZERO interest in their company other than I think it solves said BUSINESS problem.
5) This *is* third party patching because they apply a "patch" which mitigates the exploit related to the vulnerability.  They "patch" the defect.
6) Your comment answers your own question:

You see what they did there? The box takes in shellcode, and then, by “emulating the functionality of a patch”, spits out valid traffic. Wow. That’s amazing. Now, somebody please tell me why that’s any improvement over taking in shellcode, and then, by “emulating the functionality of an attack signature”, spitting out nothing?

…ummm, hello!  An IPS BLOCKS traffic as you illustrate…That’s all. 

What if a dumb IPS today kills a valid $50M wire transaction because someone typed 10 more bytes than they should have in a comment field?  Should we truncate the extra 10 bytes or dump the entire transaction?

IPS’s would dump the entire transaction because of an arbitrary and inexact instantiation of a flawed and rigid "policy" that is inaccurate.  That’s diametrically opposed to what security SHOULD do.

[Note: I recognize that this is a poor example because it doesn’t really align with what a ‘patch’ would do — perhaps this comment invites the IPS comparison because of its signature-like action?  I’ll come up with a better example and post it in another entry.]

Blue Lane does what a security product should; allow good traffic through and make specifically-identified bad traffic good enough.  IPS’s don’t do that.  They are stupid, deny-driven technology.  They illustrate all that is wrong with how security is deployed today.  If we agree on that, great!  You seem to hate IPS.  So do I.  Blue Lane is not an IPS.  You illustrated that yourself.

Blue Lane is not an IPS because PatchPoint does exactly what a patched system would do if it received a malicious packet…it doesn’t toss the entire thing; it takes the good and weeds out the bad but allows the request to be processed.  For example, if MS-06-10000 is a patch that mitigates a buffer overflow of a particular application/port (where anything over 1024 bytes can cause arbitrary code execution) by truncating/removing anything over 1024 bytes, why is this a bad thing to do @ the network layer?
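Here’s a toy sketch of that idea. The dict-based "request", the field name and the 1024-byte limit mirror the hypothetical example above; this is not how PatchPoint actually works internally:

```python
# Toy sketch of "patch emulation" at the network layer: instead of dropping
# the whole request (IPS-style), truncate the one field that overflows and
# let the rest of the transaction through. Field names and the 1024-byte
# limit are hypothetical.

MAX_FIELD = 1024

def shield(request: dict) -> dict:
    fixed = dict(request)
    comment = fixed.get("comment", b"")
    if len(comment) > MAX_FIELD:
        fixed["comment"] = comment[:MAX_FIELD]  # strip only the overflow bytes
    return fixed  # the $50M wire transfer still goes through

req = {"amount": b"50000000", "comment": b"A" * (MAX_FIELD + 10)}
safe = shield(req)
print(len(safe["comment"]))  # 1024
print(safe["amount"])        # b'50000000'
```

The design point: a blocking signature would drop the whole transaction, while the shield makes the specifically identified bad part good enough and preserves the legitimate request.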

This *IS* third-party patching because within 12 hours (based upon an SLA) they provide a "patch" that mitigates the exploit of a vulnerability and protects the servers behind the appliance WITHOUT touching the host.

When the vendor issues the real patch, Blue Lane will allow you to flexibly continue to "network patch" with their solution or apply the vendor’s.  It gives you time to defend against a potential attack without destroying your critical machines by prematurely deploying patches on the host without the benefit of a controlled regression test.

You’re a smart guy.  Don’t assail the product in theory without trying it.  Your technical comparisons to the IPS model are flawed from a business and operational perspective and I think that it sucks that you’ve taken such a narrow-minded perspective on this matter.

Look,  I purchased their product  whilst at my last job.  I’d do it again today.  I have ZERO personal interest in this company or its products other than to say it really is a great solution in the security arsenal today.  That said, I’m going to approach them to get their app. on my platform because it is a fantastic solution to a big problem.

The VC that called me about this today seems to think so, too.

Sorry dude, but I really don’t think you get it this time.  You’re still eleventy-billion times smarter than I am, but you’re also wrong.  Also, until you actually meet me, don’t ever call into question my honor, integrity or motivation…I’d never do that to you (or anyone else) so have at least a modicum of respect, eh?

You’re still going to advertise BeanSec! 3, right?

Hoff

People Positing Pooh-Poohing Pre-emptive Patching Practices Please Provide Practical Proof…

June 18th, 2006 2 comments

I was reading Rothman’s latest post on Security Incite regarding patching and I am left a little confused about his position. Despite his estimation of a high score on the “boredometer scale” as it relates to the media’s handling of the patching frenzy (I *do* agree with that), I think he’s a little sideways on the issue. At least now we can say that we don’t always agree.

Mike writes:

I hate Patch Tuesday. It’s become more of a media circus than anything useful nowadays. So instead of focusing on what needs to be done, most security administrators need to focus on what needs to be patched. Or not. And that takes up more time because in reality, existing defenses reduce (if not eliminate) the impact of many of the vulnerabilities being patched. Maybe it’s just my ADD showing, in that these discussions are just not interesting anymore. If you do the right stuff, then there shouldn’t be this crazy urgency to patch – you are protected via other defenses. But the lemmings need something to write about, so there you have it.

One lemming, reporting for duty, sir!

Specifically, Mike’s opinion seems to suggest that basically people who “…do the right stuff” don’t need to patch because “…in reality, existing defenses reduce (if not eliminate) the impact of many of the vulnerabilities being patched.”

Since Mike’s always the champion of the little people, I’ll refer him to the fact that perhaps not everyone has all the “…existing defenses” to rely upon – or better yet, keeps them up to date (you know, sort of like patching – but for security appliances!).  In fact, I’m going to argue that despite everyone’s best efforts, currently stealthy little zero-day Trojan buggery does a damn good job of getting through these defenses, despite the vendor hype to the contrary.

Emerging technology will make these sorts of vulnerabilities harder to exploit, but that’s going to require a whole lot of evolution in both the network-layer and host-layer security solutions; there are a LOT of solutions out there now and not ONE of them actually works well in the real world.

I still maintain that relying on the hosts (the things you are protecting – and worried about) to auto-ameliorate is a dumb idea.  It’s akin to why I think we’re going to have to spend just as much time defending the “self-defending network” as we do today with our poorly-defended ones.

I’m going to tippytoe out on the ledge here because I have a feeling that my response to Mike’s enormous generalization will leave him with just as big a hole to bury me in, but so be it.  I think he was in a hurry to go on vacation, so please cut him some slack! 😉

Specifically, many of the latest critical patches were released to counter exploits targeting generic desktop applications such as Excel, PowerPoint and Internet Explorer; things that users rely on every day to perform their job duties at work.

You don’t have to click on links or open attachments for these beauties to blow up; you just open a document from “your” IT department over the “trusted” network drive map that was infected by a rogue scanning worm which deposited Trojans across your enterprise and BOOM! No such thing as “trust but verify” in the real world, I’m afraid.

By the way, this little beauty came into your network through a USB drive that someone used to bring their work from home back to the office…sound familiar?

Yep, we can close that hole down with more layers of security software — or better yet, epoxy the USB slots closed! 😉

OK, OK, I’m generalizing, too.  I know it, but everyone else does it …

I don’t know what the “right stuff” is, but if it includes using the Internet, Word, PowerPoint or Excel, then short of additional layers of host-based security, it’s going to be difficult to defend against those sorts of vulnerabilities without some form of patching (in combination with reasonable amounts of security — driven by RISK).
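
To make the “patching driven by RISK” point concrete, here’s a minimal sketch of what risk-driven patch prioritization might look like. Everything here is hypothetical and illustrative — the function name, the weights, and the thresholds are mine, not from any real product or standard:

```python
# Hypothetical sketch: coarse risk-driven patch prioritization.
# Weights and thresholds are illustrative assumptions, not a real methodology.

def patch_priority(cvss_base, exploit_in_wild, asset_criticality):
    """Return a coarse patch priority bucket.

    cvss_base: CVSS base score, 0.0 - 10.0
    exploit_in_wild: True if a working exploit is already circulating
    asset_criticality: 1 (lab box) through 5 (crown jewels)
    """
    risk = cvss_base * asset_criticality
    if exploit_in_wild:
        risk *= 2  # zero-day pressure trumps the normal patch cycle
    if risk >= 60:
        return "patch-now"
    if risk >= 25:
        return "next-maintenance-window"
    return "defer"
```

The point of even a toy model like this is that “patch everything immediately” and “never patch because my defenses cover me” are both wrong answers; the urgency should fall out of the exposure, not the press release:

```python
patch_priority(9.3, exploit_in_wild=True, asset_criticality=5)   # "patch-now"
patch_priority(6.0, exploit_in_wild=False, asset_criticality=5)  # "next-maintenance-window"
patch_priority(4.0, exploit_in_wild=False, asset_criticality=2)  # "defer"
```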

Suggesting that people will do the right thing is noble – laughable, but
noble. 

I’ve heard the CTOs of several security companies brag during talks at computer security tradeshows that they don’t use AV on their desktop computers, always “do the right thing(s),” and have never been compromised.

I think that’s a swell idea – a little contradictory and stupid if you sell
AV software – but swell nonetheless.  I
wish I was as attentive as these guys, but sometimes doing the right thing
means you actually have to know the difference between “right” and “wrong” as
it relates to the inner workings of rootkit installations.   If these experts don’t do the "right thing" based upon
what we hear every day (patch your systems, keep your AV up to date,
run anti-spyware, etc…) what makes you think Aunty Em is going to
listen?

I’ll admit, I know a thing or two about computers and security.  I try to do the “right thing” and I’ve been
lucky in that I have never had any desktop machine I’ve owned compromised.  But it takes lots of technology, work,
diligence, discipline, knowledge and common sense.  That’s a lot of layers. Rot Roh.

Changing gears a little…

It gets even more interesting when we see statistics uncovering the fact that 1 out of 4 Microsoft flaws is discovered by vulnerability bounty hunters – professionals paid to discover flaws! That means we’re going to see more and more of these vulnerabilities discovered, because it’s good for business. Then will come the immediate exploits and the immediate patches.

Speaking of which, now that Microsoft is at the “Forefront” of the security
space with their desktop security offerings, they will get to charge you for a
product that protects against vulnerabilities in the operating system that you
purchased – from them! Sweet! That is one bad-ass business model.

We’re going to have to keep patching.  Get over it.

/Chris