Archive

Archive for July, 2006

100% Undetectable Malware (?)

July 23rd, 2006 No comments

I know I'm checking in late on this story; for some reason it slipped under my radar when it appeared a month or so ago. But within the context of some of the virtualization discussions going on in the security realm, I think it's interesting enough to revisit.

Joanna Rutkowska, a security researcher for Singapore-based IT security firm COSEINC, posts on her Invisible Things blog some amazingly ingenious and frightening glimpses into the malware possibilities, and the security implications, offered up by the virtualization capabilities of AMD's SVM (Secure Virtual Machine)/Pacifica technology.*

Joanna's really talking about exploiting the virtualization capabilities of technology like Pacifica to achieve stealth by moving the entire running operating system into the virtualization layer in memory (AKA "the matrix").  If the malware itself controls the virtualization layer, then the "reality" of what is "good" versus "bad" (and detectable as such) is governed within the context of the malware itself.  You can't detect the "bad" via security mechanisms because the malware simply never gives them the option to do so.  Ouch.

This is not quite the same concept we've seen thus far in more "traditional" (?) VM rootkits like SubVirt, which load VMMs below the OS level by exploiting a known vulnerability first.  With Blue Pill, you don't *need* a vulnerability to exploit.  Check out this eWeek story for more information on SubVirt.

Here is an excerpt from Joanna’s postings thus far:

"Now, imagine a malware (e.g. a network backdoor, keylogger, etc…)
whose capabilities to remain undetectable do not rely on obscurity of
the concept. Malware, which could not be detected even though its
algorithm (concept) is publicly known. Let’s go further and imagine
that even its code could be made public, but still there would be no
way for detecting that this creature is running on our machines…"

"The idea behind Blue Pill is simple: your operating system swallows the
Blue Pill and it awakes inside the Matrix controlled by the ultra thin
Blue Pill hypervisor. This all happens on-the-fly (i.e. without
restarting the system) and there is no performance penalty and all the
devices, like graphics card, are fully accessible to the operating
system, which is now executing inside virtual machine. This is all
possible thanks to the latest virtualization technology from AMD called
SVM/Pacifica."

Intrigued yet? 

This story (once I started researching) was originally commented on by Bill Brenner from TechTarget, but I hadn't seen it until now.  Bill does an excellent job of laying out some of the more relevant points, including the comparisons to the SubVirt rootkit as well as some counterpoints argued from the other side.  That last hyperlink, to Kurt Wismer's blog, is just as interesting.  I love the last statement he makes:

"if undetectable virtualization technology can be used to hide the
presence of malware, then equally undetectable virtualization
technology pre-emptively deployed on the system should be able to
detect the undetectable vm-based stealth malware if/when it is encountered…"
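
As an aside, the classic counter-argument that gets tossed around in these debates is a timing check: a privileged instruction like CPUID must be intercepted by a hypervisor, so it runs measurably slower inside one. Here's a minimal sketch of that heuristic (my own toy illustration in C for x86/GCC, emphatically not Joanna's code; the threshold is a guess, and of course a sufficiently clever hypervisor can try to game exactly this kind of test by cheating the TSC):

/* hvtime.c -- toy timing check: does CPUID take suspiciously long?
 * Illustrative only; thresholds vary per CPU and a malicious
 * hypervisor can fake the timestamp counter. Build: gcc -O0 hvtime.c */
#include <stdio.h>
#include <stdint.h>

static inline uint64_t rdtsc(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}

static inline void cpuid(void)
{
    uint32_t a = 0, b, c, d;
    __asm__ __volatile__("cpuid"
                         : "+a"(a), "=b"(b), "=c"(c), "=d"(d));
}

int main(void)
{
    const int runs = 1000;
    uint64_t total = 0;

    for (int i = 0; i < runs; i++) {
        uint64_t t0 = rdtsc();
        cpuid();                 /* forces a VM exit under a hypervisor */
        total += rdtsc() - t0;
    }

    uint64_t avg = total / runs;
    printf("avg CPUID latency: %llu cycles\n", (unsigned long long)avg);
    /* Bare metal is typically a few hundred cycles; thousands of
     * cycles per CPUID *may* indicate a hypervisor intercept. */
    printf(avg > 2000 ? "smells like a hypervisor\n"
                      : "looks like bare metal\n");
    return 0;
}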

Alas, I was booked to attend Black Hat in August but my priorities have
been re-clocked, so unfortunately I will not be able to attend Joanna’s
presentation where she is demonstrating her functional prototype of Blue Pill.

I've submitted that the notion of virtualization is one of the reasons why embedding more and more security as a function within the "network," sold as a single pane of glass providing total situational awareness from a security perspective, is a flawed proposition: more and more of the "network" will become virtualized within the VM constructs themselves.

I met with some of Microsoft’s security architects on this very topic and we stared intently at one another hoping for suggestions that would allow us to plan today for what will surely become a more frightening tomorrow.

I’m going to post about this shortly.

Happy reading.  There’s not much light in the rabbit hole, however.

*Here’s a comparison of the Intel/AMD approach to virtualization, including SVM.

Categories: Malware

Risk Management Requires Sophistication?

July 18th, 2006 2 comments

Mike Rothman commented today on another installment of Michael Farnum's excellent series on being an "effective security manager."

Mike R. starts off well enough in defining the value-prop of "Risk Management" as opposed to managing threats and vulnerabilities, and goes on to rightfully suggest that in order to manage risk you need to have a "value component" as part of the weighting metrics for decision making…all good stuff:

But more importantly, you need to get a feel for the RELATIVE value of
stuff (is the finance system more important than the customer
management) before you can figure out where you should be spending your
time and money.

It goes without saying (and it's an over-used cliche) that it doesn't make much sense to spend $100,000 to protect a $100 asset, but strangely enough, that's what a lot of folks do…and they call it "defense in depth."
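
To put toy numbers behind the cliche, the textbook sanity check is annualized loss expectancy: ALE = SLE x ARO, where SLE is what one incident costs you and ARO is how often you expect one per year. A quick sketch (my own illustrative figures, nothing from Mike's post):

/* ale.c -- the classic cost/benefit sanity check. If a control costs
 * more per year than the annualized loss it prevents, walk away. */
#include <stdio.h>

int main(void)
{
    double asset_value  = 100.0;     /* the $100 asset from the cliche      */
    double exposure     = 1.0;       /* fraction of value lost per incident */
    double aro          = 2.0;       /* expected incidents per year (guess) */
    double control_cost = 100000.0;  /* the $100,000 "defense in depth"     */

    double sle = asset_value * exposure;  /* single loss expectancy     */
    double ale = sle * aro;               /* annualized loss expectancy */

    printf("ALE: $%.2f/yr vs. control cost: $%.2f/yr\n", ale, control_cost);
    printf("%s\n", control_cost > ale
           ? "You're spending more than the risk is worth."
           : "Control is justified.");
    return 0;
}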

Before you lump me into one of Michael F’s camps, no, I am not saying defense in depth is an invalid and wasteful strategy.  I *am* saying that people hide behind this term because they use it as a substitute for common sense and risk-focused information protection and assurance...

…back to the point at hand…

Here's where it gets ugly.  The conclusion of Mike R's comments set me off a little, because it really does summarize one of the biggest cop-outs in the management and execution of information protection/security today:

That is not a technique for the unsophisticated or
those without significant political mojo. If you are new to the space,
you are best off initially focusing on the stuff within your control,
like defense in depth and security awareness.

This is a bullshit lay-down.  It does not take any amount of sophistication to perform a business-driven risk-assessment in order to support a risk-management framework that communicates an organization’s risk posture and investment in controls to the folks that matter and can do something about it. 

It takes a desire to do the right thing for the right reason that protects that right asset at the right price point.  Right?

While it's true that most good IT folks inherently understand what's important to an organization from an infrastructure perspective, they may not understand why, or be able to offer a transparent explanation of what the impacts of threats and exposed attack surfaces really mean to the BUSINESS.

You know how you overcome that shortfall?  You pick a business- and asset-focused risk assessment framework and you start educating yourself and your company on how, what and why you do what you do; you provide transparency in terms of function, ownership, responsibility, effectiveness, and budget.  These are metrics that count.

Don’t think you can do that because you don’t have a fancy title, a corner office or aren’t empowered to do so?  Go get another job because you’re not doing your current one any justice.

Want a great framework that is well-suited to this description and is a good starting point for both small and large companies?  Try Carnegie Mellon's OCTAVE.  Read the book.  Here's a quick summary:

For an organization that wants to understand its information security
needs, OCTAVE® (Operationally Critical Threat, Asset, and
Vulnerability Evaluation℠) is a risk-based strategic assessment
and planning technique for security.

OCTAVE is self-directed. A small team of people from the operational (or
business) units and the IT department work together to address the security
needs of the organization.  The team draws on the knowledge of many employees to
define the current state of security, identify risks to critical assets, and
set a security strategy.

OCTAVE is flexible. It can be tailored for most organizations. 

OCTAVE is different from typical technology-focused assessments. It focuses
on organizational risk and strategic, practice-related issues, balancing operational
risk, security practices, and technology.
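
To give a flavor of what asset-focused output looks like, here's a toy sketch (my own illustration, emphatically not OCTAVE itself; OCTAVE derives these rankings from guided interviews with the business, not hardcoded guesses):

/* risk_rank.c -- toy asset-focused risk ranking. Asset names and
 * scores are hypothetical, borrowed from Mike R's finance-vs-customer
 * example above; a real assessment gets them from the business owners. */
#include <stdio.h>
#include <stdlib.h>

struct asset {
    const char *name;
    int impact;      /* business impact if compromised, 1-5 */
    int likelihood;  /* threat likelihood given exposure, 1-5 */
};

static int by_score_desc(const void *a, const void *b)
{
    const struct asset *x = a, *y = b;
    return (y->impact * y->likelihood) - (x->impact * x->likelihood);
}

int main(void)
{
    struct asset assets[] = {
        { "finance system",      5, 3 },
        { "customer management", 4, 4 },
        { "intranet wiki",       2, 3 },
    };
    size_t n = sizeof assets / sizeof assets[0];

    qsort(assets, n, sizeof assets[0], by_score_desc);

    puts("spend your time and money in this order:");
    for (size_t i = 0; i < n; i++)
        printf("  %s (risk score %d)\n",
               assets[i].name, assets[i].impact * assets[i].likelihood);
    return 0;
}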

Suggesting that you need to have political mojo to ask business unit leaders well-defined, unbiased, interview-based, guided queries is silly.  I’ve done it.  It works.  It doesn’t take a PhD or boardroom experience to pull it off.  I’m not particularly sophisticated and I trained a team of IT (but non-security) folks to do it, too.

But guess what?  It takes WORK.  Lots and lots of WORK.  And it’s iterative, not static.

Because Michael's task list for security admins is so huge, anything that represents a significant investment in time, people or energy usually gets the lowest priority in the grand scheme of things.  That's the real reason defense-in-depth is such a great hiding place.

With all that stuff to do, you *must* be doing what matters most, right?  You’re so busy!  Unsophisticated, but busy! 😉

Instead of truly focusing on the things that matter, we pile stuff up and claim that we're doing the best we can with defense in depth, without truly understanding that perhaps what we are doing is not the best use of money, time and people after all.

Don't cop out.  Risk Management is neither "old school" nor a new concept; it's common sense, it's reasonable and it's the right thing to do.

It’s Rational Security.

The Downside of All-in-one Assumptions…

July 16th, 2006 No comments

I read with some interest a recent Network Computing web posting by Don MacVittie  titled "The Downside of All-in-One Security."  In this post, Don makes some comments that I don’t entirely agree with, so since I can’t sleep, I thought I’d perform an autopsy to rationalize my discomfort.

I've posted before regarding Don's commentary on UTM (that older story is basically identical to the one I'm commenting on today?) in which he said:

Just to be entertaining, I’ll start by pointing out that most readers I talk to wouldn’t
consider a UTM at this time. That doesn’t mean most organizations
wouldn’t, there’s a limit to the number I can stay in regular touch
with and still get my job done, but it does say something about the
market.

All I can say is that I don't know how many readers Don talks to, but the overall UTM market to which he refers can't be the same UTM market that IDC defines as being set to grow to $2.4 billion in 2009, a 47.9 percent CAGR from 2004-2009.  Conversely, the traditional firewall and VPN appliance market is predicted to decline to $1.3 billion by 2009, a negative CAGR of 4.8%.

The reality is that UTM players (whether perimeter or Enterprise/Service Provider class UTM) continue to post impressive numbers supporting this growth — and customers are purchasing these solutions.  Perhaps they don’t purchase "UTM" devices but rather "multi-function security appliances?" 🙂 

I’m just sayin’…

Don leads off with:


Unified Threat Management (UTM) products combine multiple security
functions, such as firewall, content inspection and antivirus, into a
single appliance. The assumption is UTM reduces management hassles by
reducing the hardware in your security infrastructure … but you know
what happens when you assume.

No real problems thus far.  My response to the interrogative posited by the last portion of Don’s intro is: "Yes, sometimes when you assume, it turns out you are correct."  More on that in a moment…


You can slow the spread of security appliances by collapsing many
devices into one, but most organizations struggle to manage the
applications themselves, not the hardware that runs them.

Bzzzzzzzzttttt.  The first half of the sentence is absolutely valid and a non-assumptive benefit to those deploying UTM.  The latter half makes a rather sizeable assumption, one I'd like substantiated, please.

If we’re talking about security appliances, today there’s little separation between the application and the hardware that runs them.  That’s the whole idea behind appliances.

In many cases these appliances use embedded software or an RTOS in silicon, or they very tightly couple the solution's functional and performance foundations to the specific combination of hardware and software.

I can’t rationalize someone not worrying about the "hardware," especially when they deploy things like HA clusters or a large number of branch office installations. 

You mean to tell me that in large enterprises (you notice that Don forces me to assume what market he’s referring to because he’s generalizing here…) that managing 200+ firewall appliances (hardware) is not a struggle?  Don talks about the application as an issue.  What about the operating system?  Patches?  Alerts/alarms?  Logs?  It’s hard enough to do that with one appliance.  Try 200.  Or 1000!

Content
inspection, antivirus and firewall are all generally controlled by
different crowds in the enterprise, which means some arm-wrestling to
determine who maintains the UTM solution.

This may be an accurate assumption in a large enterprise, but in a small company (SME/SMB) it's incredibly likely that the folks managing the CI, AV and firewall *are* the same people/person.  Chances are it's Bob in accounting!


Then there’s bundling. Some vendors support best-of-breed security
apps, giving you a wider choice. However, each application has to crack
packets individually–which affects performance.

So there’s another assumptive generalization that somehow taking traffic and vectoring it off at high speed/low latency to processing functions highly tuned for specific tasks is going to worsen performance.  Now I know that Don didn’t say it would worsen performance, he said it  "…affect(s) performance," but we all know what Don meant — even if we have to assume. 😉

Look, this is an over-reaching, generalized argument.  The reality is that even "integrated" solutions today perform replay and iterative inspection requiring multiple packet visitations with "individual packet cracking" — they just happen to do it in parallel, either monolithically in one security stack or via separate applications.  Architecturally, there are benefits to this approach.

Don’t throw the baby out with the bath water…

How do you think stand-alone non-in-line IDS/IPS works in conjunction with firewalls today in non-UTM environments?  The firewall gets the packet as does the IDS/IPS via a SPAN port, a load balancer, etc…they crack the packets independently, but in the case of IDS, it doesn’t "affect" the firewall’s performance one bit.  Using this analogy, in an integrated UTM appliance, this example holds water, too.
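
To make the architectural point concrete, here's a stripped-down sketch of that fan-out model in C: one packet handed to independent inspection engines in parallel, with the verdicts correlated afterward. Engine names and "policies" are hypothetical, obviously; no vendor's actual API is implied:

/* fanout.c -- toy parallel "packet cracking": each engine inspects the
 * same packet independently; neither blocks the other.
 * Build: cc -pthread fanout.c */
#include <stdio.h>
#include <string.h>
#include <pthread.h>

struct verdict { const char *engine; int block; };

static void *fw_engine(void *pkt)        /* hypothetical firewall policy */
{
    static struct verdict v = { "firewall", 0 };
    v.block = strstr((char *)pkt, "telnet") != NULL;
    return &v;
}

static void *ids_engine(void *pkt)       /* hypothetical IDS signature   */
{
    static struct verdict v = { "ids", 0 };
    v.block = strstr((char *)pkt, "exploit") != NULL;
    return &v;
}

int main(void)
{
    char packet[] = "GET /exploit HTTP/1.0";            /* toy traffic  */
    void *(*engines[2])(void *) = { fw_engine, ids_engine };
    pthread_t tid[2];
    int block = 0;

    /* fan out: both engines crack the same packet concurrently */
    for (int i = 0; i < 2; i++)
        pthread_create(&tid[i], NULL, engines[i], packet);

    /* correlate the independent verdicts into one final disposition */
    for (int i = 0; i < 2; i++) {
        struct verdict *v;
        pthread_join(tid[i], (void **)&v);
        printf("%-8s says %s\n", v->engine, v->block ? "block" : "allow");
        block |= v->block;
    }
    printf("final disposition: %s\n", block ? "block" : "allow");
    return 0;
}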

Furthermore, in a UTM approach the correlation for disposition is usually done on the same box, not via an external SIEM…further saving the poor user from having to deploy yet another appliance.  Assuming, of course, that this is a problem in the first place. 😉

I'd like some proof points and empirical data that clearly demonstrate this assumption regarding performance.  And don't hide behind the wording: the implication here is that you get "worse" performance.  With today's numbers from dual-CPU/multi-core processors, huge buses, NPUs and dedicated hardware assist, this set of assumptions is flawed.

Other vendors tweak
performance by tightly integrating apps, but you’re stuck with the
software they’ve chosen or developed.

…and then there are those vendors that tweak performance by tightly integrating the apps while allowing the customer to define what is best-of-breed, without being "stuck with the software [the vendor has] chosen or developed."  You get choice and performance.  To assume otherwise is not to have performed due diligence on the solutions available today.  If you need to guess who I am talking about…


For now, the single platform model isn’t right for enterprises large
enough to have a security staff.

Firstly, this statement is just plain wrong.  It *may* be right if you’re talking about deploying a $500 perimeter UTM appliance (or a bunch of them) in the core of a large enterprise, but nobody would do that.  This argument is completely off course when you’re talking about Enterprise-class UTM solutions.

In fact, if you choose the right architecture, assuming the statement above regarding separate administrative domains is correct, you can have the AV people manage the AV, the firewall folks manage the firewalls, etc. and do so in a very reliable, high speed and secure consolidated/virtualized fashion from a UTM architecture such as this.

That said, the sprawl created by
existing infrastructure can’t go on forever–there is a limit to the
number of security-only ports you can throw into the network. UTM will
come eventually–just not today

So, we agree again…security sprawl cannot continue.  It's an overwhelming issue both for those who need "good enough" security and for those who need best-of-breed.

However, your last statement leaves me scratching my head in confused disbelief, so I’ll just respond thusly:

UTM isn’t "coming," it’s already arrived.  It’s been here for years without the fancy title.  The same issues faced in the datacenter in general are the same facing the microcosm of the security space — from space, power, and cooling to administration, virtualization and consolidation — and UTM helps solve these challenges.  UTM is here TODAY, and to assume anything otherwise is a foolish position.

My $0.02 (not assuming inflation)

/Chris

Got a [Security] question? Ask the Ninja…

July 16th, 2006 2 comments

So, like, why is ‘thr33’ the magic number?  The Ninja answers thusly: "Combine the Wizard of Oz, Reign of Fire, and Jonathan Livingston Seagull and you’ll get the picture."  Then again, you probably won’t.

Confused as to just what the hell this has to do with security?   

So am I, so my apologies go out to any real ninjas who, in their spare time away from battling Magons (half monkey/half dragon — firebreathers with a prehensile tail!), have decided to read my security blog rather than relax with a Sobe and a stepped-down Pilates session.

That happens you know.  All.  The.  Time.

Seriously, though, there is a security reference in here.  Pay attention.  First person who responds in the comments section below as to the security reference gets a free pouch of homemade guacamole.  You pay shipping.

Click on the little ‘play’ icon in the pic below…

Categories: General Rants & Raves

Slow News Day + Patch Tuesday = FLANtastic One-liners!

July 11th, 2006 4 comments

I was actually going to write about how I think that so many of the FOG (you figure it out) security icons we have in the industry have turned into grumpy old bastards — all telling us how we're "doing security wrong" and that all you need is a few ACLs, a stick of chewing gum, a tampon and a teaspoon of Sucralose to secure your network…but then that would just be stating the obvious and I might be mistaken for an analyst…

Rot Roh.  Ah well.  Onto more pressing security matters because I have no interest in talking about privacy breaches, NAC or regulatory pressures today…we’re in the midst of moving our HQ this week and BOTH our existing and new buildings were struck by lightning today.  I figure I’ll use up my other 7 lives and pick on someone else.  Film @ 11.

So anyway, I was reading this fine piece of work today and I swear, this thing is written like page 6 of the Post.  I usually enjoy reading the scribbles over on Dark Reading, but it seems that every damned sentence in this article is gleefully punctuated with some doomsday quotation from the security-expert-rolodex-autobot Outlook 2000 journalistic quotamatron plug-in!

What happened to getting on with it and telling folks what they have to worry about instead of glamming it up with quote after quote of wag!?  If it wasn’t interesting enough to stand on its own as a story, why tart it up and put it on the corner hoping that someone might find it sexy?  Bah!

The fine folks quoted in this article probably gave some salient and well-articulated commentary (sigh) on the state of patching hell (oh, how rare), but the way it came across in this article, you'd think this was the first Patch Tuesday, evah!

The really funny thing about this story is that it comes across as though 80% of it consists of a bunch of strung-together quotations from these (mostly) vendors that actually contradict one another in some places.  Two of the quoted are contributing columnists from Dark Reading.

Read the article. You’ll laugh.  You’ll cry.

Check this out (of context):

  1. First, the title: "The Patch Race Is On"   Like, wow.  It’s, like, Patch Tuesday…again!?
  2. Then, the leader: "There were no big surprises among Microsoft’s Patch Tuesday releases today, but there were a couple of holes Microsoft kept under wraps until now." … so why write a big ass fluffy article about nothing then?
  3. The first of many "Captain Obvious" quotations oft times contradicted further on in the article to fill up the word count:
    • But it was the critical holes that caught most security experts’ and managers’ attention.
    • "Anything that is ranked as critical and allows an attacker to take control of a system is very high priority,"
    • "Although there were no real show-stoppers among the patches, the sheer number of vulnerabilities they cover is notable."
    • "Once a system is seized it can be used to penetrate other systems that otherwise would be more secure."
    • "You should jump on any server-side vulnerability quickly."
    • "An anonymous user from outside could deliver malicious traffic."
    • "That’s significant. I don’t think we’ve ever before seen so many vulnerabilities in Office applications."
    • "It’s not too surprising to find a bunch of Excel and
      Office vulnerabilities in here,"
    • "This will continue until we’ve caught all the big ones."
    • "It’s the Holy Grail of hacking,"
    • "Now the race is on for enterprises to test and install their patches before hackers can exploit these vulnerabilities."
    • "The problem with Patch Tuesday is Hack Wednesday,"
    • "I wouldn’t be surprised if you saw an exploit being publicly released tonight or tomorrow."

I think this was a synopsis of the "Idiot’s Guide to the Internet," right?  Or is it a history of the IRC?

I’m certain that within that article there were supposed to be a few useful nuggets of information, but I couldn’t see it for all the comedic value I extracted otherwise.  Many of these stories are becoming progressively anchored on goofy out-of-context quotes from some really notable people whom I respect…but it’s making them sound like total tools.

Save yourself some time, just go here.

Hey, my $0.02 (not accounting for inflation.)  Aw, crap.  I’ve turned into a grumpy bastard myself.

Did I mention you’re doing security wrong?

/Chris

Categories: General Rants & Raves

A chronology of privacy breaches…

July 7th, 2006 2 comments

What a staggering number of individuals who have had the privacy of their personally-identifiable information compromised:

    88,795,619

This information comes from the Privacy Rights Clearinghouse and presents a chronology of breaches since the ChoicePoint incident in February, 2005.

I don't remember seeing or hearing anything about most of these incidents…imagine how many more there are that none of us ever hear about!

Wow.

Chris

[O]ffice of [M]isguided [B]ureaucrats – Going through the Privacy Motions

July 4th, 2006 No comments

Like most folks, I've been preoccupied with doing nothing over the last few days, so please excuse the tardiness of this entry.  Looks like Alan Shimel and I are suffering from the same infection of laziness 😉

So, now that the 4 racks of ribs are in the smoker pending today’s festivities celebrating my country’s birth, I find it appropriate to write about this debacle now that my head’s sorted.

When I read this article several days ago regarding the standards the OMB was "requiring" of federal civilian agencies, I was dismayed (but not surprised) to discover that once again this was another set of toothless "guidelines" meant to dampen the public outrage surrounding the recent string of privacy breaches/disclosures.

For those folks whose opinion it is that we can rest easily and put faith in our government’s ability to federalize legislation and enforcement regarding privacy and security, I respectfully suggest that this recent OMB PR Campaign announcement is one of the most profound illustrations of why that suggestion is about the most stupid thing in the universe. 

Look, I realize that these are "civilian" agencies of our government, but the last time I checked, the "civilian" and "military/intelligence" arms were at least governed by the same set of folks whose responsibility it is to ensure that we, as citizens, are taken care of.  This means that at certain levels, what’s good for the goose is good for the foie gras…kick down some crumbs!

We don’t necessarily need Type 1 encryption for the Dept. of Agriculture, but how about a little knowledge transfer, information sharing and reasonable due care, fellas?  Help a brother out!

<sigh>

The article started off well enough…45 days to implement what should have been implemented years ago:

To comply with the new policy, agencies will have to encrypt all data
on laptop or handheld computers unless the data are classified as
"non-sensitive" by an agency’s deputy director.
Agency employees also
would need two-factor authentication — a password plus a physical
device such as a key card — to reach a work database through a remote
connection, which must be automatically severed after 30 minutes of
inactivity.

Buahahaha!  That's great.  Is the agency's deputy director going to personally inspect every file, database transaction and email on every laptop/handheld in his agency?  No, of course not.  Is this going to prevent disclosure and data loss from occurring?  Nope.  It may make it more difficult, but there is no silver bullet.

Again, this is why data classification doesn’t work.  If they knew where the data was and where it was going in the first place, it wouldn’t go missing, now would it?  I posted about this very problem here.

Gee, for $1.50 and a tour of the White House I could have drafted this.  In fact, I did, in a blog post a couple of weeks ago 😉

But here’s the rub in the next paragraph:

OMB said agencies are expected to have the measures in place within 45
days, and that it would work with agency inspectors general to ensure
compliance. It stopped short of calling the changes "requirements,"
choosing instead to label them "recommendations" that were intended "to
compensate for the protections offered by the physical security
controls when information is removed from, or accessed from outside of
the agency location."

Compensate for the protections offered by the physical security controls!?  You mean like the ones that allowed for the removal of data lost in these breaches in the first place!?  Jesus.

I just love this excerpt from the OMB’s document:

Most departments and agencies have these measures already in place.  We intend to work with the Inspectors General community to review these items as well as the checklist to ensure we are properly safeguarding the information the American taxpayer has entrusted to us.  Please ensure these safeguards have been reviewed and are in place within the next 45 days.

Oh really!?  Are the Dept. of the Navy, the Dept. of Agriculture and the IRS among those departments that have these measures in place?  And I love how polite they can be now that tens of millions of taxpayers' personal information has been displaced…"Please ensure these safeguards…"  Thanks!

Look, grow a pair, stop spending $600 on toilet seats, give these joes some funding to make it stick, make the damned "recommendations" actual "requirements," audit them like you audit the private sector for SOX, and perhaps the idiots running these organizations will take their newfound budgetary allotments and actually improve upon ridiculous information security scorecards such as these:

[Image: 2005 government information security scorecard]

I don’t mean to come off like I’m whining about all of this, but perhaps we should just outsource government agency security to the private sector.  It would be good for the economy and although it would become a vendor love-fest, I reckon we’d have better than a D+…

/Chris

[IN]SECURE Magazine

July 4th, 2006 No comments

Many of you may already be aware of this fantastic security eZine, but for those of you who are not, treat yourself to a quick PDF download of this great periodical.

Excellent technical articles, great product and show coverage and some impressive interviews to boot.

/Chris

Categories: Information Security