
Archive for February, 2008

BeanSec! Wednesday, February 20th, 2008 – 6PM to ?

February 19th, 2008 No comments

Yo!  BeanSec! is once again upon us.  Wednesday, February 20th, 2008.

BeanSec! is an informal meetup of information security professionals, researchers and academics in the Greater Boston area that meets the third Wednesday of each month.

I say again, BeanSec! is hosted the third Wednesday of every month.  Add it to your calendar.

Come get your grub on.  Lots of good people show up.  Really.

Unlike other meetings, you will not be expected to pay dues, “join up”, present a zero-day exploit, or defend your dissertation to attend.  Map to the Enormous Room in Cambridge.

Enormous Room: 567 Mass Ave, Cambridge 02139.  Look for the Elephant on the left door next to the Central Kitchen entrance.  Come upstairs.  We sit on the left hand side…

Don’t worry about being "late" because most people just show up when they can.  6:30 is a good time to aim for.  We’ll try and save you a seat.  There is a parking garage across the street and 1 block down, or you can try the streets (or take the T).

In case you’re wondering, we’re getting about 30-40 people on average per BeanSec!  Weld, 0Day and I have been at this for just over a year and without actually *doing* anything, it’s turned out swell.

We’ve had some really interesting people of note attend lately (I’m not going to tell you who…you’ll just have to come and find out.)  At around 9:00pm or so, the DJ shows up…as do the rather nice looking people from the Cambridge area, so if that’s your scene, you can geek out first and then get your thang on.

The food selection is basically high-end finger-food appetizers and the drinks are really good; an attentive staff and eclectic clientèle make the joint fun for people watching.  I’ll generally annoy you into participating somehow, even if it’s just fetching napkins. 😉

See you there.

/Hoff

Categories: BeanSec!

A Worm By Any Other Name Is…An Information Epidemic?

February 18th, 2008 2 comments

Martin McKeay took exception to some interesting Microsoft research suggesting that the methodologies and tactics used by malicious software such as worms and viruses could also be used as an effective distributed defense against them:

Microsoft researchers are hoping to use "information epidemics" to distribute software patches more efficiently.

Milan Vojnović and colleagues from Microsoft Research in Cambridge, UK, want to make useful pieces of information such as software updates behave more like computer worms: spreading between computers instead of being downloaded from central servers.

The research may also help defend against malicious types of worm, the researchers say.

Software worms spread by self-replicating. After infecting one computer they probe others to find new hosts. Most existing worms randomly probe computers when looking for new hosts to infect, but that is inefficient, says Vojnović, because they waste time exploring groups or "subnets" of computers that contain few uninfected hosts.

Despite the really cool moniker (information epidemic), this isn’t a particularly novel distribution approach and, in fact, we’ve seen malware do this.  However, it is interesting to see that an OS vendor (Microsoft) is continuing to actively engage in research to explore this approach despite the opinions of others who simply claim it’s a bad idea.  I’m not convinced either way, however.

I, for one, am all for resilient computing environments that are aware of their vulnerabilities and can actively defend against them.  I will be interested to see how this new paper builds off of work previously produced on the subject and its corresponding criticism.

Vojnović’s team have designed smarter strategies that can exploit the way some subnets provide richer pickings than others.

The ideal approach uses prior knowledge of the way uninfected computers are spread across different subnets. A worm with that information can focus its attention on the most fruitful subnets – infecting a given proportion of a network using the smallest possible number of probes.

But although prior knowledge could be available in some cases – a company distributing a patch after a previous worm attack, for example – usually such perfect information will not be available. So the researchers have also developed strategies that mean the worms can learn from experience.

In the best of these, a worm starts by randomly contacting potential new hosts. After finding one, it uses a more targeted approach, contacting only other computers in the same subnet. If the worm finds plenty of uninfected hosts there, it keeps spreading in that subnet, but if not, it changes tack.
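
For the curious, here’s a minimal toy sketch (in Python) of that "learn from experience" strategy as the article describes it.  The network model, subnet sizes and give-up threshold are all my own assumptions for illustration, not anything from the paper:

```python
import random

def adaptive_spread(network, hosts_per_subnet, max_probes=500, give_up_after=8):
    """Probe randomly until a vulnerable host is found, then concentrate on
    its subnet; fall back to random probing once the subnet stops paying off.
    `network` maps subnet id -> set of vulnerable host ids (my toy model)."""
    infected = set()
    subnets = list(network)
    focus = None   # the subnet we're currently exploiting, if any
    misses = 0
    for _ in range(max_probes):
        s = focus if focus is not None else random.choice(subnets)
        h = (s, random.randrange(hosts_per_subnet))
        if h[1] in network[s] and h not in infected:
            infected.add(h)           # fresh vulnerable host found
            focus, misses = s, 0      # keep spreading in this subnet
        else:
            misses += 1
            if focus is not None and misses >= give_up_after:
                focus, misses = None, 0   # subnet looks picked over; change tack
    return infected

# Some subnets offer "richer pickings" than others (assumed ratios).
ratios = {s: random.uniform(0.05, 0.6) for s in range(8)}
net = {s: {h for h in range(32) if random.random() < ratios[s]} for s in range(8)}
print(f"{len(adaptive_spread(net, hosts_per_subnet=32))} hosts reached")
```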

That being the case, here’s some of Martin’s heartburn:

But the problem is, if both beneficial and malign software show the same basic behavior patterns, how do you differentiate between the two? And what’s to stop the worm from being mutated once it’s started, since bad guys will be able to capture the worms and possibly subvert their programs?

The article isn’t clear on how the worms will secure their network, but I don’t believe this is the best way to solve the problem that’s being expressed. The problem being solved here appears to be one of network traffic spikes caused by the download of patches. We already have widely used protocols that solve this problem: BitTorrent and P2P programs. So why create a potentially hazardous situation using worms when a better solution already exists? Yes, torrents can be subverted too, but these are problems that we’re a lot closer to solving than what’s being suggested.

I don’t want something that’s viral infecting my computer, whether it’s for my benefit or not. The behavior isn’t something to be encouraged. Maybe there’s a whole lot more to the paper, which hasn’t been released yet, but I’m not comfortable with the basic idea being suggested. Worm wars are not the way to secure the network.

I think that some of the points that Martin raises are valid, but I also think that he’s reacting mostly out of fear to the word ‘worm.’  What if we called it "distributed autonomic shielding?" 😉

Some features/functions of our defensive portfolio are going to need to become more self-organizing, autonomic and intelligent, and that goes for the distribution of intelligence and disposition, too.  If we’re not going to advocate being offensive, then we should at least be offensively defensive.  This is potentially one way of doing that.

Interestingly, this dovetails with some discussions we’ve had recently with Andy Jaquith and Amrit Williams; the notion of herds and biotic propagation and response is really quite fascinating.  See my post titled "Thinning the Herd & Chlorinating the Gene Pool."

I’ve left out most of the juicy bits of the story so you should go read it and churn on some of the very interesting points raised as part of the discussion.

/Hoff

Update: Schneier thinks this is a lousy idea. That doesn’t move me one direction or the other, but I think this is cementing my opinion that had the author not used the word ‘worm’ in his analogy, the idea might not be dismissed so quickly…

Also, Wismer via a comment on Martin’s blog pointed to an interesting read from Vesselin Bontchev titled "Are "Good" Computer Viruses Still a Bad Idea?"

Update #2: See the comments section about how I think the use case argued by Schneier et al. is, um, slightly missing the point.  Strangely enough, check out the Network World article that just popped up, which says "‘This was not the primary scenario targeted for this research,’ according to a statement."

Duh.

Announcing the Security Star Chamber…

February 17th, 2008 No comments

I had an idea today: a platform upon which to launch a little security parody mixed with an even dose of introspective navel gazing and the odd spoonful of guffaw.  The goal is to provide a healthy whilst humorous appraisal of the state of the security industry.

Think InfoSec Sellout meets Monty Python and The Apprentice.

Did you ever see the movie The Star Chamber?

In one of his earlier features, Michael Douglas plays a young judge who becomes disillusioned with the law system he used to so admire when he finds himself continually having to acquit particularly despicable criminals on the grounds of ridiculous technicalities.

Sensing his frustration, a close friend (Hal Holbrook) informs him of a secret judicial society that meets and dishes out the appropriate punishment to those who have escaped the clutches of the law.

Inspired by some conversations this last week at ShmooCon with friends new and old, I am creating the Rational Survivability version of the "Security Star Chamber."

I’m going to play the disillusioned (young) judge.  I’ve recruited my not-so-secret judicial society who will, on a weekly basis, cast judgment against a specific market of the security industry; we’ll pick on a segment in a no-holds-barred look at the belly of the beast, not to dispense punishment, but rather to provide perspective.

If we can’t take ourselves seriously, we may as well play the fool instead.

We expect to communicate our judgment in the most pompous, self-important and aggrandizing style we possibly can.  Fair and balanced?  This ain’t Fox News (if you can’t sift through that irony, you’re sure as hell going to hate the SSC…)

Here’s the catch…each member of the jury has to summarize his or her argument in one sentence.

This may lend itself to some awkward dialog, but it ought to be mildly interesting for sure.

You’ll meet the other judges shortly 😉

/Hoff

 

Categories: Uncategorized

Security Innovation & the Bendy Hammer

February 17th, 2008 4 comments

See that odd-looking hammer to the left?  It’s called the MaxiStrike from Redback Tools.

No, it hasn’t been run over by a Panzer, nor was there grease on the lens  during the photography session. 

Believe it or not, that odd little bend enables this 20-ounce mallet to do the following:

     > maximize strike force

     > reduce missed hits

     > leave clearance for nailing in cramped areas

All from that one little left-hand turn from linear thought in product design.

You remember that series of posts I did on Disruptive Innovation?

This is a perfect illustration of how innovation can be "evolutionary" as opposed to revolutionary.

Incrementalism can be just as impactful as one of those tipping-point "big-bang" events that have desensitized us to some of the really cool things that pop up and can actually make a difference.

So I know this hammer isn’t going to cure cancer, but it makes for easier, more efficient and more accurate nailing.  Sometimes that’s worth a hell of a lot to someone who does a lot of hammering…

Things like this happen around us all the time — even in our little security puddle of an industry. 

It’s often quite fun when you spot them.

I bet if you tried, you could come up with some examples in security.

Well?

Virtualization Hits the Mainstream…

February 13th, 2008 3 comments


Sad, but true…

Categories: Virtualization

Catbird Says It Has a Better Virtualization Security Mousetrap – “Dedicated Hypervisor Security Solution”

February 13th, 2008 2 comments
I spent quite a bit of time in the Catbird booth at VMworld, initially lured by their rather daring advertising campaign of "running naked."  I came away intrigued by the Security SaaS-like business model provided by their V-Agent offering and saw that as the primary differentiator.

I was particularly interested today when I read the latest press release from Catbird, which suggests that their new "HypervisorShield" is specifically designed to secure the hypervisor from network access and attack:


Catbird, provider of the only comprehensive security solution for virtual and physical networks, and developer of the V-Agent virtual appliance, today announced the launch of HypervisorShield, the industry’s first dedicated comprehensive security solution specifically designed to guard against unauthorized hypervisor network access and attack.

The paragraph above seems to be talking about protecting the "hypervisor" itself from network-borne compromise, which is very interesting to me for reasons that should be obvious at this point.

However, the following paragraph seems to refer to the "hypervisor management network," which I assume actually means the virtual interface of the management functions, like VMware’s service console.  Are we talking about protecting the service console or the network functions provided by the VMkernel?

HypervisorShield, the latest service in Catbird’s V-Security product, extends best practice security protection to virtualization’s critical hypervisor layer, thwarting both inadvertent management error and malicious threats. Delivering continuous, automated 24×7 monitoring focused on the precise vulnerabilities, known attack signatures and guest machine access of the hypervisor management network, HypervisorShield is the only service to proactively secure this essential component of a virtualization deployment.

Here’s where it gets a little more confusing because the wording seems again to suggest they are protecting the hypervisor itself — or do they mean the virtual switch as a component of the hypervisor?

HypervisorShield is the first virtualized security technology which can monitor and control access to the hypervisor network, detect malicious network activity directed at the hypervisor from virtual machines and validate that the hypervisor network is configured according to best practices and site security policy.

…sounds like an IPS function that isolates VMs from one another, like Reflex and Blue Lane?

OK, but here’s where it gets really interesting.  Catbird is suggesting that they are able to "…see inside the hypervisor" which implies they have hooks and exposure to elements within the hypervisor itself versus the vSwitch plumbing that everyone has access to.

Via the groundbreaking Catbird V-Agent virtual appliance, protection is delivered within the virtual network itself. By contrast, traditional security solutions retrofitted for virtual deployments cannot see inside the hypervisor. Monitoring from the inside yields significantly more effective coverage and eliminates the need to reroute traffic onto the physical network for validation. As an example of the benefits of running right on the virtual subnet, HypervisorShield’s exclusive network access control (NAC) will instantly quarantine unauthorized devices on the management network.

They do talk about NAC from the VM perspective, which is something I’ve been advocating.

From Catbird’s website we see some more detail regarding HypervisorShield which again introduces an interesting assertion:

How do you monitor the Hypervisor?

Securing a virtual host does not only involve applying the same security controls to virtual networks as were applied to their physical counterparts. Virtualization introduces a new layer of abstraction entirely—the Hypervisor. Hypervisor exploits have grown 35% in the last several years, with more surely on their way. Catbird’s patent-pending HypervisorShield protects and defends this essential component of a virtual deployment.

Really?  Hypervisor exploits have grown 35% in the last several years?  Which hypervisor exploits, exactly?  You mean exploits against the big, fat, Linux-based service console from VMware?  That’s not the hypervisor!

I’m trying to give Catbird the benefit of the doubt here, but it’s confusing as heck what exactly Catbird does (beyond partnering with companies like SourceFire) that folks like Reflex and Blue Lane don’t already do.

If anyone, especially Catbird, has some clarification for me, I’d be mighty appreciative.

/Hoff


Categories: Virtualization

Google Security: Frightening Statistics On Drive-By Malware Downloads…

February 12th, 2008 1 comment

Read a scary report from Google’s security team today titled "All your iFrame Are Point to Us" regarding the evolving trends in search-delivered drive-by malware downloads.  Check out the full post here, but the synopsis follows:

It has been over a year and a half since we started to identify web pages that infect vulnerable hosts via drive-by downloads, i.e. web pages that attempt to exploit their visitors by installing and running malware automatically. During that time we have investigated billions of URLs and found more than three million unique URLs on over 180,000 web sites automatically installing malware. During the course of our research, we have investigated not only the prevalence of drive-by downloads but also how users are being exposed to malware and how it is being distributed. Our research paper is currently under peer review, but we are making a technical report [PDF] available now.  Although our technical report contains a lot more detail, we present some high-level findings here:

The above graph shows the percentage of daily queries that contain at least one search result labeled as harmful. In the past few months, more than 1% of all search results contained at least one result that we believe to point to malicious content and the trend seems to be increasing.

Ugh.  The technical report offers some really good background data on infrastructure and methodology,  geographic distribution, properties and delivery mechanisms.  Fascinating reading.
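
The report is about measurement at Google scale, but the core tell is simple.  Here’s a toy heuristic — emphatically mine, not Google’s methodology — for flagging the classic drive-by signature of an invisible, off-site iframe injected into a page (the domains are made up for illustration):

```python
import re

# A toy heuristic -- my own illustration, NOT Google's methodology -- for
# spotting the classic drive-by tell: an invisible (or near-invisible)
# iframe injected into a page, pointing at a third-party host.
IFRAME_RE = re.compile(r'<iframe[^>]*>', re.IGNORECASE)
HIDDEN_RE = re.compile(r'(?:width|height)=["\']?0|display:\s*none|visibility:\s*hidden',
                       re.IGNORECASE)
SRC_RE = re.compile(r'src=["\']?([^"\'\s>]+)', re.IGNORECASE)

def suspicious_iframes(html, page_host):
    """Return the src URLs of hidden iframes that point off-site."""
    hits = []
    for tag in IFRAME_RE.findall(html):
        src = SRC_RE.search(tag)
        if not src:
            continue
        url = src.group(1)
        if HIDDEN_RE.search(tag) and url.startswith('http') and page_host not in url:
            hits.append(url)
    return hits

page = ('<html><body>legit content'
        '<iframe src="http://evil.example/exploit.php" width=0 height=0>'
        '</iframe></body></html>')
print(suspicious_iframes(page, 'mysite.example'))  # -> ['http://evil.example/exploit.php']
```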

/Hoff

Categories: Google, Malware

Off The Cuff Review: Nemertes Research’s “Virtualization Risk Analysis”

February 12th, 2008 4 comments

I just finished reading a research paper from Andreas Antonopoulos of Nemertes titled "A risk analysis of large-scaled and dynamic virtual server environments."  You can find the piece here:

Executive Summary

As virtualization has gained acceptance in corporate data centers, security has gone from afterthought to serious concern. Much of the focus has been on the technologies of virtualization rather than the operational, organizational and economic context. This comprehensive risk analysis examines the areas of risk in deployments of virtualized infrastructures and provides recommendations.

Two things interested me immediately:

  1. While I completely agree that, with regard to virtualization and security, the focus has been on the "…technologies of virtualization rather than the operational, organizational and economic context," I’m not convinced there is an overwhelming consensus that "…security has gone from afterthought to serious concern," mostly because we’re just now getting to see "large-scaled and dynamic virtual server environments."  It’s still painted on, not baked in.  At least that’s how people react at my talks.

  2. Virtualization is about so much more than just servers, and in order to truly paint a picture of analyzing risk within "large-scaled and dynamic virtual server environments," you have to recognize that much of the complexity and many of the issues associated specifically with security stem from the operational and organizational elements of virtualizing storage, networking, applications, policies and data, and from the wholesale shift in operationalizing security and who owns it within these environments.

I’ve excerpted the most relevant element of the issue Nemertes wanted to discuss:

With all the hype surrounding server virtualization come the inevitable security concerns: are virtual servers less secure? Are we introducing higher risk into the data center? For server virtualization to deliver benefits we have to examine the security risks. As with any new technology there is much uncertainty mixed in with promise. Part of the uncertainty arises because most companies do not have a good understanding of the real risks surrounding virtualization.

I’m easily confused…

While I feel the paper does a good job of describing the various stages of deployment and many of the "concerns" associated with server virtualization within these contexts, I’m left unsatisfied that I’m any more prepared to assess and manage risk regarding server virtualization.  I’m concerned that the term "risk" is being spread about rather liberally simply because a bit of math is present.

The formulaic "Virtualization Risk Assessment" section purports to establish a quantitative basis for computing "relative risk" in the assessment summary.  However, since the variables introduced in the formulae are subjective and specific per asset, it’s odd that the summary table is then seemingly presented generically, as if to describe all assets:

Scenario | Vulnerability | Impact | Probability of Attack | Overall Risk
Single virtual server (hypervisor risk) | Low | High | Low | Low/Medium
Basic services virtualized | Low | High | Medium | Medium
Production applications virtualized | Medium | High | High | Medium/High
Complete virtualization | High | High | High | High
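
Since the paper’s formulae aren’t reproduced here, here’s a rough guess (in Python, and entirely my own construction, not Nemertes’) at the kind of qualitative rollup the table implies.  The fact that you can pick a Low/Medium/High-to-number mapping that reproduces the table is rather the point about subjectivity:

```python
# My assumed mapping and thresholds -- not from the paper.
SCORE = {"Low": 1, "Medium": 2, "High": 3}

def relative_risk(vulnerability, impact, probability):
    """Multiply subjective ratings, then bucket the product back into a label."""
    product = SCORE[vulnerability] * SCORE[impact] * SCORE[probability]
    if product <= 4:
        return "Low/Medium"
    if product <= 9:
        return "Medium"
    if product <= 18:
        return "Medium/High"
    return "High"

print(relative_risk("Low", "High", "Low"))     # -> Low/Medium (matches row 1)
print(relative_risk("High", "High", "High"))   # -> High       (matches row 4)
```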

I’m trying to follow this and then get smacked about by this statement, which explains why people just continue to meander along applying the same security strategies toward virtualized servers as they do in conventional environments:

This conclusion might appear to be pessimistic at first glance. However, note that we are comparing various stages of deployment of virtual servers. A large deployment of physical servers will suffer from many of the same challenges that the “Complete Virtualization” environment suffers from.

Furthermore, it’s unclear to me how to factor compensating controls into this rendering, given what follows:

What is new here is that there are fewer solutions for providing virtual security than there are for providing physical security with firewalls and intrusion prevention appliances in the network. On the other hand, the cost of implementing virtualized security can be significantly lower than the cost of dedicated hardware appliances, just like the cost of managing a virtual server is lower than a physical server.

The security solutions available today are limited by how much integration exists with the virtualization platforms.  We’ve yet to see the VMMs/hypervisors opened up to allow true low-level integration and topology-sensitive security interaction with flow classification, provisioning, and disposition.

Almost all supposed "virtualization-ready" security solutions today are nothing more than virtual appliance versions of existing solutions or simply the same host-based solutions which run in the VM and manage not to cock it up.  Folding your management piece into something like VMware’s VirtualCenter doesn’t count.

In general, I simply disagree that the costs of implementing virtualized security (today) can be significantly lower than the cost of dedicated hardware appliances — not if you’re expecting the same levels of security you get in the conventional, non-virtualized world.

The reasons (as I give in my VirtSec presentations): loss of visibility, constraints of the virtual networking configurations, coverage, load on the hosts, licensing.  All really important.

Cutting to the Chase

I’m left waiting for the punchline, much like I was with Burton’s "Immutable Laws of Virtualization," and I think the reason why is that despite these formulae, the somewhat shallow definition of risk seems to still come down to nothing more than reasonably-informed speculation or subjective perception:

So, in the above risk analysis, one must also consider that the benefits in virtualization far outweigh the risks.

The question is not so much whether companies should proceed with virtualization – the market is already answering that resoundingly in the affirmative. The question is how to do that while minimizing the risk inherent in such a strategy.

These few sentences seem to almost obviate the need for risk analysis at all and suggest that for most, security is still an afterthought.  High risk or not, the show must go on?

So given the fact that virtualization is happening at breakneck pace, we have few good security solutions available, we speak of risk "relatively," and that operationally the entire role and duty of "security" within virtualized environments is now shifting, how do we end up with this next statement?

In the long run, virtualized security solutions will not only help mitigate the risk of broadly deployed infrastructure virtualization, but will also provide new and innovative approaches to information security that is in itself virtual. The dynamic, flexible and portable nature of virtual servers is already leading to a new generation of dynamic, flexible and portable security solutions.

I like the awareness Andreas tries to bring in this paper, but I fear that I am not left with any new information or tools for assessing risk (let alone quantifying it) in a virtual environment. 

So what do I do?!  I still have no answer to the main points of this paper: "With all the hype surrounding server virtualization come the inevitable security concerns: are virtual servers less secure? Are we introducing higher risk into the data center?"

Well?  Are they?  Am I?

/Hoff

Do The Shmoo: Who’s Going to ShmooCon?

February 12th, 2008 4 comments

I’ll be in D.C. at ShmooCon the latter part of this week.

I’m arriving in D.C. the afternoon of the 14th and leaving on Saturday the 17th.

If you’re going to be there, ping me [choff @ packetfilter.com] or call my voice router @ +1.978.631.0302.  I’m looking forward to a number of talks.

See you there.

/Hoff

Categories: Conferences

On the Chatham House Rule

February 9th, 2008 5 comments

James Gardner reminded me of something that I wanted to bring up but had forgotten about for some time.  Yes, he’s Australian, but he can’t help that.

You’d understand why that was funny if you knew that I grew up in New Zealand.  Or perhaps not.

Let me first begin by suggesting that we owe many things to the empire of Great Britain. 

There’s the Queen, crumpets, French jokes, that wonderful derivative affectation that causes all the women to swoon, the incessant need for either a cuppa tea or litres of beer, and some interesting cultural and business customs.

One of those customs is that of the Chatham House Rule.

If you’ve ever been to the UK and attended a business meeting discussing sensitive subject matter, there’s a good chance that someone pronounced that all those participating are cloaked under the Chatham House Rule.

If, as a gracious guest, you were not (at least by modern standards) subject to Her Majesty’s sovereign rule, you may have simply smiled and nodded politely not knowing who, what, or where this oddly-named domicile was and what it may have had to do with your meeting.

The same could be said for that guy Robert and all his suggestions, I suppose.

At any rate, for all of you who have wondered just what in Tony Blair’s closet you just agreed to when you attended one of these meetings governed by this odd architectural framework defined in the spirit of Chatham, you may now wonder no longer.

The Chatham House Rule reads as follows:

"When a meeting, or part thereof, is held under the Chatham House
Rule, participants are free to use the information received, but
neither the identity nor the affiliation of the speaker(s), nor that of
any other participant, may be revealed".

The world-famous Chatham House Rule may be invoked at meetings to encourage openness and the sharing of information.

EXPLANATION of the Rule

The Chatham House Rule originated at Chatham House with the aim of providing anonymity to speakers and to encourage openness and the sharing of information. It is now used throughout the world as an aid to free discussion. Meetings do not have to take place at Chatham House to be held under the Rule.

Meetings, events and discussions held at Chatham House are normally conducted ‘on the record’ with the Rule occasionally invoked at the speaker’s request. In cases where the Rule is not considered sufficiently strict, an event may be held ‘off the record’.

If you’re interested in what the Chatham House is, besides the link to the rule (above) you can check out the following link to learn about the home of the Royal Institute of International Affairs.

Three things will likely come of this post:

  1. You can confidently acknowledge your understanding of The Rule and use it in the spirit under which it was constructed
  2. You’ve now realized that all that stuff you blabbed about from those prior meetings under The Rule (which you didn’t understand) is someday going to come back and punt you right in the blender
  3. You can now start invoking the Chatham House Rule in random places regarding all manner of activities and confuse the hell out of people.  I quite like declaring it before ordering Chili Poppers and girlie drinks at TGI Friday’s, for example.

You can probably guess why I’m writing this.

Some people just never learn.

My work here is done.

Carry on.

/Hoff

Categories: General Rants & Raves