Archive

Archive for December, 2007

Thinning the Herd & Chlorinating the Malware Gene Pool…

December 28th, 2007 3 comments

[Image: an anchovy swarm, via Science News]
Alan Shimel pointed us, in his post here, to an interesting article written by Matt Hines regarding the "herd intelligence" approach to security.  He followed it up here.

All in all, I think both the original article that Andy Jaquith was quoted in and Alan’s interpretations shed interesting light on a problem-solving perspective.

I’ve got a couple of comments on Matt and Alan’s scribbles.

I like the notion of swarms/herds.  The picture to the right from Science News describes the notion of "rapid response," wherein "mathematical modeling is explaining how a school of fish can quickly change shape in reaction to a predator."  If you’ve ever seen this in the wild or even in film, it’s an incredible thing to see in action.

It should then come as no surprise that I think solving the "security problem" is more efficiently performed (assuming one preserves the current construct of detection and prevention mechanisms) by distributing both functions and coordinating activity as part of an intelligent "groupthink," even when executed locally.  This is exactly what I was getting at in my "useful predictions" post for 2008:

Grid and distributed utility computing models will start to creep into security
A really interesting by-product of the "cloud compute" model is that as data, storage, networking, processing, etc. get distributed, so shall security.  In the grid model, one doesn’t care where the actions take place so long as service levels are met and the experiential and business requirements are delivered.  Security should be thought of in exactly the same way.

The notion that you can point to a physical box and say it performs function ‘X’ is so last Tuesday.  Virtualization already tells us this.  So, imagine if your security processing isn’t performed by a monolithic appliance but instead is contributed to in a self-organizing fashion wherein the entire ecosystem (network, hosts, platforms, etc.) all contribute in the identification of threats and vulnerabilities as well as function to contain, quarantine and remediate policy exceptions.

Sort of sounds like that "self-defending network" spiel, but not focused on the network and with common telemetry and distributed processing of the problem.

Check out Red Lambda’s cGrid technology for an interesting view of this model.

This basically means that we should distribute the sampling, detection and prevention functions across the entire networked ecosystem, not just to dedicated security appliances; each of the end nodes should communicate using a standard signaling and telemetry protocol so that common threat, vulnerability and effective disposition information can be communicated upstream and downstream to one another and to one or more management facilities.
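
To make that concrete, here’s a minimal sketch of what a per-node threat report and its hand-off upstream might look like.  The schema, field names and HTTP collector endpoint are purely illustrative assumptions on my part, not any real or proposed standard.

```python
# A sketch of a per-node telemetry record in the "herd" model.
# The schema, field names and collector endpoint are hypothetical.
import hashlib
import json
import time
import urllib.request


def build_threat_report(sample_bytes: bytes, node_id: str, verdict: str) -> dict:
    """Package a local detection event so peers and managers can consume it."""
    return {
        "schema": "herd-telemetry/0.1",   # hypothetical version tag
        "node_id": node_id,               # which endpoint saw it
        "timestamp": time.time(),         # when it was observed
        "sample_sha256": hashlib.sha256(sample_bytes).hexdigest(),
        "verdict": verdict,               # e.g. "detected", "quarantined"
    }


def publish(report: dict, collector_url: str) -> None:
    """Send the report upstream to a management facility (assumed HTTP collector)."""
    body = json.dumps(report).encode("utf-8")
    req = urllib.request.Request(collector_url, data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)


if __name__ == "__main__":
    # No collector is assumed to exist, so just show the record itself.
    print(json.dumps(build_threat_report(b"suspicious bytes", "host-042", "quarantined"),
                     indent=2))
```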

This is what Andy was referring to when he said:

As part of the effort, security vendors may also need to begin sharing more of that information with their rivals to create a larger network effect for thwarting malware on a global basis, according to the expert.

It may be hard to convince rival vendors to work together because of the perception that it could lessen differentiation between their respective products and services, but if the process clearly aids in quelling the rising tide of new malware strains, the software makers may have little choice other than to partner, he said.

Secondly, Andy suggested that basically every end-node would effectively become its own honeypot:

"By
turning every endpoint into a malware collector, the herd network
effectively turns into a giant honeypot that can see more than existing
monitoring networks," said Jaquith. "Scale enables the herd to counter
malware authors’ strategy of spraying huge volumes of unique malware
samples with, in essence, an Internet-sized sensor network."

I couldn’t agree more!  This is the sort of thing that I was getting at back in August when I was chatting with Lance Spitzner about using VMs for honeypots on distributed end nodes:

I clarified that what I meant was actually integrating a HoneyPot running in a VM on a production host as part of a standardized deployment model for virtualized environments.  I suggested that this would integrate into the data collection and analysis models the same way as a "regular" physical HoneyPot machine, but could utilize some of the capabilities built into the VMM/HV’s vSwitch to actually virtualize a single HoneyPot across an entire collection of VMs on a single physical host.
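
To ground the endpoint-as-collector idea, here’s a minimal sketch of the dark-port half of such a honeypot: anything that connects to a port where no legitimate service listens is suspicious by definition, so it gets captured and packaged for the herd.  The port number, report shape and upstream hand-off are all assumptions for illustration.

```python
# A sketch of the endpoint-honeypot half of the herd: listen on a "dark"
# port where nothing legitimate lives, capture whatever connects, and
# package it for upstream reporting. Port and report shape are assumptions.
import hashlib
import socket

DARK_PORT = 2222  # hypothetical: no real service listens here


def run_honeypot() -> None:
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", DARK_PORT))
    srv.listen(5)
    while True:
        conn, peer = srv.accept()
        conn.settimeout(5.0)
        try:
            payload = conn.recv(65536)  # grab whatever the attacker sends first
        except socket.timeout:
            payload = b""
        finally:
            conn.close()
        report = {
            "source_ip": peer[0],
            "sample_sha256": hashlib.sha256(payload).hexdigest(),
            "bytes": len(payload),
        }
        # In a real deployment this would be handed to the telemetry
        # publisher sketched earlier; here we just print it.
        print(report)


if __name__ == "__main__":
    run_honeypot()
```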

Thirdly, the notion of information sharing across customers has been implemented cross-sectionally in industry verticals with the advent of the ISACs, such as the Financial Services Information Sharing and Analysis Center, which seeks to inform and ultimately leverage distributed information gathering and sharing to protect its subscribing members.  Generally-available services like Symantec’s DeepSight have also tried to accomplish similar goals.

Unfortunately, these offerings generally lack ubiquitous data-gathering and real-time enforcement capabilities.

As Matt pointed out in his article, gaining actionable intelligence from the monstrous amount of telemetric data coming from participating end nodes means there is a real need to prune for false positives.  This is the trade-off between simply collecting data and actually applying intelligence at the end-node and effecting disposition.
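
One simple way a herd could do that pruning, sketched below under an assumed record shape and a made-up threshold, is to act only on samples that multiple distinct nodes report independently; a hash seen by one node is probably noise, while a hash seen by many is probably a campaign.

```python
# A sketch of herd-side false-positive pruning: only escalate a sample
# once enough distinct nodes have independently reported the same hash.
# The threshold and record shape are illustrative assumptions.
from collections import defaultdict

CONSENSUS_THRESHOLD = 3  # hypothetical: distinct nodes required before acting


def actionable_samples(reports):
    """reports: iterable of dicts carrying 'sample_sha256' and 'node_id'."""
    sightings = defaultdict(set)
    for r in reports:
        sightings[r["sample_sha256"]].add(r["node_id"])
    return {h for h, nodes in sightings.items() if len(nodes) >= CONSENSUS_THRESHOLD}


if __name__ == "__main__":
    demo = [
        {"sample_sha256": "aa", "node_id": "n1"},
        {"sample_sha256": "aa", "node_id": "n2"},
        {"sample_sha256": "aa", "node_id": "n3"},
        {"sample_sha256": "bb", "node_id": "n1"},  # lone sighting: likely noise
    ]
    print(actionable_samples(demo))  # -> {'aa'}
```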

This requires technology that we’re starting to see emerge with a small enough footprint when paired with the compute power we have in endpoints today. 

Finally, as the "network" (which means the infrastructure as well as the "extrastructure" delivered by services in the cloud) gains more intelligence and information-centric granularity, it will pick up some of the slack — at least from the perspective of sloughing off the low-hanging fruit by using similar concepts.

I am hopeful that as we gain more information-centric footholds, we won’t actually need to respond to every threat, but rather only to those that might impact the most important assets we seek to protect.

Ultimately the end-node is really irrelevant from a protection perspective as it should really be little more than a presentation facility; the information is what matters.  As we continue to make progress toward more resilient operating systems leveraging encryption and mutual authentication within communities of interest/trust, we’ll start to become more resilient and information assured.

The sharing of telemetry to allow these detective and preventative/protective capabilities to self-organize and perform intelligent offensive/evasive actions will evolve naturally as part of this process.

Mooooooo.

/Hoff

Really Interesting Blog Snippets I Don’t Have Time to Comment On…

December 20th, 2007 1 comment

I’m swamped right now and have about 30 tabs open in Mozilla referencing things I expected to blog about but simply haven’t had the time to.  Rather than bloat Mozilla’s memory consumption further and lose these, I figured I’d jot them down.

Yes, I should use any number of the services available to track these sorts of things for this very purpose, but I’m just old fashioned, I guess…

Perhaps you’ll find these snippets interesting, also.

Sadly I may not get around to blogging about many of these.  I’ve got a ton more from the emerging technology, VC and virtualization space, too. 

I don’t want to become another story summarizer, but perhaps I’ll use this format to cover things I can’t get to every week.

/Hoff

Categories: Uncategorized Tags:

BeanSec! Wednesday, December 19th – 6PM to ?

December 15th, 2007 No comments

This month’s BeanSec! will be held in a different location due to a facility booking at the usual spot: the Middlesex Lounge, 315 Massachusetts Avenue, Cambridge, MA 02139 (right down the street).


Yo!  BeanSec! is once again upon us.  Wednesday, December 19th, 2007.

BeanSec! is an informal meetup of information security professionals, researchers and academics in the Greater Boston area that meets the third Wednesday of each month.

I say again, BeanSec! is hosted the third Wednesday of every month.  Add it to your calendar.

Come get your grub on.  Lots of good people show up.  Really.

Unlike other meetings, you will not be expected to pay dues, “join up”, present a zero-day exploit, or defend your dissertation to attend.  Map to the Enormous Room in Cambridge.

Enormous Room: 567 Mass Ave, Cambridge 02139.  Look for the Elephant on the left door next to the Central Kitchen entrance.  Come upstairs.  We sit on the left hand side… (see above)

Don’t worry about being "late" because most people just show up when they can.  6:30 is a good time to aim for.  We’ll try and save you a seat.  There is a parking garage across the street and one block down, or you can try the streets (or take the T).

In case you’re wondering, we’re getting about 30-40 people on average per BeanSec!  Weld, 0Day and I have been at this for just over a year and without actually *doing* anything, it’s turned out swell.

We’ve had some really interesting people of note attend lately (I’m not going to tell you who…you’ll just have to come and find out.)  At around 9:00pm or so, the DJ shows up…as do the rather nice looking people from the Cambridge area, so if that’s your scene, you can geek out first and then get your thang on.

The food selection is basically high-end finger-food appetizers and the drinks are really good; an attentive staff and eclectic clientèle make the joint fun for people watching.  I’ll generally annoy you into participating somehow, even if it’s just fetching napkins. 😉

See you there.

/Hoff

Categories: BeanSec! Tags:

It’s On The Internet, It Must Be True!!

December 15th, 2007 5 comments


Case in point, here.

That is all.

/Hoff

Categories: Jackassery Tags:

Breaking News: Successful SCADA Attack Confirmed – Mogull Is pwned!

December 13th, 2007 31 comments

A couple of weeks ago, right after I wrote my two sets of 2008 (in)security predictions (here and here), Mogull informed me that he was penning an article for Dark Reading on how security predictions are useless.  He even sent me a rough draft to rub it in.

His Dark Reading article is titled "The Perils of Predictions – and Predicting Peril" which you can read here.  The part I liked best was, of course, the multiple mentions that some idiot was going to predict an attack on SCADA infrastructure:


Oh, and there is one specific prediction I’ll make for next year: Someone will predict a successful SCADA attack, and it won’t happen.  Until it does.

So, I’m obviously guilty as charged.  Yup, I predicted it.  Yup, I think it will happen.

In fact, it already has…

You see, Mogull is a huge geek and has invested large sums of money in his new home, outfitting it with a complete home automation system.  In reality, this home automation system is basically just a scaled-down version of a SCADA system (Supervisory Control and Data Acquisition): controlling sensors and integrating telemetry with centralized reporting and control…

Rich and I are always IM’ing and emailing one another, so a few days ago, before Rich left town for an international junket, I sent him a little email asking him to review something I was working on.  The email contained a link to my "trusted" website.

The page I sent him to was actually trojaned with the 0day POC code for the QT RTSP vulnerability from a couple of weeks ago.  I guess Rich’s Leopard ipfw rules need to be modified, because right after he opened it, the trojan executed and phoned home (to me), and I was able to open a remote shell on TCP/554 right to his Mac, which incidentally controls his home automation system.  I totally pwn his house.

So a couple of days ago, Rich went out of town and I waited patiently for the DR article to post.  Now that it’s up, I have exacted my revenge.

I must say that I think Rich’s choice of automation controllers was top-shelf, but I think I might have gone with a better hot tub controller because I seem to have confused it and now it will only heat to 73 degrees.

I also think he should have gone with better carpet.

I’m pretty sure his wife is going absolutely bonkers given the fact that the lights in the den keep blinking to the beat of a Lionel Richie song and the garage door opener keeps trying to attack the gardener.  I will let you know that I’m being a gentleman and not peeking at the CCTV images…much.

Let this be a lesson to you all.  When it comes to predicting SCADA attacks, don’t hassle the Hoff!

/Hoff

Categories: Punditry Tags:

Complexity: The Enemy of Security? Or, If It Ain’t Fixed, Don’t Break It…

December 12th, 2007 4 comments

When all you have is a hammer, everything looks like a nail…

A couple of days ago, I was concerned (here) that I had missed Don Weber’s point (here) regarding how he thinks solutions like UTM that consolidate multiple security functions into a single solution increase complexity and increase risk.

I was interested in more detail regarding Don’s premise for his argument, so I asked him for some substantiating background information before I responded:

The question I have for Don is simple: how is it that you’ve arrived at the conclusion that the consolidation and convergence of security functionality from multiple discrete products into a single-sourced solution adds "complexity" and leads to "increased risk?"

Can you empirically demonstrate this by giving us an example of where a single function security device that became a multiple function security product caused this complete combination of events to occur:

  1. Product complexity increased
  2. Led to a vulnerability that was exploitable and
  3. Increased "risk" based upon business impact and exposure

Don was kind enough to respond to my request with a rather lengthy post titled "The Perimeter Is Dead — Let’s Make It More Complex."  I knew that I wouldn’t get the example I wanted, but I did get what I expected.  I started to write a very detailed response but stopped when I realized a couple of important things in reading his post as well as many of the comments:

  • It’s clear that many folks simply don’t understand the underlying internal operating principles and architectures of security products on the market, and frankly for the most part they really shouldn’t have to.  However, if you’re going to start debating security architecture and engineering implementation of security software and hardware, it’s somewhat unreasonable to start generalizing and creating bad analogs about things you clearly don’t have experience with. 
     
  • Believe it or not, most security companies that create bespoke security solutions do actually hire competent product management and engineering staff with the discipline, processes and practices that result in just a *little* bit more than copy/paste integration of software.  There are always exceptions, but if this were SOP, how many of them would still be in business?
     
  • The FUD that vendors are accused of spreading to supposedly motivate consumers to purchase their products is sometimes outdone by the sheer lack of knowledge illustrated by the regurgitated drivel that is offered by people suggesting why these same products are not worthy of purchase. 

    In markets that have TAMs of $4+ Billion, either we’re all incompetent lemmings (to be argued elsewhere) or there are some compelling reasons for these products.  Sometimes it’s not solely security, for sure, but people don’t purchase security products with the expectations of being less secure with products that are more complex and put them more at risk.  Silliness.
     

  • I find it odd that the people who maintain that they must have diversity in their security solution providers gag when I ask them for proof that they have invested in multiple switch and router vendors across their entire enterprise, that they deliberately deploy critical computing assets on disparate operating systems and that they have redundancy for all critical assets in their enterprise…including themselves. 
     
  • It doesn’t make a lot of sense arguing about the utility, efficacy, usability and viability of a product with someone who has never actually implemented the solution they are arguing about and instead compares proprietary security products with a breadboard approach to creating a FrankenWall of non-integrated open source software on a common un-hardened Linux distro.
     
  • Using words like complexity and risk within a theoretical context that has no empirical data offered to back it up short of a "gut reaction" and some vulnerability advisories in generally-available open source software lacks relevancy and is a waste of electrons.

I have proof points, ROI studies, security assessment results to the code level, and former customer case studies that demonstrate that some of the most paranoid companies on the planet see fit to purchase millions of dollars worth of supposedly "complex risk-increasing" solutions like these…I can tell you that they’re not all lemmings.

Again, not all of those bullets are directed at Don specifically, but I sense we’re really just going to talk past one another on this point, and the emails I’m getting trying to privately debate it are agitating, to say the least.

Your beer’s waiting, but expect an arm wrestle before you get to take the first sip.

/Hoff

Don’t Hassle the Hoff: Recent Press & Podcast Coverage…

December 12th, 2007 4 comments

Here’s a rundown of some recent press and some podcast coverage on topics relevant to content on my blog:

/Hoff

Categories: Press Tags:

Consolidating Controls Causes Chaos and Certain Complexity?

December 10th, 2007 6 comments

Don Weber wrote a post last week describing his thoughts on the consolidation of [security] controls and followed it up with another today titled "Quit Complicating our Controls – UTM Remix" in which he suggests that the consolidation of controls delivers an end-state of additional "complexity" and "higher risk":

Of course I can see why people desire to integrate the technologies. 

  • It is more cost effective to have two or more technologies on one piece of hardware.
  • You only have to manage one box.
  • The controls can augment each other more effectively and efficiently (according to the advertising on the box).
  • Firewalls usually represent a choke point to external and potentially hostile environments.
  • Vendors can market it as the Silver Bullet (no relation to Gary McGraw’s podcast) of controls.
  • “The next-generation firewall will have greater blocking and visibility into types of protocols,” says Greg Young, research vice president for Gartner.
  • etc

Well, I have a problem with all of this. Why are we making our controls more complex?  Complexity leads to vulnerabilities. Vulnerabilities lead to exploits. Exploits lead to compromises. Compromises lead to loss.

…and:

Don’t get me wrong. I am all for developing new technologies that will allow organizations to analyze their traffic so that they get a better picture of what is traversing and exiting their networks. I just think they will be more effective if they are deployed so that they augment each other’s control measures instead of threatening them by increasing the risk through complexity. Controls should reduce risk, not increase it.

Don’s posts have touched on a myriad of topics I have very strong opinions on: complex simplicity, ("magical") risk, UTM and application firewalls.  I don’t agree with Don’s statements regarding any of them. That’s probably why he called me out.

The question I have for Don is simple: how is it that you’ve arrived at the conclusion that the consolidation and convergence of security functionality from multiple discrete products into a single-sourced solution adds "complexity" and leads to "increased risk?"

Can you empirically demonstrate this by giving us an example of where a single function security device that became a multiple function security product caused this complete combination of events to occur:

  1. Product complexity increased
  2. Led to a vulnerability that was exploitable and
  3. Increased "risk" based upon business impact and exposure

I’m being open-minded here; rather than try to address every corner case, I am eager to understand more of the background of Don’s position so I might respond accordingly.

/Hoff

WARNING: Tunneling Traffic Means Filtering On 5-Tuple Insufficient. Welcome to 1995!

December 8th, 2007 4 comments

[Image: “tunnels bad”]
…just to put your mind at ease, no, that’s not me.  I’m all about the boxers not briefs.  Now you know since you’ve all been wondering, I’m sure…

I really do appreciate it when people dedicate time, energy and expertise to making sure we’re all as informed as we can be about the potential for bad things to happen.  Case in point, hat tip to Mitchell for pointing us to just such a helpful tip from a couple of guys submitting a draft to the IETF regarding the evils of tunneled traffic.

Specifically, the authors are commenting on the "feature" called Teredo in which IPv6 is tunneled in UDP IPv4 datagrams.

Here’s the shocking revelation, sure to come as a complete surprise to anyone in IT/security today…if you only look at SrcIP, DstIP, SrcPort, DstPort and Protocol, you’ll miss the fact that nasty bits are traversing your networks in a tunneled/encapsulated death ray!

Seriously, welcome to 1995.  If your security infrastructure relies upon technology that doesn’t inspect "deeper" than the 5-tuple above, you’re surely already aware of this problem, as 90% of the traffic entering and leaving your network is probably "tunneled" within port 80/443.
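
For the curious, here’s a minimal sketch of the point: a 5-tuple filter sees only an IPv4 UDP flow, while a payload check reveals the encapsulated IPv6 packet.  This is a toy heuristic for illustration only; real Teredo traffic can also carry optional authentication and origin-indication headers that this ignores.

```python
# A sketch of looking one layer deeper than the 5-tuple: does this UDP
# payload parse as an encapsulated IPv6 packet? Toy heuristic only; real
# Teredo adds optional authentication/origin headers this code ignores.
import struct

TEREDO_PORT = 3544  # IANA-assigned UDP port for Teredo


def looks_like_encapsulated_ipv6(udp_payload: bytes) -> bool:
    if len(udp_payload) < 40:            # IPv6 fixed header is 40 bytes
        return False
    if udp_payload[0] >> 4 != 6:         # version nibble must be 6
        return False
    payload_len = struct.unpack("!H", udp_payload[4:6])[0]
    # declared IPv6 payload must fit in what was actually captured
    return 40 + payload_len <= len(udp_payload)


if __name__ == "__main__":
    # Minimal fake IPv6 header: version 6, zero-length payload,
    # next-header 59 ("no next header"), hop limit 64, zeroed addresses.
    fake_v6 = bytes([0x60, 0, 0, 0]) + struct.pack("!H", 0) + bytes([59, 64]) + bytes(32)
    print(looks_like_encapsulated_ipv6(fake_v6))                # True
    print(looks_like_encapsulated_ipv6(b"GET / HTTP/1.1\r\n"))  # False
```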

Here’s a practical example.  I stuck a Palo Alto Networks box in front of my home network as part of an evaluation I’m doing for a client.  Check out the application profile of the traffic leaving my network via my FIOS connection:
[Screenshot: Palo Alto Networks application profile of outbound traffic]

Check out that list of applications above.  Care to tell me how many of them are tunneled over/via/through port 80/443?  True, they’re not IPv6 in IPv4, but it’s really the same problem; obfuscating applications and protocols means you need to have much more precise fidelity and resolution in detecting what’s going through your firewall’s colander.

By the way, I’ve got stuff going through SSH port forwarding, in ICMP payloads, via SSL VPN, via IPSec VPN…can’t wait to see what happens when I shove ’em out using Fragrouter.
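
As a toy illustration of classifying by content rather than port: an SSH session announces itself with an identification banner no matter where it runs, and a TLS handshake has a recognizable first record.  The signatures below are deliberately simplistic assumptions, nothing like the fidelity a real application-identification engine needs.

```python
# A sketch of classifying a flow by what it says rather than the port it
# uses. These signatures are deliberately simplistic assumptions, far from
# what a real application-identification engine does.
def classify_payload(first_bytes: bytes) -> str:
    if first_bytes.startswith(b"SSH-"):
        return "ssh"    # RFC 4253 identification banner, on any port
    if first_bytes[:1] == b"\x16" and first_bytes[1:2] == b"\x03":
        return "tls"    # TLS/SSLv3 handshake record header
    if first_bytes.split(b" ", 1)[0] in (b"GET", b"POST", b"HEAD", b"PUT"):
        return "http"
    return "unknown"


if __name__ == "__main__":
    print(classify_payload(b"SSH-2.0-OpenSSH_4.7"))   # 'ssh', even on 443
    print(classify_payload(b"\x16\x03\x01\x00\x2f"))  # 'tls'
    print(classify_payload(b"GET / HTTP/1.0\r\n"))    # 'http'
```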

I’m all for raising awareness, but does this really require an IETF draft to update the Teredo specification?

/Hoff


Categories: Uncategorized Tags:

The Seesaw CISO…Changing Places But Similar Faces…

December 8th, 2007 1 comment

…from geek to business speak…

Dennis Fisher has a nice writeup over at the SearchSecurity Security Bytes Blog about the changing role and reporting structure of the CISO.

Specifically, Dennis notes that he was surprised by the number of CISOs who recently told him that they no longer report to the CIO and aren’t a part of IT at all.  Moreover, these same CISOs noted that the skillset and focus are also changing from a technical to a business role:

In the last few months I’ve been hearing more and more from CEOs, CIOs and CSOs about the changing role of the CSO (or CISO, depending on your org chart) in the enterprise. In the past, the CSO has nearly always been a technically minded person who has risen through the IT ranks and then made the jump to the executive ranks. That lineage sometimes got in the way when it came time to deal with other upper managers who typically had little or no technical knowledge and weren’t interested in the minutiae of authentication schemes, NAC and unified threat management. They simply wanted things to work and to avoid seeing the company’s name in the papers for a security breach.

But that seems to be changing rather rapidly. Last month I was on a panel in Chicago with Howard Schmidt, Lloyd Hession, the CSO of BT Radianz, and Bill Santille, CIO of Uline, and the conversation quickly turned to the ways in which the increased focus on risk management in enterprises has forced CSOs to adapt and expand their skill sets. A knowledge of IDS, firewalls and PKI is not nearly enough these days, and in some cases is not even required to be a CSO. One member of the audience said that the CSO position in his company is rotated regularly among senior managers, most of whom have no technical background and are supported by a senior IT staff member who serves as CISO. The CSO slot is seen as a necessary stop on the management circuit, in other words. Several other CSOs in the audience said that they no longer report to the CIO and are not even part of the IT organization. Instead, they report to the CFO, the chief legal counsel, or in one case, the ethics officer.

I’ve talked about the fact that "security" should be a business function and not a technical one, and quite frankly, what Dennis is hearing has been a trend on the uptick for the last 3-4 years as "information security" becomes less relevant and managing risk becomes the focus.  To wit:

The number of organizations making this kind of change surprised me at the time. But, in thinking more about it, it makes a lot of sense, given that the daily technical security tasks are handled by people well below the CSO’s office. And many of the CSOs I know say they spend most of their time these days dealing with policy issues such as regulatory compliance. Patrick Conte, the CEO of software maker Agiliance, which put on the panel, told me that these comments fit with what he was hearing from his customers, as well. Some of this shift is clearly attributable to the changing priorities inside these enterprises. But some of it also is a result of the maturation of the security industry as a whole, which has translated into less of a focus on technology and more attention being paid to policies, procedures and other non-technical matters.

How this plays out in the coming months and years will be quite interesting. My guess is that as security continues to be absorbed into the larger IT and operations functions, the CSO’s job will continue to morph into more of a business role.

I still maintain that "compliance" is nothing more than a gap-filler.  As I said here, we have compliance as an industry [and measurement] today because we manage technology threats and vulnerabilities and don’t manage risk.  Compliance is actually nothing more than a way of forcing transparency and plugging a gap between the two.  For most, it’s the best they’ve got.

Once organizationally we’ve got our act together, compliance will become the floor, not the ceiling, and we’ll really start to see the "…maturation of the security industry as a whole."

/Hoff