Archive

Archive for the ‘Intrusion Detection’ Category

UPDATED: How the Hypervisor is Death By a Thousand Cuts to the Network IPS/NAC Appliance Vendors

January 18th, 2008 4 comments

Attention, NAC vendors who continue to barrage me via email and blog postings claiming I don’t understand NAC: you’re missing the point of this post, which basically confirms my point; you’re not paying attention and are being myopic.

I included NAC with IPS in this post to illustrate two things:

(1) Current NAC solutions aren’t particularly relevant when you have centralized and virtualized client infrastructure and

(2) If you understand the issues with offline VMs in the server world and what they mean for compliance and admission control on spin-up or when VMotioned, you could add a lot of value by adapting your products (if you’re software-based) to do offline VM conformance/remediation and help prevent VM sprawl and inadvertent non-compliant VM spin-up…

But you go ahead and continue with your strategy…you’re doing swell so far convincing the market of your relevance.

Now back to our regular programming…

— ORIGINAL POST —


From the "Out Of the Loop" Department…

Virtualization is causing IPS and NAC appliance vendors some real pain in the strategic planning department.  I’ve spoken to several product managers of IPS and NAC companies that are having to make some really tough bets regarding just what to do about the impact virtualization is having on their business.

They hem and haw initially about how it’s not really an issue, but two beers later, we’re speaking the same language…

Trying to align architecture, technology and roadmaps to the emerging tidal wave of consolidation that virtualization brings can be really hard.  It’s hard to differentiate where the host starts and the network ends…

In reality, firewall vendors are in exactly the same spot.  Check out this Network World article titled "Options seen lacking in firewall virtual server protection."  In today’s world, it’s almost impossible to distinguish a "firewall" from an "IPS" from a "NAC" device from a new-fangled "highly adaptive access control" solution (thanks, Vernier Autonomic Networks…)

It’s especially hard for vendors whose IPS/NAC software is tied to specialty hardware, unless of course all you care about is enforcing at the "edge" — wherever that is, and that’s the point.  The demarcation of those security domain diameters has now shrunk.  Significantly, and not just for servers, either.  With the resurgence of thin clients and new VDI initiatives, where exactly is the client/server boundary?

Prior to virtualization, network-based IPS/NAC vendors would pick arterial network junctions and either use a tap/SPAN port in an out-of-band deployment or slap a box inline between the "trusted" and "untrusted" sides of the links and that was that.  You’d be able to protect assets based on port, VLAN or IP address.

You obviously only see what traverses those pipes.  If you look at the problem I described here back in August of last year, where much of the communication takes place as intra-VM sessions on the same physical host and never actually touches the external "physical" network, you’ve lost precious visibility for detection, let alone prevention.

I think by now everyone recognizes how server virtualization impacts network and security architecture and basically provides four methods (potentially in combination) today for deploying security solutions:

  1. Keep all your host-based protections intact and continue to circle the wagons around the now-virtualized endpoint by installing software in the actual VMs.
  2. When available, take a security solution provider’s virtual appliance version of their product (if they have one), install it on a host as a VM, and configure the virtual networking within the vSwitch to provide the appropriate connectivity (see the sketch after this list).
  3. Continue to deploy physical appliances between the hosts and the network.
  4. Utilize a combination of host-based software and physical IPS/NAC hardware to provide off-load "switched" or "cut-through" policy enforcement between the two.
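
To make option 2 a bit more concrete: the virtual appliance only sees what its port group lets it see, so the interesting work is in the vSwitch plumbing.  Below is a minimal sketch (not any vendor’s actual deployment procedure), assuming a vSphere host reachable via the pyVmomi SDK; the vSwitch name, port group name and credentials are purely illustrative:

    # Create a promiscuous "monitor" port group for a virtual IPS/IDS appliance.
    # Assumes pyVmomi is installed and the target is a standard vSwitch.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    def add_monitor_portgroup(esx_host, user, pwd,
                              vswitch="vSwitch1", pg_name="IPS-Monitor"):
        ctx = ssl._create_unverified_context()        # lab use only
        si = SmartConnect(host=esx_host, user=user, pwd=pwd, sslContext=ctx)
        try:
            content = si.RetrieveContent()
            view = content.viewManager.CreateContainerView(
                content.rootFolder, [vim.HostSystem], True)
            host = view.view[0]                        # single host in this sketch

            spec = vim.host.PortGroup.Specification()
            spec.name = pg_name
            spec.vswitchName = vswitch
            spec.vlanId = 4095                         # 4095 = trunk all VLANs
            policy = vim.host.NetworkPolicy()
            policy.security = vim.host.NetworkPolicy.SecurityPolicy()
            policy.security.allowPromiscuous = True    # let the appliance's vNIC sniff
            spec.policy = policy

            host.configManager.networkSystem.AddPortGroup(portgrp=spec)
        finally:
            Disconnect(si)

The appliance’s monitoring vNIC would then be attached to that promiscuous port group, which is what gives it eyes on the intra-host traffic mentioned earlier.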

Each of these options has its pros and cons for both the vendor and the customer; trade-offs in manageability, cost, performance, coverage, scalability and resilience can be ugly.  Those that have both endpoint and network-based solutions are in a far more flexible place than those that do not.

Many vendors who have only physical appliance offerings are basically stuck adding 10Gb/s Ethernet connections to their boxes as they wait impatiently for options 5, 6 and 7 so they can "plug back in":

5.  Virtualization vendors will natively embed more security functionality within the hypervisor and continue integrating with trusted platform models.

6.  Virtualization vendors will allow third parties to substitute their own vSwitches as a function of the hypervisor.

7.  Virtualization vendors will allow security vendors to utilize a "plug-in" methodology and interact directly with the VMM via API.

These options would allow both endpoint software installed in the virtual machines as well as external devices to interact directly with the hypervisor with full purview of inter- and intra-VM flows, and not merely exist as a "bolted-on" function that lacks visibility and best-of-breed functionality.

While we’re on the topic of adding 10Gb/s connectivity, it’s important to note that having 10Gb/s appliances isn’t only about how many Gb/s of IPS traffic you can handle; it’s also about consolidating what would otherwise be potentially dozens of trunked LACP 1Gb/s Ethernet and FC connections pouring out of each host, to manage both the aggregate bandwidth and the issues driven by a segmented network.

So to get the coverage across a segmented network today, vendors are shipping their appliances with tons of ports — not necessarily because they want to replace access switches, but rather to enable coverage and penetration.

On the other hand, most of the pure-play software vendors today who say they are "virtualization enabled" really mean that their product installs as a virtual appliance (a VM) on a host.  The exposure these solutions have to traffic is entirely dependent upon how the vSwitches are configured.

…and it’s going to get even more hairy as the battle for the architecture of the DatacenterOS also rages.  The uptake of 10Gb/s Ethernet is also contributing to the mix as we see customers:

  • Upgrading from servers to blades
  • Moving from hosts and switches to clusters and fabrics
  • Evolving from hardware/software affinity to grid/utility computing
  • Transitioning from infrastructure to service layers in “the cloud”

Have you asked your IPS and NAC vendors who are hardware-bound how they plan to deal with this tsunami on their roadmaps within the next 12 months?  If not, grab a life jacket.

/Hoff

UPDATE:  It appears nobody uses trackbacks anymore, so I’m resorting to activity logs, Google alerts and stubbornness to tell when someone’s referencing my posts.  Here are some interesting references to this post:

…also, this is right on the money:

I think I’ll respond to them on my blog with a comment on theirs pointing back over…

Thinning the Herd & Chlorinating the Malware Gene Pool…

December 28th, 2007 3 comments

[Image: a swarming school of anchovies, via Science News]
Alan Shimel pointed us to an interesting article written by Matt Hines in his post here regarding the "herd intelligence" approach toward security.  He followed it up here. 

All in all, I think both the original article that Andy Jaquith was quoted in as well as Alan’s interpretations shed an interesting light on a problem solving perspective.

I’ve got a couple of comments on Matt and Alan’s scribbles.

I like the notion of swarms/herds.  The picture to the right from Science News describes the notion of "rapid response," wherein "mathematical modeling is explaining how a school of fish can quickly change shape in reaction to a predator."  If you’ve ever seen this in the wild or even in film, it’s an incredible thing to see in action.

It should then come as no surprise that I think that trying to solve the "security problem" is more efficiently performed (assuming one preserves the current construct of detection and prevention mechanisms) by distributing both functions and coordinating activity as part of an intelligent "groupthink" even when executed locally.  This is exactly what I was getting at in my "useful predictions" post for 2008:

Grid and distributed utility computing models will start to creep into security

A really interesting by-product of the "cloud compute" model is that as data, storage, networking, processing, etc. get distributed, so shall security.  In the grid model, one doesn’t care where the actions take place so long as service levels are met and the experiential and business requirements are delivered.  Security should be thought of in exactly the same way.

The notion that you can point to a physical box and say it performs function ‘X’ is so last Tuesday.  Virtualization already tells us this.  So, imagine if your security processing isn’t performed by a monolithic appliance but instead is contributed to in a self-organizing fashion wherein the entire ecosystem (network, hosts, platforms, etc.) all contribute in the identification of threats and vulnerabilities as well as function to contain, quarantine and remediate policy exceptions.

Sort of sounds like that "self-defending network" schpiel, but not focused on the network and with common telemetry and distributed processing of the problem.  Check out Red Lambda’s cGrid technology for an interesting view of this model.

This basically means that we should distribute the sampling, detection and prevention functions across the entire networked ecosystem, not just to dedicated security appliances; each of the end nodes should communicate using a standard signaling and telemetry protocol so that common threat, vulnerability and effective disposition can be communicated up and downstream to one another and one or more management facilities.
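
Just to make "standard signaling and telemetry" less hand-wavy, here is a toy sketch (emphatically not an existing protocol) of the kind of normalized event an end node might push to its peers and managers; the field names and collector address are hypothetical:

    # Toy end-node telemetry emitter: one normalized observation per datagram.
    import json
    import socket
    import time
    import uuid

    TELEMETRY_PEERS = [("10.0.0.25", 9999)]    # hypothetical collectors/peers

    def emit_event(node_id, category, indicator, confidence, disposition):
        """Broadcast one threat/vulnerability observation to the herd."""
        event = {
            "id": str(uuid.uuid4()),
            "ts": time.time(),
            "node": node_id,
            "category": category,         # e.g. "malware", "vuln", "policy-exception"
            "indicator": indicator,       # hash, CVE, flow tuple, etc.
            "confidence": confidence,     # 0.0 - 1.0, assessed locally
            "disposition": disposition,   # "observed", "quarantined", "remediated"
        }
        payload = json.dumps(event).encode()
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            for peer in TELEMETRY_PEERS:
                s.sendto(payload, peer)
        return event

The point isn’t the transport; it’s that every node speaks the same small vocabulary of indicator, confidence and disposition so the rest of the ecosystem can act on it.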

This is what Andy was referring to when he said:

As part of the effort, security vendors may also need to begin sharing more of that information with their rivals to create a larger network effect for thwarting malware on a global basis, according to the expert.

It may be hard to convince rival vendors to work together because of the perception that it could lessen differentiation between their respective products and services, but if the process clearly aids on the process of quelling the rising tide of new malware strains, the software makers may have little choice other than to partner, he said.

Secondly, Andy suggested that basically every end-node would effectively become its own honeypot:

"By turning every endpoint into a malware collector, the herd network effectively turns into a giant honeypot that can see more than existing monitoring networks," said Jaquith. "Scale enables the herd to counter malware authors’ strategy of spraying huge volumes of unique malware samples with, in essence, an Internet-sized sensor network."

I couldn’t agree more!  This is the sort of thing that I was getting at back in August when I was chatting with Lance Spitzner regarding using VMs for honeypots on distributed end nodes:

I clarified that what I meant was actually integrating a HoneyPot running in a VM on a production host as part of a standardized deployment model for virtualized environments.  I suggested that this would integrate into the data collection and analysis models the same way as a "regular" physical HoneyPot machine, but could utilize some of the capabilities built into the VMM/HV’s vSwitch to actually make a single HoneyPot span an entire collection of VMs on a single physical host.

Thirdly, the notion of information sharing across customers has been implemented cross-sectionally in industry verticals with the advent of the ISACs, such as the Financial Services Information Sharing and Analysis Center, which seeks to inform and ultimately leverage distributed information gathering and sharing to protect its subscribing members.  Generally-available services like Symantec’s DeepSight have also tried to accomplish similar goals.

Unfortunately, these offerings generally lack ubiquitous data-gathering and real-time enforcement capabilities.

As Matt pointed out in his article, gaining actionable intelligence on the monstrous amount of telemetric data from participating end nodes means that there is a need to really prune for false positives.  This is the trade-off between simply collecting data and actually applying intelligence at the end-node and effecting disposition. 

This requires technology that we’re starting to see emerge with a small enough footprint when paired with the compute power we have in endpoints today. 
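
As a purely hypothetical sketch of what that end-node intelligence might look like, the snippet below only escalates an indicator once it has been corroborated several times inside a time window; everything else stays local as noise:

    # Minimal end-node pruning: report only repeated sightings within a window.
    import time
    from collections import defaultdict, deque

    class LocalPruner:
        def __init__(self, threshold=3, window=300):
            self.threshold = threshold            # sightings needed before reporting
            self.window = window                  # seconds
            self.sightings = defaultdict(deque)   # indicator -> timestamps

        def observe(self, indicator):
            now = time.time()
            q = self.sightings[indicator]
            q.append(now)
            while q and now - q[0] > self.window:   # expire stale sightings
                q.popleft()
            return len(q) >= self.threshold         # True => worth escalating

    # pruner = LocalPruner()
    # if pruner.observe("sha256:..."):
    #     pass  # hand off to the telemetry emitter sketched above

Thresholds like these are exactly the kind of knob that trades raw collection volume against false positives.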

Finally, as the "network" (which means the infrastructure as well as the "extrastructure" delivered by services in the cloud) gains more intelligence and information-centric granularity, it will pick up some of the slack — at least from the perspective of sloughing off the low-hanging fruit by using similar concepts.

I am hopeful that as we gain more information-centric footholds, we shouldn’t actually be worried about responding to every threat but rather only those that might impact the most important assets we seek to protect. 

Ultimately the end-node is really irrelevant from a protection perspective as it should really be little more than a presentation facility; the information is what matters.  As we continue to make progress toward more resilient operating systems leveraging encryption and mutual authentication within communities of interest/trust, we’ll start to become more resilient and information assured.

The sharing of telemetry to allow these detective and preventative/protective capabilities to self-organize and perform intelligent offensive/evasive actions will evolve naturally as part of this process.

Mooooooo.

/Hoff

On Bandwidth and Botnets…

October 3rd, 2007 No comments

An interesting story in this morning’s New York Times titled "Unlike U.S., Japanese Push Fiber Over Profit" talked about Japan’s long-term investment efforts to build the world’s first all-fiber national network and how Japan leads the world’s other industrialized nations, including the U.S., in low-cost, high-speed services centered around Internet access.  Check out this illustration:

[Illustration: 2007 international broadband comparison graphic from the article]
The article states that approximately 8 million Japanese subscribe to fiber-enabled service offerings that provide performance at roughly 30 times that of a corresponding xDSL offering.

For about $55 a month, subscribers have access to up to 100Mb/s download capacity.

France Telecom is rumored to be rolling out services that offer 2.5Gb/s downloads!

I have Verizon FiOS, which is delivered via fiber to my home, and subscribe at a 20Mb/s download tier.

What I find very interesting about the emergence of this sort of service is that if you look at a typical consumer’s machine, it’s not well hardened, not monitored and usually easily compromised.  At this rate, the bandwidth of some of these compromise-ready consumers’ home connections is eclipsing that of mid-tier ISPs!

This is even more true, judging by anecdotal evidence, of online gamers, who are typically also P2P filesharing participants and early adopters of new shiny kit — it’s a Bot Herder’s dream come true.

At xDSL speeds of a few Mb/s, a couple of infected machines participating in a targeted, synchronized fanning DDoS attack can easily take down a corporate network connected to the Internet via a DS3 (45Mb/s).  Imagine what a botnet of a couple of 60Mb/s-connected endpoints could do — how about a couple of thousand?  Hundreds of thousands?
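
The back-of-the-envelope math is sobering.  A quick, idealized calculation (ignoring upstream caps, asymmetric links and amplification; the numbers are purely illustrative):

    # How many saturating bots fill a DS3, and what fat-pipe botnets aggregate to.
    DS3_MBPS = 45.0

    def bots_to_swamp(per_bot_mbps, target_mbps=DS3_MBPS):
        return max(1, int(-(-target_mbps // per_bot_mbps)))   # ceiling division

    for speed in (3, 20, 60, 100):                 # xDSL vs. fiber-class endpoints
        print(speed, "Mb/s bots to swamp a DS3:", bots_to_swamp(speed))

    for count in (1000, 100000):                   # modest fiber-connected botnets
        print(count, "bots at 60 Mb/s =", count * 60 / 1000.0, "Gb/s")

Fifteen slow DSL bots, or a single fiber endpoint, saturates a DS3; a hundred thousand 60Mb/s endpoints works out to roughly 6Tb/s of raw flood capacity.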

This is great news for some, as this sort of capacity will be economically beneficial to cyber-criminals because it reduces the exposure risk of Botnet Herders; they don’t have to infect nearly the same number of machines to deliver exponentially higher attack yields given the size of the pipes.  Scary.

I’d suggest that the lovely reverse DNS entries that service providers use to annotate logical hop connectivity will be even more freely used to target these high-speed users; you know, like (fictional):

bigass20MbpsPipe.vzFIOS-05.bstnma.verizon-gni.net (7x.y4.9z.1)
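
How little work that targeting takes is worth illustrating; the pattern list and the address below are made up, but the technique is nothing more than a PTR lookup:

    # Hypothetical sketch: does the reverse DNS entry advertise a fat consumer pipe?
    import re
    import socket

    FAT_PIPE_HINTS = re.compile(r"fios|ftth|fibe|gpon|100mb", re.IGNORECASE)

    def looks_like_fat_pipe(ip):
        try:
            hostname, _, _ = socket.gethostbyaddr(ip)    # PTR lookup
        except socket.herror:
            return None
        return hostname if FAT_PIPE_HINTS.search(hostname) else None

    # looks_like_fat_pipe("203.0.113.10")    # documentation-range example address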

As an interesting anecdote from the service provider perspective, the need for "Clean Pipes" becomes even more important, and the providers will be even more financially motivated to prevent abuse of their backbone long-hauls by infected machines.

This, in turn, will drive the need for much more intelligent, higher-throughput infrastructure and security service layers to mitigate the threat, which is forcing folks to take a very hard look at how they architect their networks and apply security.

/Hoff

A Play on Negroponte’s OLPC. I present “OHPC” – One Honeypot per Computer…

August 29th, 2007 1 comment

I was catching up with an old friend the other day, and in chatting with Lance Spitzner, we got to talking about virtualization and Honeypots.  Lance, as you no doubt already know, is one of the ringleaders of the Honeynet Project whose charter is the following:

The Honeynet Project is a non-profit (501c3) volunteer, research organization dedicated to improving the security of the Internet at no cost to the public.  All of our work is released as OpenSource and we are firmly committed to the ideals of OpenSource.  Our goal, simply put, is to make a difference.  We accomplish this goal in the following three ways.

Awareness
We raise awareness of the threats and vulnerabilities that exist in the Internet today.  Many individuals and organizations do not realize they are a target, nor understand who is attacking them, how, or why.  We provide this information so people can better understand they are a target, and understand the basic measures they can take to mitigate these threats.  This information is provided through our Know Your Enemy series of papers.

Information
For those who are already aware and concerned, we provide details to better secure and defend your resources.  Historically, information about attackers has been limited to the tools they use.  We provide critical additional information, such as their motives in attacking, how they communicate, when they attack systems and their actions after compromising a system.  We provide this service through our Know Your Enemy whitepapers and our Scan of the Month challenges.

Tools
For organizations interested in continuing their own research about cyber threats, we provide the tools and techniques we have developed.  We provide these through our Tools Site.

Look for an upcoming Take5 Interview with Lance shortly.

We were chatting about the application of Honeypots within a virtualized environment and how, for detection purposes, one might integrate them into virtual environments.  Lance brought up the point that the Honeynet Project already talks about the deployment of virtualized Honeypots, and the excellent new book by Provos and Holz titled "Virtual Honeypots: From Botnet Tracking to Intrusion Detection" covers utilizing virtualization and Honeynets.

I clarified that what I meant was actually integrating a HoneyPot running in a VM on a production host as part of a standardized deployment model for virtualized environments.  I suggested that this would integrate into the data collection and analysis models the same way as a "regular" physical HoneyPot machine, but could utilize some of the capabilities built into the VMM/HV’s vSwitch to actually make a single HoneyPot span an entire collection of VMs on a single physical host.
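
For the sake of discussion, the guest side of such a deployment could be as dumb as a low-interaction listener sitting on ports the production workloads never legitimately expose; anything that touches them is interesting by definition.  A rough, hypothetical sketch (ports and logging are illustrative, and a real deployment would feed the Honeynet-style collection and analysis pipeline rather than stdout):

    # Minimal low-interaction decoy listener for an OHPC guest VM.
    import json
    import socket
    import threading
    import time

    DECOY_PORTS = (2323, 4445, 33890)    # stand-ins for telnet/SMB/RDP decoys

    def log_touch(port, peer):
        record = {"ts": time.time(), "dst_port": port,
                  "src": peer[0], "src_port": peer[1]}
        print(json.dumps(record))        # stand-in for shipping to a collector

    def listen(port):
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", port))
        srv.listen(5)
        while True:
            conn, peer = srv.accept()    # any connection to a decoy port is suspect
            log_touch(port, peer)
            conn.close()

    if __name__ == "__main__":
        for p in DECOY_PORTS:
            threading.Thread(target=listen, args=(p,), daemon=True).start()
        while True:
            time.sleep(60)

The interesting part, per the paragraph above, is the vSwitch-level wiring that steers otherwise-unused addresses and ports on the host toward that one guest, which is where the VMM’s virtual networking capabilities would come in.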

He seemed intrigued by this slightly different perspective.

We’ve seen some pretty interesting discussions both pro and con for production Honeypots in the last couple of weeks.  First there was this excellent write-up by InfoWorld’s Roger Grimes, which prompted an "operational yeah, but…" from LonerVamp’s blog.

So, with the hopes that this will actually turn into a discussion, Lance said he was going to bring this up internally within the HN Project forums, but I wanted to raise it here.

I’d be very interested in discussing how folks perceive the notion of OHPC and whether you’d consider deploying one as a VM on each virtualized host machine you put into production.  If so, why?  If not, why not?

/Hoff

Categories: Intrusion Detection, Virtualization Tags: