Archive

Archive for the ‘Intrusion Prevention’ Category

UPDATED: How the Hypervisor is Death By a Thousand Cuts to the Network IPS/NAC Appliance Vendors

January 18th, 2008 4 comments

Attention NAC vendors who continue to barrage me via email/blog postings claiming I don’t understand NAC:  You’re missing the point of this post, which basically confirms my point; you’re not paying attention and are being myopic.

I included NAC with IPS in this post to illustrate two things:

(1) Current NAC solutions aren’t particularly relevant when you have centralized and virtualized client infrastructure and

(2) If you understand the issues with offline VMs in the server world and what that means for compliance and admission control on spin-up or when VMotioned, you could add a lot of value by adapting your products (if you’re software-based) to do offline VM conformance/remediation and help prevent VM sprawl and inadvertent non-compliant VM spin-up…

But you go ahead and continue with your strategy…you’re doing swell so far convincing the market of your relevance.

Now back to our regular programming…

— ORIGINAL POST —


From the "Out Of the Loop" Department…

Virtualization is causing IPS and NAC appliance vendors some real pain in the strategic planning department.  I’ve spoken to several product managers of IPS and NAC companies that are having to make some really tough bets regarding just what to do about the impact virtualization is having on their business.

They hem and haw initially about how it’s not really an issue, but 2 beers later, we’re speaking the same language…

Trying to align architecture, technology and roadmaps to the emerging tidal wave of consolidation that virtualization brings can be really hard.  It’s hard to differentiate where the host starts and the network ends…

In reality, firewall vendors are in exactly the same spot.  Check out this Network World article titled "Options seen lacking in firewall virtual server protection."  In today’s world, it’s almost impossible to distinguish a "firewall" from an "IPS" from a "NAC" device from a new-fangled "highly adaptive access control" solution (thanks, Vernier Autonomic Networks…)

It’s especially hard for vendors whose IPS/NAC software is tied to specialty hardware, unless of course all you care about is enforcing at the "edge" — wherever that is, and that’s the point.  The demarcation of those security domain diameters has now shrunk.  Significantly, and not just for servers, either.  With the resurgence of thin clients and new VDI initiatives, where exactly is the client/server boundary?

Prior to virtualization, network-based IPS/NAC vendors would pick arterial network junctions and either use a tap/SPAN port in an out-of-band deployment or slap a box inline between the "trusted" and "untrusted" sides of the links and that was that.  You’d be able to protect assets based on port, VLAN or IP address.

You obviously only see what traverses those pipes.  If you look at the problem I described here back in August of last year, where much of the communication takes place as intra-VM sessions on the same physical host that never actually touch the external "physical" network, you’ve lost precious visibility for detection, let alone prevention.
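If it helps to make the blind spot concrete, here’s a trivial sketch (hypothetical host and VM names, nothing vendor-specific) of why a SPAN/tap on the physical switch never sees VM-to-VM chatter that stays on the same host’s vSwitch:

```python
# Hypothetical flow records: (src_vm, dst_vm, host_of_src, host_of_dst)
flows = [
    ("web-vm", "db-vm",  "esx01", "esx01"),  # intra-host: stays on the vSwitch
    ("web-vm", "app-vm", "esx01", "esx02"),  # inter-host: crosses the physical uplink
    ("app-vm", "db-vm",  "esx02", "esx01"),  # inter-host
]

def visible_to_physical_tap(flow):
    """A SPAN/tap on the physical switch only sees traffic that leaves a host."""
    _, _, src_host, dst_host = flow
    return src_host != dst_host

seen   = [f for f in flows if visible_to_physical_tap(f)]
missed = [f for f in flows if not visible_to_physical_tap(f)]

print(f"tap sees {len(seen)} of {len(flows)} flows; blind to {len(missed)} intra-host flow(s)")
```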

I think by now everyone recognizes how server virtualization impacts network and security architecture and basically provides four methods (potentially in combination) today for deploying security solutions:

  1. Keep all your host based protections intact and continue to circle the wagons around the now virtualized endpoint by installing software in the actual VMs
  2. When available, take a security solution provider’s virtual appliance version of their product (if they have one), install it on a host as a VM and configure the virtual networking within the vSwitch to provide the appropriate connectivity.
  3. Continue to deploy physical appliances between the hosts and the network
  4. Utilize a combination of host-based software and physical IPS/NAC hardware to provide off-load "switched" or "cut-through" policy enforcement between the two.

Each of these options has its pros and cons for both the vendor and the customer; trade-offs in manageability, cost, performance, coverage, scalability and resilience can be ugly.  Those that have both endpoint and network-based solutions are in a far more flexible place than those that do not.
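For illustration only, here’s one way to lay those trade-offs side by side; the attributes are my shorthand characterizations of the four options above, not benchmarks or vendor claims:

```python
# Illustrative trade-off matrix for the four deployment options above;
# the ratings are subjective placeholders, not measurements.
options = {
    "1. host agents in each VM":         {"coverage": "per-VM",   "perf_cost": "per-VM CPU",   "sees_intra_vm": True},
    "2. virtual security appliance":     {"coverage": "per-host", "perf_cost": "shares host",  "sees_intra_vm": True},
    "3. physical appliance at the edge": {"coverage": "per-link", "perf_cost": "dedicated HW", "sees_intra_vm": False},
    "4. host software + offload HW":     {"coverage": "hybrid",   "perf_cost": "split",        "sees_intra_vm": True},
}

for name, attrs in options.items():
    print(f"{name:36s} intra-VM visibility: {attrs['sees_intra_vm']}")
```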

Many vendors who have only physical appliance offerings are basically stuck adding 10Gb/s Ethernet connections to their boxes as they wait impatiently for options 5, 6 and 7 so they can "plug back in":

5.  Virtualization vendors will natively embed more security functionality within the hypervisor and continue integrating with trusted platform models

6.  Virtualization vendors will allow third parties to substitute their own vSwitches as a function of the hypervisor

7. Virtualization vendors will allow security vendors to utilize a "plug-in" methodology and interact directly with the VMM via API

These options would allow both endpoint software installed in the virtual machines as well as external devices to interact directly with the hypervisor with full purview of inter and intra-VM flows and not merely exist as a "bolted-on" function that lacks visibility and best-of-breed functionality.

While we’re on the topic of adding 10Gb/s connectivity, it’s important to note that having 10Gb/s appliances isn’t always about how many Gb/s of IPS traffic you can handle, but also about consolidating what would otherwise be potentially dozens of trunked LACP 1Gb/s Ethernet and FC connections pouring out of each host, in order to manage both the aggregate bandwidth and the issues driven by a segmented network.

So to get the coverage across a segmented network today, vendors are shipping their appliances with tons of ports — not necessarily because they want to replace access switches, but rather to enable coverage and penetration.
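The back-of-the-envelope math is simple enough; the numbers below are purely illustrative, but they show why port consolidation matters as much as raw inspection throughput:

```python
# Back-of-envelope: aggregate uplink capacity an inline appliance must terminate
# when each virtualized host trunks several 1 Gb/s NICs (numbers are illustrative).
hosts         = 16    # virtualized hosts behind the appliance
nics_per_host = 6     # trunked 1 Gb/s Ethernet links per host (LACP bundle)
gbps_per_nic  = 1

aggregate_gbps  = hosts * nics_per_host * gbps_per_nic
ports_at_1gbps  = hosts * nics_per_host
ports_at_10gbps = -(-aggregate_gbps // 10)   # ceiling division

print(f"aggregate bandwidth: {aggregate_gbps} Gb/s")
print(f"1 Gb/s ports needed: {ports_at_1gbps}, 10 Gb/s ports needed: {ports_at_10gbps}")
```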

On the other hand, most of the pure-play software vendors today who say they are "virtualization enabled" really mean that their product installs as a virtual appliance on a VM on a host.  The exposure these solutions have to traffic is entirely dependent upon how the vSwitches are configured.

…and it’s going to get even more hairy as the battle for the architecture of the DatacenterOS also rages.  The uptake of 10Gb/s Ethernet is also contributing to the mix as we see customers:

  • Upgrading from servers to blades
  • Moving from hosts and switches to clusters and fabrics
  • Evolving from hardware/software affinity to grid/utility computing
  • Transitioning from infrastructure to service layers in “the cloud”

Have you asked your IPS and NAC vendors who are hardware-bound how they plan to deal with this tsunami on their roadmaps within the next 12 months?  If not, grab a lifejacket.

/Hoff

UPDATE:  It appears nobody uses trackbacks anymore, so I’m resorting to activity logs, Google alerts and stubbornness to tell when someone’s referencing my posts.  Here are some interesting references to this post:

…also, this is right on the money:

I think I’ll respond to them on my blog with a comment on theirs pointing back over…

Thin Clients: Does This Laptop Make My Ass(ets) Look Fat?

January 10th, 2008 11 comments

Juicy Fat Assets, Ripe For the Picking…

So here’s an interesting spin on de/re-perimeterization…if people think we cannot achieve and cannot afford to wait for secure operating systems, secure protocols and self-defending information-centric environments but need to "secure" their environments today, I have a simple question supported by a simple equation for illustration:

For the majority of mobile and internal users in a typical corporation who use the basic set of applications:

  1. Assume a company that:
    …fits within the 90% of those who still have data centers, isn’t completely outsourced/off-shored for IT and supports a remote workforce that uses Microsoft OS and the usual suspect applications and doesn’t plan on utilizing distributed grid computing and widespread third-party SaaS
  2. Take the following:
    Data Breaches.  Lost Laptops.  Non-sanitized corporate hard drives on eBay.  Malware.  Non-compliant asset configurations.  Patching woes.  Hardware failures.  Device Failure.  Remote Backup issues.  Endpoint Security Software Sprawl.  Skyrocketing security/compliance costs.  Lost Customer Confidence.  Fines.  Lost Revenue.  Reduced budget.
  3. Combine With:
    Cheap Bandwidth.  Lots of types of bandwidth/access modalities.  Centralized Applications and Data. Any Web-enabled Computing Platform.  SSL VPN.  Virtualization.  Centralized Encryption at Rest.  IAM.  DLP/CMP.  Lots of choices to provide thin-client/streaming desktop capability.  Offline-capable Web Apps.
  4. Shake Well, Re-allocate Funding, Streamline Operations and "Security"…
  5. You Get:
    Less Risk.  Less Cost.  Better Control Over Data.  More "Secure" Operations.  Better Resilience.  Assurance of Information.  Simplified Operations. Easier Backup.  One Version of the Truth (data.)

I really just don’t get why we continue to deploy and are forced to support remote platforms we can’t protect, allow our data to inhabit islands we can’t control and at the same time admit the inevitability of disaster while continuing to spend our money on solutions that can’t possibly solve the problems.

If we’re going to be information centric, we should take the first rational and reasonable steps toward doing so.  Until the operating systems are more secure and the data can self-describe and cause the compute and network stacks to "self-defend," why do we continue to focus on the endpoint?  It’s a waste of time.

If we can isolate and reduce the number of avenues of access to data and leverage dumb presentation platforms to do it, why aren’t we?

…I mean besides the fact that an entire industry has been leeching off this mess for decades…


I’ll Gladly Pay You Tuesday For A Secure Solution Today…

The technology exists TODAY to centralize the bulk of our most important assets and allow our workforce to accomplish their goals and the business to function just as well (perhaps better) without the need for data to actually "leave" the data centers in whose security we have already invested so much money.

Many people are already doing that with their servers through the adoption of virtualization.  Now they need to do it with their clients.

The only reason we’re now going absolutely stupid and spending money on securing endpoints in their current state is because we’re CAUSING (not just allowing) data to leave our enclaves.  In fact with all this blabla2.0 hype, we’ve convinced ourselves we must.

Hogwash.  I’ve posted on the consumerization of IT where companies are allowing their employees to use their own compute platforms.  How do you think many of them do this?

Relax, Dude…Keep Your Firewalls…

In the case of centralized computing and streamed desktops to dumb/thin clients, the "perimeter" still includes our data centers and security castles/moats, but also encapsulates a streamed, virtualized, encrypted, and authenticated thin-client session bubble.  There’s no need to worry about the endpoint; it’s nothing more than a flickering display with a keyboard/mouse.

Let your kid use Limewire.  Let Uncle Bob surf pr0n.  Let wifey download spyware.  If my data and applications don’t live on the machine and all the clicks/mouseys are just screen updates, what do I care?

Yup, you can still use a screen scraper or a camera phone to use data inappropriately, but this is where balancing risk comes into play.  Let’s keep the discussion within the 80% of reasonable factored arguments.  We’ll never eliminate 100% and we don’t have to in order to be successful.

Sure, there are exceptions and corner cases where data *does* need to leave our embrace, but we can eliminate an entire class of problem if we take advantage of what we have today and stop this endpoint madness.

This goes for internal corporate users who are chained to their desks and not just mobile users.

What’s preventing you from doing this today?

/Hoff

Thinning the Herd & Chlorinating the Malware Gene Pool…

December 28th, 2007 3 comments

[Image: a swarming school of anchovies, via Science News]
Alan Shimel pointed us to an interesting article written by Matt Hines in his post here regarding the "herd intelligence" approach toward security.  He followed it up here. 

All in all, I think both the original article that Andy Jaquith was quoted in as well as Alan’s interpretations shed an interesting light on a problem solving perspective.

I’ve got a couple of comments on Matt and Alan’s scribbles.

I like the notion of swarms/herds.  The picture from Science News describes the notion of "rapid response," wherein "mathematical modeling is explaining how a school of fish can quickly change shape in reaction to a predator."  If you’ve ever seen this in the wild or even in film, it’s an incredible thing to see in action.

It should then come as no surprise that I think that trying to solve the "security problem" is more efficiently performed (assuming one preserves the current construct of detection and prevention mechanisms) by distributing both functions and coordinating activity as part of an intelligent "groupthink" even when executed locally.  This is exactly what I was getting at in my "useful predictions" post for 2008:

Grid and distributed utility computing models will start to creep into security

A really interesting by-product of the "cloud compute" model is that as data, storage, networking, processing, etc. get distributed, so shall security.  In the grid model, one doesn’t care where the actions take place so long as service levels are met and the experiential and business requirements are delivered.  Security should be thought of in exactly the same way.

The notion that you can point to a physical box and say it performs function ‘X’ is so last Tuesday.  Virtualization already tells us this.  So, imagine if your security processing isn’t performed by a monolithic appliance but instead is contributed to in a self-organizing fashion wherein the entire ecosystem (network, hosts, platforms, etc.) all contribute in the identification of threats and vulnerabilities as well as function to contain, quarantine and remediate policy exceptions.

Sort of sounds like that "self-defending network" schpiel, but not focused on the network and with common telemetry and distributed processing of the problem.

Check out Red Lambda’s cGrid technology for an interesting view of this model.

This basically means that we should distribute the sampling, detection and prevention functions across the entire networked ecosystem, not just to dedicated security appliances; each of the end nodes should communicate using a standard signaling and telemetry protocol so that common threat, vulnerability and effective disposition can be communicated up and downstream to one another and one or more management facilities.
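As a sketch of what I mean by a common signaling/telemetry format (the schema below is completely made up for illustration; it is not a reference to any actual standard), every node in the herd could emit something like this:

```python
import json, socket, time, uuid

def make_telemetry_event(indicator, disposition, confidence):
    """Build a threat/telemetry event in a simple, shared schema so that any
    node in the 'herd' -- host, switch, or appliance -- can emit and consume it.
    The schema here is invented for illustration, not an actual standard."""
    return {
        "event_id":    str(uuid.uuid4()),
        "reported_by": socket.gethostname(),
        "timestamp":   time.time(),
        "indicator":   indicator,        # e.g. a file hash or suspicious destination
        "disposition": disposition,      # "observed" | "quarantined" | "remediated"
        "confidence":  confidence,       # 0.0 - 1.0, the local node's own assessment
    }

event = make_telemetry_event("sha256:deadbeef...", "quarantined", 0.85)
print(json.dumps(event, indent=2))       # ship this upstream to peers / management
```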

This is what Andy was referring to when he said:

As part of the effort, security vendors may also need to begin sharing more of that information with their rivals to create a larger network effect for thwarting malware on a global basis, according to the expert.

It may be hard to convince rival vendors to work together because of the perception that it could lessen differentiation between their respective products and services, but if the process clearly aids on the process of quelling the rising tide of new malware strains, the software makers may have little choice other than to partner, he said.

Secondly, Andy suggested that basically every end-node would effectively become its own honeypot:

"By
turning every endpoint into a malware collector, the herd network
effectively turns into a giant honeypot that can see more than existing
monitoring networks," said Jaquith. "Scale enables the herd to counter
malware authors’ strategy of spraying huge volumes of unique malware
samples with, in essence, an Internet-sized sensor network."

I couldn’t agree more!  This is the sort of thing that I was getting at back in August when I was chatting with Lance Spitzner regarding using VM’s for honeypots on distributed end nodes:

I clarified that what I meant was actually integrating a HoneyPot running in a VM on a production host as part of a standardized deployment model for virtualized environments.  I suggested that this would integrate into the data collection and analysis models the same way as a "regular" physical HoneyPot machine, but could utilize some of the capabilities built into the VMM/HV’s vSwitch to actually virtualize a single HoneyPot across an entire collection of VMs on a single physical host.

Thirdly, the notion of information sharing across customers has been implemented cross-sectionally in industry verticals with the advent of the ISACs, such as the Financial Services Information Sharing and Analysis Center, which seeks to inform and ultimately leverage distributed information gathering and sharing to protect its subscribing members.  Generally-available services like Symantec’s DeepSight have also tried to accomplish similar goals.

Unfortunately, these offerings generally lack the capacity to garner ubiquitous data gathering and real-time enforcement capabilities.

As Matt pointed out in his article, gaining actionable intelligence on the monstrous amount of telemetric data from participating end nodes means that there is a need to really prune for false positives.  This is the trade-off between simply collecting data and actually applying intelligence at the end-node and effecting disposition. 

This requires technology that we’re starting to see emerge with a small enough footprint when paired with the compute power we have in endpoints today. 
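A crude example of that end-node intelligence, assuming a purely hypothetical alert format: suppress the one-off noise locally and only ship indicators that keep showing up:

```python
from collections import Counter

def prune_local_alerts(raw_alerts, min_count=3):
    """Apply cheap local intelligence at the end node: only report indicators
    seen repeatedly, so the central collector isn't flooded with one-off noise.
    Thresholding by count is just one illustrative heuristic among many."""
    counts = Counter(alert["indicator"] for alert in raw_alerts)
    return [indicator for indicator, n in counts.items() if n >= min_count]

raw = [{"indicator": "evil.example.com"}] * 4 + [{"indicator": "oneoff.example.net"}]
print(prune_local_alerts(raw))   # ['evil.example.com'] -- the one-off is suppressed
```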

Finally, as the "network" (which means the infrastructure as well as the "extrastructure" delivered by services in the cloud) gains more intelligence and information-centric granularity, it will pick up some of the slack — at least from the perspective of sloughing off the low-hanging fruit by using similar concepts.

I am hopeful that as we gain more information-centric footholds, we shouldn’t actually be worried about responding to every threat but rather only those that might impact the most important assets we seek to protect. 

Ultimately the end-node is really irrelevant from a protection perspective as it should really be little more than a presentation facility; the information is what matters.  As we continue to make progress toward more resilient operating systems leveraging encryption and mutual authentication within communities of interest/trust, we’ll start to become more resilient and information assured.

The sharing of telemetry to allow these detective and preventative/protective capabilities to self-organize and perform intelligent offensive/evasive actions will evolve naturally as part of this process.

Mooooooo.

/Hoff

On Bandwidth and Botnets…

October 3rd, 2007 No comments

An interesting story in this morning’s New York Times titled "Unlike U.S., Japanese Push Fiber Over Profit" talked about Japan’s long term investment efforts to build the world’s first all-fiber national network and how Japan leads the world’s other industrialized nations, including the U.S., in low-cost, high speed services centered around Internet access.  Check out this illustration:

[Illustration: 2007 broadband speed/cost comparison graphic]
The article states that approximately 8 million Japanese subscribe to fiber-enabled service offerings that provide performance at roughly 30 times that of a corresponding xDSL offering.

For about $55 a month, subscribers have access to up to 100Mb/s download capacity.

France Telecom is rumored to be rolling out services that offer 2.5Gb/s downloads!

I have Verizon FIOS which is delivered via fiber to my home and subscribe at a 20Mb/s download tier.

What I find very interesting about the emergence of this sort of service is that if you look at a typical consumer’s machine, it’s not well hardened, not monitored and usually easily compromised.  At this rate, the bandwidth of some of these compromise-ready consumers’ home connections is eclipsing that of mid-tier ISPs!

This is even more true, based on anecdotal evidence, of online gamers who are typically also P2P filesharing participants and early adopters of new shiny kit — it’s a Bot Herder’s dream come true.

At xDSL speeds of a few Mb/s, a couple of infected machines as participants in a targeted synchronized fanning DDoS attack can easily take down a corporate network connected to the Internet via a DS3 (45Mb/s.)  Imagine what a botnet of a couple of 60Mb/s connected endpoints could do — how about a couple of thousand?  Hundreds of thousands?
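Counting only raw upstream bandwidth (real attacks need far less than full saturation thanks to amplification and state exhaustion), the rough arithmetic looks something like this; the figures are illustrative:

```python
# Rough, illustrative arithmetic only: how many compromised consumer connections
# it takes to saturate a DS3, counting nothing but raw upstream bandwidth.
ds3_mbps      = 45    # typical corporate Internet uplink
xdsl_up_mbps  = 2     # "a few Mb/s" class consumer connection
fiber_up_mbps = 60    # the high-speed consumer tier discussed above

def bots_to_saturate(target_mbps, per_bot_mbps, headroom=2.0):
    """Hosts needed to flood a link, with a crude 2x factor for overhead/headroom."""
    return int(target_mbps * headroom / per_bot_mbps) + 1

print("xDSL bots to saturate a DS3: ", bots_to_saturate(ds3_mbps, xdsl_up_mbps))
print("fiber bots to saturate a DS3:", bots_to_saturate(ds3_mbps, fiber_up_mbps))
```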

This is great news for some, as this sort of capacity will be economically beneficial to cyber-criminals because it reduces the exposure risk of Botnet Herders; they don’t have to infect nearly the same number of machines to deliver exponentially higher attack yields given the size of the pipes.  Scary.

I’d suggest that the lovely reverse DNS entries that service providers use to annotate logical hop connectivity will be even more freely used to target these high-speed users; you know, like (fictional):

bigass20MbpsPipe.vzFIOS-05.bstnma.verizon-gni.net (7x.y4.9z.1)
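To show how little work that targeting takes, here’s a toy filter over PTR names using the fictional entry above; the keyword list is my guess at the sort of hints such records leak, nothing more:

```python
import re

# Keywords that tend to show up in fiber/high-bandwidth subscriber PTR records
# (guessed for illustration; the hostname below is the post's fictional example).
HIGH_BW_HINTS = re.compile(r"fios|ftth|fibe|20mbps|50mbps|100mbps", re.IGNORECASE)

def looks_high_bandwidth(ptr_name):
    """Crude classifier: does a reverse-DNS name advertise a fat pipe?"""
    return bool(HIGH_BW_HINTS.search(ptr_name))

print(looks_high_bandwidth("bigass20MbpsPipe.vzFIOS-05.bstnma.verizon-gni.net"))  # True
print(looks_high_bandwidth("dyn-pool-12.dsl.example.net"))                        # False
```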

As an interesting anecdote from the service provider perspective, the need for "Clean Pipes" becomes even more important, and the providers will be even more financially motivated to prevent abuse of their backbone long-hauls by infected machines.

This, in turn, will drive the need for much more intelligent, higher-throughput infrastructure and security service layers to mitigate the threat, which is forcing folks to take a very hard look at how they architect their networks and apply security.

/Hoff

HyperJackStacking? Layers of Chewy VMM Goodness — the BLT of Security Models

August 27th, 2007 1 comment

So Mogull is back on the bench and I’m glad to see him blogging again. 

As I type this, I’m listening to James Blunt’s  new single "1973" which is unfortunately where Rich’s timing seems to be on this topic.  ‘Salright though.  Can’t blame him.  He’s been out scouting the minors for a while, so being late to practice is nothing to be too wound up about.

<If you can’t tell, I’m being sarcastic.  I only wish that Rich was when he told me that his favorite TexMex place in his hometown is called the "Pink Taco."  That’s all I’m going to say about that…>

The notion of the HyperJackStack (Hypervisor Jacking & Stacking) is actually a problem set that has been discussed at length, and in the continuum of these discussions it came up quite a while ago.

To put it bluntly, I believe the discussion — for right or wrong — stepped over this naughty little topic months ago in lieu of working from the bottom up for the purpose of exposing fundamental architectural deficiencies (or at least their potential) in the core of virtualization technology.  This is an argument that parallels dissecting a BLT sandwich…you’re approaching the center of a symmetric stack, so which end you start at is almost irrelevant.

The good/bad VMM/HV problem has really been relegated to a push-pin on the to-do board of all of the virtualization vendors, and this particular problem has been framed by said vendors to apparently be solved first operationally from the management plane and THEN dealt with from the security perspective.

So Rich argues that after boning up on Joanna and Thom’s research, they’re arguing completely the wrong case for the dangers of virtualized rootkits.  Instead of worrying about undetectability of this or that — pills and poultry be damned — one should be focused on establishing the relative disposition of *any* VMM/Hypervisor running in/on a host:

Problem is, they’re looking at the wrong problem. I will easily concede that detecting virtualization is always possible, but that’s not the real problem. Long-term virtualization will be normal, not an exception, so detecting if you’re virtualized won’t buy you anything. The bigger problem is detecting a malicious hypervisor, either the main hypervisor or maybe some wacky new malicious hypervisor layered on top of the trusted hypervisor.

To Rich’s credit, I think that this is a huge problem and one that deserves to be solved.  That does not mean that I think one is the "right" versus "wrong" problem to solve, however.  Nor does it mean this hasn’t been discussed.  I’ve talked about it many times already.  Maybe not as eloquently…

The flexibility of virtualization is what provides the surface expansion of vectors for threat; you can spin up, move or kill a VM across an enterprise with a point-click.  So the first thing to do before trying to determine if a VMM/HV is malicious is to detect its presence and layering in the first place…this is where Thom/Joanna’s research really does make sense.

You’re approaching this from a different direction, is all.
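For what it’s worth, the "detect presence first" half of the problem is the easy, evadable half; a crude, Linux-only sketch (standard sysfs/procfs breadcrumbs, nothing clever, and exactly the sort of check a malicious VMM could lie about) looks like this:

```python
from pathlib import Path

def crude_vm_presence_check():
    """Very crude, Linux-only heuristics for 'am I running on a hypervisor?'
    (DMI vendor strings and the CPU 'hypervisor' flag). This is the easy,
    evadable half of the problem -- detecting presence, not maliciousness."""
    hints = []
    dmi = Path("/sys/class/dmi/id/sys_vendor")
    if dmi.exists():
        vendor = dmi.read_text().strip()
        if any(v in vendor for v in ("VMware", "QEMU", "Xen", "Microsoft", "innotek")):
            hints.append(f"DMI vendor: {vendor}")
    cpuinfo = Path("/proc/cpuinfo")
    if cpuinfo.exists() and "hypervisor" in cpuinfo.read_text():
        hints.append("CPUID hypervisor flag set")
    return hints

print(crude_vm_presence_check() or "no obvious hypervisor artifacts")
```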

Thom responded here, and I have to agree with his overall posture; the notion of putting hooks into the VMM/HV to allow for "external" detection mechanisms solely for the sake of VMM/HV rootkit detection is unlikely given the threat.  However, we are already witness to the engineered capacity to allow for "plug-ins" such as Blue Lane’s that function alongside the HV/VMM, and there’s nothing saying one couldn’t adapt a similar function for this sort of detection (and/or prevention) as a value-add.

Ultimately though, I think that the point of response boils down to the definition of the mechanisms used in the detection of a malicious VMM/HV.  I ask you Rich, please define a "malicious" VMM/HV from one steeped in goodness. 

This sounds like in practice, it will come down to yet another iteration of the signature-driven IPS circle jerk to fingerprint/profile disposition.  We’ll no doubt see anomaly and behavioral analysis used here, and then we’ll have hashing, memory firewalls, etc…it’s going to be the Hamster Wheel all over again.  For the same reason we have trouble with validating security and compliance state for anything more than the cursory checks @ 30K feet today, you’ll face the same issue with virtualization — only worse.
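And yes, the hashing flavor of that Hamster Wheel boils down to something like the sketch below; the path and the known-good digest are placeholders, and a VMM that is already lying to you can of course lie about this too:

```python
import hashlib

# Placeholder path and digest for illustration; a real measured-launch chain
# (TPM/TXT-style) measures far more than one file, and does so before boot.
KNOWN_GOOD = {"/boot/hypervisor.bin": "replace-with-known-good-sha256"}

def measure(path):
    """Hash a binary in 8KB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(path):
    """Compare the measured hash against the whitelist; a missing file fails closed."""
    try:
        return measure(path) == KNOWN_GOOD.get(path)
    except OSError:
        return False

print(verify("/boot/hypervisor.bin"))
```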

I’ve got one for you…how about escaping from the entire VM "jail" entirely…Ed Skoudis over @ IntelGuardians just did an interview with the PaulDotCom boys on this topic…

I believe one must start from the bottom and work up; they’re trying to make up for the fact that this stuff wasn’t properly thought through in this iteration and are trying to expose the issue now. In fact, look at what Intel just announced today with vPro:

New in this product is Intel Trusted Execution Technology (Intel TXT, formerly codenamed LaGrande). Intel TXT protects data within virtualized computing environments, an important feature as IT managers are considering the adoption of new virtualization-enabled computer uses. Used in conjunction with a new generation of the company’s virtualization technology – Intel Virtualization Technology for Directed I/O – Intel TXT ensures that virtual machine monitors are less vulnerable to attacks that cannot be detected by today’s conventional software-security solutions. By isolating assigned memory through this hardware-based protection, it keeps data in each virtual partition protected from unauthorized access from software in another partition.

So no, Ptacek and Joanna aren’t fighting the "wrong" battle, they’re just fighting one that garners much more attention, notoriety, and terms like "HyperJackStack" than the one you’re singling out.  😉

/Hoff

P.S. Please invest in a better setup for your blog…I can’t trackback to you (you need Halo or something) and your comment system requires registration…bah!  Those G-Boys have you programmed… 😉

I Know It’s Been 4 Months Since I Said it, but “NO! DLP is (Still) NOT the Next Big Thing In Security!”

August 24th, 2007 5 comments

Nope.  Haven’t changed my mind.  Sorry.  Harrington stirred it up and Chuvakin reminded me of it.

OK, so way back in April, on the cusp of one of my normal rages against the (security) machine, I blogged how Data Leakage Protection (DLP) is doomed to be a feature and not a market.

I said the same thing about NAC, too.  Makin’ friends and influencin’ people.  That’s me!

Oh my, how the emails flew from the VPs of Marketing & Sales of the various "Flying V’s" (see below).  Good times, good times.

Here’s snippets of what I said:


Besides having the single largest collection of vendors that begin with the letter "V" in one segment of the security space (Vontu, Vericept, Verdasys, Vormetric…what the hell!?) it’s interesting to see how quickly content monitoring and protection functionality is approaching the inflection point of market versus feature definition.

The "evolution" of the security market marches on.

Known by many names, what I describe as content monitoring and protection (CMP) is also known as extrusion prevention, data leakage or intellectual property management toolsets.  I think for most, the anchor concept of digital rights management (DRM) within the Enterprise becomes glue that makes CMP attractive and compelling; knowing what and where your data is and how its distribution needs to be controlled is critical.

The difficulty with this technology is that, just like any other feature, it needs a delivery mechanism.  Usually this means yet another appliance; one that’s positioned either as close to the data as possible or right back at the perimeter in order to profile and control data based upon policy before it leaves the "inside" and goes "outside."

I made the point previously that I see this capability becoming a feature in a greater amalgam of functionality;  I see it becoming table stakes included in application delivery controllers, FW/IDP systems and the inevitable smoosh of WAF/XML/Database security gateways (which I think will also further combine with ADC’s.)

I see CMP becoming part of UTM suites.  Soon.

That being said, the deeper we go to inspect content in order to make decisions in context, the more demanding the requirements for the applications and "appliances" that perform this functionality become.  Making line speed decisions on content, in context, is going to be difficult to solve.

CMP vendors are making a push seeing this writing on the wall, but it’s sort of like IPS or FW or URL Filtering…it’s going to smoosh.

Websense acquired PortAuthority.  McAfee acquired Onigma.  Cisco will buy…

I Never Metadata I Didn’t Like…

I didn’t even bother to go into the difficulty and differences in classifying, administering, controlling and auditing structured versus unstructured data, nor did I highlight the differences between those solutions on the market who seek to protect and manage information from leaking "out" (the classic perimeter model) versus management of all content ubiquitously regardless of source or destination.  Oh, then there’s the whole encryption in motion, flight and rest thing…and metadata, can’t forget that…

Yet I digress…let’s get back to industry dynamics.  It seems that Uncle Art is bound and determined to make good on his statement that in three years there will be no stand-alone security companies left.  At this rate, he’s going to buy them all himself!

As we no doubt already know, EMC acquired Tablus. Forrester seems to think this is the beginning of the end of DLP as we know it.  I’m not sure I’d attach *that* much gloom and doom to this specific singular transaction, but it certainly makes my point:

August 20, 2007

EMC/RSA Drafts Tablus For Deeper Data-Centric Security
The Beginning Of The End Of The Standalone ILP Market

by Thomas Raschke
with Jonathan Penn, Bill Nagel, Caroline Hoekendijk

EXECUTIVE SUMMARY

EMC expects Tablus to play a key role in its information-centric security and storage lineup. Tablus’ balanced information leak prevention (ILP) offering will benefit both sides of the EMC/RSA house, boosting the latter’s run at the title of information and risk market leader. Tablus’ data classification capabilities will broaden EMC’s Infoscape beyond understanding unstructured data at rest; its structured approach to data detection and protection will provide a data-centric framework that will benefit RSA’s security offerings like encryption and key management. While holding a lot of potential, this latest acquisition by one of the industry’s heavyweights will require comprehensive integration efforts at both the technology and strategic level. It will also increase the pressure on other large security and systems management vendors to address their organization’s information risk management pain points. More importantly, it will be remembered as the turning point that led to the demise of the standalone ILP market as we know it today.

So Mogull will probably (still) disagree, as will the VP’s of Marketing/Sales working for the Flying-V’s who will no doubt barrage me with email again, but it’s inevitable.  Besides, when an analyst firm agrees with you, you can’t be wrong, right Rich!?

/Hoff

 

Quick Post of a Virtualization Security Presentation: “Virtualization and the End of Network Security As We Know It…”

August 20th, 2007 7 comments

"Virtualization and the End of Network Security As We Know It…
The feel good hit of the summer!"

Ye olde blog gets pinged quite a lot with searches and search engine redirects for folks looking for basic virtualization and virtualized security information. 

I had to drum up a basic high-level virtualization security presentation for the ISSA Charlotte Metro gathering back in April and I thought I may as well post it.

It’s in .PDF format.  If you want it in .PPT or Keynote, let me know, I’ll be glad to send it to you.  If it’s useful or you need some explanation regarding the visual slides, please get back to me and I’ll be more than glad to address anything you want.  I had 45 minutes to highlight how folks were and might deal with "securing virtualization by virtualizing security."

Yes, some of it is an ad for the company I used to work for who specializes in virtualized security service layers (Crossbeam) but I’m sure you can see how it is relevant in the preso.  You’ll laugh, you’ll cry, you’ll copy/paste the text and declare your own brilliance.  Here’s the summary slide so those of you who haven’t downloaded this yet will know the sheer genius you will be missing if you don’t:

[Image: summary slide from the ISSA virtualization presentation]

At any rate, it’s not earth shattering but does a decent job at the high level of indicating some of the elements regarding virtualized security.  I apologize for the individual animation slide page build-ups.  I’ll re-upload without them when I can get around to it. (Ed: Done.  I also uploaded the correct version. 😉)

Here’s the PDF.

/Hoff

(As of 11pm EST, 5.5 hours after posting, you lot had downloaded this over 150 times; by 1:45pm EST the next day, over 380.  Since there are no comments, it’s either the biggest piece of crap I’ve ever produced or you are all just so awe stricken you are unable to type.  Newby, you are not allowed to respond to this rhetorical question…)

Oh SNAP! VMware acquires Determina! Native Security Integration with the Hypervisor?

August 19th, 2007 12 comments

Hot on the trails of becoming gigagillionaires, the folks at VMware make my day with this.  Congrats to the folks @ Determina.

Methinks that for the virtualization world, it’s a very, very good thing.  A step in the right direction.

I’m going to prognosticate that this means that Citrix will buy Blue Lane or Virtual Iron next (see bottom of the post) since their acquisition of XenSource leaves them with the exact same problem that this acquisition for VMware tries to solve:

VMware Inc., the market leader in virtualization software, has acquired Determina Inc., a Silicon Valley maker of host intrusion prevention products.

…the security of virtualized environments has been something of an unknown quantity due to the complexity of the technology and the ways in which hypervisors interact with the host OS.  Determina’s technology is designed specifically to protect the OS from malicious code, regardless of the origin of the attack, so it would seem to be a sensible fit for VMware, analysts say.

In his analysis of the deal, Gartner’s MacDonald sounded many of the same notes. "By potentially integrating Memory Firewall into the ESX hypervisor, the hypervisor itself can provide an additional level of protection against intrusions. We also believe the memory protection will be extended to guest OSs as well: VMware’s extensive use of binary emulation for virtualization puts the ESX hypervisor in an advantageous position to exploit this style of protection," he wrote.

I’ve spoken a lot recently about how much I’ve been dreading the notion that security was doomed to repeat itself with the accelerated takeoff of server virtualization, since we haven’t solved many of the most basic security problem classes.  Malicious code is getting more targeted and more intelligent, and when you combine an emerging market using hot technology without an appropriate level of security…

Basically, my concerns have stemmed from the observation that if we can’t do a decent job protecting physically separate yet interconnected network elements with all the security fu we have, what’s going to happen when the "…network is the computer" (or vice versa)?  Just search for "virtualization" via the Lijit Widget above for more posts on this…

Some options for securing virtualized guest OSs in a VM are pretty straightforward:

  1. Continue to deploy layered virtualized security services across VLAN segments of which each VM is a member (via IPS’s, routers, switches, UTM devices…)
  2. Deploy software like Virtual Iron’s which looks like a third party vSwitch IPS on each VM
  3. Integrate something like Blue Lane’s ESX plug-in which interacts with and at the VMM level
  4. As chipset level security improves, enable it
  5. Deploy HIPS as part of every guest OS.

Each of these approaches has its own sets of pros and cons, and quite honestly, we’ll probably see people doing all five at the same time…layered defense-in-depth.  Ugh.
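To make the "pros and cons" hand-waving slightly less hand-wavy, here’s an illustrative way to think about what each of the five layers above still misses (my own shorthand, not a formal coverage model), which is exactly why people end up deploying several of them at once:

```python
# Illustrative only: given which of the five controls above are deployed,
# flag the visibility/coverage gaps each combination still leaves.
CONTROLS = {
    "vlan_layered_services": "misses intra-host, same-VLAN VM-to-VM traffic",
    "third_party_vswitch":   "covers only the hosts where it is installed",
    "vmm_level_plugin":      "depends on the hypervisor exposing the hooks",
    "chipset_protection":    "protects memory isolation, not traffic content",
    "guest_hips":            "blind to the hypervisor layer beneath it",
}

def remaining_gaps(deployed):
    return {name: gap for name, gap in CONTROLS.items() if name not in deployed}

for name, gap in remaining_gaps({"guest_hips", "vlan_layered_services"}).items():
    print(f"not deployed: {name:24s} -> {gap}")
```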

What was really annoying to me, however, is that it really seemed that in many cases, the VM solution providers were again expecting that we’d just be forced to bolt security ON TO our VM environments instead of BAKING IT IN.  This was looking like a sad reality.

I’ll get into details in another post about Determina’s solution, but I am encouraged by VMware’s acquisition of a security company which will be integrated into their underlying solution set.  I don’t think it’s a panacea, but quite honestly, the roadmap for solving these sorts of problems was blowing in the wind for VMware up until this point.

"Further, by
using the LiveShield capabilities, the ESX hypervisor could be used
‘introspectively’ to shield the hypervisor and guest OSs from attacks
on known vulnerabilities in situations where these have not yet been
patched. Both Determina technologies are fairly OS- and
application-neutral, providing VMware with an easy way to protect ESX
as well as Linux- and Windows-based guest OSs."

Quite honestly, I hoped they would have bought Blue Lane since the ESX Hypervisor is now going to be a crowded space for them…

We’ll see how well this gets integrated, but I smiled when I read this.

Oh, and before anyone gets excited, I’m sure it’s going to be 100% undetectable! 😉

/Hoff

I see your “More on Data Centralization” & Raise You One “Need to Conduct Business…”

June 19th, 2007 1 comment

Bejtlich continues to make excellent points regarding his view on centralizing data within an enterprise.  He cites the increase in litigation regarding inadequate eDiscovery investment and the increasing pressures amassed from compliance.

All good points, but I’d like to bring the discussion back to the point I was trying to make initially and here’s the perfect perch from which to do it.  Richard wrote:

"Christofer Hoff used the term "agile" several times in his good blog post. I think "agile" is going to be thrown out the window when corporate management is staring at $50,000 per day fines for not being able to produce relevant documents during ediscovery. When a company loses a multi-million dollar lawsuit because the judge issued an adverse inference jury instruction, I guarantee data will be centralized from then forward."

…how about when a company loses the ability to efficiently and effectively conduct business because they spend so much money and time on "insurance policies" against which a balanced view of risk has not been applied?  Oh, wait.  That’s called "information security." 😉

Fear.  Uncertainty.  Doubt.  Compliance.  Ugh.  Rinse, lather, repeat.

I’m not taking what you’re proposing lightly, Richard, but the notion of agility, time to market, cost transformation and enhancing customer experience are being tossed out with the bathwater here. 

Believe it or not, we have to actually have a sustainable business in order to "secure" it. 

It’s fine to be advocating Google Gears and all these other Web 2.0 applications and systems. There’s one force in the universe that can slap all that down, and that’s corporate lawyers. If you disagree, whom do you think has a greater influence on the CEO: the CTO or the corporate lawyer? When the lawyer is backed by stories of lost cases, fines, and maybe jail time, what hope does a CTO with plans for "agility" have?

But going back to one of your own mantras, if you bake security into your processes and SDLC in the first place, then the CEO/CTO/CIO and legal counsel will already have assessed the company’s position and balanced the risk scorecard to ensure that they have exercised the appropriate due care in the first place.

The uncertainty and horrors associated with the threat of punitive legal impacts have been, are, and will always be there…and they will continue to be exploited by those in the security industry to buy more stuff and justify a paycheck.

Given the business we’re in, it’s not a surprise that the perspective presented is very, very siloed and focused on the potential "security" outcomes of what happens if we don’t start centralizing data now; everything looks like a nail when you’re a hammer.

However, you still didn’t address the other two critical points I made previously:

  1. The underlying technology associated with decentralization of data and applications is at complete odds with the "curl up in a fetal position and wait for the sky to fall" approach
  2. The only reason we have security in the first place is to ensure survivability and availability of service — and make sure that we stay in business.  That isn’t really a technical issue at all, it’s a business one.  I find it interesting that you referenced this issue as the CTO’s problem and not the CIO.

As to your last point, I’m convinced that GE — with the resources, money and time it has to bear on a problem — can centralize its data and resources…they can probably get cold fusion out of a tuna fish can and a blow pop, but for the rest of us on planet Earth, we’re going to have to struggle along trying to cram all the ‘agility’ and enablement we’ve just spent the last 10 years giving to users back into the compliance bottle.

/Hoff

Really, There’s More to Security than Admission/Access Control…

June 16th, 2007 2 comments

Dr. Joseph Tardo over at the Nevis Networks Illuminations blog composed a reasonably well-balanced commentary regarding one or more of my posts in which I was waxing philosophical about my beliefs regarding keeping the network plumbing dumb and overlaying security as a flexible, agile, open and extensible services layer.

It’s clear he doesn’t think this way, but I welcome the discourse.  So let me make something clear:

Realistically, and especially in non-segmented flat networks, I think there are certain low-level security functions that will do well by being served up by switching infrastructure as security functionality commoditizes, but for the most part I’m not quite sure yet how or where I draw the line between utility and intelligence.  I do, however, think that NAC is one of those utility services.

I’m also unconvinced that access-grade, wiring closet switches are architected to scale in functionality, efficacy or performance to provide any more value or differentiation (other than port density) than the normal bolt-on appliances, which continue to cause massive operational and capital expenditure due to continued forklifts over time.  Companies like Nevis and Consentry quietly admit this too, which is why they have both "secure switches" AND appliances that sit on top of the network…

Joseph suggested he was entering into a religious battle in which he summarized many of the approaches to security that I have blogged about previously and I pointed out to him on his blog that this is exactly why I practice polytheism 😉 :

In case you aren’t following the religious wars going on in the security blogs and elsewhere, let me bring you up to date.

It goes like this. If you are in the client software business, then security has to be done in the endpoints and the network is just dumb "plumbing," or rather, it might as well be because you can’t assume anything about it. If you sell appliances that sit here and there in the network, the network sprouts two layers, with the "plumbing" part separated from the "intelligence." Makes sense, I guess. But if you sell switches and routers then the intelligence must be integrated in with the infrastructure. Now I get it. Or maybe I’m missing the point, what if you sell both appliances and infrastructure?

I believe that we’re currently forced to deploy in defense in depth due to the shortcomings of solutions today.  I believe the "network" will not and cannot deliver all the security required.  I believe we’re going to have to invest more in secure operating systems and protocols.  I further believe that we need to be data-centric in our application of security.  I do not believe in single-point product "appliances" that are fundamentally functionally handicapped.  As a delivery mechanism for security that matters across the network, I believe in this.

Again, the most important difference between what I believe and what Joseph points out above is that the normal class of "appliances" he’s trying to suggest I advocate simply aren’t what I advocate at all.  In fact, one might surprisingly confuse the solutions I do support as "infrastructure" — they look like high-powered switches with a virtualized blade architecture integrated into the solution.

It’s not an access switch, it’s not a single function appliance and it’s not a blade server and it doesn’t suffer from the closed proprietary single vendor’s version of the truth.  To answer the question, if you sell and expect to produce both secure appliances and infrastructure, one of them will come up short.   There are alternatives, however.

So why leave your endpoints, the ones that have all those vulnerabilities that created the security industry in the first place, to be hit on by bots, "guests," and anyone else that wants to? I don’t know about you, but I would want both something on the endpoint, knowing it won’t be 100% but better than nothing, and also something in the network to stop the nasty stuff, preferably before it even got in.

I have nothing to disagree with in the paragraph above — short of the example of mixing active network defense with admission/access control in the same sentence; I think that’s confusing two points.   Back to the religious debate as Joseph drops back to the "Nevis is going to replace all switches in the wiring closet" approach to security via network admission/access control:

Now, let’s talk about getting on the network. If the switches are just dumb plumbing they will blindly let anyone on, friend or foe, so you at least need to beef up the dumb plumbing with admission enforcement points. And you want to put malware sensors where they can be effective, ideally close to entry points, to minimize the risk of having the network infrastructure taken down. So, where do you want to put the intelligence, close to the entry enforcement points or someplace further in the bowels of the network where the dumb plumbing might have plugged-and-played a path around your expensive intelligent appliance?

That really depends upon what you’re trying to protect; the end point, the network or the resources connected to it.  Also, I won’t/can’t argue about wanting to apply access/filtering (sounds like IPS in the above example) controls closest to the client at the network layer.  Good design philosophy.   However, depending upon how segmented your network is, the types, value and criticality of the hosts in these virtual/physical domains, one may choose to isolate by zone or VLAN and not invest in yet another switch replacement at the access layer.

If the appliance is to be effective, it has to sit at a choke point and really be an enforcement point. And it has to have some smarts of its own. Like the secure switch that we make.

Again, that depends upon your definition of enforcement and applicability.  I’d agree that in flat networks, you’d like to do it at the port/host level, though replacing access switches to do so is usually not feasible in large networks given investments in switching architectures.  Typical fixed configuration appliances overlaid don’t scale, either.

Furthermore, depending upon your definition of what an enforcement zone and its corresponding diameter is (port, VLAN, IP Subnet) you may not care.  So putting that "appliance" in place may not be as foreboding as you wager, especially if it overlays across these boundaries satisfactorily.

We will see how long before these new-fangled switch vendors that used to be SSL VPN’s, that then became IPS appliances that have now "evolved" into NAC solutions, will become whatever the next buzzword/technology of tomorrow represents…especially now with Cisco’s revitalized technology refresh for "secure" access switches in the wiring closets.  Caymas, Array, and Vernier (amongst many) are perfect examples.

When it comes down to it, in the markets Crossbeam serves — and especially the largest enterprises — they are happy with their switches, they just want the best security choice on top of it provided in a consolidated, agile and scalable architecture to support it.

Amen.

/Hoff