Archive

Archive for the ‘DLP’ Category

Cloud Providers and Security “Edge” Services – Where’s The Beef?

September 30th, 2009 16 comments

Previously I wrote a post titled “Oh Great Security Spirit In the Cloud: Have You Seen My WAF, IPS, IDS, Firewall…” in which I described the challenges enterprises face when moving applications and services to the Cloud while trying to ensure parity in compensating controls, some of which are either not available or suffer from the “virtual appliance” conundrum (see the Four Horsemen presentation on issues surrounding virtual appliances).

Yesterday I had a lively discussion with Lori MacVittie about the notion of what she described as “edge” service placement of network-based WebApp firewalls in Cloud deployments.  I was curious about where the “edge” is in Cloud, but assuming it’s at the provider’s connection to the Internet, as Lori suggested, this brought up the arguments in the post above: how does one roll out compensating controls in Cloud?

The level of difficulty and the need to integrate controls (or any “infrastructure” enhancement) definitely depend upon the Cloud delivery model (SaaS, PaaS, and IaaS) chosen and the business problem being solved; SaaS offers the least extensibility from the perspective of deploying controls (you generally don’t have any access to do so) whilst IaaS allows a lot of freedom at the guest level.  PaaS is somewhere in the middle.  None of the models are especially friendly to integrating network-based controls not otherwise supplied by the provider, for what should be a pretty obvious reason — the network is abstracted.
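The spectrum above can be sketched as a simple table in code. This is purely illustrative — the model names are standard, but the per-model control lists are my own generalizations, not any provider's specification:

```python
# Hypothetical sketch: where a customer can realistically insert
# compensating controls, per Cloud delivery model. The control lists are
# illustrative assumptions, not a statement about any specific provider.
CONTROL_EXTENSIBILITY = {
    "SaaS": {
        "customer_can_deploy": [],  # little/no access below the application
        "must_rely_on_provider_for": ["WAF", "DLP", "IDP", "anti-spam/anti-virus"],
    },
    "PaaS": {
        "customer_can_deploy": ["app-layer filtering", "library-level crypto"],
        "must_rely_on_provider_for": ["network IDP", "host hardening"],
    },
    "IaaS": {
        "customer_can_deploy": ["guest firewall", "host IDS", "agent-based DLP"],
        "must_rely_on_provider_for": ["hypervisor introspection", "physical network controls"],
    },
}

def deployable(model: str, control: str) -> bool:
    """Return True if the customer can self-deploy the control in this model."""
    return control in CONTROL_EXTENSIBILITY[model]["customer_can_deploy"]

print(deployable("IaaS", "host IDS"))  # True
print(deployable("SaaS", "host IDS"))  # False
```

Note that in every model the network-based controls land in the "rely on the provider" column — which is exactly the gap discussed below.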

So here’s the rub: if MSSPs/ISPs/ASPs-cum-Cloud operators want to woo mature enterprise customers to use their services, they are leaving money on the table and failing to fulfill customer needs by not rolling out complementary security capabilities which lessen the compliance and security burdens of their prospective customers.

While many provide commoditized solutions such as anti-spam and anti-virus capabilities, more complex (but profoundly important) security services such as DLP (data loss/leakage prevention), WAF, Intrusion Detection and Prevention (IDP), XML Security, Application Delivery Controllers, VPNs, etc. should also be considered for roadmaps by these suppliers.

Think about it, if the chief concern in Cloud environments is security around multi-tenancy and isolation, giving customers more comfort besides “trust us” has to be a good thing.  If I knew where and by whom my data is being accessed or used, I would feel more comfortable.

Yes, it’s difficult to do properly and in many cases means the Cloud provider has to make a substantial investment in delivery platforms and management/support integration to get there.  This is why niche players who target specific verticals (especially those heavily regulated) will ultimately have the upper hand in some of these scenarios – it’s not socialist security where “good enough” is spread around evenly.  Services like these need to be configurable (SELF-SERVICE!) by the consumer.

An example? How about Google: where’s DLP integrated into the messaging/apps platforms?  Amazon AWS: where’s IDP integrated into the VMM for introspection?

I wrote a couple of interesting posts about this (that may show up in the automated related posts lists below):

My customers in the Fortune 500 complain constantly that the biggest providers they are being pressured to consider for Cloud services aren’t listening to these requests — or aren’t in a position to respond.

That’s bad for everyone.

So how about it? Are services like DLP, IDP, WAF integrated into your Cloud providers’ offerings something you’d like to see rather than having to add additional providers as brokers and add complexity and cost back into Cloud?

/Hoff

Jaquith: Data-Centric Security Requires Devolution, Not a Revolution

January 6th, 2009 1 comment

If I may be so bold as to call Andy Jaquith a friend, I'll do so as I welcome both his first research report and blog as an analyst for Forrester.

Andy's first topic — Data-Centric Security Requires Devolution, Not a Revolution — is a doozy, and an important one given the recent re-focus on information protection.  The notion of data-centric security has caused quite the stir over the last year with the maturation, consolidation and (some might say) commoditization of certain marketspaces (DLP) into larger mainstream security product suites.

I will admit that I did not spend the $350 to read Andy's research.  As much as I like to support the ever-turning wheels of the analyst sausage machine, I'm going to upgrade to Apple's newly-announced iLife/iWork '09 bundle instead.  Sorry, Andy.  I'll buy you that beer instead.

However, Andy wrote a great blog entry summarizing the research here:

All of the enterprise's data must be secured… that is obvious. Enterprises have been trying to do this for years with e-mail filtering, hard disk encryption, data leak prevention (DLP) and other technologies. Every few years, another hot technology emerges. But what's less obvious is that the accepted way of tackling the problem — making IT Security the primary responsible party — isn't necessarily the most effective way to do it.

In the report, I take the position that devolution of responsibilities from IT Security to business units is the most important success factor. I'd urge you to read the report for yourself. But in short: as long as data security is just "an IT thing," it's virtually certain that the most accountable parties (BUs) will be able to wash their hands of any responsibility. Depending on the organization, the centralized approach tends to lead to two scenarios:

(1) IT throws up its hands, saying "it's too hard!" — guaranteeing that data security problems breed like rabbits
(2) IT dials up the data controls so tight that end-users and business units rebel against or subvert the controls — leading to even worse problems


What's worse? No controls, or too many? The truth lies somewhere in between, and results vary widely depending on who's accountable: the boss you already know and have a relationship with, or an amorphous cost center whose workers don't know what you do all day. Your boss knows what work products are appropriate to protect, and what aren't. IT Security's role should be to supply the tools to enforce the business's wishes, not to operate them itself.

Want to secure enterprise data? Stop trying so hard, and devolve!

My only comments are that much like the X-Files, the truth is "out there."  It is most certainly somewhere in between as users and the business will always take the convenient path of least resistance and security will impose the iron fist. 

Securing information must be a cooperative effort that involves the broader adoption of pervasive discovery and classification capabilities across the entire information lifecycle.  The technology has to become as transparent as possible so that workflow isn't interrupted.  That's no easy task.
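A minimal sketch of what that transparent discovery/classification step looks like in practice — scan content for sensitive patterns and tag it, without interrupting the user. The patterns and labels here are illustrative assumptions only; real DLP engines use far richer fingerprinting than a pair of regexes:

```python
import re

# Illustrative sensitive-data patterns (a real product would use
# fingerprints, dictionaries, and statistical matching, not just regexes).
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def classify(text: str) -> set:
    """Return the set of sensitivity labels found in the text."""
    return {label for label, rx in PATTERNS.items() if rx.search(text)}

print(classify("Customer SSN 078-05-1120 on file"))  # {'ssn'}
print(classify("Meeting notes, nothing sensitive"))  # set()
```

The point of the passage above is that this kind of check has to run everywhere along the lifecycle (creation, motion, rest) and stay invisible to the user — the hard part is the pervasiveness, not the matching.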

Rich Mogull and I have been writing and presenting about this for quite some time, and we're making evolutionary progress, but not revolutionary progress.

To that point, I might have chosen a different by-line.  Instead of "devolution, not a revolution," I would suggest that perhaps "governed delegation, not regulation" might be appropriate, too.

Can't wait for that iLife/iWork bundle!

/Hoff

Is There a Difference Between Data LOSS and Data LEAKAGE Prevention?

June 7th, 2008 21 comments

I was reading Stuart King’s blog entry titled "Is Data Loss Prevention Really Possible?"

Besides a very interesting and reasonable question to ask, I was also intrigued by a difference I spotted between the title of his article and the first sentence in the body.

Specifically, in the title Stuart asked if "Data Loss Prevention [is] Really Possible?" but in the body he asked if it "…is really possible to prevent data leakage?"

In my opinion, data loss and data leakage are two different issues, albeit with some degree of subtlety. I’m interested in your position.

I will explain my opinion via an update here once folks comment, so as not to color the outcome.

What’s your opinion?  Loss versus leakage?  Talk amongst yourselves.

/Hoff

Categories: DLP Tags:

Endpoint Security vs. DLP? That’s Part Of the Problem…

March 31st, 2008 6 comments

Larry Walsh wrote something (Defining the Difference Between Endpoint Security and Data Loss Prevention) that sparked an interesting debate based upon a vendor presentation given to him on "endpoint security" by SanDisk.

SanDisk is bringing to market a set of high-capacity USB flash drives that feature built-in filesystem encryption as well as strong authentication and access control.  If the device gets lost with the data on it, it’s "safe and secure" because it’s encrypted.  They are positioning this as an "endpoint security" solution.

I’m not going to debate the merits/downsides of that approach because I haven’t seen their pitch, but suffice it to say, I think it’s missing a "couple" of pieces to solve anything other than a very specific set of business problems.

Larry’s dilemma stems from the fact that he maintains that this capability and functionality is really about data loss protection and doesn’t have much to do with "endpoint security" at all:

We debated that in my office for a few minutes. From my perspective, this solution seems more like a data loss prevention solution than endpoint security. Admittedly, there are many flavors of endpoint security. When I think of endpoint security, I think of network access control (NAC), configuration management, vulnerability management and security policy enforcement. While this solution is designed for the endpoint client, it doesn’t do any of the above tasks. Rather, it forces users to use one type of portable media and transparently applies security protection to the data. To me, that’s DLP.

In today’s market taxonomy, I would agree with Larry.  However, what Larry is struggling with is not really the current state of DLP versus "endpoint security," but rather the future state of converged information-centric governance.  He’s describing the problem that will drive the solution as well as the inevitable market consolidation to follow.

This is actually the whole reason Mogull and I are talking about the evolution of DLP as it exists today to a converged solution we call CMMP — Content Management, Monitoring and Protection. {Yes, I just added another M for Management in there…}

What CMMP represents is the evolved and converged end-state technology integration of solutions that today provide a point solution but "tomorrow" will be combined/converged into a larger suite of services.

Off the cuff, I’d expect that we will see at a minimum the following technologies being integrated to deliver CMMP as a pervasive function across the information lifecycle and across platforms in flight/motion and at rest:

  • Data leakage/loss protection (DLP)
  • Identity and access management (IAM)
  • Network Admission/Access Control (NAC)
  • Digital rights/Enterprise rights management (DRM/ERM)
  • Seamless encryption based upon "communities of interest"
  • Information classification and profiling
  • Metadata
  • Deep Packet Inspection (DPI)
  • Vulnerability Management
  • Configuration Management
  • Database Activity Monitoring (DAM)
  • Application and Database Monitoring and Protection (ADMP)
  • etc…

That’s not to say they’ll all end up as a single software install or network appliance, but rather a consolidated family of solutions from a few top-tier vendors who have coverage across the application, host and network space. 

If you were to look at any enterprise today struggling with this problem, they likely have or are planning to have most of the point solutions above anyway.  The difficulty is that they’re all from different vendors.  In the future, we’ll see larger suites from fewer vendors providing a more cohesive solution.

This really gives us the "cross domain information protection" that Rich talks about.

We may never achieve the end-state described above in its entirety, but it’s safe to say that the more we focus on the "endpoint" rather than the "information on the endpoint," the bigger the problem we will have.

/Hoff

The Walls Are Collapsing Around Information Centricity

March 10th, 2008 2 comments

Since Mogull and I collaborate quite a bit on projects and share many thoughts and beliefs, I wanted to make a couple of comments on his last post on Information Centricity and remind the audience at home of a couple of really important points.

Rich’s post was short and sweet regarding the need for Information-Centric solutions with some profound yet subtle guideposts:

For information-centric security to become a reality, in the long term it needs to follow the following principles:

  1. Information (data) must be self describing and defending.
  2. Policies and controls must account for business context.
  3. Information must be protected as it moves from structured to unstructured, in and out of applications, and changing business context.
  4. Policies must work consistently through the different defensive layers and technologies we implement.

I'm not convinced this is a complete list, but I'm trying to keep to my new philosophy of shorter and simpler. A key point that might not be obvious is that while we have self-defending data solutions, like DRM and label security, for success they must grow to account for business context. That's when static data becomes usable information.

Mike Rothman gave an interesting review of Rich’s post:


The Mogull just laid out your work for the next 10 years. You just probably don't know it yet. Yes, it's all about ensuring that the fundamental elements of your data are protected, however and wherever they are used. Rich has broken it up into 4 thoughts. The first one made my head explode: "Information (data) must be self-describing and defending."

Now I have to clean up the mess. Sure, things like DRM are a bad start, and have tarnished how we think about information-centric security, but you do have to start somewhere. The reality is this is a really long term vision of a problem where I'm not sure how you get from Point A to Point B. We all talk about the lack of innovation in security. And how the market just isn't exciting anymore. What Rich lays out here is exciting. It's also a really really really big problem. If you want a view of what the next big security company does, it's those 4 things. And believe me, if I knew how to do it, I'd be doing it – not talking about the need to do it.

The comments I want to make are three-fold:

  1. Rich is re-stating, and Mike's head is exploding around, the exact concepts that Information Survivability represents and the Jericho Forum trumpets in their Ten Commandments.  In fact, you can read all about that in prior posts I made on the subjects of the Jericho Forum, re-perimeterization, information survivability and information centricity.  I like this post on a process I call ADAPT (Applied Data and Application Policy Tagging) a lot.

    For reference, here are the Jericho Forum’s Ten Commandments. Please see #9:

    [Images: Jericho Forum Ten Commandments]

  2. As Mike alluded, DRM/ERM has received a bad rap because of how it's implemented — which has really left a sour taste in the mouths of the consumer consciousness.  As a business tool, it is the precursor of information-centric policy and will become the linchpin in how we will ultimately gain a foothold on solving the information resiliency/assurance/survivability problem.
  3. As to the innovation and dialog that Mike suggests is lacking in this space, I’d suggest he’s suffering from a bit of Shitake-ism (a-la mushroom-itis.)  The next generation of DLP solutions that are becoming CMP (Content Monitoring and Protection — a term I coined) are evolving to deal with just this very thing.  It’s happening.  Now.

    Further to that, I have been briefed by some very, very interesting companies that are in stealth mode who are looking to shake this space up as we speak.
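The ADAPT idea referenced in point 1 can be sketched in a few lines: data that carries its own classification and handling policy, so any enforcement layer can ask the data itself for a context-aware decision. All names below are mine, invented for illustration — this is not a real ADAPT API:

```python
from dataclasses import dataclass, field

# Hypothetical "self-describing data" sketch: the object bundles content
# with its classification and the business contexts allowed to use it.
@dataclass
class TaggedData:
    content: str
    classification: str                                  # e.g. "public", "confidential"
    allowed_contexts: set = field(default_factory=set)   # business contexts

    def permits(self, context: str) -> bool:
        """Any layer (host, network, app) can query the data's own policy."""
        return self.classification == "public" or context in self.allowed_contexts

doc = TaggedData("Q3 forecast", "confidential", {"finance"})
print(doc.permits("finance"))    # True
print(doc.permits("marketing"))  # False
```

The design point is Rich's principle #2 made concrete: the policy travels with the information, so business context (not network location) drives the decision.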

So, prepare for Information Survivability, increased Information Resilience and assurance.  Coming to a solution near you…

/Hoff

Thin Clients: Does This Laptop Make My Ass(ets) Look Fat?

January 10th, 2008 11 comments

Juicy Fat Assets, Ripe For the Picking…

So here’s an interesting spin on de/re-perimeterization…if people think we cannot achieve and cannot afford to wait for secure operating systems, secure protocols and self-defending information-centric environments but need to "secure" their environments today, I have a simple question supported by a simple equation for illustration:

For the majority of mobile and internal users in a typical corporation who use the basic set of applications:

  1. Assume a company that:
    …fits within the 90% of those who still have data centers, isn’t completely outsourced/off-shored for IT and supports a remote workforce that uses Microsoft OS and the usual suspect applications and doesn’t plan on utilizing distributed grid computing and widespread third-party SaaS
  2. Take the following:
    Data Breaches.  Lost Laptops.  Non-sanitized corporate hard drives on eBay.  Malware.  Non-compliant asset configurations.  Patching woes.  Hardware failures.  Device Failure.  Remote Backup issues.  Endpoint Security Software Sprawl.  Skyrocketing security/compliance costs.  Lost Customer Confidence.  Fines.  Lost Revenue.  Reduced budget.
  3. Combine With:
    Cheap Bandwidth.  Lots of types of bandwidth/access modalities.  Centralized Applications and Data. Any Web-enabled Computing Platform.  SSL VPN.  Virtualization.  Centralized Encryption at Rest.  IAM.  DLP/CMP.  Lots of choices to provide thin-client/streaming desktop capability.  Offline-capable Web Apps.
  4. Shake Well, Re-allocate Funding, Streamline Operations and "Security"…
  5. You Get:
    Less Risk.  Less Cost.  Better Control Over Data.  More "Secure" Operations.  Better Resilience.  Assurance of Information.  Simplified Operations. Easier Backup.  One Version of the Truth (data.)

I really just don’t get why we continue to deploy and are forced to support remote platforms we can’t protect, allow our data to inhabit islands we can’t control and at the same time admit the inevitability of disaster while continuing to spend our money on solutions that can’t possibly solve the problems.

If we’re going to be information centric, we should take the first rational and reasonable steps toward doing so. Until the operating systems are more secure and the data can self-describe and cause the compute and network stacks to “self-defend,” why do we continue to focus on the endpoint? It’s a waste of time.

If we can isolate and reduce the number of avenues of access to data and leverage dumb presentation platforms to do it, why aren’t we?

…I mean besides the fact that an entire industry has been leeching off this mess for decades…


I’ll Gladly Pay You Tuesday For A Secure Solution Today…

The technology exists TODAY to centralize the bulk of our most important assets and allow our workforce to accomplish their goals and the business to function just as well (perhaps better) without the need for data to actually "leave" the data centers in whose security we have already invested so much money.

Many people are already doing that with their servers via the adoption of virtualization.  Now they need to do it with their clients.

The only reason we’re now going absolutely stupid and spending money on securing endpoints in their current state is because we’re CAUSING (not just allowing) data to leave our enclaves.  In fact with all this blabla2.0 hype, we’ve convinced ourselves we must.

Hogwash.  I’ve posted on the consumerization of IT where companies are allowing their employees to use their own compute platforms.  How do you think many of them do this?

Relax, Dude…Keep Your Firewalls…

In the case of centralized computing and streamed desktops to dumb/thin clients, the "perimeter" still includes our data centers and security castles/moats, but also encapsulates a streamed, virtualized, encrypted, and authenticated thin-client session bubble.  Instead of worrying about the endpoint, it’s nothing more than a flickering display with a keyboard/mouse.

Let your kid use Limewire.  Let Uncle Bob surf pr0n.  Let wifey download spyware.  If my data and applications don’t live on the machine and all the clicks/mouseys are just screen updates, what do I care?

Yup, you can still use a screen scraper or a camera phone to use data inappropriately, but this is where balancing risk comes into play.  Let’s keep the discussion within the 80% of reasonable factored arguments.  We’ll never eliminate 100% and we don’t have to in order to be successful.

Sure, there are exceptions and corner cases where data *does* need to leave our embrace, but we can eliminate an entire class of problem if we take advantage of what we have today and stop this endpoint madness.

This goes for internal corporate users who are chained to their desks and not just mobile users.

What’s preventing you from doing this today?

/Hoff

Understanding & Selecting a DLP Solution…Fantastic Advice But Wholesale Misery in 10,000 Words or More…

November 6th, 2007 9 comments

If you haven’t been following Rich Mogull’s amazing writeup on how to "Understand and Select a Data Leakage Prevention (DLP) Solution," you’re missing one of the best combinations of market study, product dissection and consumer advice available on the topic, from The Man who covered the space at Gartner.

Here’s a link to the latest episode (part 7!) that you can use to work backwards from.

This is not a knock on the enormous amount of work Rich has done to educate us all; in fact, it’s probably one of the reasons he chose to write this magnum opus: this stuff is complicated, which explains why we’re still having trouble solving this problem…

If it takes 7 large blog posts and over 10,000 words to enable someone to make a reasonably educated decision on how to approach the purchase of one of these solutions, there are two possible reasons for this:

  1. Rich is just a detail-oriented, anal-retentive ex-analyst who does a fantastic job of laying out everything you could ever want to know about this topic given his innate knowledge of the space, or
  2. It’s a pie that ain’t quite baked.

I think the answer is "C – All of the above," and it’s absolutely no wonder this market feature has a cast of vendors who are shopping themselves to the highest bidder faster than you can say "TablusPortAuthorityOakleyOnigmaProvillaVontu."

Yesterday we saw the leader in this space (Vontu) finally submit to the giant Yellow Sausage Machine.

The sales cycle and adoption attach rate for this sort of product must
be excruciating if one must be subjected to the equivalent of the Old
Testament just to understand the definition and scope of the solution…as a consumer, I know I have a pain that needs amelioration in this category, but which one of these ointments is going to stop the itching?

I dig one of the first paragraphs in Part I which is probably the first clue we’re going to hit a slippery slope: 

The first problem in understanding DLP is figuring out what we’re actually talking about. The following names are all being used to describe the same market:

  • Data Loss Prevention/Protection
  • Data Leak Prevention/Protection
  • Information Loss Prevention/Protection
  • Information Leak Prevention/Protection
  • Extrusion Prevention
  • Content Monitoring and Filtering
  • Content Monitoring and Protection

And I’m sure I’m missing a few. DLP seems the most common term, and while I consider its life limited, I’ll generally use it for these posts for simplicity. You can read more about how I think of this progression of solutions here.

So you’ve got that goin’ for ya… ;)

In the overall evolution of the solution landscape, I think this iteration of the DLP/ILP/EP/CMF/CMP (!) solution sets raises the visibility of the need to make decisions on content in context and to focus on information centricity (data-centric "security" for the technologists) instead of the continued deployment of packet-filtering 5-tuple network colanders and host-based agent bloatscapes being foisted upon us.

More on the topic of Information Centricity and its relevance to Information Survivability soon.  I spent a fair amount of time talking about this as a source of disruptive innovation/technology during my keynote at the Information Security Decisions conference yesterday.

Great conversations were had afterwards with some *way* smart people on the topic, and I’m really excited to share them once I can digest the data and write it down.

/Hoff

(Image Credit: Stephen Montgomery)

I Know It’s Been 4 Months Since I Said it, but “NO! DLP is (Still) NOT the Next Big Thing In Security!”

August 24th, 2007 5 comments

Nope.  Haven’t changed my mind.  Sorry.  Harrington stirred it up and Chuvakin reminded me of it.

OK, so way back in April, on the cusp of one of my normal rages against the (security) machine, I blogged about how Data Leakage Protection (DLP) is doomed to be a feature and not a market.

I said the same thing about NAC, too.  Makin’ friends and influencin’ people.  That’s me!

Oh my, how the emails flew from the VPs of Marketing & Sales of the various "Flying V’s" (see below).  Good times, good times.

Here’s snippets of what I said:


Besides having the single largest collection of vendors that begin with the letter ‘V’ in one segment of the security space (Vontu, Vericept, Verdasys, Vormetric…what the hell!?) it’s interesting to see how quickly content monitoring and protection functionality is approaching the inflection point of market versus feature definition.

The "evolution" of the security market marches on.

Known by many names, what I describe as content monitoring and protection (CMP) is also known as extrusion prevention, data leakage or intellectual property management toolsets.  I think for most, the anchor concept of digital rights management (DRM) within the Enterprise becomes the glue that makes CMP attractive and compelling; knowing what and where your data is and how its distribution needs to be controlled is critical.

The difficulty with this technology is that, just like any other feature, it needs a delivery mechanism.  Usually this means yet another appliance; one that’s positioned either as close to the data as possible or right back at the perimeter in order to profile and control data based upon policy before it leaves the "inside" and goes "outside."

I made the point previously that I see this capability becoming a feature in a greater amalgam of functionality; I see it becoming table stakes included in application delivery controllers, FW/IDP systems and the inevitable smoosh of WAF/XML/Database security gateways (which I think will also further combine with ADCs.)

I see CMP becoming part of UTM suites.  Soon.

That being said, the deeper we go to inspect content in order to make decisions in context, the more demanding the requirements for the applications and "appliances" that perform this functionality become. Making line-speed decisions on content, in context, is going to be difficult to solve.

CMP vendors are making a push, seeing this writing on the wall, but it’s sort of like IPS or FW or URL Filtering…it’s going to smoosh.

Websense acquired PortAuthority.  McAfee acquired Onigma.  Cisco will buy…

I Never Metadata I Didn’t Like…

I didn’t even bother to go into the difficulty and differences in classifying, administering, controlling and auditing structured versus unstructured data, nor did I highlight the differences between those solutions on the market who seek to protect and manage information from leaking "out" (the classic perimeter model) versus management of all content ubiquitously regardless of source or destination.  Oh, then there’s the whole encryption in motion, flight and rest thing…and metadata, can’t forget that…

Yet I digress…let’s get back to industry dynamics.  It seems that Uncle Art is bound and determined to make good on his statement that in three years there will be no stand-alone security companies left.  At this rate, he’s going to buy them all himself!

As we no doubt already know, EMC acquired Tablus. Forrester seems to think this is the beginning of the end of DLP as we know it.  I’m not sure I’d attach *that* much gloom and doom to this specific singular transaction, but it certainly makes my point:

  August 20, 2007

EMC/RSA Drafts Tablus For Deeper Data-Centric Security
The Beginning Of The End Of The Standalone ILP Market

by Thomas Raschke, with Jonathan Penn, Bill Nagel, Caroline Hoekendijk

EXECUTIVE SUMMARY

EMC expects Tablus to play a key role in its information-centric security and storage lineup. Tablus’ balanced information leak prevention (ILP) offering will benefit both sides of the EMC/RSA house, boosting the latter’s run at the title of information and risk market leader. Tablus’ data classification capabilities will broaden EMC’s Infoscape beyond understanding unstructured data at rest; its structured approach to data detection and protection will provide a data-centric framework that will benefit RSA’s security offerings like encryption and key management. While holding a lot of potential, this latest acquisition by one of the industry’s heavyweights will require comprehensive integration efforts at both the technology and strategic level. It will also increase the pressure on other large security and systems management vendors to address their organization’s information risk management pain points. More importantly, it will be remembered as the turning point that led to the demise of the standalone ILP market as we know it today.

So Mogull will probably (still) disagree, as will the VPs of Marketing/Sales working for the Flying V’s, who will no doubt barrage me with email again, but it’s inevitable.  Besides, when an analyst firm agrees with you, you can’t be wrong, right Rich!?

/Hoff


Intellectual Property/Data Leakage/Content Monitoring & Protection – Another Feature, NOT a Market.

April 8th, 2007 8 comments

Besides having the single largest collection of vendors that begin with the letter ‘V’ in one segment of the security space (Vontu, Vericept, Verdasys, Vormetric…what the hell!?) it’s interesting to see how quickly content monitoring and protection functionality is approaching the inflection point of market versus feature definition.

The "evolution" of the security market marches on.

Known by many names, what I describe as content monitoring and protection (CMP) is also known as extrusion prevention, data leakage or intellectual property management toolsets.  I think for most, the anchor concept of digital rights management (DRM) within the Enterprise becomes the glue that makes CMP attractive and compelling; knowing what and where your data is and how its distribution needs to be controlled is critical.

The difficulty with this technology is that, just like any other feature, it needs a delivery mechanism.  Usually this means yet another appliance; one that’s positioned either as close to the data as possible or right back at the perimeter in order to profile and control data based upon policy before it leaves the "inside" and goes "outside."

I made the point previously that I see this capability becoming a feature in a greater amalgam of functionality;  I see it becoming table stakes included in application delivery controllers, FW/IDP systems and the inevitable smoosh of WAF/XML/Database security gateways (which I think will also further combine with ADC’s.)

I see CMP becoming part of UTM suites.  Soon.

That being said, the deeper we go to inspect content in order to make decisions in context, the more demanding the requirements for the applications and "appliances" that perform this functionality become.  Making line speed decisions on content, in context, is going to be difficult to solve. 
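To make "decisions on content, in context" concrete: the same payload can get a different verdict depending on who sends it and where it's going, which is precisely what a 5-tuple filter cannot express. The roles, labels and destinations below are invented purely for illustration:

```python
# Hypothetical CMP policy check: verdicts depend on content labels AND
# business context (sender role, destination), not just addresses/ports.
def cmp_decision(payload_labels: set, sender_role: str, destination: str) -> str:
    """Return 'allow' or 'block' for a piece of content in a given context."""
    if "confidential" in payload_labels:
        # Identical content: allowed inside finance, blocked going external.
        if sender_role == "finance" and destination == "internal":
            return "allow"
        return "block"
    return "allow"

print(cmp_decision({"confidential"}, "finance", "internal"))  # allow
print(cmp_decision({"confidential"}, "finance", "external"))  # block
```

Evaluating even this toy rule requires reassembling and classifying the payload first — doing that at line speed, for every flow, is the hard part the paragraph above describes.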

CMP vendors are making a push, seeing this writing on the wall, but it’s sort of like IPS or FW or URL Filtering…it’s going to smoosh.

Websense acquired PortAuthority.  McAfee acquired Onigma.  Cisco will buy…?

/Hoff

Categories: DLP, IP/Data Leakage Tags: