Archive for the ‘Information Centricity’ Category

Endpoint Security vs. DLP? That’s Part Of the Problem…

March 31st, 2008 6 comments

Larry Walsh wrote something (Defining the Difference Between Endpoint Security and Data Loss Prevention) that sparked an interesting debate based upon a vendor presentation given to him on "endpoint security" by SanDisk.

SanDisk is bringing to market a set of high-capacity USB flash drives that feature built-in filesystem encryption as well as strong authentication and access control.  If the device gets lost with the data on it, it’s "safe and secure" because it’s encrypted.  They are positioning this as an "endpoint security" solution.

I’m not going to debate the merits/downsides of that approach because I haven’t seen their pitch, but suffice it to say, I think it’s missing a "couple" of pieces to solve anything other than a very specific set of business problems.

Larry’s dilemma stems from the fact that he maintains that this capability and functionality is really about data loss protection and doesn’t have much to do with "endpoint security" at all:

We debated that in my office for a few minutes. From my perspective, this solution seems more like a data loss prevention solution than endpoint security. Admittedly, there are many flavors of endpoint security. When I think of endpoint security, I think of network access control (NAC), configuration management, vulnerability management and security policy enforcement. While this solution is designed for the endpoint client, it doesn’t do any of the above tasks. Rather, it forces users to use one type of portable media and transparently applies security protection to the data. To me, that’s DLP.

In today’s market taxonomy, I would agree with Larry.  However, what Larry is struggling with is not really the current state of DLP versus "endpoint security," but rather the future state of converged information-centric governance.  He’s describing the problem that will drive the solution as well as the inevitable market consolidation to follow.

This is actually the whole reason Mogull and I are talking about the evolution of DLP as it exists today to a converged solution we call CMMP — Content Management, Monitoring and Protection. {Yes, I just added another M for Management in there…}

What CMMP represents is the evolved and converged end-state technology integration of solutions that today provide a point solution but "tomorrow" will be combined/converged into a larger suite of services.

Off the cuff, I’d expect that we will see at a minimum the following technologies being integrated to deliver CMMP as a pervasive function across the information lifecycle and across platforms in flight/motion and at rest:

  • Data leakage/loss protection (DLP)
  • Identity and access management (IAM)
  • Network Admission/Access Control (NAC)
  • Digital rights/Enterprise rights management (DRM/ERM)
  • Seamless encryption based upon "communities of interest"
  • Information classification and profiling
  • Metadata
  • Deep Packet Inspection (DPI)
  • Vulnerability Management
  • Configuration Management
  • Database Activity Monitoring (DAM)
  • Application and Database Monitoring and Protection (ADMP)
  • etc…

That’s not to say they’ll all end up as a single software install or network appliance, but rather a consolidated family of solutions from a few top-tier vendors who have coverage across the application, host and network space. 

If you were to look at any enterprise today struggling with this problem, they likely have or are planning to have most of the point solutions above anyway.  The difficulty is that they’re all from different vendors.  In the future, we’ll see larger suites from fewer vendors providing a more cohesive solution.

This really gives us the "cross domain information protection" that Rich talks about.

We may never achieve the end-state described above in its entirety, but it’s safe to say that the more we focus on the "endpoint" rather than the "information on the endpoint," the bigger the problem we will have.


A Cogent Example of Information Centricity

March 21st, 2008 7 comments

My buddy Adrian Lane over @ IPLocks wrote up a really nice example of an information centric security model based on the discussions Mogull has been having on his blog, which I commented on a couple of weeks ago here and here:

I want to provide the simplest example of what I consider to be an information centric security. I have never spoken with Rich directly on this subject and he may completely disagree, but this is one of the simplest examples I can come up with. It embodies the basic tenets, but it also exemplifies the model’s singular greatest challenge. Of course there is a lot more possible than what I am going to propose here, but this is a starting point.

Consider a digitally signed email encrypted with PGP as a tangible example.

Following Rich Mogull’s defining tenets/principles post:

  • The data is self describing as it carries MIME type and can encrypt the payload and leave business context (SMTP) exposed.
  • The data is self defending in both confidentiality (encrypted with the recipient public key) and integrity (digitally signed by the sender).
  • While the business context in this example is somewhat vague, it can be supplied in the email message itself, or added as a separate packet and interpreted by the application(s) that decrypt, verify hash or read the contents. Basically, it’s variable.
  • The data is protected in motion, does not need network support for security, and really does not care about the underlying medium of conveyance for security, privacy or integrity. The verification can be performed independently once it reaches its destination. And the payload, the message itself, could be wrapped up and conveyed into different applications as well. A trouble ticket application or customer relationship management application are but two examples of changing business contexts.
  • The policies can work consistently provided there is an agreed upon application processing. I think Rich’s intention was business processing, but it holds for security policies as well. Encryption provides a nice black & white example as anyone without the appropriate private key is not going to gain access to the email message. Business rules and processes embedded should have some verification that they have not been altered or tampered with, but cryptographic hashes can provide that. We can even add a signed audit trail, verifiable to receiving parties, within the payload.

I might add that there should be independent ‘Brokerage’ facilities for dispute resolution or verification of some types of rules, process or object state in workflow systems. If recipients can add or even alter some subset of the information, whose copy is the latest and greatest? But anyway, that is too much detail for this example.

I’m not sure what Adrian meant when he said (in boldface) "The data is self describing as it carries MIME type and can encrypt the payload and leave business context (SMTP) exposed."  Perhaps that the traffic is still identified as SMTP (via port 25) even though the content is encrypted?
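Adrian's sign-then-encrypt example can be sketched concretely. This is a minimal illustration using raw RSA via the `cryptography` library as a stand-in for the full OpenPGP format: the sender's private key provides integrity (signature), the recipient's public key provides confidentiality (encryption), and everything outside the payload stays readable. The keys and message here are generated on the fly purely for demonstration.

```python
# Sketch of "self-defending" data: sign with the sender's private key,
# encrypt to the recipient's public key. Raw RSA stands in for PGP.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

sender_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

message = b"Q3 revenue figures attached."

# Integrity: digital signature by the sender.
signature = sender_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Confidentiality: encrypted to the recipient's public key.
ciphertext = recipient_key.public_key().encrypt(
    message,
    padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

# The recipient reverses both; verify() raises if anything was tampered with.
plaintext = recipient_key.decrypt(
    ciphertext,
    padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
sender_key.public_key().verify(
    signature, plaintext,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
```

The point of the example survives the simplification: neither operation needs any cooperation from the network the message crosses.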

For this example Adrian used MIME type as the descriptor.  MIME types provide an established "standardized" format that makes decisions and dispositions based on (at least) SMTP content in context easy, but I maintain that depending on where and when you make these decisions (in motion, at rest, etc.) we still need a common metadata format, independent of protocol/application, that would allow analysis even on encrypted data at rest or in motion.
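What such a protocol-independent metadata envelope might look like can be sketched quickly. Every field name below is invented for illustration; the point is that classification and handling policy travel alongside the object so an enforcement point can act on them without being able to (or needing to) read the payload.

```python
# Hypothetical self-describing metadata envelope: policy rides with the data,
# independent of the transport protocol, even when the payload is encrypted.
import json, hashlib

payload = b"<encrypted blob>"

envelope = {
    "content_type": "message/rfc822",      # MIME-style content descriptor
    "classification": "confidential",      # information classification label
    "handling": ["no-forward-external", "encrypt-at-rest"],
    "payload_sha256": hashlib.sha256(payload).hexdigest(),
    "payload_encrypted": True,
}

# Any enforcement point -- gateway, endpoint agent, storage scanner -- can
# read the policy without reading the payload itself.
print(json.dumps(envelope, indent=2))
```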

Need versus ability to deliver is a valid concern, of course…

A note on DLP and Information Centric Security: Security that acts directly upon information, and information that embeds its security, are different concepts. IMO. Under a loose definition, I understand how one could view Data Loss Prevention, in context Monitoring/IDS and even Assessment as a data centric examination of security. But this is really not what I am attempting to describe. Maybe we change the name to Embedded Information Security, but that is semantics we can work out later.

I would agree that in the end game, the latter requires less (or perhaps none) of the former.  If the information is self-governing and enforcement of policy is established based upon controls such as strong mutual authentication and privacy-enforcing elements such as encryption, then really the information that has embedded "security" is, in and of itself, "…security that acts directly on information."

It’s a valid point, but in the interim we’re going to need this functionality because we don’t have a universal method of applying, let alone enforcing, self-described policies on information.



Categories: Information Centricity Tags:

No Good Deed Goes Unpunished (Or Why NextGen DLP Is a Step On The Information Centric Ladder…)

March 19th, 2008 4 comments

Rothman wrote a little ditty today commenting on a blog I scribbled last week titled "The Walls Are Collapsing Around Information Centricity."

Information centricity – Name that tune.

Of course, the Hoff needs to pile on to Rich’s post about information-centric security. He even finds means to pick apart a number of my statements. Now that he is back from down under, maybe he could even show us some examples of how a DLP solution is doing anything like information-centricity. Or maybe I’m just confused by the uber-brain of the Hoff and how he thinks maybe 500 steps ahead of everyone else.

Based on my limited brain capacity, the DLP vendors can profile and maybe even classify the types of data. But that information is neither self-describing, nor is it portable. So once I make it past the DLP gateway, the data is GONE baby GONE.

In my world of information-centricity, we are focused on what the fundamental element of data can do and who can use it. It needs to be enforced anywhere that data can be used. Yes, I mean anywhere. Name that tune, Captain Hoff. I’d love to see something like this in use. I’m not going to be so bold as to say it isn’t happening, but it’s nothing I’ve seen before. Please please, edumacate me.

I’m always pleased when Uncle Mike shows me some blog love, so I’ll respond in kind, if only to defend my honor.  Each time Mike "compliments" me on how forward-looking I am, it’s usually accompanied by a gnawing sense that his use of "uber-brained" is Georgian for "dumbass schlock." 😉

Yes, you’re confused by my "uber-brain…" {roll eyes here}

I believe Mike missed a couple of key words in my post, specifically that the next generation of solutions would start to deliver the functionality described in both my and Rich’s posts.

What I referred to was that the evolution of the current generation of DLP solutions as well as the incremental re-tooling of DRM/ERM, ADMP, CMP, and data classification at the point of creation and across the wire gets us closer to being able to enforce policy across a greater landscape.

The current generation of technologies/features such as DLP do present useful solutions in certain cases but in their current incarnation are not complete enough to solve all of the problems we need to solve.  I’ve said this many times.  They will, however, evolve, which is what I was describing.

Mike is correct that today data is not self-describing, but that’s a problem that we’ll need standardization to remedy — a common metadata format would be required if cross-solution policy enforcement were to be realized.  Will we ever get there?  It’ll take a market leader to put a stake in the ground to get us started, for sure (wink, wink.)

As both Mogull and I alluded in our posts and our SOURCEBoston presentation, we’re keyed into many companies in stealth mode as well as the roadmaps of many of the companies in this space, and the solutions represented by the intersection of technologies becoming CMP are very promising.

That shouldn’t be mistaken for near-term success, but since my job is to look 3-5 years out on the horizon, that’s what I wrote about.  Perhaps Mike mistook my statement about the fact that companies are beginning to circle the wagons on this issue to mean that they are available now.  That’s obviously not the case.

Hope that helps, Mike.


Categories: Information Centricity Tags:

The Walls Are Collapsing Around Information Centricity

March 10th, 2008 2 comments

Since Mogull and I collaborate quite a bit on projects and share many thoughts and beliefs, I wanted to make a couple of comments on his last post on Information Centricity and remind the audience at home of a couple of really important points.

Rich’s post was short and sweet regarding the need for Information-Centric solutions with some profound yet subtle guideposts:

For information-centric security to become a reality, in the long term it needs to follow the following principles:

  1. Information (data) must be self describing and defending.
  2. Policies and controls must account for business context.
  3. Information must be protected as it moves from structured to unstructured, in and out of applications, and changing business context.
  4. Policies must work consistently through the different defensive layers and technologies we implement.

I’m not convinced this is a complete list, but I’m trying to keep to my new philosophy of shorter and simpler. A key point that might not be obvious is that while we have self-defending data solutions, like DRM and label security, for success they must grow to account for business context. That’s when static data becomes usable information.

Mike Rothman gave an interesting review of Rich’s post:

The Mogull just laid out your work for the next 10 years. You just probably don’t know it yet. Yes, it’s all about ensuring that the fundamental elements of your data are protected, however and wherever they are used. Rich has broken it up into 4 thoughts. The first one made my head explode: "Information (data) must be self-describing and defending."

Now I have to clean up the mess. Sure things like DRM are a bad start, and have tarnished how we think about information-centric security, but you do have to start somewhere. The reality is this is a really long term vision of a problem where I’m not sure how you get from Point A to Point B. We all talk about the lack of innovation in security. And how the market just isn’t exciting anymore. What Rich lays out here is exciting. It’s also a really really really big problem. If you want a view of what the next big security company does, it’s those 4 things. And believe me, if I knew how to do it, I’d be doing it – not talking about the need to do it.

The comments I want to make are three-fold:

  1. Rich is re-stating and Mike’s head is exploding around the exact concepts that Information Survivability represents and the Jericho Forum trumpets in their Ten Commandments.  In fact, you can read all about that in prior posts I made on the subjects of the Jericho Forum, re-perimeterization, information survivability and information centricity.  I like this post on a process I call ADAPT (Applied Data and Application Policy Tagging) a lot.

    For reference, here are the Jericho Forum’s Ten Commandments. Please see #9:


  2. As Mike alluded, DRM/ERM has received a bad rap because of how it’s implemented — which has really left a sour taste in the mouths of the consumer consciousness.  As a business tool, it is the precursor of information centric policy and will become the lynchpin in how we will ultimately gain a foothold on solving the information resiliency/assurance/survivability problem.
  3. As to the innovation and dialog that Mike suggests is lacking in this space, I’d suggest he’s suffering from a bit of Shitake-ism (a-la mushroom-itis.)  The next generation of DLP solutions that are becoming CMP (Content Monitoring and Protection — a term I coined) are evolving to deal with just this very thing.  It’s happening.  Now.

    Further to that, I have been briefed by some very, very interesting companies that are in stealth mode who are looking to shake this space up as we speak.

So, prepare for Information Survivability, increased Information Resilience and assurance.  Coming to a solution near you…


Security Today == Shooting Arrows Through Sunroofs of Cars?

February 7th, 2008 14 comments

In this Dark Reading post, Peter Tippett, described as the inventor of what is now Norton Anti-virus, suggests that the bulk of InfoSec practices are "…outmoded or outdated concepts that don’t apply to today’s computing environments."
As I read through this piece, I found myself flip-flopping between violent agreement and incredulous eye-rolling from one paragraph to the next, caused somewhat by the overuse of hyperbole in some of his analogies.  This was disappointing, but overall, I enjoyed the piece.

Let’s take a look at Peter’s comments:

For example, today’s security industry focuses way too much time on vulnerability research, testing, and patching, Tippett suggested. "Only 3 percent of the vulnerabilities that are discovered are ever exploited," he said. "Yet there is a huge amount of attention given to vulnerability disclosure, patch management, and so forth."

I’d agree that the "industry" certainly focuses their efforts on these activities, but that’s exactly the mission of the "industry" that he helped create.  We, as consumers of security kit, have perpetuated a supply-driven demand security economy.

There’s a huge amount of attention paid to vulnerabilities, patching and prevention that doesn’t prevent because at this point, that’s all we’ve got.  Until we start focusing on the root cause rather than the symptoms, this is a cycle we won’t break.  See my post titled "Sacred Cows, Meatloaf, and Solving the Wrong Problems" for an example of what I mean.

Tippett compared vulnerability research with automobile safety research. "If I sat up in a window of a building, I might find that I could shoot an arrow through the sunroof of a Ford and kill the driver," he said. "It isn’t very likely, but it’s possible.

"If I disclose that vulnerability, shouldn’t the automaker put in some sort of arrow deflection device to patch the problem? And then other researchers may find similar vulnerabilities in other makes and models," Tippett continued. "And because it’s potentially fatal to the driver, I rate it as ‘critical.’ There’s a lot of attention and effort there, but it isn’t really helping auto safety very much."

What this really means, and Peter never quite states, is that mitigating vulnerabilities in the absence of threat, impact or probability is a bad thing.  This is why I make such a fuss about managing risk instead of mitigating vulnerabilities.  If there were millions of malicious archers firing arrows through the sunroofs of unsuspecting Ford Escort drivers, then the ‘critical’ rating would be relevant given the probability and impact of all those slings and arrows of thine enemies…
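The distinction can be made explicit with the classic qualitative formula risk = threat × vulnerability × impact. The numbers below are arbitrary 0-to-1 illustrations, not any real scoring standard; they just show why the same flaw rates differently under different threat conditions.

```python
# Classic qualitative risk formula; all scales are illustrative, 0.0 - 1.0.
def risk(threat_likelihood, vulnerability, impact):
    """Risk as the product of threat likelihood, vulnerability, and impact."""
    return threat_likelihood * vulnerability * impact

# Arrow-through-the-sunroof: severe impact, but negligible threat likelihood.
low = risk(threat_likelihood=0.001, vulnerability=1.0, impact=1.0)
# The same flaw under active, widespread exploitation rates far higher.
high = risk(threat_likelihood=0.9, vulnerability=1.0, impact=1.0)
print(low, high)
```

A "critical" severity label that ignores the first factor collapses these two very different situations into one.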

Tippett also suggested that many security pros waste time trying to buy or invent defenses that are 100 percent secure. "If a product can be cracked, it’s sometimes thrown out and considered useless," he observed. "But automobile seatbelts only prevent fatalities about 50 percent of the time. Are they worthless? Security products don’t have to be perfect to be helpful in your defense."

I like his analogy and the point he’s trying to underscore.  What I find in many cases is that the binary evaluation of security efficacy — in products and programs — still exists.  In the absence of measuring the effect that something has on one’s risk posture, people revert to a non-gradient scale of 0% or 100%, insecure or secure.  Is being "secure" really important, or is managing to a level of risk that is acceptable — with or without losses — the really relevant measure of success?

This concept also applies to security processes, Tippett said. "There’s a notion out there that if I do certain processes flawlessly, such as vulnerability patching or updating my antivirus software, that my organization will be more secure. But studies have shown that there isn’t necessarily a direct correlation between doing these processes well and the frequency or infrequency of security incidents.

"You can’t always improve the security of something by doing it better," Tippett said. "If we made seatbelts out of titanium instead of nylon, they’d be a lot stronger. But there’s no evidence to suggest that they’d really help improve passenger safety."

I would like to see these studies.  I think that companies who have rigorous, mature and transparent processes that they execute "flawlessly" may not be more "secure," (a measurement I’d love to see quantified) but are in a much better position to respond and recover when (not if) an event occurs.  Based upon the established corollary that we can’t be 100% "secure" in the first place, we then know we’re going to have incidents.

Being able to recover from them or continue to operate while under duress is more realistic and important in my view.  That’s the point of information survivability.

Security teams need to rethink the way they spend their time, focusing on efforts that could potentially pay higher security dividends, Tippett suggested. "For example, only 8 percent of companies have enabled their routers to do ‘default deny’ on inbound traffic," he said. "Even fewer do it on outbound traffic. That’s an example of a simple effort that could pay high dividends if more companies took the time to do it."

I agree.  Focusing on efforts that eliminate entire classes of problems based upon reducing risk is a more appropriate use of time, money and resources.

Security awareness programs also offer a high rate of return, Tippett said. "Employee training sometimes gets a bad rap because it doesn’t alter the behavior of every employee who takes it," he said. "But if I can reduce the number of security incidents by 30 percent through a $10,000 security awareness program, doesn’t that make more sense than spending $1 million on an antivirus upgrade that only reduces incidents by 2 percent?"
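The arithmetic in that comparison is worth making explicit: dollars spent per percentage point of incident reduction. The figures are Tippett's hypotheticals, not measured data.

```python
# Cost-effectiveness of Tippett's two hypothetical investments, expressed
# as dollars per percentage point of incident reduction.
awareness_cost, awareness_reduction = 10_000, 0.30
av_cost, av_reduction = 1_000_000, 0.02

def cost_per_point(cost, reduction):
    """Dollars spent per percentage point of incident reduction."""
    return cost / (reduction * 100)

print(cost_per_point(awareness_cost, awareness_reduction))  # ~333 per point
print(cost_per_point(av_cost, av_reduction))                # 500,000 per point
```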

Nod.  That was the point of the portfolio evaluation process I gave in my disruptive innovation presentation:

24. Provide Transparency in portfolio effectiveness

I didn’t invent this graph, but it’s one of my favorite ways of visualizing my investment portfolio by measuring in three dimensions: business impact, security impact and monetized investment.  All of these definitions are subjective within your organization (as well as how you might measure them.)

The Y-axis represents the "security impact" that the solution provides.  The X-axis represents the "business impact" that the solution provides, while the size of the dot represents the capex/opex investment made in the solution.

Each of the dots represents a specific solution in the portfolio.

If you have a solution that is a large dot toward the bottom-left of the graph, one has to question the reason for continued investment since it provides little in the way of perceived security and business value at high cost.  On the flipside, if a solution is represented by a small dot in the upper-right, the bang for the buck is high, as is the impact it has on the organization.

The goal would be to get as many of your investments in your portfolio from the bottom-left to the top-right with the smallest dots possible.
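The portfolio bubble chart described above is easy to sketch. The portfolio entries and their scores below are made up purely for illustration; the structure is what matters: security impact on Y, business impact on X, dot size proportional to investment.

```python
# Sketch of the three-dimensional portfolio view: business impact (X),
# security impact (Y), and capex/opex investment (dot size). Data is invented.
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

portfolio = {            # name: (business impact, security impact, spend $K)
    "AV suite":      (2, 3, 900),
    "DLP gateway":   (6, 7, 400),
    "IAM program":   (8, 6, 600),
    "NAC appliance": (3, 4, 500),
}

fig, ax = plt.subplots()
for name, (biz, sec, spend) in portfolio.items():
    ax.scatter(biz, sec, s=spend, alpha=0.5)  # dot area tracks investment
    ax.annotate(name, (biz, sec))

ax.set_xlabel("Business impact")
ax.set_ylabel("Security impact")
ax.set_xlim(0, 10)
ax.set_ylim(0, 10)
fig.savefig("portfolio.png")
```

A big dot in the lower-left (like the hypothetical AV suite above) is exactly the kind of investment the text says to question.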

This transparency, and the process by which the portfolio is assessed, is delivered as an output of the strategic innovation framework, which is really part art and part science.

All in all, a good read from someone who helped create the monster and is now calling it ugly…


Thin Clients: Does This Laptop Make My Ass(ets) Look Fat?

January 10th, 2008 11 comments

Juicy Fat Assets, Ripe For the Picking…

So here’s an interesting spin on de/re-perimeterization…if people think we cannot achieve and cannot afford to wait for secure operating systems, secure protocols and self-defending information-centric environments but need to "secure" their environments today, I have a simple question supported by a simple equation for illustration:

For the majority of mobile and internal users in a typical corporation who use the basic set of applications:

  1. Assume a company that:
    …fits within the 90% of those who still have data centers, isn’t completely outsourced/off-shored for IT and supports a remote workforce that uses Microsoft OS and the usual suspect applications and doesn’t plan on utilizing distributed grid computing and widespread third-party SaaS
  2. Take the following:
    Data Breaches.  Lost Laptops.  Non-sanitized corporate hard drives on eBay.  Malware.  Non-compliant asset configurations.  Patching woes.  Hardware failures.  Device Failure.  Remote Backup issues.  Endpoint Security Software Sprawl.  Skyrocketing security/compliance costs.  Lost Customer Confidence.  Fines.  Lost Revenue.  Reduced budget.
  3. Combine With:
    Cheap Bandwidth.  Lots of types of bandwidth/access modalities.  Centralized Applications and Data. Any Web-enabled Computing Platform.  SSL VPN.  Virtualization.  Centralized Encryption at Rest.  IAM.  DLP/CMP.  Lots of choices to provide thin-client/streaming desktop capability.  Offline-capable Web Apps.
  4. Shake Well, Re-allocate Funding, Streamline Operations and "Security"…
  5. You Get:
    Less Risk.  Less Cost.  Better Control Over Data.  More "Secure" Operations.  Better Resilience.  Assurance of Information.  Simplified Operations. Easier Backup.  One Version of the Truth (data.)

I really just don’t get why we continue to deploy and are forced to support remote platforms we can’t protect, allow our data to inhabit islands we can’t control and at the same time admit the inevitability of disaster while continuing to spend our money on solutions that can’t possibly solve the problems.

If we’re going to be information centric, we should take the first rational and reasonable steps toward doing so. Until the operating systems are more secure and the data can self-describe and cause the compute and network stacks to "self-defend," why do we continue to focus on the endpoint?  It’s a waste of time.

If we can isolate and reduce the number of avenues of access to data and leverage dumb presentation platforms to do it, why aren’t we?

…I mean besides the fact that an entire industry has been leeching off this mess for decades…

I’ll Gladly Pay You Tuesday For A Secure Solution Today…

The technology exists TODAY to centralize the bulk of our most important assets and allow our workforce to accomplish their goals and the business to function just as well (perhaps better) without the need for data to actually "leave" the data centers in whose security we have already invested so much money.

Many people are doing that with the servers already with the adoption of virtualization.  Now they need to do it with their clients.

The only reason we’re now going absolutely stupid and spending money on securing endpoints in their current state is because we’re CAUSING (not just allowing) data to leave our enclaves.  In fact with all this blabla2.0 hype, we’ve convinced ourselves we must.

Hogwash.  I’ve posted on the consumerization of IT where companies are allowing their employees to use their own compute platforms.  How do you think many of them do this?

Relax, Dude…Keep Your Firewalls…

In the case of centralized computing and streamed desktops to dumb/thin clients, the "perimeter" still includes our data centers and security castles/moats, but also encapsulates a streamed, virtualized, encrypted, and authenticated thin-client session bubble.  Instead of worrying about the endpoint, it’s nothing more than a flickering display with a keyboard/mouse.

Let your kid use Limewire.  Let Uncle Bob surf pr0n.  Let wifey download spyware.  If my data and applications don’t live on the machine and all the clicks/mouseys are just screen updates, what do I care?

Yup, you can still use a screen scraper or a camera phone to use data inappropriately, but this is where balancing risk comes into play.  Let’s keep the discussion within the 80% of reasonable factored arguments.  We’ll never eliminate 100% and we don’t have to in order to be successful.

Sure, there are exceptions and corner cases where data *does* need to leave our embrace, but we can eliminate an entire class of problem if we take advantage of what we have today and stop this endpoint madness.

This goes for internal corporate users who are chained to their desks and not just mobile users.

What’s preventing you from doing this today?


Thinning the Herd & Chlorinating the Malware Gene Pool…

December 28th, 2007 3 comments

Alan Shimel pointed us to an interesting article written by Matt Hines in his post here regarding the "herd intelligence" approach toward security.  He followed it up here. 

All in all, I think both the original article that Andy Jaquith was quoted in as well as Alan’s interpretations shed an interesting light on a problem solving perspective.

I’ve got a couple of comments on Matt and Alan’s scribbles.

I like the notion of swarms/herds.  The picture to the right from Science News describes the notion of "rapid response," wherein "mathematical modeling is explaining how a school of fish can quickly change shape in reaction to a predator."  If you’ve ever seen this in the wild or even in film, it’s an incredible thing to see in action.

It should then come as no surprise that I think that trying to solve the "security problem" is more efficiently performed (assuming one preserves the current construct of detection and prevention mechanisms) by distributing both functions and coordinating activity as part of an intelligent "groupthink" even when executed locally.  This is exactly what I was getting at in my "useful predictions" post for 2008:

Grid and distributed utility computing models will start to creep into security.  A really interesting by-product of the "cloud compute" model is that as data, storage, networking, processing, etc. get distributed, so shall security.  In the grid model, one doesn’t care where the actions take place so long as service levels are met and the experiential and business requirements are delivered.  Security should be thought of in exactly the same way.

The notion that you can point to a physical box and say it performs function ‘X’ is so last Tuesday.  Virtualization already tells us this.  So, imagine if your security processing isn’t performed by a monolithic appliance but instead is contributed to in a self-organizing fashion wherein the entire ecosystem (network, hosts, platforms, etc.) all contribute in the identification of threats and vulnerabilities as well as function to contain, quarantine and remediate policy exceptions.

Kind of sounds like that "self-defending network" schpiel, but not focused on the network and with common telemetry and distributed processing of the problem.  Check out Red Lambda’s cGrid technology for an interesting view of this model.

This basically means that we should distribute the sampling, detection and prevention functions across the entire networked ecosystem, not just to dedicated security appliances; each of the end nodes should communicate using a standard signaling and telemetry protocol so that common threat, vulnerability and effective disposition can be communicated up and downstream to one another and one or more management facilities.
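What a common signaling and telemetry format for such a herd might look like can be sketched briefly. The schema and every field name below are invented for illustration; the point is simply that all nodes report observations and dispositions in one agreed wire format so they can be correlated up- and downstream.

```python
# Hypothetical herd-telemetry message: one shared schema that every end node,
# peer, and management facility can emit and parse. Field names are invented.
import json, time

def make_telemetry(node_id, observation, disposition):
    """Package one local security observation for the herd."""
    return {
        "schema": "herd-telemetry/0.1",   # the shared format is the point
        "node": node_id,
        "timestamp": time.time(),
        "observation": observation,        # e.g. a file hash or flow summary
        "disposition": disposition,        # what the local node decided/did
    }

msg = make_telemetry(
    node_id="host-042",
    observation={"sha256": "ab12...", "seen_via": "smtp-attachment"},
    disposition="quarantined",
)

# Serialized once, understood everywhere in the ecosystem.
print(json.dumps(msg))
```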

This is what Andy was referring to when he said:

As part of the effort, security vendors may also need to begin sharing more of that information with their rivals to create a larger network effect for thwarting malware on a global basis, according to the expert.

It may be hard to convince rival vendors to work together because of the
perception that it could lessen differentiation between their
respective products and services, but if the process clearly aids in
quelling the rising tide of new malware strains, the
software makers may have little choice other than to partner, he said.

Secondly, Andy suggested that basically every end-node would effectively become its own honeypot:

"By turning every endpoint into a malware collector, the herd network
effectively turns into a giant honeypot that can see more than existing
monitoring networks," said Jaquith. "Scale enables the herd to counter
malware authors’ strategy of spraying huge volumes of unique malware
samples with, in essence, an Internet-sized sensor network."

I couldn’t agree more!  This is the sort of thing that I was getting at back in August when I was chatting with Lance Spitzner regarding using VM’s for honeypots on distributed end nodes:

I clarified that what I meant was actually integrating a
HoneyPot running in a VM on a production host as part of a standardized
deployment model for virtualized environments.  I suggested that this
would integrate into the data collection and analysis models the same
way as a "regular" physical HoneyPot machine, but could utilize some of
the capabilities built into the VMM/HV’s vSwitch to actually make possible the
virtualization of a single HoneyPot across an entire collection of VM’s
on a single physical host.

Thirdly, the notion of information sharing across customers has been implemented cross-sectionally in industry verticals with the advent of ISACs such as the Financial Services Information Sharing and Analysis Center, which seeks to inform and ultimately leverage distributed information gathering and sharing to protect its subscribing members.  Generally-available services like Symantec’s DeepSight have also tried to accomplish similar goals.

Unfortunately, these offerings generally lack the capacity to garner ubiquitous data gathering and real-time enforcement capabilities.

As Matt pointed out in his article, gaining actionable intelligence on the monstrous amount of telemetric data from participating end nodes means that there is a need to really prune for false positives.  This is the trade-off between simply collecting data and actually applying intelligence at the end-node and effecting disposition. 
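One naive but illustrative way to frame that trade-off: don't promote an indicator to actionable status until multiple distinct nodes have corroborated it, so a single noisy sensor can't drive enforcement on its own. This is a sketch under that assumption, not any vendor's actual algorithm:

```python
from collections import defaultdict

def actionable_indicators(reports, min_nodes=3):
    """reports: iterable of (node_id, indicator) observations.
    An indicator becomes actionable only when min_nodes *distinct*
    nodes have independently reported it — repeated sightings from
    one flaky sensor don't count."""
    sightings = defaultdict(set)
    for node, indicator in reports:
        sightings[indicator].add(node)
    return {i for i, nodes in sightings.items() if len(nodes) >= min_nodes}

reports = [
    ("host-1", "evil.example.com"),
    ("host-2", "evil.example.com"),
    ("host-3", "evil.example.com"),
    ("host-1", "flaky-sensor-noise"),  # same node, repeated
    ("host-1", "flaky-sensor-noise"),
]
print(actionable_indicators(reports))  # {'evil.example.com'}
```

Corroboration thresholds are the bluntest possible instrument here; the point is simply that the herd's scale is what makes false-positive pruning tractable at all.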

This requires technology that we’re starting to see emerge with a small enough footprint when paired with the compute power we have in endpoints today. 

Finally, as the "network" (which means the infrastructure as well as the "extrastructure" delivered by services in the cloud) gains more intelligence and information-centric granularity, it will pick up some of the slack — at least from the perspective of sloughing off the low-hanging fruit by using similar concepts.

I am hopeful that as we gain more information-centric footholds, we shouldn’t actually be worried about responding to every threat but rather only those that might impact the most important assets we seek to protect. 

Ultimately the end-node is really irrelevant from a protection perspective as it should really be little more than a presentation facility; the information is what matters.  As we continue to make progress toward more resilient operating systems leveraging encryption and mutual authentication within communities of interest/trust, we’ll start to become more resilient and information assured.

The sharing of telemetry to allow these detective and preventative/protective capabilities to self-organize and perform intelligent offensive/evasive actions will evolve naturally as part of this process.



And Now Some Useful 2008 Information Survivability Predictions…

December 7th, 2007 1 comment

So, after the obligatory dispatch of gloom and doom as described in my
2008 (in)Security Predictions, I’m actually going to highlight some of
the more useful things in the realm of Information Security that I
think are emerging as we round the corner toward next year.

They’re not really so much predictions as rather some things to watch.

Unlike folks who can only seem to talk about desperation, futility
and manifest destiny or (worse yet) "anti-pundit pundits" who try to
suggest that predictions and forecasting are useless (usually because
they suck at it), I gladly offer a practical roundup of impending
development, innovation and some incremental evolution for your consideration.

You know, good news.

As Mogull mentioned,
I don’t require a Cray XMP48, chicken bones & voodoo or a
prehensile tail to make my picks.  Rather I grab a nice cold glass of
Vitamin G (Guinness) and sit down and think for a minute or two,
dwelling on my super l33t powers of common sense and pragmatism with just a
pinch of futurist wit.

Many of these items have been underway for some time, but 2008 will
be a banner year for these topics as well as the previously-described
"opportunities for improvement…"

That said, let’s roll with some of the goodness we can look forward to in the coming year.  This is not an exhaustive list by any means, but some examples I thought were important and interesting:

  1. More robust virtualization security toolsets with more native hypervisor/vmm accessibility
    While it didn’t start with the notion of security baked in, virtualization,
    for all of its rush-to-production bravado, will actually yield some
    interesting security solutions that help tackle some very serious
    challenges.  As the hypervisors become thinner, we’re going to see the
    management and security toolsets gain increased access to the guts of
    the sausage machine in order to effect security appropriately and this
    will be the year we see the virtual switch open up to third parties and
    more robust APIs for security visibility and disposition appear.
  2. The focus on information centric security survivability graduates from v1.0 to v1.1
    Trying to secure the network and the endpoint is like herding cats, and folks
    are tired of dumping precious effort on deploying kitty litter around
    the Enterprise to soak up the stinky spots.  Rather, we’re going to see
    folks really start to pay attention to information classification,
    extensible and portable policy definition, cradle-to-grave lifecycle
    management, and invest in technology to help get them there.

    Even the current maturity of features/functions such as NAC and DLP has
    actually helped us get closer to managing our information and
    information-related risks.  The next generation of these offerings in
    combination with many of the other elements I describe herein and their
    consolidation into the larger landscape of management suites will
    actually start to deliver on the promise of focusing on what matters —
    the information.

  3. Robust Role-based policy, Identity and access management coupled with entitlement, geo-location and federation…oh and infrastructure, too!
    We’re getting closer to being able to affect policy not only based upon just
    source/destination IP address, switch and router topology and the odd entry in active directory on
    a per-application basis, but rather holistically based upon robust
    lifecycle-focused role-based policy engines that allow us to tie in all of the major
    enterprise components that sit along the information supply-chain.

    Who, what, where, when, how and ultimately why will be the decision
    points considered with the next generation of solutions in this space.
    Combine the advancements here with item #2 above, and someone might
    actually start smiling.

    If you need any evidence of the convergence/collision of the application-oriented with the network-oriented approach and a healthy overlay of user entitlement provisioning, just look at the about-face Cisco just made regarding TrustSec.  Of course, we all know that it’s not a *real* security concern/market until Cisco announces they’ve created the solution for it 😉
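As a toy illustration of that who/what/where/when/how decision point: the sketch below checks a request's context attributes against a policy. All of the attribute names are hypothetical, and a flat dict stands in for the directories, entitlement stores and network topology a real engine would tie into:

```python
def decide(request, policy):
    """Toy context-aware policy check: every policy attribute must be
    satisfied by the request, otherwise default-deny."""
    for key, allowed in policy.items():
        if request.get(key) not in allowed:
            return "deny"
    return "permit"

# Hypothetical policy: finance analysts, on managed laptops,
# from HQ or the VPN
policy = {
    "role":     {"finance-analyst"},
    "location": {"HQ", "VPN"},
    "device":   {"managed-laptop"},
}

request = {"role": "finance-analyst", "location": "VPN",
           "device": "managed-laptop"}
print(decide(request, policy))                                # permit
print(decide({**request, "location": "airport-kiosk"}, policy))  # deny
```

Note the deliberate default-deny: an attribute the request can't attest to fails closed, which is the posture the entitlement-plus-infrastructure convergence described above depends on.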

  4. Next Generation Networks gain visibility as they redefine the compute model of today
    Just as there exists a Moore’s curve for computing, there exists an
    overlapping version for networking, it just moves slower given the
    footprint.  We’re seeing the slope of this curve starting to trend up
    this coming year, and it’s much more than bigger pipes, although that
    doesn’t hurt either…

    These next generation networks will
    really start to emerge visibly in the next year as the existing
    networking models start to stretch the capabilities and capacities of
    existing architecture and new paradigms drive requirements that dictate
    a much more modular, scalable, resilient, high-performance, secure and
    open transport upon which to build distributed service layers.

    How networks and service layers are designed, composed, provisioned,
    deployed and managed — and how that intersects with virtualization and
    grid/utility computing — will start to really sink home the message
    that "in the cloud" computing has arrived.  Expect service providers
    and very large enterprises to adapt these new computing climates first
    with a trickle-down to smaller business via SaaS and hosted service
    operators to follow.

    BT’s 21CN
    (21st Century Network) is a fantastic example of what we can expect
    from NGN as the demand for higher speed, more secure, more resilient and more extensible interconnectivity really
    takes off.

  5. Grid and distributed utility computing models will start to creep into security
    A really interesting by-product of the "cloud compute" model is that as
    data, storage, networking, processing, etc. get distributed, so shall
    security.  In the grid model, one doesn’t care where the actions take
    place so long as service levels are met and the experiential and
    business requirements are delivered.  Security should be thought of in
    exactly the same way. 

    The notion that you can point to a
    physical box and say it performs function ‘X’ is so last Tuesday.
    Virtualization already tells us this.  So, imagine if your security
    processing isn’t performed by a monolithic appliance but instead is
    contributed to in a self-organizing fashion wherein the entire
    ecosystem (network, hosts, platforms, etc.) all contribute in the
    identification of threats and vulnerabilities as well as function to
    contain, quarantine and remediate policy exceptions.

    Sort of sounds like that "self-defending network" schpiel, but not focused on the network and with common telemetry and distributed processing of the problem.

    Check out Red Lambda’s cGrid technology for an interesting view of this model.

  6. Precision versus accuracy will start to legitimize prevention as
    the technology starts to allow us the confidence to start turning the
    corner beyond detection

    In a sad commentary on the last few
    years of the security technology grind, we’ve seen the prognostication
    that intrusion detection is dead and the deadpan urging of the security
    vendor cesspool convincing us that we must deploy intrusion prevention
    in its stead. 
    Since there really aren’t many pure-play intrusion detection systems
    left anyway, the reality is that most folks who have purchased IPSs
    seldom put them in in-line mode and when they do, they seldom turn on
    the "prevention" policies and instead just have them detect attacks,
    blink a bit and get on with it.

    Why?  Mostly because while the
    threats have evolved the technology implemented to mitigate them hasn’t
    — we’re either stuck with giant port/protocol colanders or
    signature-driven IPSs that are nothing more than IDSs with the ability
    to send RST packets.

    So the "new" generation of technology has
    arrived and may offer some hope of bridging that gap.  This is due to
    not only really good COTS hardware but also really good network
    processors and better software written (or re-written) to take
    advantage of both.  Performance, efficacy and efficiency have begun to
    give us greater visibility as we get away from making decisions based
    on ports/protocols (feel free to debate proxies vs. ACLs vs. stateful
    inspection…) and move to identifying application usage and getting us
    close to being able to make "real time" decisions on content in context
    by examining the payload and data.  See #2 above.

    The precision versus accuracy discussion is focused on being able to
    really start trusting in the ability for prevention technology to
    detect, defend and deter against "bad things" with a fidelity and
    resolution that has very low false positive rates.

    We’re getting closer with the arrival of technology such as Palo Alto Network’s solutions
    — you can call them whatever you like, but enforcing both detection
    and prevention using easy-to-define policies based on application (and
    telling the difference between any number of apps all using port
    80/443) is a step in the right direction.
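To illustrate the idea in this item — identification driven by payload rather than port — here's a deliberately naive sketch. Real engines use full protocol decoders and behavioral analysis, not three byte-pattern checks, so treat the signatures below as illustration only:

```python
def classify_app(payload: bytes) -> str:
    """Naive payload-based application identification: the *content*,
    not the port number, drives the decision."""
    if payload.startswith((b"GET ", b"POST ", b"HTTP/")):
        return "http"
    if payload[:2] == b"\x16\x03":   # TLS handshake record header
        return "tls"
    if payload.startswith(b"SSH-"):  # SSH version banner
        return "ssh"
    return "unknown"

# All of these could arrive on port 80 — the port alone tells you nothing
print(classify_app(b"GET /index.html HTTP/1.1\r\n"))  # http
print(classify_app(b"\x16\x03\x01\x00\xa5"))          # tls
print(classify_app(b"SSH-2.0-OpenSSH_4.7"))           # ssh
```

Once you can name the application, "content in context" policies (item #2 above) finally have something meaningful to attach to.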

  7. The consumerization of IT will cause security and IT as we know it to radically change
    I know it’s heretical but 2008 is going to really push the limits of
    the existing IT and security architectures to their breaking points, which is
    going to mean that instead of saying "no," we’re going to have to focus
    on how to say "yes, but with this incremental risk" and find solutions for an ever more mobile and consumerist enterprise. 

    We’ve talked about this before, and most security folks curl up into a fetal position when you start mentioning the adoption by the enterprise of social
    networking, powerful smartphones, collaboration tools, etc.  The fact is that the favorable economics, agility, flexibility and efficiencies gained from the consumerization of IT outweigh the downsides in the long run.  Let’s not forget the new generation of workers entering the workforce. 

    So, since information is going to be leaking from our Enterprises like a sieve on all manners of devices and by all manner of methods, it’s going to force our hands and cause us to focus on being information centric and stop worrying about the "perimeter problem," stop focusing on the network and the host, and start dealing with managing the truly important assets while allowing our employees to do their jobs in the most effective, collaborative and efficient methods possible.

    This disruption will be a good thing, I promise.  If you don’t believe me, ask BP — one of the largest enterprises on the planet.  Since 2006 they’ve put some amazing initiatives into play:

    Like this little gem:

    Oil giant BP is pioneering a "digital consumer" initiative
    that will give some employees an allowance to buy their own IT
    equipment and take care of their own support needs.

    The project, which is still at the pilot stage, gives select BP staff an
    annual allowance — believed to be around $1,000 — to buy their own
    computing equipment and use their own expertise and the manufacturer’s
    warranty and support instead of using BP’s IT support team.

    Access to the scheme is tightly controlled and those employees taking part
    must demonstrate a certain level of IT proficiency through a computer
    driving licence-style certification, as well as signing a diligent use policy.

    …combined with this:

    Rather than rely on a strong network perimeter to secure its systems, BP has
    decided that these laptops have to be capable of coping with the worst
    that malicious hackers can throw at it, without relying on a network perimeter.

    Ken Douglas, technology director of BP, told the UK
    Technology Innovation & Growth Forum in London on Monday that
    18,000 of BP’s 85,000 laptops now connect straight to the internet even
    when they’re in the office.

  8. Desktop Operating Systems become even more resilient
    The first steps taken by Microsoft and Apple in Vista and OS X (Leopard), as examples, have begun to
    chip away at some of the security holes that
    have plagued them due to the architectural "feature" that an open execution runtime model delivers.  Honestly, nothing short of a do-over will ultimately mitigate this problem, so instead of suggesting that incremental improvement is worthless, we should recognize that our dark overlords are trying to make things better.

    Elements in Vista such as ASLR, NX, and UAC combined with integrated firewalling, anti-spyware/anti-phishing, disk encryption, integrated rights management, protected-mode IE, etc. are all good steps in a "more right" direction than previous offerings.  They’re in response to lessons learned.

    On the Mac, we also see ASLR, sandboxing, input management, better firewalling, better disk encryption, which are also notable improvements.  Yes, we’ve got a long way to go, but this means that OS vendors are paying more attention which will lead to more stable and secure platforms upon which developers can write more secure code.

    It will be interesting to see how the intersection of these "more secure" OS’s factor with virtualization security discussed in #1 above.

    Vista SP1 is due to ship in 2008 and will include APIs through which third-party security products can work with kernel patch protection on Vista
    x64, more secure BitLocker drive encryption and a better elliptic curve cryptography PRNG (pseudo-random number generator).  Follow-on releases to Leopard will likely feature security enhancements beyond those delivered this year.

  9. Compliance stops being a dirty word  & Risk Management moves beyond buzzword
    Today we typically see the role of information security described as blocking and tackling: focused on managing threats and
    vulnerabilities balanced against the need to be "compliant" to some
    arbitrary set of internal and external policies.  In many people’s
    assessment then, compliance equals security.  This is an inaccurate and
    unfortunate misunderstanding.

    In 2008, we’ll see many of the functions of security — administrative, policy and operational — become much more visible and transparent to the business and we’ll see a renewed effort placed on compliance within the scope of managing risk because the former is actually a by-product of a well-executed risk management strategy.

    We have compliance as an industry today because we manage technology threats and vulnerabilities and don’t manage risk.  Compliance is actually nothing more than a way of forcing transparency and plugging a gap between the two.  For most, it’s the best they’ve got.

    What’s traditionally preventing the transition from threat/vulnerability management to risk management is the principal focus on technology with a lack of a good risk assessment framework and thus a lack of understanding of business impact.

    The availability of mature risk assessment frameworks (OCTAVE, FAIR, etc.) combined with the maturity of IT and governance frameworks (CoBIT, ITIL) and the readiness of the business and IT/Security cultures to accept risk management as a language and actionset with which they need to be conversant will yield huge benefits this year.

    Couple that with solutions like Skybox and you’ve got the makings of a strategic risk management strategy that can bring security into closer alignment with the business.
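For a taste of what frameworks like FAIR actually quantify, here's a crude FAIR-flavored sketch. FAIR proper works with calibrated ranges and distributions rather than the single point estimates used below, and the numbers are entirely made up for illustration:

```python
def annualized_loss_exposure(loss_event_frequency, probable_loss_magnitude):
    """Crude expected-loss estimate: events per year multiplied by
    expected loss per event. The core idea of frequency x magnitude,
    stripped of FAIR's calibrated ranges and distributions."""
    return loss_event_frequency * probable_loss_magnitude

# Hypothetical scenarios: frequent laptop losses vs. a rare, costly breach
laptop = annualized_loss_exposure(12, 5_000)        # 12/yr at $5k each
breach = annualized_loss_exposure(0.1, 2_000_000)   # 1-in-10-years, $2M

# The rare event dominates — which is why business impact, not just
# threat frequency, has to drive where the security budget goes.
print(breach > laptop)  # True
```

Even this toy arithmetic shows why a threat/vulnerability-only view misleads: the noisiest problem is rarely the riskiest one.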

  10. Rich Mogull will, indeed, move in with his mom and start speaking Klingon
    ’nuff said.

So, there we have it.  A little bit of sunshine in your otherwise gloomy day.


Understanding & Selecting a DLP Solution…Fantastic Advice But Wholesale Misery in 10,000 Words or More…

November 6th, 2007 9 comments

If you haven’t been following Rich Mogull’s amazing writeup on how to "Understand and Select a Data Leakage Prevention (DLP) Solution," you’re missing one of the best combinatorial market studies, product dissections and sets of consumer advice available on the topic from The Man who covered the space at Gartner.

Here’s a link to the latest episode (part 7!) that you can use to work backwards from.

This is not a knock on the enormous amount of work Rich has done to educate us all; in fact, it’s probably one of the reasons he chose to write this magnum opus: this stuff is complicated, which explains why we’re still having trouble solving this problem… 

If it takes 7 large blog posts and over 10,000 words to enable someone
to make a reasonably educated decision on how to consider approaching the purchase of one of these solutions, there are two possible reasons for this:

  1. Rich is just a detail-oriented, anal-retentive ex-analyst who does a fantastic job of laying out everything you could ever want to know about this topic given his innate knowledge of the space, or
  2. It’s a pie that ain’t quite baked.

I think the answer is "C – All of the above," and it’s absolutely
no wonder why this market feature has a cast of vendors who are
shopping themselves to the highest bidder faster than you can say…

Yesterday we saw the leader in this space (Vontu) finally submit to the giant Yellow Sausage Machine.

The sales cycle and adoption attach rate for this sort of product must
be excruciating if one must be subjected to the equivalent of the Old
Testament just to understand the definition and scope of the solution…as a consumer, I know I have a pain that needs amelioration in this category, but which one of these ointments is going to stop the itching?

I dig one of the first paragraphs in Part I, which is probably the first clue that we’re going to hit a slippery slope: 

The first problem in understanding DLP is figuring out what we’re
actually talking about. The following names are all being used to
describe the same market:

  • Data Loss Prevention/Protection
  • Data Leak Prevention/Protection
  • Information Loss Prevention/Protection
  • Information Leak Prevention/Protection
  • Extrusion Prevention
  • Content Monitoring and Filtering
  • Content Monitoring and Protection

And I’m sure I’m missing a few. DLP seems the most common term, and
while I consider its life limited, I’ll generally use it for these
posts for simplicity. You can read more about how I think of this progression of solutions here.

So you’ve got that goin’ for ya… 😉

In the overall evolution of the solution landscape, I think that this iteration of the DLP/ILP/EP/CMF/CMP (!) solution sets raises the visibility of the need to make decisions on content in context and to focus on information centricity (data-centric "security" for the technologists) instead of the continued deployment of packet-filtering 5-tuple network colanders and host-based agent bloatscapes being foisted upon us.
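As a toy example of "decisions on content" at its very simplest — the kind of check a CMF/DLP engine performs long before context even enters the picture — here's a sketch that flags probable credit card numbers, using the Luhn checksum to prune look-alike digit strings (a classic false-positive source). Purely illustrative; real products layer on fingerprinting, classification and, critically, context:

```python
import re

def luhn_ok(digits: str) -> bool:
    """Luhn checksum: weeds out random 16-digit strings that merely
    look like card numbers."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:   # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str):
    """Find 16-digit candidates (spaces/hyphens allowed) and keep only
    those that pass the Luhn check."""
    candidates = re.findall(r"\b(?:\d[ -]?){15}\d\b", text)
    return [c for c in candidates if luhn_ok(re.sub(r"[ -]", "", c))]

print(find_card_numbers(
    "order ref 4111 1111 1111 1111, not 1234 5678 9012 3456"
))  # ['4111 1111 1111 1111']
```

The second string matches the pattern but fails the checksum — which is exactly the precision-versus-accuracy distinction from the predictions post: the regex alone is accurate about "looks like a card number," but only content inspection with a little intelligence gets you precision worth enforcing on.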

More on the topic of Information Centricity and its relevance to Information Survivability soon.  I spent a fair amount of time talking about this as a source of disruptive innovation/technology during my keynote at the Information Security Decisions conference yesterday.

Great conversations were had afterwards with some *way* smart people on the topic, and I’m really excited to share them once I can digest the data and write it down.


(Image Credit: Stephen Montgomery)