
Archive for March, 2008

Endpoint Security vs. DLP? That’s Part Of the Problem…

March 31st, 2008

Larry Walsh wrote something (Defining the Difference Between Endpoint Security and Data Loss Prevention) that sparked an interesting debate based upon a vendor presentation given to him on "endpoint security" by SanDisk.

SanDisk is bringing to market a set of high-capacity USB flash drives that feature built-in filesystem encryption as well as strong authentication and access control.  If the device gets lost with the data on it, it’s "safe and secure" because it’s encrypted.  They are positioning this as an "endpoint security" solution.

I’m not going to debate the merits/downsides of that approach because I haven’t seen their pitch, but suffice it to say, I think it’s missing a "couple" of pieces to solve anything other than a very specific set of business problems.

Larry’s dilemma stems from the fact that he maintains that this capability and functionality is really about data loss protection and doesn’t have much to do with "endpoint security" at all:

We debated that in my office for a few minutes. From my perspective, this solution seems more like a data loss prevention solution than endpoint security. Admittedly, there are many flavors of endpoint security. When I think of endpoint security, I think of network access control (NAC), configuration management, vulnerability management and security policy enforcement. While this solution is designed for the endpoint client, it doesn’t do any of the above tasks. Rather, it forces users to use one type of portable media and transparently applies security protection to the data. To me, that’s DLP.

In today’s market taxonomy, I would agree with Larry.  However, what Larry is struggling with is not really the current state of DLP versus "endpoint security," but rather the future state of converged information-centric governance.  He’s describing the problem that will drive the solution as well as the inevitable market consolidation to follow.

This is actually the whole reason Mogull and I are talking about the evolution of DLP as it exists today to a converged solution we call CMMP — Content Management, Monitoring and Protection. {Yes, I just added another M for Management in there…}

What CMMP represents is the evolved and converged end-state technology integration of solutions that today provide a point solution but "tomorrow" will be combined/converged into a larger suite of services.

Off the cuff, I’d expect that we will see at a minimum the following technologies being integrated to deliver CMMP as a pervasive function across the information lifecycle and across platforms in flight/motion and at rest:

  • Data leakage/loss protection (DLP)
  • Identity and access management (IAM)
  • Network Admission/Access Control (NAC)
  • Digital rights/Enterprise rights management (DRM/ERM)
  • Seamless encryption based upon "communities of interest"
  • Information classification and profiling
  • Metadata
  • Deep Packet Inspection (DPI)
  • Vulnerability Management
  • Configuration Management
  • Database Activity Monitoring (DAM)
  • Application and Database Monitoring and Protection (ADMP)
  • etc…

That’s not to say they’ll all end up as a single software install or network appliance, but rather a consolidated family of solutions from a few top-tier vendors who have coverage across the application, host and network space. 
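Nothing like this exists as a single product today, so purely as a thought experiment, here’s what the "classify once, enforce everywhere" core of a CMMP-style suite might look like.  Every rule, label and function name below is invented for illustration; no vendor’s interface looks like this.

```python
# Illustrative sketch only: a CMMP-style "classify, then apply dispositions"
# pipeline. Every rule, label and name here is invented; this is not any
# vendor's interface.

import re

# Toy content classifiers: information class -> detection pattern
CLASSIFIERS = {
    "pci": re.compile(r"\b\d{4}-\d{4}-\d{4}-\d{4}\b"),  # card-number-ish
    "pii": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # SSN-ish
}

# Converged policy: information class -> ordered dispositions drawn from
# the point solutions above (DLP, DRM/ERM, encryption, etc.)
POLICY = {
    "pci": ["dlp_block", "encrypt"],
    "pii": ["dlp_alert", "erm_wrap", "encrypt"],
    "public": [],
}

def classify(content: str) -> str:
    """Profile the information itself, independent of where it lives."""
    for label, pattern in CLASSIFIERS.items():
        if pattern.search(content):
            return label
    return "public"

def dispositions(content: str) -> list[str]:
    """One classification decision fanned out across the converged functions."""
    return POLICY[classify(content)]
```

The point isn’t the toy regexes; it’s that a single classification of the information drives every downstream control, which is exactly the consolidation the list above implies.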

If you were to look at any enterprise today struggling with this problem, they likely have or are planning to have most of the point solutions above anyway.  The difficulty is that they’re all from different vendors.  In the future, we’ll see larger suites from fewer vendors providing a more cohesive solution.

This really gives us the "cross domain information protection" that Rich talks about.

We may never achieve the end-state described above in its entirety, but it’s safe to say that the more we focus on the "endpoint" rather than the "information on the endpoint," the bigger the problem we will have.

/Hoff

Hey, Hoff, You’re SO Much More of An Asshole In Real Life Than On Your Blog…

March 29th, 2008

Sometimes it’s hard being me. 

I am, admittedly, bipolar and schizophrenic.  Armed with a lack of patience, a fondness for bourbon and an expense account, I can go from hero to zero in the time it takes to read one of my mini-opus blog posts.

It takes me about 5-10 minutes to write one of my blog posts and it shows.  A lot of my thoughts are just that — thoughts.  Sometimes they’re not complete.  That’s actually your job.  Point ’em out and make us both think, but be prepared for passionate debate.

That said, I get asked all the time why I didn’t turn it up to 11 and rip someone a new one on my blog when they post marketing drivel or why I didn’t squirt a product with lighter fluid and set it ablaze instead of taking the less flammable road.

You see, my blog represents the kinder, gentler version of me (scary, I know.)  It’s me, getting in touch with my feminine side.

So I find it genuinely amusing when people are surprised that I am *more* of an asshole in real life than I am on my blog.  I feel that’s better than the other way around, honestly. 

I find it deliciously ironic that I seem to represent the minority in this characterization, so let me explain why it is that I’ve decided to be more restrained than I used to be:

  1. I’m getting older.  Maybe it’s a lack of fiber or almost 15 years of marriage, but some things I just let roll off my shoulders these days.  It could be that training 4-5 times a week in Brazilian Jiu Jitsu lets me deal with the bottled-up rage that a rear-naked choke, armbar or cross-collar choke seems to take care of.  Some people have Calgon to take them away, but for me, I’ve got nothing to prove besides the fact that I’m not afraid to say that I have nothing to prove.
     
  2. You people are smart.  If I ask very specific questions  and raise issues to which people respond like programmed spokesholes from the planet Marketron, you’ll see right through them and arrive at the same point as you would were I to lead you down the path.
  3. It’s a small freaking world.  I don’t want some dude I piss off now to run over my dogma with his Karma later.  It takes a ton to really get me going, and bad things will occur when you do.  One of my first blogging Tourette’s adventures ended up getting someone fired, and as hysterical as that is, unless what someone says is personally offensive, criminal or steps on the rights of others, I’ll poke a little and that person will look like an assclown all by themselves.
  4. Context is everything, permanence is scary.  It’s impossible to have a conversation via blogs.  Comment pong sucks donkey and more often than not, sentences get picked apart due to use of passive voice and arguments ensue debating the trees for the forest.  And it stays around forever.  If I have beef with someone regarding something, I’ll email them or *gasp* talk to them.  I don’t want some printout from the wayback machine being entered into evidence as People’s Exhibit #3.
     
  5. I’ve got 3 kids.  Besides having to act as moral compass, my three girls eat like piranha, need to learn how to be good humans, and require daily sacrifices at the Webkinz/Hannah Montana/Jonas Brothers altar.  That shit is expensive on all fronts.  I need a paycheck.  Yes, I’m a sellout to the man, er, woman.  You don’t seem to mind when I expense dinner and drinks though, huh?
     
  6. It’s best to pick your battles.  When something stinks, I tell you.  When I believe or don’t believe in something, I say it.  I just don’t need to pour gas on a fire for effect.  Sometimes, it’s just not worth the time, effort or exposure.  See #7.
     
  7. I’ve got better shit to do.  ’nuff said.

I do hope that opening the kimono and revealing my humanity  doesn’t alarm anyone.  Rest assured, however, that in person I really am a huge asshole.  I don’t have a lot of friends and that’s the way I like it.  I’m rarely wrong and given that fact, I’m loud, opinionated and don’t mind sharing. 

I think the real-life version of me is *so* much better than this one, but YMMV.

Ask anyone who’s had the misfortune of knowing me for any length of time.  If my Feedburner stats take a dump, so be it. 

/Hoff

Update: Just to be clear, I was laughing when I wrote this, so hopefully you are when you’re reading it.  This wasn’t a plea for pity nor was it because I’m being psychically marauded by a rogue band of empaths looking to bring me down.  I’m quite happy being me.  Thanks for the virtual hugs from those of you thinking I was needing one! 😉

Categories: General Rants & Raves

Performance Implications Of Security Functions In Virtualized Environments

March 28th, 2008

In my VirtSec presentations, I lead my audience through the evolution of virtualized security models, describing what configuration and architecture options we have in implementing existing and emerging security solutions, both now and projected out to about three years from now.

I’ll be posting that shortly.

Three of the interesting things I highlight, the ones that make the light bulbs go off in the audience, are:

  1. The compute (CPU) and I/O overhead added by security software running either in the VMs on top of the guest OSes, in security virtual appliances on the host, or in a combination of both.
  2. The performance limitations of the current implementations of virtual networking and packet-handling routines due to virtualization architectures and access to hardware.
  3. The complexity imposed when having to manage/map a number of physical to virtual NICs and configure the vSwitch and virtual appliances appropriately to manipulate traffic flows (at L2 and up) through multiple security solutions, whether from an intra-host perspective, integrated with external security solutions, or both. 

I’m going to tackle each of these issues in separate posts, but I’d be interested in speaking to anyone with whom I can compare my testing results. 

Needless to say, I’ve done some basic mock-ups and performance testing with some open source and commercial security products in virtualized configurations under load, and much of the capacity I may have gained by consolidating low-utilization physical hosts into a virtualized single host is eroded by the amount of processing needed by the virtual appliance(s) to keep up with the load under stress without dropping packets or introducing large amounts of latency.

Beware of what this might mean in your production environments.  Ever see a CPU pegged due to a runaway process?  Imagine what happens when every packet between virtual interfaces gets crammed through a virtual appliance in the same host first in order to "secure" it.
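The erosion is easy to model on the back of an envelope.  The function below subtracts both the consolidated guest load and a per-guest inline-inspection tax from a host’s capacity; all figures are invented for illustration, not benchmark results.

```python
# Back-of-envelope model of how an inline virtual security appliance erodes
# the capacity gained by consolidation. All figures are illustrative only,
# not benchmarks of any product.

def remaining_headroom(n_guests: int,
                       guest_util: float,      # avg CPU fraction per guest
                       appliance_cost: float,  # CPU fraction to inspect one guest's traffic
                       host_capacity: float = 1.0) -> float:
    """CPU headroom left on the host after guests plus inline inspection."""
    guest_load = n_guests * guest_util
    inspection_load = n_guests * appliance_cost  # every inter-VM packet inspected
    return host_capacity - guest_load - inspection_load

# Ten 5%-utilized guests fit comfortably on their own...
print(remaining_headroom(10, 0.05, 0.0))   # ~0.5 -> roughly 50% headroom
# ...but an appliance taxing 4% of a core per guest eats most of what
# consolidation bought you.
print(remaining_headroom(10, 0.05, 0.04))  # ~0.1 -> roughly 10% headroom
```

Swap in your own measured utilization and inspection numbers; the shape of the result is what matters, not my made-up inputs.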

I made mention of this in my last post:

The reality is that for reasons I’ve spoken of many times, our favorite ISV’s have been a little handicapped by what the virtualization platforms offer up in terms of proper integration against which we can gain purchase from a security perspective.  They have to sell what they’ve got while trying to remain relevant all the while watching the ground drop out beneath them.

These vendors have a choice: employ some fancy marketing messaging to make it appear as though the same products you run on a $50,000+ dedicated security appliance will actually perform just as well in a virtual form.

Further, tell you that you’ll enjoy just as much visibility without disclosing limitations when interfaced to a virtual switch that makes it next to impossible to replicate most complex non-virtualized topologies.

Or, just wait it out and see what happens hoping to sell more appliances in the meantime.

Some employ all three strategies (with a fourth being a little bit of hope.)

This may differ based upon virtualization platforms and virtualization-aware chipsets, but capacity planning when adding security functions is going to be critical in production environments for anyone going down this path. 

/Hoff

Categories: Virtualization

It’s Virtualization March Madness! Up First, Montego Networks

March 27th, 2008

If you want to read about Montego Networks right off the bat, you can skip the Hoff-Tax and scroll down to the horizontal rule and start reading.  Though I’ll be horribly offended, I’ll understand…

I like being contradictory, even when it appears that I’m contradicting myself.  I like to think of it as giving a balanced perspective on my schizophrenic self…

You will likely recall that my latest post suggested that the real challenge for virtualization at this stage in the game is organizational and operational and not technical. 

Well, within the context of this post, that’s obviously half right, but it’s an incredibly overlooked fact that is causing distress in most organizations, and it’s something that technology — as a symptom of the human condition — cannot remedy.

But back to the Tech.

The reality is that for reasons I’ve spoken of many times, our favorite ISV’s have been a little handicapped by what the virtualization platforms offer up in terms of proper integration against which we can gain purchase from a security perspective.  They have to sell what they’ve got while trying to remain relevant all the while watching the ground drop out beneath them.

These vendors have a choice: employ some fancy marketing messaging to make it appear as though the same products you run on a $50,000+ dedicated security appliance will actually perform just as well in a virtual form.

Further, tell you that you’ll enjoy just as much visibility without disclosing limitations when interfaced to a virtual switch that makes it next to impossible to replicate most complex non-virtualized topologies. 

Or, just wait it out and see what happens hoping to sell more appliances in the meantime.

Some employ all three strategies (with a fourth being a little bit of hope.)

Some of that hoping is over and is on its way to being remedied with enablers like VMware’s VMsafe initiative.  It’s a shame that we’ll probably end up with a battle of APIs, with ISV’s having to choose which virtualization platform provider’s API to support rather than a standard across multiple platforms.

Simon Crosby from Xen/Citrix made a similar comment in this article:

While I totally agree with his sentiment, I’m not sure Simon would be as vocal or egalitarian had Citrix been first out of the gate with their own VMsafe equivalent.  It’s always sad when one must plead for standardization when one is not in control of the standards…and by the way, Simon, nobody held a gun to the heads of the 20 companies that rushed for the opportunity to be first out of the gate with VMsafe as it’s made available.

While that band marches on, some additional measure of aid may come from innovative youngbloods looking to build and sell you the next better mousetrap.


As such, in advance of the RSA Conference in a couple of weeks, the security world’s all aflutter with the sounds of start-ups being born out of stealth as well as new-fangled innovation clawing its way out of up-starts seeking to establish a beachhead in the attack on your budget.

With the normal blitzkrieg of press releases that will undoubtedly make their way to your doorstep, I thought I’d comment on a couple of these companies in advance of the noise.

A lot of what I want to say is sadly under embargo, but I’ll get further in-depth later when I’m told I can take the wraps off.  You should know that almost all of these emerging solutions, as with the one below, operate as virtual appliances inside your hosts and require close and careful configuration of the virtual networking elements therein.

If you go back to the meat of the organizational/operational issue I describe above, who do you think has access and control over the virtual switch configurations?  The network team?  The security team?  How about the virtual server admin team…are you concerned yet?

Here’s my first Virtualized March Madness (VMM, get it!) ISV:

  • Montego Networks – John Peterson used to be the CTO at Reflex, so he knows a thing or two about switching, virtualization and security.  I very much like Montego’s approach to solving some of the networking issues associated with vSwitch integration and better yet, they’ve created a very interesting business model that actually is something like VMsafe in reverse. 

    Essentially Montego’s HyperSwitch works in conjunction with the integrated vSwitch in the VMM and uses some reasonably elegant networking functionality to classify traffic and either enforce dispositions natively using their own "firewall" technologies (L2-L4) or — and this is the best part — redirect traffic to other named security software partners to effect disposition. 

    If you look on Montego’s website, you’ll see that they show StillSecure and BlueLane as candidates for what they call HyperVSecurity partners.  They also do some really cool stuff with NetFlow.

    Neat model.  When VMsafe is available, Montego should then allow these other third party ISV’s to take advantage of VMsafe (by virtue of the HyperSwitch) without the ISV’s having to actually modify their code to do so – Montego will build that to suit.  There’s a bunch of other stuff that I will write about once the embargo is lifted.

    I’m not sure how much runway and strategic differentiation Montego will have from a purely technical perspective as VMsafe ought to level the playing field for some of the networking functionality with competitors, but the policy partnering is a cool idea. 

    We’ll have to see what the performance implications are given the virtual appliance model Montego (and everyone else) has employed.  There’s lots of software in them thar hills doing the flow/packet processing and enacting dispositions…and remember, that’s all virtualized too.

    In the long term, I expect we’ll see some of this functionality appear natively in other virtualization platforms.

    We’ll see how well that prediction works out over time as well as keep an eye out for that Cisco virtual switch we’ve all been waiting for…*
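Since the details of how Montego actually does this are under embargo, the sketch below is strictly my own generalization of the pattern: an inline switch hook classifies a flow on L2-L4 fields and either enforces a disposition natively or redirects the traffic to a named partner appliance.  None of the names here correspond to any real interface, Montego’s or otherwise.

```python
# Generic sketch of the inline "classify and redirect" pattern described
# above: match a flow on L2-L4 fields, then enforce locally or hand off to
# a named partner security VM. Purely illustrative; no resemblance to any
# vendor's actual implementation.

from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    src_mac: str
    dst_ip: str
    dst_port: int

# Ordered rules: (predicate, disposition). A disposition is either a local
# action ("allow"/"drop") or a redirect into a named partner appliance.
RULES = [
    (lambda f: f.dst_port == 25,   "redirect:antispam_vm"),
    (lambda f: f.dst_port == 1433, "redirect:db_monitor_vm"),
    (lambda f: f.dst_port >= 1024, "allow"),
]

def disposition(flow: Flow, default: str = "drop") -> str:
    """First matching rule wins; unmatched flows get the default action."""
    for match, action in RULES:
        if match(flow):
            return action
    return default
```

The partnering angle falls out of the `redirect:` dispositions: the switch owns classification, and third parties only have to receive the traffic it hands them.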

I’ll be shortly talking about Altor Networks and Blue Lane’s latest goodies.

If you’ve got a mousetrap you’d like to see in lights here, feel free to ping me, tell me why I should care, and we’ll explore your offering.  I guarantee that if it passes the sniff test here it will likely mean someone else will want a whiff.

/Hoff

* Update: Alan over at the Virtual Data Center Blog did a nice write-up on his impressions and asks why this functionality isn’t in the vSwitch natively.  I’d pile onto that query, too.  Also, I sort of burned myself by speaking to Montego, because the details of how they do what they do are under embargo for a little while longer based on my conversation, so I can’t respond to Alan…

The Challenge of Virtualization Security: Organizational and Operational, NOT Technical

March 25th, 2008

Taking the bull by the horns…

I’ve spoken many times over the last year on the impact virtualization brings to the security posture of organizations.  While there are certainly technology issues that we must overcome, we don’t have solutions today that can effectively deliver us from evil. 

Anyone looking for the silver bullet is encouraged to instead invest in silver buckshot.  No shocker there.

There are certainly technology and solution providers looking to help solve these problems, but honestly, they are constrained by the availability and visibility to the VMM/Hypervisors of the virtualization platforms themselves. 

Obviously announcements like VMware’s VMsafe will help turn that corner, but VMsafe requires re-tooling of ISV software and new versions of the virtualization platforms.  It’s a year+ away and only addresses concerns for a single virtualization platform provider (VMware) and not others.

The real problem of security in a virtualized world is not technical, it is organizational and operational.

With the consolidation of applications, operating systems, storage, information, security and networking — all virtualized into a single platform rather than being discretely owned, managed and supported by (reasonably) operationally-mature teams — the biggest threat we face in virtualization is that we have lost not only visibility, but also the clearly-defined lines of demarcation garnered from the separation of duties we had in the non-virtualized world.

Many companies have segmented off splinter cells of "virtualization admins" from the server teams, and they are often solely responsible for the virtualization platforms, which includes the care, feeding, diapering and powdering of not only the operating systems and virtualization platforms, but the networking and security functionality also.

No offense to my brethren in the trenches, but this is simply a case of experience and expertise.  Server admins are not experts in network or security architectures and operations, just as the latter cannot hope to be experts in the former’s domain.

We’re in an arms race now where virtualization brings brilliant flexibility, agility and cost savings to the enterprise, but ultimately further fractures the tenuous relationships between the server, network and security teams.

Now that the first-pass consolidation pilots of virtualizing non-critical infrastructure assets have been held up as shining examples of ROI in our datacenters, security and networking teams are exercising their veto powers as virtualization efforts creep toward critical production applications, databases and transactional systems.

Quite simply, the ability to express risk, security posture, compliance, troubleshooting and the measurement of SLAs and dependencies within the construct of a virtualized world is much more difficult than in the discretely segregated physical world, and when taken to the mat on the issues, the virtual server admins simply cannot address them competently in the language of the security and risk teams.

This is going to make for some unneeded friction in what was supposed to be a frictionless effort.  If you thought the security teams were thought of as speed bumps before, you’re not going to like what happens soon when they try to delay/halt a business-driven effort to reduce costs, speed time-to-market, increase availability and enable agility.

I’ll summarize my prior recommendations as to how to approach this conundrum in a follow-on post, but the time is now to get these teams together and craft the end-play strategies and desired end-states for enterprise architecture in a virtualized world before we end up right back where we started 15+ years ago…on the hamster wheel of pain!

/Hoff

An Interesting Role Transition For Me…

March 25th, 2008

I don’t write a lot about what I do for my day job/paycheck.  There are lots of reasons for that, but sometimes the Universe shakes things up a bit and this is one of those times.

I came on board as the Chief Architect of Security Innovation at Unisys eight months ago.  With the intriguing title came some really interesting opportunities to branch into areas that I didn’t have a lot of direct experience with while also maintaining a role of evangelist and sometimes-spokeshole.

I’ve been involved in areas of converged security with large sensor networks, issues of (inter)national security, public sector engagements and all sorts of mind-blowing non-classified military and federal activities.  It’s a whole other world. 

Floating about global business units is entertaining and stimulating, but at times a bit overwhelming and less mission-oriented than I am used to.  It’s cool to exercise strategy muscles in tactical maneuvers but I’m technically a start-up/turnaround guy who likes focused and goal-oriented challenges.

Last week I got an opportunity to do just that — work my strategy/futurist muscles — with a really refined focus by moving over into our S&T (Systems and Technology) division as the Chief Security Architect headed up by ex-HP exec Rich Marcello who is the corporate SVP and President of the S&T division. Rich is a very cool guy — he’s a Mac nut, iPhone owner and musician.  He definitely thinks outside of the box.

I’m tasked with crafting a comprehensive security strategy across all the S&T product, solution and services portfolios and aligning that with the rest of our strategic security initiatives across the company.

So besides working for a very cool guy and with an excellent team, this is really interesting to me because S&T is focused on the delivery of Real Time Infrastructure (RTI) solutions and services which are functionally based upon virtualization technologies and all the interesting things that go along with that.

I’m excited about this because (as if you can’t tell) I am rather interested in virtualization and security so now I get to put those two things together not only here, but as my day job, too. 

So, for those of you who were confused/wondering about what I actually *do* besides blogging, now you know!

OK, back to our regularly-scheduled programming…

/Hoff

Categories: General Rants & Raves

Risky Business — The Next Audit Cycle: Bellwether Test for Critical Production Virtualized Infrastructure

March 23rd, 2008

I believe it’s fair to suggest that thus far, the adoption of virtualized infrastructure has been driven largely by consolidation and cost reduction.

In most cases the initial targets for consolidation through virtualization have focused on development environments, internally-facing infrastructure and non-critical application stacks and services.

Up until six months ago, my research indicated that most larger companies were not yet at the point where either critical applications/databases or those that were externally-facing were candidates for virtualization. 

As the virtualization platforms mature, as the management and mobility functionality provides leveraged improvement over physical, non-virtualized counterparts, and as the capabilities to provide resilient services emerge, there is mounting pressure to expand virtualization efforts to include these remaining services/functions. 

With cost-reduction and availability improvements becoming more visible, companies are starting to tip-toe down the path of evaluating virtualizing everything else including these critical application stacks, databases and externally-facing clusters that have long depended on physical infrastructure enhancements to ensure availability and resiliency.

In these "legacy" environments, the HA capabilities are often provided by software-based clustering in the operating systems or applications, or via the network thanks to load balancers and the like.  Each of these solution sets is managed by a different team.  There’s a lot of complexity in making it all appear simple, secure and available.

This raises some very interesting questions that focus on assessing risk in these environments, in which duties and responsibilities are largely segmented and well-defined, versus their prospective virtualized counterparts, where the opposite is true.

If companies begin to virtualize and consolidate the applications, storage, servers, networking, security and high-availability capabilities into the virtualization platforms, where does the buck stop in terms of troubleshooting or assurance?  How does one assess risk?  How do we demonstrate compliance and security when "all the eggs are in one basket?"

I don’t think it’s accurate to suggest that the lack of mature security solutions has stalled the adoption of virtualization across the board, but I do think that as companies evaluate virtualization candidacy, security has been a difficult-to-quantify speed bump that has been danced around. 

We’ve basically been playing a waiting game.  The debate over virtualization and the inability to gain consensus on the increase/decrease in risk posture has left us at the point where we have taken the low-hanging fruit that is either non-critical or has resiliency built in, and simply consolidated it.  But now we’re at a crossroads as virtualization phase 2 has begun.

It’s time to put up or shut down…

Over the last year since my panel on virtualization security at RSA, I’ve been asking the same question in customer engagements and briefings:

How many of you have been audited by either internal or external governance organizations against critical virtualized infrastructure that is in a production role and/or externally facing? 

A year ago, nobody raised their hands.  I wonder what it will look like this year?

If IT and Security professionals can’t agree on the relative "security" or risk increase/decrease that virtualization brings, what position do you think that leaves the auditors in?  They are basically going to measure relative compliance to guidelines prescribed by governance and regulatory requirements.  Taken quite literally, many environments featuring virtualized production components would not pass an audit.  PCI/DSS comes to mind.

In virtualized environments we’ve lost visibility, we’ve lost separation of duties, and we’ve lost the inherent simplicity that spreading functions over discrete physical entities provides.  Existing controls and processes get us only so far, and the technology crutches we used to be able to depend on are buckling when we add the V-word to the mix.

We’ve seen technology initiatives such as VMware’s VMsafe that are still 9-12 months out that will help gain back some purchase in some of these areas, but how does one address these issues with auditors today?

I’m looking forward to the answer to this question at RSA this year to evaluate how companies are dealing with GRC (governance, risk and compliance) audits in complex critical production environments.

/Hoff

A Cogent Example of Information Centricity

March 21st, 2008

My buddy Adrian Lane over @ IPLocks wrote up a really nice example of an information-centric security model, based on the discussions Mogull has been having on his blog regarding the same, which I commented on a couple of weeks ago here and here:

I want to provide the simplest example of what I consider to be information centric security. I have never spoken with Rich directly on this subject and he may completely disagree, but this is one of the simplest examples I can come up with. It embodies the basic tenets, but it also exemplifies the model’s singular greatest challenge. Of course there is a lot more possible than what I am going to propose here, but this is a starting point.

Consider a digitally signed email encrypted with PGP as a tangible example.

Following Rich Mogull’s defining tenets/principles post:

  • The data is self describing as it carries MIME type and can encrypt the payload and leave business context (SMTP) exposed.
  • The data is self defending in both confidentiality (encrypted with the recipient public key) and integrity (digitally signed by the sender).
  • While the business context in this example is somewhat vague, it can be supplied in the email message itself, or added as a separate packet and interpreted by the application(s) that decrypt, verify hash or read the contents. Basically, it’s variable.
  • The data is protected in motion, does not need network support for security, and really does not care about the underlying medium of conveyance for security, privacy or integrity. The verification can be performed independently once it reaches its destination. And the payload, the message itself, could be wrapped up and conveyed into different applications as well. A trouble ticket application or customer relationship management application are but two examples of changing business contexts.
  • The policies can work consistently provided there is an agreed upon application processing. I think Rich’s intention was business processing, but it holds for security policies as well. Encryption provides a nice black & white example as anyone without the appropriate private key is not going to gain access to the email message. Business rules and processes embedded should have some verification that they have not been altered or tampered with, but cryptographic hashes can provide that. We can even add a signed audit trail, verifiable to receiving parties, within the payload.

I might add that there should be independent ‘brokerage’ facilities for dispute resolution or verification of some types of rules, process or object state in workflow systems. If recipients can add or even alter some subset of the information, whose copy is the latest and greatest? But anyway, that is too much detail for this example.
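As an aside, the mechanics Adrian describes (sign for integrity, encrypt for confidentiality, verify independently of transport) can be sketched in a few lines. This is only an illustration of the pattern using the Python `cryptography` package, not OpenPGP itself and not any product mentioned here; the keys and message are generated inline purely for the demo.

```python
# Sketch: sign-then-encrypt in the spirit of PGP mail, using the Python
# `cryptography` package. Hybrid encryption mirrors how PGP actually works:
# a fresh symmetric key protects the payload, and only that key is
# RSA-encrypted to the recipient.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

# Sender and recipient key pairs (generated here only for the demo)
sender_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

message = b"Quarterly numbers attached. Do not forward."

# Integrity: the sender signs the plaintext with their private key
signature = sender_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Confidentiality: symmetric key encrypts the payload; the key itself
# is wrapped with the recipient's public key
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(message)
wrapped_key = recipient_key.public_key().encrypt(
    session_key,
    padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

# The recipient reverses the process; verification needs nothing from
# the network or the medium of conveyance
recovered_key = recipient_key.decrypt(
    wrapped_key,
    padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
plaintext = Fernet(recovered_key).decrypt(ciphertext)
sender_key.public_key().verify(     # raises InvalidSignature if tampered
    signature, plaintext,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print(plaintext == message)  # True
```

Note that the security travels with the data: whether the blob moves over SMTP, sits on a USB stick, or lands in a trouble-ticket system, only the keyholders can read or verify it.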

I’m not sure what Adrian meant when he said (in boldface) "The data is self describing as it carries MIME type and can encrypt the payload and leave business context (SMTP) exposed."  Perhaps that the traffic is still identified as SMTP (via port 25) even though the content is encrypted?

For this example Adrian used MIME type as the descriptor.  MIME types provide an established, "standardized" format that makes it easy to render decisions and enact dispositions based on (at least) SMTP content in context, but I maintain that depending on where and when you make these decisions (in motion, at rest, etc.) we still need a common metadata format, independent of protocol and application, that would allow analysis even of encrypted data at rest or in motion.
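To make that point concrete, here is a purely hypothetical envelope format. Every field name below is invented for illustration; no such standard exists today, which is precisely the gap described above.

```python
# Hypothetical sketch: a protocol-independent metadata envelope that lets
# intermediaries act on policy without decrypting the payload. The schema
# is invented for illustration -- it is not an existing standard.
import json
import base64
import hashlib

encrypted_payload = b"<PGP or other ciphertext would go here>"

envelope = {
    "content_type": "message/rfc822",            # borrowed from MIME
    "classification": "company-confidential",    # business context
    "handling_policy": ["no-forward", "encrypt-at-rest"],
    "payload_sha256": hashlib.sha256(encrypted_payload).hexdigest(),
    "payload": base64.b64encode(encrypted_payload).decode("ascii"),
}

# A gateway, endpoint agent, or storage system could parse the envelope
# and enforce the handling policy while remaining unable to read the
# encrypted payload itself.
record = json.dumps(envelope)
policy = json.loads(record)["handling_policy"]
print(policy)  # ['no-forward', 'encrypt-at-rest']
```

The interesting part is what the enforcement point does *not* need: access to the plaintext, knowledge of the transport, or awareness of the application that created the data.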

Need versus ability to deliver is a valid concern, of course…

A note on DLP and Information Centric Security: Security that acts directly upon information, and information that embeds its own security, are different concepts, IMO. Under a loose definition, I understand how one could view Data Loss Prevention, in-context Monitoring/IDS and even Assessment as a data-centric examination of security. But this is really not what I am attempting to describe. Maybe we change the name to Embedded Information Security, but that is semantics we can work out later.

I would agree that in the end game, the latter requires less (or perhaps none) of the former.  If the information is self-governing and enforcement of policy is established based upon controls such as strong mutual authentication and privacy-enforcing elements such as encryption, then the information that has embedded "security" is, in and of itself, "…security that acts directly on information."

It’s a valid point, but in the interim we’re going to need this functionality because we don’t yet have a universal method of applying, let alone enforcing, self-described policies on information.

/Hoff

 

Categories: Information Centricity Tags:

Thanks For Your Concern, But I Didn’t Steal Dan Geer’s Presentation…

March 20th, 2008 4 comments

Conspiracy
As previously mentioned, last week Mogull and I presented at SOURCEBoston.  Our offering was a bit of a rough first-pass mashup, pairing my talk on "Disruptive Innovation" with Rich’s excellent "Future of Security" presentation.  It went over decently well, and five minutes after the preso. I bailed to the airport for a flight to New Zealand.

Upon my return, I was catching up on email and noticed all manner of really great feedback on Dan Geer’s keynote that he gave the day after I left.  I was saddened by the fact that I missed it and was really looking forward to reading the transcript of Dan’s talk given how much of a fan I am of his work and intellect.

What followed next ranged from confusion to amusement to happiness and then annoyance and disgust.  I’ve been wrestling with how to frame this so as not to imply anything at all negative about Dan, as I respect him tremendously and do not in any way wish to besmirch him.

I attribute what you are about to read to serendipity and kismet with the unfortunate side-effect caused by a small but persistent group of annoying individuals who have nothing better to do than create conspiracy theories in between games of Halo3 and unrequited love via match.com.

If you read the transcript of Dan’s presentation and compare the two, you will be struck by how large a portion of it mirrors the content and thematic representation of my presentation, down to some incredibly specific examples and references as well as a choice number of unique analogs and anecdotes.

I wasn’t particularly concerned by this, in fact I was jazzed when I realized that Dan was not only saying the same things I was but that we were interlocked on some really cool examples…all until I started getting emails and blog comments suggesting that I had ripped off Dan’s work.

So, let me just (sadly) state for the record two things:

  1. The material in the presentation I gave on 3/12 was an updated version of my keynote presentation I gave at the Information Security Decisions show in Chicago in October 2007.  In fact, I posted the narrative slide-by-slide in four parts:
  2. Rich and I presented the day before Dan did.

So, for those of you who have decided to annoy me and call into question my honor and credibility, you can take both those issues above and stuff ’em in your…it’s clear that I authored and published the bulk of my presentation almost 6 months ago and I spoke before Dan did.  This would make it difficult for me to rip him off unless I was psychic.

I know without a doubt that he didn’t take any of this from me, either, and there’s no reason to suggest otherwise.  I’ll just chalk it up to a great mind (his) and a mediocre one (mine) thinking alike.

So in closing, I’m thrilled that we both spoke of punctuated equilibrium, dampened oscillations, disruptive innovation, cyclical evolution, etc.  It means that I’m doing the same sort of thinking as someone that I truly admire.

I intend to reach out to Dan and tell him how much I really enjoyed his keynote and share with him ahead of time some of my emerging work on chaos theory, the dip and predictive economic modeling theory as applied to InfoSec…I only wish our presentation went over as well as his did 😉

I trust we can put this to bed now?

/Hoff

No Good Deed Goes Unpunished (Or Why NextGen DLP Is a Step On The Information Centric Ladder…)

March 19th, 2008 4 comments

Farmersnakeangled
Rothman wrote a little ditty today commenting on a post I scribbled last week titled "The Walls Are Collapsing Around Information Centricity"

Information centricity – Name that tune.

Of course, the Hoff needs to pile on to Rich’s post about information-centric security. He even finds means to pick apart a number of my statements. Now that he is back from down under, maybe he could even show us some examples of how a DLP solution is doing anything like information-centricity. Or maybe I’m just confused by the uber-brain of the Hoff and how he thinks maybe 500 steps ahead of everyone else.

Based on my limited brain capacity, the DLP vendors can profile and maybe even classify the types of data. But that information is neither self-describing, nor is it portable. So once I make it past the DLP gateway, the data is GONE baby GONE.

In my world of information-centricity, we are focused on what the fundamental element of data can do and who can use it. It needs to be enforced anywhere that data can be used. Yes, I mean anywhere. Name that tune, Captain Hoff. I’d love to see something like this in use. I’m not going to be so bold as to say it isn’t happening, but it’s nothing I’ve seen before. Please please, edumacate me.

I’m always pleased when Uncle Mike shows me some blog love, so I’ll respond in kind, if not only to defend my honor.  Each time Mike "compliments" me on how forward-looking I am, it’s usually accompanied by a gnawing sense that his use of "uber-brained" is Georgian for "dumbass schlock." 😉

Yes, you’re confused by my "uber-brain…" {roll eyes here}

I believe Mike missed a couple of key words in my post, specifically that the next generation of solutions would start to deliver the functionality described in both my and Rich’s posts.

What I referred to was that the evolution of the current generation of DLP solutions as well as the incremental re-tooling of DRM/ERM, ADMP, CMP, and data classification at the point of creation and across the wire gets us closer to being able to enforce policy across a greater landscape.

The current generation of technologies/features such as DLP do present useful solutions in certain cases but in their current incarnation are not complete enough to solve all of the problems we need to solve.  I’ve said this many times.  They will, however, evolve, which is what I was describing.

Mike is correct that today data is not self-describing, but that’s a problem that we’ll need standardization to remedy — a common metadata format would be required if cross-solution policy enforcement were to be realized.  Will we ever get there?  It’ll take a market leader to put a stake in the ground to get us started, for sure (wink, wink.)

As both Mogull and I alluded to in our posts and our SOURCEBoston presentation, we’re keyed into many companies in stealth mode, as well as the roadmaps of established players in this space, and the solutions emerging at the intersection of these technologies (what is becoming CMP) are very promising.

That shouldn’t be mistaken for near-term success, but since my job is to look 3-5 years out on the horizon, that’s what I wrote about.  Perhaps Mike mistook my statement about the fact that companies are beginning to circle the wagons on this issue to mean that they are available now.  That’s obviously not the case.

Hope that helps, Mike.

/Hoff

Categories: Information Centricity Tags: