Search Results

Keyword: ‘jericho forum’

Don’t Hassle the Hoff: Recent Press & Podcast Coverage & Upcoming Speaking Engagements

June 5th, 2008 12 comments

Here is some of the recent press coverage on topics relevant to content on my blog:

Podcasts/Webcasts:

I am confirmed to speak at the following upcoming events:

/Hoff

Categories: Press, Speaking Engagements Tags:

The Walls Are Collapsing Around Information Centricity

March 10th, 2008 2 comments

Since Mogull and I collaborate quite a bit on projects and share many thoughts and beliefs, I wanted to make a couple of comments on his last post on Information Centricity and remind the audience at home of a couple of really important points.

Rich’s post was short and sweet regarding the need for Information-Centric solutions with some profound yet subtle guideposts:

For information-centric security to become a reality, in the long term it needs to follow the following principles:

  1. Information (data) must be self describing and defending.
  2. Policies and controls must account for business context.
  3. Information must be protected as it moves from structured to unstructured, in and out of applications, and changing business context.
  4. Policies must work consistently through the different defensive layers and technologies we implement.

I’m not convinced this is a complete list, but I’m trying to keep to my new philosophy of shorter and simpler. A key point that might not be obvious is that while we have self-defending data solutions, like DRM and label security, for success they must grow to account for business context. That’s when static data becomes usable information.

Mike Rothman gave an interesting review of Rich’s post:


The Mogull just laid out your work for the next 10 years. You just probably don’t know it yet. Yes, it’s all about ensuring that the fundamental elements of your data are protected, however and wherever they are used. Rich has broken it up into 4 thoughts. The first one made my head explode: "Information (data) must be self-describing and defending."

Now I have to clean up the mess. Sure things like DRM are a bad start, and have tarnished how we think about information-centric security, but you do have to start somewhere. The reality is this is a really long term vision of a problem where I’m not sure how you get from Point A to Point B. We all talk about the lack of innovation in security. And how the market just isn’t exciting anymore. What Rich lays out here is exciting. It’s also a really really really big problem. If you want a view of what the next big security company does, it’s those 4 things. And believe me, if I knew how to do it, I’d be doing it – not talking about the need to do it.

The comments I want to make are three-fold:

  1. Rich is re-stating and Mike’s head is exploding around the exact concepts that Information Survivability represents and the Jericho Forum trumpets in their Ten Commandments.  In fact, you can read all about that in prior posts I made on the subjects of the Jericho Forum, re-perimeterization, information survivability and information centricity.  I like this post on a process I call ADAPT (Applied Data and Application Policy Tagging) a lot.

    For reference, here are the Jericho Forum’s Ten Commandments. Please see #9:

    [Images: Jericho Forum Ten Commandments]

  2. As Mike alluded, DRM/ERM has received a bad rap because of how it’s implemented — which has really left a sour taste in the mouths of consumers.  As a business tool, it is the precursor of information-centric policy and will become the linchpin in how we will ultimately gain a foothold on solving the information resiliency/assurance/survivability problem.
  3. As to the innovation and dialog that Mike suggests is lacking in this space, I’d suggest he’s suffering from a bit of Shiitake-ism (a la mushroom-itis.)  The next generation of DLP solutions that are becoming CMP (Content Monitoring and Protection — a term I coined) are evolving to deal with just this very thing.  It’s happening.  Now.

    Further to that, I have been briefed by some very, very interesting companies that are in stealth mode who are looking to shake this space up as we speak.

So, prepare for Information Survivability, increased Information Resilience and assurance.  Coming to a solution near you…

/Hoff

The Unbearable Lightness of Being…Virtualized

March 10th, 2008 1 comment

My apologies to Pete Lindstrom for not responding sooner to his comment regarding my "virtualization & defibrillation" post, and a hat-tip to Rothman for the reminder.

Pete was commenting on a snippet from my larger post dealing with the following assertion:

The Hoff-man pontificates yet again:

Firewalls, IDS/IPSs, UTM, NAC, DLP — all of them have limited visibility in this rapidly "re-perimeterized" universe in which our technology operates, and in most cases we’re busy looking at uninteresting and practically non-actionable things anyway.  As one of my favorite mentors used to say, "we’re data rich, but information poor."

In a general sense, I agree with this statement, but in a specific sense, it isn’t clear to me just how significant this need is. After all, we are talking today about 15-20 VMs per physical host max and I defy anyone to suggest that we have these security solutions for every 15-20 physical nodes on our network today – far from it.

Let’s shed a little light on this with some context regarding *where* in the network we are virtualizing as I don’t think I was quite clear enough.

Most companies have begun to virtualize their servers driven by the economic benefits of consolidation and have done so in the internal zones of the networks.  Most of these networks are still mostly flat from a layer 3 perspective, with VLANs providing isolation for mostly performance/QoS reasons.

In discussions with many companies ranging from the SME to the Fortune 10, only a minority are virtualizing hosts and systems placed within traditional DMZ’s facing external networks.

From that perspective, I agree with Pete.  Most companies are still grappling with segmenting their internal networks based on asset criticality, role, function or for performance/security reasons, so it stands to reason that internally we don’t, in general, see "…security solutions for every 15-20 physical nodes on our network today," but we certainly do in externally-facing DMZ’s.

However, that’s changing — and will continue to — as we see more and more basic security functionality extend into switching fabrics and companies begin to virtualize security policies internally.  The next generation of NAC/NAP is a good example.  Internal segmentation will drive the need for logically applying combinations of security functions virtualized across the entire network space.

Furthermore, there are companies who are pushing well past the 15-20 VM’s per host, and that measure is really not that important.  What is important is the number of VM’s on hosts per cluster.  More on that later.

That said, it isn’t necessarily a great argument for me simply to suggest that since we don’t do it today in the physical world that we don’t have to do it in the virtual world. The real question in my mind is whether folks are crossing traditional network zones on the same physical box. That is, do trust levels differ among the VMs on the host? If they don’t, then not having these solutions on the boxes is not that big a deal – certainly useful for folks who are putting highly-sensitive resources in a virtual infrastructure, but not a lights-out problem.

The reality is that as virtualization platforms become more ubiquitous, internal segmentation and stricter definition of host grouping in both virtualized and non-virtualized server clusters will become an absolute requirement, because it’s the only way security policies can be applied given today’s technology.
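To make the trust-level point concrete, here is a rough sketch (Python; the inventory format, host names and zone labels are invented for illustration, not any vendor’s API) of the kind of check I’m describing: flag any host whose resident VMs span more than one trust zone, because those are the hosts that need virtualized security controls or re-homing.

# Rough sketch: flag hosts whose resident VMs span more than one trust zone.
# The inventory format and zone names are invented for illustration only.
inventory = {
    "esx-dmz-01": [("web-01", "dmz"), ("web-02", "dmz")],
    "esx-core-07": [("hr-app", "internal"), ("fin-db", "restricted")],
}

def mixed_trust_hosts(inventory):
    """Return {host: zones} for hosts mixing VMs from different trust zones."""
    flagged = {}
    for host, vms in inventory.items():
        zones = {zone for _, zone in vms}   # distinct trust zones present on this host
        if len(zones) > 1:
            flagged[host] = zones
    return flagged

for host, zones in mixed_trust_hosts(inventory).items():
    print(f"{host}: VMs cross trust zones {sorted(zones)} -- needs virtual security controls or re-homing")

The same check, run per cluster rather than per host, is what matters once VMs start migrating between hosts.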

This will ultimately drive to the externally-facing DMZ’s over time.  It can’t not.  However, today, the imprecise measurement of quantifiable risk (or lack thereof), combined with exceptionally large investments in "perimeter" security technologies, keeps most folks at arm’s length from virtualizing their DMZ’s from a server perspective.  Lest we forget our auditor friends and things like PCI DSS, which complicate the issue.

Others might suggest that with appropriately architected multi-tier environments combined with load balanced/clustered server pools and virtualized security contexts, the only difference between what they have today and what we’re talking about here is simply the hypervisor.  With blade servers starting to proliferate in the DMZ, I’d agree.

Until we have security policies that are, in essence, dynamically instantiated as they are described by the virtual machines and the zones into which they are plumbed, many feel they are still constrained by trying to apply static ACL-like policies to incredibly dynamic compute models.

So we’re left with constructing little micro-perimeter DMZ’s throughout the network. 

If the VMs do cross zones, then it is much more important to have virtual security solutions. In fact, we don’t recommend using virtual security appliances as zone separators simply because the hypervisor’s additional attack surface introduces unknown levels of risk in an environment. (Similarly, in the physical world we don’t recommend switch-based security solutions as zone separators either).

What my research is finding in discussion with some very aggressive companies who are embracing virtualization is that starting internally — and for some of the reasons like performance which Pete mentions — companies are beginning to set up clusters of VM’s that do indeed "cross the streams." 😉

I am told by our virtualization technical expert that there may be performance benefits to commingling resources in this way, so at some point it will be great to have these security solutions available. I suppose we should keep in mind that any existing network security solution that isn’t using proprietary OS and/or hardware can migrate fairly easily into a virtual security appliance.

There are most certainly performance "benefits" — I call them band-aids — to co-mingling resources in a single virtual host.  Given the limitations of today’s virtual networking and the ever-familiar I/O bounding we experience with higher-than-acceptable latency in mainstream Ethernet-based connectivity, sometimes this is the only answer.  This is pushing folks to consider I/O virtualization technologies and connectivity other than Ethernet such as InfiniBand.

The real issue I have with the old "just run your tired old security solutions as a virtual appliance within a virtual host" argument is the exact reason that I alluded to in the quote Pete singled out.  We lose visibility, coherence and context using this model.  This is the exact reason for something like VMsafe, but it will come with a trade-off I’ve already highlighted.

The thinner the hypervisor, the more internals will need to be exposed via API to external functions.  While the VMM gets more compact, the attack surface expands via the API.

Keep in mind that we have essentially ignored the whole de-perimeterization, network without borders, Jericho Forum predisposition to minimize these functions anyway. That is, we can configure the functional VMs themselves with better security capabilities as well.

Yes, we have ignored it…and to our peril.  Virtualization is here.  It will soon be in your DMZ if it isn’t already.  If you’re not seriously reconsidering the implications that virtualization (of servers, storage, networking, security, applications, data, etc.) has on your datacenter — both in your externally-facing segments and your internal networks — *now*, you are going to be in a world of hurt within 18 months.

/Hoff

Categories: Virtualization Tags:

Information Security: Deader Than a Door Nail. Information Survivability’s My Game.

October 17th, 2007 14 comments

This isn’t going to be a fancy post with pictures.   It’s not going to be long.  It’s not particularly well thought out, but I need to get it out of my head and written down as tomorrow I plan on beginning a new career. 

I am retiring from the Information Security rat race and moving on to something fulfilling, achievable, impacting and that will make a difference.

Why?

Mogull just posted Information Security’s official eulogy titled "An Optimistically Fatalistic View of The Futility of Security."

He doesn’t know just how right he is.

Sad, though strangely inspiring, it represents the high point of a lovely interment ceremony replete with stories of yore, reflections on past digressions, oddly paradoxical and quixotic paramedic analogies, the wafting fragility of the human spirit and our unstoppable yearning to all make a difference.  It made me all weepy inside.  You’ll laugh, you’ll cry.  Before I continue, a public service announcement:

I’ve been instructed to ask that you please send donations in lieu of flowers to Mike Rothman so he can hire someone other than his four year old to produce caricatures of "Security Mike."  Thank you.

However amusing parts of it may have been, Rich has managed to catalyze the single most important thought I’ve had in a long time regarding this topic and I thank him dearly for it.

Along the lines of how Spaf suggested we are solving the wrong problems comes my epiphany that the blame is to be firmly levied on the wide shoulders of the ill-termed industrial complex and the practices we have defined to describe the terminus of some sort of unachievable end-state goal.  Information Security represents a battle we will never win.

Everyone’s admitted to that, yet we’re to just carry on "doing the best we can" as we "make a difference" and hope for the best?  What a load of pessimistic, nihilist, excuse-making donkey crap.  Again, we know that what we’re doing isn’t solving the problem, but rather than admitting the problems we’re solving aren’t the right ones, we’ll just keep on keeping on?

Describing our efforts, mission, mantra and end-state as "Information Security" or more specifically "Security" has bred this unfaithful housepet we now call an industry that we’re unable to potty train.  It’s going to continue to shit on the carpet no matter how many times we rub its nose in it.

This is why I am now boycotting the term "Information Security" or for that matter "Security" period.  I am going to find a way to change the title of my blog and my title at work.

Years ago I dredged up some research that came out of DARPA that focused on Information Assurance and Information Survivability.  It was fantastic stuff and profoundly affected what and how I added value to the organizations I belonged to.  It’s not particularly new, but it represents a new way of thinking even though it’s based on theory and practice from many years ago.

I’ve been preaching about the function without the form.  Thanks to Rich for reminding me of that.

I will henceforth only refer to what I do — and my achievable end-state — using the term Information Survivability.

Information Survivability is defined as “the capability of a system to fulfill its mission, in a timely manner, in the presence of attacks, failures, or accidents to ensure that the right people get the right information at the right time.

A survivability approach combines risk management and contingency planning with computer security to protect highly distributed information services and assets in order to sustain mission-critical functions. Survivability expands the view of security from a narrow, technical specialty understood only by security experts to a risk management perspective with participation by the entire organization and stakeholders.”

This is what I am referring to.  This is what Spaf is referring to.  This is what the Jericho Forum is referring to.

This is my new mantra. 

Information Security is dead.  Long live Information Survivability.  I’ll be posting all my I.S. references in the next coming days.

Rich, those paramedic skills are going to come in handy.

/Hoff

Sacred Cows, Meatloaf, and Solving the Wrong Problems…

October 16th, 2007 29 comments

Just as I finished up a couple of posts decrying the investments being made in lumping device after device on DMZ boundaries for the sake of telling party guests that one subscribes to the security equivalent of the "Jam of the Month Club" (AKA Defense-In-Depth), I found a fantastic post on the CERIAS blog where Prof. Eugene Spafford wrote a piece titled "Solving Some of the Wrong Problems."

In the last two posts (here and here,) I used the example of the typical DMZ and its deployment as a giant network colander which, despite costing hundreds of thousands of dollars, doesn’t generally deliver us from the attacks it’s supposedly designed to defend against — or at least those that really matter.

This is mostly because these "solutions" treat the symptoms and not the problem, but we cling to the technology artifacts because it’s the easier row to hoe.

I’ve spent a lot of time over the last few months suggesting that people ought to think differently about who, what, why and how they are focusing their efforts.  This has come about due to some enlightenment I received as part of exercising my noodle using my blog.  I’m hooked and convinced it’s time to make a difference, not a buck.

My rants on the topic (such as those regarding the Jericho Forum) have induced the curious wrath of technology apologists who have no answers beyond those found in a box off the shelf.

I found such resonance in Spaf’s piece that I must share it with you. 

Yes, you.  You who have chided me privately and publicly for my recent proselytizing that our efforts are focused on solving the wrong sets of problems.  The same you who continues to claw desperately at your sacred firewalls whilst we have many of the tools to solve a majority of the problems we face, and choose to do otherwise.  This isn’t an "I told you so."  It’s a "You should pay attention to someone who is wiser than you and I."

Feel free to tell me I’m full of crap (and dismiss my ramblings as just that,) but I don’t think many can claim to have earned the right to dismiss Spaf’s thoughts offhandedly, given his time served and expertise in matters of information assurance, survivability and security:

As I write this, I’m sitting in a review of some university research in cybersecurity. I’m hearing about some wonderful work (and no, I’m not going to identify it further). I also recently received a solicitation for an upcoming workshop to develop “game changing” cyber security research ideas. What strikes me about these efforts — representative of efforts by hundreds of people over decades, and the expenditure of perhaps hundreds of millions of dollars — is that the vast majority of these efforts have been applied to problems we already know how to solve.

We know how to prevent many of our security problems — least privilege, separation of privilege, minimization, type-safe languages, and the like. We have over 40 years of experience and research about good practice in building trustworthy software, but we aren’t using much of it.

Instead of building trustworthy systems (note — I’m not referring to making existing systems trustworthy, which I don’t think can succeed) we are spending our effort on intrusion detection to discover when our systems have been compromised.

We spend huge amounts on detecting botnets and worms, and deploying firewalls to stop them, rather than constructing network-based systems with architectures that don’t support such malware.

Instead of switching to languages with intrinsic features that promote safe programming and execution, we spend our efforts on tools to look for buffer overflows and type mismatches in existing code, and merrily continue to produce more questionable quality software.

And we develop almost mindless loyalty to artifacts (operating systems, browsers, languages, tools) without really understanding where they are best used — and not used. Then we pound on our selections as the “one, true solution” and justify them based on cost or training or “open vs. closed” arguments that really don’t speak to fitness for purpose. As a result, we develop fragile monocultures that have a particular set of vulnerabilities, and then we need to spend a huge amount to protect them. If you are thinking about how to secure Linux or Windows or Apache or C++ (et al), then you aren’t thinking in terms of fundamental solutions.

Please read his entire post.  It’s wonderful. Dr. Spafford, I apologize for re-posting so much of what you wrote, but it’s so fantastically spot-on that I couldn’t help myself.

Timing is everything.

/Hoff

{Ed: I changed the sentence regarding Spaf above after considering Wismer’s comments below.  I didn’t mean to insinuate that one should preclude challenging Spaf’s assertions, but rather that given his experience, one might choose to listen to him over me any day — and I’d agree!  Also, I will get out my Annie Oakley decoder ring and address that Cohen challenge he brought up after at least 2-3 hours of sleep… 😉 }

Opinions Are Like De-Perimeterized Virtualized Servers — Everyone’s Got One, Even Larry Seltzer

October 2nd, 2007 3 comments

Dude, maybe if we put bras on our heads and chant incoherently we can connect directly to the Internet…

Somebody just pushed my grumpy button!  I’m all about making friends and influencing people, but the following article titled "You Wouldn’t Actually Turn Off Your Firewall, Would You?" is simply a steaming heap of unqualified sensationalism, plain and simple. 

It doesn’t really deserve my attention but the FUD it attempts to promulgate is nothing short of Guinness material and I’m wound up because my second Jiu Jitsu class of the week isn’t until tomorrow night and I’ve got a hankering for an arm-bar.

Larry Seltzer from eWeek decided to pen an opinion piece which attempts for no good reason to collapse two of my favorite topics into a single discussion: de-perimeterization (don’t moan!) and virtualization. 

What one really has to do directly with the other within the context of this discussion, I don’t rightly understand, but it makes for good drama I suppose.

Larry starts off with a question we answered in this very blog (here, here, here and here) weeks ago:

Opinion: I’m unclear on what deperimeterization means. But if it means putting company systems directly on the Internet then it’s a big mistake.

OK, that’s a sort of a strange way to state an opinion and hinge an article, Larry. Why don’t you go to the source provided by those who coined the term, here.  Once you’re done there, you can read the various clarifications and debates above. 

But before we start, allow me to just point out that almost every single remote salesperson who has a laptop that sits in a Starbucks or stays in a hotel is often connected "…directly on the Internet."  Oh, but wait, they’re sitting behind some sort of NAT gateway broadband-connected super firewall, ya?  Certainly the defenses at Joe’s Java shack must be as stringent as a corporate firewall, right?  <snore>

For weeks now I’ve been thinking on and off about "deperimeterization," a term that has been used in a variety of ways for years. Some analyst talk got it in the news recently.

So you’ve been thinking about this for weeks and don’t mention if you’ve spoken to anyone from the Jericho Forum (it’s quite obvious you haven’t read their 10 commandments) or anyone mentioned in the article save for a couple of analysts who decided to use a buzzword to get some press?  Slow news day, huh?

At least the goal of deperimeterization is to enhance security. That I can respect. The abstract point seems to be to identify the resources worth protecting and to protect them. "Resources" is defined very, very broadly.

The overreacting approach to this goal is to say that the network firewall doesn’t fit into it. Why not just put systems on the Internet directly and protect the resources on them that are worthy of protection with appropriate measures?

Certainly the network firewall fits into it.  Stateful inspection firewalls are, for the most part today, nothing more than sieves that filter out the big chunks.  They serve that purpose very nicely.  They allow port 80 and port 443 traffic through unimpeded.  Sweet.  That’s value.

Even the inventors of stateful inspection will tell you so (enter one Shlomo Kramer and Nir Zuk.)  Most "firewalls" (in the purest definition) don’t do much more than stateful ACL’s do today and are supplemented with IDS’s, IPS’s, Web Application Firewalls, Proxies, URL Filters, Anti-Virus, Anti-Spam, Anti-Malware and DDoS controls for that very reason.

Yup, the firewall is just swell, Larry.  Sigh.

I hope I’m not misreading the approach, but that’s what I got out of our news article: "BP has taken some 18,000 of its 85,000 laptops off its LAN and allowed them to connect directly to the Internet, [Forrester Research analysts Robert Whiteley and Natalie Lambert] said." This is incredible, if true.

Not for nothing, but rather than depend on a "couple of analysts," did you think to contact someone from BP and ask them what they meant instead of speculating and deriding the effort before you condemned it?  Obviously not:

What does it mean? Perhaps it just means that they can connect to the VPN through a regular ISP connection? That wouldn’t be news. On the other hand, what else can it mean? Whitely and Lambert seem to view deperimeterization as a means to improve performance and lower cost. If you need to protect the data on a notebook computer they say you should do it with encryption and "data access controls." This is the philosophy in the 2001 article in which the term was coined.

Honestly, who in Sam Hill cares what "Whitely and Lambert" seem to view deperimeterization as?  They didn’t coin the term, they butchered its true intent, and you still apparently don’t know how to answer your own question.

Further, you also reference a conceptual document floated back in 2001 ignoring the author’s caution that "The actual concept behind the entire paper never really flew, but you may find that too thought provoking."  Onward.

But of course you can’t just put a system on Comcast and have it access corporate resources. VPNs aren’t just about security, they connect a remote client into the corporate network. So unless everyone in the corporation has subnet mask of 0.0.0.0 there needs to be some network management going on.

Firstly, nobody said that network management should be avoided; where the heck did you get that!?

Secondly, if you don’t have firewalls in the way, sure you can — but that would be cheating the point of the debate.  So we won’t go there.  Yet.  OK, I lied, here we go.

Thirdly, if you look at what you will get with, say, Vista and Longhorn, that’s exactly what you’ll be able to do.  You can simply connect to the Internet and using encryption and mutual authentication, gain access to internal corporate resources without the need for a VPN client at all.  If you need a practical example, you can read about it here, where I saw it with my own eyes.

Or maybe I’m wrong. Maybe that’s what they actually want to do. This certainly sounds like the idea behind the Jericho Forum, the minds behind deperimeterization. This New York Times blog echoes the thoughts.

Maybe…but we’re just dreamers.  I dare say, Larry, that Bill Cheswick has forgotten more about security than you and I know.  It’s obvious you’ve not read much about information assurance or information survivability but are instead content to myopically center on what "is" rather than that which "should be."

Not everyone has this cavalier attitude towards deperimeterization. This article from the British Computer Society seems a lot more conservative in approach. It refers to protecting resources "as if [they were] directly exposed to the Internet." It speaks of using "network segmentation, strict access controls, secure protocols and systems, authentication and encryption at multiple levels."

Cavalier!?  What’s so cavalier about suggesting that systems ought to be stand-alone defensible in a hostile environment as much as they are behind one of those big, bad $50,000 firewalls!? I bet you money I can put a hardened host on the Internet today without a network firewall in front of it and it will be just as resistant to attack. 

But here’s the rub: nobody said that to get from point A to point B one would not pragmatically apply host-based hardening and layered security such as (wait for it) a host-based firewall or HIPS.  Gasp!

What’s the difference between filtering TCP handshakes or blocking based on the 4/5-tuple at a network level versus doing it at the host, when you’re only interested in scaling to the performance and commensurate security levels of a single host?  Except for tens of thousands of dollars.  How about Nada?  (That’s Spanish for "Damn this discussion is boring…")

And whilst my point above is in response to your assertions regarding "clients," the same can be said for "servers."  If I use encryption and mutual authentication, short of DoS/DDoS, what’s the difference?
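To show just how thin that difference is, here is a minimal sketch (Python; the rule table, ports and addresses are made up) of host-scoped 5-tuple filtering — the same match logic a network firewall applies, just evaluated for one hardened host:

from ipaddress import ip_address, ip_network

# Illustrative rule table: allow mutually-authenticated HTTPS, deny the rest.
RULES = [
    # (action, source network, destination port, protocol)
    ("allow", "0.0.0.0/0", 443, "tcp"),
    ("deny", "0.0.0.0/0", None, None),   # default deny
]

def evaluate(src_ip, dst_port, protocol):
    """Return the first matching rule's action for an inbound connection."""
    for action, src_net, rule_port, rule_proto in RULES:
        if ip_address(src_ip) not in ip_network(src_net):
            continue
        if rule_port is not None and rule_port != dst_port:
            continue
        if rule_proto is not None and rule_proto != protocol:
            continue
        return action
    return "deny"

print(evaluate("198.51.100.7", 443, "tcp"))   # allow
print(evaluate("198.51.100.7", 23, "tcp"))    # deny

Whether that loop runs in a $50,000 box in front of the host or on the host itself is an economics question, not a security one.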

That sounds like a shift in emphasis, moving resources more towards internal protection, but not ditching the perimeter. I might have some gripes with this—it sounds like the Full Employment Act for Security Consultants, for example—but it sounds plausible as a useful strategy.

I can’t see how you’d possibly have anything bad to say about this approach especially when you consider that the folks that make up the Jericho Forum are CISO’s of major corporations, not scrappy consultants looking for freelance pen-testing.

When considering the protection of specific resources, Whitely and Lambert go beyond encryption and data access controls. They talk extensively about "virtualization" as a security mechanism. But their use of the term virtualization sounds like they’re really just talking about terminal access. Clearly they’re just abusing a hot buzzword. It’s true that virtualization can be involved in such setups, but it’s hardly necessary for it and arguably adds little value. I wrote a book on Windows Terminal Server back in 2000 and dumb Windows clients with no local state were perfectly possible back then.

So take a crappy point and dip it in chocolate, eh?   Now you’re again tainting the vision of de-perimeterization and convoluting it with the continued ramblings of a "couple of analysts."  Nice.

Whitely and Lambert also talk in this context about how updating in a virtualized environment can be done "natively" and is therefore better. But they must really mean "locally," and this too adds no value, since a non-virtualized Terminal Server has the same advantage.

What is the security value in this? I’m not completely clear on it, since you’re only really protecting the terminal, which is a low-cost item. The user still has a profile with settings and data. You could use virtual machines to prevent the user from making permanent changes to their profile, but Windows provides for mandatory (static, unchangeable) profiles already, and has for ages. Someone explain the value of this to me, because I don’t get it.

Well, that makes two of us…

And besides, what’s it got to do with deperimeterization? The answer is that it’s a smokescreen to cover the fact that there are no real answers for protecting corporate resources on a client system exposed directly to the Internet.

Well, I’m glad we cleared that up.  Absolutely nothing.  As to the smokescreen comment, see above.  I triple-dog-dare you.  My Linux workstation and Mac are sitting on "the Internet" right now.  Since I’ve accomplished the impossible, perhaps I can bend light for you next?

The reasonable approach is to treat local and perimeter security as a "belt and suspenders" sort of thing, not a zero sum game. Those who tell you that perimeter protections are a failure because there have been breaches are probably just trying to sell you protection at some other layer.

…or they are pointing out to you that you’re treating the symptom and not the problem.  Again, the Jericho Forum is made up of CISO’s of major multinational corporations, not VP’s of Marketing from security vendors or analyst firms looking to manufacture soundbites.

Now I have to set a reminder for myself in Outlook for about two years from now to write a column on the emerging trend towards "reperimeterization."

Actually, Larry, set that appointment back a couple of months…it’s already been said.  De-perimeterization has been called many things already, such as re-perimeterization or radical externalization.

I don’t really give much merit to what you choose to call it, but I call it a good idea that should be discussed further and driven forward in consensus such that it can be used as leverage against the software and OS vendors to design and build more secure systems that don’t rely on band-aids.

…but hey, I’m just a dreamer.

/Hoff

Amrit: I Love You, Man…But You’re Still Not Getting My Bud Lite

September 26th, 2007 1 comment

I’ve created a monster!

Well, a humble, well-spoken and intelligent monster who — like me — isn’t afraid to admit that sometimes it’s better to let go than grip the bat too tight.  That doesn’t happen often, but when it does, it’s a wonderful thing.

I reckon that despite having opinions, perhaps sometimes it’s better to listen with two holes and talk with one, shrugging off the almost autonomic hardline knee-jerks of defensiveness that come from having to spend years of single-minded dedication to cramming good ideas down people’s throats.

It appears Amrit’s been speaking to my wife, or at least they read the same books.

So it is with the utmost humility that I take full credit for nudging along Amrit’s renaissance and spiritual awakening as evidenced in this, his magnum opus of personal growth titled "Embracing Humility – Enlightened Information Security," wherein a dramatic battle of the Ego and Id is played out in daring fashion before the world:


Too often in IT ego drives one to be rigid and stubborn. This results in a myopic and distorted perspective of technology that can limit ones ability to gain an enlightened view of dynamic and highly volatile environments. This defect is especially true of information security professionals that tend towards ego driven dispositions that create obstacles to agility. Agility is one of the key foundational tenets to achieving an enlightened perspective on information security; humility enables one to become agile.  Humility, which is far different from humiliation, is the wisdom to realize one’s own ignorance, insignificance, and limitations of intellect, without which one cannot see the truth.

19th century philosopher Herbert Spencer captured this sentiment in an oft-cited quote “There is a principle which is a bar against all information, which is proof against all arguments and which cannot fail to keep a man in everlasting ignorance – that principle is contempt prior to investigation.”

The security blogging community is one manifestation of the information security profession, based upon which one could argue that security professionals lack humility and generally propose contempt for an idea prior to investigation. I will relate my own experience to highlight this concept.

Humility and the Jericho Forum

I was one of the traditionalists that was vehemently opposed to the ideas, at least my understanding of the ideas, put forth by the Jericho forum. In essence all I heard was “de-perimeterization”, “Firewalls are dead and you do not need them”, and “Perfect security is achieved through the end-point” – I lacked the humility required to properly investigate their position and debated against their ideas blinded by ego and contempt. Reviewing the recent spate of blog postings related to the Jericho forum I take solace in knowing that I was not alone in my lack of humility. The reality is that there is a tremendous amount of wisdom in realizing that the traditional methods of network security need to be adjusted to account for a growing mobile workforce, coupled with a dramatic increase in contractors, service providers and non pay rolled actors, all of which demand access to organizational assets, be it individuals, information or infrastructure. In the case of the Jericho forum’s ideas I lacked humility and it limited my ability to truly understand their position, which limits my ability to broaden my perspective’s on information security.


Good stuff.

It takes a lot of chutzpah to privately consider changing one’s stance on matters; letting go of preconceived notions and embracing a sense of openness and innovation.  It’s quite another thing to do it publicly.   I think that’s very cool.  It’s always been a refreshing study in personal growth when I’ve done it. 

I know it’s still very hard for me to do in certain areas, but my kids — especially my 3 year old — remind me everyday just how fun it can be to be wrong and right within minutes of one another without any sense of shame.

I’m absolutely thrilled if any of my posts on Jericho and the ensuing debate has made Amrit or anyone else consider for a moment that perhaps there are other alternatives worth exploring in the way in which we think, act and take responsibility for what we do in our line of work.

I could stop blogging right now and…

Yeah, right.  Stiennon, batter up!

/Hoff

(P.S. Just to be clear, I said "batter" not "butter"…I’m not that open minded…)

Captains Obvious and Stupendous: G-Men Unite! In Theatres Soon!

September 19th, 2007 No comments

So Rich and Rich, the ex-analytic Dynamic Duo, mount poor (ha!) arguments against my posts on the Jericho Forum.

To quickly recap, it seems that they’re ruffled at Jericho’s suggestion that the way in which we’ve approached securing our assets isn’t working and that instead of focusing on the symptoms by continuing to deploy boxes that don’t ultimately put us in the win column, we should solve the problem instead.

I know, it’s shocking to suggest that we should choose to zig instead of zag.  It should make you uncomfortable.

I’ve picked on Mogull enough today and despite the fact that he’s still stuck on the message marketing instead of the content (despite claiming otherwise,) let’s take a peek at Captain Obvious’ elucidating commentary on the matter:

Let me go on record now. The perimeter is alive and well. It has to be. It will always be. Not only is the idea that the perimeter is going away wrong it is not even a desirable direction. The thesis is not even Utopian, it is dystopian. The Jericho Forum has attempted to formalize the arguments for de-perimeterization. It is strange to see a group formed to promulgate a theory. Not a standard, not a political action campaign, but a theory. Reminds me of the Flat Earth Society.

I’m glad to see that while IDS is dead, that the perimeter is alive and well.  It’s your definition that blows as well as your focus.  As you recall, what I actually said was that the perimeter isn’t going away, it’s multiplying, but the diameter is collapsing.  Now we have hundreds if not thousands of perimeters as we collapse defenses down to the hosts themselves — and with virtualization, down to the VM and beyond.

Threats abound. End points are attacked. Protecting assets is more and more complicated and more and more expensive. Network security is hard for the typical end user to understand: all those packets, and routes, and NAT, and PAT. Much simpler, say the de-perimeterizationists, to leave the network wide open and protect the end points, applications, data and users.

It’s an ironic use of the word "open."  Yes, the network should be "open" to allow for the highest-performing, most stable, resilient, and reliable plumbing available.  Network intelligence should be provided by service layers and our focus should be on secure operating systems, applications and readily defensible data stores.  You’re damned right we should protect the end points, applications, data and users — that’s the mission of information assurance!

This is what happens when you fling around terms like "risk management" when what you really mean is "mitigating threats and vulnerabilities."  They are NOT the same thing.  Information survivability and assurance are what you mean to say, but what comes out is "buy more kit."

Yeah, well, the reality is that the perimeter is being reinforced constantly. Dropping those defenses would be like removing the dikes around Holland. The perimeter is becoming more diverse, yes. When you start to visualize the perimeter, which must encompass all of an organization’s assets, one is reminded of the coast of England metaphor. In taking the measure of that perimeter the length is dependant on the scale. A view from space predicts a different measurement than a view from 100 meters or even 1 meter. Coast lines are fractal. So are network perimeters.

"THE perimeter" is not being reinforced, it’s being consolidated as it comes out of firewall refresh cycles, there’s a difference.  You accurately suggest that this is occurring constantly.  The reason for that is because the stuff we have just simply cannot defend our assets appropriately.

Folks like Microsoft understand this — look at Vista and Longhorn.  We’re getting closer to more secure operating systems.

Virtualization is driving the next great equalizer in the security industry and "network security" will become even more irrelevant.

Why don’t the two Richies and the faithful toy-happy squadrons of security lemmings get it instead of desperately struggling to tighten their grasp on the outdated notion of their glorious definition of "THE perimeter"?  That was a rhetorical question, by the way.

De-perimeterization (or re-perimeterization) garners panic in those whose gumboots have become mired in the murky swamps of the way things were; they can’t go forward and they can’t go back.  Better to sink in one’s socks than get your feet dirty in the mud by stepping out of your captive clogs, eh?

The threats aren’t the same.  The attackers aren’t the same.  Networks aren’t the same.  The tools, philosophy and techniques we use to secure them can’t afford to be, either.

Finally:

Disclaimer:  I work for a vendor of network perimeter security appliances. But, keep in mind, I would not be working for a perimeter defense company if I did not truly believe that the answer lies in protecting our networks. If I believed otherwise I would work for a de-perimeterization vendor, if I could find one. 🙂

I can’t even bring myself to address this point.  I’ll let  Dan Weber do it instead.

/Hoff

Categories: Jackassery Tags:

Security Interoperability: Standards Are Great, Especially When They’re Yours…

September 19th, 2007 6 comments

Wow, this is a rant and a half…grab a beer, you’re going to need it…

Jon Robinson pens a lovely summary of the endpoint security software sprawl discussion we’ve been chatting about lately.

My original post on the matter is here. 

Specifically, he isolates what might appear to be diametrically-opposed perspectives on the matter; mine and Amrit Williams’ from BigFix.

No good story flows without a schism-inducing polarizing galvanic component, so Jon graciously obliges by proposing to slice the issue in half with the introduction of what amounts to a discussion of open versus proprietary approaches to security interoperability between components. 

I’m not sure that this is the right starting point to frame this discussion, and I’m not convinced that Amrit and I are actually at polar ends of the discussion.  I think we’re actually both describing the same behavior in the market, and whilst Amrit works for a company that produces endpoint agents, I think he’s discussing the issue at hand in a reasonably objective manner. 

We’ll get back to this in a second.  First, let’s peel back the skin from the skeleton a little.

Dissecting the Frog
Just like in high school, this is the messy part of science class where people either reveal their darksides as they take deep lung-fulls of formaldehyde vapor and hack the little amphibian victim to bits…or run shrieking from the room.

Jon comments on Amrit’s description of the "birth of the endpoint protection platform" while I care to describe it as the unnatural (but predictable) abortive by-product of industrial economic consolidation. The notion here — emotionally blanketed by the almost-unilateral hatred for anti-virus — is that we’ll see a:

"…convergence of desktop security functionality into a single product that delivers antivirus, antispyware, personal firewall and other styles of host intrusion prevention (for example, behavioral blocking) capabilities into a single and cohesive policy-managed solution."

I acknowledge this and agree that it’s happening.  I’m not very happy about *how* it’s manifesting itself, however.  We’re just ending up with endpoint oligopolies that still fail to provide a truly integrated and holistic security solution, and when a new class of threat or vulnerability arises, we get another agent — or chunky bit grafted onto the Super Agent from some acquisition that clumsily  ends up as a product roadmap feature due to market opportunism. 

You know, like DLP, NAC, WAF… 😉

One might suggest that if the "platform" as described was an open, standards-based framework that defined how to operate and communicate, acted as a skeleton upon which to hang the muscular offerings of any vendor, and provided a methodology and communications protocol that allowed them all to work together and intercommunicate using a common nervous system, that would be excellent.

We would end up with a much lighter-weight, intelligent threat defense mechanism.  Adaptive and open, flexible and accommodating.  Modular, and imposing little overhead.
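For illustration only, here is a bare-bones sketch (Python) of what that skeleton might look like — a narrow component interface and a shared event bus acting as the common nervous system. The interface and event fields are my own assumptions, not any existing standard:

from dataclasses import dataclass, field
from typing import Callable, Protocol

@dataclass
class SecurityEvent:
    source: str     # which component raised it, e.g. "hips"
    category: str   # e.g. "malware", "policy-violation"
    severity: int   # 0-10
    details: dict = field(default_factory=dict)

class EndpointComponent(Protocol):
    name: str
    def start(self, publish: Callable[[SecurityEvent], None]) -> None: ...
    def handle(self, event: SecurityEvent) -> None: ...

class EventBus:
    """The common 'nervous system': publish an event, every other component sees it."""
    def __init__(self):
        self.components = []

    def register(self, component):
        self.components.append(component)
        component.start(self.publish)

    def publish(self, event):
        for component in self.components:
            if component.name != event.source:
                component.handle(event)

Any vendor’s AV, HIPS, firewall or DLP module could hang off that bus without caring who wrote the others — which is exactly the point.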

But it isn’t, and it won’t be.

Unfortunately, all the "Endpoint Protection Platform" illustrates, as I pointed out previously, is that the same consolidation issues pervasive in the network world are happening now at the endpoint.  All those network-based firewalls, IPS’s, AV gateways, IDS’s, etc. are smooshing into UTM platforms (you can call it whatever you want) and what we’re ending up with is the software equivalent of UTM on the endpoint.

SuperAgents or "Endpoint Protection Platforms" represent the desperately selfish grasping by security vendors (large and small) to remain relevant in an ever-shrinking marketspace.  Just like most UTM offerings at the network level.  Since piling up individual endpoint software hasn’t solved the problem, it must hold true that one is better than many, right?

Each of these vendors producing "Super Agent" frameworks has its own standards.  Each of them is battling furiously to be "THE" standard, and we’re still not solving the problem.

Man, that stinks
Jon added some color to my point that the failure to interoperate is really an economic issue, not a technical one, by my describing "greed" as the cause.  I got a chuckle out of his response:

Hoff goes on to say that he doesn’t think we will ever see this type of interoperability among vendors because of greed. I wouldn’t blame greed though, unless by greed he means an unwillingness to collaborate because they believe their value lies in their micro-monopoly patents and their ability to lock customers in their solution. (Little do they know, that they are making themselves less valuable by doing so.) No, there isn’t any interoperability because customers aren’t demanding it.

Some might suggest that my logic is flawed and the market demonstrates it with an example like where GE booted out Symantec in favor of 350,000 seats of Sophos:

Seeking to improve manageability and reduce costs which arise from managing multiple solutions, GE will introduce Network Access Control (NAC) as well as antivirus and client firewall protection which forms part of the Sophos Security and Control solution.

Sophos CEO, Steve Munford, said companies want a single integrated agent that handles all aspects of endpoint security on each PC.                     

"Other vendors offer security suites that are little more than a bunch of separate applications bundled together, all vying for resources on the user’s computer," he said.    

"Enterprises tell us that the tide has turned, and the place for NAC and integrated security is at the endpoint."

While I philosophically don’t agree with the CEO’s comment relating to the need for a Super Agent, the last line is the most important: "…the place for…integrated security is at the endpoint."  They didn’t say Super Agent, they said "integrated."  If we had integration and interoperability, the customer wouldn’t care about how many "components" it would take so long as it was cost-effective and easily managed.  That’s the rub, because we don’t.

So I get the point here.  Super Agents are our only logical choice, right?  No!

I suggest that while we make progress toward secure OS’s and applications, instead of moving from tons of agents to a Super Agent, the more intelligent approach would be a graceful introduction of an interoperable framework of open-standards based protocols that allow these components to work together as the "natural" consolidation effect takes its course and markets become features.  Don’t go from one extreme to the other.

I have yet to find anyone that actually believes that deploying a monolithic magnum malware mediator that maintains a modality of mediocrity masking a monoculture  is a good idea.

…unless, of course, all you care about is taking the cost out of operationalizing security and not actually reducing risk.  For some reason, these are being positioned by people as mutually-exclusive.  The same argument holds true in the network space; in some regards we’re settling for "good enough" instead of pushing to fix the problem and not the symptoms.

If people would vote with their wallets (which *is* what the Jericho Forum does, Rich) we wouldn’t waste our time yapping about this; we’d be busy solving issues relevant to the business, not the sales quotas of security company sales reps.  I guess that’s what GE did, but they had a choice.  As the biggest IT consumer on the planet (so I’ve been told,) they could have driven their vendors together instead of apart.

People are loathe to think that progress can be made in this regard.  That’s a shame, because it can, and it has.   It may not be as public as you think, but there are people really working hard behind the scenes to make the operating systems, applications and protocols more secure. 

As Jon points out, and many others like Ranum have said thousands of times before, we wouldn’t need half of these individual agents — or even Super Agents — if the operating systems and software were secure in the first place. 

Run, Forrest, Run!
This is where people roll their eyes and suggest that I’m copping out because I’m describing a problem that’s not going to be fixed anytime soon.  This is where they stop reading.  This is where they just keep plodding along on the Hamster Wheel of Pain and add that line item for either more endpoint agents or a roll-up to a Super Agent.

I suggest that those of you who subscribe to this theory are wrong (and probably have huge calves from all that running.)  The first evidence of this is already showing up on shelves.  It’s not perfect, but it’s a start. 

Take Vista, as an example.  Love it or hate it, it *is* a more secure operating system and it features a slew of functionality that is causing dread and panic in the security industry — especially from folks like Symantec, hence the antitrust suits in the EU.  If the OS becomes secure, how will we sell our Super Agents?  ANTI-TRUST!

Let me reiterate that while we make progress toward secure OS’s and applications, instead of going from tons of agents to a Super Agent, the more intelligent approach is a graceful introduction of an interoperable framework of open-standards based protocols that allow these components to work together as the "natural" consolidation effect takes its course and markets become features.  Don’t go from one extreme to the other.

Jon sums it up with the following describing solving the interoperability problem:

In short, let the market play out, rather than relying on and hoping for central planning. If customers demand it, it will emerge. There is no reason why there can’t be multiple standards competing for market share (look at all the different web syndication standards for example). Essentially, a standard would be collaboration between vendors to make their stuff play well together so they can win business. They create frameworks and APIs to make that happen more easily in the future so they can win business easier. If customers like it, it becomes a “standard”.

At any rate, I’m sitting in the Starbucks around the corner from tonight’s BeanSec! event.  We’re going to solve world hunger tonight — I wonder if a Super Agent will do that one day, too?

/Hoff

Categories: Endpoint Security, Open Standards Tags:

For Data to Survive, It Must ADAPT…

June 1st, 2007 2 comments


Now that I’ve annoyed you by suggesting that network security will over time become irrelevant given lost visibility due to advances in OS protocol transport and operation, allow me to give you another nudge towards the edge and further reinforce my theories with some additionally practical data-centric security perspectives.

If any form of network-centric security solution is to succeed in adding value over time, the mechanics of applying policy and effecting disposition on flows as they traverse the network must be made on content in context.  That means we must get to a point where we can make “security” decisions based upon information and its “value” and classification as it moves about.

It’s not good enough to only make decisions on how flows/data should be characterized and acted on with the criteria being focused on the 5-tuple (header), signature-driven profiling or even behavioral analysis that doesn’t characterize the content in context of where it’s coming from, where it’s going and who (machine, virtual machine and “user”) or what (application, service) intends to access and consume it.

In the best of worlds, we like to be able to classify data before it makes its way through the IP stack and enters the network and use this metadata as an attached descriptor of the ‘type’ of content that this data represents.  We could do this as the data is created by applications (thick or thin, rich or basic) either using the application itself or by using an agent (client-side) that profiles the data prior to storage or transmission.

Since I’m on my Jericho Forum kick lately, here’s how they describe how data ought to be controlled:

Access to data should be controlled by security attributes of the data itself.

  • Attributes can be held within the data (DRM/Metadata) or could be a separate system.
  • Access / security could be implemented by encryption.
  • Some data may have “public, non-confidential” attributes.
  • Access and access rights have a temporal component.

You would probably need client-side software to provide this functionality.  As an example, we do this today with email compliance solutions that have primitive versions of this sort of capability, forcing users to declare the classification of an email before they can hit the send button, or with the document info that can be created when one authors a Word document.
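As a purely illustrative sketch (Python; the patterns, labels and sidecar format are my own assumptions, not any product’s behavior), client-side tagging could be as simple as classifying content at save time and writing a metadata descriptor that travels with the file:

import json
import re

# Illustrative classifiers only; a real agent would hook the authoring application.
CLASSIFIERS = [
    ("PCI", re.compile(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b")),   # card-like numbers
    ("CONFIDENTIAL", re.compile(r"(?i)\bcompany confidential\b")),
]

def classify(text):
    """Return the labels whose patterns match the content, or PUBLIC if none do."""
    return [label for label, pattern in CLASSIFIERS if pattern.search(text)] or ["PUBLIC"]

def tag(document_path):
    """Write a sidecar metadata descriptor that travels alongside the document."""
    with open(document_path, encoding="utf-8") as handle:
        labels = classify(handle.read())
    with open(document_path + ".meta.json", "w", encoding="utf-8") as meta:
        json.dump({"classification": labels}, meta)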

There are a bunch of ERM/DRM solutions in play today that are bandied about and sold as “compliance” solutions, but their value goes much deeper than that.  IP Leakage/Extrusion prevention systems (with or without client-side tie-ins) try to do similar things also.

Ideally, this metadata would be used as a fixed descriptor of the content that permanently attaches itself and follows that content around so it can be used to decide what content should be “routed” based upon policy.

If we’re not able to use this file-oriented static metadata, we’d like then for the “network” (or something in/on it) to be able to dynamically profile content at wirespeed and characterize the data as it moves around the network from origin to destination in the same way.

So, this is where Applied Data & Application Policy Tagging (ADAPT) comes in.  ADAPT is an approach that can make use of existing and new technology to profile and characterize content (by using content matching, signatures, regular expressions and behavioral analysis in hardware or software) and then apply policy-driven information “routing” functionality as flows traverse the network, by using 802.1Q q-in-Q VLAN tags (an open approach) or by applying a proprietary ADAPT tag-header as a descriptor to each flow as it moves around the network.

Think of it like a VLAN tag that describes the data within the packet/flow, defined however you see fit:

The ADAPT tag/VLAN is user defined and can use any taxonomy that best suits the types of content that are of interest; one might use asset classifications such as “confidential” or taxonomies such as “HIPAA” or “PCI” to describe what is contained in the flows.  One could combine and/or stack the tags, too.  The tag maps to one of these arbitrary categories, which could be fed by interpreting metadata attached to the data itself (if in file form) or dynamically by on-the-fly profiling at the network level.
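
To make the mapping concrete, here’s a small, purely illustrative sketch of how a user-defined taxonomy might map to outer Q-in-Q VLAN IDs; the category names and ID values are arbitrary assumptions, constrained only by the 12-bit VLAN ID space:

    # Illustrative mapping from a user-defined ADAPT taxonomy to outer (service)
    # Q-in-Q VLAN IDs. The IDs and category names are arbitrary examples.
    ADAPT_TAG_MAP = {
        "public":       100,
        "confidential": 200,
        "HIPAA":        300,
        "PCI":          400,
    }

    def adapt_tag_for(classifications: list[str]) -> int:
        """Pick the outer VLAN ID for a flow from its classifications.

        When multiple categories apply, use the most restrictive one; stacking
        additional tags is also possible but omitted here for brevity.
        """
        # Higher VLAN ID == more restrictive in this toy ordering.
        return max(
            (ADAPT_TAG_MAP.get(c, ADAPT_TAG_MAP["public"]) for c in classifications),
            default=ADAPT_TAG_MAP["public"],
        )

    # e.g. a flow whose metadata (or on-the-fly profiling) yields both labels:
    assert adapt_tag_for(["HIPAA", "confidential"]) == 300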

As data moves across the network and across what we call boundaries (zones) of trust, the policy tags are parsed and disposition is effected based upon the language governing the rules.  If you use the “open” version based on Q-in-Q VLANs, you have something on the order of 4096 VLAN IDs to choose from…more than enough to accommodate most asset classification schemes and still leave room for normal VLAN usage.  Enforcing the ACLs can be done, very quickly, by pretty much ANY modern switch that supports Q-in-Q.
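
For the curious, here’s a rough sketch of the parsing side: pulling the outer (ADAPT) and inner VLAN IDs out of a Q-in-Q tagged Ethernet frame so an enforcement point can key its ACLs off the outer tag.  The TPID handling is deliberately simplified compared with real gear:

    # Rough sketch of extracting outer/inner VLAN IDs from a Q-in-Q tagged
    # Ethernet frame. TPIDs 0x88A8/0x9100/0x8100 cover the common cases; real
    # hardware handles many more corner cases than this.
    import struct

    QINQ_TPIDS = {0x88A8, 0x9100, 0x8100}

    def vlan_ids(frame: bytes) -> tuple[int | None, int | None]:
        """Return (outer_vlan_id, inner_vlan_id) from a raw Ethernet frame."""
        offset = 12                   # skip destination + source MAC addresses
        ids = []
        for _ in range(2):            # at most two stacked tags for Q-in-Q
            tpid, tci = struct.unpack_from("!HH", frame, offset)
            if tpid not in QINQ_TPIDS:
                break
            ids.append(tci & 0x0FFF)  # VLAN ID is the low 12 bits of the TCI
            offset += 4
        ids += [None, None]
        return ids[0], ids[1]

    # Outer tag 300 ("HIPAA" in the earlier toy map) stacked over inner VLAN 10:
    frame = bytes(12) + struct.pack("!HH", 0x88A8, 300) + struct.pack("!HH", 0x8100, 10)
    assert vlan_ids(frame) == (300, 10)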

ADAPT does for content routing what an ACL does for IP addresses or VLAN policies, using VLAN IDs (or the proprietary ADAPT header) to enforce it.

To enable this sort of functionality, every switch/router in the network would need to be either Q-in-Q capable (which most switches are these days) or ADAPT enabled (which would be difficult, since you’d need every network vendor to support the protocol).  Alternatively, you could use an overlay UTM security services switch sitting on top of the network plumbing, through which all traffic moving from one zone to another would be subject to the ADAPT policy, since each flow has to pass through said device.

Since the only device that needs to be ADAPT aware is this UTM security services switch (see the example below), you can let the network do what it does best and use this solution to enforce the policy for you across these boundary transitions.  Said UTM security services switch needs an extremely high-speed content security engine that is able to characterize the data at wirespeed and add a tag to the frame as it moves through the switching fabric and is processed, prior to popping back out onto the network.

Clearly this switch would have to have coverage across every network segment.  It wouldn’t work well in virtualized server environments or any topology where zoned traffic is not subject to transit through the UTM switch.

I’m going to be self-serving here and demonstrate this “theoretical” solution using a Crossbeam X80 UTM security services switch plumbed into a very fast, reliable, and resilient L2/L3 Cisco infrastructure.  It just so happens to have a wire-speed content security engine installed in it.  The reason the X-Series can do this is because once the flow enters its switching fabric, I own the ultimate packet/frame/cell format and can prepend any header functionality I like onto the structure to determine how it gets “routed.”

Take the example below where the X80 is connected to the layer-3 switches using 802.1q VLAN trunked interfaces.  I’ve made this an intentionally simple network using VLANs and L3 routing; you could envision a much more complex segmentation and routing environment, obviously.

[Figure: ADAPT example network diagram]

This network is chopped up into 4 VLAN segments:

  1. General Clients (VLAN A)
  2. Finance & Accounting Clients (VLAN B)
  3. Financial Servers (VLAN C)
  4. HR Servers (VLAN D)

Each of the clients/servers in the respective VLANs default-routes to a firewall cluster IP address offered up by the firewall application modules providing service in the X80.

Thus, to get from one VLAN to another, traffic must pass through the X80 and be profiled by this content security engine and whatever additional UTM services are installed in the chassis (such as firewall, IDP, AV, etc.).

Let’s say, then, that a user in VLAN A (General Clients) attempts to access one or more resources in VLAN D (HR Servers).

Using solely IP addresses and/or L2 VLANs, the firewall and IPS policies allow this behavior, as the clients in that VLAN have a legitimate need to access the HR intranet server.  However, let’s say that this user tries to access data on that server which contains personally identifiable information falling under the governance/compliance mandates of HIPAA.

Let us further suggest that the ADAPT policy states the following:

Rule   Source             Destination        ADAPT Descriptor       Action
===========================================================================
1      VLAN A (IP.1.1)    VLAN D (IP.3.1)    HIPAA, Confidential    Deny
2      VLAN B (IP.2.1)    VLAN C (IP.4.1)    PCI                    Allow

Using rule 1 above, as the client makes the request, he transits from VLAN A to VLAN D.  The reply containing the requested information is profiled by the content security engine, which is able to characterize the data as containing information that matches our definition of either “HIPAA” or “Confidential” (purely arbitrary for the sake of this example).

This could be done by reading the metadata if it exists as an attachment to the content’s file structure, in cooperation with an extrusion prevention application running in the chassis, or, in the case of ad-hoc web-based applications/services, dynamically on the fly.

According to the ADAPT policy above, this data would then either be silently dropped or, depending upon what “deny” means, the user would perhaps be redirected to a webpage that informs them of the policy violation.

Rule 2 above would allow authorized IPs in those VLANs to access PCI-classified data.
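
If it helps, here’s a toy model of how that rule table might be evaluated at the boundary device; the first-match semantics and the default “allow” disposition are assumptions made purely for the sake of the example:

    # Toy evaluation of the ADAPT rule table above; zone names, descriptors,
    # and actions mirror the example policy and are illustrative only.
    RULES = [
        # (source zone, destination zone, matching descriptors, action)
        ("VLAN A", "VLAN D", {"HIPAA", "Confidential"}, "deny"),
        ("VLAN B", "VLAN C", {"PCI"}, "allow"),
    ]

    def disposition(src: str, dst: str, descriptors: set[str]) -> str:
        """First-match evaluation: a rule fires if any of its descriptors is
        present on the flow; otherwise fall through to a default."""
        for rule_src, rule_dst, rule_desc, action in RULES:
            if src == rule_src and dst == rule_dst and descriptors & rule_desc:
                return action
        return "allow"  # default disposition; a real policy might default-deny

    # The VLAN A <-> VLAN D flow whose content is profiled as HIPAA:
    assert disposition("VLAN A", "VLAN D", {"HIPAA"}) == "deny"
    assert disposition("VLAN B", "VLAN C", {"PCI"}) == "allow"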

You can imagine how one could integrate IAM and extend the policies to include pseudonymity/identity as a function of access, too.  Or one could profile the requesting application (a browser, for example) to determine whether it is an authorized application.  You could extend the actions to lots of other things as well.

In fact, I alluded to it in the first paragraph, but if we back up a step and look at where the consolidation of functions/services is being driven by virtualization, one could also use the principles of ADAPT to extend the ACL functionality that exists in switching environments to control/segment/zone access to/from virtual machines (VMs) in different asset/data/classification/security zones.

What this translates to is a workflow/policy instantiation that would use the same logic to prevent VM1 from communicating with VM2 if there were a “zone” mismatch; as we add data classification in context, you could have various levels of granularity that define access based not only on the VM, but on the VM and the data trafficked by it.
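
A hypothetical sketch of that inter-VM check, assuming each VM carries a zone label and, optionally, the classification of the data it is observed trafficking; all names and zone pairs are illustrative:

    # Hypothetical inter-VM zone check; VM names, zones, and the allowed pairs
    # are illustrative assumptions, not any hypervisor's actual model.
    VM_ZONES = {
        "vm1": "general",
        "vm2": "hr",
        "vm3": "hr",
    }

    # Zone pairs allowed to talk at all, before data classification is considered.
    ALLOWED_ZONE_PAIRS = {("hr", "hr"), ("general", "general"), ("general", "dmz")}

    def vm_traffic_allowed(src_vm: str, dst_vm: str, data_class: str | None = None) -> bool:
        """Deny on a zone mismatch; then refine by data classification in context."""
        pair = (VM_ZONES.get(src_vm), VM_ZONES.get(dst_vm))
        if pair not in ALLOWED_ZONE_PAIRS:
            return False
        # Finer-grained check: even within an allowed zone pair, block flows
        # carrying classifications that pair should never exchange.
        if data_class == "HIPAA" and pair != ("hr", "hr"):
            return False
        return True

    assert vm_traffic_allowed("vm1", "vm2") is False          # zone mismatch: general -> hr
    assert vm_traffic_allowed("vm2", "vm3", "HIPAA") is True  # hr -> hr carrying HIPAA data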

Furthermore, assuming this service were deployed internally and you could establish a trusted CA with certs that would support transparent MITM SSL decryption, you could do this (with appropriate scale) with encrypted traffic as well.

This is data-centric security that uses the network when needed, the host when it can, and the notion of both static and dynamic network-borne data classification to enforce policy in real time.

/Hoff

[Comments/Blogs on this entry you might be interested in but have no trackbacks set:

MCWResearch Blog

Rob Newby’s Blog

Alex Hutton’s Blog

Security Retentive Blog