Archive for September, 2007

Mission Accomplished: Dialog and Exploration of Jericho Forum Happening

September 21st, 2007 5 comments

Just to be clear, I don’t set out to "win" everything I post about.  It may come off that way, but I write from a stream of consciousness; my blog is usually my own little jot pad for working through thought patterns that could often use a little pinging from others on the subject.

My blog has seen the evolution of some of my thinking; it’s produced profound realizations and even reversals in my own opinions and thoughts.  I think that’s cool.

In the case of the last series of posts which started here regarding the Jericho Forum, however, I wanted to start a dialog.  I knew it was going to be a slog, because people always get riled up on the subject of the Jericho Forum’s vision.

I wanted to take this contentious subject and drag it into the light some more, especially here in the U.S. where the concepts are met with a litany of protest — usually due not to the content, but rather the context around which they are framed and by whom.

At any rate, I banged out my posts over the last couple of days and, regardless of the fact that almost nobody can see the forest for the trees, here’s what we ended up with; I’d suggest reading the last two, as the others read rather like a blog version of a demolition derby and don’t actually rationalize much on the subject at all:

Mogull – Jericho Needs Assistance Restating the Obvious
Stiennon – De-perimeterization is Dead
Newby – The Horns of Jericho
Hutton – Jericho In Pictures
LonerVamp – Jericho 1-4: de-perimeterization and the jericho forum commandments

…and only because I love, I’m going to highlight the last line of what otherwise would be a very interesting exploration of LV’s Jericho ponderings:

So what we have so far is very heart-warming, feel-good idealistic goals for a global infrastructure (extrastructure?) utilizing perfect or near perfect protocols and devices that can withstand anything. Sorry, but what the fuck…?

Wow.  I have no response to that.  On second thought, I do, but I’m not sure I can say it again without screaming.  See here for a clue.

If there’s anyone else I missed, send me a ping so I can add you.

/Hoff

Categories: Jericho Forum Tags:

Captains Obvious and Stupendous: G-Men Unite! In Theatres Soon!

September 19th, 2007 No comments

So Rich and Rich, the ex-analyst Dynamic Duo, mount poor (ha!) arguments against my posts on the Jericho Forum.

To quickly recap, it seems that they’re ruffled at Jericho’s suggestion that the way in which we’ve approached securing our assets isn’t working and that instead of focusing on the symptoms by continuing to deploy boxes that don’t ultimately put us in the win column, we should solve the problem instead.

I know, it’s shocking to suggest that we should choose to zig instead of zag.  It should make you uncomfortable.

I’ve picked on Mogull enough today and despite the fact that he’s still stuck on the message marketing instead of the content (despite claiming otherwise), let’s take a peek at Captain Obvious’ elucidating commentary on the matter:

Let me go on record now. The perimeter is alive and well. It has to be. It will always be. Not only is the idea that the perimeter is going away wrong it is not even a desirable direction. The thesis is not even Utopian, it is dystopian. The Jericho Forum has attempted to formalize the arguments for de-perimeterization. It is strange to see a group formed to promulgate a theory. Not a standard, not a political action campaign, but a theory. Reminds me of the Flat Earth Society.

I’m glad to see that while IDS is dead, that the perimeter is alive and well.  It’s your definition that blows as well as your focus.  As you recall, what I actually said was that the perimeter isn’t going away, it’s multiplying, but the diameter is collapsing.  Now we have hundreds if not thousands of perimeters as we collapse defenses down to the hosts themselves — and with virtualization, down to the VM and beyond.

Threats abound. End points are attacked. Protecting assets is more and more complicated and more and more expensive. Network security is hard for the typical end user to understand: all those packets, and routes, and NAT, and PAT. Much simpler, say the de-perimeterizationists, to leave the network wide open and protect the end points, applications, data and users.

It’s an ironic use of the word "open."  Yes, the network should be "open" to allow for the highest-performing, most stable, resilient, and reliable plumbing available.  Network intelligence should be provided by service layers and our focus should be on secure operating systems, applications and readily defensible data stores.  You’re damned right we should protect the end points, applications, data and users — that’s the mission of information assurance!

This is what happens when you fling around terms like "risk management" when what you really mean is "mitigating threats and vulnerabilities."  They are NOT the same thing.  Information survivability and assurance are what you mean to say, but what comes out is "buy more kit."

Yeah, well, the reality is that the perimeter is being reinforced constantly. Dropping those defenses would be like removing the dikes around Holland. The perimeter is becoming more diverse, yes. When you start to visualize the perimeter, which must encompass all of an organization’s assets, one is reminded of the coast of England metaphor. In taking the measure of that perimeter the length is dependant on the scale. A view from space predicts a different measurement than a view from 100 meters or even 1 meter. Coast lines are fractal. So are network perimeters.

"THE perimeter" is not being reinforced, it’s being consolidated as it comes out of firewall refresh cycles, there’s a difference.  You accurately suggest that this is occurring constantly.  The reason for that is because the stuff we have just simply cannot defend our assets appropriately.

Folks like Microsoft understand this — look at Vista and Longhorn.  We’re getting closer to more secure operating systems.

Virtualization is driving the next great equalizer in the security industry and "network security" will become even more irrelevant.

Why don’t the two Richies and the faithful toy-happy squadrons of security lemmings get it instead of desperately struggling to tighten their grasp on the outdated notion of their glorious definition of "THE perimeter"?  That was a rhetorical question, by the way.

De-perimeterization (or re-perimeterization) garners panic in those whose gumboots have become mired in the murky swamps of the way things were; they can’t go forward and they can’t go back.  Better to sink in one’s socks than get your feet dirty in the mud by stepping out of your captive clogs, eh?

The threats aren’t the same.  The attackers aren’t the same.  Networks aren’t the same.  The tools, philosophy and techniques we use to secure them can’t afford to be, either.

Finally:

Disclaimer: I work for a vendor of network perimeter security appliances. But, keep in mind, I would not be working for a perimeter defense company if I did not truly believe that the answer lies in protecting our networks. If I believed otherwise I would work for a de-perimeterization vendor, if I could find one. 🙂

I can’t even bring myself to address this point.  I’ll let  Dan Weber do it instead.

/Hoff

Categories: Jackassery Tags:

Security Interoperability: Standards Are Great, Especially When They’re Yours…

September 19th, 2007 6 comments

Wow, this is a rant and a half…grab a beer, you’re going to need it…

Jon Robinson pens a lovely summary of the endpoint security software sprawl discussion we’ve been chatting about lately.

My original post on the matter is here. 

Specifically, he isolates what might appear to be diametrically-opposed perspectives on the matter; mine and Amrit Williams’ from BigFix.

No good story flows without a schism-inducing polarizing galvanic component, so Jon graciously obliges by proposing to slice the issue in half with the introduction of what amounts to a discussion of open versus proprietary approaches to security interoperability between components. 

I’m not sure that this is the right starting point to frame this discussion, and I’m not convinced that Amrit and I are actually at polar ends of the discussion.  I think we’re actually both describing the same behavior in the market, and whilst Amrit works for a company that produces endpoint agents, I think he’s discussing the issue at hand in a reasonably objective manner. 

We’ll get back to this in a second.  First, let’s peel back the skin from the skeleton a little.

Dissecting the Frog
Just like in high school, this is the messy part of science class where people either reveal their dark sides as they take deep lungfuls of formaldehyde vapor and hack the little amphibian victim to bits…or run shrieking from the room.

Jon comments on Amrit’s description of the "birth of the endpoint protection platform" while I prefer to describe it as the unnatural (but predictable) abortive by-product of industrial economic consolidation. The notion here — emotionally blanketed by the almost-unilateral hatred for anti-virus — is that we’ll see a:

"…convergence of desktop security functionality into a single product that delivers antivirus, antispyware, personal firewall and other styles of host intrusion prevention (for example, behavioral blocking) capabilities into a single and cohesive policy-managed solution."

I acknowledge this and agree that it’s happening.  I’m not very happy about *how* it’s manifesting itself, however.  We’re just ending up with endpoint oligopolies that still fail to provide a truly integrated and holistic security solution, and when a new class of threat or vulnerability arises, we get another agent — or chunky bit grafted onto the Super Agent from some acquisition that clumsily  ends up as a product roadmap feature due to market opportunism. 

You know, like DLP, NAC, WAF… 😉

One might suggest that if the "platform" as described was an open, standards-based framework that defined how to operate and communicate, acted as a skeleton upon which to hang the muscular offerings of any vendor, and provided a methodology and communications protocol that allowed them all to work together and intercommunicate using a common nervous system, that would be excellent.

We would end up with a much lighter-weight intelligent threat defense mechanism.  Adaptive and open, flexible and accommodating.  Modular, with little overhead.
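Just to make the thought experiment concrete, here’s a minimal sketch of what such a framework contract might look like — every name below is hypothetical, since no such open standard actually exists:

```python
from abc import ABC, abstractmethod


class SecurityComponent(ABC):
    """Hypothetical contract that any vendor's module would implement."""

    @abstractmethod
    def inspect(self, event: dict) -> dict:
        """Examine an event; return a verdict plus any derived telemetry."""


class EndpointBus:
    """Toy 'common nervous system': one shared event stream, so (say) the
    AV module can reuse the host firewall's context instead of every
    agent re-collecting and re-analyzing the same data."""

    def __init__(self) -> None:
        self.components: list[SecurityComponent] = []

    def register(self, component: SecurityComponent) -> None:
        self.components.append(component)

    def publish(self, event: dict) -> list[dict]:
        verdicts: list[dict] = []
        for component in self.components:
            # Each module sees the raw event plus every verdict so far.
            verdicts.append(component.inspect({**event, "prior": list(verdicts)}))
        return verdicts
```

The point isn’t the few lines of plumbing; it’s that any vendor could plug into something like this without shipping yet another resident agent.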

But it isn’t, and it won’t be.

Unfortunately, all the "Endpoint Protection Platform" illustrates, as I pointed out previously, is that the same consolidation issues pervasive in the network world are happening now at the endpoint.  All those network-based firewalls, IPS’s, AV gateways, IDS’s, etc. are smooshing into UTM platforms (you can call it whatever you want) and what we’re ending up with is the software equivalent of UTM on the endpoint.

SuperAgents or "Endpoint Protection Platforms" represent the desperately selfish grasping by security vendors (large and small) to remain relevant in an ever-shrinking marketspace.  Just like most UTM offerings at the network level.  Since piling up individual endpoint software hasn’t solved the problem, it must hold true that one is better than many, right?

Each of these vendors producing "Super Agent" frameworks has its own standards.  Each of them is battling furiously to be "THE" standard, and we’re still not solving the problem.

Man, that stinks
Jon added some color to my point that the failure to interoperate is really an economic issue, not a technical one, wherein I described "greed" as the cause.  I got a chuckle out of his response:

Hoff goes on to say that he doesn’t think we will ever see this type of interoperability among vendors because of greed. I wouldn’t blame greed though, unless by greed he means an unwillingness to collaborate because they believe their value lies in their micro-monopoly patents and their ability to lock customers in their solution. (Little do they know, that they are making themselves less valuable by doing so.) No, there isn’t any interoperability because customers aren’t demanding it.

Some might suggest that my logic is flawed and the market demonstrates it with an example like where GE booted out Symantec in favor of 350,000 seats of Sophos:

Seeking to improve manageability and reduce costs which arise from managing multiple solutions, GE will introduce Network Access Control (NAC) as well as antivirus and client firewall protection which forms part of the Sophos Security and Control solution.

Sophos CEO, Steve Munford, said companies want a single integrated agent that handles all aspects of endpoint security on each PC.                     

"Other vendors offer security suites that are little more than a bunch of separate applications bundled together, all vying for resources on the user’s computer," he said.    

"Enterprises tell us that the tide has turned, and the place for NAC and integrated security is at the endpoint."

While I philosophically don’t agree with the CEO’s comment relating to the need for a Super Agent, the last line is the most important: "…the place for…integrated security is at the endpoint."  They didn’t say Super Agent, they said "integrated."  If we had integration and interoperability, the customer wouldn’t care about how many "components" it would take so long as it was cost-effective and easily managed.  That’s the rub, because we don’t.

So I get the point here.  Super Agents are our only logical choice, right?  No!

I suggest that while we make progress toward secure OS’s and applications, instead of moving from tons of agents to a Super Agent, the more intelligent approach would be a graceful introduction of an interoperable framework of open-standards based protocols that allow these components to work together as the "natural" consolidation effect takes its course and markets become features.  Don’t go from one extreme to the other.

I have yet to find anyone that actually believes that deploying a monolithic magnum malware mediator that maintains a modality of mediocrity masking a monoculture  is a good idea.

…unless, of course, all you care about is taking the cost out of operationalizing security and not actually reducing risk.  For some reason, these are being positioned by people as mutually-exclusive.  The same argument holds true in the network space; in some regards we’re settling for "good enough" instead of pushing to fix the problem and not the symptoms.

If people would vote with their wallets (which *is* what the Jericho Forum does, Rich) we wouldn’t waste our time yapping about this, we’d be busy solving issues relevant to the business, not the sales quotas of security company sales reps.  I guess that’s what GE did, but they had a choice.  As the biggest IT consumer on the planet (so I’ve been told), they could have driven their vendors together instead of apart.

People are loath to think that progress can be made in this regard.  That’s a shame, because it can, and it has.  It may not be as public as you think, but there are people really working hard behind the scenes to make the operating systems, applications and protocols more secure.

As Jon points out, and many others like Ranum have said thousands of times before, we wouldn’t need half of these individual agents — or even Super Agents — if the operating systems and software were secure in the first place. 

Run, Forrest, Run!
This is where people roll their eyes and suggest that I’m copping out because I’m describing a problem that’s not going to be fixed anytime soon.  This is where they stop reading.  This is where they just keep plodding along on the Hamster Wheel of Pain and add that line item for either more endpoint agents or a roll-up to a Super Agent.

I suggest that those of you who subscribe to this theory are wrong (and probably have huge calves from all that running).  The first evidence of this is already showing up on shelves.  It’s not perfect, but it’s a start.

Take Vista, as an example.  Love it or hate it, it *is* a more secure operating system and it features a slew of functionality that is causing dread and panic in the security industry — especially from folks like Symantec, hence the antitrust suits in the EU.  If the OS becomes secure, how will we sell our Super Agents?  ANTI-TRUST!

Let me reiterate that while we make progress toward secure OS’s and applications, instead of going from tons of agents to a Super Agent, the more intelligent approach is a graceful introduction of an interoperable framework of open-standards based protocols that allow these components to work together as the "natural" consolidation effect takes its course and markets become features.  Don’t go from one extreme to the other.

Jon sums it up with the following describing solving the interoperability problem:

In short, let the market play out, rather than relying on and hoping for central planning. If customers demand it, it will emerge. There is no reason why there can’t be multiple standards competing for market share (look at all the different web syndication standards for example). Essentially, a standard would be collaboration between vendors to make their stuff play well together so they can win business. They create frameworks and APIs to make that happen more easily in the future so they can win business easier. If customers like it, it becomes a “standard”.

At any rate, I’m sitting in the Starbucks around the corner from tonight’s BeanSec! event.  We’re going to solve world hunger tonight — I wonder if a Super Agent will do that one day, too?

/Hoff

Categories: Endpoint Security, Open Standards Tags:

Captain Stupendous — Making the Obvious…Obvious! Jericho Redux…

September 19th, 2007 8 comments

Sometimes you have to hurt the ones you love. 

I’m sorry, Rich.  This hurts me more than it hurts you…honest.

The Mogull decides that rather than contribute meaningful dialog to discuss the meat of the topic at hand, he would rather contribute to the FUD regarding the messaging of the Jericho Forum that I was actually trying to wade through.

…and he tried to be funny.  Sober.  Painful combination.

In a deliciously ironic underscore to his BlogSlog, Rich caps off his post with a brilliant gem of obviousness of his own whilst chiding everyone else to politely "stay on message" even when he leaves the reservation himself:

"I formally
submit “buy secure stuff” as a really good one to keep us busy for a
while."

<phhhhhht> Kettle, come in over, this is Pot. <phhhhhhttt> Kettle, do you read, over? <phhhhhhht>  It’s really dark in here <phhhhhhttt>

So if we hit the rewind button for a second, let’s revisit Captain Stupendous’ illuminating commentary.  Yessir.  Captain Stupendous it is, Rich, since the franchise on Captain Obvious is plainly over-subscribed.

I spent my time in my last post suggesting that the Jericho Forum’s message is NOT that one should toss away their firewall.  I spent my time suggesting that rather than reacting to the oft-quoted and emotionally flammable marketing and messaging, folks should actually read their 10 Commandments as a framework.

I wish Rich would have read them because his post indicates to me that the sensational hyperbole he despises so much is hypocritically emanating from his own VoxHole. <sigh>

Here’s a very high-level generalization that I made which was to take the focus off of "throwing away your firewall":

Your perimeter *is* full of holes so what we need to do is fix the problems, not the symptoms.  That is the message.

And Senor Stupendous suggested:

Of course the perimeter is full of holes; I haven’t met a security professional who thinks otherwise. Of course our software generally sucks and we need secure platforms and protocols. But come on guys, making up new terms and freaking out over firewalls isn’t doing you any good. Anyone still think the network boundary is all you need? What? No hands? Just the “special” kid in back? Okay, good, we can move on now.

You’re missing the point — both theirs and mine.  I was restating the argument as a setup to the retort.  But who can resist teasing the mentally challenged for a quick guffaw, eh, Short Bus?

Here is the actual meat of the Jericho Commandments.  I’m thrilled that Rich has this all handled and doesn’t need any guidance.  However, given how I just spent my last two days, I know that these issues are not only relevant, but require an investment of time, energy, and strategic planning to make actionable and remind folks that they need to think as well as do.

I defy you to show me where this says "throw away your firewalls."

Repeat after me: THIS IS A FRAMEWORK and provides guidance and a rational, strategic approach to Enterprise Architecture and how security should be baked in.  Please read this without the FUDtastic taint:

[Images: the Jericho Forum Commandments, pages 1 and 2]

Rich sums up his opus with this piece of reasonable wisdom, which I wholeheartedly agree with:

You have some big companies on board and could use some serious pressure to kick those market forces into gear.

…and to warm the cockles of your heart, I submit they do and they are.  Spend a little time with Dr. John Meakin, Andrew Yeomans, Stephen Bonner, Nick Bleech, etc. and stop being so bloody American 😉  These guys practice what they preach and as I found out, have been for some time.

They refined the messaging some time ago.  Unload the baggage and give it a chance.

Look at the real message above and then see how your security program measures up against these topics and how your portfolio and roadmap provides for these capabilities.

Go forth and do stupendous things. <wink>

/Hoff

The British Are Coming! In Defense (Again) of the Jericho Forum…

September 17th, 2007 10 comments

The English are coming…and you need to give them a break.  I have.

Back in 2006, after numerous frustrating discussions dating back almost three years without a convincing conclusion, I was quoted in an SC Magazine article titled "World Without Frontiers" which debated quite harshly the Jericho Forum’s evangelism of a security mindset and architecture dubbed as "de-perimeterization."

Here’s part of what I said:

Some people dismiss Jericho as trying to re-invent the wheel. "While the group does an admirable job raising awareness, there is nothing particularly new either in what it suggests or even how it suggests we get there," says Chris Hoff, chief security strategist at Crossbeam Systems.

"There is a need for some additional technology and process re-tooling, some of which is here already – in fact, we now have an incredibly robust palette of resources to use. But why do we need such a long word for something we already know? You can dress something up as pretty as you like, but in my world that’s not called ‘deperimeterisation’, it’s called a common sense application of rational risk management aligned to the needs of the business."

Hoff insists the Forum’s vision is outmoded. "Its definition speaks to what amounts to a very technically focused set of IT security practices, rather than data survivability. What we should come to terms with is that confidentiality, integrity and availability will be compromised. It’s not a case of if, it’s a case of when.

The focus should be less on IT security and more on information survivability; a pervasive enterprise-wide risk management strategy and not a narrowly-focused excuse for more complex end-point products," he says.

But is Jericho just offering insight into the obvious? "Of course," says Hoff. "Its suggestion that "deperimeterisation" is somehow a new answer to a set of really diverse, complex and long-standing IT security issues… simply ignores the present and blames the past," he says.

"We don’t need to radically deconstruct the solutions universe to arrive at a more secure future. We just need to learn how to appropriately measure risk and quantify how and why we deploy technology to manage it. I admire Jericho’s effort, and identify with the need. But the problem needs to be solved, not renamed."

I have stated previously that this was an unfortunate reaction to the marketing of the message and not the message itself, and I’ve come to understand what the Jericho Forum’s mission and its messaging actually represents.  It’s a shame that it took me that long and that others continue to miss the point.

Today Mike Rothman commented about NetworkWorld’s coverage of the latest Jericho Forum in New York last week.  The byline of the article suggested that "U.S. network execs clinging to firewalls" and it seems we’re right back on the Hamster Wheel of Pain, perpetuating a cruel myth.

After all this time, it appears that the Jericho Forum is apparently still suffering from a failure to communicate — there exists a language gap — probably due to that allergic issue we had once to an English King and his wacky ideas relating to the governance of our "little island."  Shame, that.

This is one problem that this transplanted Kiwi-American (same Queen, after all) is motivated to fix.

Unfortunately, the Jericho Forum’s message has become polluted and marginalized thanks to a perpetuated imprecise suggestion that the Forum recommends that folks simply turn off their firewalls and IPS’s and plug their systems directly into the Internet, as-is.

That’s simply not the case, and in fact the Forum has recognized some of this messaging mess, and both softened and clarified the definition by way of the issuance of their "10 Commandments." 

You can call it what you like: de-perimeterization, re-perimeterization or radical externalization, but here’s what the Jericho Forum actually advocates, which you can read about here:

De-perimeterization explained
The huge explosion in business use of the Web protocols means that:

  • today the traditional "firewalled" approach to securing a network boundary is at best flawed, and at worst ineffective. Examples include:

    • business demands that tunnel through perimeters or bypass them altogether
    • IT products that cross the boundary, encapsulating their protocols within Web protocols
    • security exploits that use e-mail and Web to get through the perimeter.

  • to respond to future business needs, the break-down of the traditional distinctions between “your” network and “ours” is inevitable
  • increasingly, information will flow between business organizations over shared and third-party networks, so that ultimately the only reliable security strategy is to protect the information itself, rather than the network and the rest of the IT infrastructure

This trend is what we call “de-perimeterization”. It has been developing for several years now. We believe it must be central to all IT security strategies today.

The de-perimeterization solution

While traditional security solutions like network boundary technology will continue to have their roles, we must respond to their limitations. In a fully de-perimeterized network, every component will be independently secure, requiring systems and data protection on multiple levels, using a mixture of

  • encryption
  • inherently-secure computer protocols
  • inherently-secure computer systems
  • data-level authentication

The design principles that guide the development of such technology solutions are what we call our “Commandments”, which capture the essential requirements for IT security in a de-perimeterized world.

I was discussing these exact points today in a session at an Institute for Applied Network Security conference (and as I have before here) wherein I summarized this as the capability to:

Take a host with a secured OS, connect it into any network using whatever means you find appropriate, without regard for having to think about whether you’re on the "inside" or "outside."  Communicate securely, access and exchange data in policy-defined "zones of trust" using open, secure, authenticated and encrypted protocols.

Did you know that one of the largest eCommerce sites on the planet doesn’t even bother with firewalls in front of its webservers!?  Why?  Because with 10+ Gb/s of incoming HTTP and HTTP/S connections using ports 80 and 443 specifically, what would a firewall add that a set of ACLs that only allows ports 80/443 through to the webservers cannot?

Nothing.  Could a WAF add value?  Perhaps.  But until then, this is a clear example of a U.S. company that gets the utility of not adding security in terms of a firewall just because that’s the way it’s always been done.
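To put the triviality of that policy in code rather than router CLI syntax, here’s a minimal sketch (Python standing in for an ACL; all names illustrative):

```python
# The entire useful border policy for this web tier, expressed statelessly.
ALLOWED = {("tcp", 80), ("tcp", 443)}


def permit(proto: str, dst_port: int) -> bool:
    """Stateless ACL check. With only 80/443 exposed, a stateful firewall
    would re-derive this same allow/deny answer at far greater cost per
    connection at 10+ Gb/s."""
    return (proto, dst_port) in ALLOWED


assert permit("tcp", 443)        # web traffic passes
assert not permit("tcp", 22)     # everything else drops at the router
```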

From the NetworkWorld article, this is a clear example of the following:

The forum’s view of firewalls is that they no longer meet the needs of businesses that increasingly need to let in traffic to do business. Its deperimeterization thrust calls for using secure applications and firewall protections closer to user devices and servers.

It’s not about tossing away prior investment or abandoning one’s core beliefs, it’s about being honest as to the status of information security/protection/assurance, and adapting appropriately.

Your perimeter *is* full of holes so what we need to do is fix the problems, not the symptoms.  That is the message.

So consider me the self-appointed U.S. Ambassador to our friends across the pond.  The Jericho Forum’s message is worth considering and deserves your attention.

/Hoff

An Excellent Risk-Focused Virtualization Security Assessment & Hardening Document

September 17th, 2007 1 comment

Reader Colin was kind enough to forward me a link to a great security and hardening document which begins to address many of the elements I posted in my recent "Epiphany…" blog entry regarding virtualization and hardening documentation.

This document was produced by the folks at XtraVirt, who describe themselves as "…a company of innovative experts dedicated to VMware virtualisation, storage, operating systems and deployment methods."  These guys maintain an impressive cache of tools, whitepapers and commercial products focused on virtualization, many of which are available for download.

I’m rather annoyed and embarrassed that it took me this long to discover this site and its resources!

As a wonderful study in serendipity, I’ve recently signed up to contribute to the follow-on to the CIS Virtualization Benchmark that specifically addresses VMware’s ESX environment.   This draft is under construction now, and it represents a good first pass, but continues to need (IMHO) some additional focus on the network/vSwitch elements.

I respectfully suggest that many of the contents of the XtraVirt document need to make their way into the CIS draft.

One of the other really interesting approaches this document takes is to classify each of the potential hardening elements by risk expressed as a function of threat, likelihood, potential impact and countermeasure as measured against impact to C, I, and A.
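As a rough sketch of that classification style (the fields and the scoring rule here are my own illustration, not XtraVirt’s actual methodology):

```python
from dataclasses import dataclass


@dataclass
class HardeningItem:
    name: str
    threat: str
    likelihood: int   # 1 (rare) .. 5 (expected)
    impact_c: int     # impact on Confidentiality, 0..5
    impact_i: int     # impact on Integrity, 0..5
    impact_a: int     # impact on Availability, 0..5

    def risk_score(self) -> int:
        # Naive illustration: likelihood times the worst-case C/I/A impact.
        return self.likelihood * max(self.impact_c, self.impact_i, self.impact_a)


item = HardeningItem(
    name="Isolate VMotion traffic on a dedicated VLAN",
    threat="VM memory/state sniffed in transit",
    likelihood=3, impact_c=5, impact_i=2, impact_a=1,
)
print(item.name, "->", item.risk_score())  # -> 15
```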

Secondly, there is a much-needed section on the VirtualSwitch and network constituent functions.

Here’s the snapshot of the XtraVirt effort:

One of the more difficult challenges found when introducing virtualisation technologies to a new environment, whether it be your own or as a consultant to a client, can be gaining the understanding and support of the IT Security team, especially so if they haven’t been exposed to virtualisation technologies in the past.

As a Solutions Architect having faced this situation on several occasions and being tied up in weeks of claim and counter-claim about how secure VMware VI3 was, I tried several approaches; one was to simply email the published VMware security documents to them, and two was to sit down and explain why and how VI3 was inherently secure.

Both of these approaches could take weeks and at times frustration on both sides could lead to unnecessary discussions.  Although the VMware documents are excellent and pitched at the right level, I found that security team engagement could be limited and it wasn’t always enough to simply provide these on their own as the basis for a solution.

So the idea was sown to create the ‘VMware® VI3 Security Risk Assessment Template’ that could be repeatedly used as the basis for any VI3 design submission. There’s nothing particularly clever about it, the information is already out there, I just felt it needed to be presented in a customised way for IT Security review and approval.

This MS Word document template is designed to:

· Provide detail around the security measures designed into each major component of VI3

· Provide a ‘best practice’ security framework for VI3 designs that can be repeated again and again

· Detail real world scenarios that IT Security personnel can relate to their environment, including built-in countermeasures and additional configuration options.

· Significantly reduce the time and stress involved with gaining design approvals.

The idea is to take your own VI3 design and apply it to each of the major VI3 components in this template:

· ESX Server – Service Console

· ESX Server – Kernel

· ESX Server – Virtual Networking Layer

· Virtual Machines

· Virtual Storage

· VirtualCenter

This means that in most cases it’s just a case of filling in the gaps, and putting a stake in the ground as to which additional configuration options you wish to implement.  In all cases you end up with a document that should relate to your design and the IT Security teams have a specific proposal which details all the things they want to see and understand.

The first time I used it (on a particularly tough Security Advisor who had never seen VMware products I might add) I had nothing but great feedback which allowed my low level design to proceed with confidence and saved weeks of explanation and negotiation.

I’ve reached out to the guys at XtraVirt to both thank them and to gain some additional insight into their work.

I think this is a great effort.

Oh, did you want the link? 😉

/Hoff


Categories: Virtualization, VMware Tags:

BeanSec! Wednesday, September 19th – 6PM to ?

September 14th, 2007 2 comments

Yo!  BeanSec! is once again upon us.  Wednesday, September 19th, 2007.

BeanSec! is an informal meetup of information security professionals, researchers and academics in the Greater Boston area that meets the third Wednesday of each month. 

I say again, BeanSec! is hosted the third Wednesday of every month.  Add it to your calendar.

Come get your grub on.  Lots of good people show up.  Really.

Unlike other meetings, you will not be expected to pay dues, “join up”, present a zero-day exploit, or defend your dissertation to attend. Map to the Enormous Room in Cambridge. 

Enormous Room: 567 Mass Ave, Cambridge 02139.  Look for the Elephant on the left door next to the Central Kitchen entrance.  Come upstairs. We sit on the left hand side…

Don’t worry about being "late" because most people just show up when they can.  6:30 is a good time to aim for.  We’ll try and save you a seat.  There is a parking garage across the street and 1 block down, or you can try the streets (or take the T).

In case you’re wondering, we’re getting about 30-40 people on average per BeanSec!  Weld, 0Day and I have been at this for just over a year and without actually *doing* anything, it’s turned out swell.

We’ve had some really interesting people of note attend lately (I’m not going to tell you who…you’ll just have to come and find out.)  At around 9:00pm or so, the DJ shows up…as do the rather nice looking people from the Cambridge area, so if that’s your scene, you can geek out first and then get your thang on.

The food selection is basically high-end finger-food appetizers and the drinks are really good; an attentive staff and eclectic clientèle make the joint fun for people watching.  I’ll generally annoy you into participating somehow, even if it’s just fetching napkins. 😉

See you there.

/Hoff

Categories: BeanSec! Tags:

Epiphany: For Network/InfoSec Folks, the Virtualization Security Awareness Problem All Starts With the vSwitch…

September 13th, 2007 9 comments

It’s all about awareness, and as you’ll come to read, there’s just not enough of it when we talk about the security implications of virtualizing our infrastructure.  We can fix this, however, without resorting to FUD and without vendors knee-jerking us into spin battles. 

I’m attending VMworld 2007 in San Francisco, looking specifically for security-focused information in the general sessions and hands-on labs.  I attended the following sessions and a hands-on lab yesterday:

  • Security Hardening and Monitoring of VMware Infrastructure 3 (Hands-on Lab)
  • Security Architecture Design and Hardening VMware Infrastructure 3 (General Session)
  • Using the Secure Technical Implementation Guide (STIG) with VMware Infrastructure 3 (General Session)

I had high hopes that the content and focus of these sessions would live up to the hype surrounding the statements by VMware reps at the show.  As Dennis Fisher from SearchSecurity wrote, there are some powerful statements coming from the VMware camp on the security of virtualized environments and how they are safer than non-virtualized deployments.  These are bold and, in my opinion, dangerously generalized statements to make when backed up with examples which beg for context:

To help assuage customers’ fears, VMWare executives and security engineers are going on the offensive and touting the company’s ESX Server as a more secure alternative to traditional computing setups.

Despite the complexity of virtualized environments, they are inherently more secure than normal one-to-one hardware and operating system environments because of the hypervisor’s ability to enforce isolation among the virtual machines, Mukundi Gunti, a security engineer at VMWare said in a session on security and virtualization Tuesday.

"It’s a much more complex architecture with a lot of moving parts. There are a lot of misconceptions about security and virtualization," said Jim Weingarten, senior technical alliances manager at VMWare, who presented with Gunti. "Virtual machines are safer."

So I undertook my mission of better understanding how these statements could be substantiated empirically and attended the sessions/labs with as much of an open mind as I could given the fact that I’m a crusty old security dweeb.

Security Hardening and Monitoring of VMware Infrastructure 3 (Hands-on Lab)

The security hardening and monitoring hands-on lab was very basic and focused on general unix hardening of the underlying RH OS as well as SWATCH log monitoring.  The labs represented the absolute minimum that one would expect to see performed when placing any Unix/Linux based host into a network environment.  Very little was specific to virtualization.  This session presented absolutely nothing new or informative.

Security Architecture Design and Hardening VMware Infrastructure 3 (General Session)
The security architecture design and hardening session was at the very end of the day and was packed with attendees.  Unfortunately, it was very much a re-cap of the hands-on lab but did touch on a couple of network-centric design elements (isolating the VMotion/VIC Management ports onto separate NICs/VLANs, etc) as well as some very interesting information regarding the security of the virtual switch (vSwitch.)  More on this below, because it’s very interesting.

The Epiphany
This is when it occurred to me, that given the general roles and constituent responsibilities of the attendees, most of whom are not dedicated network or security folks, the disconnect between the "Leviathan Force" (the network and network security admins) and the "Sysadmins" (the server/VMM administrators) was little more than the mashup of a classic turf battle and differing security mindsets combined with a lack of network and information security-focused educational campaigning on the part of VMware.

After the talk, I got to spend about 30 minutes chatting with VMware’s Kirk Larsen (Engineering Product Security Officer) and Banjot Chanana (Product Manager for Platform Security) about the lack of network-centric security information in the presentations and how we could fix that.

What I suggested was that since now we see the collapse and convergence of the network and the compute stacks into the virtualization platforms, the operational impact of what that means to the SysAdmins and Network/Information Security folks is huge. 

The former now own the keys to the castle whilst the latter now "enjoy" the loss of visibility and operational control.  Because the network and InfoSec folks aren’t competently trained in the operation of the VM platforms and the SysAdmins aren’t competently trained in securing (holistically — from the host through to the network) them, there’s a natural tendency for conflict.

So here’s what VMware needs to do immediately:

  1. Add a series of whitepapers and sessions, targeted at the network and InfoSec security staffers, that speak directly to assuaging the fear of the virtualization unknowns. 
  2. Provide more detail and solicit feedback relating to the technical roadmaps that will get the network and InfoSec staffer’s visibility and control back by including them in the process, not isolating them from it.
  3. Assign a VMware community ombudsman to provide outreach and make it his/her responsibility to make folks in our industry aware — and not by soundbites that sponsor contention — that there are really some excellent security enhancements that virtualization (and specifically VMware) bring to the table.
  4. Make more security acquisitions and form more partnerships.  Determina was good, but as much as we need "prevention" we need "detection" — we’ve lost visibility, so don’t ignore the basics.
  5. Stop fighting "FUD" with "IAFNAB" (It’s a feature, not a bug) responses
  6. Give the network and InfoSec folks solid information and guidance against which we can build budgets to secure the virtualization infrastructure before it’s deployed, not scrap for it after it’s already in production and hackbait.
  7. Address the possibility of virtualization security horizon events like Blue Pill and Skoudis’ VM Jail escapes head-on and work with us to mitigate them.
  8. Don’t make the mistakes Cisco does and just show pretty security architecture and strategy slides featuring roadmaps that are 80% vapor and call it a success.
  9. Leverage the talent pool in the security/network space to help build great and secure products; don’t think that the only folks you have to target are the cost-conscious CIO and the SysAdmins.
  10. Rebuild and earn our trust that the virtualization gravy train isn’t an end run around the last 10 years we’ve spent trying to secure our data and assets.  Get us involved.

I will tell you that both Kirk and Banjot recognize the need for these activities and are very passionate about solving them.  I look forward to their efforts. 

In the meantime, a lot of gaps can be closed in a very short period by disseminating some basic network and security-focused information.  For example, from the security architecture session, did you know that the vSwitch (fabric) is positioned as being more secure than a standard L2/L3 switch because it is not susceptible to the following attacks:

  • Double Encapsulation
  • Spanning Tree Floods
  • Random Frames
  • MAC Address Flooding
  • 802.1q and ISL Tagging
  • Multicast Brute Forcing

Basic stuff, but very interesting.  Did you know that the vSwitch doesn’t rely on ARP caches or CAM tables for switching/forwarding decisions?  Did you know that ESX features a built-in firewall (that stomps on IPtables?)  Did you know that the vSwitch has features built-in that limit MAC address flux and provides features such as traffic shaping and promiscuous mode for sniffing?

Many in the network and InfoSec career paths do not, and they should.
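If it helps, here’s a back-of-the-napkin model (mine, not VMware’s code) of why the lack of a learned CAM table alone takes MAC address flooding off that list: a traditional switch’s table can be exhausted until frames for legitimate hosts flood out every port, while a vSwitch-style table is assigned by the hypervisor and has nothing to exhaust:

```python
class LearningSwitch:
    """Classic L2 switch: learns source MACs into a finite CAM table."""

    def __init__(self, cam_capacity: int = 4):
        self.cam: dict[str, int] = {}
        self.capacity = cam_capacity

    def learn(self, src_mac: str, port: int) -> None:
        # Once the table is full, later (legitimate) MACs never get learned.
        if src_mac in self.cam or len(self.cam) < self.capacity:
            self.cam[src_mac] = port

    def egress(self, dst_mac: str) -> str:
        # Unknown destination -> flood out all ports, where anyone can sniff.
        return f"port {self.cam[dst_mac]}" if dst_mac in self.cam else "ALL ports (flooded)"


class AssignedPortSwitch:
    """vSwitch-style forwarding: the hypervisor assigns each vNIC's MAC to a
    port at configuration time, so there is no learned state to overflow."""

    def __init__(self, assignments: dict[str, int]):
        self.table = dict(assignments)

    def egress(self, dst_mac: str) -> str:
        return f"port {self.table[dst_mac]}" if dst_mac in self.table else "drop"


sw = LearningSwitch()
for i in range(1000):                        # macof-style CAM flood
    sw.learn(f"02:00:00:00:{i >> 8:02x}:{i & 0xff:02x}", port=1)
sw.learn("0a:0b:0c:0d:0e:0f", port=2)        # victim arrives too late
print(sw.egress("0a:0b:0c:0d:0e:0f"))        # -> ALL ports (flooded)

vs = AssignedPortSwitch({"0a:0b:0c:0d:0e:0f": 2})
print(vs.egress("0a:0b:0c:0d:0e:0f"))        # -> port 2, flood or no flood
```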

I’m going to keep in touch with Kirk and Banjot and help to ensure that this outreach is given the attention it deserves.  It will be good for VMware and very good for our community.  Virtualization is here to stay and we can’t afford to maintain this stare-down much longer.

I’m off to the show this morning to investigate some of the vendors like Catbird and Reflex and see what they are doing with their efforts.

/Hoff

 

Take5 (Episode #6) – Five Questions for Andy Jaquith, Yankee Group Analyst and Metrician…

September 13th, 2007 3 comments

This sixth episode of Take5 interviews Andy Jaquith, Yankee Group analyst and champion of all things Metric…I must tell you that Andy’s answers to my interview questions were amazing to read and I really appreciate the thought and effort he put into this.

First a little background on the victim:

Andrew Jaquith is a program manager in Yankee Group’s Enabling Technologies Enterprise group with expertise in portable digital identity and web application security. As Yankee Group’s lead security analyst, Jaquith drives the company’s security research agenda and researches disruptive technologies that enable tomorrow’s Anywhere Enterprise™ to secure its information assets.

Jaquith has 15 years of IT experience. Before joining Yankee Group, he co-founded and served as program director at @stake, Inc., a security consulting pioneer, which Symantec Corporation acquired in 2004. Before @stake, Jaquith held project manager and business analyst positions at Cambridge Technology Partners and FedEx Corporation.

His application security and metrics research has been featured in publications such as CIO, CSO and IEEE Security & Privacy. In addition, Jaquith is the co-developer of a popular open source wiki software package. He is also the author of the recently released Pearson Addison-Wesley book, Security Metrics: Replacing Fear, Uncertainty and Doubt, which has been praised by reviewers as both “sparkling and witty” and “one of the best written security books ever.” Jaquith holds a B.A. degree in economics and political science from Yale University.

Questions:

1) Metrics.  Why is this such a contentious topic?  Isn't the basic axiom of "you can't manage what you don't measure" just common sense?  A discussion on metrics evokes very passionate discussion amongst both proponents and opponents alike.  Why are we still debating the utility of measurement?


The arguments over metrics are overstated, but to the extent they are contentious, it is because "metrics" means different things to different people. For some people, who take a risk-centric view of security, metrics are about estimating risk based on a model. I'd put Pete Lindstrom, Russell Cameron Thomas and Alex Hutton in this camp. For those with an IT operations background, metrics are what you get when you measure ongoing activities. Rich Bejtlich and I are probably closer to this view of the world. And there is a third camp that feels metrics should be all about financial measures, which brings us into the whole "return on security investment" topic. A lot of the ALE crowd thinks this is what metrics ought to be about. Just about every security certification course (SANS, CISSP) talks about ALE, for reasons I cannot fathom.

Once you understand that a person's point of view of "metrics" is going to be different depending on the camp they are in -- risk, operations or financial -- you can see why there might be some controversy between these three camps. There's also a fourth group that takes a look at the fracas and says, "I know why measuring things matters, but I don't believe a word any of you are talking about." That's Mike Rothman's view, I suspect.

Personally, I have always taken the view that metrics should measure things as they are (the second perspective), not as you imagine, model or expect them to be. That's another way of saying that I am an empiricist. If you collect data on things and swirl them around in a blender, interesting things will stratify out.

Putting it another way: I am a measurer rather than a modeler. I don't claim to know what the most important security metrics are. But I do know that people measure certain things, and that those things give them insights into their firms' performance. To that end, I've got about 100 metrics documented in my book; these are largely based on what people tell me they measure. Dan Geer likes to say, "it almost doesn't matter what you measure, but get started and measure something." The point of my book, largely, is to give some ideas about what those somethings might be, and to suggest techniques for analyzing the data once you have them.

Metrics aren't really that contentious. Just about everyone in the securitymetrics.org community is pretty friendly and courteous. It's a "big tent." Most of the differences are with respect to inclination. But, outside of the "metrics community" it really comes down to a basic question of belief: you either believe that security can be measured or you don't.

The way you phrased your question, by the way, implies that you probably align a little more closely with my operational/empiricist view of metrics. But I'd expect that, Chris -- you've been a CSO, and in charge of operational stuff before. :)

2) You've got a storied background from FedEx to @Stake to the Yankee Group.  I see your experience trending from the operational to the analytical.  How much of your operational experience lends itself to the practical collection and presentation of metrics -- specifically security metrics?  Does your broad experience help you in choosing what to measure and how?


That's a keen insight, and one I haven't thought of before. You've caused me to get all introspective all of a sudden. Let me see if I can unravel the winding path that's gotten me to where I am today.

My early career was spent as an IT analyst at Roadway, a serious, operationally-focused trucking firm. You know those large trailers you see on the highways with "ROADWAY" on them? That's the company I was with. They had a reputation as being like the Marines. Now, I wasn't involved in the actual day-to-day operations side of the business, but when you work in IT for a company like that you get to know the business side. As part of my training I had to do "ride alongs," morning deliveries and customer visits. Later, I moved to the contract logistics side of the house, where I helped plan IT systems for transportation brokerage services and contract warehouses the company ran. The logistics division was the part of Roadway that was actually acquired by FedEx.

I think warehouses are just fascinating. They are one hell of a lot more IT intensive than you might think. I don't just mean bar code readers, forklifts and inventory control systems; I mean also the decision support systems that produce metrics used for analysis. For example, warehouses measure an overall metric for efficiency called "inventory turns" that describes how fast your stock moves through the warehouse. If you put something in on January 1 and move it out on December 31 of the same year, that part has a "velocity" of 1 turn per year. Because warehouses are real estate like any other, you can spread out your fixed costs by increasing the number of turns through the warehouse.

For example, one of the reasons why Dell -- a former customer of mine at Roadway -- was successful was that they figured out how to make their suppliers hold their inventory for them and deliver it to final assembly on a "just-in-time" (JIT) basis, instead of keeping lots of inventory on hand themselves. That enabled them to increase the number of turns through their warehouses to something like 40 per year, when the average for manufacturing was like 12. That efficiency gain translated directly to profitability. (Digression: Apple, by the way, has lately been doing about 50 turns a year through their warehouses. Any wonder why they make as much money on PCs as HP, who has 6 times more market share? It's not *just* because they choose their markets so that they don't get suckered into the low margin part of the business; it's also because their supply chain operations are phenomenally efficient.)
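(To make the arithmetic explicit -- a toy sketch using the illustrative figures from above, not audited numbers:)

```python
def inventory_turns(annual_cogs: float, average_inventory: float) -> float:
    """Textbook definition: cost of goods sold / average inventory value."""
    return annual_cogs / average_inventory

# A part that goes in January 1 and out December 31 turns exactly once:
print(inventory_turns(annual_cogs=1_000, average_inventory=1_000))   # 1.0

# The same warehouse footprint spreads its fixed cost thinner as turns rise:
for firm, turns in [("manufacturing average", 12), ("Dell", 40), ("Apple", 50)]:
    print(f"{firm}: fixed cost borne per turn = {1 / turns:.1%}")
```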

Another thing I think is fascinating about warehouses is how you account for inventory. Most operators divide their inventory into "A", "B" and "C" goods based on how fast they turn through the warehouse. The "A" parts might circulate 10-50x faster than the C parts. So, a direct consequence is that when you lay out a warehouse you do it so that you can pick and ship your A parts fastest. The faster you do that, the more efficient your labor force and the less it costs you to run things. Neat, huh?

Now, I mention these things not strictly speaking to show you what a smartypants I am about supply chain operations. The real point is to show how serious operational decisions are made based on deep analytics. Everything I just mentioned can be modeled and measured: where you site the warehouses themselves, how you design the warehouse to maximize your ability to pick and ship the highest-velocity items, and what your key indicators are. There's a virtuous feedback loop in place that helps operators understand where they are spending their time and money, and that in turn drives new innovations that increase efficiencies.

In supply chain, the analytics are the key to absolutely everything. And they are critical in an industry where costs matter. In that regard, manufacturing shares a lot with investment banking: leading firms are willing to invest a phenomenal amount of time and money to shave off a few basis points on a Treasury bill derivative. But this is done with experiential, analytical data used for decision support that is matched with a model. You'd never be able to justify redesigning a warehouse or hiring all of Yale's graduating math majors unless you could quantify and measure the impact of those investments on the processes themselves. Experiential data feeds, and improves, the model.

In contrast, when you look at security you see nothing like this. Fear and uncertainty rule, and almost every spending decision is made on the basis of intuition rather than facts. Acquisition costs matter, but operational costs don't. Can you imagine what would happen if you plucked a seasoned supply-chain operations manager out of a warehouse, plonked him down in a chair, and asked him to watch his security counterpart in action? Bear in mind this is a world where his CSO friend is told that the answer to everything is "buy our software" or "install our appliance." The warehouse guy would look at him like he had two heads. Because in his world, you don't spray dollars all over the place until you have a detailed, grounded, empirical view of what your processes are all about. You simply don't have the budget to do it any other way.

But in security, the operations side of things is so immature, and the gullibility of CIOs and CEOs so high, that they are willing to write checks without knowing whether the things they are buying actually work. And by "work," I don't mean "can be shown to stop a certain number of viruses at the border"; I mean "can be shown to decrease time required to respond to a security incident" or "has increased the company's ability to share information without incurring extra costs," or "has cut the pro-rata share of our IT operations spent on rebuilding desktops."

Putting myself in the warehouse manager's shoes again: for security, I'd like to know why nobody talks about activity-based costing. Or about process metrics -- that is, cycle times for everyday security activities -- in a serious way. Or benchmarking -- does my firm have twice as many security defects in our web applications as yours? Are we in the first, second, third or fourth quartiles?

If the large security companies were serious, we'd have a firmer grip on the activity, impact and cost side of the ledger. For example, why won't AV companies disclose how much malware is actually circulating within their customer bases, despite their promises of "total protection"? When the WMF zero-day exploit came out, how come none of the security companies knew how many of their customers were infected? And how much the cleanup efforts cost? Either nobody knew, or nobody wanted to tell. I think it's the former. If I were in the shoes of my Roadway operational friend, I'd be pissed off about the complete lack of feedback between spending, activities, impact and cost.

If this sounds like a very odd take on security, it is. My mentor and 
former boss, Dan Geer, likes to say that there are a lot of people 
who don't have classical security training, but who bring "hybrid 
vigor" to the field. I identify with that. With my metrics research, 
I just want to see if we can bring serious analytic rigor to a field 
that has resisted it for so long. And I mean that in an operational 
way, not a risk-equation way.

So, that's an exceptionally long-winded way of saying "yes" to your 
question -- I've trended from operational to analytical. I'm not sure 
that my past experience has necessarily helped me pick particular 
security metrics per se, but it has definitely biased me towards 
those that are operational rather than risk-based.

3) You've recently published your book.  I think it was a great appetite-whetter, but
I was left -- as were, I think, many of us who are members of the "lazy guild" -- wanting
more.  Do you plan to follow up with a metrics toolkit of sorts?  You know, a templated
guide -- Metrics for Dummies?


You know, that's a great point. The fabulous blogger Layer 8, who 
gave my book an otherwise stunning review that I am very grateful for 
("I tucked myself into bed, hoping to sleep—but I could not sleep 
until I had read Security Metrics cover to cover. It was That 
Good."), also had that same reservation. Her comment was, "that final 
chapter just stopped short and dumped me off the end of the book, 
without so much as a fare-thee-well Final Overall Summary. It just 
stopped, and without another word, put on its clothes and went home". 
Comparing my prose to a one night stand is pretty funny, and a 
compliment.

Ironically, as the deadline for the book drew near, I had this great 
idea that I'd put in a little cheat-sheet in the back, either as an 
appendix or as part of the endpapers. But as with many things, I 
simply ran out of time. I did what Microsoft did to get Vista out the 
door -- I had to cut features and ship the fargin' bastid.

One of the great things about writing a book is that people write you 
letters when they like or loathe something they read. Just about all 
of my feedback has been very positive, and I have received a number 
of very thoughtful comments that shed light on what readers' 
companies are doing with metrics. I hope to use the feedback I've 
gotten to help me put together a "cheat sheet" that will boil the 
metrics I discuss in the book into something easier to digest.

4) You've written about the impending death of traditional Anti-Virus technology and its
evolution to combat the greater threats from adaptive Malware.  What role do you think
virtualization technology that provides a sandboxed browsing environment will have in
this space, specifically on client-side security?


It's pretty obvious that we need to do something to shore up the 
shortcomings of signature-based anti-malware software. I regularly 
check out a few of the anti-virus benchmarking services, like the 
OITC site that aggregates the Virustotal scans. And I talk to a 
number of anti-malware companies who tell me things they are seeing. 
It's pretty clear that current approaches are running out of gas. All 
you have to do is look at the numbers: unique malware samples are 
doubling every year, and detection rates for previously-unseen 
malware range from the single digits to the 80% mark. For an industry 
that has long said they offered "total protection," anything less 
than 100% is a black eye.

Virtualization is one of several alternative approaches that vendors 
are using to help boost detection rates. The idea with virtualization 
is to run a piece of suspected malware in a virtual machine to see 
what it does. If, after the fact, you determine that it did something 
naughty, you can block it from running in the real environment. It 
sounds like a good approach to me, and is best used in combination 
with other technologies.
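
As a rough sketch of the idea -- the sandbox interface here is 
entirely hypothetical, not any particular product's API:

    SUSPICIOUS_BEHAVIORS = {
        "writes_to_system32",
        "installs_autorun_key",
        "opens_outbound_irc",
        "disables_av_service",
    }

    def run_in_vm(sample_path):
        """Detonate the sample in a disposable VM and return the set
        of behaviors observed. Stubbed out here; assume a real
        sandbox backend sits behind this call."""
        raise NotImplementedError("wire this up to an actual sandbox")

    def verdict(sample_path):
        # If the sample did something naughty inside the VM, block it
        # from ever running in the real environment.
        behaviors = run_in_vm(sample_path)
        if behaviors & SUSPICIOUS_BEHAVIORS:
            return "block"
        return "allow"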

Now, I'm not positive how pervasive this is going to be on the 
desktop. Existing products are already pretty resource-hungry. 
Virtualization would add to the burden. You've probably heard people 
joke: "thank God computers are dual-core these days, because we need 
one of 'em to run the security software on." But I do think that 
virtualized environments used for malware detection will become a 
fixture in gateways and appliances.

Other emergent ideas that complement virtualization are behavior 
blocking and herd intelligence. Herd intelligence -- a huge malware 
blacklist-in-the-sky -- is a natural services play, and I believe all 
successful anti-malware companies will have to embrace something like 
this to survive.
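
In sketch form, the services play is just a hash lookup against 
somebody else's very large database. The URL and response format 
below are made up:

    import hashlib
    import urllib.request

    REPUTATION_URL = "https://reputation.example.com/lookup/"  # hypothetical

    def file_sha256(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def is_blacklisted(path):
        # Ask the blacklist-in-the-sky about this file's hash; assume
        # the service answers "malicious" or "clean" in the body.
        digest = file_sha256(path)
        with urllib.request.urlopen(REPUTATION_URL + digest) as resp:
            return resp.read().decode().strip() == "malicious"

The interesting part isn't the code, of course; it's the telemetry 
from millions of endpoints feeding the database behind it.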

5) We've seen the emergence of some fairly important back-office critical applications
making their way to the Web (CRM, ERP, Financials), and now GoogleApps is staking a claim
in the SMB market.  How do you see the SaaS model affecting the management of security --
and ultimately risk -- over time?


Software as a service for security is already here. We've already 
seen fairly pervasive managed firewall service offerings -- the 
carriers and companies like IBM Global Services have been offering 
them for years. Firewalls still matter, but they are nowhere near as 
important to the overall defense posture as before. That's partly 
because companies need to put a lot of holes in the firewall. But 
it's also because some ports, like HTTP/HTTPS, are overloaded with 
lots of other things: web services, instant messaging, VPN tunnels 
and the like. It's a bit like the old college prank of filling a 
paper bag with shaving cream, sliding it under a shut door, then jumping 
on it and spraying the payload all over the room's occupants. HTTP is 
today's paper bag.

In the services realm, for more exciting action, look at what 
MessageLabs and Postini have done with the message hygiene space. At 
Yankee we've been telling our customers that there's no reason why an 
enterprise should bother to build bespoke gateway anti-spam and anti-
malware infrastructures any more. That's not just because we like 
MessageLabs or Postini. It's also because the managed services have a 
wider view of traffic than a single enterprise will ever have, and 
benefit from economies of scale on the research side, not to mention 
the actual operations.

Managed services have another hidden benefit: you can change 
services pretty easily if you're unhappy. That puts the service 
provider's incentives in the right place. Qualys, for example, 
understands this point very well; they know that customers will leave 
them in an instant if they stop innovating. And, of course, whenever 
you accumulate large amounts of performance data across your customer 
base, you can benchmark things. (A subject near and dear to my heart, 
as you know.)

With regards to the question about risk, I think managed services do 
change the risk posture a bit. On the one hand, the act of 
outsourcing an activity to an external party moves a portion of the 
operational risk to that party. This is the "transfer" option of the 
classic "ignore, mitigate, transfer" set of choices that risk 
management presents. Managed services also reduce political risk in a 
"cover your ass" sense, too, because if something goes wrong you can 
always point out that, for instance, lots of other people use the 
same vendor you use, which puts you all in the same risk category. 
This is, if you will, the "generally accepted practice" defense.

That said, particular managed services with large customer bases 
could accrue more risk by virtue of the fact that they are bigger 
targets for mischief. Do I think, for example, that spammers target 
some of their evasive techniques towards Postini and MessageLabs? I 
am sure they do. But I would still feel safer outsourcing to them 
rather than maintaining my own custom infrastructure.

Overall, I feel that managed services will have a "smoothing" or 
dampening effect on the risk postures of enterprises taken in 
aggregate, in the sense that they will decrease the volatility in 
risk relative to the broader set of enterprises (the "beta", if you 
will). Ideally, this should also mean a decrease in the *absolute* 
amount of risk. Putting this another way: if you're a rifle shooter, 
it's always better to see your bullets clustered closely together, 
even if they don't hit near the bull's eye, rather than seeing them 
near the center, but dispersed. Managed services, it seems to me, can 
help enterprises converge their overall levels of security -- put the 
bullets a little closer together instead of all over the place. 
Regulation, in cases where it is prescriptive, tends to do that too.
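
To put crude numbers on the rifle-shooter analogy -- these are 
invented, purely to show the shape of the claim:

    from statistics import mean, pstdev

    # Hypothetical annual loss scores for ten enterprises.
    self_managed    = [2, 35, 8, 60, 15, 90, 5, 40, 22, 73]    # dispersed
    managed_service = [28, 31, 26, 33, 30, 29, 27, 32, 30, 28] # clustered

    for label, risks in (("self-managed", self_managed),
                         ("managed service", managed_service)):
        print(f"{label}: mean={mean(risks):.1f}, stdev={pstdev(risks):.1f}")

Similar means, but the managed-service group's bullets land close 
together -- far lower volatility across the set.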

Bonus Question:
6) If you had one magical dashboard that could display 5 critical security metrics
to the Board/Executive Management, regardless of industry, what would those elements
be?


I would use the Balanced Scorecard, a creation of Harvard professors 
Kaplan and Norton. It divides executive management metrics into four 
perspectives: financial, internal operations, customer, and learning 
and growth. The idea is to create a dashboard that incorporates 6-8 
metrics into each perspective. The Balanced Scorecard is well known 
to the corner office, and is something that I think every security 
person should learn about. With a little work, I believe quite 
strongly that security metrics can be made to fit into this framework.

Now, you might ask yourself, I've spent all of this work organizing 
my IT security policies along the lines of ISO 17799/2700x, or COBIT, 
or ITIL. So why can't I put together a dashboard that organizes the 
measurements in those terms? What's wrong with the frameworks I've 
been using? Nothing, really, if you are a security person. But if you 
really want a "magic dashboard" that crosses over to the business 
units, I think basing scorecards on security frameworks is a bad 
idea. That's not because the frameworks are bad (in fact most of them 
are quite good), but because they aren't aligned with the business. 
I'd rather use a taxonomy the rest of the executive team can 
understand. Rather than make them understand a security or IT 
framework, I'd rather try to meet them halfway and frame things in 
terms of the way they think.

So, for example: for Financial metrics, I'd measure how much my IT 
security infrastructure is costing, straight up, and on an activity-
based perspective. I'd want to know how much it costs to secure each 
revenue-generating transaction; quick-and-dirty risk scores for 
revenue-generating and revenue/cost-accounting systems; DDOS downtime 
costs. For the Customer perspective I'd want to know the percentage 
and number of customers who have access to internal systems; cycle 
times for onboarding/offloading customer accounts; "toxicity rates" 
of customer data I manage; the number of privacy issues we've had; 
the percentage of customers who have consulted with the security 
team; number and kind of remediation costs of audit items that are 
customer-related; number and kind of regulatory audits completed per 
period, etc. The Internal Process perspective has some of the really easy 
things to measure, and is all about security ops: patching 
efficiency, coverage and control metrics, and the like. For Learning 
and Growth, it would be about threat/planning horizon metrics, 
security team consultations, employee training effectiveness and 
latency, and other issues that measure whether we're getting 
employees to exhibit the right behaviors and acquire the right skills.

That's meant to be an illustrative list rather than definitive, and I 
confess it is rather dense. At the risk of getting all Schneier on 
you, I'd refer your readers to the book for more details. Readers can 
pick and choose from the "catalog" and find metrics that work for 
their organizations.
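
If it helps, here's what such a scorecard might look like as a 
bare-bones data structure -- perspective names from Kaplan and 
Norton, metrics cherry-picked from the list above, values invented:

    scorecard = {
        "Financial": {
            "security cost per revenue-generating transaction ($)": 0.04,
            "DDOS downtime cost this quarter ($K)": 120,
        },
        "Customer": {
            "% of customers with access to internal systems": 12,
            "customer onboarding cycle time (days)": 3.5,
            "privacy issues this period": 2,
        },
        "Internal Process": {
            "median patch latency, critical systems (days)": 9,
            "coverage: desktops with current AV (%)": 97,
        },
        "Learning and Growth": {
            "employees completing security training (%)": 88,
            "security team consultations this quarter": 14,
        },
    }

    # Print the dashboard, one perspective at a time.
    for perspective, metrics in scorecard.items():
        print(perspective)
        for name, value in metrics.items():
            print(f"  {name}: {value}")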

Overall, I do think that we need to think a whole lot less about 
things like ISO and a whole lot more about things like the Balanced 
Scorecard. We need to stop erecting temples to Securityness that 
executives don't give a damn about and won't be persuaded to enter. 
And when we focus just on dollars, ALE and "security ROI", we make 
things too simple. We obscure the richness of the data that we can 
gather, empirically, from the systems we already own. Ironically, the 
Balanced Scorecard itself was created to encourage executives to move 
beyond purely financial measures. Fifteen years later, you'd think we 
security practitioners would have taken the hint.
Categories: Risk Management, Security Metrics, Take5 Tags:

Speaking of Yesterday, Mr. Shimel, You Do Know It’s Not 2001, Right?

September 10th, 2007 2 comments

In response to my post regarding the CapGemini/GoogleApps relationship, in which I espoused the benefits of the upcoming service offering, Alan Shimel obviously forgot to take his meds and reached for a bizarre military-campaign metaphor in his post titled "Yesterday’s Argument, Tomorrow’s Solution."

I really tried to keep up with Alan’s logic in this post, but try as I might, I could not make heads or tails of his arguments, which seemed to contradict themselves and ultimately make the same argument I made in my own post.

As far as I can tell, Alan is suggesting that I’m out of touch with the realities of market economics and that security, privacy and compliance have no impact on the adoption of SaaS:

One of the classic mistakes that armies on the losing side make is
fighting the next war with the last wars weapons and tactics.  I am
afraid Mr Hoff is guilty as charged in talking about the Google/CapGemini deal.  In case you have not heard, CapGemini will offer Google Apps to the one million strong corporate desktops that it services.
 

Firstly, this announcement is less than 12 hours old.  I hardly see how I’m on the "losing" side of anything? I’ve been suggesting that Google is in a position to encroach upon and own multiple markets currently monopolized by titans.  Alan’s already disagreed with me on Microsoft vs. Google once before, but that’s not what this is about.  I really don’t understand what the heck he means by my supposed "guilt" in "taking the losing side."

Chris
does a nice job of explaining how CG will make money on this and some
of the advantages of Google Apps. However, Chris seems to side on the
camp of those who think that SaaS based, centrally managed applications
and the data that goes with it, will present compliance and security
concerns that could slow adoption. 

Um, yeah!  Want some electricity for that cave you’re living in!?  You’re not seriously suggesting that privacy, security and compliance don’t hinder the adoption of technology and services -- and more specifically, of centrally-hosted applications and data -- are you?

I say poppycock to that.

I guess you are.

I heard the same thing about Qualys storing vulnerability data 5 years
ago and over the intervening time have seen that argument melt away
except for maybe in the federal government space.  In fact Qualys has
now become the tester of choice for PCI compliance in many cases.  But
beyond that, the whole issue of outsourcing application hosting brings
me back to my days at Interliant, an early entrant into the ASP
market.  We hosted Lotus Notes, PeopleSoft and other enterprise level
applications. As well as managed security (mostly checkpoint firewalls,
which was sold to Akiva).

Just so I understand this, Alan is ignoring the history of my blog and then attempting to shore up his point by citing the poster child of Security SaaS for the last 6 years or so, Qualys.  For those of you who read my blog regularly,
you already know that (1) I am a huge proponent of SaaS, and (2) I was
a Qualys customer and advisory board member.  Alan obviously doesn’t
recognize either of those points.

To wit, storing scrubbed and encrypted vulnerability data (as Qualys does) is
quite different from storing unparsed, unencrypted sensitive corporate data that is intended to be shared collaboratively. 

The issue has not melted away, Alan…in fact, it’s the impetus of probably half of the security industry’s income statements, including yours.

One thing that we learned the hard way at Interliant is that people will not outsource applications which they consider critical and core
to the business.  So for instance, if they were an accounting firm,
they would probably not outsource the hosting and management of their
accounting software.  However, critical, non-core applications are good
candidates for outsourcing.  I think for the most part, this is exactly
where the Google Apps fall.  I think the success of hosted CRM like
Salesforce.com also shows that people are willing to outsource
critical, non-core applications.

So there’s been no movement in the adoption of SaaS from your experience 6 years ago at Interliant?  Look, SaaS is certainly on the uptake and it’s bringing new and interesting avenues to market for services that range from hosted apps to security, but it’s far from ubiquitous and it’s certainly got its fair share of scale, security and privacy concerns to deal with.

Poppycock away all you like, but riddle me this: how is it that you do not
consider email, spreadsheets, presentations and documents "…critical
and core to the business?"  I dare you to turn off your email for a week and tell me it’s not critical.

Now the fact that it is Google
after all, raises in my mind anyway, two other issues. One is the
privacy of my data from Google.  Is Google going to use that to hone
the ad words they serve up to me?  The other is that as Google
continues to grow, will it suffer from Microsoft like "evil empire"
syndrome, where people attach dark aspirations to everything they do.
I guess we will have to see how this plays out.

You just contradicted yourself and reinforced the exact point I made!  So now you’re concerned about privacy and hosted data?  That’s what my post was about entirely.

SaaS does and will absolutely continue to drive privacy concerns, especially for the very reasons at the end of your argument you make such a big point about highlighting.  I even talked about this in this post here titled "On-Demand SaaS Vendors Able to Secure Assets Better than Customers?"

I can’t figure out what point Alan’s making here; he seems to agree and disagree with my posting in the same post.

/Hoff