Archive for the ‘Risk Management’ Category

Risky Business — The Next Audit Cycle: Bellwether Test for Critical Production Virtualized Infrastructure

March 23rd, 2008 3 comments

I believe it’s fair to suggest that thus far, the adoption of virtualized infrastructure has been driven largely by consolidation and cost reduction.

In most cases the initial targets for consolidation through virtualization have focused on development environments, internally-facing infrastructure and non-critical application stacks and services.

Up until six months ago, my research indicated that most larger companies were not yet at the point where either critical applications/databases or those that were externally-facing were candidates for virtualization. 

As the virtualization platforms mature, as their management and mobility functionality provides leveraged improvements over physical, non-virtualized counterparts, and as the capabilities to provide resilient services emerge, there is mounting pressure to expand virtualization efforts to include these remaining services and functions.

With cost-reduction and availability improvements becoming more visible, companies are starting to tip-toe down the path of evaluating everything else for virtualization, including the critical application stacks, databases and externally-facing clusters that have long depended on physical infrastructure enhancements to ensure availability and resiliency.

In these "legacy" environments, HA capabilities are often provided by software-based clustering in the operating systems or applications, or via the network thanks to load balancers and the like.  Each of these solution sets is managed by a different team.  There’s a lot of complexity in making it all appear simple, secure and available.

This raises some very interesting questions that focus on assessing risk in these environments, in which duties and responsibilities are largely segmented and well-defined, versus their prospective virtualized counterparts, where the opposite is true.

If companies begin to virtualize and consolidate the applications, storage, servers, networking, security and high-availability capabilities into the virtualization platforms, where does the buck stop in terms of troubleshooting or assurance?  How does one assess risk?  How do we demonstrate compliance and security when "all the eggs are in one basket?"

I don’t think it’s accurate to suggest that the lack of mature security solutions has stalled the adoption of virtualization across the board, but I do think that as companies evaluate virtualization candidacy, security has been a difficult-to-quantify speed bump that has been danced around.

We’ve basically been playing a waiting game.  The debate over virtualization, and the inability to reach consensus on whether it increases or decreases risk posture, has left us at the point where we have taken the low-hanging fruit that is either non-critical or has resiliency built in, and simply consolidated it.  But now we’re at a crossroads: virtualization phase 2 has begun.

It’s time to put up or shut down…

Over the last year since my panel on virtualization security at RSA, I’ve been asking the same question in customer engagements and briefings:

How many of you have been audited by internal or external governance organizations against critical virtualized infrastructure that is in a production role and/or externally facing? 

A year ago, nobody raised their hands.  I wonder what it will look like this year?

If IT and Security professionals can’t agree on the relative "security" or risk increase/decrease that virtualization brings, what position do you think that leaves the auditors in?  They are basically going to measure relative compliance to guidelines prescribed by governance and regulatory requirements.  Taken quite literally, many production environments featuring virtualized production components would not pass an audit.  PCI/DSS comes to mind.

In virtualized environments we’ve lost visibility, we’ve lost separation of duties, and we’ve lost the inherent simplicity that functions spread across discrete physical entities provide.  Existing controls and processes get us only so far, and the technology crutches we used to depend on are buckling when we add the V-word to the mix.

We’ve seen technology initiatives such as VMware’s VMsafe, still 9-12 months out, that will help regain some purchase in these areas, but how does one address these issues with auditors today?

I’m looking forward to the answer to this question at RSA this year to evaluate how companies are dealing with GRC (governance, risk and compliance) audits in complex critical production environments.


Off The Cuff Review: Nemertes Research’s “Virtualization Risk Analysis”

February 12th, 2008 4 comments

I just finished reading a research paper from Andreas Antonopoulos of Nemertes titled "A risk analysis of large-scaled and dynamic virtual server environments."  You can find the piece here: 

Executive Summary

As virtualization has gained acceptance in corporate data centers, security has gone from afterthought to serious concern. Much of the focus has been on the technologies of virtualization rather than the operational, organizational and economic context. This comprehensive risk analysis examines the areas of risk in deployments of virtualized infrastructures and provides recommendations.

I was interested by two things immediately:

  1. While I completely agree that, in regard to virtualization and security, the focus has been on the "…technologies of virtualization rather than the operational, organizational and economic context," I’m not convinced there is an overwhelming consensus that "…security has gone from afterthought to serious concern," mostly because we’re just now getting to see "large-scaled and dynamic virtual server environments."  It’s still painted on, not baked in.  At least that’s how people react at my talks.
  2. Virtualization is about so much more than just servers.  To truly paint a picture of analyzing risk within "large-scaled and dynamic virtual server environments," you have to recognize that much of the complexity, and many of the issues associated specifically with security, stem from the operational and organizational elements of virtualizing storage, networking, applications, policies and data, and from the wholesale shift in how security is operationalized and who owns it within these environments.

I’ve excerpted the most relevant element of the issue Nemertes wanted to discuss:

With all the hype surrounding server virtualization come the inevitable security concerns: are virtual servers less secure? Are we introducing higher risk into the data center? For server virtualization to deliver benefits we have to examine the security risks. As with any new technology there is much uncertainty mixed in with promise. Part of the uncertainty arises because most companies do not have a good understanding of the real risks surrounding virtualization.

I’m easily confused…

While I feel the paper does a good job of describing the various stages of deployment and many of the "concerns" associated with server virtualization within those contexts, I’m left unsatisfied: I’m no more prepared to assess and manage risk regarding server virtualization than before.  I’m concerned that the term "risk" is being spread about rather liberally simply because a bit of math is present.

The formulaic "Virtualization Risk Assessment" section purports to establish a quantitative basis for computing "relative risk" in the assessment summary.  However, since the variables introduced in the formulae are subjective and specific to each asset, it’s odd that the summary table is then presented generically, as if to describe all assets:

Scenario                                  Vulnerability  Impact  Probability of Attack  Overall Risk
Single virtual server (hypervisor risk)   Low            High    Low                    Low/Medium
Basic services virtualized                Low            High    Medium                 Medium
Production applications virtualized       Medium         High    High                   Medium/High
Complete virtualization                   High           High    High                   High
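For concreteness, here’s a minimal sketch, entirely my own construction and not Nemertes’ actual formula, of the kind of qualitative scoring such a summary table implies: map Low/Medium/High onto numbers, multiply, and band the product back into a rating.  The band thresholds below are tuned by hand to reproduce the table, which is precisely the subjectivity problem:

```python
# Hypothetical scoring model (my construction, not the paper's):
# qualitative ratings become numbers, "risk" is their product, and
# hand-picked thresholds band the product back into a rating.
LEVELS = {"Low": 1, "Medium": 2, "High": 3}

def relative_risk(vulnerability, impact, probability):
    """Relative risk as the product of three qualitative factors."""
    score = LEVELS[vulnerability] * LEVELS[impact] * LEVELS[probability]
    if score <= 4:
        return "Low/Medium"
    if score <= 12:
        return "Medium"
    return "Medium/High" if score < 27 else "High"

scenarios = {
    "Single virtual server":   ("Low", "High", "Low"),
    "Basic services":          ("Low", "High", "Medium"),
    "Production applications": ("Medium", "High", "High"),
    "Complete virtualization": ("High", "High", "High"),
}

for name, factors in scenarios.items():
    print(f"{name}: {relative_risk(*factors)}")
```

Nudge any threshold and the whole "Overall Risk" column changes, which is why a generic summary table can’t honestly describe risk for every asset.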

I’m trying to follow this and then get smacked about by this statement, which explains why people just continue to meander along applying the same security strategies toward virtualized servers as they do in conventional environments:

This conclusion might appear to be pessimistic at first glance. However, note that we are comparing various stages of deployment of virtual servers. A large deployment of physical servers will suffer from many of the same challenges that the “Complete Virtualization” environment suffers from.

Furthermore, it’s unclear to me how to factor in compensating controls into this rendering given what follows:

What is new here is that there are fewer solutions for providing virtual security than there are for providing physical security with firewalls and intrusion prevention appliances in the network. On the other hand, the cost of implementing virtualized security can be significantly lower than the cost of dedicated hardware appliances, just like the cost of managing a virtual server is lower than a physical server.

The security solutions available today are limited by how much integration exists with the virtualization platforms.  We’ve yet to see the VMMs/hypervisors opened up to allow true low-level integration and topology-sensitive security interaction with flow classification, provisioning and disposition.

Almost all supposed "virtualization-ready" security solutions today are nothing more than virtual appliance versions of existing solutions or simply the same host-based solutions which run in the VM and manage not to cock it up.  Folding your management piece into something like VMware’s VirtualCenter doesn’t count.

In general, I simply disagree that the costs of implementing virtualized security (today) can be significantly lower than the cost of dedicated hardware appliances — not if you’re expecting the same levels of security you get in the conventional, non-virtualized world.

The reasons (as I give in my VirtSec presentations): loss of visibility, constraints of the virtual networking configurations, coverage, load on the hosts, and licensing.  All really important.

Cutting to the Chase

I’m left waiting for the punchline, much like I was with Burton’s "Immutable Laws of Virtualization," and I think the reason why is that despite these formulae, the somewhat shallow definition of risk still seems to come down to nothing more than reasonably-informed speculation or subjective perception:

So, in the above risk analysis, one must also consider that the benefits in virtualization far outweigh the risks.

The question is not so much whether companies should proceed with virtualization – the market is already answering that resoundingly in the affirmative. The question is how to do that while minimizing the risk inherent in such a strategy.

These few sentences seem almost to obviate the need for risk analysis at all and suggest that for most, security is still an afterthought.  High risk or not, the show must go on?

So given the fact that virtualization is happening at breakneck pace, we have few good security solutions available, we speak of risk "relatively," and that operationally the entire role and duty of "security" within virtualized environments is now shifting, how do we end up with this next statement?

In the long run, virtualized security solutions will not only help mitigate the risk of broadly deployed infrastructure virtualization, but will also provide new and innovative approaches to information security that is in itself virtual. The dynamic, flexible and portable nature of virtual servers is already leading to a new generation of dynamic, flexible and portable security solutions.

I like the awareness Andreas tries to bring in this paper, but I fear that I am not left with any new information or tools for assessing risk (let alone quantifying it) in a virtual environment. 

So what do I do?!  I still have no answer to the main points of this paper: "With all the hype surrounding server virtualization come the inevitable security concerns: are virtual servers less secure? Are we introducing higher risk into the data center?"

Well?  Are they?  Am I?


OMG, Availability Trumps Security! Oh, the Horror!

February 1st, 2008 3 comments

Michael Farnum’s making me shake my head today in confusion based upon a post wherein he’s shocked that some businesses may favor availability over (ahem) "security." 

Classically we’ve come to know and love (a)vailability as a component of security — part of the holy triumvirate paired with (c)onfidentiality and (i)ntegrity — but somehow it’s now unthinkable that one of these concerns could matter more to a business than the others.

If one measures business impact against an asset, are you telling me, Mike, that all three are always equal?  Of course not…

Depending upon what’s important to maintaining operations as a going concern, or what the business decides is most critical, being available even under degraded service levels may be more important than preserving or enforcing confidentiality and integrity.  To some, it may not be.

The reality is that this isn’t an issue of absolutes.  The measured output of the investments in C, I and A isn’t binary — you’re not either 0% or 100% secure.  There are shades of gray.  Decisions are often made such that one of the elements of C, I and A is deemed more relevant or more important.
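As an illustration of those shades of gray, here’s a hedged sketch in which the asset, the weights and the numbers are all invented: treat business impact as a weighted sum over C, I and A, and whichever element matters most to that asset simply carries the larger weight.

```python
def business_impact(weights, degradation):
    """Weighted business impact: per-asset C/I/A weights times the
    observed degradation (0.0 = unaffected, 1.0 = total loss)."""
    return sum(weights[k] * degradation[k] for k in ("C", "I", "A"))

# Hypothetical public trading front-end: availability weighted heaviest.
trading_weights = {"C": 0.2, "I": 0.3, "A": 0.5}

total_outage = {"C": 0.0, "I": 0.0, "A": 1.0}   # down, but data intact
full_leak    = {"C": 1.0, "I": 0.0, "A": 0.0}   # up, but fully disclosed

print(business_impact(trading_weights, total_outage))  # 0.5
print(business_impact(trading_weights, full_leak))     # 0.2
```

For this made-up asset an outage hurts more than a leak; flip the weights for a medical-records store and the ordering flips too.  That’s the whole point: the triad’s elements are weighted per asset, not equal by decree.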

Businesses often decide to manage risk by trading off one leg of the stool for another.  You may very well end up with a wobbly seat, but there’s a difference between what we see in textbooks and the realities in the field.

Deal with it.  Sometimes businesses make calculated bets that straddle the fine line between acceptable loss and readiness and decide to invest in certain things versus others.  Banks do this all the time.  Their goal is to be right more often than they are wrong.  They manage their risk.  They generally do it well.  Depending upon the element in question, sometimes A wins.  Sometimes it doesn’t.

Here’s a test.  Go turn off your Internet firewall and tell everyone you’re perfectly secure now.  Will everyone high-five you for a job well done? 

Firewall’s down.  Business stops.  Not for "security’s sake."  Pushed the wrong button…

Compensating controls can help offset impacts to C and I, but if an asset or service is not A(vailable), what good is it?  Again, this depends on the type of asset/service, and YMMV.  Sometimes C or I win.

Thanks to the glut of security band-aids we’ve thrown at tackling "security" problems these days, availability has become — quite literally — a function of security.  As the trend moves from managing "security" toward managing "risk," we’ll see more of this "heresy" (read: common sense) appear as mainstream thinking.

Since we can’t seem to express (for the most part) how things like firewalls translate to a healthier bottom line, better productivity or efficiency, it’s no wonder businesses are starting to look to actionable risk management strategies that focus on operational business impact instead.

Measuring availability (at the macro level or transactionally) is easy, and IT knows how to do it: either something is available or it isn’t.  How do you measure confidentiality or integrity as a repeatable metric?
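A minimal sketch of why availability is the easy metric (the figures are illustrative only): it collapses to a single ratio of time-up to time-elapsed, and there is no analogous counter you can poll for C or I.

```python
def availability(period_seconds, downtime_seconds):
    """Availability as the fraction of a period the service was usable."""
    return (period_seconds - downtime_seconds) / period_seconds

# A 30-day month with 43 minutes of downtime is roughly "three nines."
month = 30 * 24 * 3600
print(f"{availability(month, 43 * 60):.5f}")  # 0.99900
```

Try writing the equivalent one-liner for "how confidential were we this month" and the asymmetry becomes obvious.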

In my comment to Michael (and Kurt Wismer) I note:

It’s funny how allergic you and Wismer are toward the notion that managing risk may mean that “security” (namely C and I) isn’t always the priority.  Basic risk assessment process shows us that in many cases “availability” trumps "security." 

This can’t be a surprise to either of you.

Depending upon your BCP/DR/Incident Response capabilities, the notion of a breakdown in C or I can be overcome by resilience that also has the derivative effect of preserving A.

Risk Management != Security. 

However, good Security helps to reinforce and enforce those things which lend themselves toward making better decisions on how to manage risk.

What’s so hard to understand about that?

Sounds perfectly reasonable to me.

Security’s in the eye of the beholder.  Stop sticking your thumb in yours 😉

Speaking of which Twitter’s down. Damn!  Unavailability strikes again!


Ginko Financial Collapse Ultimately Yields Real Virtual Risk (Huh?)

January 15th, 2008 5 comments

I’m feeling old lately.  First it was my visceral reaction to the paranormal super-poking goings-on on Facebook and now it’s this news regarding Linden Lab’s Second Life that has my head spinning. 

It seems that the intersection between the virtual and physical worlds is continuing to inch ever closer.

In fact, it’s hitting people where it really counts, their (virtual) wallets. 

We first saw something like this bubble up with in-world gambling issues and now Linden announced in their blog today that any virtual "in-world banks" must be registered with real-world financial/banking regulatory agencies:

As of January 22, 2008, it will be prohibited to offer interest or any direct return on an investment (whether in L$ or other currency) from any object, such as an ATM, located in Second Life, without proof of an applicable government registration statement or financial institution charter. We’re implementing this policy after reviewing Resident complaints, banking activities, and the law, and we’re doing it to protect our Residents and the integrity of our economy.

Why?  It seems there’s more bad juju brewin’.  A virtual bank shuts down and defaults.  What’s next?  A virtual sub-prime loan scandal?

Since the collapse of Ginko Financial in August 2007, Linden Lab has received complaints about several in-world “banks” defaulting on their promises. These banks often promise unusually high rates of L$ return, reaching 20, 40, or even 60 percent annualized.

Usually, we don’t step in the middle of Resident-to-Resident conduct – letting Residents decide how to act, live, or play in Second Life.

But these “banks” have brought unique and substantial risks to Second Life, and we feel it’s our duty to step in. Offering unsustainably high interest rates, they are in most cases doomed to collapse – leaving upset “depositors” with nothing to show for their investments. As these activities grow, they become more likely to lead to destabilization of the virtual economy. At least as important, the legal and regulatory framework of these non-chartered, unregistered banks is unclear, i.e., what their duties are when they offer “interest” or “investments.”

There is no workable alternative. The so-called banks are not operated, overseen or insured by Linden Lab, nor can we predict which will fail or when. And Linden Lab isn’t, and can’t start acting as, a banking regulator.

Some may argue that Residents who deposit L$ with these “banks” must know they’re assuming a big risk – the high interest rates promised aren’t guaranteed, and the banks aren’t overseen by Linden Lab or anyone else. That may be true. But for all of the other reasons we’ve set out above, we can’t let this activity continue.

Thus, as we did in the past with gambling, as of January 22, 2008 we will begin removing any virtual ATMs or other objects that facilitate the operation or facilitation of in-world “banking,” i.e., the offering of interest or a rate of return on L$ invested or deposited. We ask that between now and then, those who operate these “banks” settle up on any promises they have made to other Residents and, of course, honor valid withdrawals. After that date, we may sanction those who continue to offer these services with suspension, termination of accounts, and loss of land.

Wow.  Loss of land!  I thought overdraft fees were harsh!?

Ed Felten from Freedom to Tinker summed it up nicely:

This was inevitable, given the ever-growing connections between the virtual economy of Second Life and the real-world economy. In-world Linden Dollars are exchangeable for real-world dollars, so financial crime in Second Life can make you rich in the real world. Linden doesn’t have the processes in place to license “banks” or investigate problems. Nor does it have the enforcement muscle to put bad guys in jail.

Expect this trend to continue. As virtual world “games” are played for higher and higher stakes, the regulatory power of national governments will look more and more necessary.

So far I’ve stayed away from Second Life; I’ve got enough to manage in my First one.  Perhaps it’s time to take a peek and see what all the fuss is about?


2008 Security Predictions — They’re Like Elbows…

December 3rd, 2007 6 comments

Yup.  Security predictions are like elbows.  Most everyone’s got at least two, they’re usually ignored unless rubbed the wrong way but when used appropriately, can be devastating in a cage match…

So, in the spirit of, well, keeping up with the Joneses, I happily present you with Hoff’s 2008 Information (in)Security Predictions.  Most of them feature attacks/attack vectors.  A couple are ooh-aah trends.  Most of them are sadly predictable.  I’ve tried to be more specific than "cybercrime will increase."

I’m really loath to do these, but being a futurist, the only comfort I can take is that nobody can tell me I’m wrong today 😉

…and in the words of Carnac the Magnificent, "May the winds of the Sahara blow a desert scorpion up your turban…"

  1. Nasty Virtualization Hypervisor Compromise
    As the Hypervisor gets thinner, more of the guts will need to be exposed via API or shed to management and functionality-extending toolsets, expanding the attack surface with new vulnerabilities.  To wit, a Hypervisor-compromising malware will make its first in-the-wild appearance to not only produce an exploit, but obfuscate itself thanks to the magic of virtualization in the underlying chipsets.  Hang on to yer britches, the security vendor product marketing SpecOps Generals are going to scramble the fighters with a shock and awe campaign of epic "I told you so" & "AV isn’t dead, it’s just virtualized" proportions…Security "strategery" at its finest.

  2. Major Privacy Breach of a Social Networking Site
    With the broadening reach of application extensibility and Web2.0 functionality, we’ll see a major privacy breach via social network sites such as MySpace, LinkedIn or Facebook via the usual suspects (CSRF, XSS, etc.) and via host-based Malware that 0wns unsuspecting Millennials and utilizes the interconnectivity offered to turn these services into a "social botnet" platform with a wrath the likes of which only the ungodly lovechild of Storm, Melissa, and Slammer could bring…

  3. Integrity Hack of a Major SaaS Vendor
    Expect a serious bit of sliminess with real financial impact to occur from a SaaS vendor’s offering.  With professional cybercrime on the rise, the criminals will go not only where the money is, but also after the data that describes where that money is.  Since much of the security of the SaaS model counts on the integrity and not just the availability of the hosted service, a targeted attack which holds hostage the (non-portable) data and threatens its integrity could have devastating effects on the companies who rely on it.  SalesForce, anyone?
  4. Targeted eBanking Compromise with substantial financial losses
    Get ready for a nasty eBanking focused compromise that starts to unravel the consumer confidence in this convenient utility; not directly because of identity abuse (note I didn’t say identity theft) but because of the business model impact it will bring to the banks.   These types of direct attacks (beyond phishing) will start to push the limits of acceptable loss for the financial institutions and their insurers and will start to move the accountability/responsibility more heavily down to the eBanker.  A tiered service level will surface with greater functionality/higher transaction limits being offered with a trade-off of higher security/less convenience.  Same goes for credit/debit cards…priceless!
  5. A Major state-sponsored espionage and cyberAttack w/disruption of U.S. government function
    We saw some of the more noisy examples of low-level crack attacks via our Chinese friends recently, but given the proliferation of botnets, the inexcusably poor levels of security in government systems and network security, we’ll see a targeted attack against something significant.  It’ll be big.  It’ll be public.  It’ll bring new legislation…Isn’t there some little election happening soon?  This brings us to…
  6. Be-Afraid-A of a SCADA compromise…the lunatics are running the asylum!
    Remember that leaked DHS "turn your generator into a roman candle" video that circulated a couple of months ago?  Get ready to see the real thing on prime time news at 11.  We’ve got decades of legacy controls just waiting for the wrong guy to flip the right switch.  We just saw an "insider" of a major water utility do naughty things, imagine if someone really motivated popped some goofy pills and started playing Tetris with the power grid…imagine what all those little SCADA doodads are hooked to…
  7. A Major Global Service/Logistics/Transportation/Shipping/Supply-Chain Company will be compromised via targeted attack
    A service we take for granted like UPS, FedEx, or DHL will have their core supply chain/logistics systems interrupted causing the fragile underbelly of our global economic interconnectedness to show itself, warts and all.  Prepare for huge chargebacks on next day delivery when all those mofo’s don’t get their self-propelled, remote-controlled flying UFO’s delivered from

  8. Mobile network attacks targeting mobile broadband
    So, you don’t use WiFi because it’s insecure, eh?  Instead, you fire up that Verizon EVDO card plugged into your laptop or tether to your mobile phone instead because it’s "secure."  Well, that’s going to be a problem next year.  Expect to see compromise of the RF you hold so dear as we all scramble to find that next piece of spectrum that has yet to be 0wn3d…Google’s 700Mhz spectrum, you say? Oh, wait…WiMax will save us all…
  9. My .txt file just 0wn3d me!  Is nothing sacred!?  Common file formats and protocols to cause continued unnatural acts
    PDF’s, Quicktime, .PPT, .DOC, .XLS.  If you can’t trust the sanctity of the file formats and protocols from Adobe, Apple and Microsoft, who can you trust!?  Expect to see more and more abuse of generic underlying software plumbing providing the conduit for exploit.  Vulnerabilities that aren’t fixed properly combined with a dependence on OS security functionality that’s only half baked is going to mean that the "Burros Gone Wild" video you’re watching on YouTube is going to make you itchy in more ways than one…

  10. Converged SensorNets
    In places like the UK, we’ve seen the massive deployment of CCTV monitoring of the populace.  In places like South Central L.A., we have ballistic geo-location and detection systems to track gunshots.  We’ve got GPS in phones.  In airports we have sniffers, RFID passport processing, biometrics and "Total Recall" nudie scanners.  The PoPo have license plate recognition.  Vegas has facial recognition systems.  Our borders have motion, heat and remote sensing pods.  Start knitting this all together and you have massive SensorNets — all networked — and able to track you with military precision.  Pair that with GoogleMaps/Streets and I’ll be able to tell what color underwear you had on at the checkout counter of your local Qwik-E-Mart when you bought that mocha slurpaccino last Tuesday…please don’t ask me how I know.

  11. Information Centric Security Phase One
    It should come as no surprise that focusing our efforts on the host and the network has led to the spectacular septic tank of security we have today.  We need to focus on content in context and set policies across platform and transport to dictate who, how, when, where, and why the creation, modification, consumption and destruction of data should occur.  In this first generation of DLP/CMF solutions (which are being integrated into the larger base of "Information"-centric "assurance" solutions), we’ve taken the first step along this journey.  What we’ll begin to see in 2008 is the information equivalent of the Mission Impossible self-destructing recording…only with a little more intelligence and less smoke.  Here come the DRM haters…
  12. The Attempted Coup to Return to Centralized Computing with the Paradox of Distributed Data
    Despite the fact that data is being distributed to the far reaches of the Universe, the wonders of economics combined with the utility of some well-timed technology is seeing IT & Security (encouraged by the bean counters) attempting to reel the genie back in the bottle and re-centralize the computing (desktop, server, application and storage) experience back into big boxes tucked safely away in some data center somewhere.  Funny thing is, with utility/grid computing and SaaS, the data center is but an abstraction, too.  Virtualization companies will become our dark overlords as they will control the very fabric of our digital lives…2008 is when we’ll really start to use the web as the platform for the delivery of all applications, served through streamed desktops on thinner and thinner clients.

So, that’s all I could come up with.  I don’t really have a formulaic empirical model like Stiennon; I just pour a Guinness and start complaining.

In more ways than one, I hope I’m terribly wrong on most of these.


[Edit: Please see my follow-on post titled "And Now Some Useful 2008 Information Survivability Predictions," which speaks to some interesting, less gloomy things I predict will happen in 2008]

Too Much Risk Management? Not Possible

October 30th, 2007 6 comments

I’ll give Rothman the props/ping for highlighting an interesting post from Sammy Migues at the Cigital "Justice League" blog.

Sammy’s post is titled "The Risk of Too Much Risk Management." 

Short of the title and what I feel is a wholly inappropriate use of the "meaning" of risk management as the hook for the story, the underlying message is sound: security for security’s sake is an obstructionist roadblock to business; the deployment of layer after layer of security controls as a knee-jerk reaction to threats and vulnerabilities is a bad thing.

I totally get that and I totally agree.  The problem I have with Sammy’s post is that he does the absolute worst thing possible: he improperly defines "risk management" and its meaning to the business, and suggests that a technology-centric application of rapid-fire, reflexive information security is the same as risk management.

It’s basically making excuses for people practicing "information security" and calling it "risk management."

They are NOT the same thing. 

By associating the two he’s burying the value of the message and marginalizing the impact and value that true risk management can have within an organization. 

To wit, let’s take a look at how he describes what risk management means:

Let’s put a stake in the ground on what risk management means. I’m not referring to how it’s defined so much as what it actually means to business. Risk management means there is a thought process that includes ensuring the right people with adequate skills are given useful information and actually decide whether to do this or that to more effectively achieve security goals. Something like, “The available data indicate that path A at price B mitigates problems C, D, and E, but causes problem F, while path Z at price Y, mitigates problems C, E, and X, but causes problem W. What’s your decision?”

I’m very puzzled by this description, because what’s stated above is not "…what [risk management] actually means to the business."  The first part, which describes ensuring that the right people are given access to the right data at the right time, is really the output of a well-oiled business resilience operation (information survivability/assurance) that factors risk assessment and business impact into the decision fabric.

However, the "business" doesn’t "…actually decide whether to do this or that to more effectively achieve security goals"; it decides whether it can achieve its business goals.  Security is the stuff that usually gets added on after the decision has been made.

Some people have
good gut instincts, shoot from the hip, and end up with decisions that
only occasionally burst into flames. For my risk appetite, that’s too
little risk management. Others wait for every possible scrap of data,
agonize over the possibilities, and end up with decisions that only
occasionally aren’t completely overcome by events. That’s too much risk management.

Again, neither of those cases describes "good" risk management.  The example paints a picture of "luck, decent guesswork and perhaps a SWAG at risk assessment" or "irrational and inflexible analysis paralysis due to the lack of a solid framework for risk assessment" respectively.

The impact of too little risk management is usually too few security
controls and, therefore, too much unpredicted expense in a variety of
areas: incident response, litigation, and recovery, to name a few.
These are often the result of public things that can have lasting
effects on brand. This is easy to understand.

The impact of too much risk management is usually too many security
controls and, therefore, too much predicted expense in a variety of
areas: hardware, software, tools, people, processes, and so on. These
are all internal things that can seriously impair agility, efficiency,
and overhead, and this is usually much harder to understand.

This isn’t a game of measures from the perspective of "too little" or "too much."  Risk management isn’t a scaled weighting of how many controls are appropriate by unit volume of measure.  Within this context, risk management describes investing EXACTLY enough — no more and no less — in the controls required to meet the operational, business and security requirements of the organization.

The next paragraph is actually the meat of the topic — albeit with the continued abuse of the term risk management.  Substitute "Information Security" for "Risk Management" and he describes the very set of problems I’ve been talking about in my "Information Security is Dead" posts:

Let me clarify that I’m being a little fast and loose with “too much
risk management” above. In my experience, the problem is almost never
too much “risk management,” it’s almost always too much security fabric
resulting from a fixation on or over-thinking of each and every
security issue, whether applicable or not, combined with a natural
tendency to equate activity with progress. As a consultant, I’ve heard
some form of the following dialog hundreds of times: “What are we doing
about the security problem I’ve heard about?” followed by a confident
“We have people choosing A as we speak.” More security controls,
especially generic plug-n-play things, does not automatically mean less
risk, but it sure is highly demonstrable activity (to managers, to
auditors, to examiners).

The last paragraph basically endorses the practices of most information security programs today inasmuch as it describes what most compliance-driven InfoSec managers already know…"good enough is good enough":

All in all, too few security controls is probably the greater of the
two evils. On the other hand, it’s probably the easiest to remedy. Even
if you do no risk management at all, if you have the money to purchase
and correctly install most of the major security technologies out
there, the shotgun approach will in fact reduce security risk. You’ll
never know if you’ve done enough or if you’ve overspent, but you’ll
have a story to tell to the masses. On the other hand, a thoughtful
security approach based on sound risk management will give you a story
to tell to savvy customers and increasingly well-educated auditors and examiners.

If the shotgun approach gives the appearance of "reducing risk," why do anything else?  Sammy certainly did not make the case for why evolving to managing risk is paramount, valuable, and necessary; worse yet, he leaves risk management ill-defined.

If you had limited resources, limited budget, limited time and limited enthusiasm, given the options above, which would you pick?  Exactly.

Risk management is hard work.  Risk management requires a focus, mission, and operational role change.  That sort of thing has to be endorsed by the organization as a whole.  It means that in many cases what you do today is not what you’d do if you transformed your role into managing risk.

Managing risk is a business function.  Your role ought to be advisory in nature.  It can even be operational  once the business decision has been made on how best and how much to invest in any controls needed to manage risk to an appropriate level.

Rothman summarized this well in his post: "The
point I want to make is that all risk management (and security for that
matter) need to be based on the NEEDS OF THE BUSINESS. If your business
is culturally risk-taking, entrepreneurial and nimble, then you are
probably going to be on the less side of the risk management continuum.
The converse also applies. Just remember to map your security strategy
to the characteristics of your business, not the other way around."

Today, Information Security has positioned itself as judge, jury and executioner while standing as a red-headed stepchild outside of the risk management process.  The problem is, it’s not really Information Security’s problem to "solve," but we nevertheless bear the weight of the crucifix we nail ourselves to.

Time to get off the cross…someone else needs the wood.


Update: Of course the moment I hit "Send" on this, my Google Reader alerted me that Alex Hutton had already responded in kind.  He, of course, does it better and more succinctly 😉

Categories: Risk Management Tags:

Running With Scissors…Security, Survivability, Management, Resilience…Whatever!

October 26th, 2007 4 comments

Pointy Things Can Hurt

Mom always told me not to run with scissors because she knew that ugly things might happen if I did.  I seem to have blocked this advice out of my psyche.  Running with scissors can be exhilarating.

My latest set of posts represent the equivalent of blogging with scissors, it seems. 

Sadly, sometimes one of the only ways to get people to
intelligently engage in contentious discourse on a critical element of our
profession is to bait them into a game of definitional semantics;
basically pushing buttons and debating nuance to finally arrive at the
destination of an "AHA! moment."

Either that, or I just suck at making a point and have to go through all the machinations to arrive at consensus.  I’m the first to admit that I oftentimes find myself misunderstood, but I’ve come to take this with a grain of salt and try to learn from my mistakes.

I don’t conspire to be tricky or otherwise employ cunning or guile to goad people with the goal of somehow making them look foolish, but rather have discussions that need to be had.  You’ll just have to take my word on that.  Or not.

You Say Potato, I say Po-ta-toe…

There are a number of you smart cookies who have been reading my posts on Information Survivability and have asked a set of very similar questions that are really insightful and provoke exactly the sort of discussion I had hoped for.

Interestingly, folks continue to argue definitional semantics without realizing that we’re mostly saying the same thing.  Most of you who are bristling aren’t really pushing back on the functional aspects of Information Security vs. Information Survivability.  Rather, it seems that you’ve become mired in the selection of words rather than the meme.

What do I mean?  Folks are spending far too much time debating which verb/noun to use to describe what we do and we’re talking past each other.  Granted, a lot of this is my fault for the way I choose to stage the debate and given this medium, it’s hard to sometimes re-focus the conversation because it becomes so fragmented.

Rich Mogull posted a great set of commentary on this titled "Information Security vs. Information Survivability: Retaking Our Vocabulary," wherein he eloquently rounds this out:

The problem is that we’ve lost control of our own vocabulary.
“Information security” as a term has come to define merely a fraction
of its intended scope.

Thus we have to use terms like security risk management and
information survivability to re-define ourselves, despite having a
completely suitable term available to us. It’s like the battle between
the words “hacker” and “cracker”. We’ve lost that fight with
“information security”, and thus need to use new language to advance
the discussion of our field.

When Chris, myself, and others talk about “information
survivability” or whatever other terms we’ll come up with, it’s not
because we’re trying to redefine our practice or industry, it’s because
we’re trying to bring security back to its core principles. Since we’ve
lost control of the vocabulary we should be using, we need to introduce
a new vocabulary just to get people thinking differently.

As usual, Rich follows up and tries to smooth this all out.  I’m really glad he did because the commentary that followed showed exactly the behavior I am referring to in two parts.  This was from a comment left on Rich’s post.  It’s not meant to single out the author but is awkwardly profound in its relevance:

[1] This is the crux of the biscuit. Thanks for saying this. I don’t
like the word “survivability” for the pessimistic connotations it has,
as you pointed out. I also think it’s a subset of information security,
not the other way around.

I can’t possibly fathom how one would suggest that Survivability, which encompasses risk management, resilience and classical CIA assurance with an overarching shift in business-integrated perspective, can be thought of as a subset of a narrow, technically-focused practice like that which Information Security has become.  There’s not much I can say more than I already have on this topic.

[2] Now, if you wanted to go up a level to *information management*,
where you were concerned not only with getting the data to where it
needs to be at the right time, but also with getting *enough* data, and
the *right* data, then I would buy that as a superset of information
security. Information management also includes the practices of
retaining the right information for as long as it’s needed and no
longer, and reducing duplication of information. It includes deciding
which information to release and which to keep private. It includes a
whole lot more than just security.

Um, that’s what Information Survivability *is.*  That’s not what Information Security has become, however, as the author clearly points out.  This is like some sort of weird passive-aggressive recursion.

So what this really means is that people aren’t really disagreeing that the functional definition of Information Security is outmoded; they just don’t like the term survivability.  Fine! Call it what you will: Information Resilience, Information Management, Information Assurance. Just don’t call it Information Security, and here’s why (from Lance’s comment here):

It seems like the focus here is less on technology, and more on process
and risk management. How is this approach different from ISO 27000, or any other
ISMS? You use the word survivability instead of business process,
however other than that it seems more similar than different.

That’s right.  It’s not a technology-only focus.  Survivability (or whatever you’d like to call it) focuses on integrating risk assessment and risk management concepts with business blueprinting/business process modeling and applying just the right amount of Information Security where, when and how needed to align to the risk tolerance (I dare not say "appetite") of the business.

In a "scientific" taste test, 7/10 information security programs are focused on compliance and managing threats and vulnerabilities.  They don’t holistically integrate and manage risk.  They deploy a bunch of boxes using a cost-model that absolutely sucks donkey…  See Gunnar’s posts on the matter.

There are more similarities than differences in many cases, but the reality is that most people in our profession today completely abuse the term "risk."  Not intentionally, mind you, but for the same reason Information Security has been bastardized and spread liberally like some great mitigation marmalade across the toasty canvas of our profession. 

The short of this is that you can playfully toy with putting lipstick on a pig (which I did for argument’s sake) and call what you do anything you like.

However, if what you do, regardless of what you call it and no matter how much "difference" you seem to think you make, isn’t in alignment with the strategic initiatives of the company, your function becomes irrelevant over time.  Or at least a giant speedbump.

Time for Jiu Jitsu practice!  With Scissors!


Take5 (Episode #6) – Five Questions for Andy Jaquith, Yankee Group Analyst and Metrician…

September 13th, 2007 3 comments

This sixth episode of Take5 interviews Andy Jaquith, Yankee Group analyst and champion of all things Metric…I must tell you that Andy’s answers to my interview questions were amazing to read and I really appreciate the thought and effort he put into this.

First a little background on the victim:

Andrew Jaquith is a program manager in Yankee Group’s Enabling
Technologies Enterprise group with expertise in portable digital
identity and web application security. As Yankee Group’s lead security
analyst, Jaquith drives the company’s security research agenda and
researches disruptive technologies that enable tomorrow’s Anywhere
Enterprise™ to secure its information assets.


Jaquith has 15 years of IT experience. Before joining Yankee Group, he
co-founded and served as program director at @stake, Inc., a security
consulting pioneer, which Symantec Corporation acquired in 2004. Before
@stake, Jaquith held project manager and business analyst positions at
Cambridge Technology Partners and FedEx Corporation.

His application security and metrics research has been featured in publications such as CIO, CSO and IEEE Security & Privacy.
In addition, Jaquith is the co-developer of a popular open source wiki
software package. He is also the author of the recently released
Pearson Addison-Wesley book, Security Metrics: Replacing Fear, Uncertainty and Doubt, which has been praised by reviewers as both "sparkling and witty" and "one of the best written security books ever."

Jaquith holds a B.A. degree in economics and political science from Yale University.


1) Metrics.  Why is this such a contentious topic?  Isn't the basic axiom of "you can't 
manage what you don't measure" just common sense?  A discussion on metrics evokes very
passionate discussion amongst both proponents and opponents alike.  Why are we still
debating the utility of measurement?

The arguments over metrics are overstated, but to the extent they are 
contentious, it is because "metrics" means different things to 
different people. For some people, who take a risk-centric view of 
security, metrics are about estimating risk based on a model. I'd put 
Pete Lindstrom, Russell Cameron Thomas and Alex Hutton in this camp. 
For those with an IT operations background, metrics are what you get 
when you measure ongoing activities. Rich Bejtlich and I are 
probably closer to this view of the world. And there is a third camp 
that feels metrics should be all about financial measures, which 
brings us into the whole "return on security investment" topic. A lot 
of the ALE crowd thinks this is what metrics ought to be about. Just 
about every security certification course (SANS, CISSP) talks about 
ALE, for reasons I cannot fathom.

Once you understand that a person's point of view of "metrics" is 
going to be different depending on the camp they are in -- risk, 
operations or financial -- you can see why there might be some 
controversy between these three camps. There's also a fourth group 
that takes a look at the fracas and says, "I know why measuring 
things matter, but I don't believe a word any of you are talking 
about." That's Mike Rothman's view, I suspect.

Personally, I have always taken the view that metrics should measure 
things as they are (the second perspective), not as you imagine, 
model or expect them to be. That's another way of saying that I am an 
empiricist. If you collect data on things and swirl them around in a
blender, interesting things will stratify out.

Putting it another way: I am a measurer rather than a modeler. I 
don't claim to know what the most important security metrics are. But 
I do know that people measure certain things, and that those things 
give them insight into their firms' performance. To that end, I've 
got about 100 metrics documented in my book; these are largely based 
on what people tell me they measure. Dan Geer likes to say, "it almost 
doesn't matter what you measure, but get started and measure 
something." The point of my book, largely, is to give some ideas 
about what those somethings might be, and to suggest techniques for 
analyzing the data once you have them.

Metrics aren't really that contentious. Just about everyone in the community is pretty friendly and courteous. It's 
a "big tent." Most of the differences are with respect to 
inclination. But, outside of the "metrics community" it really comes 
down to a basic question of belief: you either believe that security 
can be measured or you don't.

The way you phrased your question, by the way, implies that you 
probably align a little more closely with my operational/empiricist 
view of metrics. But I'd expect that, Chris -- you've been a CSO, and 
in charge of operational stuff before. :)

2) You've got a storied background from FedEx to @Stake to the Yankee Group. 
I see your experience trending from the operational to the analytical.  How much of your
operational experience lends  itself to the practical collection and presentation of
metrics -- specifically security metrics?  Does your broad experience help you in
choosing what to measure and how?

That's a keen insight, and one I haven't thought of before. You've 
caused me to get all introspective all of a sudden. Let me see if I 
can unravel the winding path that's gotten me to where I am today.

My early career was spent as an IT analyst at Roadway, a serious, 
operationally-focused trucking firm. You know those large trailers 
you see on the highways with "ROADWAY" on them? That's the company I 
was with. They had a reputation as being like the Marines. Now, I 
wasn't involved in the actual day-to-day operations side of the 
business, but when you work in IT for a company like that you get to 
know the business side. As part of my training I had to do "ride 
alongs," morning deliveries and customer visits. Later, I moved to 
the contract logistics side of the house, where I helped plan IT 
systems for transportation brokerage services and contract warehouses 
the company ran. The logistics division was the part of Roadway that 
was actually acquired by FedEx.

I think warehouses are just fascinating. They are one hell of a lot 
more IT intensive than you might think. I don't just mean bar code 
readers, forklifts and inventory control systems; I mean also the 
decision support systems that produce metrics used for analysis. For 
example, warehouses measure an overall metric for efficiency called 
"inventory turns" that describes how fast your stock moves through 
the warehouse. If you put something in on January 1 and move it out 
on December 31 of the same year, that part has a "velocity" of 1 turn 
per year. Because warehouses are real estate like any other, you can 
spread out your fixed costs by increasing the number of turns through 
the warehouse.
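The "turns" metric Andy describes reduces to a single ratio. A minimal sketch, with hypothetical dollar figures chosen to land on the manufacturing average he cites:

```python
def inventory_turns(annual_throughput_cost: float, avg_inventory_cost: float) -> float:
    """Turns per year: value of stock moved through the warehouse annually,
    divided by the average value of stock held at any one time."""
    return annual_throughput_cost / avg_inventory_cost

# Hypothetical figures: $12M of goods shipped in a year against $1M held on average.
print(inventory_turns(12_000_000, 1_000_000))  # 12.0
```

Because the warehouse's fixed costs are constant, each additional turn spreads them thinner, which is why the metric maps so directly to profitability.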

For example, one of the reasons why Dell -- a former customer of mine 
at Roadway-- was successful was that they figured out how to make 
their suppliers hold their inventory for them and deliver it to final 
assembly on a "just-in-time" (JIT) basis, instead of keeping lots of 
inventory on hand themselves. That enabled them to increase the 
number of turns through their warehouses to something like 40 per 
year, when the average for manufacturing was like 12. That efficiency 
gain translated directly to profitability. (Digression: Apple, by the 
way, has lately been doing about 50 turns a year through their 
warehouses. Any wonder why they make as much money on PCs as HP, who 
has 6 times more market share? It's not *just* because they choose 
their markets so that they don't get suckered into the low margin 
part of the business; it's also because their supply chain operations 
are phenomenally efficient.)

Another thing I think is fascinating about warehouses is how you 
account for inventory. Most operators divide their inventory into 
"A", "B" and "C" goods based on how fast they turn through the 
warehouse. The "A" parts might circulate 10-50x faster than the C 
parts. So, a direct consequence is that when you lay out a warehouse 
you do it so that you can pick and ship your A parts fastest. The 
faster you do that, the more efficient your labor force and the less 
it costs you to run things. Neat, huh?
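The A/B/C breakdown amounts to bucketing parts by velocity. A toy sketch, where the part names and threshold values are hypothetical (real operators tune the cutoffs to their own stock mix):

```python
def classify_abc(parts: dict[str, float]) -> dict[str, str]:
    """Label each part 'A', 'B' or 'C' by its velocity (turns per year).
    Thresholds here are illustrative, not industry standards."""
    labels = {}
    for part, velocity in parts.items():
        if velocity >= 20:
            labels[part] = "A"   # fast movers: lay out for quickest pick-and-ship
        elif velocity >= 5:
            labels[part] = "B"
        else:
            labels[part] = "C"   # slow movers: park them in the back
    return labels

stock = {"widget": 40.0, "bracket": 8.0, "gasket": 1.5}
print(classify_abc(stock))  # {'widget': 'A', 'bracket': 'B', 'gasket': 'C'}
```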

Now, I mention these things not strictly speaking to show you what a 
smartypants I am about supply chain operations. The real point is to 
show how serious operational decisions are made based on deep 
analytics. Everything I just mentioned can be modeled and measured: 
where you site the warehouses themselves, how you design the 
warehouse to maximize your ability to pick and ship the highest-
velocity items, and what your key indicators are. There's a virtuous 
feedback loop in place that helps operators understand where they are 
spending their time and money, and that in turn drives new 
innovations that increase efficiencies.

In supply chain, the analytics are the key to absolutely everything. 
And they are critical in an industry where costs matter. In that 
regard, manufacturing shares a lot with investment banking: leading 
firms are willing to invest a phenomenal amount of time and money to 
shave off a few basis points on a Treasury bill derivative. But this 
is done with experiential, analytical data used for decision support 
that is matched with a model. You'd never be able to justify 
redesigning a warehouse or hiring all of Yale's graduating math 
majors unless you could quantify and measure the impact of those 
investments on the processes themselves. Experiential data feeds, and 
improves, the model.

In contrast, when you look at security you see nothing like this. 
Fear and uncertainty rule, and almost every spending decision is made 
on the basis of intuition rather than facts. Acquisition costs 
matter, but operational costs don't. Can you imagine what would 
happen if you plucked a seasoned supply-chain operations manager out 
of a warehouse, plonked him down in a chair, and asked him to watch 
his security counterpart in action? Bear in mind this is a world 
where his CSO friend is told that the answer to everything is "buy 
our software" or "install our appliance." The warehouse guy would 
look at him like he had two heads. Because in his world, you don't 
spray dollars all over the place until you have a detailed, grounded, 
empirical view of what your processes are all about. You simply don't 
have the budget to do it any other way.

But in security, the operations side of things is so immature, and 
the gullibility of CIOs and CEOs so high, that they are willing to 
write checks without knowing whether the things they are buying 
actually work. And by "work," I don't mean "can be shown to stop a 
certain number of viruses at the border"; I mean "can be shown to 
decrease time required to respond to a security incident" or "has 
increased the company's ability to share information without 
incurring extra costs," or "has cut the pro-rata share of our IT 
operations spent on rebuilding desktops."

Putting myself in the warehouse manager's shoes again: for security, 
I'd like to know why nobody talks about activity-based costing. Or 
about process metrics -- that is, cycle times for everyday security 
activities -- in a serious way. Or benchmarking -- does my firm have 
twice as many security defects in our web applications as yours? Are 
we in the first, second, third or fourth quartiles?

If the large security companies were serious, we'd have a firmer grip 
on the activity, impact and cost side of the ledger. For example, why 
won't AV companies disclose how much malware is actually circulating 
within their customer bases, despite their promises of "total 
protection"? When the WMF zero-day exploit came out, how come none of 
the security companies knew how many of their customers were 
infected? And how much the cleanup efforts cost? Either nobody knew, 
or nobody wanted to tell. I think it's the former. If I were in the 
shoes of my Roadway operational friend, I'd be pissed off about the 
complete lack of feedback between spending, activities, impact and cost.

If this sounds like a very odd take on security, it is. My mentor and 
former boss, Dan Geer, likes to say that there are a lot of people 
who don't have classical security training, but who bring "hybrid 
vigor" to the field. I identify with that. With my metrics research, 
I just want to see if we can bring serious analytic rigor to a field 
that has resisted it for so long. And I mean that in an operational 
way, not a risk-equation way.

So, that's an exceptionally long-winded way of saying "yes" to your 
question -- I've trended from operational to analytical. I'm not sure 
that my past experience has necessarily helped me pick particular 
security metrics per se, but it has definitely biased me towards 
those that are operational rather than risk-based.

3) You've recently published your book.  I think it was a great appetite whetter but
I was left -- as were I think many of us who are members of the "lazy guild"-- wanting
more.   Do you plan to follow-up with a metrics toolkit of sorts?  You know, a templated
guide -- Metrics for Dummies?

You know, that's a great point. The fabulous blogger Layer 8, who 
gave my book an otherwise stunning review that I am very grateful for 
("I tucked myself into bed, hoping to sleep—but I could not sleep 
until I had read Security Metrics cover to cover. It was That 
Good."), also had that same reservation. Her comment was, "that final 
chapter just stopped short and dumped me off the end of the book, 
without so much as a fare-thee-well Final Overall Summary. It just 
stopped, and without another word, put on its clothes and went home". 
Comparing my prose to a one-night stand is pretty funny.
Ironically, as the deadline for the book drew near, I had this great 
idea that I'd put in a little cheat-sheet in the back, either as an 
appendix or as part of the endpapers. But as with many things, I 
simply ran out of time. I did what Microsoft did to get Vista out the 
door -- I had to cut features and ship the fargin' bastid.

One of the great things about writing a book is that people write you 
letters when they like or loathe something they read. Just about all 
of my feedback has been very positive, and I have received a number 
of very thoughtful comments that shed light on what readers' 
companies are doing with metrics. I hope to use the feedback I've 
gotten to help me put together a "cheat sheet" that will boil the 
metrics I discuss in the book into something easier to digest.

4) You've written about the impending death of traditional Anti-Virus technology and its
evolution to combat the greater threats from adaptive Malware.  What role do you think
virtualization technology that provides a sandboxed browsing environment will have in
this space, specifically on client-side security?

It's pretty obvious that we need to do something to shore up the 
shortcomings of signature-based anti-malware software. I regularly 
check out a few of the anti-virus benchmarking services, like the 
OITC site that aggregates the Virustotal scans. And I talk to a 
number of anti-malware companies who tell me things they are seeing. 
It's pretty clear that current approaches are running out of gas. All 
you have to do is look at the numbers: unique malware samples are 
doubling every year, and detection rates for previously-unseen 
malware range from the single digits to the 80% mark. For an industry 
that has long said they offered "total protection," anything less 
than 100% is a black eye.
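The detection-rate figure is just the fraction of samples a scanner flags. A quick sketch over hypothetical scan outcomes (the sample data below is invented for illustration, not drawn from any real benchmark):

```python
def detection_rate(results: list[bool]) -> float:
    """Fraction of malware samples a scanner flagged (True = detected)."""
    return sum(results) / len(results)

# Hypothetical outcomes for 10 previously-unseen samples run through one engine.
scans = [True, True, False, True, False, True, True, False, True, True]
print(f"{detection_rate(scans):.0%}")  # 70% -- well short of "total protection"
```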

Virtualization is one of several alternative approaches that vendors 
are using to help boost detection rates. The idea with virtualization 
is to run a piece of suspected malware in a virtual machine to see 
what it does. If, after the fact, you determine that it did something 
naughty, you can block it from running in the real environment. It 
sounds like a good approach to me, and is best used in combination 
with other technologies.

Now, I'm not positive how pervasive this is going to be on the 
desktop. Existing products are already pretty resource-hungry. 
Virtualization would add to the burden. You've probably heard people 
joke: "thank God computers are dual-core these days, because we need 
one of 'em to run the security software on." But I do think that 
virtualized environments used for malware detection will become a 
fixture in gateways and appliances.

Other emergent ideas that complement virtualization are behavior 
blocking and herd intelligence. Herd intelligence -- a huge malware 
blacklist-in-the-sky -- is a natural services play, and I believe all 
successful anti-malware companies will have to embrace something like 
this to survive.

5) We've see the emergence of some fairly important back-office critical applications
make their way to the Web (CRM, ERP, Financials) and now GoogleApps is staking a hold
for the SMB.  How do you see the SaaS model affecting the management of security -- 
and ultimately risk --  over time?

Software as a service for security is already here. We've already 
seen fairly pervasive managed firewall service offerings -- the 
carriers and companies like IBM Global Services have been offering 
them for years. Firewalls still matter, but they are nowhere near as 
important to the overall defense posture as before. That's partly 
because companies need to put a lot of holes in the firewall. But 
it's also because some ports, like HTTP/HTTPS, are overloaded with 
lots of other things: web services, instant messaging, VPN tunnels 
and the like. It's a bit like the old college prank of filling a 
paper bag with shaving cream, sliding it under a shut door, then jumping 
on it and spraying the payload all over the room's occupants. HTTP is 
today's paper bag.

In the services realm, for more exciting action, look at what 
MessageLabs and Postini have done with the message hygiene space. At 
Yankee we've been telling our customers that there's no reason why an 
enterprise should bother to build bespoke gateway anti-spam and anti-
malware infrastructures any more. That's not just because we like 
MessageLabs or Postini. It's also because the managed services have a 
wider view of traffic than a single enterprise will ever have, and 
benefit from economies of scale on the research side, not to mention 
the actual operations.

Managed services have another hidden benefit; you can also change 
services pretty easily if you're unhappy. It puts the service 
provider's incentives in the right place. Qualys, for example, 
understands this point very well; they know that customers will leave 
them in an instant if they stop innovating. And, of course, whenever 
you accumulate large amounts of performance data across your customer 
base, you can benchmark things. (A subject near and dear to my heart, 
as you know.)

With regards to the question about risk, I think managed services do 
change the risk posture a bit. On the one hand, the act of 
outsourcing an activity to an external party moves a portion of the 
operational risk to that party. This is the "transfer" option of the 
classic "ignore, mitigate, transfer" set of choices that risk 
management presents. Managed services also reduce political risk in a 
"cover your ass" sense, too, because if something goes wrong you can 
always point out that, for instance, lots of other people use the 
same vendor you use, which puts you all in the same risk category. 
This is, if you will, the "generally accepted practice" defense.

That said, particular managed services with large customer bases 
could accrue more risk by virtue of the fact that they are bigger 
targets for mischief. Do I think, for example, that spammers target 
some of their evasive techniques towards Postini and MessageLabs? I 
am sure they do. But I would still feel safer outsourcing to them 
rather than maintaining my own custom infrastructure.

Overall, I feel that managed services will have a "smoothing" or 
dampening effect on the risk postures of enterprises taken in 
aggregate, in the sense that they will decrease the volatility in 
risk relative to the broader set of enterprises (the "beta", if you 
will). Ideally, this should also mean a decrease in the *absolute* 
amount of risk. Putting this another way: if you're a rifle shooter, 
it's always better to see your bullets clustered closely together, 
even if they don't hit near the bull's eye, rather than seeing them 
near the center, but dispersed. Managed services, it seems to me, can 
help enterprises converge their overall levels of security -- put the 
bullets a little closer together instead of all over the place. 
Regulation, in cases where it is prescriptive, tends to do that too.
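The rifle analogy maps cleanly onto basic statistics: a tight cluster off-center is low spread with some bias, while shots scattered around the bull's eye are unbiased but high spread. A minimal sketch (shot coordinates are invented purely for illustration):

```python
import math

def centroid(shots):
    """Average position of the group of shots."""
    xs, ys = zip(*shots)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def bias(shots, bullseye=(0.0, 0.0)):
    """How far the group's center sits from the bull's eye."""
    return math.dist(centroid(shots), bullseye)

def spread(shots):
    """RMS distance of each shot from the group's own center."""
    c = centroid(shots)
    return math.sqrt(sum(math.dist(s, c) ** 2 for s in shots) / len(shots))

# Shooter A: clustered tightly, but consistently off-center.
clustered = [(2.0, 2.0), (2.1, 1.9), (1.9, 2.1), (2.0, 1.8)]
# Shooter B: centered on average, but widely dispersed.
dispersed = [(3.0, 0.0), (-3.0, 0.0), (0.0, 3.0), (0.0, -3.0)]

# Low spread plus bias is fixable (shift the aim, i.e. adjust the
# shared service); high spread is much harder to correct per-shooter.
print(f"clustered: bias={bias(clustered):.2f} spread={spread(clustered):.2f}")
print(f"dispersed: bias={bias(dispersed):.2f} spread={spread(dispersed):.2f}")
```

The design point mirrors the argument above: a managed service with a known, uniform shortcoming can be corrected once for everyone, while wildly varying bespoke postures cannot.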

Bonus Question:
6) If you had one magical dashboard that could display 5 critical security metrics
to the Board/Executive Management, regardless of industry, what would those elements be?

I would use the Balanced Scorecard, a creation of Harvard professors 
Kaplan and Norton. It divides executive management metrics into four 
perspectives: financial, internal operations, customer, and learning 
and growth. The idea is to create a dashboard that incorporates 6-8 
metrics into each perspective. The Balanced Scorecard is well known 
to the corner office, and is something that I think every security 
person should learn about. With a little work, I believe quite 
strongly that security metrics can be made to fit into this framework.

Now, you might ask yourself, I've spent all of this work organizing 
my IT security policies along the lines of ISO 17799/2700x, or COBIT, 
or ITIL. So why can't I put together a dashboard that organizes the 
measurements in those terms? What's wrong with the frameworks I've 
been using? Nothing, really, if you are a security person. But if you 
really want a "magic dashboard" that crosses over to the business 
units, I think basing scorecards on security frameworks is a bad 
idea. That's not because the frameworks are bad (in fact most of them 
are quite good), but because they aren't aligned with the business. 
I'd rather use a taxonomy the rest of the executive team can 
understand. Rather than make them understand a security or IT 
framework, I'd rather try to meet them halfway and frame things in 
terms of the way they think.

So, for example: for Financial metrics, I'd measure how much my IT 
security infrastructure is costing, straight up, and on an activity-
based perspective. I'd want to know how much it costs to secure each 
revenue-generating transaction; quick-and-dirty risk scores for 
revenue-generating and revenue/cost-accounting systems; DDOS downtime 
costs. For the Customer perspective I'd want to know the percentage 
and number of customers who have access to internal systems; cycle 
times for onboarding/offboarding customer accounts; "toxicity rates" 
of customer data I manage; the number of privacy issues we've had; 
the percentage of customers who have consulted with the security 
team; number and kind of remediation costs of audit items that are 
customer-related; number and kind of regulatory audits completed per 
period, etc. The Internal Process perspective has some of the really easy 
things to measure, and is all about security ops: patching 
efficiency, coverage and control metrics, and the like. For Learning 
and Growth, it would be about threat/planning horizon metrics, 
security team consultations, employee training effectiveness and 
latency, and other issues that measure whether we're getting 
employees to exhibit the right behaviors and acquire the right skills.
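One way to see how such metrics hang together is to lay them out as a data structure keyed by the four perspectives. A minimal sketch; the metric names and values below are illustrative placeholders drawn loosely from the examples above, not a definitive catalog:

```python
# Security metrics organized by Balanced Scorecard perspective.
# All names and numbers are hypothetical, for illustration only.
SCORECARD = {
    "Financial": {
        "security_cost_per_revenue_txn_usd": 0.042,
        "ddos_downtime_cost_usd": 18_500,
    },
    "Customer": {
        "pct_customers_with_internal_access": 7.5,
        "privacy_issues_this_quarter": 2,
    },
    "Internal Process": {
        "median_patch_latency_days": 9,
        "control_coverage_pct": 88.0,
    },
    "Learning and Growth": {
        "pct_employees_trained": 64.0,
        "security_team_consultations": 31,
    },
}

def render(scorecard):
    """Flatten the scorecard into dashboard-style lines."""
    lines = []
    for perspective, metrics in scorecard.items():
        for name, value in metrics.items():
            lines.append(f"{perspective} | {name} = {value}")
    return lines

for line in render(SCORECARD):
    print(line)
```

The point of the structure is the taxonomy, not the numbers: each metric lives under a perspective an executive already recognizes, rather than under an ISO or COBIT control heading.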

That's meant to be an illustrative list rather than definitive, and I 
confess it is rather dense. At the risk of getting all Schneier on 
you, I'd refer your readers to the book for more details. Readers can 
pick and choose from the "catalog" and find metrics that work for 
their organizations.

Overall, I do think that we need to think a whole lot less about 
things like ISO and a whole lot more about things like the Balanced 
Scorecard. We need to stop erecting temples to Securityness that 
executives don't give a damn about and won't be persuaded to enter. 
And when we focus just on dollars, ALE and "security ROI", we make 
things too simple. We obscure the richness of the data that we can 
gather, empirically, from the systems we already own. Ironically, the 
Balanced Scorecard itself was created to encourage executives to move 
beyond purely financial measures. Fifteen years later, you'd think we 
security practitioners would have taken the hint.
Categories: Risk Management, Security Metrics, Take5 Tags:

So a Bayesian and an Objectivist Walk Into a Bar…

September 2nd, 2007 1 comment

As Shrdlu and CWalsh point out, there’s a fight brewin’ and it’s a good one.

A pugilistic pummeling of perplexing probability proportions!

Cage match.  Two men enter, one man leave.

Bejtlich and Hutton.



Categories: Risk Management Tags:

How To Begin Discussing the Virtualization Threat/Vulnerability Landscape: Proactive Approaches to Managing Emerging Risk?

August 29th, 2007 2 comments

It’s no doubt apparent that trafficking in the ideas and concepts surrounding both virtualized security and securing virtualized environments really honks my horn.  I’ve been writing about it a lot lately, and it’s starting to garner some very interesting amounts of attention from lots of different sources.

One of those sources sent me an email after reading some of my ramblings and framed a discussion point that I was writing about anyway, so I thought it a perfect opportunity to discuss it.

Specifically, when a disruptive emerging technology bursts onto the scene with many of the threats and vulnerabilities associated with said technology being mostly theoretical, conceptual and virtual in nature, how does one have a very real conversation with management regarding what should be done proactively to (and please forgive me both ISS and ISS-naysayers) "get ahead of the threat."

That is, how do you start talking about the need to assess and make actionable, if possible, the things necessary to secure such an impactful technology?  One of my readers, who asked not to be identified, summed this up quite nicely:

"I really enjoy your blog posts about virtualization security, since it’s a challenge I’m dealing with right now. The real problem I’m finding is explaining the security issues to people who don’t get security in general, and double-don’t-get-it in the context of virtualization.

The two points I really try to get across are:

1. the fact that there aren’t any common, well-known attacks specific to virtualization in the wild (guest hopping etc) is not a good thing, it’s a BAD thing; they’re coming!

2. a virtual server is like a little mini-network where essentially none of our existing security measures apply (I guess I’m mostly thinking of IDS here)

Am I hitting the right points, do you think? Where else can I go with this, since the "threat" is pretty much "I don’t know but something someday"?"

My response is straightforward.   I think that he’s dead-on inasmuch as explaining virtualization and the risks associated with it is difficult, mostly because the "threats" are today mostly theoretical and the surface area for attack — or the vulnerabilities for that matter — just aren’t perceived as being there.

So the normal thing to do is just suggest that what we have will be applied to solve the "same" problems in the virtualized context and we’ll deal with the virtualization-specific threats and vulnerabilities when they become more "real." <sigh>

We can shout to the treetops about what is coming, but people don’t generally invest in security proactively because in many cases we’ve seemed to accept that the war is lost and we’re just looking to win a battle every once in a while.  <sigh^2>

It doesn’t help that we’re trying to build business cases for investing in securing virtualized environments when the threats and vulnerabilities are so esoteric; by omission, executives are basically told that security is something they do not need to approach any differently in their virtualization deployments.

So I only have a few suggestions for now:

  1. I’d use my preso. to help lubricate the conversation a little; it sums most of this up nicely
  2. Don’t make the mistake of suggesting the sky is falling — it may be, but that’s not going to get you mindshare or share of wallet
  3. In this nascent market, we have to communicate the potential exposure and elevated risk in the language of and terms associated with business; why should you spend time and money on this versus, say, patch management?
  4. You better have an answer to this one: "Virtualization is going to save us money, now you want to spend more to secure it!?"
  5. Abstract the discussion related to investment in terms of pushing vendors in your portfolio (by spending time/money) on making sure they will have something to offer you when you need it and start assessing your business and IT plans to see how they align to policy today
  6. Start to build what will be the best practices for what your virtualized environments ought to look like with what you know now, BEFORE you start having to put them into production next week
  7. Talk with your auditors — make them your allies.  Ask them how they expect to audit and assess your virtual environments (be careful what you ask for, however)
  8. Use what you have; you’re going to have to for a while anyway.
  9. Start testing now; demonstrate empirically how existing compensating controls will/will not satisfy your security policies in a virtualized construct
  10. Keep calm.  By the time we get around to cleaning this mess up, we’ll have another pile right around the corner.  This is a continuum, remember?  Same crap, different decade. At least we have twitter and facebook now.
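Suggestion 9 lends itself to a simple tracking structure: a matrix of policies versus compensating controls, recording whether each control that held up in the physical environment still holds up when re-tested against the virtualized one. A hypothetical sketch (policy and control names are invented for illustration):

```python
# Each entry: (policy, compensating control, passed in the physical
# environment, passed when re-tested in the virtualized environment).
# All names and outcomes are hypothetical, for illustration only.
results = [
    ("segment-prod-from-dev",   "VLAN ACLs",            True,  False),
    ("inspect-server-traffic",  "network IDS tap",      True,  False),
    ("patch-within-30-days",    "agent-based patching", True,  True),
    ("restrict-console-access", "bastion host",         True,  True),
]

def gaps(results):
    """Controls that satisfied policy physically but fail virtualized."""
    return [(p, c) for p, c, phys, virt in results if phys and not virt]

for policy, control in gaps(results):
    print(f"GAP: '{control}' no longer satisfies '{policy}' when virtualized")
```

A matrix like this turns "I don’t know but something someday" into a concrete, empirical list of gaps you can put in front of management and auditors.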

In closing, and without sounding like a clucking chicken, check out this summary of a recent vulnerability disclosure on how to run arbitrary code on a VMware GuestOS thanks to a "feature" in VMware’s scripting automation API. Dennis Fisher over at SearchSecurity did a nice write-up about Mark Burnett’s recent discovery:

The folks at VMware have been in the news quite a bit of late,
thanks to their big IPO and their discreet acquisition of Determina a
couple of weeks ago. Now, the company’s core virtualization product is
getting some attention, but not the kind company executives will like.
Mark Burnett, an independent security consultant and author, recently
posted a long description of a vulnerability in VMware’s scripting automation API that he found.

The vulnerability comes down to this: The API allows any script on
the host machine to execute code and take other actions on any virtual
machine that’s running on the PC, without requiring any credentials on
the guest operating system. This presents a number of problems, as
Burnett points out:

The problem is that a malicious script running within
the context of a regular user on my desktop can run administrator-level
scripts on any guest I am currently logged in to. Using Ctrl+Alt+Del to
lock the desktop of those machines does not prevent VIX from executing
commands on the guest. Even if I log out of each guest machine the
malware can just queue the command to run the next time I log in at the
console of the guest OS.

However, this is in fact a feature that the VMware developers
intentionally included.
VMware told Burnett that, in essence, anyone
who can access the virtual machine APIs on a machine can access the
virtual hard disks anyway and would be able to attack the PC from that
direction. But it seems to me that Burnett is on to something here.
Sure, there are plenty of other methods for attacking virtual machines,
but that doesn’t mean this should be ignored.

Burnett also has found a way to mitigate the problem by adding a switch to the VMX config file.

This will be the first of many, of that you can be sure.  Without flapping your feathers, however, you can use something like this to start having discussions in a calm, rational manner…before you have to go reconfigure or patch your global virtualized server farms, that is…



Categories: Risk Management, Virtualization Tags: