Archive

Archive for the ‘Risk Assessment’ Category

Incomplete Thought: The Psychology Of Red Teaming Failure – Do Not Pass Go…

August 6th, 2013 14 comments

team fortress red team (Photo credit: gtrwndr87)

I could probably just ask this of some of my friends — many of whom are the best in the business when it comes to Red Teaming/Pen Testing — but I thought it would be an interesting little dialog here, in the open:

When a Red Team is engaged by an entity to perform a legally-authorized pentest (physical or electronic) with an explicit “get out of jail free card,” does that change the team’s tactics, strategy and risk appetite relative to how they would operate were they not to have that parachute?

Specifically, does the team dial up or dial down the aggressiveness of the approach and execution KNOWING that they won’t be prosecuted, go to jail, etc.?

Blackhats and criminals operating outside this envelope don’t have the luxury of counting on a gilded escape should failure occur and thus the risk/reward mapping *might* be quite different.

To that point, I wonder how big the gap is between an authorized Red Team action and the actions of those who have everything to lose?  What say ye?

/Hoff


The Classical DMZ Design Pattern: How To Kill Security In the Cloud

July 7th, 2010 6 comments

Every day I get asked to discuss how Cloud Computing impacts security architecture and what enterprise security teams should do when considering “Cloud.”

These discussions generally lend themselves to a bifurcated set of perspectives depending upon whether we’re discussing Public or Private Cloud Computing.

This is unfortunate.

From a security perspective, focusing the discussion primarily on the deployment model instead of thinking holistically about the “how, why, where, and who” of Cloud often means that we’re tethered to outdated methodologies simply because that’s where our comfort zones are.

When we’re discussing Public Cloud, the security teams are starting to understand that the choice of compensating controls and how they deploy and manage them require operational, economic and architectural changes.  They are also coming to terms with the changes to application architectures as they relate to distributed computing and SOA-like implementations.  It’s uncomfortable and it’s a slow slog forward (for lots of good reasons), but good questions are asked when considering the security, privacy and compliance impacts of Public Cloud and what can/should be done about them and how things need to change.

When discussing Private Cloud, however, even when a “clean slate design” is proposed, the same teams tend to try to fall back to what they know and preserve the classical n-tier application architecture separated by physical or virtual compensating controls — the classical split-subnet DMZ or perimeterized model of “inside” vs “outside.” They can do this given the direct operational control exposed by highly-virtualized infrastructure.  Sometimes they’re also forced into this given compliance and audit requirements. The issue here is that this discussion centers around molding cloud into the shape of the existing enterprise models and design patterns.
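To make that pattern concrete, here’s a minimal sketch (in Python, with entirely hypothetical tier names, subnets and ports) of the classical split-subnet, n-tier model those teams keep reaching for: each tier is a subnet, and “security” is expressed purely as which subnet may talk to which.

# Hypothetical illustration of the classical n-tier / split-subnet DMZ model.
# Tier names, subnets and ports are invented for illustration only.
CLASSIC_DMZ = {
    "outside": {"subnet": "0.0.0.0/0"},      # the untrusted "outside"
    "dmz":     {"subnet": "10.1.0.0/24"},    # web tier
    "app":     {"subnet": "10.2.0.0/24"},    # application tier
    "data":    {"subnet": "10.3.0.0/24"},    # database tier
}

# Allowed flows: (source tier, destination tier, destination port).
ALLOWED_FLOWS = [
    ("outside", "dmz",  443),    # users -> web servers
    ("dmz",     "app",  8443),   # web -> app servers
    ("app",     "data", 5432),   # app -> database
]

def is_permitted(src_tier: str, dst_tier: str, port: int) -> bool:
    """Default-deny: a flow is allowed only if it is explicitly listed."""
    assert src_tier in CLASSIC_DMZ and dst_tier in CLASSIC_DMZ
    return (src_tier, dst_tier, port) in ALLOWED_FLOWS

if __name__ == "__main__":
    print(is_permitted("outside", "dmz", 443))     # True  -- the front door
    print(is_permitted("outside", "data", 5432))   # False -- the perimeter "protects" the data tier

Notice what the sketch says nothing about: workload identity, mobility, or the information itself.  That’s precisely what falls apart when the subnet stops being the unit of control.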

This is an issue; trying to simultaneously secure these diametrically-opposed architectural implementations yields cost inefficiencies, security disparity and policy violations, introduces operational risk, and generally means that the ball doesn’t get moved forward in protecting the things that matter most.

Public Cloud Computing is a good thing for the security machine; it forces us to (again) come face-to-face with the ugliness of the problems of securing the things that matter most — our information. Private Cloud Computing — when improperly viewed from the perspective of simply preserving the status quo — can often cause stagnation and introduce roadblocks.  We’ve got to move beyond this.

Public Cloud speaks to (and delivers on) the need for agility, flexibility, mobility and efficiency. These are things that traditional enterprise security is often not well aligned with.  Trying to fit “Cloud” into neat and tidy DMZ “boxes” doesn’t work.  Cloud requires revisiting our choices for security. We should take advantage of it, not try and squash it.

/Hoff


Patching the (Hypervisor) Platform: How Do You Manage Risk?

April 12th, 2010 7 comments

Hi. Me again.

In 2008 I wrote a blog titled “Patching the Cloud” which I followed up with material examples in 2009 in another titled “Redux: Patching the Cloud.”

These blogs focused mainly on virtualization-powered IaaS/PaaS offerings and whilst they targeted “Cloud Computing,” they applied equally to the heavily virtualized enterprise.  To this point I wrote another in 2008 titled “On Patch Tuesdays For Virtualization Platforms.”

The operational impacts of managing change control, vulnerability management and threat mitigation have always intrigued me, especially at scale.

I was reminded this morning of the importance of the question posed above as VMware released a series of security advisories detailing ten vulnerabilities across many products, some of which are remotely exploitable. While security vulnerabilities in hypervisors are not new, it’s unclear to me how many heavily-virtualized enterprises or Cloud providers actually deal with what it means to patch this critical layer of infrastructure.

Once virtualized, we expect/assume that VMs and the guest OSes within them should operate with functional equivalence when compared to non-virtualized instances. We have, however, seen that this is not the case. It’s rare, but it happens that OSes and applications, once virtualized, suffer from issues that cause faults in the underlying virtualization platform itself.

So here’s the $64,000 question – feel free to answer anonymously:

While virtualization is meant to effectively isolate the hardware from the resources atop it, the VMM/Hypervisor itself maintains a delicate position arbitrating this abstraction.  When the VMM/Hypervisor needs patching, how do you regression test the impact across all your VM images (across test/dev, production, etc.)?  More importantly, how are you assessing/measuring compound risk across shared/multi-tenant environments with respect to patching and its impact?
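I don’t have a canned answer here, but as a thought experiment, here’s a minimal sketch (Python, with made-up hosts, tenants and criticality weights) of what even a crude “compound risk” calculation for a hypervisor patch window might look like: score each shared host by how many tenants sit on it and how critical their workloads are, so you at least know where a bad patch hurts the most.

# Hypothetical sketch: rank shared hosts by "compound" exposure before a VMM/hypervisor patch.
# Host names, tenants and criticality weights (1 = low, 5 = critical) are invented for illustration.
HOSTS = {
    "host-01": [("tenant-a", "prod-db",  5), ("tenant-b", "prod-web", 4)],
    "host-02": [("tenant-c", "dev",      1), ("tenant-a", "test",     2)],
    "host-03": [("tenant-d", "prod-erp", 5), ("tenant-e", "prod-crm", 5), ("tenant-f", "batch", 2)],
}

def compound_patch_risk(vms, patch_severity: int) -> int:
    """Crude compound score: patch severity times the summed criticality of every tenant
    workload sharing the host. More tenants and more critical workloads = bigger blast radius."""
    return patch_severity * sum(criticality for _tenant, _workload, criticality in vms)

def patch_order(hosts, patch_severity: int):
    """Return hosts ordered by descending compound risk, i.e., where regression testing and
    change-control scrutiny should be concentrated."""
    scored = {host: compound_patch_risk(vms, patch_severity) for host, vms in hosts.items()}
    return sorted(scored.items(), key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    for host, score in patch_order(HOSTS, patch_severity=3):
        print(f"{host}: compound patch risk = {score}")

It doesn’t answer the regression-testing half of the question (that still means re-validating every image class against the patched VMM), but it at least makes the multi-tenancy part of the exposure visible.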

/Hoff

P.S. It occurs to me, after writing last night’s blog on ‘high assurance (read: TPM-enabled)’ virtualization/cloud environments with respect to change control, that the reference images for trusted launch environments would be impacted by patches like this. How are we going to scale this from a management perspective?


ENISA launches Cloud Computing Security Risk Assessment Document

November 20th, 2009 1 comment

ENISA (the European Network and Information Security Agency) today launched their 124-page report on Cloud Computing Security Risk Assessment.

At first glance it’s an excellent read and will be a fantastic accompaniment to the CSA’s guidance.  I plan to dig into it more over the weekend.  I really appreciate the risk assessment approach, which allows folks to prioritize their efforts on understanding the relevant high-level issues associated with Cloud.

Very well done.  I look forward to seeing how CSA and ENISA can further work together on upcoming projects!  I think the European perspective will help bring some balance and alternative views on Cloud in regards to legal and compliance issues specifically.

You can find the document here.

ENISA, supported by a group of subject matter experts comprising representatives from Industry, Academia and Governmental Organizations, has conducted, in the context of the Emerging and Future Risk Framework project, a risk assessment of the cloud computing business model and technologies. The result is an in-depth and independent analysis that outlines some of the information security benefits and key security risks of cloud computing. The report also provides a set of practical recommendations.


Observations on “Securing Microsoft’s Cloud Infrastructure”

June 1st, 2009 1 comment

I was reading a blog post from Charlie McNerney, Microsoft’s GM, Business & Risk Management, Global Foundation Services on “Securing Microsoft’s Cloud Infrastructure.”

Intrigued, I read the white paper to first get a better understanding of the context for his blog post and to also grok what he meant by “Microsoft’s Cloud Infrastructure.”  Was he referring to Azure?

The answer, per the whitepaper, is that Microsoft — along with everyone else in the industry — now classifies all of its online Internet-based services as “Cloud:”

Since the launch of MSN® in 1994, Microsoft has been building and running online services. The GFS division manages the cloud infrastructure and platform for Microsoft online services, including ensuring availability for hundreds of millions of customers around the world 24 hours a day, every day. More than 200 of the company’s online services and Web portals are hosted on this cloud infrastructure, including such familiar consumer-oriented services as Windows Live™ Hotmail® and Live Search, and business-oriented services such as Microsoft Dynamics® CRM Online and Microsoft Business Productivity Online Standard Suite from Microsoft Online Services. 

Before I get to the part I found interesting, I think that the whitepaper (below) does a good job of providing a 30,000-foot view of how Microsoft applies lessons learned from its operational experience and the SDL to its “Cloud” offerings.  It’s something designed to market the fact that Microsoft wants us to know they take security seriously.  Okay.

Here’s what I found interesting in Charlie’s blog post; it appears in the last two sentences (boldfaced):

The white paper we’re releasing today describes how our coordinated and strategic application of people, processes, technologies, and experience with consumer and enterprise security has resulted in continuous improvements to the security practices and policies of the Microsoft cloud infrastructure.  The Online Services Security and Compliance (OSSC) team within the Global Foundation Services division that supports Microsoft’s infrastructure for online services builds on the same security principles and processes the company has developed through years of experience managing security risks in traditional software development and operating environments. Independent, third-party validation of OSSC’s approach includes Microsoft’s cloud infrastructure achieving both SAS 70 Type I and Type II attestations and ISO/IEC 27001:2005 certification. We are proud to be one of the first major online service providers to achieve ISO 27001 certification for our infrastructure. We have also gone beyond the ISO standard, which includes some 150 security controls. We have developed 291 security controls to date to account for the unique challenges of the cloud infrastructure and what it takes to mitigate some of the risks involved.

I think it’s admirable that Microsoft is sharing its methodologies and ISMS objectives and it’s a good thing that they have adopted ISO standards and secured SAS70 as a baseline.  

However, I would be interested in understanding what having 291 security controls means to a security posture versus, say, 178.  It sounds a little like Twitter follower counts.

I can’t really explain why those last two sentences stuck in my craw, but they did.

I’d love to know more about what Microsoft considers those “unique challenges of the cloud infrastructure” as well as the risk assessment framework(s) used to manage/mitigate them — I’m assuming they’ve made great STRIDEs in doing so. 😉

/Hoff

Cloud Catastrophes (Cloudtastrophes?) Caused by Clueless Caretakers?

March 22nd, 2009 4 comments
You'll ask "How?" every time...

Enter the dawn of the Cloudtastrophe…

I read a story today penned by Maureen O’Gara titled “Carbonite Loses Cloud-Based Data, Sues Storage Vendor.”

I thought this was going to be another story regarding a data breach (loss) of customer data by a Cloud Computing service vendor.

What I found, however, was another hyperbolic illustration of how vendors’ messaging of the Cloud has set expectations for service and reliability that are out of alignment with reality: take a lack of sound enterprise architecture, proper contingency planning, solid engineering and common sense, and add the economic lubricant of the Cloud.

Stir in a little journalistic sensationalism, and you’ve got CloudWow!

Carbonite, the online backup vendor, says it lost data belonging to over 7,500 customers in a number of separate incidents in a suit filed in Massachusetts charging Promise Technology Inc with supplying it with $3 million worth of defective storage, according to a story in Saturday’s Boston Globe.

The catastrophe is the latest in a series of cloud failures.

The widgetry was supposed to detect disk failures and transfer the data to a working drive. It allegedly didn’t.

The story says Promise couldn’t fix the errors and “Carbonite’s senior engineers, senior management and senior operations personnel…spent enormous amounts of time dealing with the problems.”

Carbonite claims the data losses caused “serious damage” to its business and reputation for reliability. It’s demanding unspecified damages. Promise told the Globe there was “no merit to the allegations.”

Carbonite, which sells largely to consumers and small businesses and competes with EMC’s Mozy, tells its customers: “never worry about losing your files again.”

The abstraction of infrastructure and democratization of applications and data that Cloud Computing services can bring does not mean that all services are created equal.  It does not make our services or information more secure (or less, for that matter).  Just because a vendor brands itself as a “Cloud” provider does not mean that “their” infrastructure is any more implicitly reliable, stable or resilient than traditional infrastructure or that proper enterprise architecture as it relates to people, process and technology is in place.  How the infrastructure is built and maintained is just as important as ever.

If you take away the notion of Carbonite being a “Cloud” vendor, would this story read any differently?

We’ve seen a few failures recently of Cloud-based services, most of them sensationally lumped into the Cloud pile: Google, Microsoft, and even Amazon; most of the stories about them relate the impending doom of the Cloud…

Want another example of how Cloud branding, the Web 2.0 experience and blind faith make for another FUDtastic “catastrophe in the cloud?”  How about the well-known service Ma.gnolia?

There was a meltdown at bookmark sharing website Ma.gnolia Friday morning. The service lost both its primary store of user data, as well as its backup. The site has been taken offline while the team tries to reconstruct its databases, though some users may never see their stored bookmarks again.

The failure appears to be catastrophic. The company can’t say to what extent it will be able to restore any of its users’ data. It also says the data failure was so extensive, repairing the loss will take “days, not hours.”

So we find that a one-man shop was offering a service that people liked and it died a horrible death.  Because it was branded as a Cloud offering, it “seemed” bigger than it was.  This is where perception definitely was not reality, and now we’re left with a fluffy bad taste in our mouths.

Again, what this illustrates is that just because a service is “Cloud-based” does not imply it’s any more reliable or resilient than one that is not. It’s just as important that, as enterprises look to move to the Cloud, they perform as much due diligence on their providers as makes sense. We’ll see a weeding out of the ankle-biters in Cloud Computing.

Nobody ever gets fired for buying IBM…

What we’ll also see is that even though we’re not supposed to care what our Cloud providers’ infrastructure is powered by and how, we absolutely will in the long term and the vendors know it.

This is where people start to freak about how standards and consolidation will kill innovation in the space but it’s also where the realities of running a business come crashing down on early adopters.

Large enterprises will move to providers who can demonstrate that their services are solid, by way of co-branding with reputable infrastructure providers coupled with compliance to “standards.”

The big players like IBM see this as an opportunity and as early as last year introduced a Cloud certification program:

IBM to Validate Resiliency of Cloud Computing Infrastructures

Will Consult With Businesses of All Sizes to Ensure Resiliency, Availability, Security; Drive Adoption of New Technology

ARMONK, NY – 24 Nov 2008: In a move that could spur the rise of the nascent computing model known as “cloud,” IBM (NYSE: IBM) today said it would introduce a program to validate the resiliency of any company delivering applications or services to clients in the cloud environment. As a result, customers can quickly and easily identify trustworthy providers that have passed a rigorous evaluation, enabling them to more quickly and confidently reap the business benefits of cloud services.

Cloud computing is a model for network-delivered services, in which the user sees only the service and does not view the implementation or infrastructure required for its delivery. The success to date of cloud services like storage, data protection and enterprise applications, has created a large influx of new providers. However, unpredictable performance and some high-profile downtime and recovery events with newer cloud services have created a challenge for customers evaluating the move to cloud.

IBM’s new “Resilient Cloud Validation” program will allow businesses who collaborate with IBM on a rigorous, consistent and proven program of benchmarking and design validation to use the IBM logo: “Resilient Cloud” when marketing their services.

Remember the “Cisco Powered Network” program?  How about a “Cisco Powered Cloud?”  See how GoGrid advertises that its load balancers are F5?

In the long term, much like the Capital One credit card commercials that challenge you to ask “What’s in your wallet?” of the company providing your credit card services, you can expect to start asking the same thing about your Cloud providers’ offerings.

/Hoff

 

Trust But Verify? That’s An Oxymoron…

February 23rd, 2009 4 comments

In response to my post regarding Cloud (SaaS, really) providers' security, Allen Baranov asked me the following excellent question in the comments:

Hoff,

What would make you trust "the Cloud"? Scrap that… stupid question…

What would make you trust SaaS providers?

To which I responded:

Generally, my CEO or CFO. 🙁  

I don't "trust" third party vendors with my data. I never will. I simply exercise the maximal amount of due diligence that I am afforded given prevailing time, money, resources and transparency and assess risk from there.

Even if the data is not critical/sensitive, I don't "trust" that it's not going to be mishandled. Not in today's world.  (Ed: How I deal with that mishandling is the secret sauce…)

I then got thinking about the line that Ronald Reagan is often credited with wherein he described managing relations with the former Soviet Union:

Trust but verify.

Security professionals use that phrase a lot. They shouldn't. It's oxymoronic.

The very definition of "trust" is:

trust |trəst|
noun
firm belief in the reliability, truth, ability, or strength of someone or something: relations have to be built on trust | they have been able to win the trust of the others.
• acceptance of the truth of a statement without evidence or investigation: I used only primary sources, taking nothing on trust.
• the state of being responsible for someone or something: a man in a position of trust.
• poetic/literary a person or duty for which one has responsibility: rulership is a trust from God.
• poetic/literary a hope or expectation: all the great trusts of womanhood.

See the second bullet above "….without evidence or investigation"?  I don't "trust" people over which I have no effective control. With third parties handling your data, you have no effective "control." You have the capability to audit, assess and recover, but control?  Nope.

Does that mean I think you should not put your information into the hands of a third party?  Of course not.  It's inevitable.  You already have. However, admitting defeat and working from there may make Jack a dull boy, but he's also not unprepared for when the bad stuff happens.  And it will.

I stand by my answer to Allen.

You?

/Hoff

The Challenge of Virtualization Security: Organizational and Operational, NOT Technical

March 25th, 2008 7 comments

Taking the bull by the horns…

I’ve spoken many times over the last year on the impact virtualization brings to the security posture of organizations.  While there are certainly technology issues that we must overcome, we don’t have solutions today that can effectively deliver us from evil. 

Anyone looking for the silver bullet is encouraged to instead invest in silver buckshot.  No shocker there.

There are certainly technology and solution providers looking to help solve these problems, but honestly, they are constrained by the availability of, and visibility into, the VMMs/hypervisors of the virtualization platforms themselves.

Obviously announcements like VMware’s VMsafe will help turn that corner, but VMsafe requires re-tooling of ISV software and new versions of the virtualization platforms.  It’s a year+ away and only addresses concerns for a single virtualization platform provider (VMware) and not others.

The real problem of security in a virtualized world is not technical, it is organizational and operational.

With the consolidation of applications, operating systems, storage, information, security and networking — all virtualized into a single platform rather than being discretely owned, managed and supported by (reasonably) operationally-mature teams — the biggest threat we face in virtualization is that we have lost not only visibility, but also the clearly-defined lines of demarcation garnered from the separation of duties we had in the non-virtualized world.

Many companies have segmented off splinter cells of "virtualization admins" from the server teams, and they are often solely responsible for the virtualization platforms, which includes the care, feeding, diapering and powdering of not only the operating systems and virtualization platforms, but the networking and security functionality also.

No offense to my brethren in the trenches, but this is simply a case of experience and expertise.  Server admins are not experts in network or security architectures and operations, just as the latter cannot hope to be experts in the former’s domain.

We’re in an arms race now where virtualization brings brilliant flexibility, agility and cost savings to the enterprise, but ultimately further fractures the tenuous relationships between the server, network and security teams.

Now that the first-pass consolidation pilots of virtualizing non-critical infrastructure assets have been held up as beacons of ROI in our datacenters, security and networking teams are exercising their veto powers as virtualization efforts creep toward critical production applications, databases and transactional systems.

Quite simply, expressing risk, security posture, compliance, troubleshooting, SLA measurement and dependencies within the construct of a virtualized world is much more difficult than in the discretely segregated physical world, and when taken to the mat on these issues, the virtual server admins simply cannot address them competently in the language of the security and risk teams.

This is going to make for some unneeded friction in what was supposed to be a frictionless effort.  If you thought the security teams were seen as speed bumps before, you’re not going to like what happens soon when they try to delay/halt a business-driven effort to reduce costs, speed time-to-market, increase availability and enable agility.

I’ll summarize my prior recommendations as to how to approach this conundrum in a follow-on post, but the time is now to get these teams together and craft the end-play strategies and desired end-states for enterprise architecture in a virtualized world before we end up right back where we started 15+ years ago…on the hamster wheel of pain!

/Hoff

Risky Business — The Next Audit Cycle: Bellwether Test for Critical Production Virtualized Infrastructure

March 23rd, 2008 3 comments

I believe it’s fair to suggest that thus far, the adoption of virtualized infrastructure has been driven largely by consolidation and cost reduction.

In most cases the initial targets for consolidation through virtualization have focused on development environments, internally-facing infrastructure and non-critical application stacks and services.

Up until six months ago, my research indicated that most larger companies were not yet at the point where either critical applications/databases or those that were externally-facing were candidates for virtualization. 

As the virtualization platforms mature, as the management and mobility functionality provides leveraged improvement over physical, non-virtualized counterparts, and as the capabilities to provide for resilient services emerge, there is mounting pressure to expand virtualization efforts to include these remaining services/functions.

With cost-reduction and availability improvements becoming more visible, companies are starting to tip-toe down the path of evaluating virtualizing everything else including these critical application stacks, databases and externally-facing clusters that have long depended on physical infrastructure enhancements to ensure availability and resiliency.

In these "legacy" environments, the HA capabilities are often provided by software-based clustering in the operating systems or applications, or via the network thanks to load balancers and the like.  Each of these solution sets is managed by a different team.  There’s a lot of complexity in making it all appear simple, secure and available.

This raises some very interesting questions that focus on assessing risk in these environments in which duties and responsibilities are largely segmented and well-defined versus their prospective virtualized counterparts where the opposite is true.

If companies begin to virtualize and consolidate the applications, storage, servers, networking, security and high-availability capabilities into the virtualization platforms, where does the buck stop in terms of troubleshooting or assurance?  How does one assess risk?  How do we demonstrate compliance and security when "all the eggs are in one basket?"

I don’t think it’s accurate to suggest that the lack of mature security solutions has stalled the adoption of virtualization across the board, but I do think that as companies evaluate virtualization candidacy, security has been a difficult-to-quantify speed bump that has been danced around.

We’ve basically been playing a waiting game.  The debate over virtualization and the inability to gain consensus in the increase/decrease of risk posture has left us at the point where we have taken the low-hanging fruit that is either non-critical or has resiliency built in, and simply consolidated it.  But now we’re at a crossroads as virtualization phase 2 has begun.

It’s time to put up or shut down…

Over the last year since my panel on virtualization security at RSA, I’ve been asking the same question in customer engagements and briefings:

How many of you have been audited by either internal or external governance organizations against critical virtualized infrastructure that is in a production role and/or externally facing?

A year ago, nobody raised their hands.  I wonder what it will look like this year?

If IT and Security professionals can’t agree on the relative "security" or risk increase/decrease that virtualization brings, what position do you think that leaves the auditors in?  They are basically going to measure relative compliance to guidelines prescribed by governance and regulatory requirements.  Taken quite literally, many production environments featuring virtualized production components would not pass an audit.  PCI/DSS comes to mind.

In virtualized environments we’ve lost visibility, we’ve lost separation of duties, and we’ve lost the inherent simplicity that spreading functions over discrete physical entities provides. Existing controls and processes get us only so far, and the technology crutches we used to be able to depend on are buckling when we add the V-word to the mix.

We’ve seen technology initiatives such as VMware’s VMsafe, still 9-12 months out, that will help gain back some purchase in some of these areas, but how does one address these issues with auditors today?

I’m looking forward to the answer to this question at RSA this year to evaluate how companies are dealing with GRC (governance, risk and compliance) audits in complex critical production environments.

/Hoff

Off The Cuff Review: Nemertes Research’s “Virtualization Risk Analysis”

February 12th, 2008 4 comments

I just finished reading a research paper from Andreas Antonopoulos from Nemertes titled "A risk analysis of large-scaled and dynamic virtual server environments."  You can find the piece here:

Executive Summary

As virtualization has gained acceptance in corporate data centers, security has gone from afterthought to serious concern. Much of the focus has been on the technologies of virtualization rather than the operational, organizational and economic context. This comprehensive risk analysis examines the areas of risk in deployments of virtualized infrastructures and provides recommendations.

I was immediately interested in two things:

  1. While I completely agree that, in regards to virtualization and security, the focus has been on the "…technologies of virtualization rather than the operational, organizational and economic context," I’m not convinced there is an overwhelming consensus that "…security has gone from afterthought to serious concern," mostly because we’re just now getting to see "large-scaled and dynamic virtual server environments."  It’s still painted on, not baked in.  At least that’s how people react at my talks.

  2. Virtualization is about so much more than just servers, and in order to truly paint a picture of analyzing risk within "large-scaled and dynamic virtual server environments," you have to recognize that much of the complexity and many of the issues associated specifically with security stem from the operational and organizational elements of virtualizing storage, networking, applications, policies and data, and from the wholesale shift in operationalizing security and who owns it within these environments.

I’ve excerpted the most relevant element of the issue Nemertes wanted to discuss:

With all the hype surrounding server virtualization come the inevitable security concerns: are virtual servers less secure? Are we introducing higher risk into the data center? For server virtualization to deliver benefits we have to examine the security risks. As with any new technology there is much uncertainty mixed in with promise. Part of the uncertainty arises because most companies do not have a good understanding of the real risks surrounding virtualization.

I’m easily confused…

While I feel the paper does a good job of describing the various stages of deployment and many of the "concerns" associated with server virtualization within these contexts, I’m left unconvinced that I’m any more prepared to assess and manage risk regarding server virtualization.  I’m concerned that the term "risk" is being spread about rather liberally simply because a bit of math is present.

The formulaic "Virtualization Risk Assessment" section purports to establish a quantitative basis for computing "relative risk" in the assessment summary.  However, since the variables introduced in the formulae are subjective and specific per asset, it’s odd that the summary table is then seemingly presented generically so as to describe all assets:

Scenario                                | Vulnerability | Impact | Probability of Attack | Overall Risk
Single virtual server (hypervisor risk) | Low           | High   | Low                   | Low/Medium
Basic services virtualized              | Low           | High   | Medium                | Medium
Production applications virtualized     | Medium        | High   | High                  | Medium/High
Complete virtualization                 | High          | High   | High                  | High
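Since the paper’s actual formulae aren’t reproduced here, take the following as my own back-of-the-napkin sketch (Python, using an assumed ordinal scale of Low=1, Medium=2, High=3 and a simple average, which is not Nemertes’ math) of how a table like this gets produced.  Under those assumptions it happens to reproduce the "Overall Risk" column above, which is sort of the point: the arithmetic does very little work on top of the subjective inputs.

# Assumed ordinal mapping and a simple average -- an illustration, not the paper's actual formula.
LEVELS = {"Low": 1, "Medium": 2, "High": 3}
LABELS = {1.0: "Low", 1.5: "Low/Medium", 2.0: "Medium", 2.5: "Medium/High", 3.0: "High"}

def relative_risk(vulnerability: str, impact: str, probability: str) -> str:
    """Average the three subjective scores and snap to the nearest half-step label."""
    score = (LEVELS[vulnerability] + LEVELS[impact] + LEVELS[probability]) / 3
    return LABELS[min(LABELS, key=lambda step: abs(step - score))]

SCENARIOS = [
    ("Single virtual server (hypervisor risk)", "Low",    "High", "Low"),
    ("Basic services virtualized",              "Low",    "High", "Medium"),
    ("Production applications virtualized",     "Medium", "High", "High"),
    ("Complete virtualization",                 "High",   "High", "High"),
]

if __name__ == "__main__":
    for scenario, v, i, p in SCENARIOS:
        print(f"{scenario}: {relative_risk(v, i, p)}")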

I’m trying to follow this and then get smacked about by this statement, which explains why people just continue to meander along applying the same security strategies toward virtualized servers as they do in conventional environments:

This conclusion might appear to be pessimistic at first glance. However, note that we are comparing various stages of deployment of virtual servers. A large deployment of physical servers will suffer from many of the same challenges that the “Complete Virtualization” environment suffers from.

Furthermore, it’s unclear to me how to factor in compensating controls into this rendering given what follows:

What is new here is that there are fewer solutions for providing virtual security than there are for providing physical security with firewalls and intrusion prevention appliances in the network. On the other hand, the cost of implementing virtualized security can be significantly lower than the cost of dedicated hardware appliances, just like the cost of managing a virtual server is lower than a physical server.

The security solutions available today are limited by how much integration exists with the virtualization platforms.  We’ve yet to see the VMMs/hypervisors opened up to allow true low-level integration and topology-sensitive security interaction with flow classification, provisioning, and disposition.

Almost all supposed "virtualization-ready" security solutions today are nothing more than virtual appliance versions of existing solutions or simply the same host-based solutions which run in the VM and manage not to cock it up.  Folding your management piece into something like VMware’s VirtualCenter doesn’t count.

In general, I simply disagree that the costs of implementing virtualized security (today) can be significantly lower than the cost of dedicated hardware appliances — not if you’re expecting the same levels of security you get in the conventional, non-virtualized world.

The reasons (as I give in my VirtSec presentations):  Loss of visibility, constraint of the virtual networking configurations, coverage, load on the hosts, licensing.  All really important.

Cutting to the Chase

I’m left waiting for the punchline, much like I was with Burton’s "Immutable Laws of Virtualization," and I think the reason why is that despite these formulae, the somewhat shallow definition of risk seems to still come down to nothing more than reasonably-informed speculation or subjective perception:

So, in the above risk analysis, one must also consider that the benefits in virtualization far outweigh the risks.

The question is not so much whether companies should proceed with virtualization – the market is already answering that resoundingly in the affirmative. The question is how to do that while minimizing the risk inherent in such a strategy.

These few sentences above seem to almost obviate the need for risk analysis at all and suggest that for most, security is still an afterthought.  High risk or not, the show must go on?

So given that virtualization is happening at a breakneck pace, that we have few good security solutions available, that we speak of risk "relatively," and that operationally the entire role and duty of "security" within virtualized environments is now shifting, how do we end up with this next statement?

In the long run, virtualized security solutions will not only help mitigate the risk of broadly deployed infrastructure virtualization, but will also provide new and innovative approaches to information security that is in itself virtual. The dynamic, flexible and portable nature of virtual servers is already leading to a new generation of dynamic, flexible and portable security solutions.

I like the awareness Andreas tries to bring in this paper, but I fear that I am not left with any new information or tools for assessing risk (let alone quantifying it) in a virtual environment. 

So what do I do?!  I still have no answer to the main points of this paper, "With all the hype surrounding server virtualization come the inevitable security concerns: are virtual servers less secure? Are we introducing higher risk into the data center?"

Well?  Are they?  Am I?

/Hoff