Archive

Archive for the ‘Risk Management’ Category

Incomplete Thought: The Psychology Of Red Teaming Failure – Do Not Pass Go…

August 6th, 2013 14 comments

team fortress red team (Photo credit: gtrwndr87)

I could probably just ask this of some of my friends — many of whom are the best in the business when it comes to Red Teaming/Pen Testing — but I thought it would be an interesting little dialog here, in the open:

When a Red Team is engaged by an entity to perform a legally-authorized pentest (physical or electronic) with an explicit “get out of jail free card,” does that change the team’s tactics, strategy and risk appetite compared to how they would operate without that parachute?

Specifically, does the team dial up or dial down the aggressiveness of the approach and execution KNOWING that they won’t be prosecuted, go to jail, etc.?

Blackhats and criminals operating outside this envelope don’t have the luxury of counting on a gilded escape should failure occur and thus the risk/reward mapping *might* be quite different.

To that point, I wonder what the gap is between an authorized Red Team action and the actions of those who have everything to lose.  What say ye?

/Hoff


The Classical DMZ Design Pattern: How To Kill Security In the Cloud

July 7th, 2010 6 comments

Every day I get asked to discuss how Cloud Computing impacts security architecture and what enterprise security teams should do when considering “Cloud.”

These discussions generally lend themselves to a bifurcated set of perspectives depending upon whether we’re discussing Public or Private Cloud Computing.

This is unfortunate.

From a security perspective, focusing the discussion primarily on the deployment model instead of thinking holistically about the “how, why, where, and who” of Cloud often means that we’re tethered to outdated methodologies because that’s where our comfort zones are.

When we’re discussing Public Cloud, the security teams are starting to understand that the choice of compensating controls and how they deploy and manage them require operational, economic and architectural changes.  They are also coming to terms with the changes to application architectures as they relate to distributed computing and SOA-like implementations.  It’s uncomfortable and it’s a slow slog forward (for lots of good reasons), but good questions are asked when considering the security, privacy and compliance impacts of Public Cloud, what can/should be done about them and how things need to change.

When discussing Private Cloud, however, even when a “clean slate design” is proposed, the same teams tend to try to fall back to what they know and preserve the classical n-tier application architecture separated by physical or virtual compensating controls — the classical split-subnet DMZ or perimeterized model of “inside” vs “outside.” They can do this given the direct operational control exposed by highly-virtualized infrastructure.  Sometimes they’re also forced into this given compliance and audit requirements. The issue here is that this discussion centers around molding cloud into the shape of the existing enterprise models and design patterns.

This is an issue; trying to simultaneously secure these diametrically-opposed architectural implementations yields cost inefficiencies, security disparity and policy violations, introduces operational risk, and generally means that the ball doesn’t get moved forward in protecting the things that matter most.

Public Cloud Computing is a good thing for the security machine; it forces us to (again) come face-to-face with the ugliness of the problems of securing the things that matter most — our information. Private Cloud Computing — when improperly viewed from the perspective of simply preserving the status quo — can often cause stagnation and introduce roadblocks.  We’ve got to move beyond this.

Public Cloud speaks to the needs of (and delivers on) agility, flexibility, mobility and efficiency. These are things that traditional enterprise security is often not well aligned with.  Trying to fit “Cloud” into neat and tidy DMZ “boxes” doesn’t work.  Cloud requires revisiting our choices for security. We should take advantage of it, not try to squash it.
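To make the contrast concrete, here is a minimal, hypothetical Python sketch (mine, not anything from a vendor or a standard) of the two policy models: the classic split-subnet DMZ keys every decision to network location, while a workload-centric model keys it to what the workload is. Every CIDR, tag, rule and helper name below is an illustrative assumption.

# Hypothetical illustration: location-centric (classic DMZ) vs. workload-centric policy.
# All CIDRs, tags and rule structures are invented for the sake of the example.
from ipaddress import ip_address, ip_network

# Classic split-subnet DMZ: policy is keyed to where traffic comes from and goes to.
CLASSIC_DMZ_RULES = [
    {"src": "0.0.0.0/0",   "dst": "10.1.1.0/24", "port": 443,  "action": "allow"},  # internet -> web tier
    {"src": "10.1.1.0/24", "dst": "10.1.2.0/24", "port": 8443, "action": "allow"},  # web -> app tier
    {"src": "10.1.2.0/24", "dst": "10.1.3.0/24", "port": 5432, "action": "allow"},  # app -> db tier
]

def classic_allows(src_ip: str, dst_ip: str, port: int) -> bool:
    """Location decides: if a workload moves subnets, the policy silently breaks."""
    for rule in CLASSIC_DMZ_RULES:
        if (ip_address(src_ip) in ip_network(rule["src"])
                and ip_address(dst_ip) in ip_network(rule["dst"])
                and port == rule["port"]):
            return rule["action"] == "allow"
    return False

# Workload-centric model: policy is keyed to what the workload *is*, not where it sits.
WORKLOAD_RULES = [
    {"from": "role:web", "to": "role:app", "port": 8443},
    {"from": "role:app", "to": "role:db",  "port": 5432},
]

def workload_allows(src_tags: set, dst_tags: set, port: int) -> bool:
    """Identity decides: the same rule holds wherever the instances land."""
    return any(rule["from"] in src_tags and rule["to"] in dst_tags and rule["port"] == port
               for rule in WORKLOAD_RULES)

if __name__ == "__main__":
    print(classic_allows("10.1.1.25", "10.1.2.8", 8443))      # True only while the IPs stay put
    print(workload_allows({"role:web"}, {"role:app"}, 8443))  # True regardless of placement

The point isn’t the code; it’s that dragging the left-hand model into a highly elastic environment is exactly the stagnation described above.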

/Hoff


Patching the (Hypervisor) Platform: How Do You Manage Risk?

April 12th, 2010 7 comments

Hi. Me again.

In 2008 I wrote a blog titled “Patching the Cloud,” which I followed up with material examples in 2009 in another titled “Redux: Patching the Cloud.”

These blogs focused mainly on virtualization-powered IaaS/PaaS offerings and whilst they targeted “Cloud Computing,” they applied equally to the heavily virtualized enterprise.  To this point I wrote another in 2008 titled “On Patch Tuesdays For Virtualization Platforms.”

The operational impacts of managing change control, vulnerability management and threat mitigation have always intrigued me, especially at scale.

I was reminded this morning of the importance of the question posed above as VMware released a series of security advisories detailing ten vulnerabilities across many products, some of which are remotely exploitable. While security vulnerabilities in hypervisors are not new, it’s unclear to me how many heavily-virtualized enterprises or Cloud providers actually deal with what it means to patch this critical layer of infrastructure.

Once virtualized, we expect/assume that VMs and the guest OSes within them should operate with functional equivalence when compared to non-virtualized instances. We have, however, seen that this is not the case. It’s rare, but it happens that OSes and applications, once virtualized, suffer from issues that cause faults in the underlying virtualization platform itself.

So here’s the $64,000 question – feel free to answer anonymously:

While virtualization is meant to effectively isolate the hardware from the resources atop it, the VMM/Hypervisor itself maintains a delicate position arbitrating this abstraction.  When the VMM/Hypervisor needs patching, how do you regression test the impact across all your VM images (across test/dev, production, etc.)?  More importantly, how are you assessing/measuring compound risk across shared/multi-tenant environments with respect to patching and its impact?
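For what it’s worth, here’s one way I’d start framing an answer: a minimal, hypothetical Python sketch (not tied to VMware or any real tooling) of a patch regression matrix over the image inventory, plus a crude compound-risk estimate for shared environments. The class names, stub functions, build string and probabilities are all assumptions for illustration.

# Hypothetical sketch of a hypervisor-patch regression matrix; every name here is a stand-in.
from dataclasses import dataclass
from itertools import product

@dataclass
class VMImage:
    name: str
    environment: str   # e.g. "dev", "test", "prod"
    criticality: int   # 1 (low) .. 5 (high)

def run_smoke_tests(image: VMImage, hypervisor_build: str) -> bool:
    """Stub: boot the image on a patched host in an isolated cluster and run its test suite."""
    raise NotImplementedError("wire this up to your own test harness")

def regression_matrix(images, patched_builds):
    """Yield every (image, patched build) pair that must pass before the patch is promoted."""
    # Production-tagged, high-criticality images first: these gate the rollout.
    ordered = sorted(images, key=lambda i: (i.environment != "prod", -i.criticality))
    yield from product(ordered, patched_builds)

def compound_risk(images, per_image_failure_prob: float) -> float:
    """Crude multi-tenant view: chance that at least one co-resident image regresses,
    assuming independent failures."""
    return 1 - (1 - per_image_failure_prob) ** len(images)

if __name__ == "__main__":
    fleet = [VMImage("billing-db", "prod", 5), VMImage("ci-runner", "dev", 2)]
    for image, build in regression_matrix(fleet, ["hypervisor-build-patched-03"]):
        print(f"queue {image.name} against {build}")
    print(f"compound risk at 1% per image: {compound_risk(fleet, 0.01):.2%}")

Even a toy like this makes the scaling problem obvious: the matrix grows with every image, every tenant and every patch cycle.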

/Hoff

P.S. It occurs to me that after I wrote the blog last night on ‘high assurance (read: TPM-enabled)’ virtualization/cloud environments with respect to change control, the reference images for trusted launch environments would be impacted by patches like this. How are we going to scale this from a management perspective?


RSA Interview (c/o Tripwire) On the State Of Information Security In Virtualized/Cloud Environments.

March 7th, 2010 1 comment

David Sparks (c/o Tripwire) interviewed me on the state of Information Security in virtualized/cloud environments.  It’s another reminder about Information Centricity.

Direct Link here.

Embedded below:


ENISA launches Cloud Computing Security Risk Assessment Document

November 20th, 2009 1 comment

ENISA (the European Network and Information Security Agency) today launched their 124-page report on Cloud Computing Security Risk Assessment.

At first glance it’s an excellent read and will be a fantastic accompaniment to the CSA’s guidance.  I plan to dig into it more over the weekend.  I really appreciate the risk assessment approach, which allows folks to prioritize their efforts on understanding the relevant high-level issues associated with Cloud.

Very well done.  I look forward to seeing how CSA and ENISA can further work together on upcoming projects!  I think the European perspective will help bring some balance and alternative views on Cloud with regard to legal and compliance issues specifically.

You can find the document here.

ENISA, supported by a group of subject matter experts comprising representatives from industry, academia and governmental organizations, has conducted, in the context of the Emerging and Future Risk Framework project, a risk assessment of the cloud computing business model and technologies. The result is an in-depth and independent analysis that outlines some of the information security benefits and key security risks of cloud computing. The report also provides a set of practical recommendations.


Observations on “Securing Microsoft’s Cloud Infrastructure”

June 1st, 2009 1 comment

I was reading a blog post from Charlie McNerney, Microsoft’s GM, Business & Risk Management, Global Foundation Services, on “Securing Microsoft’s Cloud Infrastructure.”

Intrigued, I read the white paper to first get a better understanding of the context for his blog post and to also grok what he meant by “Microsoft’s Cloud Infrastructure.”  Was he referring to Azure?

The answer, per the whitepaper, is that Microsoft — along with everyone else in the industry — now classifies all of its online Internet-based services as “Cloud”:

Since the launch of MSN® in 1994, Microsoft has been building and running online services. The GFS division manages the cloud infrastructure and platform for Microsoft online services, including ensuring availability for hundreds of millions of customers around the world 24 hours a day, every day. More than 200 of the company’s online services and Web portals are hosted on this cloud infrastructure, including such familiar consumer-oriented services as Windows Live™ Hotmail® and Live Search, and business-oriented services such as Microsoft Dynamics® CRM Online and Microsoft Business Productivity Online Standard Suite from Microsoft Online Services. 

Before I get to the part I found interesting, I think that the whitepaper (below) does a good job of providing a 30,000-foot view of how Microsoft applies the lessons learned from its operational experience and the SDL to its “Cloud” offerings.  It’s something designed to market the fact that Microsoft wants us to know they take security seriously.  Okay.

Here’s what I found interesting in Charlie’s blog post; it appears in the last two sentences (boldfaced):

The white paper we’re releasing today describes how our coordinated and strategic application of people, processes, technologies, and experience with consumer and enterprise security has resulted in continuous improvements to the security practices and policies of the Microsoft cloud infrastructure.  The Online Services Security and Compliance (OSSC) team within the Global Foundation Services division that supports Microsoft’s infrastructure for online services builds on the same security principles and processes the company has developed through years of experience managing security risks in traditional software development and operating environments. Independent, third-party validation of OSSC’s approach includes Microsoft’s cloud infrastructure achieving both SAS 70 Type I and Type II attestations and ISO/IEC 27001:2005 certification. We are proud to be one of the first major online service providers to achieve ISO 27001 certification for our infrastructure. We have also gone beyond the ISO standard, which includes some 150 security controls. We have developed 291 security controls to date to account for the unique challenges of the cloud infrastructure and what it takes to mitigate some of the risks involved.

I think it’s admirable that Microsoft is sharing its methodologies and ISMS objectives, and it’s a good thing that they have adopted ISO standards and secured SAS 70 attestation as a baseline.

However, I would be interested in understanding what 291 security controls means to a security posture versus, say, 178.  It sounds a little like Twitter follower counts.
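As a toy illustration of why the raw count tells you so little, consider the hypothetical sketch below: a lean catalog and a sprawling one can cover very different sets of objectives, and it is the coverage gaps, not the cardinality, that describe posture. The objective names and control mappings are invented for the example.

# Hypothetical example: control count vs. objective coverage (all identifiers invented).
OBJECTIVES = {"access_control", "cryptography", "physical_security",
              "ops_security", "incident_mgmt", "supplier_mgmt"}

# A lean catalog: roughly one control per objective.
CATALOG_A = {"AC-1": "access_control", "CR-1": "cryptography", "PH-1": "physical_security",
             "OP-1": "ops_security", "IR-1": "incident_mgmt", "SU-1": "supplier_mgmt"}

# A sprawling catalog: 200 controls, but four objectives left uncovered.
CATALOG_B = {f"CTL-{n:03}": "access_control" for n in range(120)}
CATALOG_B.update({f"OPS-{n:03}": "ops_security" for n in range(80)})

def coverage(catalog: dict):
    """Return the control count and the objectives the catalog fails to address."""
    return len(catalog), OBJECTIVES - set(catalog.values())

for label, catalog in (("A", CATALOG_A), ("B", CATALOG_B)):
    count, gaps = coverage(catalog)
    print(f"catalog {label}: {count} controls, uncovered objectives: {sorted(gaps) or 'none'}")

In other words, 291 versus 178 is only interesting alongside the list of what each set actually covers and how well.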

I can’t really explain why those last two sentences stuck in my craw, but they did.

I’d love to know more about what Microsoft considers those “unique challenges of the cloud infrastructure” as well as the risk assessment framework(s) used to manage/mitigate them — I’m assuming they’ve made great STRIDEs in doing so. 😉

/Hoff

Cloud Catastrophes (Cloudtastrophes?) Caused by Clueless Caretakers?

March 22nd, 2009 4 comments
You'll ask "How?" every time...

Enter the dawn of the Cloudtastrophe…

I read a story today penned by Maureen O’Gara titled “Carbonite Loses Cloud-Based Data, Sues Storage Vendor.”

I thought this was going to be another story regarding a data breach (loss) of customer data by a Cloud Computing service vendor.

What I found, however, was another hyperbolic illustration of how the messaging of the Cloud by vendors has set expectations for service and reliability that are out of alignment with reality when you take a lack of sound enterprise architecture, proper contingency planning, solid engineering and common sense, and add the economic lubricant of the Cloud.

Stir in a little journalistic sensationalism, and you’ve got CloudWow!

Carbonite, the online backup vendor, says it lost data belonging to over 7,500 customers in a number of separate incidents in a suit filed in Massachusetts charging Promise Technology Inc with supplying it with $3 million worth of defective storage, according to a story in Saturday’s Boston Globe.

The catastrophe is the latest in a series of cloud failures.

The widgetry was supposed to detect disk failures and transfer the data to a working drive. It allegedly didn’t.

The story says Promise couldn’t fix the errors and “Carbonite’s senior engineers, senior management and senior operations personnel…spent enormous amounts of time dealing with the problems.”

Carbonite claims the data losses caused “serious damage” to its business and reputation for reliability. It’s demanding unspecified damages. Promise told the Globe there was “no merit to the allegations.”

Carbonite, which sells largely to consumers and small businesses and competes with EMC’s Mozy, tells its customers: “never worry about losing your files again.”

The abstraction of infrastructure and democratization of applications and data that Cloud Computing services can bring does not mean that all services are created equal.  It does not make our services or information more secure (or less, for that matter). Just because a vendor brands themselves as a “Cloud” provider does not mean that “their” infrastructure is any more implicitly reliable, stable or resilient than traditional infrastructure, or that proper enterprise architecture as it relates to people, process and technology is in place.  How the infrastructure is built and maintained is just as important as ever.

If you take away the notion of Carbonite being a “Cloud” vendor, would this story read any differently?

We’ve seen a few failures recently of Cloud-based services, most of them sensationally lumped into the Cloud pile: Google, Microsoft, and even Amazon; most of the stories about them relate the impending doom of the Cloud…

Want another example of how Cloud branding, the Web 2.0 experience and blind faith make for another FUDtastic “catastrophe in the cloud?”  How about the well-known service Ma.gnolia?

There was a meltdown at bookmark sharing website Ma.gnolia Friday morning. The service lost both its primary store of user data, as well as its backup. The site has been taken offline while the team tries to reconstruct its databases, though some users may never see their stored bookmarks again.

The failure appears to be catastrophic. The company can’t say to what extent it will be able to restore any of its users’ data. It also says the data failure was so extensive, repairing the loss will take “days, not hours.”

So we find that a one-man shop was offering a service that people liked, and it died a horrible death.  Because it was branded as a Cloud offering, it “seemed” bigger than it was.  This is where perception definitely was not reality, and now we’re left with a fluffy bad taste in our mouths.

Again, what this illustrates is that just because a service is “Cloud-based” does not imply it’s any more reliable or resilient than one that is not. It’s just as important that, as enterprises look to move to the Cloud, they perform as much due diligence on their providers as makes sense. We’ll see a weeding out of the ankle-biters in Cloud Computing.

Nobody ever gets fired for buying IBM…

What we’ll also see is that even though we’re not supposed to care what our Cloud providers’ infrastructure is powered by and how, we absolutely will in the long term and the vendors know it.

This is where people start to freak about how standards and consolidation will kill innovation in the space but it’s also where the realities of running a business come crashing down on early adopters.

Large enterprises will move to providers who can demonstrate that their services are solid by way of co-branding with the reputation of the providers of infrastructure, coupled with compliance with “standards.”

The big players like IBM see this as an opportunity and as early as last year introduced a Cloud certification program:

IBM to Validate Resiliency of Cloud Computing Infrastructures

Will Consult With Businesses of All Sizes to Ensure Resiliency, Availability, Security; Drive Adoption of New Technology

ARMONK, NY – 24 Nov 2008: In a move that could spur the rise of the nascent computing model known as “cloud,” IBM (NYSE: IBM) today said it would introduce a program to validate the resiliency of any company delivering applications or services to clients in the cloud environment. As a result, customers can quickly and easily identify trustworthy providers that have passed a rigorous evaluation, enabling them to more quickly and confidently reap the business benefits of cloud services.

Cloud computing is a model for network-delivered services, in which the user sees only the service and does not view the implementation or infrastructure required for its delivery. The success to date of cloud services like storage, data protection and enterprise applications, has created a large influx of new providers. However, unpredictable performance and some high-profile downtime and recovery events with newer cloud services have created a challenge for customers evaluating the move to cloud.

IBM’s new “Resilient Cloud Validation” program will allow businesses who collaborate with IBM on a rigorous, consistent and proven program of benchmarking and design validation to use the IBM logo: “Resilient Cloud” when marketing their services.

Remember the “Cisco Powered Network” program?  How about a “Cisco Powered Cloud?”  See how GoGrid advertises that their load balancers are F5?

In the long term, much like the Capital One credit card commercials that challenge the company providing your credit card services by asking “What’s in your wallet?”, you can expect to start asking the same thing about your Cloud providers’ offerings.

/Hoff

 

Trust But Verify? That’s An Oxymoron…

February 23rd, 2009 4 comments

In response to my post regarding Cloud (SaaS, really) providers' security, Allen Baranov asked me the following excellent question in the comments:

Hoff,

What would make you trust "the Cloud"? Scrap that… stupid question…

What would make you trust SaaS providers?

To which I responded:

Generally, my CEO or CFO. 🙁  

I don't "trust" third party vendors with my data. I never will. I simply exercise the maximal amount of due diligence that I am afforded given prevailing time, money, resources and transparency and assess risk from there.

Even if the data is not critical/sensitive, I don't "trust" that it's not going to be mishandled. Not in today's world.  (Ed: How I deal with that mishandling is the secret sauce…)

I then got thinking about the line that Ronald Reagan is often credited with wherein he described managing relations with the former Soviet Union:

Trust but verify.

Security professionals use that phrase a lot. They shouldn't. It's oxymoronic.

The very definition of "trust" is:

trust |trəst|
noun
firm belief in the reliability, truth, ability, or strength of someone or something: relations have to be built on trust | they have been able to win the trust of the others.
• acceptance of the truth of a statement without evidence or investigation: I used only primary sources, taking nothing on trust.
• the state of being responsible for someone or something: a man in a position of trust.
• poetic/literary a person or duty for which one has responsibility: rulership is a trust from God.
• poetic/literary a hope or expectation: all the great trusts of womanhood.

See the second bullet above "….without evidence or investigation"?  I don't "trust" people over which I have no effective control. With third parties handling your data, you have no effective "control." You have the capability to audit, assess and recover, but control?  Nope.

Does that mean I think you should not put your information into the hands of a third party?  Of course not.  It's inevitable.  You already have. However, admitting defeat and working from there may make Jack a dull boy, but it also means he's not unprepared when the bad stuff happens.  And it will.

I stand by my answer to Allen.

You?

/Hoff

Asset Focused, Not Auditor Focused

May 3rd, 2008 5 comments

Gunnar Peterson wrote a great piece the other day on the latest productization craze in InfoSec – GRC (Governance, Risk Management and Compliance) wherein he asks "GRC – To Be or To Do?"

I don’t really recall when or from whence GRC sprung up as an allegedly legitimate offering, but to me it seems like a fashionably over-sized rug under which the existing failures of companies to effectively execute on the individual G, R, and C initiatives are conveniently being swept.

I suppose the logic goes something like this: "If you can’t effectively govern, manage risk or measure compliance, it must be because what you’re doing is fragmented and siloed.  What you need is a product/framework/methodology that takes potentially digestible deliverables and perspectives and "evolves" them into a behemoth suite instead?"

I do not dispute that throughout most enterprises, the definitions, approaches and processes in managing each function are largely siloed and fragmented, and I see the attractiveness of integrating and standardizing them, but I am unconvinced that re-badging a collection of control and policy frameworks constitutes a radical new approach.

GRC appears to be a way to sell more products and services under a fancy new name to address problems rather than evaluate and potentially change the way in which we solve them.  Look at who’s pushing this: large software companies and consultants as well as analysts looking to pin their research to something more meaningful.

From a first blush, GRC isn’t really about governance or managing risk.  It’s audit-driven compliance all tarted up.

It’s a more fashionable way of getting all your various framework and control definitions in one place and appealing to an auditor’s desire for centralized "stuff" in order to document the effectiveness of controls and track findings against some benchmark.  I’m not really sure where the business-driven focus comes into play.

It’s also sold as a more efficient way of reducing the scope and costs of manual process controls.  Fine.  Can’t argue with that.  I might even say it’s helpful, but at what cost?

Gunnar said:

GRC (or Governance, Risk Management, and Compliance for the uninitiated) is all the rage, but I have to say I think that again Infosec has the wrong focus.

Instead of Risk Management helping to deliver transparent Governance and, as a natural by-product, demonstrating compliance as a function of the former, the model’s up-ended, with compliance driving the inputs and being mislabeled.

As I think about it, I’m not sure GRC would be something a typical InfoSec function would purchase or use unless forced, which is part of the problem.  I see internal audit driving the adoption which, given today’s pressures (especially in public companies), would first start with establishing gaps against regulatory compliance.

If the InfoSec function is considering an approach that focuses on protecting the things that matter most and managing risk to an acceptable level, one that is not compliance-driven but rather built upon a business- and asset-driven approach, then rather than making a left turn toward GRC, consider what Gunnar suggested:

Personally, I am happy sticking to classic infosec knitting – delivering confidentiality, integrity, and availability through authentication, authorization, and auditing. But if you are looking for a next generation conceptual horse to bet on, I don’t think GRC is it, I would look at information survivability. Hoff’s information survivability primer is a great starting point for learning about survivability.

Why survivability is more valuable over the long haul than GRC is that survivability is focused on assets not focused on giving an auditor what they need, but giving the business what it needs.

Seminal paper on survivability by Lipson, et al. "survivability solutions are best understood as risk management strategies that first depend on an intimate knowledge of the mission being protected." Make a difference – asset focus, not auditor focus.

For obvious reasons, I am compelled to say "me, too."

I would really like to talk to someone in a large enterprise who is using one of these GRC suites — I don’t really care which department you’re from.  I just want to examine my assertions and compare them against my efforts and understanding.

/Hoff

The Challenge of Virtualization Security: Organizational and Operational, NOT Technical

March 25th, 2008 7 comments

Taking the bull by the horns…

I’ve spoken many times over the last year on the impact virtualization brings to the security posture of organizations.  While there are certainly technology issues that we must overcome, we don’t have solutions today that can effectively deliver us from evil. 

Anyone looking for the silver bullet is encouraged to instead invest in silver buckshot.  No shocker there.

There are certainly technology and solution providers looking to help solve these problems, but honestly, they are constrained by the availability of, and visibility into, the VMM/hypervisors of the virtualization platforms themselves.

Obviously announcements like VMware’s VMsafe will help turn that corner, but VMsafe requires re-tooling of ISV software and new versions of the virtualization platforms.  It’s a year+ away and only addresses concerns for a single virtualization platform provider (VMware) and not others.

The real problem of security in a virtualized world is not technical, it is organizational and operational.

With the consolidation of applications, operating systems, storage, information, security and networking — all virtualized into a single platform rather than being discretely owned, managed and supported by (reasonably) operationally-mature teams — the biggest threat we face in virtualization is that we have now lost not only visibility, but also the clearly-defined lines of demarcation garnered from the separation of duties we had in the non-virtualized world.

Many companies have segmented off splinter cells of "virtualization admins" from the server teams, and they are often solely responsible for the virtualization platforms, which includes the care, feeding, diapering and powdering of not only the operating systems and virtualization platforms, but the networking and security functionality also.

No offense to my brethren in the trenches, but this is simply a case of experience and expertise.  Server admins are not experts in network or security architectures and operations, just as the latter cannot hope to be experts in the former’s domain.

We’re in an arms race now where virtualization brings brilliant flexibility, agility and cost savings to the enterprise, but ultimately further fractures the tenuous relationships between the server, network and security teams.

Now that the first-pass consolidation pilots of virtualizing non-critical infrastructure assets have been held up as beacons of ROI in our datacenters, security and networking teams are exercising their veto powers as virtualization efforts creep toward critical production applications, databases and transactional systems.

Quite simply, the ability to express risk, security posture, compliance, troubleshooting and the measurement of SLAs and dependencies within the construct of a virtualized world is much more difficult than in the discretely segregated physical world, and when taken to the mat on these issues, the virtual server admins simply cannot address them competently in the language of the security and risk teams.

This is going to make for some unneeded friction in what was supposed to be a frictionless effort.  If you thought the security teams were thought of as speed bumps before, you’re not going to like what happens soon when they try to delay/halt a business-driven effort to reduce costs, speed time-to-market, increase availability and enable agility.

I’ll summarize my prior recommendations as to how to approach this conundrum in a follow-on post, but the time is now to get these teams together and craft the end-play strategies and desired end-states for enterprise architecture in a virtualized world before we end up right back where we started 15+ years ago…on the hamster wheel of pain!

/Hoff