Archive

Archive for November, 2010

On Security Conference Themes: Offense *Versus* Defense – Or, Can You Code?

November 22nd, 2010 7 comments

This morning’s dialog on Twitter from @wmremes and @singe reminded me of something that’s been bouncing around in my head for some time.

Wim blogged about a tweet Jeff Moss made regarding Black Hat DC in which he suggested CFP submissions should focus on offense (versus defense).

Black Hat (and Defcon) have long focused on presentations which highlight novel, emerging attacks.  There are generally not a lot of high-profile “defensive” presentations/talks because, for the most part, they’re just not sexy: they generally involve hard work, cultural realignment, and the reality that, as hard as we try, attackers will always out-innovate and out-pace defenders.

More realistically, offense is sexy and offense sells — and it often sells defense.  That’s why vendors sponsor those shows in the first place.

Along these lines, one will notice that within our industry, the defining criterion for attack-versus-defend talks, and for those who give them, is one’s ability to write code and produce tools that demonstrate a vulnerability via exploit.  Conceptual vulnerabilities paired with non-existent exploits are generally thought of as fodder for academia.  Only when a tool that weaponizes an attack shows up do people pay attention.

Zero days rule by definition. There’s no analog on the defensive side unless you buy into marketing like “…ahead of the threat.” *cough* Defense for offense that doesn’t exist generally doesn’t get the majority of the funding ;)

So it’s no wonder that security “rockstars” in our industry are generally those who produce attack/offensive code which illustrate how a vector can be exploited.  It’s tangible.  It’s demonstrable.  It’s sexy.

On the other hand, most defenders are reconciled to using tools that others wrote — or become specialists in the integration of them — in order to parlay some advantage over the ever-increasing wares of the former.

Think of the folks who represent the security industry in terms of mindshare and get the most press.  Overwhelmingly it’s those “hax0rs” who write cool tools — tools that are more offensive in nature, even if they produce results oriented toward allowing practitioners to defend better (or at least that’s how they’re sold).  That said, there are also some folks who *do* code and *do* create things that are defensive in nature.

I believe the answer lies in balance; we need flashy exploits (no matter how impractical/irrelevant they may be to a large portion of the population) to drive awareness.  We also need more practitioner/governance talks to give people platforms upon which they can start to architect solutions.  We need more defenders to be able to write code.

Perhaps that’s what Richard Bejtlich meant when he tweeted: “Real security is built, not bought.”  That’s an interesting statement on lots of fronts. I’m selfishly taking Richard’s statement out of context to support my point, so hopefully he’ll forgive me.

That said, I don’t write code.  More specifically, I don’t write code well.  I have hundreds of ideas of things I’d like to do but can’t bridge the gap between ideation and proof-of-concept because I can’t write code.

This is why I often “invent” scenarios I find plausible, talk about them, and then get people thinking about how we would defend against them — usually in the vacuum of either offensive or defensive tools being available, or at least realized.

Sometimes there aren’t good answers.

I hope we focus on this balance more at shows like Black Hat.  I’m lucky enough to get to present my “research” there despite its being defensive in nature, but we need more defensive tools and talks to make this a reality.

/Hoff

Enhanced by Zemanta

The Future Of Audit & Compliance Is…Facebook?

November 20th, 2010 No comments
[Image: “SAN FRANCISCO - NOVEMBER 15: Facebook founder…” by Getty Images via @daylife]

I’ve had an epiphany.  The future is coming wherein we’ll truly have social security…

As the technology and operational models of virtualization and cloud computing mature and become operationally ubiquitous, ultimately delivering on the promise of agile, real-time service delivery via extreme levels of automation, the ugly necessities of security, audit and risk assessment will also require an evolution via automation to leverage the same.

At some point, that means the automated collection and overall assessment of posture (from a security, compliance, and risk perspective) will automagically occur (lest we continue to be the giant speed bump we’re described to be), and pop out indicatively, with glee, an end result of “good,” “bad,” or “pass,” “fail,” not unlike one of those in-flesh turkey thermometers that indicate doneness once a pre-set temperature is reached.
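To push the turkey-thermometer analogy a bit, the roll-up itself is the trivially codeable part.  Here’s a minimal sketch — and to be clear, the check names, the 20% threshold and the collection results below are entirely invented for illustration; they aren’t drawn from any real product or standard:

```python
# Hypothetical sketch: roll automated posture checks up into a single
# "pass"/"fail" verdict, turkey-thermometer style.  The checks and the
# 20% threshold are invented for illustration only.

def assess_posture(checks):
    """Each check is (name, passed, critical).  Any failed critical
    check, or more than 20% failed checks overall, means 'fail'."""
    failures = [name for name, passed, critical in checks if not passed]
    critical_failures = [name for name, passed, critical in checks
                         if not passed and critical]
    if critical_failures:
        return "fail", critical_failures
    if len(failures) > 0.2 * len(checks):
        return "fail", failures
    return "pass", failures

# Simulated collection results; in reality these would be gathered
# automatically from the environment.
posture = [
    ("disk-encryption-enabled", True,  True),
    ("mfa-enforced",            True,  True),
    ("patch-level-current",     False, False),
    ("audit-logging-on",        True,  True),
    ("open-admin-ports",        True,  False),
]

verdict, evidence = assess_posture(posture)
print(verdict, evidence)  # the thermometer pops: pass or fail, plus why
```

The hard part, of course, isn’t the roll-up; it’s the automated, trustworthy collection of posture feeding it.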

What does that have to do with Facebook?

Simple.

When we’ve all been sucked into the collective hive of the InterCloud matrix, the CISO/assessor/auditor/regulator will look at the score, the resultant assertions, and the supporting artifacts gathered via automation and simply click one button: “Like.”

You see, the auditor/regulator really is your friend. ;)

It’s a cruel future.  We’re all Zuck’d.

/Hoff


Incomplete Thought: Compliance – The Autotune Of The Security Industry

November 20th, 2010 3 comments
[Image: “LOS ANGELES, CA - JANUARY 31: Rapper T-Pain p…” by Getty Images via @daylife]

I don’t know if you’ve noticed, but lately the ability to carry a tune while singing is optional.

Thanks to Cher and T-Pain, the rampant use of Auto-Tune in the music industry has enabled pretty much anyone to record a song and make it sound like they can sing (from the Auto-Tune of encyclopedias, Wikipedia):

Auto-Tune uses a phase vocoder to correct pitch in vocal and instrumental performances. It is used to disguise off-key inaccuracies and mistakes, and has allowed singers to perform perfectly tuned vocal tracks without the need of singing in tune. While its main purpose is to slightly bend sung pitches to the nearest true semitone (to the exact pitch of the nearest tone in traditional equal temperament), Auto-Tune can be used as an effect to distort the human voice when pitch is raised/lowered significantly.[3]
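As an aside, the “nearest true semitone” snapping the quote describes is just rounding in log-frequency space: in equal temperament each semitone is 1/12 of an octave.  A toy sketch of that one step (emphatically not the Auto-Tune algorithm itself, which does its heavy lifting in a phase vocoder):

```python
import math

A4 = 440.0  # reference pitch in Hz (equal temperament)

def snap_to_semitone(freq_hz):
    """Round a frequency to the nearest equal-temperament semitone:
    work in log2 space, where each semitone is 1/12 of an octave."""
    semitones_from_a4 = round(12 * math.log2(freq_hz / A4))
    return A4 * 2 ** (semitones_from_a4 / 12)

# A slightly flat A4 (435 Hz) gets pulled up to 440 Hz;
# a sharp C5 (530 Hz) gets pulled down to ~523.25 Hz.
print(round(snap_to_semitone(435.0), 2))
print(round(snap_to_semitone(530.0), 2))
```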

A similar “innovation” has happened to the security industry.  Instead of having to actually craft and execute a well-tuned security program which focuses on managing risk in harmony with the business, we’ve simply learned to hum a little, add a couple of splashy effects and let the compliance Auto-Tune do its thing.

It doesn’t matter that we’re off-key.  It doesn’t matter that we’re not in tune.  It doesn’t matter that we hide mistakes.

All that matters is that auditors can sing along, repeating the chorus and ensuring that we hit the Top 40.

/Hoff


FedRAMP. My First Impression? We’re Gonna Need A Bigger Boat…

November 3rd, 2010 3 comments

I’m grumpy, confused and scared.  Classic signs of shock.  I can only describe what I’m feeling by virtue of an analog…

There’s a scene in the movie Jaws where Chief Brody, chumming with fish guts to attract and kill the giant shark from the back of the boat called “The Orca,” meets said fish for the first time.  Terrified by its menacing size, he informs [Captain] Quint: “You’re gonna need a bigger boat.”

I felt like that today as I read through the recently released draft of the long-anticipated FedRAMP documents.  I saw the menace briefly surface, grin at me, and silently slip back into the deep.  Sadly, channeling Brody, I whispered to myself “…we’re gonna need something much sturdier to land this fish we call cloud.”

I’m not going to make any friends with this blog.

I can barely get my arms around all of the issues I have.  There will be sequels, just like with Jaws, though unlike Roy Scheider, I will continue to be as handsome as ever.

Here’s what I do know…it’s 81 pages long and despite my unhappiness with the content and organization, per Vivek Kundra’s introduction, I can say that it will certainly “encourage robust debate on the best path forward.”  Be careful what you ask for, you might just get it…

What I expected isn’t what was delivered in this document. Perhaps in the back of my mind it’s exactly what I expected, it’s just not what I wanted.

This is clearly a workstream product crafted by committee and watered down in the process.  Unlike the shark in Jaws, it’s missing its teeth, but it’s just as frightening because its heft alone is scary enough.  Even though all I can see is the dorsal fin cresting the water’s surface, it’s enough to make me run for the shore.

As I read through the draft, I was struck by a wave of overwhelming disappointment.  This reads like nothing more than a document that scrapes together existing legacy risk assessment, vulnerability management, monitoring and reporting frameworks and loosely defines interactions between the various parties to arrive at a certification.  I find it hard to believe this isn’t simply a way for audit companies to make more money and for service providers to get rubber-stamped ATOs without much in the way of improved security or compliance.

This isn’t bettering security, compliance, governance or being innovative.  It’s not solving problems at a mass scale through automation or using new and better-suited mousetraps to do it.  It’s gluing stuff we already have together in an attempt to make people feel better about a hugely disruptive technical, cultural, economic and organizational shift.  This isn’t Gov2.0 at all.  It’s Gov1.0 with a patch.  It’s certainly not Cloud.

Besides the Center for Internet Security reference, there’s no mention of frameworks, tools, or organizations outside of government at all…that explains the myopic focus of “what we have” versus “what we need.”

The document is organized into three chapters:

Chapter 1: Cloud Computing Security Requirement Baseline
This chapter presents a list of baseline security controls for Low and Moderate impact Cloud systems. NIST Special Publication 800-53R3 provided the foundation for the development of these security controls.

Chapter 2: Continuous Monitoring
This chapter describes the process under which authorized cloud computing systems will be monitored. This section defines continuous monitoring deliverables, reporting frequency and responsibility for cloud service provider compliance with FISMA.

Chapter 3: Potential Assessment & Authorization Approach
This chapter describes the proposed operational approach for A&A’s for cloud computing systems. This reflects upon all aspects of an authorization (including sponsorship, leveraging, maintenance and continuous monitoring), a joint authorization process, and roles and responsibilities for Federal agencies and Cloud Service Providers in accordance with the Risk Management Framework detailed in NIST Special Publication 800-37R1.

It’s clear that the document was written almost exclusively from the perspective of farming out services to Public cloud providers capable of meeting FIPS 199 Low/Moderate requirements.  The beginning appears to be written from the perspective of SaaS services, and the scoping and definition of “cloud” aren’t framed, so it’s really difficult to understand what sort of cloud services are in scope.  NIST’s own cloud models aren’t presented.  Beyond Public SaaS services, it’s hard to tell whether Private, Hybrid, and Community clouds — PaaS or IaaS — were considered.

It’s like reading an article in Wired about the Administration’s love affair with Google while the realities of security and compliance are cloudwashed over.

I found the additional requirements and guidance related to the NIST 800-53-aligned control objectives to be hit or miss, and some of them utterly laughable (such as SC-7 – Boundary Protection: “Requirement: The service provider and service consumer ensure that federal information (other than unrestricted information) being transmitted from federal government entities to external entities using information systems providing cloud services is inspected by TIC processes.”)  Good luck with that.  The sections on backup are equally funny.

The “Continuous Monitoring” section, wherein the deliverable frequency and responsible party for each requirement are laid out, engenders a response from “The Princess Bride:”

You keep using that word (continuous)…I do not think it means what you think it means…

Only 2 of the 14 categories are ones which FedRAMP is required to provide (pentesting and IV&V of controls).  All the others are the responsibility of the provider.

Sigh.

There’s also no clear articulation of how, in a service deployed on IaaS (as an example), anything inside the workload’s VM fits into this scheme (you know…all the really important stuff like information and applications), or of how agency processes intersect with the CSP, FedRAMP and the JAB.

The very dynamism and agility of cloud are swept under the rug, especially in sections discussing change control.  It’s almost laughable…code changes in some “cloud” SaaS vendors every few hours.  The rigid and obtuse classification of the severity of changes is absolutely ludicrous.

I’m unclear if the folks responsible for some of this document have ever used cloud based services, frankly.

“Is there anything good in the document,” you might ask?  Yes, yes there is. Firstly, it exists and frames the topic for discussion.  We’ll go from there.

However, I’m at a loss as to how to deliver useful and meaningful commentary back to this team using the methodology they’ve constructed…there’s just so much wrong here.

I’ll do my best to hook up with folks at the NIST Cloud Workshop tomorrow and try, however if I smell anything remotely like seafood, I’m outa there.

/Hoff



Navigating PCI DSS (2.0) – Related to Virtualization/Cloud, May the Schwartz Be With You!

November 1st, 2010 3 comments

[Disclaimer: I'm not a QSA. I don't even play one on the Internet. Those who are will generally react to posts like these with the stock "it depends" answer, to which I respond, "You're right, it does."  Not sure where that leaves us other than with a collective sigh, but…]

The Payment Card Industry (PCI) Security Standards Council last week released version 2.0 of the Data Security Standard (DSS) [legal agreement required].  Strangely, this update from v1.2.1 does not introduce any major new requirements but instead clarifies language.

Accompanying this latest revision is also a guidance document titled “Navigating PCI DSS: Understanding the Intent of the Requirements, v2.0” [PDF]

One of the more interesting additions in the guidance is the direct call-out of virtualization which, although late to the game given the importance of this technology and its operational impact, is a welcome addition to this reader.  I should mention I’ve sat in on three of the virtualization SIG calls, which gives me an interesting perspective as I read through the document.  Let me just summarize by saying that “…you can’t please all the people all of the time…” ;)

What I find profoundly interesting is that since virtualization is such a prominent and enabling foundational technology in IaaS cloud offerings, the guidance is still written as though the multi-tenant issues surrounding cloud computing (as an extension of virtualization) don’t exist and as though shared infrastructure doesn’t complicate the picture.  Certainly there are “cloud” providers who don’t rely on infrastructure shared beyond their own environments to deliver service to different customers (I think we call them SaaS providers), but think about the context of people wanting to use AWS to deliver services that are in scope for PCI.

Here’s what the navigation document has to say specific to virtualization and ultimately how that maps to IaaS cloud offerings.  We’re going to cover just the introductory paragraph in this post with the guidance elements and the actual DSS in a follow-on.  However, since many people are going to use this navigation document as their first blush, let’s see where that gets us:

PCI DSS requirements apply to all system components. In the context of PCI DSS, “system components” are defined as any network component, server or application that is included in, or connected to, the cardholder data environment. “System components” also include any virtualization components such as virtual machines, virtual switches/routers, virtual appliances, virtual applications/desktops, and hypervisors.

I would have liked to see a specific mention of virtual storage here and, although they’re likely included by implication in the management system/sub-system mentions above and below, a direct mention of APIs.  Thanks to heavy levels of automation, the operational movements related to DevOps, and APIs becoming the interface of the integration and management planes, these are unexplored lands for many.

I’m also inclined to wonder about virtualization approaches that are not server-centric, such as the virtualization of physical networking devices, databases, etc.

If virtualization is implemented, all components within the virtual environment will need to be identified and considered in scope for the review, including the individual virtual hosts or devices, guest machines, applications, management interfaces, central management consoles, hypervisors, etc. All intra-host communications and data flows must be identified and documented, as well as those between the virtual component and other system components.

It can be quite interesting to imagine the scoping exercises (or, more specifically, de-scoping) associated with this requirement in a cloud environment.  Even if the virtualized platforms are operated solely on behalf of a single customer (read: no shared infrastructure — private cloud), this is still an onerous task, so I wonder how — if at all — it could be accomplished in a public IaaS offering given the lack of transparency we see in today’s cloud operators.  Much of what is being asked for relating to infrastructure and “data flows” between the “virtual component and other system components” represents the CSP’s secret sauce.
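For illustration, the scoping rule in the quoted text (“included in, or connected to, the cardholder data environment”) behaves like graph reachability: document every component and connection, then everything reachable from the CDE is in scope.  A hypothetical sketch — the component names and links below are invented, not from the DSS:

```python
# Hypothetical sketch of the scoping exercise: model the environment as
# a graph of components and documented connections, then treat anything
# reachable from the cardholder data environment (CDE) as in scope.
from collections import deque

connections = {
    "cde-app-vm":   ["vswitch-1", "db-vm", "hypervisor-1"],
    "db-vm":        ["vswitch-1", "hypervisor-1"],
    "vswitch-1":    ["mgmt-console"],
    "hypervisor-1": ["mgmt-console"],
    "mgmt-console": [],
    "dev-test-vm":  ["vswitch-2"],  # segmented; never linked to the CDE
    "vswitch-2":    [],
}

def in_scope(cde_components, connections):
    """Breadth-first walk outward from the CDE; everything reached is
    'included in, or connected to' the CDE and therefore in scope."""
    scope, queue = set(cde_components), deque(cde_components)
    while queue:
        for neighbor in connections.get(queue.popleft(), []):
            if neighbor not in scope:
                scope.add(neighbor)
                queue.append(neighbor)
    return scope

print(sorted(in_scope(["cde-app-vm"], connections)))
# dev-test-vm and vswitch-2 stay out of scope only because the
# segmentation is documented and provable
```

Note how the hypervisor, virtual switch and management console get pulled into scope by connectivity alone; de-scoping depends entirely on documented, provable segmentation, which is precisely what’s opaque in a public IaaS offering.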

The implementation of a virtualized environment must meet the intent of all requirements, such that the virtualized systems can effectively be regarded as separate hardware. For example, there must be a clear segmentation of functions and segregation of networks with different security levels; segmentation should prevent the sharing of production and test/development environments; the virtual configuration must be secured such that vulnerabilities in one function cannot impact the security of other functions; and attached devices, such as USB/serial devices, should not be accessible by all virtual instances.

“…clear segmentation of functions and segregation of networks with different security levels” and “the virtual configuration must be secured such that vulnerabilities in one function cannot impact the security of other functions,” eh?  I don’t see how anyone can expect to meet this requirement in any system underpinned by a virtualized infrastructure stack (hardware or software), whether it’s multi-tenant or not.  One vulnerability in the hypervisor makes this an impossibility.  Add in management, storage, and networking.  This basically comes down to trusting in the sanctity of the hypervisor.

Additionally, all virtual management interface protocols should be included in system documentation, and roles and permissions should be defined for managing virtual networks and virtual system components. Virtualization platforms must have the ability to enforce separation of duties and least privilege, to separate virtual network management from virtual server management.

Special care is also needed when implementing authentication controls to ensure that users authenticate to the proper virtual system components, and distinguish between the guest VMs (virtual machines) and the hypervisor.

The rest is pretty standard stuff, but if you read the guidance sections (next post) it gets even more fun.  This is why the subjectivity, expertise and experience of the QSA are so central to the quality of the audit when virtualization and cloud are involved.  For example, let’s take a sneak peek at section 2.2.1, as it is a bit juicy:

2.2.1 Implement only one primary function per server to prevent functions that require different security levels from co-existing on the same server. (For example, web servers, database servers, and DNS should be implemented on separate servers.)
Note: Where virtualization technologies are in use, implement only one primary function per virtual system component.

I acknowledge that there are “cloud” providers who are PCI certified at the highest tier.  Many of them are SaaS providers.  Many simply use their own server stacks in co-located facilities but, due to their size and services, merely call themselves cloud providers — many aren’t even virtualized per the description above.  Further, there are also methods of limiting scope and newer technologies, such as tokenization, that can assist in solving some of the information-centric issues with what would otherwise be in-scope data, but they offset many of the cost-driven efficiencies marketed by mass-market, low-cost cloud providers today.

I’d love to hear from an IaaS public cloud provider who is PCI certified (to the VM boundary) with customers that are in turn certified with in-scope applications and cardholder data — or even from a SaaS provider who sits atop an IaaS provider…

Just read this first before responding, please.

/Hoff
