Archive for the ‘Information Security’ Category

Incomplete Thought: Why Security Doesn’t Scale…Yet.

January 11th, 2011 1 comment

There are lots of reasons one might cite to illustrate why operationalizing security — from both the human and technology perspectives — doesn’t scale.

I’ve painted numerous pictures highlighting the cyclical nature of technology transitions, the supply/demand curve related to threats, vulnerabilities, technology and compensating controls, and even relevant anecdotes involving the intersection of Moore’s and Metcalfe’s laws.  This really was a central theme in my Cloudinomicon presentation: “idempotent infrastructure, building survivable systems and bringing sexy back to information centricity.”

Here are some other examples of things I’ve written about in this realm.

Batting around how public “commodity” cloud solutions force us to re-evaluate how, where, why and who “does” security was an interesting journey.  Ultimately, it comes down to architecture and poking at the sanctity of models hinged on an operational premise that may or may not be as relevant as it used to be.

However, I think the most poignant and yet potentially obvious answer to the “why doesn’t security scale?” question is the fact that security products, by design, don’t scale because they have not been created to allow for automation across almost every aspect of their architecture.

Automation and the interfaces (read: APIs) by which security products ought to be provisioned, orchestrated, and deployed are simply lacking in most security products.

Yes, there exist security products that are distributed, but they are still managed, provisioned and deployed manually — generally using a hub-and-spoke management model that doesn’t lend itself to automating anything that does not otherwise rely upon bubble-gum and baling-wire scripting…

Sure, we’ve had things like SNMP as a “standard interface” for “management” for a long while 😉  We’ve had common ways of describing threats and vulnerabilities.  Recently we’ve seen XML-based APIs emerge as a function of the latest generation of (mostly virtualized) firewall technologies, but most products still rely upon stand-alone GUIs, CLIs, element managers and a meat cloud of operators to push the go button (or reconfigure.)

Really annoying.
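
To make the contrast concrete, here’s a minimal sketch (in Python, against a purely hypothetical management API; the endpoint, schema and token are my inventions, not any vendor’s actual interface) of what provisioning a firewall policy could look like if products exposed one:

    import os
    import requests  # third-party: pip install requests

    # Hypothetical management-plane endpoint and schema -- illustrative only.
    API = "https://fw-manager.example.com/api/v1"
    HEADERS = {"Authorization": "Bearer " + os.environ["FW_API_TOKEN"]}

    rule = {
        "name": "allow-web-to-db",
        "src": "10.0.1.0/24",   # web tier
        "dst": "10.0.2.0/24",   # database tier
        "proto": "tcp",
        "port": 5432,
        "action": "allow",
    }

    # One call provisions the policy, instead of an operator re-keying it
    # into N element managers by hand.
    resp = requests.post(API + "/policies", json=rule, headers=HEADERS, timeout=10)
    resp.raise_for_status()
    print("provisioned policy:", resp.json()["id"])

Nothing exotic, which is rather the point: a policy expressed as data can be templated, versioned and pushed by an orchestration system; a policy expressed as GUI clicks cannot.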

Alongside the lack of standard API-based management planes, control planes are largely proprietary, and the output for correlated, event-driven telemetry at all layers of the stack is equally lacking.  Of course, the applications and security layers that run atop infrastructure are still largely discrete, thus making the problem more difficult.
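
And it’s not as if normalized output is rocket science, either. Here’s a sketch of emitting a single event in CEF (ArcSight’s Common Event Format, one of the few de facto standards we do have), with all field values invented for illustration:

    import syslog

    # One event in CEF: version|vendor|product|version|eventClassID|name|severity|extension
    # Every value below is made up for illustration.
    event = ("CEF:0|ExampleVendor|ExampleIPS|1.0|1001|Policy violation|7|"
             "src=10.0.1.5 dst=10.0.2.7 dpt=5432 act=blocked")

    syslog.syslog(syslog.LOG_WARNING, event)  # hand it to the local syslog daemon

The hard part isn’t the format; it’s getting every layer of the stack to emit something like it consistently enough to correlate.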

The good news is that virtualization in the enterprise and the emergence of cultural and operational models predicated upon automation are starting to influence product roadmaps in ways that will positively affect the problem space described above, but we’ve got a long haul ahead as we make this transition.

Security vendors are starting to realize that they must retool many of their technology roadmaps to deal with the impact of dynamism and automation.  Some (not all) are painfully discovering that simply creating a virtualized version of a physical appliance doesn’t make it a virtual security solution (or cloud security solution), in the same way that moving an application directly to cloud doesn’t necessarily make it a “cloud application.”

In the same way that one must often re-write applications (or design them specifically) for cloud, we have to do the same for security.  Arguably there are things that can and should be preserved; basic underpinnings such as firewalls, for example, don’t need to change at their core, but their “packaging” does.

I’m privy to lots of the underlying mechanics of these activities — from open source to highly-proprietary — and I’m heartened by the fact that we’re beginning to make progress.  We shouldn’t have to make a distinction between crafting and deploying security policies in physical or virtual environments.  We shouldn’t be held hostage by the separation of application logic from the underlying platforms.

In the long term, I’m optimistic we won’t have to.

/Hoff


On Security Conference Themes: Offense *Versus* Defense – Or, Can You Code?

November 22nd, 2010 7 comments

This morning’s dialog on Twitter from @wmremes and @singe reminded me of something that’s been bouncing around in my head for some time.

Wim blogged about a tweet Jeff Moss made regarding Black Hat DC in which he suggested CFP submissions should focus on offense (versus defense.)

Black Hat (and Defcon) have long focused on presentations highlighting novel, emerging attacks.  There are generally not a lot of high-profile “defensive” presentations/talks because, for the most part, they’re just not sexy; they generally involve hard work and cultural realignment, and the reality is that as hard as we try, attackers will always out-innovate and out-pace defenders.

More realistically, offense is sexy and offense sells — and it often sells defense.  That’s why vendors sponsor those shows in the first place.

Along these lines, one will notice that within our industry, the defining criterion for attack-versus-defend talks, and for those who give them, is one’s ability to write code and produce tools that demonstrate the vulnerability via exploit.  Conceptual vulnerabilities paired with non-existent exploits are generally thought of as fodder for academia.  Only when a tool that weaponizes an attack shows up do people pay attention.

Zero days rule by definition. There’s no analog on the defensive side unless you buy into marketing like “…ahead of the threat.” *cough* Defense for offense that doesn’t exist generally doesn’t get the majority of the funding 😉

So it’s no wonder that security “rockstars” in our industry are generally those who produce attack/offensive code which illustrate how a vector can be exploited.  It’s tangible.  It’s demonstrable.  It’s sexy.

On the other hand, most defenders are reconciled to using tools that others wrote — or become specialists in the integration of them — in order to parlay some advantage over the ever-increasing wares of the former.

Think of those folks who represent the security industry in terms of mindshare and get the most press.  Overwhelmingly it’s those “hax0rs” who write cool tools — tools that are more offensive in nature, even if they produce results oriented toward allowing practitioners to defend better (or at least that’s how they’re sold.)  That said, there are also some folks who *do* code and *do* create things that are defensive in nature.

I believe the answer lies in balance: we need flashy exploits (no matter how impractical/irrelevant they may be to a large portion of the population) to drive awareness.  We also need more practitioner/governance talks to give people platforms upon which they can start to architect solutions.  We need more defenders to be able to write code.

Perhaps that’s what Richard Bejtlich meant when he tweeted: “Real security is built, not bought.”  That’s an interesting statement on lots of fronts. I’m selfishly taking Richard’s statement out of context to support my point, so hopefully he’ll forgive me.

That said, I don’t write code.  More specifically, I don’t write code well.  I have hundreds of ideas of things I’d like to do but can’t bridge the gap between ideation and proof-of-concept because I can’t write code.

This is why I often “invent” scenarios I find plausible, talk about them, and then get people thinking about how we would defend against them — usually in the vacuum of either offensive or defensive tools being available, or at least realized.

Sometimes there aren’t good answers.

I hope we focus on this balance more at shows like Black Hat — I’m lucky enough to get to present my “research” there despite it being defensive in nature, but we need more defensive tools and talks to make this a reality.

/Hoff


Incomplete Thought: Compliance – The Autotune Of The Security Industry

November 20th, 2010 3 comments

I don’t know if you’ve noticed, but lately the ability to carry a tune while singing is optional.

Thanks to Cher and T-Pain, the rampant use of Auto-Tune in the music industry has enabled pretty much anyone to record a song and make it sound like they can sing (from the Auto-Tune of encyclopedias, Wikipedia):

Auto-Tune uses a phase vocoder to correct pitch in vocal and instrumental performances. It is used to disguise off-key inaccuracies and mistakes, and has allowed singers to perform perfectly tuned vocal tracks without the need of singing in tune. While its main purpose is to slightly bend sung pitches to the nearest true semitone (to the exact pitch of the nearest tone in traditional equal temperament), Auto-Tune can be used as an effect to distort the human voice when pitch is raised/lowered significantly.
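
(For the curious, the “nearest true semitone” math is trivial. A sketch in Python, assuming standard twelve-tone equal temperament with A4 = 440 Hz:)

    import math

    def autotune(freq_hz, a4=440.0):
        """Snap a frequency to the nearest equal-temperament semitone."""
        semitones = 12 * math.log2(freq_hz / a4)   # signed distance from A4
        return a4 * 2 ** (round(semitones) / 12)   # nearest "true" pitch

    print(autotune(435.0))  # a singer 5 Hz flat of A4 gets snapped to 440.0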

A similar “innovation” has happened to the security industry.  Instead of having to actually craft and execute a well-tuned security program that focuses on managing risk in harmony with the business, we’ve simply learned to hum a little, add a couple of splashy effects and let the compliance Auto-Tune do its thing.

It doesn’t matter that we’re off-key.  It doesn’t matter that we’re not in tune.  It doesn’t matter that we hide mistakes.

All that matters is that auditors can sing along, repeat the chorus and ensure that we hit the Top 40.

/Hoff


Reflections on SANS ’99 New Orleans: Where It All Started

July 25th, 2010 1 comment

A few weeks ago I saw some RT’s/@’s on Twitter referencing John Flowers and that name brought back some memories.

Today I sent a tweet to John asking him if I remembered correctly that he was at SANS in New Orleans in 1999 when he was still at Hiverworld.

He responded back confirming he was, indeed, at SANS ’99.  I remarked that this was where I first met many of today’s big names in security: Ed Skoudis, Ron Gula, Marty Roesch, Stephen Northcutt, Chris Klaus, JD Glaser, Greg Hoglund, and Bruce Schneier.

John responded back:

I couldn’t agree more.  That was an absolutely amazing time. I was on my second security startup (NodeWarrior Networks), times were booming, and this generation of the security industry as we know it was being born.

I remember many awesome things from that week:

  • Sitting in “Intrusion Detection Shadow Style” with Stephen Northcutt and Judy Novak for something like 8 hours, going cross-eyed reading tcpdump packet traces and getting every question Stephen asked wrong. Well, some of them, anyway 😉
  • Asking Ron Gula’s wife something about Dragon and her looking back at me like I was a total n00b
  • Asking Ron Gula the same question and having him confirm that I was, in fact, a complete tool
  • Staying up all night drinking, writing code in Perl and doing dangerous things on other people’s networks
  • Participating in my first CTF
  • Almost getting arrested for B&E as I tried to rig the CTF contest by attempting to steal/clone/pwn/replace the HDD in the target machine. The funniest part was that I almost pulled it off (stealing the removable drive) but electrocuted myself in the process — which is what alerted the security guard to my presence.
  • Interrupting Lance Spitzner’s talk by stringing a poster behind him that said “www.lancespitznerismyhero.com” (a domain I registered during the event.)
  • Watching Bruce Schneier scream at the book store guy because they, incredibly, did not stock “Practical Cryptography”
  • Sitting down with Ed Skoudis (who was with SAIC at the time, I believe,) looking at one another and wondering just what the hell we were going to do with our careers in security
  • Spending $14,000 (I shit you not, it was the Internet BOOM time, remember) by hitting 6 of the best restaurants in New Orleans with a party of hax0rs and working the charge department at American Express into a frenzy (not to mention actually using the line from Pretty Woman: “we’re going to spend obscene amounts of money here” in order to get in…)
  • Burning the roof of my mouth by not heeding the warnings of the waitress at Café du Monde, biting into a beignet that cauterized my mouth as I simultaneously tried to extinguish the pain with scalding-hot chicory coffee.

I came back from that week knowing with every molecule in my body that even though I’d been “doing” security for 5 years already, it was exactly what I wanted to do for the rest of my life.

I have Stephen Northcutt to thank for that.  I haven’t been to a SANS since 1999 (don’t ask me why) but I am so excited about going back in August in DC (SANS What Works In Virtualization and Cloud Computing Summit) and giving a keynote at the event.

It’s been a long time.  Too long.

/Hoff


Incomplete Thought: Why We Need Open Source Security Solutions More Than Ever…

July 17th, 2010 1 comment

I don’t have time to write a big blog post and quite frankly, I don’t need to. Not on this topic.

I do, however, feel that it’s important to bring back into consciousness how very important open source security solutions are to us — at least those of us who actually expect to make an impact in our organizations and work toward making a dent in our security problem pile.

Why do open source solutions matter so much in our approach to dealing with securing the things that matter most to us?

It comes down to things we already know but are often paralyzed to do anything about:

  1. The threat curve and the innovation of attackers outpace those of defenders by orders of magnitude (duh)
  2. Disruptive technology and innovation dramatically impact the operational, threat and risk modeling we have to deal with (duh duh)
  3. The security industry is not in the business of solving security problems that don’t have a profit motive/margin attached to them (ugh)

We can’t do much about #1 and #2 except be early adopters, be agile/dynamic, and plan for change. I’ve written about this many times and built an entire series of presentations (Security and Disruptive Innovation) that Rich Mogull and I have taken to updating over the last few years.

We can do something about #3 and we can do it by continuing to invest in the development, deployment, support, and perhaps even the eventual commercialization of open source security solutions.

To be clear, it’s not that commercialization is required for success; often it just indicates that a solution has become mainstream and valued, and that money *can* be made.

When you look at why most open source project creators bring a solution to market, it’s because the solution generally is not commercially available, it solves an immediate need, and it’s contributed to by a community. These are all fantastic reasons to use, support, extend and contribute back to the open source movement — even if you don’t code, you can help by improving the roadmaps of these projects by making suggestions and promoting their use.

Open source security solutions deliver, and they deliver quickly, because their roadmaps and feature integration occur in an agile, meritocratic and vetted manner that oftentimes lacks polish but delivers immediate value — especially given their cost.

We’re stuck in a loop (or a Hamster Sine Wave of Pain) because the problems we really need to solve are not developed by the companies that are in the best position to develop them in a timely manner. Why? Because when these emerging solutions are evaluated, they live or die by one thing: TAM (total addressable market.)

If there’s no big $$$ attached and someone can’t make the case within an organization that this is a strategic (read: revenue generating) big bet, the big companies wait for a small innovative startup to develop technology (or an open source tool,) see if it lives long enough for the market demand to drive revenues and then buy them…or sometimes develop a competitive solution.

Classic crossing-the-chasm (Geoffrey Moore) stuff.

The problem here is that this cycle is horribly broken and we see perfectly awesome solutions die on the vine. Sometimes they come back to life years later, when the pain gets big enough (and there’s money to be made), or when the “market” of products and companies consolidates, commoditizes, and the capability ultimately becomes a feature.

I’ve got hundreds of examples I can give of this phenomenon — and I bet you do, too.

That’s not to say we don’t have open-source-derived success stories (Snort, Metasploit, ClamAV, Nessus, OSSEC, etc.) but we just don’t have enough of them. Further, there are disruptions such as virtualization and cloud computing that fundamentally change the game, and that we can harness in conjunction with open source solutions to accelerate the delivery and velocity of solutions, because of how impactful the platform shift can be.

I’ve also got dozens of awesome ideas that could/would fundamentally solve many attendant issues we have in security — but the timing, economics, culture, politics and readiness/appetite for adoption aren’t there commercially…but they can be via open source.

I’m going to start a series which identifies and highlights solutions that are either available as kernel-nugget technology or past-life approaches that I think can and should be taken on as open source projects that could fundamentally help our cause as a community.

Maybe someone can code/create open source solutions out of them that can help us all.  We should encourage this behavior.

We need it more than ever now.

/Hoff


The Security Hamster Sine Wave Of Pain: Public Cloud & The Return To Host-Based Protection…

July 7th, 2010 7 comments

This is a revisitation of a blog I wrote last year: Incomplete Thought: Cloud Security IS Host-Based…At The Moment

I use my “Security Hamster Sine Wave of Pain” to illustrate the cyclical nature of security investment and deployment models over time and how disruptive innovation and technology impacts the flip-flop across the horizon of choice.

To wit: most mass-market Public Cloud providers such as Amazon Web Services rely on highly-abstracted and limited exposure of networking capabilities.  This means that most traditional network-based security solutions are impractical or non-deployable in these environments.

Network-based virtual appliances, which generally expect to be deployed in-line with the assets they protect, are at a disadvantage given their topological dependency.

So what we see are security solution providers simply re-marketing their network-based solutions as host-based solutions instead…or confusing things with Barney announcements.

Take a press release today from Sourcefire:

Snort and Sourcefire Vulnerability Research Team(TM) (VRT) rules are now available through the Amazon Elastic Compute Cloud (Amazon EC2) in the form of an Amazon Machine Image (AMI), enabling customers to proactively monitor network activity for malicious behavior and provide automated responses.

Leveraging Snort installed on the AMI, customers of Amazon Web Services can further secure their most critical cloud-based applications with Sourcefire’s leading protection. Snort and Sourcefire(R) VRT rules are also listed in the Amazon Web Services Solution Partner Directory, so that users can easily ensure that their AMI includes the latest updates.

As far as I can tell, this means you can install a “virtual appliance” of Snort/Sourcefire as a standalone AMI, but there’s no real description of how one might actually implement it in an environment that isn’t topologically friendly to this sort of network-based implementation constraint.*

Since you can’t easily “steer traffic” through an IPS in the AWS model, and can’t leverage promiscuous mode or taps, what does this packaging actually mean?  Also, if one has a few hundred AMIs which contain applications spread out across multiple availability zones/regions, how does a solution like this scale (from both a performance and a management perspective)?

I’ve spoken/written about this many times:

Where Are the Network Virtual Appliances? Hobbled By the Virtual Network, That’s Where… and

Dear Public Cloud Providers: Please Make Your Networking Capabilities Suck Less. Kthxbye

Ultimately, expect that Public Cloud will force a return to host-based IDS/IPS (HIDS/HIPS) deployments — the return to agent-based security models.  This poses just as many operational challenges as those I allude to above.  We *must* have better ways of tying together network and host-based security solutions in these Public Cloud environments that make sense from an operational, cost, and security perspective.

/Hoff

* I “spoke” with Marty Roesch on the Twitter and he filled in the gaps associated with how this version of Snort works – there’s a host-based packet capture element with a “network” redirect to a stand-alone AMI:

@Beaker AWS->Snort implementation is IDS-only at the moment, uses software packet tap off customer app instance, not topology-dependent

and…

they install our soft-tap on their AMI and send the traffic to our AMI for inspection/detection/reporting.
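
Sourcefire hasn’t published the mechanics of that soft-tap, so purely to illustrate the pattern (mirror packets off the protected instance and ship copies to a remote sensor for inspection), here’s a minimal sketch using Python and scapy; the GRE encapsulation and sensor address are my assumptions, not their implementation:

    # Run as root on the monitored instance; requires scapy (pip install scapy).
    from scapy.all import sniff, send, IP, GRE, Raw

    SENSOR = "10.0.0.99"  # hypothetical address of the Snort inspection AMI

    def mirror(pkt):
        # Wrap a copy of the captured packet in GRE and send it to the sensor.
        # The original traffic is untouched: detection only, no inline blocking.
        send(IP(dst=SENSOR) / GRE() / Raw(bytes(pkt)), verbose=False)

    # Exclude the mirror traffic itself from capture to avoid a feedback loop.
    sniff(filter="not host " + SENSOR, prn=mirror, store=False)

Note that every mirrored packet also doubles the instance’s egress traffic.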

It will be interesting to see how performance nets out using this redirect model.


The Four Horsemen Of the Virtualization (and Cloud) Security Apocalypse…

April 25th, 2010 No comments

I just stumbled upon this YouTube video (link here, embedded below): an interview I did right after my talk at Black Hat 2008 titled “The 4 Horsemen of the Virtualization Security Apocalypse” (PDF). [There’s a better narrative to the PDF that explains the 4 Horsemen here.]

I found it interesting because while it was rather “new” and interesting back then, if you s/virtualization/cloud/, especially from the perspective of heavily virtualized or cloud computing environments, it’s even more relevant today!  Virtualization, and the abstraction it brings to network architecture, design and security, makes for interesting challenges.  Not much has changed in two years, sadly.

We need better networking, security and governance capabilities! 😉

Same as it ever was.

/Hoff


Patching the (Hypervisor) Platform: How Do You Manage Risk?

April 12th, 2010 7 comments

Hi. Me again.

In 2008 I wrote a blog titled “Patching the Cloud,” which I followed up with material examples in 2009 in another titled “Redux: Patching the Cloud.”

These blogs focused mainly on virtualization-powered IaaS/PaaS offerings and whilst they targeted “Cloud Computing,” they applied equally to the heavily virtualized enterprise.  To this point, I wrote another in 2008 titled “On Patch Tuesdays For Virtualization Platforms.”

The operational impacts of managing change control, vulnerability management and threat mitigation have always intrigued me, especially at scale.

I was reminded this morning of the importance of the question posed above as VMware released a series of security advisories detailing ten vulnerabilities across many products, some of which are remotely exploitable. While security vulnerabilities in hypervisors are not new, it’s unclear to me how many heavily-virtualized enterprises or Cloud providers actually deal with what it means to patch this critical layer of infrastructure.

Once virtualized, we expect/assume that VMs and the guest OSes within them should operate with functional equivalence when compared to non-virtualized instances. We have, however, seen that this is not the case. It’s rare, but it happens that OSes and applications, once virtualized, suffer from issues that cause faults in the underlying virtualization platform itself.

So here’s the $64,000 question – feel free to answer anonymously:

While virtualization is meant to effectively isolate the hardware from the resources atop it, the VMM/Hypervisor itself maintains a delicate position arbitrating this abstraction.  When the VMM/Hypervisor needs patching, how do you regression test the impact across all your VM images (across test/dev, production, etc.)?  More importantly, how are you assessing/measuring compound risk across shared/multi-tenant environments with respect to patching and its impact?
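
I don’t have a turnkey answer, but the mechanical half of the question is at least automatable in principle. Here’s a sketch of a regression sweep against a canary host patched ahead of the fleet; the inventory and test hooks are hypothetical stubs, since in practice they’d wrap whatever your virtualization platform actually exposes:

    PATCHED_HOST = "hv-canary-01"  # hypothetical: one host patched ahead of the fleet

    def regression_sweep(list_images, boot_on, smoke_test):
        """Boot every VM image on the patched host and run app-level checks.

        list_images, boot_on and smoke_test are stand-ins for your platform's
        real inventory, provisioning and test tooling.
        """
        failures = []
        for image in list_images():          # test/dev and production images alike
            vm = boot_on(PATCHED_HOST, image)
            try:
                if not smoke_test(vm):       # "it boots" is not "it works"
                    failures.append(image)
            finally:
                vm.destroy()
        return failures  # gate the fleet-wide rollout on an empty list

The compound-risk half, measuring what a patch window means for the tenants sharing the host whom you don’t control, is the part no sweep like this can touch.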

/Hoff

P.S. It occurs to me, after writing last night’s blog on “high assurance (read: TPM-enabled)” virtualization/cloud environments with respect to change control, that the reference images for trusted launch environments would be impacted by patches like this. How are we going to scale this from a management perspective?


Chattin’ With the Boss: “Securing the Network” (Waiting For the Jet Pack)

March 7th, 2010 8 comments

At the RSA security conference last week I spent some time with Tom Gillis on a live uStream video titled “Securing the Network.”

Tom happens to be (as he points out during a rather funny interlude) my boss’ boss — he’s the VP and GM of Cisco‘s STBU (Security Technology Business Unit.)

It’s an interesting discussion (albeit with some self-serving Cisco tidbits) surrounding how collaboration, cloud, mobility, virtualization, video, the consumerization of IT and, um, jet packs are changing the network and how we secure it.

Direct link here.



RSA Interview (c/o Tripwire) On the State Of Information Security In Virtualized/Cloud Environments.

March 7th, 2010 1 comment

David Sparks (c/o Tripwire) interviewed me on the state of Information Security in virtualized/cloud environments.  It’s another reminder about Information Centricity.

Direct link here.

