Archive

Archive for the ‘Information Survivability’ Category

The Four Horsemen Of the Virtualization (and Cloud) Security Apocalypse…

April 25th, 2010 No comments

I just stumbled upon this YouTube video (link here, embedded below) of an interview I did right after my Black Hat 2008 talk, “The 4 Horsemen of the Virtualization Security Apocalypse” (PDF). [There’s a better narrative explaining the 4 Horsemen here.]

I found it interesting because while the material was rather “new” back then, if you s/virtualization/cloud it (especially from the perspective of heavily virtualized or cloud computing environments), it’s even more relevant today!  Virtualization, and the abstraction it brings to network architecture, design and security, makes for interesting challenges.  Not much has changed in two years, sadly.

We need better networking, security and governance capabilities! 😉

Same as it ever was.

/Hoff


Patching the (Hypervisor) Platform: How Do You Manage Risk?

April 12th, 2010 7 comments

Hi. Me again.

In 2008 I wrote a blog titled “Patching the Cloud,” which I followed up with material examples in 2009 in another titled “Redux: Patching the Cloud.”

These blogs focused mainly on virtualization-powered IaaS/PaaS offerings and whilst they targeted “Cloud Computing,” they applied equally to the heavily virtualized enterprise.  To that point, I wrote another in 2008 titled “On Patch Tuesdays For Virtualization Platforms.”

The operational impacts of managing change control, vulnerability management and threat mitigation have always intrigued me, especially at scale.

I was reminded this morning of the importance of the question posed above as VMware released a series of security advisories detailing ten vulnerabilities across many products, some of which are remotely exploitable. While security vulnerabilities in hypervisors are not new, it’s unclear to me how many heavily-virtualized enterprises or Cloud providers actually deal with what it means to patch this critical layer of infrastructure.

Once virtualized, we expect/assume that VMs and the guest OSes within them will operate with functional equivalence when compared to non-virtualized instances. We have, however, seen that this is not the case. It’s rare, but it happens that OSes and applications, once virtualized, suffer from issues that cause faults in the underlying virtualization platform itself.

So here’s the $64,000 question – feel free to answer anonymously:

While virtualization is meant to effectively isolate the hardware from the resources atop it, the VMM/Hypervisor itself maintains a delicate position arbitrating this abstraction.  When the VMM/Hypervisor needs patching, how do you regression test the impact across all your VM images (across test/dev, production, etc.)?  More importantly, how are you assessing/measuring compound risk across shared/multi-tenant environments with respect to patching and its impact?
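For the first half of that question, one place to start is simply enumerating the blast radius of a hypervisor patch. Here is a minimal, hypothetical sketch (the inventory data, the Guest structure and the risk weights are all illustrative assumptions, not anyone’s actual tooling) of building a regression-test matrix of guest images affected by a VMM patch, plus a crude compound-risk score for shared, multi-tenant hosts:

```python
# Hypothetical sketch: which golden images need re-validation after a VMM patch,
# and a naive compound-risk score for multi-tenant hosts. All data structures
# and weights are illustrative assumptions.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Guest:
    image: str          # golden image / template the VM was built from
    tenant: str         # owning tenant or business unit
    criticality: int    # 1 (low) .. 5 (high)

# host -> (hypervisor build, guests) -- assumed to come from your inventory system
inventory = {
    "hv-prod-01": ("vmm-3.5.0", [Guest("win2k3-web", "tenant-a", 4),
                                 Guest("rhel5-db", "tenant-b", 5)]),
    "hv-test-01": ("vmm-3.5.0", [Guest("win2k3-web", "dev", 1)]),
}

AFFECTED_BUILDS = {"vmm-3.5.0"}  # builds named in the security advisory

def regression_matrix(inv):
    """Map each affected golden image to the hosts it must be re-tested on."""
    matrix = defaultdict(set)
    for host, (build, guests) in inv.items():
        if build in AFFECTED_BUILDS:
            for g in guests:
                matrix[g.image].add(host)
    return matrix

def compound_risk(inv):
    """Naive per-host score: sum of guest criticality times number of tenants."""
    return {host: sum(g.criticality for g in guests) * len({g.tenant for g in guests})
            for host, (build, guests) in inv.items() if build in AFFECTED_BUILDS}

if __name__ == "__main__":
    print(dict(regression_matrix(inventory)))   # image -> hosts needing re-test
    print(compound_risk(inventory))             # host -> crude compound risk
```

Even a toy model like this makes the scale problem obvious: every shared host multiplies the image/tenant combinations that need re-validation before the patch window closes, and none of it captures the second question of how tenants sharing that host inherit each other’s exposure while it sits unpatched.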

/Hoff

P.S. It occurs to me that after I wrote the blog last night on ‘high assurance (read: TPM-enabled)’ virtualization/cloud environments with respect to change control, the reference images for trusted launch environments would be impacted by patches like this. How are we going to scale this from a management perspective?
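To make that P.S. concrete: if launch decisions key off known-good measurements of reference images, every patch forces a re-measure-and-republish cycle. A minimal sketch, assuming a hypothetical file layout and whitelist format (neither reflects any particular product):

```python
# Hypothetical sketch: re-measure patched reference images and publish an
# updated whitelist of known-good hashes for a trusted-launch policy engine.
# Paths and the JSON whitelist format are illustrative assumptions.
import hashlib
import json
import pathlib

IMAGE_DIR = pathlib.Path("/srv/reference-images")   # assumed image repository
WHITELIST = pathlib.Path("/srv/trust/whitelist.json")

def measure(path, chunk_size=1 << 20):
    """SHA-256 the image in chunks so large images don't blow out memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            digest.update(block)
    return digest.hexdigest()

def republish():
    """Recompute measurements for every image and overwrite the whitelist."""
    whitelist = {img.name: measure(img) for img in sorted(IMAGE_DIR.glob("*.img"))}
    WHITELIST.write_text(json.dumps(whitelist, indent=2))
    return whitelist

if __name__ == "__main__":
    print(republish())
```

Multiply that by every image variant, every hypervisor build and every tenant-specific golden image, and the management-scale question rather answers itself.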


Chattin’ With the Boss: “Securing the Network” (Waiting For the Jet Pack)

March 7th, 2010 8 comments

At the RSA security conference last week I spent some time with Tom Gillis on a live uStream video titled “Securing the Network.”

Tom happens to be (as he points out during a rather funny interlude) my boss’ boss — he’s the VP and GM of Cisco’s STBU (Security Technology Business Unit).

It’s an interesting discussion (albeit with some self-serving Cisco tidbits) surrounding how collaboration, cloud, mobility, virtualization, video, the consumerization of IT and, um, jet packs are changing the network and how we secure it.

Direct link here.

Embedded below:


RSA Interview (c/o Tripwire) On the State Of Information Security In Virtualized/Cloud Environments.

March 7th, 2010 1 comment

David Sparks (c/o Tripwire) interviewed me on the state of Information Security in virtualized/cloud environments.  It’s another reminder about Information Centricity.

Direct Link here.

Embedded below:


Slides from My Cloud Security Alliance Keynote: The Cloud Magic 8 Ball (Future Of Cloud)

March 7th, 2010 No comments

Here are the slides from my Cloud Security Alliance (CSA) keynote from the Cloud Security Summit at the 2010 RSA Security Conference.

The punchline is as follows:

All this iteration and debate on the future of the “back-end” of Cloud Computing — the provider side of the equation — is ultimately less interesting than how the applications and content served up will be consumed.

Cloud Computing provides for the mass re-centralization of applications and data in mega-datacenters while, simultaneously, incredibly powerful mobile computing platforms provide for the mass re-distribution of (in many cases the same) applications and data.  We’re fixated on the security of the former but ignoring that of the latter — at our peril.

People worry about how Cloud Computing puts their applications and data in other people’s hands. The reality is that mobile computing — and the clouds that are here already and will form because of them — already put, quite literally, those applications and data in other people’s hands.

If we want to “secure” the things that matter most, we must focus BACK on information centricity and on building survivable systems if we are to be successful.  I’ve written about these topics many times, but this post from 2009 is quite apropos: “The Quandary Of the Cloud: Centralized Compute But Distributed Data.”  You can find other posts on Information Centricity here.

Slideshare direct link here (embedded below.)


Comments on the PwC/TSB Debate: The cloud/thin computing will fundamentally change the nature of cyber security…

February 16th, 2010 2 comments

I saw a very interesting post on LinkedIn with the title PwC/TSB Debate: The cloud/thin computing will fundamentally change the nature of cyber security…

PricewaterhouseCoopers are working with the Technology Strategy Board (part of BIS) on a high profile research project which aims to identify future technology and cyber security trends. These statements are forward looking and are intended to purely start a discussion around emerging/possible future trends. This is a great chance to be involved in an agenda setting piece of research. The findings will be released in the Spring at Infosec. We invite you to offer your thoughts…

The cloud/thin computing will fundamentally change the nature of cyber security…

The nature of cyber security threats will fundamentally change as the trend towards thin computing grows. Security updates can be managed instantly by the solution provider so every user has the latest security solution, the data leakage threat is reduced as data is stored centrally, systems can be scanned more efficiently and if Botnets capture end-point computers, the processing power captured is minimal. Furthermore, access to critical data can be centrally managed and as more email is centralised, malware can be identified and removed more easily. The key challenge will become identity management and ensuring users can only access their relevant files. The threat moves from the end-point to the centre.

What are your thoughts?

My response is simple.

Cloud Computing, or “Thin Computing” as described above, doesn’t change the “nature” of (gag) “cyber security”; it simply changes its efficiency, investment focus, capital model and modality. As to the statement regarding threats moving “…from the end-point to the centre,” the attack surface really becomes amorphous and, given the potential monoculture introduced by the virtualization layers underpinning these operations, perhaps expands.

Certainly the benefits described in the introduction above do mean changes to who, where and when risk mitigation might be applied, but those activities are, in most cases, still the same as in non-Cloud and “thick” computing.  That’s not a “fundamental change” but rather an adjustment to a platform shift, just like when we went from mainframe to client/server.  We are still dealing with the remnant security issues (identity management, AAA, PKI, encryption, etc.) from prior computing inflection points that we’ve yet to fix.  Cloud is a great forcing function to help nibble away at them.

But if you substitute “client/server” (in its evolution from the “mainframe era”) for “cloud/thin computing” above, it all sounds quite familiar.

As I alluded to, there are some downsides to this re-centralization, but it is important to note that I do believe that what PaaS/SaaS offerings and VDI/thin/Cloud computing offer makes us focus on protecting our information and building more survivable systems.

However, there’s a notable bifurcation occurring. Whilst the example above paints a picture of mass re-centralization, incredibly powerful mobile platforms are evolving.  These platforms (such as the iPhone) employ a hybrid approach featuring both native/local on-device applications and storage of data combined with the potential of thin client capability and interaction with distributed Cloud computing services.*

These hyper-mobile and incredibly powerful platforms — and the requirement to secure them in this mixed-access environment — mean that the efficiency gains on the one hand are compromised by the need to once again secure diametrically-opposed computing experiences on the other.  It’s a “squeezing the balloon” problem.

The same exact thing is occurring in the Private versus Public Cloud Computing models.

/Hoff

* P.S. Bernard Golden also commented via Twitter regarding the emergence of sensor nets, which also have a very interesting set of implications for security as it relates to both the Cloud and mobile computing examples above.


How Many Open Letters To Howard Schmidt Do We Need? Just One.

December 23rd, 2009 4 comments

My friend Adam at The New School Information Security Blog wrote An Open Letter to the New Cyber-Security Czar:

Congratulations on the new job! Even as a cynic, I’m surprised at just how fast the knives have come out, declaring that you’ll get nothing done. I suppose that low expectations are easy to exceed. We both know you didn’t take this job because you expected it to be easy or fun, but you know better than most how hard it will be to make a difference without a budget or authority. You know about many of the issues you’ll need to work through, and I’d like to suggest a few less traditional things which you can accomplish that will help transform cyber-security.

Adam’s thoughtful post was chock full of interesting points and guidance associated with what he and others think Howard Schmidt ought to consider in his “new” role as Cyber-Security Coordinator.

My suggestion was a little simpler in nature:

Dear Howard:

I’ll keep it short.

Let me know how we can help you be successful; it’s a two-way street. No preaching here.

Regards,

/Hoff

In addition, here’s my simple open response to all those who have suggestions for Howard — it’s not an attempt to be self-righteous, critical of others or antagonistic — but I, like Adam, am amazed at how cynical and defeatist people in our community have become.

If Howard called me tomorrow and asked me to quit my job and make sacrifices in order to join up and help achieve the lofty tasks before him for the betterment of all, I would.

Guaranteed.  Would you?

I’m glad you stepped up, Howard. Thank you.

/Hoff

DDoS – A Moose On Cloud’s Table Or A Pea Under The Mattress?

September 7th, 2009 7 comments

Readers of my blog will no doubt be familiar with Roland Dobbins.  He’s commented on lots of posts here and whilst we don’t always see eye-to-eye, I really respect both his intellect and his style.

So it’s fair to say that Roland is not a shy lad.  Formerly at Cisco and now at Arbor, he’s made his position (and likely his living) on dealing with a rather unpleasant issue in the highly distributed and networked InterTubes: Distributed Denial of Service (DDoS) attacks.

A recent article in ITWire titled “DDoS, the biggest threat to Cloud Computing” sums up Roland’s focus:

“According to Roland Dobbins, solutions architect for network security specialist Arbor Networks, distributed denial of service attacks are one of the most under-rated and ill-guarded against security threats to corporate IT, and in particular the biggest threat facing cloud computing.”

DDOS, Dobbins claims, is largely ignored in many discussions around network and cloud computing security. “Most discussions around cloud security are centred around privacy, confidentially, the separation of data from the application logic, but the security elephant in the room that very few people seem to want to talk about is DDOS. This is the number one security threat facing the cloud model,” he told last week’s Ausnog conference in Sydney.

“In cloud computing where infrastructure is shared by potentially millions of users, DDOS attacks have the potential to have much greater impact than against single tenanted architectures,” Dobbins argues. Yet, he says, “The cloud providers emerging as leaders don’t tend to talk much about their resiliency to DDOS attacks.”

Depending upon where you stand, especially if we’re talking about Public Clouds — and large Public Cloud providers such as Google, Amazon, Microsoft, etc. — you might cock your head to one side, raise an eyebrow and focus on the sentence fragment “…and in particular the biggest threat facing cloud computing.”  One of the reasons DDoS is under-appreciated is that, in relative frequency — and in the stable of solutions and skill sets available to deal with it — DDoS is a long-tail event.

With unplanned outages afflicting almost all major Cloud providers today, the moose on the table seems to be good ol’ internal operational issues at the moment…that’s not to say DDoS won’t become a bigger problem as the models for networked Cloud resources change, but as the model changes, so will the defensive options in the stable.

With the decentralization of data but the mass centralization of data centers featured by these large Cloud providers, one might see how this statement could strike fear into the hearts of potential Cloud consumers everywhere, and Roland is doing his best to serve us a warning — a Public (denial of) service announcement.

Sadly, at this point, however, I’m not convinced that DDoS is “the biggest threat facing Cloud Computing,” and whilst providers may not “…talk much about their resiliency to DDoS attacks,” some of that may likely be due to the fact that they don’t talk much about security at all.  It may also be due to the fact that, in many cases, what one can do to respond to these attacks is directly proportional to the size of one’s wallet.

Large network and service providers have been grappling with DDoS for years; so have large enterprises.  Folks like Roland have been on the front lines.

Cloud will certainly amplify the issues of DDoS because of how resources — even when distributed and resiliently load balanced in elastic and “perceptively infinitely scalable” ways — are ultimately organized, offered and consumed.  This is a valid point.

But if you look at the heart of most criminal elements exploiting the Internet today (and what will become Cloud), you’ll find that the great majority want — no, *need* — victims to be available.  If they’re not, there’s no exploiting them.  DDoS is blunt force trauma — big, messy, bloody blows that everybody notices.  That’s simply not very good for business.

At the end of the day, I think DDoS is important to think about.  I think variations of DDoS are, too.

I think that most service providers are thinking about it and investing in technology from companies such as Cisco and Arbor to deal with it, but as Roland points out, most enterprises are not — and if Cloud has its way, they shouldn’t have to:

Paradoxically, although Dobbins sees DDOS as the greatest threat to cloud computing, he also sees it as the potential solution for organisations grappling with the complexities of securing the network infrastructure.

“One answer is to get rid of all IT systems and hand them over to an organisation that specialises in these things. If the cloud providers are following best practice and have the visibility to enable them to exert control over their networks it is possible for organisation to outsource everything to them.”

For those organisations that do run their own data centres, he suggests they can avail themselves of ‘clean pipe’ services which protect against DDOS attacks. According to Nick Race, head of Arbor Networks Australia, Telstra, Optus and Nextgen Networks all offer such services.

So what about you?  Moose on the table or pea under the mattress?

/Hoff

Cloud Security: Waiting For Godot & His Silver Bullet

July 15th, 2009 No comments

It’s that time again.  I am compelled, after witnessing certain behaviors, to play anthropologist and softly whisper my observations in your ear.

You may be familiar with Beckett’s “Waiting For Godot”*:

Waiting for Godot follows two days in the lives of a pair of men who divert themselves while they wait expectantly and unsuccessfully for someone named Godot to arrive. They claim him as an acquaintance but in fact hardly know him, admitting that they would not recognise him were they to see him. To occupy themselves, they eat, sleep, converse, argue, sing, play games, exercise, swap hats, and contemplate suicide — anything “to hold the terrible silence at bay”

Referencing my prior post about the state of Cloud security, I’m reminded of the fact that as a community of providers and consumers, we continue to wait for the security equivalent of Godot to arrive and solve all of our attendant Cloud security challenges with the offer of some mythical silver bullet.  We wait and wait for our security Godot as I mix metaphors and butcher Beckett’s opus to pass the time.

Here’s a classic illustration of hoping our way to Cloud security, from a ComputerWeekly post titled “Cryptography breakthrough paves way to secure cloud services”:

A research student who had a summer job at IBM, has cracked a cryptography problem that has baffled experts for over 30 years. The breakthrough may pave the way to secure cloud computing services.

This sounds fantastic, and much has been written about this “homomorphic encryption,” with many people proclaiming that encryption will “solve our Cloud security problems.”

It’s a very interesting concept, but as to paving the “…path to secure cloud computing,” the reality is that it won’t.  At least not in isolation and not without some serious scale in ancillary support mechanisms including non-trivial issues like federated identity.
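For those wondering what “computing on encrypted data” even means, here is a toy illustration of the basic property everyone is excited about. This is not Gentry’s scheme and is in no way secure; textbook RSA just happens to be multiplicatively homomorphic, which is enough to show the idea:

```python
# Toy illustration only: textbook RSA is multiplicatively homomorphic, i.e.
# Enc(a) * Enc(b) mod n decrypts to a * b. This is NOT Gentry's fully
# homomorphic scheme and is NOT secure; it just shows "compute on ciphertexts."
n = 3233          # tiny demo modulus (p = 61, q = 53)
e, d = 17, 2753   # matching public / private exponents

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

a, b = 7, 6
c_product = (enc(a) * enc(b)) % n    # multiply the *ciphertexts* only
assert dec(c_product) == a * b       # ...and the plaintext product pops out
print(dec(c_product))                # 42
```

Fully homomorphic encryption extends this trick to arbitrary computation over ciphertexts, which is exactly why the gap between “mathematically possible” and “practically deployable” matters so much.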

Bruce Schneier wades in with his assessment:

Unfortunately — you knew that was coming, right? — Gentry’s scheme is completely impractical…Despite this, IBM’s PR machine has been in overdrive about the discovery. Its press release makes it sound like this new homomorphic scheme is going to rewrite the business of computing: not just cloud computing, but “enabling filters to identify spam, even in encrypted email, or protection information contained in electronic medical records.” Maybe someday, but not in my lifetime.

The reality is that in addition to utilizing encryption — both existing and new approaches — we will still need all the usual suspects, because fundamentally we’re still in a cycle of constructing insecure code in infostructure sitting atop infrastructure and metastructure that have their own fair share of growing up to do.

As a security architect, engineer, or manager, you need to continue to invest in understanding how what you have does or does not work within the context of Cloud.

You will likely find that you will need to continue to invest in threat and trust model analysis, risk management, vulnerability assessment, (id)entity management, and compensating controls implemented as hardware and software technology solutions such as firewalls, IDP, DLP and policy instantiation, as well as a host of modified and new approaches to dealing with Cloud-specific implementation challenges, especially those based on virtualization and massive scale with multi-tenancy.

These problems don’t solve themselves and we are simply not changing our behavior.  We wait and wait for our Godot.

So here’s the obligatory grumpy statement of the obvious as providers of solutions and services churn to deliver more capable offerings to put in your hands:

There is no silver bullet, just a lot of silver buckshot.  Use it all.  You’re going to have to deal with the cards we are dealt for the foreseeable future whilst we retool our approach in the longer term and technology equalizes some of our shortfalls.

Godot is not coming and you likely wouldn’t recognize him if he showed up anyway because he’d be dressed in homomorphic invisible hotpants…

Get on with it.  Treat security as the enterprise architecture element it is and use Cloud as the excuse to make things better by working on the things that matter.

If Godot does happen to show up, tell him I want my weed whacker back that he borrowed last summer.

/Hoff

* Wikipedia

Hey, Uh, Someone Just Powered Off Our Firewall Virtual Appliance…

June 11th, 2009 11 comments

I’ve covered this before in more complex terms, but I thought I’d reintroduce the topic due to a very relevant discussion I just had recently (*cough cough*).

So here’s an interesting scenario in virtualized and/or Cloud environments that make use of virtual appliances to provide security capabilities*:

Since virtual appliances (VAs) are just virtual machines (VMs), what happens when a SysAdmin, accidentally or maliciously, spins down or moves the one that happens to be your shiny new firewall protecting the production VMs behind it?  Brings new meaning to the phrase “failing closed.”

Without getting into the vagaries of vendor specific mobility-enabled/enabling technologies, one of the issues with VMs/VAs is that there’s not really a good way of designating one as being “more important” or functionally differentiated such as “security” or “critical application” that would otherwise ensure a higher priority for service availability (read: don’t spin this down unless…) or provide a topological dependency hierarchy in virtualized network constructs.
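Absent a platform-native way to express that hierarchy, about the best one can do today is bolt a watchdog onto the management plane. A purely hypothetical sketch follows; the get_power_state lookup and the tag convention are assumptions for illustration, not any particular vendor’s API:

```python
# Hypothetical watchdog sketch: poll the virtualization management plane and
# raise an alarm if any VM tagged as a security appliance is not powered on.
# The inventory lookup and the tagging convention are illustrative assumptions.
import time

SECURITY_APPLIANCES = {"fw-edge-01", "fw-edge-02", "ips-core-01"}  # tagged VAs
POLL_SECONDS = 30

def get_power_state(vm_name):
    """Stub for a management-API lookup; wire this to your platform of choice."""
    raise NotImplementedError("replace with your virtualization manager's API")

def alert(vm_name, state):
    print(f"ALERT: security appliance {vm_name} is {state}, expected poweredOn")

def watchdog():
    """Loop forever, checking that every designated security VA is running."""
    while True:
        for vm in sorted(SECURITY_APPLIANCES):
            state = get_power_state(vm)
            if state != "poweredOn":
                alert(vm, state)
        time.sleep(POLL_SECONDS)
```

Of course, a watchdog only tells you the firewall went away after the fact; it does nothing to stop the SysAdmin (or an attacker holding the SysAdmin’s credentials) from powering it off in the first place, which is rather the point.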

In physical environments, server administrators are segregated from access to network and security appliances; in virtual environments, they are not. In Cloud environments (especially public, multi-tenant ones), where we are often reliant solely upon virtual security capabilities because we have no option for physical alternatives, this is an interesting corner case.

We’ve talked a lot about visibility, audit and policy management in virtual environments and this is a poignant example.

/Hoff

*Despite the silly notion, suggested by the Google dudes, that I equate virtualization and Cloud as one and the same, I don’t.