Archive

Archive for the ‘Virtualization Security’ Category

Patching the (Hypervisor) Platform: How Do You Manage Risk?

April 12th, 2010 7 comments

Hi. Me again.

In 2008 I wrote a blog titled “Patching the Cloud,” which I followed up with material examples in 2009 in another titled “Redux: Patching the Cloud.”

These blogs focused mainly on virtualization-powered IaaS/PaaS offerings and, whilst they targeted “Cloud Computing,” they applied equally to the heavily virtualized enterprise. To that point, I wrote another in 2008 titled “On Patch Tuesdays For Virtualization Platforms.”

The operational impacts of managing change control, vulnerability management and threat mitigation have always intrigued me, especially at scale.

I was reminded this morning of the importance of the question posed above as VMware released a series of security advisories detailing ten vulnerabilities across many products, some of which are remotely exploitable. While security vulnerabilities in hypervisors are not new, it’s unclear to me how many heavily-virtualized enterprises or Cloud providers actually deal with what it means to patch this critical layer of infrastructure.

Once virtualized, we expect/assume that VMs and the guest OSes within them should operate with functional equivalence when compared to non-virtualized instances. We have, however, seen that this is not the case. It’s rare, but it happens that OSes and applications, once virtualized, suffer from issues that cause faults in the underlying virtualization platform itself.

So here’s the $64,000 question – feel free to answer anonymously:

While virtualization is meant to effectively isolate the hardware from the resources atop it, the VMM/Hypervisor itself maintains a delicate position arbitrating this abstraction.  When the VMM/Hypervisor needs patching, how do you regression test the impact across all your VM images (across test/dev, production, etc.)?  More importantly, how are you assessing/measuring compound risk across shared/multi-tenant environments with respect to patching and its impact?
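
To make the compound-risk half of that question concrete, here is a minimal sketch of the kind of back-of-napkin analysis I mean. The inventory, tenant names and scoring are entirely made up for illustration; this is not anyone’s actual tooling or methodology:

```python
# Hypothetical sketch: sizing the blast radius of a hypervisor patch in a
# multi-tenant environment. Inventory, tenant names and the scoring formula
# are invented for illustration -- not any vendor's API or methodology.

from collections import defaultdict

# host -> list of (vm_name, tenant, criticality on a 1-5 scale)
inventory = {
    "esx-host-01": [("web-01", "tenant-a", 3), ("db-01", "tenant-a", 5),
                    ("web-07", "tenant-b", 2)],
    "esx-host-02": [("app-03", "tenant-c", 4), ("lb-01", "tenant-b", 5)],
}

def patch_impact(inventory):
    """Summarize which tenants share each host that needs the hypervisor patch."""
    report = {}
    for host, vms in inventory.items():
        tenants = defaultdict(list)
        for vm, tenant, crit in vms:
            tenants[tenant].append((vm, crit))
        # naive compound-risk score: tenant count * highest criticality on the host
        score = len(tenants) * max(crit for _, _, crit in vms)
        report[host] = {"tenants": dict(tenants), "risk_score": score}
    return report

for host, info in patch_impact(inventory).items():
    print(host, "| tenants:", sorted(info["tenants"]), "| risk:", info["risk_score"])
```

Even this toy version makes the point: the moment a host needs a hypervisor patch, the blast radius is measured in tenants and their most critical workloads, not in a single server.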

/Hoff

P.S. It occurs to me that after I wrote the blog last night on ‘high assurance (read: TPM-enabled)’ virtualization/cloud environments with respect to change control, the reference images for trusted launch environments would be impacted by patches like this. How are we going to scale this from a management perspective?


More On High Assurance (via TPM) Cloud Environments

April 11th, 2010 14 comments

Back in September 2009, after presenting at the Intel Virtualization (and Cloud) Security Summit and urging Intel to lead by example by pushing the adoption and use of TPM in virtualization and cloud environments, I blogged a simple question (here):

Does anyone know of any Public Cloud Provider (or Private for that matter) that utilizes Intel’s TXT?

Interestingly, the replies were few; mostly they were along the lines of “we’re considering it,” “…it’s on our long radar,” or “…we’re unclear if there’s a valid (read: economically viable) use case.”

At this year’s RSA Security Conference, however, EMC/RSA, Intel and VMware made an announcement regarding a PoC of their “Trusted Cloud Infrastructure,” describing efforts to utilize technology across the three vendors’ portfolios to make use of the TPM:

The foundation for the new computing infrastructure is a hardware root of trust derived from Intel Trusted Execution Technology (TXT), which authenticates every step of the boot sequence, from verifying hardware configurations and initialising the BIOS to launching the hypervisor, the companies said.

Once launched, the VMware virtualisation environment collects data from both the hardware and virtual layers and feeds a continuous, raw data stream to the RSA enVision Security Information and Event Management platform. The RSA enVision is engineered to analyse events coming through the virtualisation layer to identify incidents and conditions affecting security and compliance.

The information is then contextualised within the Archer SmartSuite Framework, which is designed to present a unified, policy-based assessment of the organisation’s security and compliance posture through a central dashboard, RSA said.

It should be noted that in order to take advantage of said solution, the following components are required: a future release of RSA’s Archer GRC console, the upcoming Intel Westmere CPU and a soon-to-be-released version of VMware’s vSphere.  In other words, this isn’t available today and will require upgrades up and down the stack.

Sam Johnston today pointed me toward an announcement from Enomaly referencing the “High Assurance Edition” of ECP, which claims assurance, via the TPM, beyond the boundary of the VMM to include the guest OS and the management system itself:

Enomaly’s Trusted Cloud platform provides continuous security assurance by means of unique, hardware-assisted mechanisms. Enomaly ECP High Assurance Edition provides both initial and ongoing Full-Stack Integrity Verification to enable customers to receive cryptographic proof of the correct and secure operation of the cloud platform prior to running any application on the cloud.

  • Full-Stack Integrity Verification provides the customer with hardware-verified proof that the cloud stack (encompassing server hardware, hypervisor, guest OS, and even ECP itself) is intact and has not been tampered with. Specifically, the customer obtains cryptographically verifiable proof that the hardware, hypervisor, etc. are identical to reference versions that have been certified and approved in advance. The customer can therefore be assured, for example, that:
  • The hardware has not been modified to duplicate data to some storage medium of which the application is not aware
  • No unauthorized backdoors have been inserted into the cloud management system
  • The hypervisor has not been modified (e.g. to copy memory state)
  • No hostile kernel modules have been injected into the guest OS
This capability therefore enables customers to deploy applications to public clouds with confidence that the confidentiality and integrity of their data will not be compromised.
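
For those wondering what that “cryptographically verifiable proof” looks like mechanically, here is a minimal sketch of the verification step: components measured during boot are hashed into TPM Platform Configuration Registers (PCRs), and an attestation service compares a signed quote of those PCRs against reference values approved in advance. The PCR assignments and measurements below are invented for illustration and the quote-signature check is omitted; this is not Intel’s, VMware’s or Enomaly’s actual implementation:

```python
# Minimal sketch of the verification step behind a "trusted launch": compare
# attested PCR values against pre-approved reference measurements. PCR
# assignments and measurements are invented for illustration; the TPM quote
# signature check is omitted to keep the sketch short.

import hashlib

def measure(component: bytes) -> str:
    """Stand-in for the hash a measured-boot chain would extend into a PCR."""
    return hashlib.sha1(component).hexdigest()

# Reference ("golden") measurements certified and approved in advance
reference_pcrs = {
    0: measure(b"bios-firmware-v1.07-approved"),
    17: measure(b"txt-sinit+hypervisor-build-4.0"),   # measured launch environment
    18: measure(b"hypervisor-config-approved"),
}

def verify_quote(attested_pcrs, reference_pcrs):
    """Return the PCR indexes whose attested value differs from the reference."""
    return [idx for idx, golden in reference_pcrs.items()
            if attested_pcrs.get(idx) != golden]

# Example: a host whose hypervisor measurement no longer matches the whitelist
attested = dict(reference_pcrs)
attested[17] = measure(b"txt-sinit+hypervisor-build-4.0-patched")

mismatches = verify_quote(attested, reference_pcrs)
print("trusted launch OK" if not mismatches else f"PCR mismatch on: {mismatches}")
```

Note what the example mismatch implies: a perfectly legitimate hypervisor update changes the measurement too, which is exactly the (change) management question I come back to below.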

Of particular interest was Enomaly’s enticement of service providers with the following claim:

…with Enomaly’s patented security functionality, can deliver a highly secure Cloud Computing service – commanding a higher price point than commodity public cloud providers.

I’m looking forward to exploring more regarding these two example solutions as they see the light of day (and how long this will take given the need for platform-specific upgrades up and down the stack) as well as whether or not customers are actually willing to pay — and providers can command — a higher price point for what these components may offer.  You can bet certain government agencies are interested.

There are potentially numerous benefits with the use of this technology including security, compliance, assurance, audit and attestation capabilities (I hope also to incorporate more of what this might mean into the CloudAudit/A6 effort), but I’m very interested as to the implications on (change) management and policy, especially across heterogeneous environments and the extension and use of TPMs across mobile platforms.

Of course, researchers are interested in these things too…see Rutkowska et al. and “Attacking Intel Trusted Execution Technology” as an example.

/Hoff


Incomplete Thought: The Other Side Of Cloud – Where The (Wild) Infrastructure Things Are…

March 9th, 2010 3 comments

This is bound to be an unpopular viewpoint.  I’ve struggled with how to write it because I want to inspire discussion, not a religious battle.  It has been hard to keep it an incomplete thought. I’m not sure I have succeeded 😉

I’d like you to understand that I come at this from the perspective of someone who talks to providers of service (Cloud and otherwise) and large enterprises every day.  Take that with a grain of whatever you enjoy ingesting.  I have also read some really interesting viewpoints contrary to mine, many of which I find fascinating, even though they don’t subscribe to my current interpretation of reality.

Here’s the deal…

While our attention has turned to the wonders of Cloud Computing — specifically the elastic, abstracted and agile delivery of applications and the content they traffic in — an interesting thing occurs to me related to the relevance of networking in a cloudy world:

All this talk of how Cloud Computing commoditizes “infrastructure” and challenges the need for big iron solutions really speaks to compute, perhaps even storage, but doesn’t hold true for networking.

The evolution of these elements runs on different curves.

Networking ultimately is responsible for carting bits in and out of compute/storage stacks.  This need continues to reliably intensify (beyond linear) as compute scale and densities increase.  You’re not going to be able to satisfy that need by trying to play packet ping-pong and implement networking in software only on the same devices your apps and content execute on.

As (public) Cloud providers focus on scale/elasticity as their primary disruptive capability in the compute realm, there is an underlying assumption that the networking that powers it is magically and equally scalable, and that you can just replicate everything you do in big iron networking and security hardware and replace it one-for-one with software in the compute stacks.

The problem is that it isn’t and you can’t.

Cloud providers are already hamstrung by how they can offer rich networking and security options in their platforms given architectural decisions they made at launch – usually the pieces of architecture that provide for I/O and networking (such as the hypervisor in IaaS offerings.)  There is very real pain and strain occurring in these networks.  In Cloud IaaS solutions, the very underpinnings of the network will be the differentiation between competitors.  It already is today.

See Where Are the Network Virtual Appliances? Hobbled By the Virtual Network, That’s Where… or Incomplete Thought: The Cloud Software vs. Hardware Value Battle & Why AWS Is Really A Grid… or Big Iron Is Dead…Long Live Big Iron… and I Love the Smell Of Big Iron In the Morning.

With the enormous I/O requirements of virtualized infrastructure and the massive bandwidth requirements that rich applications, video and mobility are starting to place on connectivity, Cloud providers, ISPs, telcos, last-mile operators, and enterprises are pleading for multi-terabit switching fabrics in their datacenters to deal with load *today.*
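
To put rough numbers on that, here is a quick back-of-the-envelope calculation. The figures are purely illustrative (mine, not any provider’s actuals), but the shape of the math is the point:

```python
# Back-of-the-envelope arithmetic (my own illustrative numbers, not any
# provider's actual figures) showing how quickly east-west traffic in a
# densely virtualized datacenter adds up to multi-terabit fabric demand.

hosts = 10_000                 # physical servers in one facility
nics_per_host = 2              # dual-homed for redundancy
nic_speed_gbps = 10            # 10GbE access ports
utilization = 0.30             # average sustained utilization

access_capacity_tbps = hosts * nics_per_host * nic_speed_gbps / 1_000
sustained_tbps = access_capacity_tbps * utilization

print(f"Access-port capacity: {access_capacity_tbps:.0f} Tb/s")
print(f"Sustained load at {utilization:.0%} utilization: {sustained_tbps:.0f} Tb/s")
# Even at modest utilization, the aggregation/core fabric is being asked to
# move tens of terabits per second -- hence the appetite for big iron.
```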

I was reminded of this today, once again, by the announcement of a 322 Terabit per second switch.  Some people shrugged. Generally these are people who don’t market themselves as being concerned with moving enormous amounts of data, and for whom much of the connectivity is abstracted away behind what a credit card and a web browser provide.  Those who didn’t shrug are the providers who target a different kind of consumer of service.

Abstraction has become a distraction.

Raw networking horsepower is still a huge need, especially for those who have to move huge amounts of data between all those hyper-connected cores running hundreds of thousands of VMs or processes.

Before you simply think I’m being a shill because I work for a networking vendor (and the one that just announced the big switch referenced above), please check out the relevant writings on this viewpoint, which I have held for years: we need *both* hardware- and software-based networking to scale efficiently, and the latter simply won’t replace the former.

Virtualization and Cloud exacerbate the network-centric issues we’ve had for years.

I look forward to the pointers to the sustainable, supportable and scalable 322 Tb/s software-based networking solutions I can download and implement today as a virtual appliance.

/Hoff


Chattin’ With the Boss: “Securing the Network” (Waiting For the Jet Pack)

March 7th, 2010 8 comments

At the RSA security conference last week I spent some time with Tom Gillis on a live uStream video titled “Securing the Network.”

Tom happens to be (as he points out during a rather funny interlude) my boss’ boss — he’s the VP and GM of Cisco‘s STBU (Security Technology Business Unit.)

It’s an interesting discussion (albeit with some self-serving Cisco tidbits) surrounding how collaboration, cloud, mobility, virtualization, video, the consumerization of IT and, um, jet packs are changing the network and how we secure it.

Direct link here.

Embedded below:


RSA Interview (c/o Tripwire) On the State Of Information Security In Virtualized/Cloud Environments.

March 7th, 2010 1 comment

David Sparks (c/o Tripwire) interviewed me on the state of Information Security in virtualized/cloud environments.  It’s another reminder about Information Centricity.

Direct Link here.

Embedded below:


Where Are the Network Virtual Appliances? Hobbled By the Virtual Network, That’s Where…

January 31st, 2010 15 comments

Allan Leinwand from GigaOm wrote a great article asking “Where are the network virtual appliances?” This was followed up by another excellent post by Rich Miller.

Allan sets up the discussion describing how we’ve typically plumbed disparate physical appliances into our network infrastructure to provide discrete network and security capabilities such as load balancers, VPNs, SSL termination, firewalls, etc.  He then goes on to describe the stunted evolution of virtual appliances:

To be sure, some networking devices and appliances are now available in virtual form.  Switches and routers have begun to move toward virtualization with VMware’s vSwitch, Cisco’s Nexus 1000v, the open source Open vSwitch and routers and firewalls running in various VMs from the company I helped found, Vyatta.  For load balancers, Citrix has released a version of its Netscaler VPX software that runs on top of its virtual machine, XenServer; and Zeus Systems has an application traffic controller that can be deployed as a virtual appliance on Amazon EC2, Joyent and other public clouds.

Ultimately I think it prudent for discussion’s sake to separate routing, switching and load balancing (connectivity) from functions such as DLP, firewalls, and IDS/IPS (security), as lumping them together obscures the problem, which is that the latter is completely dependent upon the capabilities and functionality of the former.  This is what Allan almost gets to when describing his lament with the virtual appliance ecosystem today:

Yet the fundamental problem remains: Most networking appliances are still stuck in physical hardware — hardware that may or may not be deployed where the applications need them, which means those applications and their associated VMs can be left with major gaps in their infrastructure needs. Without a full-featured and stateful firewall to protect an application, it’s susceptible to various Internet attacks.  A missing load balancer that operates at layers three through seven leaves a gap in the need to distribute load between multiple application servers. Meanwhile, the lack of an SSL accelerator to offload processing may lead to performance issues and without an IDS device present, malicious activities may occur.  Without some (or all) of these networking appliances available in a virtual environment, a VM may find itself constrained, unable to take full advantage of the possible economic benefits.

I’ve written about this many, many times. In fact, almost three years ago I created a presentation called “The Four Horsemen of the Virtualization Security Apocalypse,” which described in excruciating detail how network virtual appliances were a big ball of fail and would be for some time. I further suggested that many of the “best-of-breed” products would ultimately become “good enough” features in virtualization vendors’ hypervisor platforms.

Why?  Because there are some very real problems with virtualization (and Cloud) as it relates to connectivity and security:

  1. Most of the virtual network appliances, especially those “ported” from the versions that usually run on dedicated physical hardware (COTS or proprietary), do not provide feature, performance, scale or high-availability parity; most are hobbled or require per-platform customization or re-engineering in order to function.
  2. The resilience and high availability options of today’s off-the-shelf virtual connectivity do not pair well with the mobility and dynamism of de-coupled virtual machines; VMs are ultimately temporal and networks don’t like topological instability due to key components moving or disappearing
  3. The performance and scale of virtual appliances still suffer when competing for I/O and resources on the same physical hosts as the guests they attempt to protect
  4. Virtual connectivity is generally a function of the VMM (or a loadable module/domain therein.) The architecture of the VMM has dramatic impact upon the architecture of the software designed to provide the connectivity and vice versa.
  5. Security solutions are incredibly topology sensitive.  Given the scenario in #1 when a VM moves or is distributed across the pooled infrastructure, unless the security capabilities are already present on the physical host or the connectivity and security layers share a control plane (or at least can exchange telemetry,) things will simply break
  6. Many virtualization (and especially cloud) platforms do not support protocols or topologies that many connectivity and security virtual appliances require to function (such as multicast for load balancing)
  7. It’s very difficult to mimic the in-line path requirements in virtual networking environments that would otherwise force traffic passing through the connectivity layers (layers 2 through 7) up through various policy-driven security layers (virtual appliances)
  8. There is no common methodology to express what security requirements the connectivity fabrics should ensure are available prior to allowing a VM to spool up, let alone move (a hypothetical sketch of this follows the list below)
  9. Virtualization vendors who provide solutions for the enterprise have rich networking capabilities natively as well as with third party connectivity partners, including VM and VMM introspection capabilities. As I wrote about here, mass-market Cloud providers such as Amazon Web Services or Rackspace Cloud have severely crippled networking.
  10. Virtualization and cloud vendors generally force many security vs. performance tradeoffs when implementing introspection capabilities in their platforms: third party code running in the kernel, scheduler prioritization issues, I/O limitations, etc.
  11. Many of the basic networking capabilities are being pushed lower into silicon (into the CPUs themselves), which makes virtual appliances even further removed from the guts that enable them
  12. Physical appliances (in the enterprise) exist en masse.  Many of them provide highly scalable solutions to the specific functions that Allan refers to.  The need exists, given the limitations I describe above, to provide for integration/interaction between them, the VMM and any virtual appliances in order to offload certain functions as well as provide coverage between the physical and the logical.
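
On item #8 specifically: no such methodology exists today, but purely as a thought experiment, here is a minimal sketch of what expressing and enforcing those pre-launch requirements might look like. The schema, capability names and hosts are entirely hypothetical:

```python
# Entirely hypothetical sketch of item #8: expressing the security
# capabilities a VM requires from the connectivity fabric before it is
# allowed to launch (or migrate). The schema, capability names and hosts
# are invented for illustration -- no such standard exists today.

vm_security_requirements = {
    "vm": "payments-db-02",
    "requires": {"stateful_firewall", "ips_inline", "vlan_isolation"},
}

host_capabilities = {
    "host-17": {"stateful_firewall", "vlan_isolation"},
    "host-42": {"stateful_firewall", "ips_inline", "vlan_isolation"},
}

def eligible_hosts(requirements, capabilities_by_host):
    """Return hosts whose fabric/security capabilities satisfy the VM's policy."""
    needed = requirements["requires"]
    return [host for host, caps in capabilities_by_host.items() if needed <= caps]

print(eligible_hosts(vm_security_requirements, host_capabilities))
# -> ['host-42']; a scheduler honoring the policy would refuse to spool the VM
#    up on (or move it to) host-17 because inline IPS isn't available there.
```

The interesting (and unsolved) part is standardizing the vocabulary so that the VMM, the fabric and the security layers all mean the same thing by those capability names.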

What does this mean?  It means that ultimately, to ensure their own survival, virtualization and cloud providers will depend less upon virtual appliances and add more of the basic connectivity AND security capabilities into the VMMs themselves, as it’s the only way to guarantee performance, scalability, resilience and satisfy the security requirements of customers. There will be new generations of protocols, APIs and control planes that will emerge to provide for this capability, but this will drive the same old integration battles we’re supposed to be absolved from with virtualization and Cloud.

Connectivity and security vendors will offer virtual replicas of their physical appliances in order to gain a foothold in virtualized/cloud environments, intercepting traffic (think basic traps/ACLs) and then interacting with higher-performing physical appliance security service overlays or embedded line cards in service chassis.  This is especially true in enterprises but poses many challenges in software-only, mass-market cloud environments where what you’ll continue to get is simply basic connectivity and security with limited networking functionality.  This implies more and more security will be pushed into the guest and application logic layers to deal with this disconnect.

This is exactly where we are today with Cloud providers like Amazon Web Services: basic ingress-only filtering with a very simplistic, limited and abstracted set of both connectivity and security capability.  See “Dear Public Cloud Providers: Please Make Your Networking Capabilities Suck Less. Kthxbye”  Will they add more functionality?  Perhaps. The question is whether they can afford to do so in order to limit the impact that connectivity and security variability/instability can bring to an environment.

That said, it’s certainly achievable, if you are willing and able to do so, to construct a completely software-based networking environment, but these environments require a complete approach and stack re-write, along with an operational expertise that will be hard to support for those who have spent the last 20 years working in a different paradigm, and that’s a huge piece of this problem.

The connectivity layer — however integrated into the virtualized and cloud environments it may seem — continues to limit how and what the security layers can do, and will for some time, thus limiting the uptake of virtual network and security appliances.

Situation normal.

/Hoff


Hacking Exposed: Virtualization & Cloud Computing…Feedback Please

January 30th, 2010 26 comments

Craig Balding, Rich Mogull and I are working on a book due out later this year.

It’s the latest in the McGraw-Hill “Hacking Exposed” series.  We’re focusing on virtualization and cloud computing security.

We have a very interesting set of topics to discuss but we’d like to crowd/cloud-source ideas from all of you.

The table of contents reads like this:

Part I: Virtualization & Cloud Computing:  An Overview
Case Study: Expand the Attack Surface: Enterprise Virtualization & Cloud Adoption
Chapter 1: Virtualization Defined
Chapter 2: Cloud Computing Defined

Part II: Smash the Virtualized Stack
Case Study: Own the Virtualized Enterprise
Chapter 3: Subvert the CPU & Chipsets
Chapter 4: Harass the Host, Hypervisor, Virtual Networking & Storage
Chapter 5: Victimize the Virtual Machine
Chapter 6: Conquer the Control Plane & APIs

Part III: Compromise the Cloud
Case Study: Own the Cloud for Fun and Profit
Chapter 7: Undermine the Infrastructure
Chapter 8: Manipulate the Metastructure
Chapter 9: Assault the Infostructure

Part IV: Appendices

We’ll have a book-specific site up shortly, but if you’d like to see certain things covered (technology, operational, organizational, etc.) please let us know in the comments below.

Also, we’d like to solicit a few critical folks to provide feedback on the first couple of chapters. Email me/comment if interested.

Thanks!

/Hoff, Craig and Rich.


To Achieve True Cloud (X/Z)en, One Must Leverage Introspection

January 6th, 2010 No comments

Back in October 2008, I wrote a post detailing efforts around the Xen community to create a standard security introspection API (Xen.Org Launches Community Project To Bring VM Introspection to Xen):

The Xen Introspection Project is a community effort within Xen.org to leverage the existing research presented above with other work not yet public to create a standard API specification and methodology for virtual machine introspection.

That blog was focused on introspection for virtualization proper, but since many of the larger cloud providers utilize Xen virtualization as an underpinning of their service architecture, and since as an industry we’re suffering from a lack of visibility and deployable security capabilities, VM and VMM introspection is quite relevant to cloud computing as well.

I thought I’d double back and see where we are.

It looks as though there’s been quite a bit of recent activity from the folks at Georgia Tech (XenAccess Project) and the University of Alaska at Fairbanks (Virtual Introspection for Xen) referenced in my previous blog.  The vCloud API proffered via the DMTF seems to also leverage (at least some of) the VMsafe API capabilities present in VMware‘s vSphere virtualization platform.

While details are, for obvious reasons, sketchy, I am encouraged after speaking with representatives from a few cloud providers who are keenly interested in including these capabilities in their offerings.  Wouldn’t that be cool?

Adoption and inclusion of introspection capabilities will overcome some of the inherent security and visibility limitations we face in highly virtualized, multi-tenant environments due to the networking constraints on integrating security functionality that I wrote about here.
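
To make the introspection idea concrete: the VMM (or a privileged domain) reads the guest’s memory from the outside and checks it against expectations, for example walking the guest kernel’s module list to spot something the guest itself can no longer be trusted to report. The sketch below is deliberately abstract; the Introspector API and module names are placeholders of my own, not the XenAccess or VMsafe interfaces:

```python
# Deliberately abstract sketch of a VMI-style check: from outside the guest,
# walk its kernel module list and flag anything not on an approved list.
# The Introspector class is a placeholder of my own, not the XenAccess,
# VMsafe or any other real introspection API.

APPROVED_MODULES = {"ext4", "e1000", "ipt_state", "nf_conntrack"}

class Introspector:
    """Stand-in for a real introspection library bound to a running guest."""

    def __init__(self, domain_name):
        self.domain = domain_name

    def list_guest_kernel_modules(self):
        # A real implementation would map guest physical memory, locate the
        # kernel's module list via its symbol table, and walk the linked
        # list from outside the guest. Here we just return a canned example.
        return ["ext4", "e1000", "nf_conntrack", "sneaky_rootkit"]

def check_guest(domain_name):
    vmi = Introspector(domain_name)
    unexpected = [m for m in vmi.list_guest_kernel_modules()
                  if m not in APPROVED_MODULES]
    if unexpected:
        print(f"{domain_name}: unapproved kernel modules: {unexpected}")
    else:
        print(f"{domain_name}: module list matches policy")

check_guest("tenant-webserver-01")
```

The value is that the check runs outside the guest’s trust boundary, which is precisely the visibility gap we face in multi-tenant clouds today.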

I plan a follow-on blog in more detail once I finish some interviews.

/Hoff


Dear Public Cloud Providers: Please Make Your Networking Capabilities Suck Less. Kthxbye

December 4th, 2009 6 comments

There are lots of great discussions these days about how infrastructure and networking need to become more dynamic and intelligent in order to more fully enable the mobility and automation promised by both virtualization and cloud computing.  There are many examples of how that’s taking place in the enterprise.

Incumbent networking vendors and emerging cloud/network startups are coming to terms with the impact of virtualization and cloud as juxtaposed with that of (and you’ll excuse the term) “pure” cloud vendors and those more traditional (Inter)networking service providers who have begun to roll out Cloud services atop or alongside their existing portfolio of offerings.

  • On the one hand we see hardware-based networking vendors adding software-based virtual switching and virtual appliance extensions in order to claw back the networking and security functions which have been abstracted into the virtualization and cloud stacks.  This is a big deal in the enterprise and especially with vendors looking to stake a claim in the private cloud space which is the evolution of traditional datacenter capabilities extended with virtualization and leverages the attributes of Cloud to provide for a more frictionless computing experience.  Here is where we see innovation and evolution with the likes of converged data and storage networking and unified fabric solutions.

  • On the other hand we see massively-scaled public cloud providers and evolving (Inter)networking service providers who have essentially absorbed the networking layers into their cloud operating platforms and rely on the software functionality embedded within to manifest the connectivity required to enable service.  There is certainly networking hardware sitting beneath these offerings, but depending upon their provenance, there are remarkable differences in the capabilities and requirements between them and those mentioned above.  Mostly, these providers are really shouting for multi-terabit layer two switching fabric interconnects to which they interface their software-enabled compute platforms.  The secret sauce is primarily in software.

For the purpose of this post, I’m not going to focus on the private Cloud camp and enterprise cloud plays, or those “Cloud” providers who replicate the same architectures to serve these customers; rather, I want to focus on those service providers/Cloud providers who offer massively scalable Infrastructure- and Platform-as-a-Service offerings, as in the second example above, and highlight two really important points:

  1. From a physical networking perspective, most of these providers rely, in some large part, on giant, flat, layer two physical networks with the actual “intelligence,” segmentation, isolation and logical connectivity provided by the hypervisor and their orchestration/provisioning/automation layers.
  2. Most of the networking implementations in these environments are seriously retarded as it relates to providing flexible and extensible networking topologies, which makes for n-Tier application mapping nightmares for an enterprise looking to move a reasonable application stack to their service.

I’ve been experimenting with taking several reasonably basic n-Tier app stacks which require multiple levels of security, load balancing and message bus capabilities, and designing them using several cloud platform providers’ offerings today.

The dirty little secret is that there are massive trade-offs with each of them, mostly due to constraints related to the very basic networking and security functionality offered by the hypervisors that power their services today.  The networking is basic.  Just the way they like it. It sucks for me.

This is a problem I demonstrated in enterprise virtualization in my Four Horsemen of the Virtualization Apocalypse presentation two years ago.  It’s much, much worse in Cloud.

Not supporting multiple virtual interfaces, not supporting multiple IP addresses per instance/VM, not supporting multicast or broadcast capabilities for software-based load balancing (and resiliency of the LB engines themselves)…these are nasty issues that in many cases require wholesale re-engineering of app stacks and push things like resiliency and high availability into uncertain waters.
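
To illustrate the kind of gap analysis this forces, here is a simplified sketch. The requirement and capability names are my own shorthand, not a quote of any provider’s feature matrix; the point is the shape of the exercise, not the specific entries:

```python
# Simplified gap analysis of the sort described above. The requirement and
# capability names are my own shorthand, not any provider's actual feature
# matrix -- the point is the shape of the exercise, not the specific entries.

app_stack_needs = {
    "multiple_vnics_per_instance",   # e.g. inside/outside interfaces on a firewall tier
    "multiple_ips_per_instance",
    "multicast",                     # heartbeat/VIP failover for software load balancers
    "inline_security_insertion",     # force traffic through FW/IPS/WAF tiers
    "dedicated_mgmt_interface",
}

provider_offers = {
    "single_vnic_per_instance",
    "single_ip_per_instance",
    "ingress_security_groups",       # basic ACL-style ingress filtering
}

gaps = sorted(app_stack_needs - provider_offers)
print("Must re-engineer around:", gaps)
# Each entry translates into extra instances, an extra provider, or a
# re-written tier -- i.e., more complexity and more cost.
```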

It’s also going to cost me more.

Sure, there are ways of engineering around these inadequacies, but they require additional levels of complexity, more cost, and additional providers or instances, and still leave me without many of the introspection options and detective and preventative security controls that I’m used to being able to rely on in traditional networking environments using colocation services or natively within the enterprise.

I’m sure I’ll see comments (public and private) suggesting all sorts of reasons why these are non-issues and how it’s silly to try and replicate the enterprise approach in the cloud.  I have 500 reasons why they’re wrong…the Fortune 500, that is.  You should also know I’m not apologizing for the sorry state of non-dynamic infrastructure, but I am suggesting that forcing me to re-tool app stacks to fit your flat network topologies without giving me better security and flexible connectivity options simply sucks.

In many cases, people just can’t get there from here.

I don’t want to have to re-architect my app stacks to work in the cloud simply because of a lack of maturity from a networking perspective.  I shouldn’t have to. That’s simply backward.  If the power of Cloud is its ability to quickly, flexibly, and easily allow me to provision, orchestrate and deploy services, that must include the network, also!

The networking and security capabilities of public Cloud providers need to improve — and quickly.  Applications that are not network topology-dependent and only require a single interface (or more specifically an IP address/socket) to communicate aren’t the problem.  It’s when you need to integrate applications and/or infrastructure solutions that require multiple interfaces, that *are* topology dependent and require insertion between these monolithic applications that things break down. Badly.

The “app on a stick” model doesn’t work when enterprises struggle with taking isolated clusters of applications (tiers) and isolating/protecting them with physical or virtual appliances that require multiple interfaces to do so.  ACLs don’t cut it, not when I need FW, IPS, DLP, WAF, etc. functionality.  Let’s not forget dedicated management, storage or backup interfaces.  These are many of the differences between public and private cloud offerings.

I can’t do many of the things I need to do easily in the Cloud today, not without serious trade-offs that incur substantial cost and, given the immaturity of the market as a whole, put me at risk.

For the large enterprise, if the fundamental networking and security architectures don’t allow for easy portability without massive re-engineering of app stacks, these enterprises are going to turn to niche or evolving (Inter)networking providers who offer them the capability to do so, even if they’re not as massively scalable, or they’ll simply build private clouds instead.

/Hoff

Great InformationWeek/Dark Reading/Black Hat Cloud & Virtualization Security Virtual Panel on 12/9

December 3rd, 2009 No comments


I wanted to let you know about a cool virtual panel I’m moderating as part of the InformationWeek/Dark Reading/Black Hat virtual event titled “IT Security: The Next Decade” on December 9th.

There are numerous awesome speakers throughout the day, but the panel I’m moderating is especially interesting to me because I was able to get an amazing set of people to participate.  Here’s the rundown — check out the panelists:

Virtualization, Cloud Computing, And Next-Generation Security

The concept of cloud computing creates new challenges for security, because sensitive data may no longer reside on dedicated hardware.  How can enterprises protect their most sensitive data in the rapidly-evolving world of shared computing resources? In this panel, Black Hat researchers who have found vulnerabilities in the cloud and software-as-a-service models meet other experts on virtualization and cloud computing to discuss the question of cloud computing’s impact on security and the steps that will be required to protect data in cloud environments.

Panelists: Glenn Brunette, Distinguished Engineer and Chief Security Architect, Sun Microsystems; Edward Haletky, Virtualization Security Expert; Chris Wolf, Virtualization Analyst, Burton Group; Jon Oberheide, Security Researcher; Craig Balding, Cloud Security Expert, cloudsecurity.org

Moderator: Christofer Hoff, Contributing Editor, Black Hat

I wanted the perspective of architects/engineers, practitioners, researchers and analysts — and I couldn’t have asked for a better group.

Our session is 6:15pm – 7:00pm EST.

Hope to “see” you there.

/Hoff