Archive for April, 2009

My Talks/Panels At AGC InfoSec & RSA Security Conferences

April 17th, 2009 No comments

Here’s what I’ve got planned for next week at the America’s Growth Capital InfoSec and RSA Security Conferences:

America’s Growth Capital 5th Annual Information Security Conference

  1. Monday, April 20th – Keynote 3:00pm – 3:30pm – The Frogs / Cloud Computing and Virtualization Security Fable
  2. Monday, April 20th – Panel Moderator 3:30 – 4:15pm – Virtualization, Security and Management with:
    Simon Crosby, CTO, Citrix (CTXS)
    Dennis Moreau, CTO, Configuresoft
    Jay Litkey, President and CEO, Embotics
    Wael Mohamed, President and CEO, Third Brigade
    Allwyn Sequeira, VMware (VMW)

RSA Security Conference

  1. Wednesday, April 22nd – 10:40 – 11:40am Panel Discussion – Host 203 Defending & Deconstructing Virtualization Best Practices with:
    Rob Randell, Senior Security Specialist, VMware
    Dave Shackleford, Chief Security Officer, Configuresoft
    Moderator: Chris Farrow, Vice President, Configuresoft

  2. Wednesday, April 22nd – 2:45pm – 3:45pm Panelist/Founding Member – Cloud Security Alliance Kick-off
  3. Wednesday, April 22nd – 3:00pm – 6:00pm Panelist Jericho Forum Cloud Computing Event
  4. Thursday, April 23rd – 10:40-11:40 Panel Discussion – FEA 303 VirtSec Cage Match with:
    Andreas Antonopoulos, Sr. Vice President, Nemertes Research
    Michael Berman, CTO, Catbird
    Stephen Herrod, CTO and VP of R&D, VMware
    Simon Crosby, CTO, Citrix Systems
  5. Friday, April 24th – 10:10am – 11:10 am Speaker w/Rich Mogull (Securosis) – Bus 402 – Disruptive Innovation & The Future of Security

I’ve got a bunch of press interviews, videos and briefings going also. Just so you know, Wednesday evening is overbooked 8 times at this point. 😉

If you need to reach me, ping me via email (choff @ packetfilter. com), DM me via Twitter (@beaker) or call my voice router at +1.978.631.0302.


Categories: Security Conferences

OVF: The Root Of All Evil. We Must Exterminate It NOW!

April 17th, 2009 4 comments

Today I was rudely interrupted from my Cyber-dopamine-drip as I hungrily anticipated Oprah’s next tweet such that I might become complete.

My Google reader flashed its welcome yellow folder highlight as it indicated an RSS feed had been tickled.

Little did I know this pollen-tinted shimmer would bring such discord to what was shaping up otherwise to be a perfectly lovely spring day.

It seems the singularity is upon us, as chronicled by Kris Buytaert in his post titled: On the Dangers of OVF.

It’s not often that I’m awe-struck into silence, but if you read this, I am convinced you will draw your own conclusions:

Usually I`m all in favour of Open Standards that are supported by different parties, and the Open Virtual Machine Format (OVF) pretty much matches these requirements.
The last Virtualbox has support for it, Simon is telling about it being part of the new XenConvert v2 Tech Preview .
However, Reuven wonders why it hasn’t gained widespread adoption yet.

Here’s my take, .. I`m not in favour of a standard as OVF that provides an easy way to transfer packaged virtual machine instance between different platforms.

Why ? Because I don’t think transferring full images of Virtual machines around is a good idea, not on 1 platform, not on different platforms.
And I`m not the only one with that opinion.

A Virtual Machine image is the perfect vehicle for malware in your network … some prepares an image for you , you run it on your network, and you set loose the devil, who knows it does a networkscan in the background and sends the info

OVF is a good breeding area for VM Image Sprawl,the effect you get when the number of images you have grows beyond what you can easily maintain, and this time it can grow beyond the people only using proprietary software , where as Image Sprawl used to be a disease mostly diagnosed within the VMWare usergroups and sysdamins with no clue on large scale deployments OVF

Sure OVF will assist smooth migration between different platforms so vendors want to keep it as far away from their users as possible, but people that already have a platform agnostic deployment framework in place don’t really need to worry about deploying on different platforms.

<Silence punctuated only by the sounds of me choking on my own tongue>

Sigh.  It must be WTF Friday.


Security Researchers Turn Their Venom Loose On Twitter…No More Free Re-Tweets!

April 16th, 2009 1 comment


It’s getting brutal out there, kids.

It all started innocently enough on Twitter when rockstar bughunter Alex Sotirov commanded:


Of course, I couldn’t help myself:


Pretty soon, my eeeevvvvviiillllll followers caused this:






Categories: Jackassery

Jericho Forum’s Cloud Cube Model…Rubik, Rubric and Righteous!

April 16th, 2009 No comments

I’m looking forward to the RSA conference this year; I am going to get to discuss Virtualization and Cloud Computing security a lot.

One of the events I’m really looking forward to is a panel discussion at the Jericho Forum’s event (Wednesday the 22nd, starting at 3pm) with some really good friends of mine and members of the Forum.

We’re going to be discussing Jericho’s Cloud Cube Model:

I think that the Cloud Cube does a nice job describing the multi-dimensional elements of Cloud Computing and frames not only Cloud use cases, but also how they are deployed and utilized.

Here’s why I am excited: if you look at the Cube above and the table I built below in this blog (The Vagaries Of Cloudcabulary: Why Public, Private, Internal & External Definitions Don’t Work…) to get deeper into Cloud definitions, you will notice some remarkable similarities despite the differences in names, especially the notion of how “internal/external” is called out separately from “perimeterized/de-perimeterized.” This is akin to the “Infrastructure located” and “managed by” column headings in my table. Further, the “outsourced/insourced” dimension maps to my “managed by:” column. I also like the “proprietary/open” dimension, which I didn’t include in my table but did reference in my Frogs presentation. I think I’ll extend the table to show that, also.
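To make the four dimensions concrete, here's a minimal sketch in Python of the four-axis model. The class and attribute names are my own illustrative shorthand, not Jericho Forum nomenclature:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CloudCubePosition:
    """One corner of the Cloud Cube: four independent yes/no dimensions."""
    internal: bool       # data/process location: internal vs. external
    perimeterized: bool  # architecture: perimeterized vs. de-perimeterized
    proprietary: bool    # ownership: proprietary vs. open
    insourced: bool      # sourcing: insourced vs. outsourced

    def describe(self) -> str:
        return "/".join([
            "internal" if self.internal else "external",
            "perimeterized" if self.perimeterized else "de-perimeterized",
            "proprietary" if self.proprietary else "open",
            "insourced" if self.insourced else "outsourced",
        ])

# A classic enterprise data center sits at one corner of the cube:
legacy_dc = CloudCubePosition(internal=True, perimeterized=True,
                              proprietary=True, insourced=True)
print(legacy_dc.describe())  # internal/perimeterized/proprietary/insourced
```

The point of modeling it this way is that "internal/external" and "perimeterized/de-perimeterized" really are separate axes, which is exactly the distinction my table tries to capture.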


I am very much looking forward to discussing this on the panel.  I’ve been preaching about the Jericho Forum since my religious conversion many years ago.

As I said in my Frogs preso, Cloud Computing is the evolution of the “re-perimeterization” model on steroids.


Hoff’s (Still) For Hire: There’s Only So Many Honey-Do’s I can Do’s…

April 15th, 2009 No comments


Update: Since I posted this in February, I’ve had some awesome opportunities arise but I haven’t yet secured my dream job, so I thought I’d repost this prior to the RSA Security show next week.

I’ll be keynoting at the America’s Growth Capital Information Security Conference as well as speaking numerous times at RSA.  You can reach me in any of the ways listed below.

The last two years have been a blast but all things must come to an end.

At the conclusion of March, I am moving on to newer pastures.  Where that is may be up to you.
I am exploring all options with a focus on traditional security roles including CISO/CSO, but I’d prefer architect/evangelist/CTO roles that focus more on virtualization and Cloud Computing security.  Start-ups, Up-Starts or large companies are all game.

If you’ve got an opportunity that you think we’d both be a match for, feel free to reach out.  

A dose of reality: If you’re not serious about envelope pushing, thought/industry leadership, world domination and unabashed enthusiasm sprinkled with rational pragmatism, I’m not your guy…

My LinkedIn profile is here.  My email is here.  You can reach my call router at +1.978.631.0302.  You can find me on Twitter here: @beaker



Categories: Career

Private Clouds: Even A Blind Squirrel Finds A Nut Once In A While

April 12th, 2009 6 comments

Over the last month it’s been gratifying to watch the “mainstream” IT press provide more substantive coverage of the emergence and acceptance of Private Clouds, after their relatively dismissive stance earlier.

I think this has a lot to do with the stabilization of definitions and applications of Cloud Computing and its service variants, as well as the realities of Cloud adoption in large enterprises and the timing it involves.

To me, Private Clouds represent the natural progression toward wider-scale Cloud adoption for larger enterprises with sunk costs and investments in existing infrastructure, and they have always meant more than simply “Amazon-izing your Intranet.”  Private Clouds offer larger enterprises a logical, sustainable and intelligent path forward from the virtualization and automation initiatives already in play.

I think my definition a few months ago was still a little rough, but it gets the noodle churning:

Private clouds are about extending the enterprise to leverage infrastructure that makes use of cloud computing capabilities and is not (only) about internally locating the resources used to provide service.  It’s also not an all-or-nothing proposition.

It occurs to me that private clouds make a ton of sense as an enabler for enterprises that want to take advantage of cloud computing for any of the oft-cited reasons, but are loath (or unable) to surrender their infrastructure and applications without sufficient control.  Private clouds mean that an enterprise can decide how much of the infrastructure can or should be maintained as a non-cloud operational concern versus how much can benefit from the cloud.

Private clouds make a ton of sense; they provide the economic benefits of outsourced, scalable infrastructure that does not require capital outlay, the needed control over that infrastructure, and ultimately the ability to replicate existing topologies and platforms along with the portability of applications and workflow.  These capabilities may eliminate the re-writing and/or re-engineering of applications, as is often required when moving to a typical IaaS (Infrastructure as a Service) player such as Amazon.

From a security perspective — which is very much my focus — private clouds provide me with a way of articulating and expressing the value of cloud computing while still enabling me to manage risk to an acceptable level as chartered by my mandate.

Here are some of the blog entries I’ve written on Private Clouds. I go into reasonable detail in my “Frogs Who Desired a King” Cloud Security presentation.  James Urquhart’s got some doozies, too.  Here’s a great one.  Chuck Hollis has been pretty vocal on the subject.

My Google Reader has no less than 10 articles on Private Clouds in the last day or so including an interesting one featuring GE’s initiative over the next three years.

I hope the dialog continues and we can continue to make headway in arriving at common language and set of use cases, but as I discovered a couple of weeks ago, in my post titled “The Vagaries Of Cloudcabulary: Why Public, Private, Internal & External Definitions Don’t Work…”, the definition of Private Cloud is the most variable of all and promotes the most contentious of debates:


Private Clouds seem to validate the promise of what the real-time infrastructure/adaptive enterprise visions painted many years ago, with the potential for even more scale and control.  The intersection of virtualization, automation, Cloud and converged and unified computing is making sure of that.


Categories: Cloud Computing, Cloud Security

Does Cloud Infrastructure Matter? You Bet Your Ass(ets) It Does!

April 8th, 2009 5 comments

James Urquhart wrote a great blog today titled “The new cloud infrastructure: do you care?” in which he says:

…if you are a consumer of cloud-based resources, the mantra has long been that you can simply deploy or consume your applications/services without any regard to the infrastructure on which they are being hosted. A very cool concept for an application developer, to be sure, but I think it’s a mistake to ignore what lies under the hood.

At the very least, the future of hardware ought to touch the inner geek in all of us.

What is happening in data center infrastructure is a complete rethinking of the architectures utilized to deliver online services, from the overall data center architectures all the way down to the very components that serve the “big four” elements of the data center: facilities, servers, storage and networking.


While James’ post focused mostly on how the underlying compute platforms are changing such as his illustration with Cisco’s UCS, Rackable’s C2 and Google’s custom machines, this trend will expand up and down the infrastructure stack.

From a technologist’s or architect’s perspective, what powers the underlying Cloud infrastructure is really important. As James alludes, issues of interoperability can and will be impacted by the underlying platforms upon which the abstracted application resources sit.  This may sound contentious from the PaaS and SaaS perspective, but not so from that of IaaS; after all, the “I” in IaaS stands for infrastructure.

I made this point recently from a security perspective in my blog post titled “The Cloud Is a Fickle Mistress: DDoS&M…”  wherein I said:

We’re told we shouldn’t have to worry about the underlying infrastructure with Cloud, that it’s abstracted and someone else’s problem to manage…until it’s not.

…or here in Cloud Catastrophes (Cloudtastophes?) Caused by Clueless Caretakers?:

The abstraction of infrastructure and democratization of applications and data that Cloud Computing services can bring does not mean that all services are created equal.  It does not make our services or information more secure (or less for that matter.)  Just because a vendor brands themselves as a “Cloud” provider does not mean that “their” infrastructure is any more implicitly reliable, stable or resilient than traditional infrastructure or that proper enterprise architecture as it relates to people, process and technology is in place.  How the infrastructure is built and maintained is just as important as ever.

What we’ll also see is that even though we’re not supposed to care what our Cloud providers’ infrastructure is powered by and how, in the long term we absolutely will, and the vendors know it.  This is where people start to freak out about how standards and consolidation will kill innovation in the space, but it’s also where the realities of running a business come crashing down on early adopters. Large enterprises will move to providers who can demonstrate that their services are solid, by way of co-branding with the reputation of the infrastructure providers coupled with compliance with “standards.”

Remember the “Cisco Powered Network” program?  How about a “Cisco Powered Cloud?”  See how GoGrid advertises that their load balancers are F5?

In the long term, much like the Capital One credit card commercials that challenge you to ask “What’s in your wallet?”, you can expect to start asking the same thing about your Cloud providers’ offerings.

So, depending on what you do and what you need, your choice of provider — and what sits under their hood — may matter a ton.


Categories: Cloud Computing, Cloud Security

Google’s Updated App Engine – “Secure” Data Connector: Your Firewall Means Nothing (Again)

April 8th, 2009 3 comments

This will be a quickie.  

This is such a juicy topic and really merits a ton more than just a mention, but unfortunately, I’m out of time.

Google’s latest updates to the Google App Engine platform include all sorts of interesting functionality:

  • Access to firewalled data: grant policy-controlled access to your data behind the firewall.
  • Cron support: schedule tasks like report generation or DB clean-up at an interval of your choosing.
  • Database import: move GBs of data easily into your App Engine app. Matching export capabilities are coming soon, hopefully within a month.

To me, the most interesting is the boldfaced item above: Google Apps’ access to information behind corporate firewalls.*

From a Cloud interoperability and integration perspective, this is fantastic.  From a security perspective, I am as intrigued as I am concerned, as I am anytime I hear “access internal data from an external service.”

The capability to gain access to internal data is provided by the Secure Data Connector.  You can find reasonably detailed information about it here.

Here’s how it works:

SDC forms an encrypted connection between your data and Google Apps. SDC lets you control who in your domain can access which resources using Google Apps.

SDC works with Google Apps to provide data connectivity and enable IT administrators to control the data and services that are accessible in Google Apps. With SDC, you can build private gadgets, spreadsheets, and applications that interact with your existing corporate systems.

The following illustration shows SDC connection components.

Secure Data Connector Components

The steps are:

  1. Google Apps forwards authorized data requests from users who are within the Google Apps domain to the Google tunnel protocol servers.
  2. The tunnel servers validate that a user is authorized to make the request to the specified resource. Google tunnel servers are connected by an encrypted tunnel to SDC, which runs within a company’s internal network.
  3. The tunnel protocol allows SDC to connect to a Google tunnel server, authenticate, and encrypt the data that flows across the Internet.
  4. SDC uses resource rules to validate if a user is authorized to make a request to a specified resource.
  5. An optional intranet firewall can be used to provide extra network security.
  6. SDC performs a network request to the specified resource or services.
  7. The service verifies the signed requests and if the user is authorized, returns the data.
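Step 4, the resource-rule check, is the interesting control point. As a rough illustration only (the rule fields, group names and function below are hypothetical, not the actual SDC resource-rules schema), the check amounts to matching the proxied request against per-URL rules before anything touches the internal service:

```python
import fnmatch

# Hypothetical rule set for illustration; real SDC rules live in an XML
# configuration file with its own schema.
RESOURCE_RULES = [
    {"allowed_groups": ["finance-team"], "url_pattern": "http://intranet/reports/*"},
    {"allowed_groups": ["*"],            "url_pattern": "http://intranet/phonebook"},
]

def is_authorized(user_groups, requested_url):
    """Roughly what step 4 describes: match the request against resource
    rules; only requests permitted by some rule reach the internal resource."""
    for rule in RESOURCE_RULES:
        if fnmatch.fnmatch(requested_url, rule["url_pattern"]):
            if "*" in rule["allowed_groups"]:
                return True  # rule open to any authenticated user
            if any(g in rule["allowed_groups"] for g in user_groups):
                return True
    return False  # default deny: no rule matched this user/URL pair

print(is_authorized(["finance-team"], "http://intranet/reports/q1"))  # True
print(is_authorized(["engineering"], "http://intranet/reports/q1"))   # False
```

The security posture hinges on that default-deny at the end: the tunnel brings Google's servers inside, but only the URLs you explicitly enumerate are reachable.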

From a security perspective, access control and confidentiality are provided by filters, resource rules, and SSL/TLS encrypted tunnels.  We’ll take this apart in detail (as time permits) later.

In the mean time, here’s a link to the SDC Security guide for developers.

…and no, your firewall likely won’t help save you (again).

At least I won’t be bored now.


* The database import/export is profound also. Craig Balding followed up with his OAuth-focused commentary here.

Categories: Cloud Computing, Cloud Security, Google

Pimping My Friends: One Of My Favorite NonCons – Troopers

April 8th, 2009 No comments

One of my favorite international security conferences is happening April 22nd/23rd in Munich, Germany. It’s run by my good friend Enno Rey and his team at ERNW:

Troopers09 is an international IT-Security Conference on the 22nd and 23rd of April 2009 in Munich, Germany. This event is created for CISOs, ISOs, IT-Auditors, IT-Sec-Admins, IT-Sec Consultants and everyone who is involved with IT-Security on a professional basis. The goal is to share in-depth knowledge about the aspects of attacking and defending information technology infrastructure and applications. The featured presentations and demonstrations represent the latest discoveries and developments of the global hacking scene and will provide the audience with valuable practical know-how.

Troopers09 is hosted by ERNW GmbH, an independent IT-Security consultancy from Heidelberg, Germany. In the past years, speakers from ERNW were invited all around the world to present their latest IT-Sec research results and to share their knowledge within the global hacking community. With this global experience in mind, ERNW decided to launch an international conference in Germany in 2008. After last year’s success of Troopers08 we’re thrilled to do it again. Once more it’s going to be an event unlike all other „Security Conferences“ we have seen in Germany so far: No product presentations, no marketing blabla, no bull*ht-bingo – just pure practical IT-Security. Real answers and practical benefits to meet today’s and tomorrow’s threats.

Troopers08 was a fantastic event, so I can only imagine that this year’s will be just as good if not better.

Check it out here.


Categories: Security Conferences

HyTrust: An Elegant Solution To a Messy Problem

April 6th, 2009 8 comments

I had a pre-release briefing with the folks from HyTrust on Friday and was impressed with their solution.  I had previously met with the VCs within whose portfolio HyTrust sits, and they were bullish on the team and technology approach.  Here’s why.

“Security” solutions in virtualized environments are becoming less about “pure” security functions like firewalls and IDP and much more focused on increasing the management and visibility of virtualization and keeping pace with the velocity of change, configuration control and compliance.  I’ve talked about that a lot recently.

HyTrust approaches this problem in a very elegant manner. Their approach is based on the old adage “you cannot manage that which you cannot see.”  

In the case of VMware, there are numerous vectors for managing and configuring the platform; from the various host and platform management interfaces to the guests and virtual networking components.

There are many tools on the market which address these issues. Reflex, Third Brigade and Catbird come to mind, with the latter being the most similar.

The difference between HyTrust and their competitors is how they integrate their solution to provide visibility and protect the management network.  

HyTrust’s answer is to sit, both physically and logically, in front of the virtualization platform’s management network and proxy each configuration request, whether that’s an SSH session to the service console or a VirtualCenter configuration change made through the GUI.

These requests are mapped to roles, which are in turn authenticated against an enterprise’s Active Directory service, so fine-grained, role-based access to specific functions can be enforced via templates. Further, since every request is proxied, logging is robust and every action can be mapped back directly to a single user.
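The proxy-plus-role-mapping pattern is simple to sketch. The following is my own illustration of the general approach; the role names, operations and function are hypothetical, not HyTrust’s actual API or policy templates:

```python
# Hypothetical role-to-permission mapping, standing in for the
# AD-backed role templates described above.
ROLE_PERMISSIONS = {
    "vm-operator":   {"power_on_vm", "power_off_vm"},
    "network-admin": {"modify_vswitch", "power_on_vm"},
}

AUDIT_LOG = []

def proxy_request(user, role, operation):
    """Every management request flows through the proxy, so each action
    is both authorized against a role and attributable to a single user."""
    allowed = operation in ROLE_PERMISSIONS.get(role, set())
    AUDIT_LOG.append((user, role, operation, "ALLOW" if allowed else "DENY"))
    return allowed

print(proxy_request("alice", "vm-operator", "power_on_vm"))   # True
print(proxy_request("bob", "vm-operator", "modify_vswitch"))  # False
print(AUDIT_LOG[-1])  # ('bob', 'vm-operator', 'modify_vswitch', 'DENY')
```

The key property of any in-line proxy like this is that authorization, denial and logging happen at one choke point rather than inside each separate management tool.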

The policy engine and templates appear quite easy to use given the demo I saw and the logging and reporting looks good.

Actions that violate policy can be allowed or blocked, and can either be simply logged or remediated should a violation occur.

This centralized approach is very elegant. It has its downsides, of course, inasmuch as it becomes a single point of failure, so performance and high availability deserve close attention.

The HyTrust offering will be available as both a hardware appliance and a virtual appliance. They will also release what they call a FREE “Community Edition,” which is a full-featured version limited to securing three VMware ESX hosts.

Check them out here.


Categories: Virtualization Security, VMware