Archive for October, 2008

Security, Drinking Straws, Cavities and Wrinkles…

October 31st, 2008 6 comments

I was reading an article on SlashFood titled "Drinking Straw: Friend or Foe" and chuckled at the parallels to the reflexive hyping, purchase and (oft failed) use of "solutions" in the security space.  Sometimes I think we need a

Recently, a friend passed along a tip from a dermatologist: Stop sipping through straws. The doctor said it was the number one cause of wrinkles.

Even more recently, at lunch one day my aunt relayed some info from her husband, an orthodontist. He said that drinking through a straw prevents cavities and tooth decay, since straws allow sugary beverages to bypass your teeth. When my aunt said this, everybody around the table (six women) stuck straws in their drinks.

But when I countered with the skincare side of the question, my aunt was the first to pluck her straw right back out again.

Brings new meaning to "security sucks."  What's your favorite "security straw" analogy?


Categories: Jackassery Tags:

There’s Only One Way To Settle This Crosby: Security Sumo Suit Smackdown…

October 30th, 2008 1 comment


I'm afraid it's come to this, Simon.

It occurs to me that the only way we can settle our debate to finality is via mortal combat.

I'm calling you out:

What: Sumo Suit VirtSec Smackdown (how Xen/Zen!)

Who: Simon Crosby vs. Chris Hoff

Where: RSA 2009, Moscone Center, San Francisco, Venue TBD

When: During the April 20-24th, 2009 timeframe

Why: You know why…

Wow: This will be a charity event with the proceeds going to Johnny Long's Hackers for Charity which you can find out about here.

Real shipping versions of you only, no virtual replicas or stand-ins allowed.  We'll get sponsors.

You wouldn't want to let down the community now would you Simon?

See you in San Francisco…


UPDATE: Simon is THE man!  He's accepted the battle.  We'll have an all-star panel of judges and Dan Kaminsky has agreed to referee.  Winner gets grandma's cookies! w00t!

Categories: Jackassery Tags:

Citrix’s Crosby Says I’m Confused and He’s RIGHT.

October 30th, 2008 3 comments

Simon Crosby and I have been going 'round a bit lately, arguing over where, why, when, how and how much security should be embedded in the virtualization platform itself versus addressed by third parties.

Simon's last sentence in his latest riposte titled "Hoff is Still Confused" was interesting:

Re-reading Hoff's posts, I find that I agree with him in just about every respect in his assessment of the technology and its implications, and I think we're doing exactly as he would recommend, so I'll be interested to hear if he has more to say on this

Well, how the hell am I supposed to argue with that!? ;)  OK, now I am confused! Simon's taken the high road and thus I shall try to do so, too.  I wrote a ton more in response, but I'm not sure anybody cares. ;)

All told, I think we're both aiming at a similar goal in spite of our disparate approaches: achieving a more secure virtualized environment.

But seriously, I don't think that I'm confused about Citrix's position on this matter, I just fundamentally disagree with it.

I feel strongly that Simon and I really are on different sides of a religious issue, and without a more reasonable platform for discussion I'm not sure how we'll debate this coherently without all the back and forth.  Perhaps a cage match in sumo suits!?

I appreciate Simon clarifying his position and reaching out to ensure we are on the same page.  We're not, but the book's not closed yet. 

So we agree to disagree, and I respect Simon for his willingness to debate the issue.


Categories: Citrix, Virtualization, VMware Tags:

Please Help Me: I Need a QSA To Assess PCI/DSS Compliance In the Cloud…

October 29th, 2008 23 comments


I wonder if you might help me.

I operate an e-commerce Internet-based business that processes and stores cardholder data.

I need a QSA to assess my infrastructure and operations for PCI/DSS compliance.

Oh, I forgot to mention.  All my infrastructure is in the cloud.  It's all virtualized.  It runs on Amazon's EC2.  All my data is hosted outside of my direct stewardship.  I don't own anything.

Since the cloud hides all the infrastructure and moving parts from me, I don't know if I meet any of the following PCI requirements:

I don't know if there are firewalls. I don't know about the cloud-vendor's passwords, AV, access control/monitoring, vulnerability management or security processes.

A friend told me about section 12.8, but it doesn't really apply because the "service" provider just provides me cycles and storage; I run the apps I build, but I don't see any of the underlying infrastructure.

Also, I have no portability for BCP/DR because my AMI only runs on the Amazon cloud, nowhere else.  I don't know who does backups, or how they're done, outside of my manifest.

I'm sure we could ask though, right?

Update: OK, this post worked out exactly as I hoped it would.  On the one hand you have PCI experts who plainly point to the (contrived) example I used and rule empirically that there's no chance for PCI certification.   To their point, it's black and white; either Amazon (in this example) absorbs the risk or you can't use their services if you expect to be in compliance with PCI.

Seems logical…

However, this is the quandary we're facing with virtualization and cloud computing.  For the companies that hire these PCI compliance experts, the assessment methodology/requirements are predicated upon a "standard" that continues to be out of touch with the economic and technological world around it.  That's not the experts' fault; they're scoring you against a set of requirements that are black and white.

As companies try to leverage technology to be more secure, to transfer risk, to focus on the things that matter most and to reduce costs — if you believe the marketing — it's really a no-win situation.

The PCI Security Standards Council doesn't even have a SIG for virtualization, yet this tidal wave has been rushing at us for at least 3-5 years and we face its crushing onslaught with no guidance.  If you believe the uptake of cloud computing, we're blindly hurdling over the challenges that virtualized, internally-owned infrastructure brings and careening headlong down a path to cloud computing that leaves us in non-compliance.

The definition of what a "service provider" means, and how service providers interact with the cardholder data that companies are supposed to protect, needs to be redefined.

It's time the PCI Council steps up and gets in front of the ball rather than being crushed by it, so that the companies that would do the right thing — if they knew what that meant — aren't punished by an out-of-touch set of standards.

Categories: Cloud Computing, PCI, Virtualization Tags:

Gunnar Peterson Channels Tina Turner (Sort Of): What’s Happiness Got To Do With It?

October 29th, 2008 1 comment

Gunnar just hit a home run responding to John Pescatore's one-line, twelve-word summarization of how to measure a security program's effectiveness.  Read Gunnar's post in its entirety, but here's the short version:

Pescatore says:

The best security program is at the business with the happiest customers.

To which Gunnar suggests:

There's a fine line between happy customers and playing piano in a bordello.

…and revises Pescatore's assertion to read:

The best security program is at the business with sustainable competitive advantage.

To which, given today's economic climate, I argue the following simplification:

The best security program is at the business that is, itself, sustainable.

I maintain that if, as John suggests, you want to introduce the emotive index of "happiness" and relate it to a customer's overall experience when interacting with your business, then the best security program is one that isn't seen or felt at all.  Achieving that Zen-like balance is, well, difficult.

It's hard enough to derive metrics that adequately define a security program's effectiveness, value, and impact on risk.  Balanced scorecard or not, the last thing we need is the introduction of a satisfaction quotient that tries to quantify (on a scale from 1-10?) the "warm and fuzzies" a customer enjoys whilst having their endpoint scanned by a NAC device before attaching to your portal… ;)

I understand what John was shooting for, but it's like suggesting that there's some sort of happiness I can achieve when I go shopping for car insurance.


Xen.Org Launches Community Project To Bring VM Introspection to Xen

October 29th, 2008 No comments

Hat-tip to David Marshall for the pointer.

In what can only be described as the natural evolution of Xen's security architecture, news comes of a Xen community project to integrate a VM Introspection API and accompanying security functionality into Xen.  Information is quite sparse, but I hope to get more information from the project leader, Stephen Spector, shortly. (*Update: Comments from Stephen below)

This draws obvious parallels to VMware's VMsafe/vNetwork APIs, which will yield significant differentiation and ease in integrating security capabilities with VMware infrastructure when solutions turn up starting in Q1'09.

From the Xen Introspection Project wiki:

The purpose of the Xen Introspection Project is to design an API for performing VM introspection and implement the necessary functionality into Xen. It is anticipated that the project will include the following activities (in loose order): (1) identification of specific services/functions that introspection should support, (2) discussion of how that functionality could be achieved under the Xen architecture, (3) prioritization of functionality and activities, (4) API definition, and (5) implementation.

Some potential applications of VM introspection include security, forensics, debugging, and systems management.

It is important to note that this is not the first VMI project for Xen.  There is also the Georgia Tech XenAccess project, led by Bryan Payne, a library that allows a privileged domain to gain access to the runtime state of another domain.  XenAccess focuses (initially) on memory introspection but is adaptable to disk I/O as well:
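To make the idea concrete, here's a toy sketch of what memory introspection involves: a monitor sitting outside the VM reads raw guest memory and reconstructs guest state (here, a task list) using knowledge of the guest OS's structure layouts. The memory buffer, struct format and symbol address below are all invented for illustration; this is not XenAccess's actual API.

```python
import struct

# Toy "guest physical memory": a flat byte buffer the monitor can read
# but whose contents it must interpret using knowledge of the guest OS.
GUEST_MEM = bytearray(4096)

TASK_FMT = "<I16s"     # hypothetical task struct: next-pointer + 16-byte name
TASK_SIZE = struct.calcsize(TASK_FMT)

def write_task(addr, next_addr, name):
    # The "guest" writes a task struct at a given guest address.
    struct.pack_into(TASK_FMT, GUEST_MEM, addr, next_addr,
                     name.encode().ljust(16, b"\0"))

# The guest lays out a circular task list rooted at a known symbol address.
INIT_TASK = 0x100
write_task(INIT_TASK, 0x200, "init")
write_task(0x200, 0x300, "sshd")
write_task(0x300, INIT_TASK, "httpd")

def introspect_tasks(mem, head):
    """Walk the guest's task list from outside the VM, as a VMI library
    would: raw memory reads plus knowledge of the struct layout."""
    names, addr = [], head
    while True:
        nxt, raw = struct.unpack_from(TASK_FMT, mem, addr)
        names.append(raw.rstrip(b"\0").decode())
        addr = nxt
        if addr == head:
            return names

print(introspect_tasks(GUEST_MEM, INIT_TASK))  # ['init', 'sshd', 'httpd']
```

Real VMI libraries do essentially this same walk against mapped guest pages, which is why they need the guest's symbol and struct layout information to bridge the semantic gap.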


I wonder if we'll see XenAccess fold into the VMI Xen project?

Astute readers will also remember my post titled "The Ghost of Future's Past: VirtSec Innovation Circa 2002" in which I reviewed work done by Mendel Rosenblum and Tal Garfinkel (both of VMware fame) on the LiveWire project which outlined VMI for isolation and intrusion detection:


What's old is new again.

Given my position advocating VMI and the need for inclusion of this capacity in all virtualization platforms versus that of Simon Crosby, Citrix's (XenSource) CTO in our debate on the matter, I'll be interested to see how this project develops and if Citrix contributes. 

Microsoft desperately needs a similar capability in Hyper-V if they are to be successful in ensuring security beyond VMM integrity in their platform, and if I were a betting man, despite their proclivity for open-closedness, I'd say we'll see something to this effect soon.

I look forward to more information and charting the successful evolution of both the Xen Introspection Project and XenAccess.


Update: I reached out to Stephen Spector and he was kind enough to respond to a couple of points raised in this blog (paraphrased from a larger email):

Bryan Payne from Georgia Tech will be participating in the project, and there is some other work going on at the University of Alaska at Fairbanks. The leader for the project is Stephen Brueckner from NYC-AT.

As for participation, Citrix has people already committed and I have 14 people who have asked to take part.

Sounds like the project is off to a good start! 

Categories: Citrix, Microsoft, Virtualization, VMware Tags:

Microsoft’s Azure: When Clouds Encircle Islands, Things Get Foggy…

October 27th, 2008 6 comments

Microsoft’s announcements today at OzzieFest (Microsoft’s PDC) include the unveiling of Windows Azure.


The Azure “services platform” is described as:

…an internet-scale cloud services platform hosted in Microsoft data centers, which provides an operating system and a set of developer services that can be used individually or together. Azure’s flexible and interoperable platform can be used to build new applications to run from the cloud or enhance existing applications with cloud-based capabilities. Its open architecture gives developers the choice to build web applications, applications running on connected devices, PCs, servers, or hybrid solutions offering the best of online and on-premises.

Holy buzzword bingo, Batman!

Look, when I’m forced into vendor lock-in in order to host my applications and I am confined to one vendor’s datacenters without portability, that’s not “the cloud” and it’s not an “open architecture”; it’s marketing-speak for “we’re now your ASP/XaaS service provider of choice.”

Azure doesn’t run “in the cloud.” It’s a set of hosted services connected to the Internet. In this case the “cloud” is more like fog which encircles the islands of data inhabited by Dr. Moreau and his ghoulish API-infected creatures. (Ed: In full disclosure a year later this strategy makes a crap-load more sense. I simply didn’t get it at all back when I wrote this post)

Amazon has their hosting infrastructure and APIs/SDKs; Microsoft has theirs. Google, too.

You might convince me there is such a thing as THE cloud if there were ONE standardized API subscribed to by everyone who claims membership in the cloud. But there isn’t. Everyone is announcing their own little island with their own API, own “datacenter operating system,” etc.

I go back to my recent rant titled “Will You All Please Shut-Up About Securing THE Cloud…NO SUCH THING…” wherein I stated:

There is no singularity that can be described as “THE Cloud.”

There are many clouds, they’re not federated, they don’t natively interoperate at the application layer and they’re all mostly proprietary in their platform and operation. They’re also not all “public” and most don’t exchange data in any form. The notion that we’re all running out to put our content and apps in some common repository on someone else’s infrastructure (or will) is bullshit. Can we stop selling this lemon already?

Just like there are many types of real billowing humid masses (cumulonimbus, fibratus, undulatus, etc.) there are many instantiations of resource-based computing models that float about in use today — …, Clean Pipes from ISPs, Google/Google Apps, Amazon EC2, WebEx — all “cloud” services. The only thing they have in common is they speak a dialect called IP…

Again, I’m not suggesting that this model is not reasonable, warranted or worthwhile. I am a big believer in leveraging open architectures for the interoperable exchange of data as well as resiliency, scale and utility computing.

I’m simply suggesting that re-branding the word “Internet” and implementing ROT13 to arrive at “Cloud” is really confusing and intellectually dishonest.
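For the record, ROT13 doesn't actually get you there:

```python
import codecs

# ROT13 rotates each letter 13 places; applying it to "Internet"
# yields gibberish, not "Cloud" -- which is rather the point.
print(codecs.encode("Internet", "rot13"))  # Vagrearg
```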

It’s not FUD, it’s FOG.


Categories: Cloud Computing, Virtualization Tags:

Cloud Computing Security In Poetic Review

October 27th, 2008 5 comments

This is in response to my buddy Alex Hutton's blog post titled "Cloud Computing – Stormy Weather?"

If you took a poll
of folks in a crowd
asking them to define
what they thought of "the cloud"

I'd bet the dough in my pocket
not one could agree
on the relative impact
it will have on IT

Outsourced computing,
utility, grid,
distributed resources
with the moving parts hid

whatever you call it
its adoption is brisk
but like most "innovation"
we've forgotten 'bout risk

Cloud computing's a trade off
Be sovereign or efficient
I guess it depends
on where you think you're proficient

Some things are ripe for the Cloud
others not so much 
Some things we'll let go of
others tightly we'll clutch

Most companies I know
manage risk with their gut
when new tech comes along
they're still mired in that rut

So security gets blamed
for standing in progress' way
yet we're stuck with defending
C, I and A

We need to be agile
but oh yeah, compliant
Though the potential for loss,
means our exposure is giant

Cloud advocates say
Amazon's never been breached
so we can trust that our data
will never be leached?

I guess this all depends
on which model of cloud
you decide to rely on
to make your CIO proud

We've got wares as a service,
Web 2 dot 0, SOA
'lastic clouds, fuzzy storage
It's the future, some say

But I can't help but think
the handwaving's distracting
from the uncomfortable truths
of what this is impacting

We can't even manage
the stuff that we own
yet we're willing to outsource
where our assets call home?

We don't classify data,
can't control where it goes
but we'll transfer our risk
to someone nobody knows?

Disguising marketing efforts
as tech. innovation
and suggesting that insight
will spur risk ideation?


Reduce risk?
Reduce loss?
Create efficient operations?
Those are quite lofty goals,
worthwhile machinations

But the cloud ain't an answer
it's a cyclic response,
evolutionary next-steps
to what the tech. industry wants

They can't solve real problems
so a new one's created
to distract from the point
that we're being masturbated

I'm all for the cloud
been doing it for years!
Got a real game changer?
Hey man, I'm all ears.

You dress up this pig
in a nice looking dress
security will be here
to clean up the mess

Categories: Jackassery, Poetry Tags:

Patching The Cloud?

October 25th, 2008 2 comments

Just to confuse you, as a lead-in on this topic, please first read my recent rant titled “Will You All Please Shut-Up About Securing THE Cloud…NO SUCH THING…”

Let’s say the grand vision comes to fruition where enterprises begin to outsource to a cloud operator the hosting of previously internal complex mission-critical enterprise applications.

After all, that’s what we’re being told is the Next Big Thing™.

In this version of the universe, the enterprise no longer owns the operational elements involved in making the infrastructure tick — the lights blink, packets get delivered, data is served up — and it costs less, for what is advertised as the same if not better reliability, performance and resilience.

Oh yes, “Security” is magically provided as an integrated functional delivery of service.

Tastes great, less datacenter filling.

So, in a corner-case example, what does a boundary condition like the out-of-cycle release of the MS08-067 patch mean when your infrastructure and applications are no longer yours to manage, and the ownership of the “stack” disintermediates you from being able to control how, when or even if vulnerability remediation anywhere in the stack (from the network on up to the app) is assessed, tested or deployed?

Your application is sitting atop an operating system and underlying infrastructure that is managed by the cloud operator.  This “datacenter OS” may not be virtualized, or could actually be sitting atop a hypervisor which is integrated into the operating system (Xen, Hyper-V, KVM) or perhaps reliant upon a third-party solution such as VMware.  The notion of cloud implies shared infrastructure and hosting platforms, although it does not imply virtualization.

A patch affecting any one of the infrastructure elements could cause a ripple effect on your hosted applications.  Without understanding the underlying infrastructure dependencies in this model, how does one assess risk and determine what any patch might do up or down the stack?  How does an enterprise that has no insight into the “black box” model of the cloud operator set up a dev/test/staging environment that acceptably mimics the operating environment?
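One way to reason about that ripple effect is to model the stack as a dependency graph and compute everything that transitively sits atop the patched layer. A minimal sketch (the component names are hypothetical, and in reality the tenant only sees the top of this graph):

```python
# A hypothetical stack: each component lists what it runs on. A patch to
# any layer potentially affects everything that transitively sits on it.
RUNS_ON = {
    "billing-app": ["guest-os"],
    "web-app":     ["guest-os"],
    "guest-os":    ["hypervisor"],
    "hypervisor":  ["hardware"],
    "hardware":    [],
}

def affected_by(patched, runs_on):
    """Return every component that transitively depends on `patched`."""
    dependents = {c for c, deps in runs_on.items() if patched in deps}
    frontier = set(dependents)
    while frontier:
        # Anything that runs on a newly-affected component is also affected.
        frontier = {c for c, deps in runs_on.items()
                    if frontier & set(deps)} - dependents
        dependents |= frontier
    return dependents

print(sorted(affected_by("hypervisor", RUNS_ON)))
# ['billing-app', 'guest-os', 'web-app']
```

The rub in the cloud model is that the operator holds everything below the application, so the tenant can't even enumerate what a hypervisor patch touches, let alone stage and test against it.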

What happens when the underlying CloudOS gets patched (or needs to be) and blows your applications/VMs sky-high (in the PaaS/IaaS models?)

How does one negotiate the process for determining when and how a patch is deployed?  Where does the cloud operator draw the line?   If the cloud fabric is democratized across constituent enterprise customers, however isolated, how does a cloud provider ensure consistent distributed service?  If an application can be dynamically provisioned anywhere in the fabric, consistency of the platform is critical.

I hate to get all “Star Trek II: The Wrath of Khan” on you, but as Spock said, “The needs of the many outweigh the needs of the few.”  How, when and if a provider might roll a patch has a broad impact across the entire customer base — as it has had in the hosting markets for years — but the types of applications we are talking about here are far different from what we’re used to today, where the applications and the infrastructure are inextricably joined at the hip.

Hosting/SaaS providers today can scale because of one thing: standardization.  Certainly COTS applications can be easily built on standardized tiered models for compute, storage and networking, but again, we’re being told that enterprises will move all their applications to the cloud, and that includes bespoke creations.

If that’s not the case, and we end up with still having to host some apps internally and some apps in the cloud, we’ve gained nothing (from a cost reduction perspective) because we won’t be able to eliminate the infrastructure needed to support either.

Taking it one step further, what happens if there is standardization on the underlying Cloud platform (CloudOS?) and one provider “patches” or updates their Cloud offering but another does not or cannot? If we ultimately talk about VM portability between providers running the “same” platform, what will this mean?  Will things break horribly or be instantiated in an insecure manner?

What about it?  Do you see cloud computing as just an extension of the SaaS and hosting of today?  Do you see dramatically different issues arise based upon the types of information and applications that are being described in this model?  We’ve seen issues such as data ownership, privacy and portability bubble up, but these are much more basic operational questions.

This is obviously a loaded set of questions for which I have much to say — some of which is obvious — but I’d like to start a discussion, not a rant.


*This little ditty was inspired by a Twitter exchange with Bob Rudis, who was complaining that Amazon’s EC2 service did not have the MS08-067 patch built into the AMI… Check out this forum entry from Amazon, however, as it’s rather apropos regarding the very subject of this blog…

Arista Networks: Cloud Networking?

October 24th, 2008 1 comment

Arista Networks is a company stocked with executives whose pedigrees read like a who's who of the networking metaverse.  The CEO of Arista is none other than Jayshree Ullal, the former Senior Vice President at Cisco responsible for their Data Center, Switching and Services business, and Andreas von Bechtolsheim (Sun/Granite/Cisco) serves as Chief Development Officer and Chairman.

I set about to understand what business Arista is in and what problems they aim to solve, given their catchy (kitschy?) tagline of Cloud Networking™.

Arista makes 10GE switches utilizing a Linux-based OS they call EOS, which provides high-performance networking.

The EOS features a "…multi-process state sharing architecture that completely separates networking state from the processing itself. This enables fault recovery and incremental software updates on a fine-grain process basis without affecting the state of the system."
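If I read that correctly, the core trick is keeping networking state outside the process that computes on it, so a process can die and be restarted without the state dying with it. A toy sketch of the pattern (entirely my illustration, assuming nothing about Arista's actual implementation; the class names are hypothetical):

```python
class SharedState:
    """Stands in for an external state database (shared memory, a state
    daemon, etc.) that outlives any individual networking process."""
    def __init__(self):
        self.routes = {}

class ForwardingProcess:
    """A networking process that holds no state of its own; all reads
    and writes go to the shared store."""
    def __init__(self, state):
        self.state = state

    def learn(self, prefix, next_hop):
        self.state.routes[prefix] = next_hop

    def lookup(self, prefix):
        return self.state.routes.get(prefix)

state = SharedState()
p1 = ForwardingProcess(state)
p1.learn("10.0.0.0/8", "eth1")

# "Crash" p1 and restart: the new process instance picks up the same
# state, so forwarding survives the process restart.
del p1
p2 = ForwardingProcess(state)
print(p2.lookup("10.0.0.0/8"))  # eth1
```

The same separation is what makes incremental, per-process software updates plausible: you can swap the processing logic while the state sits untouched next door.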

I read through the definition/criteria that describes Arista's Cloud Networking value proposition: scalability, low latency, guaranteed delivery, extensible management and self-healing resiliency.

These seem like a reasonable set of assertions but I don't see much of a difference between these requirements and the transformative requirements of internal enterprise networks today, especially with the adoption of virtualization and real time infrastructure. 

Pawing through their Cloud Networking Q&A, I was struck by the fact that the fundamental assumptions Arista makes about the definition of Cloud Computing are very myopic, and really seem to echo the immaturity of the definition of the "cloud" TODAY, based upon the industry bellwethers being offered up as examples of leaders in the "cloud" space.

Let's take a look at a couple of points that make me scratch my head:

Q1:     What is Cloud Computing?    
A1: Cloud Computing is hosting applications and data in large centralized datacenters and accessing them from anywhere on the web, including wireless and mobile devices. Typically the applications and data is distributed to make them scalable and fault tolerant. This has been pioneered by applications such as Google Apps and, but by now there are hundreds of services and applications that are available over the net, including platform services such as Amazon Elastic Cloud and Simple Storage Service.

That's a very narrow definition of cloud computing and seems to be rooted in examples of large, centrally-hosted providers today, such as those quoted.  This definition seems to be at odds with other cloud computing providers, such as 3tera and others, who rely on distributed computing resources that may or may not be centrally located.

Q4:     Is Enterprise Cloud Computing the same as Server Virtualization? 
A4:     They are not. Server Virtualization means running multiple virtualized operating systems on a single physical server using a Hypervisor, such as VMware, HyperV, or KVM/XVM.  Cloud computing is delivering scalable applications that run on a remote pool of servers and are available to users from anywhere. Basically all cloud computing applications today run directly on a physical server without the use of virtualization or Hypervisors. However, virtualization is a great building block for enterprise cloud computing environments that use dynamic resource allocation across a pool of servers.

While I don't disagree that consolidation through server virtualization is not the same thing as cloud computing, the statement that "basically all cloud computing applications today run directly on a physical server without the use of virtualization or Hypervisors" is simply untrue.

Q5:     What is Cloud Networking?  
A5:     Cloud Networking is the networking infrastructure required to support cloud computing, which requires fundamental improvement in network scalability, reliability, and latency beyond what traditional enterprise networks have offered.  In each of these dimension the needs of a cloud computing network are at least an order of magnitude greater than for traditional enterprise networks.

I don't see how that assertion has been formulated or substantiated.

I'm puzzled when I look at Arista's assertion that existing and emerging networking solutions from the likes of Cisco are not capable of providing these capabilities while they simultaneously seem to shrug off the convergence of storage and networking.  Perhaps they simply plan on supporting FCoE over 10GE to deal with this?

Further, ignoring the (initial) tighter coupling of networking with virtualization to become more virtualization-aware, such as what we see from the Cisco/VMware partnership delivering VN-Link and the Nexus 1000V, leaves me shaking my head in bewilderment.

Further, with the oft-cited example of Amazon's cloud model as a reference case for Arista, they seem to ignore the fact that EC2 is based upon Xen and now offers both virtualized Linux and Windows VM support for the application stack.

It's unclear to me what problem they solve that distinguishes them from entrenched competitors/market leaders in the networking space, unless the entire value proposition really hinges on lower cost.  Further, I couldn't find much information on who funded Arista (besides the angel round from von Bechtolsheim), and I can't help but wonder if this is another Cisco "spin-in" that is actually underwritten by the Jolly Green Networking Giant.

If you've got any useful G2 on Arista (or you're from Arista and want to chat), please do drop me a line…


Categories: Cisco, Cloud Computing, Virtualization Tags: