Archive for January, 2009

Rational Security: This Site May Harm Your Computer (Damned Right It Will!)

January 31st, 2009 5 comments
HA!  Finally someone (Google) has recognized that my blog is harmful and not fit for either human or computational consumption:



Categories: Jackassery Tags:

Private Clouds: Your Definition Sucks

January 30th, 2009 24 comments

Archie_bunker I think we have a failure to communicate…or at least I do.

Tonight I was listening to David Linthicum’s podcast titled “The Harsh Realities Of Private Clouds” in which he referenced and lauded Dimitry Sotnikov’s blog post of the same name titled “No Real Private Clouds Yet?”
I continue to scratch my head not because of David’s statements that he’s yet to find any “killer applications” for Private Clouds but rather the continued unappetizing use of the definition (quoting Dimitry) of a Private Cloud:

In a nutshell, private clouds are Amazon-like cost-effective and scalable infrastructures but run by companies themselves within their firewalls.

This seems to be inline with Gartner’s view of Private Clouds also:

The future of corporate IT is in private clouds, flexible computing networks modeled after public providers such as Google and Amazon yet built and managed internally for each business’s users

My issue is again that of the referenced location and perimeter.  It’s like we’ve gone back to the 80’s with our screened subnet architectural Maginot lines again!  “This is inside, that is outside.”

That makes absolutely zero sense given the ubiquity, mobility and transitivity of information and platforms today.  I understand the impetus to return back to the mainframe in the sky, but c’mon…

For me, I’d take a much more logical and measured approach to this definition. I think there’s a step missing in the definitions above and how Private Clouds really ought to be described and transitioned to.

I think that the definitions above are too narrow and exclusionary when you consider that they omit solutions like GoGrid’s CloudCenter concept — extending your datacenter via VPN onto a cloud IaaS provider whose infrastructure is not yours, but which offers parity or acceptable similarity to your native datacenter in platform, control, policy enforcement, compliance, security and support.
In this scenario, the differentiator between the “public” and “private” is then simply a descriptor defining from whom and where the information and applications running on that cloud may be accessed:

From the “Internet” = Public Cloud.  From the “Intranet” (via a VPN connection between the internal datacenter and the “outsourced” infrastructure) = Private Cloud.
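That access-path distinction can be stated almost mechanically. Here's a toy sketch in Python — the labels and lookup are mine, purely to illustrate that the classification above hinges on where requests originate, not where the boxes sit:

```python
def classify_cloud(access_path: str) -> str:
    """Classify a cloud deployment by where requests originate,
    not by where the infrastructure physically lives."""
    paths = {
        "internet": "Public Cloud",
        "intranet-vpn": "Private Cloud",  # VPN from internal DC to provider
    }
    return paths.get(access_path, "unknown")

print(classify_cloud("intranet-vpn"))  # -> Private Cloud
```

Note that nothing in the rule asks who owns the hardware — which is exactly the point.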
Check out James Urquhart’s thoughts along these lines in his post titled “The Argument For Private Clouds.”

Private clouds are about extending the enterprise to leverage infrastructure that makes use of cloud computing capabilities and is not (only) about internally locating the resources used to provide service.  It’s also not an all-or-nothing proposition.

It occurs to me that private clouds make a ton of sense as an enabler to enterprises who want to take advantage of cloud computing for any of the oft-cited reasons, but are loathe to (or unable to) surrender their infrastructure and applications without sufficient control.

Private clouds mean that an enterprise can decide how and how much of the infrastructure can/should be maintained as a non-cloud operational concern versus how much can benefit from the cloud.
Private clouds make a ton of sense; they provide the economic benefits of outsourced scalable infrastructure that does not require capital outlay, the needed control over that infrastructure combined with the ability to replicate existing topologies and platforms, and ultimately the portability of applications and workflow.

These capabilities may eliminate the re-write and/or re-engineering of applications, as is often required when moving to a typical IaaS (Infrastructure as a Service) player such as Amazon.
From a security perspective — which is very much my focus — private clouds provide me with a way of articulating and expressing the value of cloud computing while still enabling me to manage risk to an acceptable level as chartered by my mandate.

So why wouldn’t a solution like GoGrid’s CloudCenter offering paired with CohesiveFT’s VPN Cubed and no direct “public” Internet originated access to my resources count as Private Cloud Computing?
I get all the benefits of elasticity, utility billing, storage, etc., don’t have to purchase the hardware, and I decide based upon risk what I am willing to yield to that infrastructure.
David brought up the notion of proprietary vendor lock-in, yet we see that GoGrid has also open sourced their CloudCenter API OpenSpec…
Clearly I’m mad, because I simply don’t see why folks are painting Private Clouds into a corner only to say that we’re years away from recognizing their utility when, in fact, we have the technology, business need and capability to deliver them today.
Categories: Cloud Computing, Cloud Security Tags:

Cloud Computing Taxonomy & Ontology :: Please Review

January 28th, 2009 36 comments

NOTE: Please see the continued discussion in the post titled “Update on the Cloud (Ontology/Taxonomy) Model…”

Updated: 3/28/09 v1.5

There have been some excellent discussions of late regarding how to classify and explain the relationships between the many Cloud Computing models floating about.

I was inspired by John Willis’ blog post this morning titled “Unified Ontology of Cloud Computing” in which he scraped together many ideas on the subject.
I’m building a number of presentations for discussing Cloud Security and I’ve also been working on how to show both the taxonomy and ontology of various Cloud components and models.  I think it’s really a blind mash-up of many of the things John points to, but the others I’ve seen don’t serve my needs completely.  My goal is to gain consensus on the model and then explore each layer and its security requirements and impacts on the model as a whole.
Here’s my first second third draft based on the awesome feedback I’ve received so far.
I’m not going to explain the layers/levels or groupings because I want people’s reactions and feedback to what they get from the diagram without color from me first.  There will likely be things that aren’t clear enough or even inaccuracies and missing elements.
If you could kindly give me your feedback on your first (unabashed) impressions, I’d really appreciate it.

NOTE: TypePad’s comment subsystem is having problems.  I’m going to close the comments until it’s resolved as the excellent (16 or so) comments are not showing up and I don’t want people adding comments using the old system… Please send me comments via email (choff @ or via Twitter @beaker) in the meantime.  Thanks SO much.

The comments are working again.  I’ve had 30-40 comments via email/twitter, so if something you wanted to communicate isn’t addressed, fire away below in the comments!

Version 1.5 Diagram (click to expand):

In v1.5 I highlighted the Integration/Middleware layer in a separate color, removed Coghead from the PaaS offering example and made a few other cosmetic alignment changes.

In v1.4 I added the API layer above ‘Applications’ in the SaaS grouping. I split out “data, metadata and content” as three separate elements and added structured/unstructured to the right.  I also separated the presentation layer into “modality and platform.”  Added some examples of layers to the very right.

The v1.4 diagram is here.
The v1.3 diagram is here.
The v1.2 diagram is here.
The v1.1 diagram is here.
The original v1.0 diagram is here.

Cloud Security Link Love: Monk Style…

January 25th, 2009 1 comment

John Gerber from the System Advancements at the Monastery blog compiled an awesome round-up of Cloud related news/postings.

The blog entry covers many areas of the cloud including security, which I greatly appreciate.

Check it out here.  Well worth the read and the perspective.


Categories: Cloud Computing, Cloud Security Tags:

PCI Security Standards Council to Form Virtualization SIG…

January 24th, 2009 1 comment

I'm happy to say that there appears to be some good news on the PCI DSS front with the promise of a SIG being formed this year for virtualization.  This is a good thing. 

You'll remember my calls for better guidance for both virtualization and ultimately cloud computing from the council given the proliferation of these technologies and the impact they will have on both security and compliance.

In that light, news comes from Troy Leach, technical director of the PCI Security Standards Council via a kind note to me from Michael Hoesing:

A PCI SSC Special Interest Group (SIG) for virtualization is most likely coming this year but we don't have any firm dates or objectives as of yet.  We will be soliciting feedback from our Participating Organizations which is comprised of more than 500 companies (which include Vmware, Microsoft, Dell, etc) as well as industry subject matter experts such as the 1,800+ security assessors that currently perform assessments as either a Qualified Security Assessor or Approved Scanning Vendor (ASV).

The PCI SSC Participating Organization program allows industry stakeholders an opportunity to provide feedback on all standards and supporting procedures.  Information to join as a Participating Organization can be found here on our website.

This is a good first step.  If you've got input, make sure to contribute!


Categories: Compliance, PCI, Virtualization, VMware Tags:

A Couple Of Follow-Ups On The EDoS (Economic Denial Of Sustainability) Concept…

January 23rd, 2009 25 comments

I wrote about the notion of EDoS (Economic Denial Of Sustainability) back in November.  You can find the original blog post here.

The basic premise of the concept was the following:

I had a thought about how the utility and agility of the cloud computing models such as Amazon AWS (EC2/S3) and the pricing models that go along with them can actually pose a very nasty risk to those who use the cloud to provide service.

That thought got me noodling about how the pay-as-you-go model could be used for nefarious means.

Specifically, this usage-based model potentially enables $evil_person who knows that a service is cloud-based to manipulate service usage billing in orders of magnitude that could be disguised easily as legitimate use of the service but drive costs to unmanageable levels.

If you take Amazon's AWS usage-based pricing model (check out the cost calculator here), one might envision that instead of worrying about a lack of resources, the elasticity of the cloud could actually provide a surplus of compute, network and storage utility that could be just as bad as a deficit.

Instead of worrying about Distributed Denial of Service (DDoS) attacks from botnets and the like, imagine having to worry about delicately balancing forecasted need with capabilities like Cloudbursting to deal with a botnet designed to make seemingly legitimate requests for service to generate an economic denial of sustainability (EDoS) — where the dynamism of the infrastructure allows scaling of service beyond the economic means of the vendor to pay their cloud-based service bills.
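To make the economics concrete, here is a minimal Python sketch of how usage-based pricing turns request volume directly into a bill. The rates and traffic figures are invented for illustration — they are not Amazon's actual prices:

```python
# Hypothetical usage-based pricing (illustrative rates, not any provider's).
PRICE_PER_MILLION_REQUESTS = 0.40   # dollars
PRICE_PER_GB_TRANSFER = 0.17        # dollars

def monthly_bill(requests_per_day, avg_response_kb, days=30):
    """Estimate a cloud bill driven purely by request volume."""
    total_requests = requests_per_day * days
    transfer_gb = total_requests * avg_response_kb / (1024 * 1024)
    return (total_requests / 1e6) * PRICE_PER_MILLION_REQUESTS \
        + transfer_gb * PRICE_PER_GB_TRANSFER

normal = monthly_bill(requests_per_day=500_000, avg_response_kb=50)

# A botnet quietly multiplying "legitimate-looking" traffic 50x:
attacked = monthly_bill(requests_per_day=25_000_000, avg_response_kb=50)

print(f"normal month:   ${normal:,.2f}")
print(f"attacked month: ${attacked:,.2f}")
```

The bill scales linearly with traffic, so the attacker never needs to knock the service over — just keep the meter spinning.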

At any rate, here are a couple of interesting related items:

  1. Wei Yan, a threat researcher for Trend Micro, recently submitted an IEEE journal submission titled "Anti-Virus In-the-Cloud Service: Are We Ready for the Security Evolution?" in which he discusses an interesting concept for cloud-based AV and also cites/references my EDoS concept.  Thanks, Wei!
  2. There is a tangential story making the rounds recently about how researcher Brett O'Connor has managed to harness Amazon's EC2 to harvest/host/seed BitTorrent files.

    The relevant quote from the story that relates to EDoS is really about the visibility (or lack thereof) as to how cloud networks in their abstraction are being used and how the costs associated with that use might impact the cloud providers themselves.  Remember, the providers have to pay for the infrastructure even if the "consumers" do not:

    "This means, says Hobson, that hackers and other interested parties can simply use a prepaid (and anonymous) debit card to pay the $75 a month fee to Amazon and harvest BitTorrent applications at high speed with little or no chance of detection…

    It's not clear that O'Connor's clever work-out represents anything new in principle, but it does raise the issue of how cloud computing providers plan to monitor and manage what their services are being used for."
It's likely we'll see additional topics that relate to EDoS soon.

UPDATE: Let me try and give a clear example that differentiates EDoS from DDoS in a cloud context, although ultimately the two concepts are related:

DDoS (and DoS for that matter) attacks are blunt force trauma. The goal, regardless of motive, is to overwhelm infrastructure and remove from service a networked target by employing a distributed number of $evil_doers.  Example: a botnet is activated to swarm/overwhelm an Internet connected website using an asynchronous attack which makes the site unavailable due to an exhaustion of resources (compute, network or storage.)

EDoS attacks are death by 1000 cuts.  EDoS can also utilize distributed $evil_doers as well as single entities, but works by making legitimate web requests at volumes that may appear to be "normal" but are done so to drive compute, network and storage utility billings in a cloud model abnormally high.  Example: a botnet is activated to visit a website whose income results from ecommerce purchases.  The requests are all legitimate, but the purchases are never made.  The vendor has to pay the cloud provider for increased elastic use of resources where revenue was never recognized to offset them.

We have anti-DDoS capabilities today with tools that are quite mature.  DDoS is generally easy to spot given huge increases in traffic.  EDoS attacks are not necessarily easy to detect, because the instrumentation and business logic is not present in most applications or stacks of applications and infrastructure to provide the correlation between "requests" and "successful transactions."  In the example above, increased requests may look like normal activity.
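Here's a sketch of the kind of request-to-transaction correlation that's missing from most stacks — comparing traffic volume against completed purchases so that "normal-looking" requests with no revenue behind them stand out.  The thresholds and data shapes are mine, purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class WindowStats:
    requests: int       # HTTP requests served in the time window
    transactions: int   # completed (revenue-generating) purchases

def edos_suspect(current: WindowStats, baseline: WindowStats,
                 max_ratio_drift: float = 3.0) -> bool:
    """Flag a window whose requests-per-transaction ratio has drifted
    far above the historical baseline -- traffic looks legitimate,
    but nobody is buying anything."""
    base_ratio = baseline.requests / max(baseline.transactions, 1)
    cur_ratio = current.requests / max(current.transactions, 1)
    return cur_ratio > base_ratio * max_ratio_drift

baseline = WindowStats(requests=100_000, transactions=1_200)
quiet_attack = WindowStats(requests=900_000, transactions=1_150)

print(edos_suspect(quiet_attack, baseline))  # ratio exploded -> True
```

A volume-based DDoS detector tuned for traffic spikes would shrug at the second window; only the business-level ratio gives it away.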

Given the attractiveness of the cloud to startups and SME/SMBs for cost and agility, this presents a problem.  The SME/SMB customers do not generally invest in this sort of integration, the cloud computing platform providers generally do not have the intelligence and visibility into these applications which they do not own, and typical DDoS tools don't, either.

So DDoS and EDoS ultimately can end with the same outcome — the target withers and ceases to be able to offer service — but I think that EDoS is something significant that should be discussed and investigated.


What To Do When Your “Core” Infrastructure Services Aren’t In Your “Core?”

January 21st, 2009 11 comments
Okay.  I am teh lam3r.  I'd be intellectually dishonest if I didn't post this, and it's likely I'll revise it once I get to think about it more, but I've got to get it down.  Thanks to an innocent tweet from @botchagalupe I had an aneurysm epiphany.  Sort of 😉

A little light went on in my head this morning regarding how the cloud, or more specifically layers of clouds and the functions they provide (a la SOA), dramatically impact the changing landscape of what we consider "core infrastructure services," our choices on architecture and service provisioning, and how and from whence they are provided.

Specifically, the synapse fired on the connection between Infrastructure 2.0 as is usually talked about from the perspective of the evolution from the enterprise inside to out versus the deployment of services constructed from scratch to play in the cloud.

You've no doubt seen discussions from Greg Ness (InfoBlox) and Lori Mac Vittie (f5) regarding their interpretation of Infrastructure 2.0 and the notion that by decoupling infrastructure services from their physical affinity we can actually "…enable greater levels of integration between the disparate layers of infrastructure: network, application, the endpoint, and IP address management, necessary to achieve interconnectedness."

Totally agree.  Been there, done that, bought the T-Shirt, but something wasn't clicking as it relates to what this means relative to cloud.

I was slurping down some java this morning and three things popped into my head as I was flipping between Twitter and Google Reader wondering about how I might consider launching a cloud-based service architecture and what impact it would have on my choices for infrastructure and providers.

Here are the three things that I started to think about in regards to what "infrastructure 2.0" might mean to me in this process, beyond the normal criteria related to management, security, scalability, etc…
  1. I always looked at these discussions of Infrastructure 2.0 as ideation/marketing by vendors on how to take products that used to function in the "Infrastructure 1.0" dominion, add a service control plane/channel and adapt them for the inside-out version of the new world order that is cloud. This is the same sort of thing we've dealt with for decades and was highlighted when one day we all discovered the Internet and had to connect to it — although in that case we had standards!
  2. Clouds are often discussed in either microcosmic vacuum or lofty, fluffy immensity and it makes it hard to see the stratosphere for the cirrocumulus.  Our "non-cloud" internal enterprises today are conglomerates of technology integration with pockets of core services which provide the underpinnings for much of what keeps the machinery running.  Cloud computing is similar in approach, but in this regard, it brings home again the point that there is no such thing as "THE Cloud" but rather that the overarching integration challenge lies in the notion of overlays or mash-ups of multiple clouds, their functions, and their associated platforms and API's. 
  3. Further, and as to my last blog post on private clouds and location independence, I really do believe that the notion of internal versus external clouds is moot, but that the definitional nuance of public versus private clouds — and their requisite control requirements — are quite important.  Where, why, how and by whom services are provided becomes challenging because the distinction between inside and out can be really, really fuzzy, even more so if you're entirely cloud based in the first place.
For some reason, my thinking never really coalesced on what relevance these three points have as it relates to the delivery of a service (and thus layers of applications) in a purely cloud-based architecture built from scratch without the encumbrance of legacy infrastructure solutions.

I found this awesome blog post from Mike Brittain via a tweet from @botchagalupe titled "How we built a web hosting infrastructure on EC2" and even though the article is a fascinating read, the single diagram in the post hit me like a hammer in the head…and I don't know why it did, because it's not THAT profound, but it jiggled something loose that is probably obvious to everyone else already:

Do you see the first three layers?  Besides the "Internet" as the transport, you'll see two of the most important service delivery functions staring back at you: Akamai's "Site Accelerator Proxy" CDN/Caching/Optimization offering and Neustar's "UltraDNS" distributed, topologically intelligent DNS services.

Both of these critical services (one might say "core infrastructure 2.0" services) are, themselves, cloud-based.  Of course, the entire EC2/S3 environment which hosts the web services is cloud-based, too.

The reason the light bulb went on for me is that I found that I was still caught in the old school infrastructure-as-a-box line of thought when it came to how I might provide the CDN/Caching and distributed DNS capabilities of my imaginary service.

It's likely I would have dropped right to the weeds and started thinking about which geographic load balancers (boxes) and/or proxies I might deploy somewhere and how (or if) they might integrate with the cloud "hosting/platform provider" to give me the resiliency and dynamic capabilities I wanted, let alone firewalls, IDP, etc.

Do I pick a provider that offers a specific hardware-based load-balancing platform as part of its infrastructure?  Do I pick one that can accommodate the integration of software-based virtual appliances?  Should I care?  With the cloud I'm not supposed to, but I find that I still — for many reasons, good and bad — do.

I never really thought about simply using a cloud-based service as a component in a mash-up of services that already does these things in ways that would be much cheaper, simpler, resilient and scalable than I could construct with "infrastructure 1.0" thinking.   Heck, I could pick 2 or 3 of them, perhaps. 

That being said, I've used outsourced "cloud-based" email filtering, vulnerability management, intrusion detection & prevention services, etc., but there are still some functions that for some reason appear to be sacrosanct in the recesses of my mind.

I think I always just assumed that the stacking of outsourced (commoditized) services across multiple providers would be too complex but in reality, it's not very different from my internal enterprise that has taken decades to mature many of these functions (and consolidate them.)

Despite the relative immaturity of the cloud, it's instantly benefited from this evolution. Now, we're not quite all the way there yet.  We still are lacking standards and that service control plane shared amongst service layers doesn't really exist.

I think it's a huge step to recognize that it's time to get over the bias of applying so called "infrastructure 1.0" requirements to the rules of engagement in the cloud by recognizing that many of these capabilities don't exist in the enterprise, either.

Now, it's highly likely that the two players above (Neustar and Akamai) may very well use the same boxes that *I* might have chosen anyway, but it's irrelevant.  It's all about the service and engineering enough resiliency into the design (and choices of providers) such that I mitigate the risk of perhaps not having that "best of breed" name plate on a fancy set of equipment somewhere.

I can't believe the trap I fell into in terms of my first knee-jerk reaction regarding architecture, especially since I've spent so much of the last 5 years helping architect and implement "cloud" or "cloud-like" security services for outsourced capabilities.

So anyway, you're probably sitting here saying "hey, idiot, this is rather obvious and is the entire underlying premise of this cloud thing you supposedly understand inside and out."  That comment would be well deserved, but I had to be honest and tell you that it never really clicked until I saw this fantastic example from Mike.



Mixing Metaphors: Private Clouds Aren’t Defined By Their Location…

January 20th, 2009 3 comments

There's been a ton of back and forth recently debating the arguments — pro and con — of the need for and very existence of "private clouds."

Rather than play link ping-pong, go read James Urquhart's post on the topic titled "The argument FOR private clouds" which features the various positions on the matter.  

What's really confusing about many of these debates is how many of them distract from the core definition and proposition served by the concept of private clouds.

You will note that many of those involved in the debates subtly change the argument from discussing "private clouds" as a service model to instead focus on the location of the infrastructure used to provide service by using wording such as "internal clouds" or "in-house clouds."  I believe these are mutually exclusive topics.

With the re-perimeterization of our enterprises, the notion of "internal" versus "external" is moot.  Why try and reintroduce the failed (imaginary) Maginot line back into the argument again?

These arguments are oxymoronic given the nature of cloud services; by definition cloud computing implies infrastructure you don't necessarily own, so to exclude that by suggesting private clouds are "in-house" defies logic.  Now, I suppose one might semantically suggest that a cloud service provider could co-locate infrastructure in an enterprise's existing datacenter to offer an "in-house private cloud," but that doesn't really make sense, does it?

Private clouds are about extending the enterprise to leverage infrastructure that makes use of cloud computing capabilities and is not about internally locating the resources used to provide service.  It's also not an all-or-nothing proposition.  

Remember also that cloud computing does NOT imply virtualization, so suggesting that using the latter gets you the former — something you can brand as a "cloud" — is a false equivalence.  Enterprise modernization through virtualization is not cloud computing.  It can certainly be part of the process, but let's not mix metaphors further.

It occurs to me that private clouds make a ton of sense as an enabler to enterprises who want to take advantage of cloud computing for any of the oft-cited reasons, but are loathe to (or unable to) surrender their infrastructure and applications without sufficient control. 

Further, there are some compelling reasons that a methodical and measured approach migrating/evolving to cloud computing makes a lot of sense, not the least of which James has already mentioned: existing sunk costs in owned data center infrastructure.  It's unlikely that a large enterprise will simply be able to write off millions of dollars of non-depreciated assets they've already purchased.

Then there are the common sense issues like maturity of technology and service providers, regulatory issues, control, resiliency, etc.  

Private clouds mean that an enterprise can decide how and how much of the infrastructure can/should be maintained as a non-cloud operational concern versus how much can benefit from the cloud.

Private clouds make a ton of sense; they provide the economic benefits of outsourced scalable infrastructure that does not require capital outlay, the needed control over that infrastructure combined with the ability to replicate existing topologies and platforms, and ultimately the portability of applications and workflow.

These capabilities may eliminate the re-write and/or re-engineering of applications, as is often required when moving to a typical IaaS (Infrastructure as a Service) player such as Amazon.

From a security perspective — which is very much my focus — private clouds provide me with a way of articulating and expressing the value of cloud computing while still enabling me to manage risk to an acceptable level as chartered by my mandate.

A model that makes sense to me is that of GoGrid's "CloudCenter" concept which I'll review under separate cover; there's definitely some creative marketing going on when discussing the blending of traditional co-location capabilities and the dynamic scalability and on-demand usage/billing of the cloud, but we'll weed through this soon enough.


P.S. I really liked Chuck Hollis' (EMC) post on the topic, here.
Categories: Cloud Computing, Cloud Security Tags:

The Cloud is to Managed Infrastructure as Guitar Hero is to Karaoke…

January 18th, 2009 2 comments

How many of your friends do you know that would never be caught dead at a karaoke bar belting out 80's hair band tunes and looking like complete tools? 

How many of them are completely unafraid, however, to make complete idiots of themselves and rock out to the same musical arrangements in front of total strangers because instead of "karaoke" it's called "Guitar Hero" and runs on an XBox in the living room rather than the "Tiki Room" on Wednesday nights?

With all the definitions of the Cloud and the vagaries associated with differentiated value propositions of each, folks have begun to use the phrases "jumping the shark" and "Cloud Computing" in the same breath.

For the sake of argument, if we boil down what Cloud Computing means in simpler and more familiar terms and agree to use rPath's definition (from Cloud Computing in Plain English) as an oversimplified example we get:


Where Cloud Computing is the convergence of 3 major trends:

Virtualization: Where applications are separated from infrastructure
Utility Computing: Server capacity is accessed across a grid as a variably priced shared service
SaaS: Applications are available on-demand on a subscription basis

Again, overly-simplified example notwithstanding, what's interesting to me — and the reason for the goofy title and metaphor associated with this post — is that with the popularity of "Cloud" becoming the umbrella terminology for the application of proven concepts (above) which harness technology and approaches we already have, we're basically re-branding a framework of existing capabilities and looking to integrate them better.

…oh, and make a buck, too.

That's not to diminish the impact and even value of the macro-trends associated with Cloud such as re-perimeterization, outsourcing, taking cost out of the business, economies of scale, etc.; it's just a much more marketable way of describing them.

The cloud: a cooler version of Internet karaoke…


*Image of Triston McIntyre from ITKnowledgeExchange

BeanSec! Wednesday, January 21st, 2009 – 6PM to ?

January 16th, 2009 No comments

Yo!  BeanSec! is once again upon us.  Wednesday, January 21st, 2009.

Middlesex Lounge: 315 Massachusetts Ave, Cambridge 02139. 

BeanSec! is an informal meetup of information security professionals, researchers and academics in the Greater Boston area that meets the third Wednesday of each month.

I say again, BeanSec! is hosted the third Wednesday of every month.  Add it to your calendar.

Come get your grub on.  Lots of good people show up.  Really.

Unlike other meetings, you will not be expected to pay dues, “join up”, present a zero-day exploit, or defend your dissertation to attend.

Don't worry about being "late" because most people just show up when they can.  6:30 is a good time to aim for.  We'll try and save you a seat.  There is plenty of parking around, or take the T.

The food selection is basically high-end finger-food appetizers and the drinks are really good; an attentive staff and eclectic clientèle make the joint fun for people watching.  I'll generally annoy you into participating somehow, even if it's just fetching napkins. 😉

Previously I had gracious sponsorship that allowed me to pick up the tab during BeanSec!, but the prevailing economic conditions make that not possible at this time.  If you or your company would like to sponsor this excellent networking and knowledge base, please get in contact with me [choff @ packetfilter . com].

See you there.

/Hoff, /0Day, and /Weld

Categories: BeanSec! Tags: