Archive

Posts Tagged ‘Cloud Computing’

My Information Security Magazine Cover Story: “Virtualization security dynamics get old, changes ahead”

November 4th, 2013 2 comments

This month’s Search Security (née Information Security Magazine) cover story was penned by none other than yours truly and titled “Virtualization security dynamics get old, changes ahead.”

I hope you enjoy the story; it’s a retrospective on the beginnings of security in the virtual space, where we are now, and where we’re headed.

I tried very hard to make this a vendor-neutral look at the state of the union of virtual security.

I hope that it’s useful.

You can find the story here.

/Hoff


Wanna Be A Security Player? Deliver It In Software As A Service Layer…

January 9th, 2013 1 comment

As I continue to think about the opportunities that Software Defined Networking (SDN) and Network Function Virtualization (NFV) bring into focus, the capability to deliver security as a service layer is indeed exciting.

I wrote about how SDN and OpenFlow (as a functional example) and the security use cases provided by each will be a differentiating capability back in 2011: The Killer App For OpenFlow and SDN? Security; OpenFlow & SDN – Looking forward to SDNS: Software Defined Network Security; and Back To The Future: Network Segmentation & More Moaning About Zoning.

Recent activity in the space has done nothing but reinforce this opinion.  My day job isn’t exactly lacking in excitement, either 🙂

As many networking vendors begin to bring their SDN solutions to market — whether in the form of networking equipment or controllers designed to interact with them — one of the missing strategic components is security.  This isn’t a new phenomenon, unfortunately, and so, predictably, there are now startups entering this space and/or retooling from the virtualization space, stealthily advertising themselves as “SDN Security” companies 🙂

As we’ve seen many times before, security is often described (confused?) as a “simple” or “atomic” service, and so SDN networking solutions are designed with the thought that security will simply be “bolted on” after the fact and deployed not unlike a network service such as “load balancing.”  The old “we’ll just fire up some VMs and TAMO (Then a Miracle Occurs) we’ve got security!” scenario.  Or worse yet, we’ll develop some proprietary protocol or insertion architecture that will magically get traffic to and from physical security controls (witness the “U-TURN” or “horseshoe” L2/L3 solutions of yesteryear.)

The challenge is that much of security today is still very topologically sensitive: it either depends upon classical networking constructs being physically or logically plumbed between the “outside” and the asset under protection, or it’s very platform dependent and lacks the ability to truly define a policy that travels with the workload regardless of the virtualization, underlay OR overlay solutions.

Depending upon the type of control and its desired effect, security is often operationalized across multiple layers using wildly different constructs, APIs and context in terms of policy and disposition.

Virtualization has certainly forced us to think differently about security, mostly due to the dynamism and mobility it has introduced, but the platforms themselves are still incredibly nascent in terms of exposed security capabilities.  It’s been almost five years since I started raging about how we need(ed) platform providers to give us capabilities that function across stacks so we’d have a fighting chance.  To date, not only do we have perhaps ONE vendor doing some of this, but we’ve also seen the emergence of others who are maniacally focused on providing as little of it as possible.

If you think about what virtualization offers us today from a security perspective, we have the following general solution options:

  1. Hypervisor-based security solutions which may apply policy as a function of the virtual NIC of the workloads they protect.
  2. Extensions of virtual-networking (i.e. switching) solutions that enable traffic steering and some policy enforcement that often depend upon…
  3. Virtual appliance-based security solutions that require manual or automated provisioning, orchestration and policy application in user space, and that may or may not utilize APIs exposed by the virtual networking layer or hypervisor.

There are tradeoffs across each of these solutions: scale, performance, manageability, statefulness, platform dependencies, etc.  There simply aren’t many platforms that natively offer security capabilities as a function of service delivery, allowing arbitrary service definition with consistent and uniform ways of describing the outcome of the policies at these various layers.  I covered this back in 2008 (it’s a shame nothing has really changed) in my Four Horsemen Of the Virtual Security Apocalypse presentation.

As I’ve complained for years, we still have 20 different ways of defining how to instantiate a five-tuple ACL as a basic firewall function.
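To illustrate just how divergent those definitions get, here’s a minimal sketch showing one and the same five-tuple policy rendered in three common dialects. The iptables-style string, the security-group-style dict and the OpenFlow-style match below are loose approximations chosen for illustration, not authoritative syntax for any particular product:

```python
# One "allow TCP/443 from 10.0.0.0/24 to 10.0.1.10" policy, expressed three ways.
# All three renderings are illustrative approximations, not vendor-exact syntax.
five_tuple = {
    "src_ip": "10.0.0.0/24",
    "dst_ip": "10.0.1.10/32",
    "protocol": "tcp",
    "src_port": "any",
    "dst_port": 443,
}

# 1. A classic CLI firewall rule (iptables-style string)
iptables_rule = (
    f"-A FORWARD -p {five_tuple['protocol']} "
    f"-s {five_tuple['src_ip']} -d {five_tuple['dst_ip']} "
    f"--dport {five_tuple['dst_port']} -j ACCEPT"
)

# 2. A cloud security-group-style permission (AWS-like dict)
security_group_permission = {
    "IpProtocol": five_tuple["protocol"],
    "FromPort": five_tuple["dst_port"],
    "ToPort": five_tuple["dst_port"],
    "IpRanges": [{"CidrIp": five_tuple["src_ip"]}],
}

# 3. An OpenFlow-style match (field names approximate the OpenFlow 1.3 vocabulary)
openflow_match = {
    "eth_type": 0x0800,   # IPv4
    "ip_proto": 6,        # TCP
    "ipv4_src": five_tuple["src_ip"],
    "ipv4_dst": five_tuple["dst_ip"],
    "tcp_dst": five_tuple["dst_port"],
}

print(iptables_rule)
```

Same intent, three incompatible grammars — and that’s before anyone argues about statefulness, direction or logging.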

Out of the Darkness…

The promise of SDN truly realized — the ability to separate the control, forwarding, management and services planes and to deploy security as a function of available service components across overlays and underlays — means we will be able to take advantage of any of these models, so long as we have a way to programmatically interface with the various strata, regardless of whether we provision at the physical, virtual or overlay-virtual layer.

It’s truly exciting.  We’re seeing some real effort to enable true security service delivery.

When I think about how to categorize the intersection of “SDN” and “Security,” I think about it the same way I have with virtualization and Cloud:

  • Securing SDN (Securing the SDN components)
  • SDN Security Services (How do I take security and use SDN to deliver security as a service)
  • Security via SDN (What NEW security capabilities can be derived from SDN)

There are numerous opportunities with each of these categories to really make a difference to security in the coming years.

The notion that many of our network and security capabilities are becoming programmatic means we *really* need to focus on securing SDN solutions, especially given the potential for abuse that the separation of the various channels introduces. (See: Software Defined Networking (In)Security: All Your Control Plane Are Belong To Us…)

Delivering security as a service via SDN holds enormous promise for reasons I’ve already articulated and gives us an amazing foundation upon which to start building solutions we can’t imagine today given the lack of dynamism in our security architecture and design patterns.
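As a concrete, if deliberately tiny, sketch of what “security as a service via SDN” can look like at the API level, consider a client that asks an SDN controller to install a drop rule for a suspicious five-tuple. The controller address, endpoint and JSON schema below are hypothetical placeholders (controllers such as Ryu expose comparable REST interfaces), not any vendor’s documented API:

```python
# Hypothetical sketch: a security service pushes a drop rule to an SDN controller.
# The controller URL, endpoint and payload schema are illustrative assumptions.
import json
import urllib.request

CONTROLLER = "http://controller.example.local:8080"  # assumed controller address

def block_flow(dpid: int, src_ip: str, dst_ip: str, dst_port: int) -> int:
    """Ask the controller to install a high-priority drop rule for one flow."""
    flow = {
        "dpid": dpid,
        "priority": 1000,
        "match": {
            "eth_type": 0x0800,     # IPv4
            "ip_proto": 6,          # TCP
            "ipv4_src": src_ip,
            "ipv4_dst": dst_ip,
            "tcp_dst": dst_port,
        },
        "actions": [],              # an empty action list means "drop"
    }
    req = urllib.request.Request(
        f"{CONTROLLER}/flows/add",  # hypothetical endpoint
        data=json.dumps(flow).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    block_flow(dpid=1, src_ip="203.0.113.7", dst_ip="10.0.1.10", dst_port=443)
```

The interesting part isn’t the ten lines of code; it’s that the enforcement point is wherever the controller programs it to be, which is exactly the dynamism our current security architectures lack.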

Finally, the first two elements give rise to the third, allowing us to do things we can’t even imagine with today’s traditional physical and even virtual solutions.

I’ll be starting to highlight really interesting solutions I find (and am able to talk about) over the next few months.

Security enabled by SDN is going to be huge.

More soon.

/Hoff


Why Amazon Web Services (AWS) Is the Best Thing To Happen To Security & Why I Desperately Want It To Succeed

November 29th, 2012 15 comments

Many people who may only casually read my blog or peer at the timeline of my tweets may come away with the opinion that I suffer from confirmation bias when I speak about security and Cloud.

That is, many conclude that I am pro Private Cloud and against Public Cloud.

I find this deliciously ironic and wildly inaccurate. However, I must also take responsibility for this, as anytime one threads the needle and attempts to present a view from both sides with regard to incendiary topics without planting a polarizing stake in the ground, it gets confusing.

Let me clear some things up.

Digging deeper into what I believe, one would actually find that my blog, tweets, presentations, talks and keynotes highlight deficiencies in current security practices and solutions on the part of providers, practitioners and users in both Public AND Private Cloud, and in my own estimation, deliver an operationally-centric perspective that is reasonably critical and yet sensitive to emergent paths as well as the well-trodden path behind us.

I’m not a developer.  I dabble in little bits of code (interpreted and compiled) for humor and to try and remain relevant.  Nor am I an application security expert for the same reason.  However, I spend a lot of time around developers of all sorts, those that write code for machines whose end goal isn’t to deliver applications directly, but rather help deliver them securely.  Which may seem odd as you read on…

The name of this blog, Rational Survivability, highlights my belief that the last two decades of security architecture and practices — while useful in foundation — require a rather aggressive tune-up of priorities.

Our trust models, architecture, and operational silos have not kept pace with the velocity of the environments they were initially designed to support and unfortunately as defenders, we’ve been outpaced by both developers and attackers.

Since we’ve come to the conclusion that there’s no such thing as perfect security, “survivability” is a better goal.  Survivability leverages “security” and is ultimately a subset of resilience but is defined as the “…capability of a system to fulfill its mission, in a timely manner, in the presence of attacks, failures, or accidents.”  You might be interested in this little ditty from back in 2007 on the topic.

Sharp readers will immediately recognize the parallels between this definition of “survivability,” how security applies within context, and how phrases like “design for failure” align.  In fact, this is one of the calling cards of a company that has become synonymous with (IaaS) Public Cloud: Amazon Web Services (AWS.)  I’ll use them as an example going forward.

So here’s a line in the sand that I think will be polarizing enough:

I really hope that AWS continues to gain traction with the Enterprise.  I hope that AWS continues to disrupt the network and security ecosystem.  I hope that AWS continues to pressure the status quo and I hope that they do it quickly.

Why?

Almost a decade ago, the Open Group’s Jericho Forum published their Commandments.  Designed to promote a change in thinking and operational constructs with respect to security, what they presciently released upon the world describes a point at which one might imagine taking one’s most important assets and connecting them directly to the Internet — and the shifts in thinking required to understand what that would mean for “security”:

  1. The scope and level of protection should be specific and appropriate to the asset at risk.
  2. Security mechanisms must be pervasive, simple, scalable, and easy to manage.
  3. Assume context at your peril.
  4. Devices and applications must communicate using open, secure protocols.
  5. All devices must be capable of maintaining their security policy on an un-trusted network.
  6. All people, processes, and technology must have declared and transparent levels of trust for any transaction to take place.
  7. Mutual trust assurance levels must be determinable.
  8. Authentication, authorization, and accountability must interoperate/exchange outside of your locus/area of control
  9. Access to data should be controlled by security attributes of the data itself
  10. Data privacy (and security of any asset of sufficiently high value) requires a segregation of duties/privileges
  11. By default, data must be appropriately secured when stored, in transit, and in use.

These seem harmless enough today, but were quite unsettling when paired with the notion of “de-perimeterization,” which was often misconstrued to mean the immediate disposal of firewalls.  Many security professionals appreciated the commandments for what they expressed, but the design patterns, availability of solutions and belief systems of traditionalists constrained traction.

Interestingly enough, now that the technology, platforms, and utility services have evolved to enable these sorts of capabilities, and in fact have stressed our approaches to date, these exact tenets are what Public Cloud forces us to come to terms with.

If one maps what public cloud services like AWS mean for traditional “enterprise” security architecture, operations and solutions against the Jericho Forum’s Commandments, it enables a perfect rethink.

Instead of being focused on implementing “security” to protect applications and information at the network layer — which is more often than not blind to both, contextually and semantically — public cloud computing forces us to shift our security models back to protecting the things that matter most: the information and the conduits that traffic in it (applications.)

As networks become more abstracted, so must our security models.  This means that we must think about security programmatically, embedded as a functional delivery requirement of the application.
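A trivially small sketch of what “embedded as a functional delivery requirement of the application” can mean in practice: the application protects its own data before that data ever touches storage or a network, so the protection travels with the information rather than living in a box in front of it. This assumes the third-party cryptography package; key management is hand-waved here, and in real life it’s the hard part:

```python
# Minimal sketch of data-centric security built into the application itself.
# Requires the third-party `cryptography` package (pip install cryptography).
# Key handling is deliberately simplified; a real system would use a KMS/HSM.
from cryptography.fernet import Fernet

def protect(record: bytes, key: bytes) -> bytes:
    """Encrypt a record before handing it to any storage or transport layer."""
    return Fernet(key).encrypt(record)

def recover(token: bytes, key: bytes) -> bytes:
    """Decrypt a record after retrieval, inside the application boundary."""
    return Fernet(key).decrypt(token)

if __name__ == "__main__":
    key = Fernet.generate_key()              # in practice: fetched from a key service
    ciphertext = protect(b"customer record", key)
    assert recover(ciphertext, key) == b"customer record"
```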

“Security” in complex, distributed and networked systems is NOT a tidy simple atomic service.  It is, unfortunately, represented as such because we choose to use a single noun to represent an aggregate of many sub-services, shotgunned across many layers, each with its own context, metadata, protocols and consumption models.

As the use cases for public cloud obscure and abstract these layers — flatten them — we’re left with the core of what we should focus on:

Build secure, reliable, resilient, and survivable systems of applications, comprised of secure services, atop platforms that are themselves engineered to do the same, in such a way that the information which transits them inherits these qualities.

So if Public Cloud forces one to think this way, how does one relate this to practices of today?

Frankly, enterprise (network) security design patterns are a crutch.  The screened-subnet DMZ pattern with perimeters is outmoded.  As Gunnar Peterson eloquently described, our best attempts at “security” over time are always some variation of firewalls and SSL.  This is the sux0r.  Importantly, this is not stated to blame anyone or suggest that a bad job is being done, but rather that a better one can be.

It’s not like we don’t know *what* the problems are; we just don’t invest in solving them as long-term projects.  Instead, we deploy compensating controls that defer what is now becoming inevitable: the compromise of applications that are poorly engineered and defended by systems that have no knowledge or context of the things they are defending.

We all know this, and yet, looking at most private cloud platforms and implementations, we gravitate toward replicating these traditional design patterns logically after we’ve gone to so much trouble to articulate our way around them.  Public clouds make us approach what, where and how we apply “security” differently because we don’t have these crutches.

Either we learn to walk without them or simply not move forward.

Now, let me be clear.  I’m not suggesting that we don’t need security controls, but I do mean that we need a different and better application of them at a different level, protecting things that aren’t tied to physical topology or addressing schemes…or operating systems (inclusive of things like hypervisors, also.)

I think we’re getting closer.  Beyond infrastructure as a service, platform as a service gets us even closer.

Interestingly, at the same time we see the evolution of computing with Public Cloud, networking is also undergoing a renaissance, and as this occurs, security is coming along for the ride.  Because it has to.

As I was writing this blog (ironically in the parking lot of VMware awaiting the start of a meeting to discuss abstraction, networking and security,) James Staten (Forrester) tweeted something from Werner Vogels’ (@Werner) keynote at AWS re:Invent:

I couldn’t have said it better myself 🙂

So while I may have been, and will continue to be, a thorn in the side of platform providers to improve the “survivability” capabilities that help us get from here to there, I reiterate the title of this scribbling: Amazon Web Services (AWS) Is the Best Thing To Happen To Security & I Desperately Want It To Succeed.

I trust that’s clear?

/Hoff

P.S. There’s so much more I could/should write, but I’m late for the meeting 🙂


The Cuban Cloud Missile Crisis…Weapons Of Mass Abstraction.

September 7th, 2012 2 comments
English: Coat of arms of Cuba. Español: Escudo de Cuba. Русский: Герб Кубы. (Photo credit: Wikipedia)

In the midst of the Cold War in October of 1962, the United States and the Soviet Union stood perilously on the brink of nuclear war as a small island some 90 miles off the coast of Florida became the focal point of intense foreign policy scrutiny, challenges to sovereignty and political arm wrestling the likes of which had never been seen before.

Photographic evidence provided by a high-altitude U.S. spy plane exposed the until-then secret construction of medium- and intermediate-range ballistic missile sites, constructed by the Soviet Union and deliberately placed close enough to reach the continental United States.

The United States was alarmed by this unprecedented move by the Soviets, and relations with communist Cuba were already deeply uneasy: a year and a half earlier, a CIA-led attempt to invade and overthrow the Cuban regime at the Bay of Pigs had failed, a move that did not sit well with either the Cubans or the Soviets.

A nightmare scenario ensued as the Soviets responded to American protests with threats of their own to defend their ally (and strategic missile sites) at any cost, declaring the Americans’ actions unprovoked and unacceptable.

During an incredibly tense standoff, the U.S. mulled over plans to again attack Cuba both by air and sea to ensure the disarmament of the weapons that posed a dire threat to the country.

As posturing and threats continued to escalate from the Soviets, President Kennedy elected to pursue a less direct military action: a naval blockade designed to prevent the shipment of supplies necessary for the completion and activation of launchable missiles.  Using this as a lever, the U.S. continued to demand that Russia dismantle and remove all nuclear weapons as it prevented any and all naval traffic to and from Cuba.

Soviet premier Khrushchev protested such acts of “direct aggression” and communicated to President Kennedy that his tactics were plunging the world into the depths of potential nuclear war.

While both countries publicly traded threats of war, the bravado, posturing and defiance were actually a cover for secret backchannel negotiations involving the United Nations. The Soviets promised they would dismantle and remove nuclear weapons, support infrastructure and transports from Cuba, and the United States promised not to invade Cuba while also removing nuclear weapons from Turkey and Italy.

The Soviets made good on their commitment two weeks later.  Eleven months after the agreement, the United States complied and removed from service the weapons abroad.

The Cold War ultimately ended and the Soviet Union fell, but the political, economic and social impact remains even today — 50 years later we have uneasy relations with (now) Russia and the United States still enforces ridiculous economic and social embargoes on Cuba.

What does this have to do with Cloud?

Well, it’s a cute “movie of the week” analog desperately in need of a casting call for Nikita Khrushchev and JFK.  I hear Gary Busey and Ashton Kutcher are free…

As John Furrier, Dave Vellante and I were discussing on theCUBE recently at VMworld 2012, there exists an uneasy standoff — a cold war — between the so-called “super powers” staking a claim in Cloud.  The posturing and threats currently in process don’t quite have the world-ending outcomes that nuclear war would bring, but it could have devastating technology outcomes nonetheless.

In this case, the characters of the Americans, Soviets, Cubans and the United Nations are played by networking vendors, SDN vendors, virtualization/abstraction vendors, cloud “stack” projects/efforts/products and underlying CPU/chipset vendors (not necessarily in that order…)  The rest of the world stands by as their fate is determined on the world’s stage.

If we squint hard enough at Cloud, we might find our very own version of the “Bay of Pigs” in what’s going on with OpenStack.

The “community” effort behind OpenStack is one largely based on “industry,” and if we think of OpenStack as Cuba, it’s being played as a pawn in the much larger battle for global domination.  The munitions being stocked in this tiny little enclave threaten to disrupt relations of epic proportions.  That’s why we now see so much strategic movement around an initiative and technology that many outside of the navel gazers haven’t really paid much attention to.

Then there are players like Amazon Web Services who, like the China of today, quietly amass their weapons of mass abstraction as the industry jockeying and distractions play on (but that’s a topic for another post.)

Cutting to the chase…if we step back for a minute

Intel is natively bundling more and more networking and virtualization capabilities into their CPUs/chipsets, and a $7B investment in security company McAfee makes them a serious player there.  VMware is de-emphasizing the “hypervisor” and is instead positioning itself as focused on end-to-end solutions which include everything from secure mobility to orchestration/provisioning and now, with Nicira, networking.  Networking companies like Cisco and Juniper continue to move up-stack to more deeply integrate networking and security along with service overlays in order to remain relevant in light of virtualization and SDN.

…and OpenStack’s threat of disrupting all of those plays makes it important enough to pay attention to.  It’s a little island of technology that is causing huge behemoths to collide.  A molehill that has become a mountain.

If today’s announcements of VMware and Intel joining OpenStack as Gold Members along with the existing membership by other “super powers” doesn’t make it clear that we’re in the middle of an enormous power struggle, I’ve got a small Island to sell you 😉

Me?  I’m going to make some Lechon Asado, enjoy a mojito and a La Gloria Cubana.


Incomplete Thought: On Horseshoes & Hand Grenades – Security In Enterprise Virt/Cloud Stacks

May 22nd, 2012 7 comments

It’s not really *that* incomplete of a thought, but I figure I’d get it down on vPaper anyway…be forewarned, it’s massively over-simplified.

Over the last five years or so, I’ve spent my time working with enterprises who are building and deploying large scale (relative to an Enterprise’s requirements, that is) virtualized data centers and private cloud environments.

For the purpose of this discussion, I am referring to VMware-based deployments given the audience and solutions I will reference.

To this day, I’m often shocked by how many of the organizations seeking to provide contextualized security for intra- and inter-VM traffic position it as an either-or decision with respect to the use of physical or virtual security solutions.

For the sake of example, I’ll reference the architectural designs which were taken verbatim from my 2008 presentation, The Four Horsemen of the Virtualization Security Apocalypse.

If you’ve seen/read the FHOTVA, you will recollect that there are many tradeoffs involved when considering the use of virtual security appliances and their integration with physical solutions.  Notably, an all-virtual or all-physical approach will constrain you in one form or another from the perspective of efficacy and agility, with impacts that are architectural, operational, or economic.

The topic that has a bunch of hair on it is where I see many enterprises trending: forgoing virtual solutions and using physical appliances only:

 

…the bit that’s missing in the picture is the external physical firewall connected to that physical switch.  People are still, in this day and age, relying ONLY on horseshoeing all traffic between VMs (in the same or different VLANs) out of the physical cluster member and through an external firewall.

Now, there are many physical firewalls that allow for virtualized contexts, zoning, etc., but that’s really dependent upon dumping trunked VLAN ports from the firewall/switches into the server and then “extending” virtual network contexts, policies, etc. upstream in an attempt to flatten the physical/virtual networks in order to force traffic through a physical firewall hop — sometimes at layer 2, sometimes at layer 3.

It’s important to realize that physical firewalls DO offer benefits over virtual appliances in terms of functionality, performance, and some capabilities that depend on hardware acceleration, but from an overall architectural position they’re not sufficient, especially given the visibility into and access to virtual networks that physical firewalls often lack when segregated.

Here’s a hint: physical-only firewall solutions alone will never scale with the agility required to service the virtualized workloads they are designed to protect.  Further, a physical-only solution won’t satisfy the need to dynamically provision and orchestrate security as close to the workload as possible; when the workloads move, the policies will generally break, and it will most certainly add latency and ultimately hamper network designs (both physical and virtual.)

Virtual security solutions — especially those which integrate with the virtualization/cloud stack (in VMware’s case, vCenter & vCloud Director) — offer the ability to do the following:

…which is to say that there exists the capability to utilize  virtual solutions for “east-west” traffic and physical solutions for “north-south” traffic, regardless of whether these VMs are in the same or different VLAN boundaries or even across distributed virtual switches which exist across hypervisors on different physical cluster members.

For east-west traffic (and even north-south models depending upon network architecture) there’s no requirement to horseshoe traffic physically. 
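To make the east-west/north-south split concrete, here’s a deliberately over-simplified sketch of the steering decision itself: traffic between workloads inside the tenant’s address space is handed to a hypervisor-resident virtual control, while traffic crossing the environment boundary is steered to the physical firewall. The address range and control names are assumptions made up for illustration; real products make this decision with far richer context (VM identity, security groups, etc.):

```python
# Over-simplified sketch of east-west vs. north-south steering.
# The address space and control names are illustrative assumptions only.
import ipaddress

TENANT_NETS = [ipaddress.ip_network("10.0.0.0/16")]   # assumed tenant address space

def is_internal(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in TENANT_NETS)

def steer(src_ip: str, dst_ip: str) -> str:
    """Decide which class of control should see this flow."""
    if is_internal(src_ip) and is_internal(dst_ip):
        return "virtual appliance (east-west, enforced at the hypervisor/vNIC)"
    return "physical firewall (north-south, enforced at the perimeter hop)"

if __name__ == "__main__":
    print(steer("10.0.1.5", "10.0.2.9"))       # east-west: no horseshoe required
    print(steer("10.0.1.5", "198.51.100.20"))  # north-south: hairpin to the edge
```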

It’s probably important to mention that while the next slide is out-of-date from the perspective of the advancement of VMsafe APIs, there’s not only the ability to inject a slow-path (user mode) virtual appliance between vSwitches, but also utilize a set of APIs to instantiate security policies at the hypervisor layer via a fast path kernel module/filter set…this means greater performance and the ability to scale better across physical clusters and distributed virtual switching:

Interestingly, there also exists the capability to actually integrate policies and zoning from physical firewalls and have them “flow through” to the virtual appliances to provide “micro-perimeterization” within the virtual environment, preserving policy and topology.

There are at least three hypervisor management-integrated solutions on the market today:

  • VMware vShield App,
  • Cisco VSG+Nexus 1000v and
  • Juniper vGW

Note that the solutions above can be thought of as “layer 2” solutions — it’s a poor way of describing them, but think “inter-VM” introspection for workloads in VLAN buckets.  All three vendors above also have, or are bringing to market, complementary “layer 3” solutions that function as virtual “edge” devices and act as a multi-function “next-hop” gateway between groups of VMs/applications (née vDC.)  For the sake of brevity, I’m omitting those here (they are incredibly important, however.)

They (layer 2 solutions) are all reasonably mature and offer various performance, efficacy and feature set capabilities. There are also different methods for plumbing the solutions and steering traffic to them…and these have huge performance and scale implications.

It’s important to recognize that the lack of thinking about virtual solutions often seems to be based largely on ignorance of both the need for and the availability of such solutions.

However, other reasons surface such as cost, operational concerns and compliance issues with security teams or assessors/auditors who don’t understand virtualized environments well enough.

From an engineering and architectural perspective, however, excluding them from design consideration altogether is a disappointing outcome.

Enterprises should consider a hybrid of the two models; virtual where you can, physical where you must.

If you’ve considered virtual solutions but chose not to deploy them, can you comment on why and share your thinking with us (even if it’s for the reasons above?)

/Hoff


AwkwardCloud: Here’s Hopin’ For Open

February 14th, 2012 3 comments

MAKING FRIENDS EVERYWHERE I GO…

There’s no way to write this without making it seem like I’m attacking the person whose words I am about to stare rudely at, squint and poke out my tongue.

No, it’s not @reillyusa, featured to the right.  But that expression about sums up my motivation.

Because this ugly game of “Words With Friends” is likely to be received as though I’m at odds with what represents the core marketing message of a company, I think I’m going to be voted off the island.

Wouldn’t be the first time.  Won’t be the last.  It’s not personal.  It’s just cloud, bro.

This week at Cloud Connect, @randybias announced that his company, Cloudscaling, is releasing a new suite of solutions branded under the marketing moniker of  “Open Cloud.”

I started to explore my allergy to some of these message snippets as they were strategically “leaked” last week in a most unfortunate Twitter exchange.  I promised I would wait until the actual launch to comment further.

This is my reaction to the website, press release and blog only.  I’ve not spoken to Randy.  This is simply my reaction to what is being placed in public.  It’s not someone else’s interpretation of what was said.  It’s straight from the Cloud Pony’s mouth. ;p

GET ON WITH IT THEN!

“Open Cloud” is described as a set of solutions for those looking to deploy clouds that provide “… better economics, greater flexibility, and less lock-in, while maintaining control and governance” than so-called Enterprise Clouds that are based on what Randy tags are more proprietary foundations.

The case is made where enterprises will really want to build two clouds: one to run legacy apps and one to run purpose-built cloud-ready applications.  I’d say that enterprises that have a strategy are likely looking forward to using clouds of both models…and probably a few more, such as SaaS and PaaS.

This is clearly a very targeted solution which looks to replicate AWS’ model for enterprises or SPs who are looking to exercise more control over the fate of their infrastructure.  How much runway this has against the onslaught of PaaS and SaaS remains to be seen.

I think it’s a reasonable bet there’s quite a bit of shelf life left on IaaS and I wonder if we’ll see follow-on generations to focus on PaaS.

Yet I digress…

This is NOT going to be a rant about the core definition of “Open,” (that’s for Twitter) nor is this going to be one of those 40 pagers where I deconstruct an entire blog.  It would be fun, easy and rather useful, but I won’t.

No. Instead I will suggest that the use of the word “Open” in this press release is nothing more than opportunistic marketing, capitalizing on other recent uses of the Open* prefix such as “OpenCompute, OpenFlow, Open vSwitch, OpenStack, etc.” and is a direct shot across the bow of other companies that have released similar solutions in the recent past (Cloud.com, Piston, Nebula.)

If we look at what makes up “Open Cloud,” we discover it is framed upon four key solution areas and supported by design blueprints, support and services:

  1. Open Hardware
  2. Open Networking
  3. Open APIs
  4. Open Source Software

I’m not going to debate the veracity or usefulness of some of these terms directly, but we’ll come back to them as a reference in a second, especially the notion of “open hardware.”

The one thing that really stuck in my craw was the manufactured criteria that somehow defined the so-called “litmus tests” associated with “Enterprise” versus “Open” clouds.

Randy suggests that if you are doing more than half of the items in the left-hand column you’re using a cloud built with “enterprise computing technology,” whereas if the same holds true for the right-hand column, you’re running an “open” cloud:

So here’s the thing.  Can you explain to me what spinning up 1,000 VMs in less than 5 minutes has to do with being “open?”  Can you tell me what competing with AWS on price has to do with being “open?” Can you tell me how Hadoop performance has anything to do with being “open?”  Why does using two third-party companies’ management services define “open?”

Why on earth does the complexity or simplicity of networking stacks define “openness?”

Can you tell me how, if Cloudscaling’s “Open Cloud” uses certified gear from “name brand” vendors like Arista, this is in any way more “open” than an alternative solution using Cisco?

Can you tell me if “Open Cloud” is more “open” than Piston Cloud which is also based upon OpenStack but also uses specific name-brand hardware to run?  If “Open Cloud” is “open,” and utilizes open source, can I download all the source code?

These are simply manufactured constructs which do little service toward actually pointing out the real business value of the solution and instead cloak the wolf in the “open” sheep’s clothing.  It’s really unfortunate.

The end of my rant here is that by co-opting the word “open,” Cloudscaling takes a perfectly reasonable approach — a company’s experience in building a well-sorted, (supposedly more) economical and supportable set of cloud solutions — and ruins it by letting its karma get run over by its dogma.

Instead of focusing on the merits of the solution as a capable building block for building plain better clouds, this reads like a manifesto which may very well turn people off.

Am I being unfair in calling this out?  I don’t think so.  Would some prefer a private conversation over a beer to discuss?  Most likely.  However, there’s a disconnect here, and it stems from pushing a message public and marketing a set of solutions that I hope will withstand the scrutiny of this A-hole with a blog.

Maybe I’m making a mountain out of a molehill…

Again, I’m not looking to pick on Cloudscaling.  I think the business model and the plan is solid as is evidenced by their success to date.  I wish them nothing but success.

I just hope that what comes out the other end is a willingness to be “open” to considering a better adjective and a more useful set of criteria to define the merits of the solution.

/Hoff


Building/Bolting Security In/On – A Pox On the Audit Paradox!

January 31st, 2012 2 comments

My friend and skilled raconteur Chris Swan (@cpswan) wrote an excellent piece a few days ago titled “Building security in – the audit paradox.”

This thoughtful piece was constructed in order to point out the challenges involved in providing auditability, visibility, and transparency in services — specifically cloud computing — in which the notion of building in or bolting on security is debated.

I think this is timely.  I have thought about this a couple of times with one piece aligned heavily with Chris’ thoughts:

Chris’ discussion really contrasted the delivery/deployment models against the availability and operationalization of controls:

  1. If we’re building security in, then how do we audit the controls?
  2. Will platform as a service (PaaS) give us a way to build security in such that it can be evaluated independently of the custom code running on it?

Further, as part of some good examples, he points out the notion that with separation of duties, the ability to apply “defense in depth” (hate that term,) and the ability to respond to new threats, the “bolt-on” approach is useful — if not siloed:

There lies the issue – bolt on security is easy to audit. There’s a separate thing, with a separate bit of config (administered by a separate bunch of people) that stands alone from the application code.

…versus building secure applications:

Code security is hard. We know that from the constant stream of vulnerabilities that get found in the tools we use every day. Auditing that specific controls implemented in code are present and effective is a big problem, and that is why I think we’re still seeing so much bolting on rather than building in.

I don’t disagree with this at all.  Code security is hard.  People look for gap-fillers.  The notion that Chris finds limited options for bolting security on versus integrating security (building it in) programmatically as part of the security development lifecycle leaves me a bit puzzled.

This identifies both the skills and cultural gap between where we are with security and how cloud changes our process, technology, and operational approaches but there are many options we should discuss.

Thus what was interesting (read: what I disagree with) is what came next, wherein Chris maintained that one “can’t bolt on in the cloud”:

One of the challenges that cloud services present is an inability to bolt on extra functionality, including security, beyond that offered by the service provider. Amazon, Google etc. aren’t going to let me or you show up to their data centre and install an XML gateway, so if I want something like schema validation then I’m obliged to build it in rather than bolt it on, and I must confront the audit issue that goes with that.

While it’s true that CSPs may not enable/allow you to show up to their DC and “…install an XML gateway,” they are pushing the security deployment model toward virtual networking hooks, guest-based approaches within the VMs, and leveraging both the security and service models of cloud itself to solve these challenges.

I allude to this below, but as an example, there are now cloud services which can sit “in-line” or in conjunction with your cloud application deployments and deliver security as a service…application, information (and even XML) security as a service are here today and ramping!

While immature and emerging in some areas, the following suggest that the “bolt-on” approach is very much alive and kicking.  Given that “code security” is hard, the cloud providers harden/secure their platforms, but the app stacks that get deployed by the customers are the customers’ concern, and here are some options:

  1. Introspection APIs (VMsafe)
  2. Security as a Service (Cloudflare, Dome9, CloudPassage)
  3. Auditing frameworks (CloudAudit, STAR, etc)
  4. Virtual networking overlays & virtual appliances (vGW, VSG, Embrane)
  5. Software defined networking (Nicira, BigSwitch, etc.)

Yes, some of them are platform specific, and I think Chris was mostly speaking about “Public Cloud,” but “bolt-on” options are most certainly available and are aggressively evolving.

I totally agree that from the PaaS/SaaS perspective, we are poised for many wins that can eliminate entire classes of vulnerabilities as the platforms themselves enforce better security hygiene and assurance BUILT IN.  This is just as emerging as the BOLT ON solutions I listed above.

In a prior post, “Silent Lucidity: IaaS – Already a Dinosaur. Rise of PaaSasarus Rex,” I wrote:

As I mention in my Cloudifornication presentation, I think that from a security perspective, PaaS offers the potential of eliminating entire classes of vulnerabilities in the application development lifecycle by enforcing sanitary programmatic practices across the derivative works built upon them.  I look forward also to APIs and standards that allow for consistency across providers. I think PaaS has the greatest potential to deliver this.

There are clearly trade-offs here, but as we start to move toward the two key differentiators (at least for public clouds) — management and security — I think the value of PaaS will really start to shine.

My opinion is that given the wide model of integration between various delivery and deployment models, we’re gonna need both for quite some time.

Back to Chris’ original point, the notion that auditors will in any way be able to easily audit code-based (built-in) security at the APPLICATION layer or the PLATFORM layer versus the bolt-on layer is really at the whim of the skillset of the auditors themselves and the checklists they use, which call out how one is audited:

Infrastructure as a service shows us that this can be done e.g. the AWS firewall is very straightforward to configure and audit (without needing to reveal any details of how it’s actually implemented). What can we do with PaaS, and how quickly?

This is a very simplistic example (more infrastructure versus applistructure perspective) but represents the very interesting battleground we’ll be entrenched in for years to come.
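As a small, hedged illustration of the “straightforward to configure and audit” point: with boto3 you can both apply an EC2 security group rule and read the effective rule set back in a handful of calls, which is precisely what makes this particular bolt-on control easy for an auditor to inspect. The group ID and CIDR below are placeholders, and the snippet assumes valid AWS credentials:

```python
# Minimal sketch: configure, then audit, an AWS EC2 security group with boto3.
# Requires boto3 and AWS credentials; the group ID and CIDR are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# "Bolt on" the control: allow HTTPS in from a single CIDR.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",   # placeholder group ID
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "audited entry point"}],
    }],
)

# "Audit" the control: read the effective rules back without touching any host.
resp = ec2.describe_security_groups(GroupIds=["sg-0123456789abcdef0"])
for perm in resp["SecurityGroups"][0]["IpPermissions"]:
    print(perm["IpProtocol"], perm.get("FromPort"),
          [r["CidrIp"] for r in perm.get("IpRanges", [])])
```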

In the related posts below, you’ll see I’ve written a bunch about this and am working toward ensuring that as really smart folks work to build it in, the ecosystem is encouraged to provide bolt ons to fill those gaps.

/Hoff


QuickQuip: Don’t run your own data center if you’re a public IaaS < Sorta...

January 10th, 2012 7 comments

Patrick Baillie, the CEO of Swiss IaaS provider, CloudSigma, wrote a very interesting blog published on GigaOm titled “Don’t run your own data center if you’re a public IaaS.”

Baillie leads off by describing how AWS’ recent outage is evidence as to why the complexity of running facilities (data centers) versus the services running atop them is best segregated and outsourced to third parties in the business of such things:

Why public IaaS cloud providers should outsource their data centers

While there are some advantages for cloud providers operating data centers in-house, including greater control, capacity, power and security, the challenges, such as geographic expansion, connectivity, location, cost and lower-tier facilities can often outweigh the benefits. In response to many of these challenges, an increasing number of cloud providers are realizing the benefits of working with a third-party data center provider.

It’s a very interesting blog, sprinkled throughout with pros and cons of rolling your own versus outsourcing, but it falls down in carrying the burden of logic for some of the assertions.

Perhaps I misunderstood, but the article seemed to focus on single DC availability as though (per my friend @CSOAndy’s excellent summarization) “…he missed the obvious reason: you can arbitrage across data centers” and “…was focused on single DC availability. Arbitrage means you just move your workloads automagically.”

I’ll let you read the setup in its entirety, but check out the conclusion:

In reality, taking a look at public cloud providers, those with legacy businesses in hosting, including Rackspace and GoGrid, tend to run their own facilities, whereas pure-play cloud providers, like my company CloudSigma, tend to let others run the data centers and host the infrastructure.

The business of operating a data center versus operating a cloud is very different, and it’s crucial for such providers to focus on their core competency. If a provider attempts to do both, there will be sacrifices and financial choices with regards to connectivity, capacity, supply, etc. By focusing on the cloud and not the data center, public cloud IaaS providers don’t need to make tradeoffs between investing in the data center over the cloud, thereby ensuring the cloud is continually operating at peak performance with the best resources available.

The points above were punctuated as part of a discussion on Twitter where @georgereese commented “IaaS is all about economies of scale. I don’t think you win at #cloud by borrowing someone else’s”

Fascinating.  It’s times like these that I invoke the wisdom of the Intertubes and ask “WWWD” (or What Would Werner Do?)

If we weren’t artificially limited in this discussion to IaaS only, it would have been interesting to compare this to SaaS providers like Google or Salesforce or better yet folks like Zynga…or even add supporting examples like Heroku (who run atop AWS but are now a part of SalesForce o_O)

I found many of the points raised in the discussion intriguing and good food for thought but I think that if we’re talking about IaaS — and we leave out AWS which directly contradicts the model proposed — the logic breaks almost instantly…unless we change the title to “Don’t run your own data center if you’re a [small] public IaaS and need to compete with AWS.”

Interested in your views…

/Hoff


QuickQuip: “Networking Doesn’t Need a VMWare” < tl;dr

January 10th, 2012 1 comment

I admit I was enticed by the title of the blog and the introductory paragraph certainly reeled me in with the author creds:

This post was written with Andrew Lambeth.  Andrew has been virtualizing networking for long enough to have coined the term “vswitch”, and led the vDS distributed switching project at VMware

I can only assume that this is the same Andrew Lambeth who is currently employed at Nicira.  I had high expectations given the title,  so I sat down, strapped in and prepared for a fire hose.

Boy did I get one…

27 paragraphs amounting to 1,601 words’ worth that basically concluded that server virtualization is not the same thing as network virtualization, stateful L2 & L3 network virtualization at scale is difficult, and ultimately virtualizing the data plane is the easy part while the hard part of getting the mackerel out of the tin is virtualizing the control plane – statefully.*

*[These are clearly *my* words as the only thing fishy here was the conclusion…]

It seems the main point here, besides that mentioned above, is to stealthily and diligently distance Nicira as far from the description of “…could be to networking something like what VMWare was to computer servers” as possible.

This is interesting given that this is how they were described in a NY Times blog some months ago.  Indeed, this is exactly the description I could have sworn *used* to appear on Nicira’s own about page…it certainly shows up in Google searches of their partners o_O

In his last section titled “This is all interesting … but why do I care?,” I had selfishly hoped for that very answer.

Sadly, at the end of the day, Lambeth’s potentially excellent post appears more concerned about culling marketing terms than hammering home an important engineering nuance:

Perhaps the confusion is harmless, but it does seem to effect how the solution space is viewed, and that may be drawing the conversation away from what really is important, scale (lots of it) and distributed state consistency. Worrying about the datapath , is worrying about a trivial component of an otherwise enormously challenging problem

This smacks of positioning against both OpenFlow (addressed here) and other network virtualization startups.

Bummer.


When A FAIL Is A WIN – How NIST Got Dissed As The Point Is Missed

January 2nd, 2012 4 comments

Randy Bias (@randybias) wrote a very interesting blog titled Cloud Computing Came To a Head In 2011, sharing his year-end perspective on the emergence of Cloud Computing and most interestingly the “…gap between ‘enterprise clouds’ and ‘web-scale clouds’”

Given that I very much agree with the dichotomy between “web-scale” and “enterprise” clouds and the very different sets of respective requirements and motivations, Randy’s post left me confused when he basically skewered the early works of NIST and their Definition of Cloud Computing:

This is why I think the NIST definition of cloud computing is such a huge FAIL. It’s focus is on the superficial aspects of ‘clouds’ without looking at the true underlying patterns of how large Internet businesses had to rethink the IT stack.  They essentially fall into the error of staying at my ‘Phase 2: VMs and VDCs’ (above).  No mention of CAP theorem, understanding the fallacies of distributed computing that lead to successful scale out architectures and strategies, the core socio-economics that are crucial to meeting certain capital and operational cost points, or really any acknowledgement of this very clear divide between clouds built using existing ‘enterprise computing’ techniques and those using emergent ‘cloud computing’ technologies and thinking.

Nope. No mention of CAP theorem or socio-economic drivers, yet strangely the context of what the document was designed to convey renders this rant moot.

Frankly, NIST’s early work did more to further the discussion of *WHAT* Cloud Computing meant than almost any person or group evangelizing Cloud Computing…especially to a world of users whose most difficult challenge is trying to understand the differences between traditional enterprise IT and Cloud Computing.

As I reacted to this point on Twitter, Simon Wardley (@swardley) commented in agreement with Randy’s assertions, but what I again found odd was the misplaced angst over the criteria of “success” vs. “FAIL” against which both Simon and Randy were measuring the NIST document:

Both Randy and Simon seem to be judging NIST’s efforts against their failure to extol the virtues, or “WHY,” versus the “WHAT” of Cloud — and, as such, to be claiming NIST did a disservice by perpetuating aged concepts rooted in archaic enterprise practices rather than stretching boundaries, trailblazing and promoting the enlightened stance of “web-scale” cloud.

Well…

The thing is, as NIST stated in both the purpose and audience sections of their document, the “WHY” of Cloud was not the main intent (and frankly best left to those who make a living talking about it…)

From the NIST document preface:

1.2 Purpose and Scope

Cloud computing is an evolving paradigm. The NIST definition characterizes important aspects of cloud computing and is intended to serve as a means for broad comparisons of cloud services and deployment strategies, and to provide a baseline for discussion from what is cloud computing to how to best use cloud computing. The service and deployment models defined form a simple taxonomy that is not intended to prescribe or constrain any particular method of deployment, service delivery, or business operation.

1.3 Audience

The intended audience of this document is system planners, program managers, technologists, and others adopting cloud computing as consumers or providers of cloud services.

This was an early work (the initial draft was released in 2009, the final in late 2011,) and when it was written, many people — Randy, Simon and myself included — were still finding the best route, words and methodology to verbalize the “Why.”  And now it’s skewered as “mechanistic drivel.”*

At the time NIST was developing their document, I was working in parallel writing the “Architecture” chapter of the first edition of the Cloud Security Alliance’s Guidance for Cloud Computing.  I started out including my own definitional work in the document but later aligned to the NIST definitions because it was generally in line with mine and was well on the way to engendering a good deal of conversation around standard vocabulary.

This blog post summarized the work at the time (prior to the NIST inclusion).  I think you might find the author of the second comment on that post rather interesting, especially given how much of a FAIL this was all supposed to be… 🙂

It’s only now, with the clarity of hindsight, that it’s easier to take the “WHY” and utilize the “WHAT” (from NIST and others, especially practitioners like Randy) in a complementary manner, so we can talk less about the “what and why” and more about the “HOW.”

So while the NIST document wasn’t, isn’t and likely never will be “perfect,” and does not address every use case or even eloquently frame the “WHY,” it still serves as a very useful resource upon which many people can start a conversation regarding Cloud Computing.

It’s funny really…the first tenet for “web-scale” cloud from AWS — the “Kings of Cloud” Randy speaks about constantly — is “PLAN FOR FAIL.”  So if the NIST document truly meets this seal of disapproval and is a FAIL, then I guess it’s a win ;p

Your opinion?

/Hoff

*N.B. I’m not suggesting that critiquing a document is somehow verboten or that NIST is somehow infallible or sacrosanct — far from it.  In fact, I’ve been quite critical and vocal in my feedback with regard to both this document and efforts like FedRAMP.  However, that criticism came during the documents’ construction and with the intent to make them better within the context for which they were designed, rather than from the rear-view mirror.
