Archive for the ‘Cloud Computing’ Category

QuickQuip: “Networking Doesn’t Need a VMWare” < tl;dr

January 10th, 2012

I admit I was enticed by the title of the blog post, and the introductory paragraph certainly reeled me in with the author creds:

This post was written with Andrew Lambeth.  Andrew has been virtualizing networking for long enough to have coined the term “vswitch”, and led the vDS distributed switching project at VMware

I can only assume that this is the same Andrew Lambeth who is currently employed at Nicira.  I had high expectations given the title, so I sat down, strapped in and prepared for a fire hose.

Boy did I get one…

27 paragraphs amounting to 1,601 words that basically concluded that server virtualization is not the same thing as network virtualization, that stateful L2 & L3 network virtualization at scale is difficult, and that ultimately virtualizing the data plane is the easy part while the hard part of getting the mackerel out of the tin is virtualizing the control plane – statefully.*

*[These are clearly *my* words as the only thing fishy here was the conclusion…]

It seems the main point here, besides that mentioned above, is to stealthily and diligently distance Nicira as far from the description of “…could be to networking something like what VMWare was to computer servers” as possible.

This is interesting given that this is how they were described in a NY Times blog some months ago.  Indeed, this is exactly the description I could have sworn *used* to appear on Nicira’s own about page…it certainly shows up in Google searches of their partners o_O

In his last section titled “This is all interesting … but why do I care?,” I had selfishly hoped for that very answer.

Sadly, at the end of the day, Lambeth’s potentially excellent post appears more concerned with culling marketing terms than with hammering home an important engineering nuance:

Perhaps the confusion is harmless, but it does seem to effect how the solution space is viewed, and that may be drawing the conversation away from what really is important, scale (lots of it) and distributed state consistency. Worrying about the datapath, is worrying about a trivial component of an otherwise enormously challenging problem

This smacks of positioning against both OpenFlow (addressed here) as well as other network virtualization startups.

Bummer.


When A FAIL Is A WIN – How NIST Got Dissed As The Point Is Missed

January 2nd, 2012

Randy Bias (@randybias) wrote a very interesting blog post titled Cloud Computing Came To a Head In 2011, sharing his year-end perspective on the emergence of Cloud Computing and, most interestingly, the “…gap between ‘enterprise clouds’ and ‘web-scale clouds’”.

Given that I very much agree with the dichotomy between “web-scale” and “enterprise” clouds and the very different sets of respective requirements and motivations, Randy’s post left me confused when he basically skewered the early works of NIST and their Definition of Cloud Computing:

This is why I think the NIST definition of cloud computing is such a huge FAIL. It’s focus is on the superficial aspects of ‘clouds’ without looking at the true underlying patterns of how large Internet businesses had to rethink the IT stack.  They essentially fall into the error of staying at my ‘Phase 2: VMs and VDCs’ (above).  No mention of CAP theorem, understanding the fallacies of distributed computing that lead to successful scale out architectures and strategies, the core socio-economics that are crucial to meeting certain capital and operational cost points, or really any acknowledgement of this very clear divide between clouds built using existing ‘enterprise computing’ techniques and those using emergent ‘cloud computing’ technologies and thinking.

Nope. No mention of CAP theorem or socio-economic drivers, yet strangely the context of what the document was designed to convey renders this rant moot.

Frankly, NIST’s early work did more to further the discussion of *WHAT* Cloud Computing meant than almost any person or group evangelizing Cloud Computing…especially to a world of users whose most difficult challenge is trying to understand the differences between traditional enterprise IT and Cloud Computing.

As I reacted to this point on Twitter, Simon Wardley (@swardley) commented in agreement with Randy’s assertions, but again what struck me as odd was the misplaced criterion of “success” vs. “FAIL” against which both Simon and Randy were measuring the NIST document:

Both Randy and Simon seem to be judging NIST’s efforts against its failure to extol the virtues (the “WHY” versus the “WHAT”) of Cloud; by their measure, NIST basically did a disservice by perpetuating aged concepts rooted in archaic enterprise practices rather than stretching boundaries, trailblazing and promoting the enlightened stance of “web-scale” cloud.

Well…

The thing is, as NIST stated in both the purpose and audience sections of their document, the “WHY” of Cloud was not the main intent (and frankly best left to those who make a living talking about it…).

From the NIST document preface:

1.2 Purpose and Scope

Cloud computing is an evolving paradigm. The NIST definition characterizes important aspects of cloud computing and is intended to serve as a means for broad comparisons of cloud services and deployment strategies, and to provide a baseline for discussion from what is cloud computing to how to best use cloud computing. The service and deployment models defined form a simple taxonomy that is not intended to prescribe or constrain any particular method of deployment, service delivery, or business operation.

1.3 Audience

The intended audience of this document is system planners, program managers, technologists, and others adopting cloud computing as consumers or providers of cloud services.

This was an early work (the initial draft was released in 2009, the final in late 2011,) and when it was written, many people — Randy, Simon and myself included — were still finding the best route, words and methodology to verbalize the “Why.” And now it’s skewered as “mechanistic drivel.”*

At the time NIST was developing their document, I was working in parallel writing the “Architecture” chapter of the first edition of the Cloud Security Alliance’s Guidance for Cloud Computing.  I started out including my own definitional work in the document but later aligned to the NIST definitions because it was generally in line with mine and was well on the way to engendering a good deal of conversation around standard vocabulary.

This blog post summarized the work at the time (prior to the NIST inclusion).  I think you might find the author of the second comment on that post rather interesting, especially given how much of a FAIL this was all supposed to be… 🙂

It’s only now, with the clarity of hindsight, that it’s easier to take the “WHY” and utilize the “WHAT” (from NIST and others, especially practitioners like Randy) in a complementary manner so we can talk less about the “what and why” and more about the “HOW.”

So while the NIST document wasn’t, isn’t and likely never will be “perfect,” and does not address every use case or even eloquently frame the “WHY,” it still serves as a very useful resource upon which many people can start a conversation regarding Cloud Computing.

It’s funny really…the first tenet of “web-scale” cloud from AWS (the “Kings of Cloud” Randy speaks about constantly) is “PLAN FOR FAIL.”  So if the NIST document truly meets this seal of disapproval and is a FAIL, then I guess it’s a win ;p

Your opinion?

/Hoff

*N.B. I’m not suggesting that critiquing a document is somehow verboten or that NIST is somehow infallible or sacrosanct — far from it.  In fact, I’ve been quite critical and vocal in my feedback with regard to both this document and efforts like FedRAMP.  However, that feedback came during the documents’ construction, with the intent of making them better within the context for which they were designed, rather than from the rear-view mirror.


Enter the Data Huggers…

November 27th, 2011

A tale of yore:

And from whence the light of the heavens did shimmer upon them from the clouds above, yea the Server Huggers did at first protest.

Yet, as the virtual supplanted the corporeal — these shells of the buzzing contrivance —  they did wilt wearily as the compression of the rings unyieldingly did set upon them.

And strangely, once shod of their vestigial confines, they learned once again to rejoice and verily did they, of their own accord, assume the mantle of the make-believe server clan, and march forward lofting their standards high  to readily bear their mark of the VM.

But as the once earth-bound, now ethereal, ascended their offers to the heavens and their clouds, the vessels of their capital dissolved their curtailment once more.

The VMs became vapor, their service, the yield.  This tribe, once coupled to the shrines of their wanton packaging, became unencumbered once more.

Yet when these offers were set free upon the land, their services — compliant, supple and malleable — were embraced as a newfound purse, and the Clan of the App guarded them jealously once again, hoarding their prize from the undeserving.

Their reign, alas, did not last; their bastions eroded as the platforms that once gained them allegiance crumbled with the surfeit of consumption — their dispersion widespread and resources taxed.

Thus became the rule of the Data Clan whose merit lay in wait as the pretenders of the Cloud were forced to kneel in subservience.

For their data was big and they clutched it to their bosom as they once did their apps, VMs and carnal heat pumps…

It’s interesting to see how in such a short time we’ve seen the following progression:

Server Huggers > VM Huggers > App Huggers > Data Huggers

I wonder what’s next in the lineage of hugs?

/Hoff


Cloud: The Turducken Of Computing? [Oh, and Happy Thanksgiving]

November 20th, 2011

In these here United States Of ‘Merica, we’re closing in on our National Day of Gluttony, Thanksgiving.

As we approach #OccupyStuffedGullet, I again find myself quizzically introspective, however frightened the prospect of navel gazing combined with fatal doses of Tryptophan leaves me.

Reviewing the annual list of consumables for said celebration in conjunction with yet another interminable series of blog posts on “What Cloud *really* means for [insert target market here,]” I am compelled to suggest that what Cloud *really* means and *really* represents is…

A Turducken.

According to the Oracle of Mr. Wales, a Turducken is as follows:

A turducken is a dish consisting of a de-boned chicken stuffed into a de-boned duck, which itself is stuffed into a de-boned turkey. The word turducken is a portmanteau of turkey, duck, and chicken or hen.

The thoracic cavity of the chicken/game hen and the rest of the gaps are stuffed, sometimes with a highly seasoned breadcrumb mixture or sausage meat, although some versions have a different stuffing for each bird.

I’ll leave it to you to map the visual to the…Oh, who am I kidding?

What does this *really* have to do with Cloud? Nothing, really, but I’m rather bloated with the resurgence of metaphors and analogs which seek to clarify new computing recipes in order to justify more gut-busting consumption, warranted or not, diets-be-damned.  But really…sometimes a dish is delicious in its simplicity and doesn’t need any garnish. 😉

In fact, this comparison isn’t really at all that accurate or interesting, but I found it funny nonetheless…

Frankly, it’s *really* just an excuse to wish you all a very merry ClouDucken Day 🙂

May your IT be stuffed and your waistline elastic.

/Hoff


Oh, c’mon…

October 28th, 2011

Story here from Network World.

Frankly, XML Signature-Wrapping and XSS don’t represent “massive security flaws in cloud architectures.”

They represent unfortunate vulnerabilities in authentication mechanisms and web app security, but “cloud architecture?”
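
For the unfamiliar, here’s a boiled-down sketch of the signature-wrapping trick (purely my own illustration with hypothetical element names, not the actual exploit payload from the research): the signature reference still resolves, by Id, to the original signed Body that the attacker relocated into the Header, so verification passes, while naive application logic executes the injected Body instead.

    # Illustrative schematic of XML signature wrapping, NOT the real exploit.
    # The <Reference> resolves (by Id) to the relocated, legitimately signed
    # Body, so verification succeeds; the application acts on the other Body.
    WRAPPED_REQUEST = """\
    <Envelope>
      <Header>
        <Signature>
          <Reference URI="#signed-body"/>  <!-- verifier follows this Id -->
        </Signature>
        <Body Id="signed-body">            <!-- relocated, signed original -->
          <MonitorInstances/>
        </Body>
      </Header>
      <Body Id="evil">                     <!-- what actually gets executed -->
        <TerminateInstances/>
      </Body>
    </Envelope>"""
    print(WRAPPED_REQUEST)

Which is exactly my point: that’s a message-parsing/authentication bug, fixable in the service, not an indictment of “cloud architecture.”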

These vulnerabilities were also fixed.  Quickly.

Further, while the attack vector will continue to play an important role when using Cloud (publicly) as a delivery model (that is, APIs,) this story is being overplayed.

Will this/could this/is this type of vulnerability pervasive? Certainly there are opportunities for abuse of Internet-facing APIs and authentication schemes, especially given the reliance on vulnerable protocols and security models.

Perhaps.

Is it scary?

Yes.

See: Cloudifornication and Cloudinomicon.

 


The Killer App For OpenFlow and SDN? Security.

October 27th, 2011

I spent yesterday at the PacketPushers/TechFieldDay OpenFlow Symposium. The event provided a good overview of what OpenFlow [currently] means, how it fits into the overall context of software-defined networking (SDN) and where it might go from here.

I’d suggest reading Ethan Banks’ (@ecbanks) overview here.

Many of us left the event, however, still wondering about what the “killer app” for OpenFlow might be.

Chatting with Ivan Pepelnjak (@ioshints) and Derrick Winkworth (@CloudToad,) I reiterated that selfishly, I’m still thrilled about the potential that OpenFlow and SDN can bring to security.  This was a topic only briefly skirted during the symposium as the ACL-like capabilities of OpenFlow were discussed, but there’s so much more here.

I wrote about this back in May (OpenFlow & SDN – Looking forward to SDNS: Software Defined Network Security):

… “security” needs to be as programmatic/programmable, agile, scaleable and flexible as the workloads (and stacks) it is designed to protect. “Security” in this context extends well beyond the network, but the network provides such a convenient way of defining templated containers against which we can construct and enforce policies across a wide variety of deployment and delivery models.

So as I watch OpenFlow (and Software Defined Networking) mature, I’m really, really excited to recognize the potential for a slew of innovative ways we can leverage and extend this approach to networking [monitoring and enforcement] in order to achieve greater visibility, scale, agility, performance, efficacy and reduced costs associated with security.  The more programmatic and instrumented the network becomes, the more capable our security options will become also.

I had to chuckle at a serendipitous tweet from a former Cisco co-worker (Stefan Avgoustakis, @savgoust) because it’s really quite apropos for this topic:

…I think he’s oddly right!

Frankly, if you look at what OpenFlow and SDN (and network programmability in general) give an operator, namely visibility into an effective graph of connectivity along with multiple-tuple flow-action capabilities, there are numerous opportunities to leverage the separation of control/data plane across both virtual and physical networks to provide better security capabilities in response to threats, and at a pace/scale/radius commensurate with said threats.

To be able to leverage telemetry and flow tables in the controllers “centrally” and then “dispatch” the necessary security response on an as-needed basis to the network location ONLY that needs it, really does start to sound a lot like the old “immune system” analogy that SDN (self defending networks) promised.
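
To make that a little less hand-wavy, here’s a minimal sketch of the “dispatch the response only where needed” idea, written against the open source Ryu controller framework’s OpenFlow 1.3 API purely as an example; the blocklist address is a hypothetical stand-in for whatever detection/telemetry feed you’d actually wire in.

    # Minimal sketch: when a switch connects, push a high-priority drop rule
    # for each "detected threat" source. BLOCKED is a hypothetical stand-in
    # for a real detection/telemetry feed.
    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    BLOCKED = {"203.0.113.66"}  # hypothetical "infected host" address

    class QuarantineApp(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def switch_connected(self, ev):
            dp = ev.msg.datapath
            parser = dp.ofproto_parser
            for ip in BLOCKED:
                # Match IPv4 traffic from the offending source...
                match = parser.OFPMatch(eth_type=0x0800, ipv4_src=ip)
                # ...and install a rule with an empty instruction set, i.e.,
                # drop it -- only at the switch that actually sees the flow.
                dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                              match=match, instructions=[]))

The point isn’t the drop rule itself; it’s that the same programmatic control channel could just as easily steer suspect flows to an IPS or honeypot, and only at the network locations that need it.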

The ability to distribute security capabilities more intelligently as a service layer which can be effected when needed — without the heavy shotgunned footprint of physical in-line devices or the sprawl of virtualized appliances — is truly attractive.  Automation for detection and ultimately prevention is FTW.

Bundling the capabilities delivered via programmatic interfaces and coupling that with ways of integrating the “network” and “applications” (of which security is one) produces some really neat opportunities.

Now, this isn’t just a classical “data center core” opportunity, either. How about the WAN/Edge?  Campus, branch…? Anywhere you have the need to deliver security as a service.

For non-security examples, check out the presentation “Programmable Networks are SFW” by my Juniper colleague Dave Ward, where he details interesting use cases such as “service engineered paths,” “service appliance pooling,” “service specific topology,” “content request routing,” and “bandwidth calendaring.”

…think of the security ramifications and opportunities linked to those capabilities!

I’ve mocked up a couple of very interesting security prototypes using OpenFlow and some open source security components, from IDP to anti-malware, and the potential is exciting because OpenFlow — in conjunction with other protocols and solutions in the security ecosystem — could provide some of the missing glue necessary to deliver a constant but abstracted security command/control (nee API-like capability) across heterogeneous infrastructure.

NOTE: I’m not suggesting that OpenFlow itself provide these security capabilities, but rather enable security solutions to take advantage of the control/data plane separation to provide for more agile and effective security.

If the potential exists for OpenFlow to effectively allow choice of packet forwarding engines and network virtualization across the spectrum of supporting vendors’ switches, it occurs to me that we could utilize it for firewalls, intrusion detection/prevention engines, WAFs, NAC, etc.

Thoughts?


A Contentious Question: The Value Proposition & Target Market Of Virtual Networking Solutions?

September 28th, 2011

I have what I think is a simple question I’d like some feedback on:

Given the recent influx of virtual networking solutions, many of which are OpenFlow-based, what possible in-roads and value can they hope to offer in heavily virtualized enterprise environments wherein the virtual networking is owned and controlled by VMware?

Specifically, if the only third-party VMware virtual switch to date is Cisco’s and access to this platform is limited (if at all available) to startup players, how on Earth do BigSwitch, Nicira, vCider, etc. plan to insert themselves into an already contentious environment effectively doing mindshare and relevance battle with the likes of mainline infrastructure networking giants and VMware?

If your answer is “OpenFlow and OpenStack will enable this access,” I’ll follow up with a question asking how long a runway these startups have, hanging their shingle on relatively new efforts (mainly open source) that the enterprise is not typically an early adopter of.

I keep hearing notional references to the problems these startups hope to solve for the “Enterprise,” but just how (and who) do they think they’re going to get to consider their products at a level that gives them reasonable penetration?

Service providers, maybe?

Enterprises…?

It occurs to me that most of these startups are being built to be acquired by traditional networking vendors who will (or will not) adopt OpenFlow when significant enterprise dollars materialize in stacks that are not VMware-centric.

Not meaning to piss anyone off, but many of these startups’ business plans are shrouded in the mystical veil of “wait and see.”

So I do.

/Hoff

Ed: To be clear, this post isn’t about “OpenFlow” specifically (that’s only one of many protocols/approaches,) but rather the penetration of a virtual networking solution into a “closed” platform environment dominated by a single vendor.

If you want a relevant analog, look at the wasteland that represents the virtual security startups that tried to enter this space (and even the larger vendors’ solutions) and how long this has taken/fared.

If you read the comments below, you’ll see people start to accidentally tease out the real answer to the question I was asking…about the value of these virtual networking solutions providers.  The funny part is that despite the lack of comments from most of the startups I mention, it took Brad Hedlund (from Cisco) to recognize why I wrote the post, which is the following:

“The *real* reason I wrote this piece was to illustrate that really, these virtual networking startups are really trying to invade the physical network in virtual sheep’s clothing…”

…in short, the problem space they’re trying to solve is actually in the physical network or, more specifically, in bridging the gap between the two.


Flying Cars & Why The Hypervisor Is A Ride-On Lawnmower In Comparison

September 23rd, 2011

I wrote a piece a while ago (in 2009) titled “Virtual Machines Are Part Of the Problem, Not the Solution…” in which I described the fact that hypervisors, virtualization and the packaging that supports them — Virtual Machines (VMs) — were actually kludges.

Specifically, VMs still contain the bloat (nee: cancer) that are operating systems and carry forward all of the issues and complexity (albeit now with more abstraction cowbell) that we already suffer.  Yes, it brings a lot of GOOD stuff, too, but tolerate the analog for a minute, m’kay.

Moreover, the move in operational models such as Cloud Computing (leveraging the virtualization theme) and the up-stack crawl from IaaS to PaaS (covered also in a blog I wrote titled: Silent Lucidity: IaaS – Already A Dinosaur?) seems to indicate a general trend toward a reduction in the number of layers in the overall compute stack.

Something I saw this morning reminded me of this and its relation to how the evolution and integration of various functions — such as virtualization and security — directly into CPUs themselves are going to dramatically disrupt how we perceive and value “virtualization” and “cloud” in the long run.

I’m not going to go into much detail because there’s a metric crapload of NDA type stuff associated with the details, but I offer you this as something you may have already thought about and the industry is gingerly crawling toward across multiple platforms.  You’ll have to divine and associate the rest:

Think “Microkernels”

…and in the immortal words of Forrest Gump “That’s all I’m gonna say ’bout that.”

/Hoff

* Ray DePena humorously quipped on Twitter that “…the flying car never materialized,” to which I retorted “Incorrect. It has just not been mass produced…” I believe this progression will — and must — materialize.


Cloud Security Start-Up: Dome9 – Firewall Management SaaS With a Twist

September 12th, 2011

Dome9 has peeked its head out from under the beta covers and officially launched their product today.  I got an advance pre-brief last week and thought I’d summarize what I learned.

As it turns out I enjoy a storied past with Zohar Alon, Dome9’s CEO.  Back in the day, I was responsible for architecture and engineering of Infonet’s (now BT) global managed security services which included a four-continent deployment of Check Point Firewall-1 on Sun Sparcs.

Deploying thousands of managed firewall “appliances” (if I can even call them that now) and managing each of them individually with a small team posed quite a challenge for us.  It seems it posed a challenge for many others also.

Zohar was at Check Point and ultimately led the effort to deliver Provider-1 which formed the basis of their distributed firewall (and virtualized firewall) management solution which piggybacked on VSX.

Fast forward 15 years and here we are again — cloud and virtualization have taken the same set of security and device management issues and amplified them.  Zohar and his team looked at the challenge we face in managing the security of large “web-scale” cloud environments and brought Dome9 to life to help solve this problem.

Dome9’s premise is simple – use a centralized SaaS-based offering to help manage your distributed cloud access-control (read: firewall) management challenge using either an agent (in the guest) or agent-less (API) approach across multiple cloud IaaS platforms.

Their first iteration of the agent-based solution focuses on Windows and Linux-based OSes and can pretty much function anywhere.  The API version currently is limited to Amazon Web Services.

Dome9 seeks to fix the “open hole” access problem created when administrators create rules to allow system access and forget to close/remove them after the tasks are complete.  This can lead to security issues as open ports invite unwanted “guests.”  In their words:

  • Keep ALL administrative ports CLOSED on your servers without losing access and control.
  • Dynamically open any port On-Demand, any time, for anyone, and from anywhere.
  • Send time and location-based secure access invitations to third parties.
  • Close ports automatically, so you don’t have to manually reconfigure your firewall.
  • Securely access your cloud servers without fear of getting locked out.

The unique spin/value proposition with Dome9 in its initial release is the role/VM/user-focused and TIME-LIMITED access policies you put in place to enable either static (always-open) or dynamic (time-limited) access control for authorized users.

Administrators can set up rules in advance for access, or authorized users can request time-based access dynamically to previously-configured ports by clicking a button.  It quickly opens access and closes it once the time limit has been reached.
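
Conceptually (and this is my crude sketch of the idea, not Dome9’s implementation), the dynamic case on AWS boils down to something like the following, here using the classic boto library; the region, group name, port and CIDR are hypothetical.

    # A crude sketch of time-limited access (NOT Dome9's code): open a port
    # in an EC2 security group, then revoke it when the time limit expires.
    import time
    import boto.ec2  # classic boto EC2 bindings

    def open_port_temporarily(region, group_name, port, cidr, minutes):
        conn = boto.ec2.connect_to_region(region)
        group = conn.get_all_security_groups(groupnames=[group_name])[0]
        group.authorize(ip_protocol='tcp', from_port=port,
                        to_port=port, cidr_ip=cidr)
        try:
            time.sleep(minutes * 60)  # stand-in for a real scheduler
        finally:
            # Close the hole even if something blows up in the meantime.
            group.revoke(ip_protocol='tcp', from_port=port,
                         to_port=port, cidr_ip=cidr)

    # e.g., grant one admin SSH from a single address for 15 minutes:
    # open_port_temporarily('us-east-1', 'web-tier', 22, '198.51.100.7/32', 15)

The value-add, of course, is doing this centrally across clouds and guests, with identity, invitations and audit wrapped around it, rather than with a sleeping script.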

Basically Dome9 allows you to manage and reconcile “network”-based ACLs and — where used — AWS security groups (across regions) with guest-based firewall rules.  With the agent installed, it’s clear you’ll be able to do more in both the short and long term (think vulnerability management, configuration compliance, etc.), although they are quite focused on the access control problem today.

There are some workflow enhancements I suggested during the demo that would let “users” request access from “administrators” to ports not previously defined — imagine if port 443 is open to allow a user to install a plug-in that then needs a new TCP port to communicate.  If that port is not previously known/defined, there’s no automated way to open it without an out-of-band process, which makes things clumsy.

We also discussed the issue of importing/supporting identity federation in order to define “users” from the Enterprise perspective across multiple clouds.  They could use your input if you have any.

There are other startups with similar models today such as CloudPassage (I’ve written about them before here) who look to leverage SaaS-based centralized security services to solve IaaS-based distributed security challenges.

In the long term, I see Cloud security services being chained together to form an overlay of sorts.  In fact, CloudFlare (another security SaaS offering) announced a partnership with Dome9 for this very thing.

Dome9 has a 14-day free trial and two available pricing models:

  1. “Personal Server” – a FREE single protected server with a single administrator
  2. “Business Cloud” – Per-use pricing with 5 protected servers at $20 per month

If you’re dealing with trying to get a grip on your distributed firewall management problem, especially if you’re a big user of AWS, check out Dome9.

/Hoff


VMware vCloud Architecture ToolKit (vCAT) 2.0 – Get Some!

September 8th, 2011

Here’s a great resource for those of you trying to get your arms around VMware’s vCloud Architecture:

VMware vCloud Architecture ToolKit (vCAT) 2.0

This is a collection of really useful materials, clearly painting a picture of cloud rosiness but valuable for understanding how to approach the various deployment models and options for VMware’s cloud stack.
