Archive for January, 2012

Building/Bolting Security In/On – A Pox On the Audit Paradox!

January 31st, 2012

My friend and skilled raconteur Chris Swan (@cpswan) wrote an excellent piece a few days ago titled “Building security in – the audit paradox.”

This thoughtful piece was constructed to point out the challenges involved in providing auditability, visibility, and transparency in services — specifically cloud computing — in which the notion of building in or bolting on security is debated.

I think this is timely.  I have thought about this a couple of times myself, with one piece aligning heavily with Chris’ thoughts.

Chris’ discussion really contrasted the delivery/deployment models against the availability and operationalization of controls:

  1. If we’re building security in, then how do we audit the controls?
  2. Will platform as a service (PaaS) give us a way to build security in such that it can be evaluated independently of the custom code running on it?

Further, as part of some good examples, he points out that with separation of duties, the ability to apply “defense in depth” (hate that term) and the ability to respond to new threats, the “bolt-on” approach is useful — if not siloed:

There lies the issue – bolt on security is easy to audit. There’s a separate thing, with a separate bit of config (administered by a separate bunch of people) that stands alone from the application code.

…versus building secure applications:

Code security is hard. We know that from the constant stream of vulnerabilities that get found in the tools we use every day. Auditing that specific controls implemented in code are present and effective is a big problem, and that is why I think we’re still seeing so much bolting on rather than building in.

I don’t disagree with this at all.  Code security is hard.  People look for gap-fillers.  Still, the notion that Chris sees limited options for bolting security on versus integrating security (building it in) programmatically as part of the security development lifecycle leaves me a bit puzzled.

This identifies both the skills gap and the cultural gap between where we are with security and how cloud changes our process, technology, and operational approaches, but there are many options we should discuss.

Thus, what was interesting (read: what I disagree with) is what came next, wherein Chris maintained that one “can’t bolt on in the cloud”:

One of the challenges that cloud services present is an inability to bolt on extra functionality, including security, beyond that offered by the service provider. Amazon, Google etc. aren’t going to let me or you show up to their data centre and install an XML gateway, so if I want something like schema validation then I’m obliged to build it in rather than bolt it on, and I must confront the audit issue that goes with that.

While it’s true that CSPs may not enable/allow you to show up to their DC and “…install an XML gateway,” they are pushing the security deployment model toward virtual networking hooks, guest-based approaches within the VMs, and leveraging both the security and service models of cloud itself to solve these challenges.

I allude to this below, but as an example, there are now cloud services which can sit “in-line” or in conjunction with your cloud application deployments and deliver security as a service…application, information (and even XML) security as a service are here today and ramping!
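
To make the “building it in” half concrete: schema validation of the sort Chris mentions doesn’t require an in-line XML gateway at all; it can live in the application code itself. A minimal sketch using Python’s lxml library (the schema and file names here are hypothetical):

    # A sketch of "built-in" schema validation at the application layer,
    # in lieu of a bolt-on XML gateway. File names are hypothetical.
    from lxml import etree

    schema = etree.XMLSchema(etree.parse("order.xsd"))

    def validate_inbound(xml_bytes):
        """Reject any inbound document that doesn't conform to the schema."""
        try:
            doc = etree.fromstring(xml_bytes)
        except etree.XMLSyntaxError as e:
            return False, [str(e)]
        if schema.validate(doc):
            return True, []
        return False, [err.message for err in schema.error_log]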

While immature and emerging in some areas, I offer the following as evidence that the “bolt-on” approach is very much alive and kicking.  Given that “code security” is hard, the cloud providers harden/secure their platforms, but the app stacks that get deployed by the customers…those remain the customers’ concern.  Here are some options (with a sketch of one after the list):

  1. Introspection APIs (VMsafe)
  2. Security as a Service (Cloudflare, Dome9, CloudPassage)
  3. Auditing frameworks (CloudAudit, STAR, etc.)
  4. Virtual networking overlays & virtual appliances (vGW, VSG, Embrane)
  5. Software defined networking (Nicira, BigSwitch, etc.)
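
As an example of option 3, the premise of CloudAudit is that a provider exposes its audit assertions at well-known URLs so that a consumer or auditor can pull them programmatically. A minimal sketch, assuming a provider publishes CloudAudit-style manifests under a .well-known namespace; the host and control-framework identifiers below are hypothetical:

    # A sketch of programmatically pulling audit assertions from a
    # provider, CloudAudit-style. The base URL and control-framework
    # namespaces below are hypothetical placeholders.
    import requests

    BASE = "https://provider.example.com/.well-known/cloudaudit"
    CONTROL_FRAMEWORKS = [
        "org.cloudaudit.control-frameworks.iso-27002",
        "org.cloudaudit.control-frameworks.pci-dss",
    ]

    for framework in CONTROL_FRAMEWORKS:
        resp = requests.get(f"{BASE}/{framework}/manifest.xml", timeout=10)
        state = "available" if resp.ok else f"missing (HTTP {resp.status_code})"
        print(f"{framework}: {state}")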

Yes, some of them are platform specific, and I think Chris was mostly speaking about “Public Cloud,” but “bolt-on” options are most certainly available and are aggressively evolving.

I totally agree that from the PaaS/SaaS perspective, we are poised for many wins that can eliminate entire classes of vulnerabilities as the platforms themselves enforce better security hygiene and assurance BUILT IN.  This is just as much an emerging area as the BOLT ON solutions listed above.

In a prior post, “Silent Lucidity: IaaS – Already a Dinosaur. Rise of PaaSasarus Rex,” I wrote:

As I mention in my Cloudifornication presentation, I think that from a security perspective, PaaS offers the potential of eliminating entire classes of vulnerabilities in the application development lifecycle by enforcing sanitary programmatic practices across the derivative works built upon them.  I also look forward to APIs and standards that allow for consistency across providers. I think PaaS has the greatest potential to deliver this.

There are clearly trade-offs here, but as we start to move toward the two key differentiators (at least for public clouds) — management and security — I think the value of PaaS will really start to shine.

My opinion is that given the wide model of integration between various delivery and deployment models, we’re gonna need both for quite some time.

Back to Chris’ original point: the notion that auditors will in any way be able to easily audit code-based (built-in) security at the APPLICATION layer or the PLATFORM layer, versus the bolt-on layer, is really at the whim of the skill set of the auditors themselves and the checklists they use, which call out how one is audited:

Infrastructure as a service shows us that this can be done e.g. the AWS firewall is very straightforward to configure and audit (without needing to reveal any details of how it’s actually implemented). What can we do with PaaS, and how quickly?

This is a very simplistic example (more infrastructure versus applistructure perspective) but represents the very interesting battleground we’ll be entrenched in for years to come.
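
Chris’ AWS firewall example is worth making concrete, because it shows why bolt-on controls audit so cleanly: the control surface is a queryable API rather than application code. A minimal sketch of auditing security groups for world-open ingress, using the boto3 SDK (which postdates this post; the region and the “world-open” test are my own choices):

    # A sketch of auditing the AWS "firewall" (EC2 security groups)
    # via API -- the bolt-on control is a separate, queryable thing.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    for sg in ec2.describe_security_groups()["SecurityGroups"]:
        for perm in sg.get("IpPermissions", []):
            for ip_range in perm.get("IpRanges", []):
                if ip_range.get("CidrIp") == "0.0.0.0/0":
                    # Flag any rule open to the entire Internet.
                    print(sg["GroupId"], sg["GroupName"],
                          perm.get("IpProtocol"), perm.get("FromPort"))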

In the related posts below, you’ll see I’ve written a bunch about this and am working toward ensuring that, as really smart folks work to build it in, the ecosystem is encouraged to provide bolt-ons to fill those gaps.

/Hoff

Related articles


With Cloud, The PaaSibilities Are Endless…

January 26th, 2012

I read a very interesting article from ZDNet UK this morning titled “Amazon Cuts Off Stack at the PaaS.”

The gist of the article is that according to Werner Vogels (@werner), AWS’ CTO, they have no intention of delivering a PaaS service and instead expect to allow an ecosystem of PaaS providers, not unlike Heroku, to flourish atop their platform:

“We want 1,000 platforms to bloom,” said Vogels, before explaining Amazon has “no desire to go and really build a [PaaS].”

That’s all well and good, but it led me to scratch my head, especially with regard to what I *thought* AWS already offered in terms of PaaS with Beanstalk, which is described thusly in their FAQ:

Q: What is AWS Elastic Beanstalk?

AWS Elastic Beanstalk makes it even easier for developers to quickly deploy and manage applications in the AWS Cloud. Developers simply upload their application, and Elastic Beanstalk automatically handles the deployment details of capacity provisioning, load balancing, auto-scaling, and application health monitoring.

Q: How is AWS Elastic Beanstalk different from existing application containers or platform-as-a-service solutions?

Most existing application containers or platform-as-a-service solutions, while reducing the amount of programming required, significantly diminish developers’ flexibility and control. Developers are forced to live with all the decisions pre-determined by the vendor – with little to no opportunity to take back control over various parts of their application’s infrastructure. However, with AWS Elastic Beanstalk, developers retain full control over the AWS resources powering their application. If developers decide they want to manage some (or all) of the elements of their infrastructure, they can do so seamlessly by using AWS Elastic Beanstalk’s management capabilities.
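
For what it’s worth, the “simply upload their application” workflow in that FAQ maps onto a handful of API calls. A minimal sketch using the boto3 SDK (which postdates this post); the application, bucket, and solution-stack names are hypothetical:

    # A sketch of the upload-and-go deployment model the FAQ describes.
    # Names, bucket, and solution stack below are hypothetical.
    import boto3

    eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

    eb.create_application(ApplicationName="demo-app")
    eb.create_application_version(
        ApplicationName="demo-app",
        VersionLabel="v1",
        SourceBundle={"S3Bucket": "demo-bucket", "S3Key": "demo-app-v1.zip"},
    )
    eb.create_environment(
        ApplicationName="demo-app",
        EnvironmentName="demo-env",
        VersionLabel="v1",
        SolutionStackName="64bit Amazon Linux 2 v3.5.0 running Python 3.8",
    )
    # Beanstalk handles provisioning, load balancing, scaling and health
    # monitoring from here, which is why it smells an awful lot like PaaS.
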
While these snippets from the FAQ certainly seem to describe infrastructure components that enable PaaS (meta-PaaS?), when you combine the other elements of AWS’ offerings, it sure as heck sounds like PaaS regardless of what you call it.

In fact, a Twitter exchange with @GeorgeReese, @krishnan and @jamessaull well summarized the headscratching.

With all those components, AWS can certainly enable PaaS platforms like Heroku to “flourish.”

However, having all the raw components and simply not pointing at them and saying “PaaS” is like having all the components to assemble a bomb, not packaging it as such, and declaring it’s not dangerous because in that state it won’t go off.

I’d say the potential for going BOOM! is real.  It appears Marten Mickos was hinting at the same thing:

However, Mickos disputed Vogels’ claim that Amazon is going to let a thousand platforms bloom.

“He will always say that, and Amazon will slowly take a step higher and higher,” he said, before pointing to Beanstalk as an example. “[But] in my view PaaS has middleware components… and I could agree that it is okay to add [those] to an IaaS.”

In the long term, as I’ve stated prior, the value in platforms will be in how easy they make it for developers to create and deliver applications fluidly.

I may not be as good at marketing as some, but that sounds less like an infrastructure-centric business model and much more like an application-centric one.

Moving on up is where it’s at.  I saw the scratching on the cave walls when I wrote “Silent Lucidity: IaaS — Already A Dinosaur. The Evolution of PaaSasarus Rex” back in 2009.

What do you think?  Is AWS being coy?


QuickQuip: Vint Cerf “Internet Access Is Not a Human Right” < Agreed...

January 10th, 2012

Wow, what a doozy of an OpEd!

Vint Cerf wrote an article for the NY Times titled “Internet Access Is Not a Human Right,” wherein he suggests that Internet access and the technology that provides it is “…an enabler of rights, not a right itself” and that “…it is a mistake to place any particular technology in this exalted category [human right], since over time we will end up valuing the wrong things.”

This article is so rich in very interesting points that I could spend hours highlighting points to agree with as well as points to squint sternly at.

It made me think, and in conclusion I find myself in overall agreement.  This topic inflames passionate debate — some really interesting debate — such as that from Rob Graham (@erratarob) here [although I’m not sure how a discussion on human rights became anchored on U.S.-centric constitutional elements which don’t, by definition, apply to all humans…only Americans…]

This ends up being much more of a complex moral issue than I expected in reviewing others’ arguments.

I’ve positioned this point for discussion in many forums without stating my position and have generally become fascinated by the results.

What do you think — is Internet access (not the Internet itself) a basic human right?

/Hoff


QuickQuip: Don’t run your own data center if you’re a public IaaS < Sorta...

January 10th, 2012

Patrick Baillie, the CEO of Swiss IaaS provider CloudSigma, wrote a very interesting blog, published on GigaOm, titled “Don’t run your own data center if you’re a public IaaS.”

Baillie leads off by describing how AWS’ recent outage is evidence of why the complexity of running facilities (data centers) is best segregated from the services running atop them and outsourced to third parties in the business of such things:

Why public IaaS cloud providers should outsource their data centers

While there are some advantages for cloud providers operating data centers in-house, including greater control, capacity, power and security, the challenges, such as geographic expansion, connectivity, location, cost and lower-tier facilities can often outweigh the benefits. In response to many of these challenges, an increasing number of cloud providers are realizing the benefits of working with a third-party data center provider.

It’s a very interesting blog, sprinkled throughout with pros and cons of rolling your own versus outsourcing, but it falls down in carrying the logical burden of some of its assertions.

Perhaps I misunderstood, but the article seemed to focus on single-DC availability; per my friend @CSOAndy’s excellent summarization, “…he missed the obvious reason: you can arbitrage across data centers” and “…was focused on single DC availability. Arbitrage means you just move your workloads automagically.”

I’ll let you read the setup in its entirety, but check out the conclusion:

In reality, taking a look at public cloud providers, those with legacy businesses in hosting, including Rackspace and GoGrid, tend to run their own facilities, whereas pure-play cloud providers, like my company CloudSigma, tend to let others run the data centers and host the infrastructure.

The business of operating a data center versus operating a cloud is very different, and it’s crucial for such providers to focus on their core competency. If a provider attempts to do both, there will be sacrifices and financial choices with regards to connectivity, capacity, supply, etc. By focusing on the cloud and not the data center, public cloud IaaS providers don’t need to make tradeoffs between investing in the data center over the cloud, thereby ensuring the cloud is continually operating at peak performance with the best resources available.

The points above were punctuated as part of a discussion on Twitter where @georgereese commented “IaaS is all about economies of scale. I don’t think you win at #cloud by borrowing someone else’s”

Fascinating.  It’s times like these that I invoke the wisdom of the Intertubes and ask “WWWD” (or What Would Werner Do?)

If we weren’t artificially limited in this discussion to IaaS only, it would have been interesting to compare this to SaaS providers like Google or Salesforce, or better yet folks like Zynga…or even add supporting examples like Heroku (who run atop AWS but are now a part of Salesforce o_O)

I found many of the points raised in the discussion intriguing and good food for thought but I think that if we’re talking about IaaS — and we leave out AWS which directly contradicts the model proposed — the logic breaks almost instantly…unless we change the title to “Don’t run your own data center if you’re a [small] public IaaS and need to compete with AWS.”

Interested in your views…

/Hoff


QuickQuip: “Networking Doesn’t Need a VMWare” < tl;dr

January 10th, 2012

I admit I was enticed by the title of the blog and the introductory paragraph certainly reeled me in with the author creds:

This post was written with Andrew Lambeth.  Andrew has been virtualizing networking for long enough to have coined the term “vswitch”, and led the vDS distributed switching project at VMware

I can only assume that this is the same Andrew Lambeth who is currently employed at Nicira.  I had high expectations given the title, so I sat down, strapped in and prepared for a fire hose.

Boy did I get one…

27 paragraphs amounting to 1,601 words’ worth that basically concluded that server virtualization is not the same thing as network virtualization, that stateful L2 & L3 network virtualization at scale is difficult, and that ultimately virtualizing the data plane is the easy part, while the hard part of getting the mackerel out of the tin is virtualizing the control plane – statefully.*

*[These are clearly *my* words as the only thing fishy here was the conclusion…]
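
To see the distinction Lambeth is drawing, consider how little code a toy data plane needs, and how much it leaves unsolved. A throwaway sketch (mine, not Nicira’s) of per-host MAC learning:

    # A toy "data plane": per-host MAC learning and forwarding is the
    # trivial part. This is my illustration, not Nicira's code.
    class ToyVSwitch:
        def __init__(self):
            self.mac_table = {}  # MAC address -> port

        def learn(self, src_mac, in_port):
            self.mac_table[src_mac] = in_port

        def forward(self, dst_mac):
            # Unknown destination? Flood, like any learning switch.
            return self.mac_table.get(dst_mac, "flood")

    # The control-plane problem starts where this sketch ends: keeping
    # thousands of these tables consistent across hosts as VMs move,
    # without losing state or converging too slowly. That's the hard part.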

It seems the main point here, besides that mentioned above, is to stealthily and diligently distance Nicira as far from the description of “…could be to networking something like what VMWare was to computer servers” as possible.

This is interesting given that this is how they were described in a NY Times blog some months ago.  Indeed, this is exactly the description I could have sworn *used* to appear on Nicira’s own about page…it certainly shows up in Google searches of their partners o_O

In his last section titled “This is all interesting … but why do I care?,” I had selfishly hoped for that very answer.

Sadly, at the end of the day, Lambeth’s potentially excellent post appears more concerned about culling marketing terms than hammering home an important engineering nuance:

Perhaps the confusion is harmless, but it does seem to effect how the solution space is viewed, and that may be drawing the conversation away from what really is important, scale (lots of it) and distributed state consistency. Worrying about the datapath, is worrying about a trivial component of an otherwise enormously challenging problem

This smacks of positioning against both OpenFlow (addressed here) as well as other network virtualization startups.

Bummer.


When A FAIL Is A WIN – How NIST Got Dissed As The Point Is Missed

January 2nd, 2012

Randy Bias (@randybias) wrote a very interesting blog titled “Cloud Computing Came To a Head In 2011,” sharing his year-end perspective on the emergence of Cloud Computing and, most interestingly, the “…gap between ‘enterprise clouds’ and ‘web-scale clouds.’”

Given that I very much agree with the dichotomy between “web-scale” and “enterprise” clouds and the very different sets of respective requirements and motivations, Randy’s post left me confused when he basically skewered the early works of NIST and their Definition of Cloud Computing:

This is why I think the NIST definition of cloud computing is such a huge FAIL. It’s focus is on the superficial aspects of ‘clouds’ without looking at the true underlying patterns of how large Internet businesses had to rethink the IT stack.  They essentially fall into the error of staying at my ‘Phase 2: VMs and VDCs’ (above).  No mention of CAP theorem, understanding the fallacies of distributed computing that lead to successful scale out architectures and strategies, the core socio-economics that are crucial to meeting certain capital and operational cost points, or really any acknowledgement of this very clear divide between clouds built using existing ‘enterprise computing’ techniques and those using emergent ‘cloud computing’ technologies and thinking.

Nope. No mention of CAP theorem or socio-economic drivers, yet strangely the context of what the document was designed to convey renders this rant moot.

Frankly, NIST’s early work did more to further the discussion of *WHAT* Cloud Computing meant than almost any person or group evangelizing Cloud Computing…especially to a world of users whose most difficult challenge is trying to understand the differences between traditional enterprise IT and Cloud Computing.

As I reacted to this point on Twitter, Simon Wardley (@swardley) commented in agreement with Randy’s assertions, but strangely, what I again found odd was the misplaced angst in the criterion of “success” vs. “FAIL” against which both Simon and Randy were measuring the NIST document.

Both Randy and Simon seem to be judging NIST’s efforts against their failure to extol the virtues (the “WHY” versus the “WHAT” of Cloud) and, as such, suggest NIST was basically doing a disservice by perpetuating aged concepts rooted in archaic enterprise practices rather than boundary-stretching, trailblazing and promoting the enlightened stance of “web-scale” cloud.

Well…

The thing is, as NIST stated in both the purpose and audience sections of their document, the “WHY” of Cloud was not the main intent (and frankly best left to those who make a living talking about it…)

From the NIST document preface:

1.2 Purpose and Scope

Cloud computing is an evolving paradigm. The NIST definition characterizes important aspects of cloud computing and is intended to serve as a means for broad comparisons of cloud services and deployment strategies, and to provide a baseline for discussion from what is cloud computing to how to best use cloud computing. The service and deployment models defined form a simple taxonomy that is not intended to prescribe or constrain any particular method of deployment, service delivery, or business operation.

1.3 Audience

The intended audience of this document is system planners, program managers, technologists, and others adopting cloud computing as consumers or providers of cloud services.

This was an early work (the initial draft was released in 2009, the final in late 2011), and when it was written, many people — Randy, Simon and myself included — were still finding the best route, words and methodology to verbalize the “WHY.” And now it’s skewered as “mechanistic drivel.”*

At the time NIST was developing their document, I was working in parallel writing the “Architecture” chapter of the first edition of the Cloud Security Alliance’s Guidance for Cloud Computing.  I started out including my own definitional work in the document but later aligned to the NIST definitions because it was generally in line with mine and was well on the way to engendering a good deal of conversation around standard vocabulary.

This blog post summarized the work at the time (prior to the NIST inclusion).  I think you might find the author of the second comment on that post rather interesting, especially given how much of a FAIL this was all supposed to be… 🙂

It’s only now, with the clarity of hindsight, that it’s easier to take the “WHY” and utilize the “WHAT” (from NIST and others, especially practitioners like Randy) in a complementary manner, so we can talk less about the “what and why” and more about the “HOW.”

So while the NIST document wasn’t, isn’t and likely never will be “perfect,” and does not address every use case or even eloquently frame the “WHY,” it still serves as a very useful resource upon which many people can start a conversation regarding Cloud Computing.

It’s funny really…the first tenet of “web-scale” cloud from AWS — the “Kings of Cloud” Randy speaks about constantly — is “PLAN FOR FAIL.”  So if the NIST document truly meets this seal of disapproval and is a FAIL, then I guess it’s a win ;p

Your opinion?

/Hoff

*N.B. I’m not suggesting that critiquing a document is somehow verboten or that NIST is somehow infallible or sacrosanct — far from it.  In fact, I’ve been quite critical and vocal in my feedback with regard to both this document and efforts like FedRAMP.  However, that feedback came during the documents’ construction, with the intent to make them better within the context for which they were designed, rather than through the rear-view mirror.
