Archive for the ‘Cloud Computing’ Category

Is What We Need…An OpSec K/T Boundary Extinction-Level Event?

June 21st, 2012

Tens of millions of Aons (a new quantification of time based on Amazon Web Services AMI spin-ups) from now, archeologists and technosophers will look back on the inevitable emergence of Cloud in the decade following the double-oughts and muse about the mysterious disappearance of the security operations species…

Or not.

The “Cloud Security, Meh!” crowd are an interesting bunch. They don’t seem to like change much.  To be fair, they’re not incentivized to.  However, while difficult, change is good…it just takes a lot to understand that sometimes.

It occurs to me that if we expect behavior to change in the way in which we approach “security,” it must start with a reset of expectations surrounding how we evaluate outcomes and how we’re measured, and most importantly, the actual security leadership itself must change.

Most seasoned CxOs these days who have been in the business for 15+ years are in their late 30’s/early 40’s.  Most of “us” — from official scientifical research I have curated [at the bar] — came from System Administrator/Network Administrator roles back in the 80’s/90’s.

Now, what’s intriguing is that back then, “security” was just one functional component and responsibility of many duties slapped on the back of overworked and underfunded “router jockeys” or “Unix neckbeards.”  Back in the day we did it all — we managed the network, massaged the Solaris/NT boxes, helped deploy and manage the apps and were responsible for “securing” it all as we connected stuff to the Internet.

You know, like, um, DevOps.

So today in larger organizations (notsomuch in smaller orgs/startups), we have a raging rejection of this generalized approach to service delivery/IT by the VERY SAME individuals who arose phoenix-like from the crater left when the Internet exploded and the rampant adoption of technology and siloed operational models became “best practice.” Compliance didn’t help.  Then they got promoted.

In many cases then, the bristled reaction by security folks to things like virtualization, Cloud, Agile, DevOps, etc. is highly generational.  The up-and-coming rank-and-file digital natives who are starting to break into the industry will know these things as “normal,” much like a preschooler uses gestures on an iPad…it just…is.

However, their leadership — “us” — the 40+ year olds who are large and in charge are busy barking that the youngsters should get off our IT lawn.  This is very much a generational issue.

So I think what that means is that ultimately we’re waiting for our own version of the K/T boundary extinction-level “opportunity,” the event at the boundary of the Cretaceous/Tertiary periods 65 million years ago when almost all of the Earth’s large vertebrates — all dinosaurs, plesiosaurs, mosasaurs, and pterosaurs — suddenly became extinct.  Boom.  Gone.  Damned meteorites.

Now, unless the next great piece of malware can target, infect and destroy humans as we Bing/Google/click our way into stupidity (coming next week from Iran?) à la Stuxnet/Flame, we’re not going to see these stodgy C(I)SOs vanish instantly, but over the next two decades we’ll see a new generation arise who think, act and believe differently than we do today…I just hope it doesn’t take that long.

This change…it’s natural. It’s evolution, and patterns like these repeat (see the theory of punctuated equilibrium) even in the face of revolution.  It’s messy.

More often than not, it’s not the technology that’s the problem with “security” when we hit one of these inflection points in computing. No, it’s the organizational, operational, cultural, fiscal, and (dare I say) religious issues that hold us back.  Innovation breeds more innovation unless it’s shackled by people who can’t think outside of the box.

That right there is what defines a dino/plesio/mosa/ptero-saur.

Come to think of it, maybe we do need an OpSec extinction-level event to move us forward instead of waiting 20 years for the AARP forced slide to Florida.

Or, in the words of Gunny Highway from Heartbreak Ridge, we must “Improvise, adapt and overcome.”

If that’s not a DevOps Darwinian double-entendre, I don’t know what is 😉

Don’t be a dinosaur.

/Hoff

PrivateCore: Another Virtualization-Enabled Security Solution Launches…

June 21st, 2012

On the heels of Bromium’s coming-out party yesterday at GigaOM’s Structure conference, PrivateCore — a company founded by VMware vets Oded Horovitz and Carl Waldspurger and Google’s Steve Weis — announced a round of financing and what I interpret as a more interesting and focused raison d’être.

In videos released previously, Oded described the company’s focus as protecting servers (cloud or otherwise) against physical incursion, such as attacks that extract contents from memory and require physical access.

From what I could glean, the PrivateCore solution utilizes encryption and CPU cache (need to confirm) to provide memory isolation to render these attack vectors moot.

What’s interesting is the way in which PrivateCore is now highlighting the vehicle for their solution: a “hardened hypervisor.”

It will be interesting to see how well they can market this approach/technology (and to whom), what sort of API/management planes their VMM provides and how long they stand alone before being snapped up — perhaps even by VMware or Citrix.

More good action (and $2.25M in funding) in the virtual security space.

/Hoff

Bridging the Gap Between Devs & Security – A Collaborative Suggestion…

May 23rd, 2012

After my keynote at Gluecon (“Shit My Cloud Evangelist Says…Just Not To My CSO”), I was asked by an attendee what things he could do within his organization to repair the damage and/or mistrust between developers and security organizations in enterprises.

Here’s what I suggested based on past experience:

  1. Reach out and have a bunch of “brown bag lunches” wherein you host-swap each week; devs and security folks present to one another on relevant, interesting or new solutions in their respective areas
  2. Pick a project that takes a yet-to-be-solved interesting business challenge that isn’t necessarily on the high priority project list and bring the dev and security teams together as if it were an actual engagement.

Option 1 starts the flow of information.  Option 2 treats the project as if it were high priority but allows security and dev to work together to talk about platform choices, management, security, etc. and because it’s not mission critical, mistakes can be made and learned from…together.

For example, pick something like building a new app service that uses node.js and MongoDB and figure out how to build, deploy and secure it…as if you were going to deploy to public cloud from day one (and maybe you will.)
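To make that concrete, here’s a minimal sketch of what such a strawman service might look like, assuming Node.js/TypeScript with Express, Helmet and the official MongoDB driver (the package choices, names and limits are mine, purely for illustration):

```typescript
// Hypothetical strawman service: a tiny notes API on Express + MongoDB.
// Every inline decision below is a natural dev/security conversation point.
import express from "express";
import helmet from "helmet";
import { MongoClient, ObjectId } from "mongodb";

const app = express();
app.use(helmet());                        // sane HTTP security headers from day one
app.use(express.json({ limit: "10kb" })); // bound request size up front

const mongo = new MongoClient(process.env.MONGO_URL ?? "mongodb://localhost:27017");

app.post("/notes", async (req, res) => {
  // input validation agreed on together, not bolted on after the fact
  const body = String(req.body?.body ?? "");
  if (body.length === 0 || body.length > 1000) {
    return res.status(400).json({ error: "body must be 1-1000 chars" });
  }
  const result = await mongo.db("demo").collection("notes").insertOne({ body });
  res.status(201).json({ id: result.insertedId });
});

app.get("/notes/:id", async (req, res) => {
  // ObjectId validation doubles as sanitization of the lookup key
  if (!ObjectId.isValid(req.params.id)) return res.status(400).end();
  const note = await mongo
    .db("demo")
    .collection("notes")
    .findOne({ _id: new ObjectId(req.params.id) });
  note ? res.json(note) : res.status(404).end();
});

mongo.connect().then(() => app.listen(3000));
```

The point isn’t the code; it’s that every one of those inline choices (header hygiene, payload limits, input validation, key sanitization) becomes a shared decision when security is in the room from day one.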

You’ll be amazed to see the trust it builds, especially in light of developers enrolling security in their problem and letting them participate from the start versus being the speed bump later.

10 minutes later it’ll be a DevOps love-fest. 😉

/Hoff

Overlays: Wasting Away Again In Abstractionville…

May 5th, 2012

I’m about to get in a metal tube and spend 14 hours in the Clouds.  I figured I’d get something off my chest while I sit outside in the sun listening to some Jimmy Buffett.

[Network] overlays.  They bug me.  Let me tell you why.

The Enterprise, when considering “moving to the Cloud,” generally takes one of two approaches depending upon culture, leadership, business goals, maturity and sophistication:

  1. Go whole-hog with an all-in Cloud strategy. 
    Put an expiration date on maintaining/investing in legacy apps/infrastructure and instead build an organizational structure, technology approach, culture, and operational model designed around building applications that are optimized for “cloud” — and that means SaaS, PaaS, and IaaS across public, private and hybrid models, with a focus on how application delivery and information (including protecting it) are very different than in legacy deployments, or…
  2. Adopt a hedging strategy to get to Cloud…someday.
    This usually means opportunistically picking low risk, low impact, low-hanging fruit that can be tip-toed toward and scraping together the existing “rogue” projects already underway, sprinkling in some BYOD, pointing to a virtualized datacenter and calling a 3-day provisioning window with change control “on-demand” and “Cloud.”  Oh, and then deploying gateways, VPNs, data encryption and network overlays as an attempt to plug holes by paving over them, and calling that “Cloud,” also.

See that last bit?

This is where so-called “software defined networking (SDN),” the myriad of models that utilize “virtualization” and all sorts of new protocols and service delivery mechanisms are being conflated into the “will it blend” menagerie called “Cloud.”  It’s an “eyes wide shut” approach.

Now, before you think I’m being dismissive of “virtualization” or SDN, I’m not.  I believe. Wholesale. But within the context of option #2 above, it’s largely a waste of time, money, and effort.  It’s putting lipstick on a pig.

You either chirp or get off the twig.

Picking door #2 is where the Enterprise looks at shiny new things based on an article in the WSJ or Wired, or via a peer-group golf outing, and says “I bet if we added yet another layer of abstraction atop the already rapidly abstracting piles of shite we have, we would be more agile, nimble, efficient and secure.”  We would be “cloud” enabled.

[To a legacy-minded Enterprise,] Cloud is the revenge of VPN and PKI…

The problem is that just like the folks in Maine will advise: “You can’t get there from here.”  I mean, you can, but the notion that you’ll actually pull it off by stacking turtles, applying band-aids and squishing the tyranny of VLANs by surrounding them in layer 3 network overlays and calling this the next greatest thing since sliced bread is, well, bollocks.

Look, I think SDN, protocols like OpenFlow and VXLAN/NVGRE, etc. are swell.  I think the separation of control and data planes and the notion that I can programmatically operate my network is awesome.  I think companies like Nicira and Big Switch are doing really interesting things.  I think that CloudStack, OpenStack and VMware present real opportunity to make things “better.”
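And when I say “programmatically operate my network,” I mean something like the sketch below. To be clear, this is entirely hypothetical: the endpoint, auth and JSON shape are invented for illustration and imply no particular vendor’s API:

```typescript
// Hypothetical sketch: pushing a flow rule to an SDN controller's REST API.
// Everything here (URL, token, payload shape) is invented for illustration.
const controller = "https://sdn-controller.example.com";

await fetch(`${controller}/flows`, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.SDN_TOKEN}`,
  },
  body: JSON.stringify({
    match: { etherType: "ipv4", dstCidr: "10.0.5.0/24", dstPort: 443 },
    actions: [{ type: "output", port: 7 }], // steer matching traffic out port 7
    priority: 100,
  }),
});
```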

Hey, look, we’re just like Google and Amazon Web Services now!

But to an Enterprise without a real plan as to what “Cloud” really means to their business, these are largely overlays within the context of #2.  Within the context of #1, they’re simply mom and apple pie and are, for the most part, invisible.  That’s not where the focus actually is.

That said, these things may give a transitional Enterprise pause, but they should be looked upon as breadcrumbs that mark a journey, not the destination.  They’re a crutch and another band-aid to solve legacy problems.  They’re really a means to an end.

These “innovations” *are* a step in the right direction.  They will let us do great things. They will let a whole new generation of operational models and a revitalized ecosystem flourish AND it will encourage folks to think differently.  But about what?  And to solve what problem(s)?

If you simply expect to layer them on your legacy infrastructure, operational models and people and call it “Cloud,” you’re being disingenuous.

Ultimately, to abuse an analogy, network overlays are a layover on the itinerary of our journey to the Cloud, but not where we should ultimately land. I see too many companies focusing on the transition…and by the time they get there, the target will have moved.  Again.  Just like it always does.

They’re hot now because they reflect something we should have done a long time ago, but like hypervisors, one day [soon] network overlays will become just a feature and not a focus.

/Hoff

Incomplete Thought: Will the Public Cloud Create a Generation Of Network Stupid?

March 26th, 2012

Short and sweet…

With the continued network abstraction and “simplicity” presented by public cloud platforms like AWS EC2* wherein instances are singly-homed and the level of networking is so dumbed down as to make deep networking knowledge “unnecessary,” will the skill sets of next generation operators become “network stupid?”
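To put a fine point on it, here’s roughly the sum total of “networking” an EC2 consumer expresses when spinning up an instance. This is a hedged sketch using the AWS SDK for JavaScript (v3); the AMI and security group are placeholders:

```typescript
// Sketch: all of the "networking" a typical EC2 consumer ever touches.
// AMI ID and security group name are placeholders.
import { EC2Client, RunInstancesCommand } from "@aws-sdk/client-ec2";

const ec2 = new EC2Client({ region: "us-east-1" });

await ec2.send(new RunInstancesCommand({
  ImageId: "ami-xxxxxxxx",   // placeholder AMI
  InstanceType: "m1.small",
  MinCount: 1,
  MaxCount: 1,
  SecurityGroups: ["web"],   // one named port-filter list; no VLANs, no routing
}));                         // protocols, no spanning tree; one NIC, one address
```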

The platform operators will continue to hire skilled network architects, engineers and operators, but the ultimate consumers of these services are being sold on the fact that they won’t have to, and in many cases this means that “networking” as a discipline may face a skills shortage.

The interesting implication here is that even with all this abstraction and these opaque stacks, resilient design still depends upon a great deal of “networking” — although much of it is layer 4 and above.  Yep, it’s still TCP/IP, but the implications of dumbing down the stack will be profound, especially if one recognizes that ultimately these public clouds will interconnect with private clouds, and the two networking models are profoundly different.

…think VMware versus AWS EC2…or check out the meet-in-the-middle approach with OpenStack and Quantum…

I’m concerned that we’re still so bifurcated in our discussions of networking and the Cloud.

On the one hand we’re yapping at one another about stretched L2 domains, fabrics and control/data plane separation, or staring into the abyss of L7 proxies and DPI…all the while the implications of SDN and the emergence of new protocols, the majority of which are irrelevant to the consumers deploying VMs and apps atop IaaS and PaaS (not to mention SaaS), make these discussions seem silly.

On the other hand, DevOps/NoOps folks push their code to platforms that rely less and less on needing to understand or care how the underlying “network” works.

It’s hard to tell whether “networking” in the pure sense will be important in the long term.

Or as Kaminsky so (per usual) elegantly summarized:

What are your thoughts?

/Hoff

*…and yet we see more “complex” capabilities emerging in scenarios such as AWS VPC…

 


Security As A Service: “The Cloud” & Why It’s a Net Security Win

March 19th, 2012

If you’ve been paying attention to the rash of security startups entering the market today, you will no doubt notice the theme wherein the majority of them are, from the get-go, organizing around deployment models which operate from “The Cloud.”

We can argue that “security as a service” usually refers to security services provided by a third party using the SaaS (software as a service) model, but there’s a compelling set of capabilities that enables companies large and small to be effective, efficient and cost-manageable as we embrace the “new” world of highly distributed applications, content and communications (cloud and mobility combined).

As with virtualization, when one discusses “security” and “cloud computing,” three distinct perspectives often get conflated (from my post “Security: In the Cloud, For the Cloud & By the Cloud…“):

In the same way that I differentiated “Virtualizing Security, Securing Virtualization and Security via Virtualization” in my Four Horsemen presentation, I ask people to consider these three models when discussing security and Cloud:

  1. In the Cloud: Security (products, solutions, technology) instantiated as an operational capability deployed within Cloud Computing environments (up/down the stack.) Think virtualized firewalls, IDP, AV, DLP, DoS/DDoS, IAM, etc.
  2. For the Cloud: Security services that are specifically targeted toward securing OTHER Cloud Computing services, delivered by Cloud Computing providers (see next entry). Think cloud-based Anti-spam, DDoS, DLP, WAF, etc.
  3. By the Cloud: Security services delivered by Cloud Computing services which are used by providers in option #2 and which often rely on those features described in option #1.  Think, well…basically any service these days that brands itself as Cloud… ;)

What I’m talking about here is really item #3; security “by the cloud,” wherein these services utilize any cloud-based platform (SaaS, PaaS or IaaS) to deliver security capabilities on behalf of the provider or ultimate consumer of services.

For the SMB/SME/Branch, one can expect a hybrid model of on-premises physical (multi-function) devices that also incorporate some sort of redirect or offload to these cloud-based services. Frankly, the same model works for the larger enterprise, but in many cases regulatory, privacy and IP concerns arise.  This is where “private” (or dedicated) versions of these services are requested (either on-premises or off, but dedicated).

Service providers see a large opportunity to finally deliver value-added, scalable and revenue-generating security services atop what they offer today.  This is the realized vision of the long-awaited “clean pipes” and “secure hosting” capabilities.  See this post from 2007: “Clean Pipes – Less Sewerage or More Potable Water?”

If you haven’t noticed your service providers dipping their toes here, you certainly have seen startups (and larger security players) do so.  Here are just a few examples:

  • Qualys
  • Trend Micro
  • Symantec
  • Cisco (Ironport/ScanSafe)
  • Juniper
  • CloudFlare
  • ZScaler
  • Incapsula
  • Dome9
  • CloudPassage
  • Porticor
  • …and many more

As many vendors “virtualize” their offerings and start to realize that through basic networking, APIs, service chaining, traffic steering and security intelligence/analytics these solutions become more scalable, leverageable and interoperable, the services you’ll be able to consume will also increase…and they will become more application- and information-centric in nature.
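As a purely hypothetical illustration of what consuming one of these API-driven services might look like (the service name, endpoint and response shape below are invented, not any particular vendor’s):

```typescript
// Invented example of security "by the cloud": hand a URL to a cloud
// scanning service over a simple REST API and act on the verdict.
const resp = await fetch("https://api.scrubber.example.com/v1/scan", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "X-Api-Key": process.env.SCRUB_KEY ?? "",
  },
  body: JSON.stringify({ url: "https://app.example.com/uploads/42" }),
});

const verdict = await resp.json(); // e.g. { clean: false, findings: ["malware:.."] }
if (!verdict.clean) {
  // steer the content to quarantine instead of the app tier
}
```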

Again, this doesn’t mean the disappearance of on-premises or host-based security capabilities, but you should expect the cloud (and its derivative offshoots like Big Data) to deliver some really awesome hybrid security capabilities that make your life easier.  Rich Mogull (@rmogull) and I gave about 20 examples of this in our “Grilling Cloudicorns: Mythical CloudSec Tools You Can Use Today” talk at RSA last month.

Get ready, because while security folks often eye “The Cloud” suspiciously, it also offers up a set of emerging solutions that will undoubtedly enable more efficient, effective and affordable security capabilities, freeing us to focus more on the things that matter.

/Hoff


AwkwardCloud: Here’s Hopin’ For Open

February 14th, 2012

MAKING FRIENDS EVERYWHERE I GO…

There’s no way to write this without making it seem like I’m attacking the person whose words I am about to stare rudely at, squint and poke out my tongue.

No, it’s not @reillyusa, featured to the right.  But that expression about sums up my motivation.

Because this ugly game of “Words With Friends” is likely to be received as though I’m at odds with what represents the core marketing message of a company, I think I’m going to be voted off the island.

Wouldn’t be the first time.  Won’t be the last.  It’s not personal.  It’s just cloud, bro.

This week at Cloud Connect, @randybias announced that his company, Cloudscaling, is releasing a new suite of solutions branded under the marketing moniker of  “Open Cloud.”

I started to explore my allergy to some of these message snippets as they were strategically “leaked” last week in a most unfortunate Twitter exchange.  I promised I would wait until the actual launch to comment further.

This is my reaction to the website, press release and blog only.  I’ve not spoken to Randy.  This is simply my reaction to what is being placed in public.  It’s not someone else’s interpretation of what was said.  It’s straight from the Cloud Pony’s mouth. ;p

GET ON WITH IT THEN!

“Open Cloud” is described as a set of solutions for those looking to deploy clouds that provide “… better economics, greater flexibility, and less lock-in, while maintaining control and governance” than so-called Enterprise Clouds that are based on what Randy tags as more proprietary foundations.

The case is made that enterprises will really want to build two clouds: one to run legacy apps and one to run purpose-built cloud-ready applications.  I’d say that enterprises that have a strategy are likely looking forward to using clouds of both models…and probably a few more, such as SaaS and PaaS.

This is clearly a very targeted solution which looks to replicate AWS’ model for enterprises or SPs who are looking to exercise more control over the fate of their infrastructure.  How much runway this has against the onslaught of PaaS and SaaS remains to be seen.

I think it’s a reasonable bet there’s quite a bit of shelf life left on IaaS and I wonder if we’ll see follow-on generations to focus on PaaS.

Yet I digress…

This is NOT going to be a rant about the core definition of “Open,” (that’s for Twitter) nor is this going to be one of those 40 pagers where I deconstruct an entire blog.  It would be fun, easy and rather useful, but I won’t.

No. Instead I will suggest that the use of the word “Open” in this press release is nothing more than opportunistic marketing, capitalizing on other recent uses of the Open* prefix such as “OpenCompute, OpenFlow, Open vSwitch, OpenStack, etc.” and is a direct shot across the bow of other companies that have released similar solutions in the recent past (Cloud.com, Piston, Nebula).

If we look at what makes up “Open Cloud,” we discover it is framed upon on four key solution areas and supported by design blueprints, support and services:

  1. Open Hardware
  2. Open Networking
  3. Open APIs
  4. Open Source Software

I’m not going to debate the veracity or usefulness of some of these terms directly, but we’ll come back to them as a reference in a second, especially the notion of “open hardware.”

The one thing that really stuck in my craw was the manufactured criteria that somehow defined the so-called “litmus tests” associated with “Enterprise” versus “Open” clouds.

Randy suggests that if more than half of the items in the left-hand column apply to you, you’re using a cloud built with “enterprise computing technology,” whereas if the same holds true for the right-hand column, you’re using an “open” cloud:

So here’s the thing.  Can you explain to me what spinning up 1,000 VMs in less than 5 minutes has to do with being “open?”  Can you tell me what competing with AWS on price has to do with being “open?” Can you tell me how Hadoop performance has anything to do with being “open?”  Why does using two third-party companies’ management services define “open?”

Why on earth does the complexity or simplicity of networking stacks define “openness?”

Can you tell me how, if Cloudscaling’s “Open Cloud” uses certified hardware from “name brand” vendors like Arista, this is in any way more “open” than an alternative solution using Cisco?

Can you tell me if “Open Cloud” is more “open” than Piston Cloud which is also based upon OpenStack but also uses specific name-brand hardware to run?  If “Open Cloud” is “open,” and utilizes open source, can I download all the source code?

These are simply manufactured constructs which do little service toward actually pointing out the real business value of the solution and instead cloak the wolf in “open” sheep’s clothing.  It’s really unfortunate.

The end of my rant here is that by co-opting the word “open,” this takes a perfectly reasonable approach born of a company’s experience in building a well-sorted, (supposedly) more economical and supportable set of cloud solutions, and ruins it by letting its karma get run over by its dogma.

Instead of focusing on the merits of the solution as a capable building block for building plain better clouds, this reads like a manifesto which may very well turn people off.

Am I being unfair in calling this out?  I don’t think so.  Would some prefer a private conversation over a beer to discuss?  Most likely.  However, there’s a disconnect here, and it stems from publicly pushing a message and marketing a set of solutions that I hope will withstand the scrutiny of this A-hole with a blog.

Maybe I’m making a mountain out of a molehill…

Again, I’m not looking to pick on Cloudscaling.  I think the business model and the plan is solid as is evidenced by their success to date.  I wish them nothing but success.

I just hope that what comes out the other end is their being “open” to considering a better adjective and a more useful set of criteria to define the merits of the solution.

/Hoff


Building/Bolting Security In/On – A Pox On the Audit Paradox!

January 31st, 2012

My friend and skilled raconteur Chris Swan (@cpswan) wrote an excellent piece a few days ago titled “Building security in – the audit paradox.”

This thoughtful piece was constructed to point out the challenges involved in providing auditability, visibility, and transparency in services — specifically cloud computing — in which the notion of building in or bolting on security is debated.

I think this is timely.  I have thought about this a couple of times, with one piece aligned heavily with Chris’ thoughts.

Chris’ discussion really contrasted the delivery/deployment models against the availability and operationalization of controls:

  1. If we’re building security in, then how do we audit the controls?
  2. Will platform as a service (PaaS) give us a way to build security in such that it can be evaluated independently of the custom code running on it?

Further, as part of some good examples, he points out that with separation of duties, the ability to apply “defense in depth” (hate that term), and the ability to respond to new threats, the “bolt-on” approach is useful — if not siloed:

There lies the issue – bolt on security is easy to audit. There’s a separate thing, with a separate bit of config (administered by a separate bunch of people) that stands alone from the application code.

…versus building secure applications:

Code security is hard. We know that from the constant stream of vulnerabilities that get found in the tools we use every day. Auditing that specific controls implemented in code are present and effective is a big problem, and that is why I think we’re still seeing so much bolting on rather than building in.

I don’t disagree with this at all.  Code security is hard.  People look for gap-fillers.  The notion that Chris finds limited options for bolting security on versus integrating security (building it in) programmatically as part of the security development lifecycle leaves me a bit puzzled.

This identifies both the skills and cultural gap between where we are with security and how cloud changes our process, technology, and operational approaches, but there are many options we should discuss.

Thus what was interesting (read: what I disagree with) is what came next, wherein Chris maintained that one “can’t bolt on in the cloud”:

One of the challenges that cloud services present is an inability to bolt on extra functionality, including security, beyond that offered by the service provider. Amazon, Google etc. aren’t going to let me or you show up to their data centre and install an XML gateway, so if I want something like schema validation then I’m obliged to build it in rather than bolt it on, and I must confront the audit issue that goes with that.

While it’s true that CSPs may not enable/allow you to show up to their DC and “…install an XML gateway,” they are pushing the security deployment model toward virtual networking hooks, the guest-based approach within the VMs, and leveraging both the security and service models of cloud itself to solve these challenges.

I allude to this below, but as an example, there are now cloud services which can sit “in-line” or in conjunction with your cloud application deployments and deliver security as a service…application, information (and even XML) security as a service are here today and ramping!
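To be fair to the “build it in” side, here’s a minimal sketch of what Chris’ schema-validation example looks like when it lives inside the application. I’m using JSON Schema via the “ajv” package as a stand-in for his XML case; the schema itself is illustrative:

```typescript
// Sketch of "building it in": schema validation living inside the app code
// rather than on a bolted-on gateway appliance. Schema is illustrative.
import Ajv from "ajv";

const ajv = new Ajv();
const validateOrder = ajv.compile({
  type: "object",
  required: ["sku", "qty"],
  additionalProperties: false, // reject anything the schema doesn't name
  properties: {
    sku: { type: "string", pattern: "^[A-Z0-9-]{4,32}$" },
    qty: { type: "integer", minimum: 1, maximum: 1000 },
  },
});

export function acceptOrder(payload: unknown): void {
  if (!validateOrder(payload)) {
    // this is the control an auditor now has to find *inside* the codebase
    throw new Error("invalid order: " + ajv.errorsText(validateOrder.errors));
  }
  // ...process the order
}
```

Note the audit paradox in miniature: the control exists, but proving its presence and effectiveness now means reading code, not checking a box on an appliance.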

While immature and emerging in some areas, I offer the following suggestions showing that the “bolt-on” approach is very much alive and kicking.  Given that “code security” is hard, the cloud providers harden/secure their platforms, but the app stacks that get deployed by the customers are the customers’ concern, and here are some options:

  1. Introspection APIs (VMsafe)
  2. Security as a Service (CloudFlare, Dome9, CloudPassage)
  3. Auditing frameworks (CloudAudit, STAR, etc.)
  4. Virtual networking overlays & virtual appliances (vGW, VSG, Embrane)
  5. Software defined networking (Nicira, Big Switch, etc.)

Yes, some of them are platform-specific and I think Chris was mostly speaking about “Public Cloud,” but “bolt-on” options are most certainly available and are aggressively evolving.

I totally agree that from the PaaS/SaaS perspective, we are poised for many wins that can eliminate entire classes of vulnerabilities as the platforms themselves enforce better security hygiene and assurance BUILT IN.  This is just as emerging as the BOLT ON solutions I listed above.

In a prior post, “Silent Lucidity: IaaS – Already a Dinosaur. Rise of PaaSasarus Rex,” I wrote:

As I mention in my Cloudifornication presentation, I think that from a security perspective, PaaS offers the potential of eliminating entire classes of vulnerabilities in the application development lifecycle by enforcing sanitary programmatic practices across the derivative works built upon them.  I look forward also to APIs and standards that allow for consistency across providers. I think PaaS has the greatest potential to deliver this.

There are clearly trade-offs here, but as we start to move toward the two key differentiators (at least for public clouds) — management and security — I think the value of PaaS will really start to shine.

My opinion is that given the wide model of integration between various delivery and deployment models, we’re gonna need both for quite some time.

Back to Chris’ original point: the notion that auditors will in any way be able to easily audit code-based (built-in) security at the APPLICATION layer or the PLATFORM layer versus the bolt-on layer really rests on the skill sets of the auditors themselves and the checklists they use, which call out how one is audited:

Infrastructure as a service shows us that this can be done e.g. the AWS firewall is very straightforward to configure and audit (without needing to reveal any details of how it’s actually implemented). What can we do with PaaS, and how quickly?

This is a very simplistic example (more infrastructure versus applistructure perspective) but represents the very interesting battleground we’ll be entrenched in for years to come.
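For what it’s worth, the kind of infrastructure-level audit Chris cites is eminently scriptable today. Here’s a hedged sketch using the AWS SDK for JavaScript (v3) that enumerates security group rules and flags world-open ones; the region and output format are illustrative:

```typescript
// Sketch of the bolt-on audit Chris alludes to: enumerate AWS security
// group rules and flag world-open ones.
import { EC2Client, DescribeSecurityGroupsCommand } from "@aws-sdk/client-ec2";

const ec2 = new EC2Client({ region: "us-east-1" });
const { SecurityGroups = [] } = await ec2.send(new DescribeSecurityGroupsCommand({}));

for (const sg of SecurityGroups) {
  for (const perm of sg.IpPermissions ?? []) {
    for (const range of perm.IpRanges ?? []) {
      if (range.CidrIp === "0.0.0.0/0") {
        console.log(
          `${sg.GroupName}: ${perm.IpProtocol} ` +
          `${perm.FromPort ?? "*"}-${perm.ToPort ?? "*"} open to the world`
        );
      }
    }
  }
}
```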

You’ll see I’ve written a bunch about this and am working toward ensuring that as really smart folks work to build it in, the ecosystem is encouraged to provide bolt-ons to fill those gaps.

/Hoff


With Cloud, The PaaSibilities Are Endless…

January 26th, 2012

I read a very interesting article from ZDNet UK this morning titled “Amazon Cuts Off Stack at the PaaS.”

The gist of the article is that according to Werner Vogels (@werner), AWS’ CTO, they have no intention of delivering a PaaS service and instead expect to allow an ecosystem of PaaS providers, not unlike Heroku, to flourish atop their platform:

“We want 1,000 platforms to bloom,” said Vogels, before explaining Amazon has “no desire to go and really build a [PaaS].”

That’s all well and good, but it led me to scratch my head, especially with regard to what I *thought* AWS already offered in terms of PaaS with Elastic Beanstalk, which is described thusly in their FAQ:

Q: What is AWS Elastic Beanstalk?

AWS Elastic Beanstalk makes it even easier for developers to quickly deploy and manage applications in the AWS Cloud. Developers simply upload their application, and Elastic Beanstalk automatically handles the deployment details of capacity provisioning, load balancing, auto-scaling, and application health monitoring.

Q: How is AWS Elastic Beanstalk different from existing application containers or platform-as-a-service solutions?

Most existing application containers or platform-as-a-service solutions, while reducing the amount of programming required, significantly diminish developers’ flexibility and control. Developers are forced to live with all the decisions pre-determined by the vendor – with little to no opportunity to take back control over various parts of their application’s infrastructure. However, with AWS Elastic Beanstalk, developers retain full control over the AWS resources powering their application. If developers decide they want to manage some (or all) of the elements of their infrastructure, they can do so seamlessly by using AWS Elastic Beanstalk’s management capabilities.

While these snippets from the FAQ describe infrastructure components that enable PaaS (meta-PaaS?), when you combine them with the other elements of AWS’ offerings it sure as heck sounds like PaaS, regardless of what you call it.

In fact, a Twitter exchange with @GeorgeReese, @krishnan and @jamessaull well summarized the headscratching.

With all those components, AWS can certainly enable PaaS platforms like Heroku to “flourish.”

However, suggesting that having all the raw components but declining to point at them and say “PaaS” somehow changes what they are is like having all the components to assemble a bomb, not packaging it as such, and declaring it’s not dangerous because in that state it won’t go off.

I’d say the potential for going BOOM! is real.  It appears Marten Mickos was hinting at the same thing:

However, Mickos disputed Vogels’ claim that Amazon is going to let a thousand platforms bloom.

“He will always say that, and Amazon will slowly take a step higher and higher,” he said, before pointing to Beanstalk as an example. “[But] in my view PaaS has middleware components… and I could agree that it is okay to add [those] to an IaaS.”

In the long term, as I’ve stated prior, the value in platforms will be in how easy they make it for developers to create and deliver applications fluidly.

I may not be as good at marketing as some, but that sounds less like an infrastructure-centric business model and much more like an application-centric one.

Moving on up is where it’s at.  I saw the scratching on the cave walls when I wrote “Silent Lucidity: IaaS — Already A Dinosaur. The Evolution of PaaSasarus Rex” back in 2009.

What do you think?  Is AWS being coy?


QuickQuip: Don’t run your own data center if you’re a public IaaS < Sorta...

January 10th, 2012

Patrick Baillie, the CEO of Swiss IaaS provider, CloudSigma, wrote a very interesting blog published on GigaOm titled “Don’t run your own data center if you’re a public IaaS.”

Baillie leads off by describing how AWS’ recent outage is evidence of why the complexity of running facilities (data centers) is best segregated from the services running atop them and outsourced to third parties in the business of such things:

Why public IaaS cloud providers should outsource their data centers

While there are some advantages for cloud providers operating data centers in-house, including greater control, capacity, power and security, the challenges, such as geographic expansion, connectivity, location, cost and lower-tier facilities can often outweigh the benefits. In response to many of these challenges, an increasing number of cloud providers are realizing the benefits of working with a third-party data center provider.

It’s a very interesting blog, sprinkled throughout with the pros and cons of rolling your own versus outsourcing, but it falls down in carrying the logical burden of some of its assertions.

Perhaps I misunderstood, but the article seemed to focus on single DC availability as though (per my friend @CSOAndy’s excellent summarization) “…he missed the obvious reason: you can arbitrage across data centers” and “…was focused on single DC availability. Arbitrage means you just move your workloads automagically.”

I’ll let you read the setup in its entirety, but check out the conclusion:

In reality, taking a look at public cloud providers, those with legacy businesses in hosting, including Rackspace and GoGrid, tend to run their own facilities, whereas pure-play cloud providers, like my company CloudSigma, tend to let others run the data centers and host the infrastructure.

The business of operating a data center versus operating a cloud is very different, and it’s crucial for such providers to focus on their core competency. If a provider attempts to do both, there will be sacrifices and financial choices with regards to connectivity, capacity, supply, etc. By focusing on the cloud and not the data center, public cloud IaaS providers don’t need to make tradeoffs between investing in the data center over the cloud, thereby ensuring the cloud is continually operating at peak performance with the best resources available.

The points above were punctuated as part of a discussion on Twitter where @georgereese commented “IaaS is all about economies of scale. I don’t think you win at #cloud by borrowing someone else’s”

Fascinating.  It’s times like these that I invoke the wisdom of the Intertubes and ask “WWWD” (or What Would Werner Do?)

If we weren’t artificially limited in this discussion to IaaS only, it would have been interesting to compare this to SaaS providers like Google or Salesforce, or better yet folks like Zynga…or even add supporting examples like Heroku (who run atop AWS but are now a part of Salesforce o_O)

I found many of the points raised in the discussion intriguing and good food for thought but I think that if we’re talking about IaaS — and we leave out AWS which directly contradicts the model proposed — the logic breaks almost instantly…unless we change the title to “Don’t run your own data center if you’re a [small] public IaaS and need to compete with AWS.”

Interested in your views…

/Hoff
