
Security As A Service: “The Cloud” & Why It’s a Net Security Win

March 19th, 2012

If you’ve been paying attention to the rash of security startups entering the market today, you will no doubt notice the theme wherein the majority of them are, from the get-go, organizing around deployment models which operate from “The Cloud.”

We can argue that “Security as a service” usually refers to security services provided by a third party using the SaaS (software as a service) model, but there’s a compelling set of capabilities that enables companies large and small to be effective, efficient and cost-manageable as we embrace the “new” world of highly distributed applications, content and communications (cloud and mobility combined.)

As with virtualization, when one discusses “security” and “cloud computing,” the three perspectives are often conflated (from my post “Security: In the Cloud, For the Cloud & By the Cloud…”):

In the same way that I differentiated “Virtualizing Security, Securing Virtualization and Security via Virtualization” in my Four Horsemen presentation, I ask people to consider these three models when discussing security and Cloud:

  1. In the Cloud: Security (products, solutions, technology) instantiated as an operational capability deployed within Cloud Computing environments (up/down the stack.) Think virtualized firewalls, IDP, AV, DLP, DoS/DDoS, IAM, etc.
  2. For the Cloud: Security services that are specifically targeted toward securing OTHER Cloud Computing services, delivered by Cloud Computing providers (see next entry). Think cloud-based Anti-spam, DDoS, DLP, WAF, etc.
  3. By the Cloud: Security services delivered by Cloud Computing services which are used by providers in option #2 and which often rely on those features described in option #1.  Think, well…basically any service these days that brands itself as Cloud… ;)

What I’m talking about here is really item #3; security “by the cloud,” wherein these services utilize any cloud-based platform (SaaS, PaaS or IaaS) to deliver security capabilities on behalf of the provider or ultimate consumer of services.

For the SMB/SME/Branch, one can expect a hybrid model of on-premises physical (multi-function) devices that also incorporate some sort of redirect or offload to these cloud-based services. Frankly, the same model works for the larger enterprise, but in many cases regulatory, privacy and IP concerns arise.  This is where “private” (or dedicated) versions of these services are requested (either on-premises or off, but dedicated.)
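To make the hybrid model above concrete, here is a minimal, purely illustrative sketch of the decision an on-premises multi-function device might make: keep privacy/IP-sensitive traffic local, offload commodity traffic to the cloud service. The traffic classes and device names are invented for illustration, not any vendor’s actual configuration.

```python
# Illustrative policy for the hybrid on-prem/cloud security model:
# sensitive traffic stays on the local appliance, the rest is offloaded.
CLOUD_OFFLOAD = {"web", "email"}            # commodity traffic -> cloud service
KEEP_ON_PREM = {"payroll", "patient_data"}  # privacy/IP-sensitive -> local device

def route_for_inspection(traffic_class: str) -> str:
    """Decide where a traffic class gets its security controls applied."""
    if traffic_class in KEEP_ON_PREM:
        return "on-prem-appliance"
    if traffic_class in CLOUD_OFFLOAD:
        return "cloud-security-service"
    return "on-prem-appliance"  # unknown traffic defaults to the conservative choice

print(route_for_inspection("web"))           # cloud-security-service
print(route_for_inspection("patient_data"))  # on-prem-appliance
```

In practice this decision would live in the device’s policy engine (or a PAC file/redirect rule), but the shape of the trade-off is the same.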

Service providers see a large opportunity to finally deliver value-added, scalable and revenue-generating security services atop what they offer today.  This is the realized vision of the long-awaited “clean pipes” and “secure hosting” capabilities.  See this post from 2007 “Clean Pipes – Less Sewerage or More Potable Water?”

If you haven’t noticed your service providers dipping their toes here, you certainly have seen startups (and larger security players) do so.  Here are just a few examples:

  • Qualys
  • Trend Micro
  • Symantec
  • Cisco (Ironport/ScanSafe)
  • Juniper
  • CloudFlare
  • ZScaler
  • Incapsula
  • Dome9
  • CloudPassage
  • Porticor
  • …and many more

As many vendors “virtualize” their offerings and realize that, through basic networking, APIs, service chaining, traffic steering and security intelligence/analytics, these solutions become more scalable, leverageable and interoperable, the services you’ll be able to consume will also increase…and they will become more application- and information-centric in nature.
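The “service chaining” idea above can be sketched in a few lines: each security function exposes the same simple interface and a message is passed down the chain until one of them renders a verdict. The functions here are toy stand-ins (not any vendor’s API), but they show why a common interface makes chained services composable.

```python
# Toy service chain: each function takes a message dict and may set a verdict.
def antispam(msg):
    if "viagra" in msg["body"].lower():
        msg["verdict"] = "spam"
    return msg

def dlp(msg):
    if "ssn:" in msg["body"].lower():
        msg["verdict"] = "blocked-dlp"
    return msg

def run_chain(msg, chain):
    """Steer a message through an ordered chain of security services."""
    for service in chain:
        msg = service(msg)
        if msg.get("verdict"):   # short-circuit once any service decides
            break
    msg.setdefault("verdict", "clean")
    return msg

result = run_chain({"body": "Quarterly report attached"}, [antispam, dlp])
print(result["verdict"])  # clean
```

Swapping, reordering or adding services is just editing the chain list, which is exactly the interoperability property the paragraph describes.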

Again, this doesn’t mean the disappearance of on-premises or host-based security capabilities, but you should expect the cloud (and its derivative offshoots like Big Data) to deliver some really awesome hybrid security capabilities that make your life easier.  Rich Mogull (@rmogull) and I gave about 20 examples of this in our “Grilling Cloudicorns: Mythical CloudSec Tools You Can Use Today” at RSA last month.

Get ready, because while security folks often eye “The Cloud” suspiciously, it also offers up a set of emerging solutions that will undoubtedly enable more efficient, effective and affordable security capabilities, allowing us to focus more on the things that matter.

/Hoff


AwkwardCloud: Here’s Hopin’ For Open

February 14th, 2012

MAKING FRIENDS EVERYWHERE I GO…

There’s no way to write this without making it seem like I’m attacking the person whose words I am about to stare rudely at, squint and poke out my tongue.

No, it’s not @reillyusa, featured to the right.  But that expression about sums up my motivation.

Because this ugly game of “Words With Friends” is likely to be received as though I’m at odds with what represents the core marketing message of a company, I think I’m going to be voted off the island.

Wouldn’t be the first time.  Won’t be the last.  It’s not personal.  It’s just cloud, bro.

This week at Cloud Connect, @randybias announced that his company, Cloudscaling, is releasing a new suite of solutions branded under the marketing moniker of “Open Cloud.”

I started to explore my allergy to some of these message snippets as they were strategically “leaked” last week in a most unfortunate Twitter exchange.  I promised I would wait until the actual launch to comment further.

This is my reaction to the website, press release and blog only.  I’ve not spoken to Randy.  This is simply my reaction to what is being placed in public.  It’s not someone else’s interpretation of what was said.  It’s straight from the Cloud Pony’s mouth. ;p

GET ON WITH IT THEN!

“Open Cloud” is described as a set of solutions for those looking to deploy clouds that provide “… better economics, greater flexibility, and less lock-in, while maintaining control and governance” than so-called Enterprise Clouds that are based on what Randy tags as more proprietary foundations.

The case is made where enterprises will really want to build two clouds: one to run legacy apps and one to run purpose-built cloud-ready applications.  I’d say that enterprises that have a strategy are likely looking forward to using clouds of both models…and probably a few more, such as SaaS and PaaS.

This is clearly a very targeted solution which looks to replicate AWS’ model for enterprises or SPs who are looking to exercise more control over the fate of their infrastructure.  How much runway this buys against the onslaught of PaaS and SaaS will play out.

I think it’s a reasonable bet there’s quite a bit of shelf life left on IaaS and I wonder if we’ll see follow-on generations to focus on PaaS.

Yet I digress…

This is NOT going to be a rant about the core definition of “Open,” (that’s for Twitter) nor is this going to be one of those 40 pagers where I deconstruct an entire blog.  It would be fun, easy and rather useful, but I won’t.

No. Instead I will suggest that the use of the word “Open” in this press release is nothing more than opportunistic marketing, capitalizing on other recent uses of the Open* prefix such as “OpenCompute, OpenFlow, Open vSwitch, OpenStack, etc.” and is a direct shot across the bow of other companies that have released similar solutions in the recent past (Cloud.com, Piston, Nebula).

If we look at what makes up “Open Cloud,” we discover it is framed upon four key solution areas and supported by design blueprints, support and services:

  1. Open Hardware
  2. Open Networking
  3. Open APIs
  4. Open Source Software

I’m not going to debate the veracity or usefulness of some of these terms directly, but we’ll come back to them as a reference in a second, especially the notion of “open hardware.”

The one thing that really stuck in my craw was the manufactured criteria that somehow defined the so-called “litmus tests” associated with “Enterprise” versus “Open” clouds.

Randy suggests that if you are doing more than half of the items in the left-hand column you’re using a cloud built with “enterprise computing technology,” whereas if the same holds true for the right-hand column, you’re using an “open” cloud.

So here’s the thing.  Can you explain to me what spinning up 1,000 VMs in less than 5 minutes has to do with being “open?”  Can you tell me what competing with AWS on price has to do with being “open?” Can you tell me how Hadoop performance has anything to do with being “open?”  Why does using two third-party companies’ management services define “open?”

Why on earth does the complexity or simplicity of networking stacks define “openness?”

Can you tell me how, if Cloudscaling’s “Open Cloud” uses certified hardware from “name brand” vendors like Arista, it is in any way more “open” than an alternative solution using Cisco?

Can you tell me if “Open Cloud” is more “open” than Piston Cloud which is also based upon OpenStack but also uses specific name-brand hardware to run?  If “Open Cloud” is “open,” and utilizes open source, can I download all the source code?

These are simply manufactured constructs which do little service toward actually pointing out the real business value of the solution and instead cloak the wolf in the “open” sheep’s clothing.  It’s really unfortunate.

The end of my rant here is that by co-opting the word “open,” this takes a perfectly reasonable approach born of a company’s experience in building a well-sorted, (supposedly more) economical and supportable set of cloud solutions and ruins it by letting its karma get run over by its dogma.

Instead of focusing on the merits of the solution as a capable building block for building plain better clouds, this reads like a manifesto which may very well turn people off.

Am I being unfair in calling this out?  I don’t think so.  Would some prefer a private conversation over a beer to discuss?  Most likely.  However, there’s a disconnect here and it stems from pushing public a message and marketing a set of solutions that I hope will withstand the scrutiny of this A-hole with a blog.

Maybe I’m making a mountain out of a molehill…

Again, I’m not looking to pick on Cloudscaling.  I think the business model and the plan is solid as is evidenced by their success to date.  I wish them nothing but success.

I just hope that what comes out the other end is being “open” to consider a better adjective and more useful set of criteria to define the merits of the solution.

/Hoff


Building/Bolting Security In/On – A Pox On the Audit Paradox!

January 31st, 2012

My friend and skilled raconteur Chris Swan (@cpswan) wrote an excellent piece a few days ago titled “Building security in – the audit paradox.”

This thoughtful piece was constructed in order to point out the challenges involved in providing auditability, visibility, and transparency in services — specifically cloud computing — in which the notion of building in or bolting on security is debated.

I think this is timely.  I have thought about this a couple of times, with one piece aligned heavily with Chris’ thoughts.

Chris’ discussion really contrasted the delivery/deployment models against the availability and operationalization of controls:

  1. If we’re building security in, then how do we audit the controls?
  2. Will platform as a service (PaaS) give us a way to build security in such that it can be evaluated independently of the custom code running on it?

Further, as part of some good examples, he points out the notion that with separation of duties, the ability to apply “defense in depth” (hate that term,) and the ability to respond to new threats, the “bolt-on” approach is useful — if not siloed:

There lies the issue – bolt on security is easy to audit. There’s a separate thing, with a separate bit of config (administered by a separate bunch of people) that stands alone from the application code.

…versus building secure applications:

Code security is hard. We know that from the constant stream of vulnerabilities that get found in the tools we use every day. Auditing that specific controls implemented in code are present and effective is a big problem, and that is why I think we’re still seeing so much bolting on rather than building in.

I don’t disagree with this at all.  Code security is hard.  People look for gap-fillers.  The notion that Chris finds limited options for bolting security on versus integrating security (building it in) programmatically as part of the security development lifecycle leaves me a bit puzzled.

This identifies both the skills and cultural gap between where we are with security and how cloud changes our process, technology, and operational approaches but there are many options we should discuss.

Thus what was interesting (read: what I disagree with) is what came next, wherein Chris maintained that one “can’t bolt on in the cloud”:

One of the challenges that cloud services present is an inability to bolt on extra functionality, including security, beyond that offered by the service provider. Amazon, Google etc. aren’t going to let me or you show up to their data centre and install an XML gateway, so if I want something like schema validation then I’m obliged to build it in rather than bolt it on, and I must confront the audit issue that goes with that.

While it’s true that CSPs may not enable/allow you to show up to their DC and “…install an XML gateway,” they are pushing the security deployment model toward virtual networking hooks, the guest-based approach within the VMs, and leveraging both the security and service models of cloud itself to solve these challenges.

I allude to this below, but as an example, there are now cloud services which can sit “in-line” or in conjunction with your cloud application deployments and deliver security as a service…application, information (and even XML) security as a service are here today and ramping!
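To ground Chris’ schema-validation example, here is a toy sketch of the “build it in” alternative: validation logic living inside the application itself rather than in a bolted-on gateway. It uses only the stdlib and checks for required elements; a real deployment would do proper XSD validation (e.g. via lxml), so treat this as a simplified stand-in.

```python
# Minimal "built-in" input validation: the app itself checks inbound XML
# instead of relying on an external (bolt-on) XML gateway.
import xml.etree.ElementTree as ET

REQUIRED = {"account", "amount"}  # illustrative schema: fields a payment must carry

def validate_payment(xml_text: str) -> bool:
    """Return True only if the XML parses and carries all required elements."""
    try:
        root = ET.fromstring(xml_text)
    except ET.ParseError:
        return False
    present = {child.tag for child in root}
    return REQUIRED <= present  # all required tags must be present

print(validate_payment("<payment><account>42</account><amount>10</amount></payment>"))  # True
print(validate_payment("<payment><account>42</account></payment>"))                     # False
```

This is exactly the kind of control that is hard to audit from outside the code, which is Chris’ point: the bolt-on gateway version of the same check is a separate, inspectable thing.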

While immature and emerging in some areas, I offer the following suggestions that the “bolt-on” approach is very much alive and kicking.  Given that “code security” is hard, the cloud providers harden/secure their platforms, but the app stacks that get deployed by the customers…those are the customers’ concern, and here are some options:

  1. Introspection APIs (VMsafe)
  2. Security as a Service (Cloudflare, Dome9, CloudPassage)
  3. Auditing frameworks (CloudAudit, STAR, etc)
  4. Virtual networking overlays & virtual appliances (vGW, VSG, Embrane)
  5. Software defined networking (Nicira, BigSwitch, etc.)

Yes, some of them are platform specific and I think Chris was mostly speaking about “Public Cloud,” but “bolt-on” options are most certainly available and are aggressively evolving.

I totally agree that from the PaaS/SaaS perspective, we are poised for many wins that can eliminate entire classes of vulnerabilities as the platforms themselves enforce better security hygiene and assurance BUILT IN.  This is just as emerging as the BOLT ON solutions I listed above.

In a prior post, “Silent Lucidity: IaaS – Already a Dinosaur. Rise of PaaSasarus Rex,” I wrote:

As I mention in my Cloudifornication presentation, I think that from a security perspective, PaaS offers the potential of eliminating entire classes of vulnerabilities in the application development lifecycle by enforcing sanitary programmatic practices across the derivative works built upon them.  I look forward also to APIs and standards that allow for consistency across providers. I think PaaS has the greatest potential to deliver this.

There are clearly trade-offs here, but as we start to move toward the two key differentiators (at least for public clouds) — management and security — I think the value of PaaS will really start to shine.

My opinion is that given the wide model of integration between various delivery and deployment models, we’re gonna need both for quite some time.

Back to Chris’ original point: the notion that auditors will in any way be able to easily audit code-based (built-in) security at the APPLICATION layer or the PLATFORM layer versus the bolt-on layer really rests at the whim of the skillset of the auditors themselves and the checklists they use, which call out how one is audited:

Infrastructure as a service shows us that this can be done e.g. the AWS firewall is very straightforward to configure and audit (without needing to reveal any details of how it’s actually implemented). What can we do with PaaS, and how quickly?

This is a very simplistic example (more infrastructure versus applistructure perspective) but represents the very interesting battleground we’ll be entrenched in for years to come.
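The AWS firewall example above is easy to audit precisely because the rules are data you can fetch and inspect without seeing the implementation. A minimal sketch of that kind of infrastructure audit: flag any security-group rule open to the world on a non-HTTPS port. The rule format below is a simplified, invented stand-in loosely inspired by what a provider API such as EC2’s DescribeSecurityGroups returns; in practice you would fetch the real data via the provider’s API.

```python
# Audit a list of firewall/security-group rules (as plain data) for
# world-open ingress on anything other than 443.
def audit_rules(groups):
    findings = []
    for g in groups:
        for rule in g["ingress"]:
            if rule["cidr"] == "0.0.0.0/0" and rule["port"] != 443:
                findings.append((g["name"], rule["port"]))
    return findings

groups = [
    {"name": "web", "ingress": [{"cidr": "0.0.0.0/0", "port": 443}]},
    {"name": "db",  "ingress": [{"cidr": "0.0.0.0/0", "port": 3306}]},
]
print(audit_rules(groups))  # [('db', 3306)]
```

Try writing the equivalent check for a control implemented inside application code and the audit paradox becomes obvious: there is no data structure to inspect, only code to read.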

In the related posts below, you’ll see I’ve written a bunch about this and am working toward ensuring that as really smart folks work to build it in, the ecosystem is encouraged to provide bolt ons to fill those gaps.

/Hoff

Related articles


With Cloud, The PaaSibilities Are Endless…

January 26th, 2012

I read a very interesting article from ZDNet UK this morning titled “Amazon Cuts Off Stack at the PaaS.”

The gist of the article is that according to Werner Vogels (@werner,) AWS’ CTO, they have no intention of delivering a PaaS service and instead expect to allow an ecosystem of PaaS providers, not unlike Heroku, to flourish atop their platform:

“We want 1,000 platforms to bloom,” said Vogels, before explaining Amazon has “no desire to go and really build a [PaaS].”

That’s all well and good, but it led me to scratch my head, especially with regard to what I *thought* AWS already offered in terms of PaaS with Elastic Beanstalk, which is described thusly in their FAQ:

Q: What is AWS Elastic Beanstalk?

AWS Elastic Beanstalk makes it even easier for developers to quickly deploy and manage applications in the AWS Cloud. Developers simply upload their application, and Elastic Beanstalk automatically handles the deployment details of capacity provisioning, load balancing, auto-scaling, and application health monitoring.

Q: How is AWS Elastic Beanstalk different from existing application containers or platform-as-a-service solutions?

Most existing application containers or platform-as-a-service solutions, while reducing the amount of programming required, significantly diminish developers’ flexibility and control. Developers are forced to live with all the decisions pre-determined by the vendor – with little to no opportunity to take back control over various parts of their application’s infrastructure. However, with AWS Elastic Beanstalk, developers retain full control over the AWS resources powering their application. If developers decide they want to manage some (or all) of the elements of their infrastructure, they can do so seamlessly by using AWS Elastic Beanstalk’s management capabilities.

While these snippets from the FAQ certainly seem to describe infrastructure components that enable PaaS (meta-PaaS?), when you combine them with the other elements of AWS’ offerings, it sure as heck sounds like PaaS regardless of what you call it.

In fact, a Twitter exchange with @GeorgeReese, @krishnan and @jamessaull well summarized the headscratching:

With all those components, AWS can certainly enable PaaS platforms like Heroku to “flourish.”

However, suggesting that despite having all the raw components AWS isn’t offering PaaS is like having all the components to assemble a bomb, not packaging it as such, and declaring it isn’t dangerous because in that state it won’t go off.

I’d say the potential for going BOOM! is real.  It appears Marten Mickos was hinting at the same thing:

However, Mickos disputed Vogels’ claim that Amazon is going to let a thousand platforms bloom.

“He will always say that, and Amazon will slowly take a step higher and higher,” he said, before pointing to Beanstalk as an example. “[But] in my view PaaS has middleware components… and I could agree that it is okay to add [those] to an IaaS.”

In the long term, as I’ve stated prior, the value in platforms will be in how easy they make it for developers to create and deliver applications fluidly.

I may not be as good at marketing as some, but that sounds less like an infrastructure-centric business model and much more like an application-centric one.

Moving on up is where it’s at.  I saw the scratching on the cave walls when I wrote “Silent Lucidity: IaaS — Already A Dinosaur. The Evolution of PaaSasarus Rex” back in 2009.

What do you think?  Is AWS being coy?


Microsoft Azure Going “Down Stack,” Adding IaaS Capabilities. AWS/VMware WAR!

February 4th, 2010

It’s very interesting to see that now that infrastructure-as-a-service (IaaS) players like Amazon Web Services are clawing their way “up the stack” and adding more platform-as-a-service (PaaS) capabilities, Microsoft is going “down stack” and providing IaaS capabilities by way of adding RDP and VM capabilities to Azure.

From Carl Brooks’ (@eekygeeky) article today:

Microsoft is expected to add support for Remote Desktops and virtual machines (VMs) to Windows Azure by the end of March, and the company also says that prices for Azure, now a baseline $0.12 per hour, will be subject to change every so often.

Prashant Ketkar, marketing director for Azure, said that the service would be adding Remote Desktop capabilities as soon as possible, as well as the ability to load and run virtual machine images directly on the platform. Ketkar did not give a date for the new features, but said they were the two most requested items.

This move begins a definite trend away from the original concept for Azure in design and execution. It was originally thought of as a programming platform only: developers would write code directly into Azure, creating applications without even being aware of the underlying operating system or virtual instances. It will now become much closer in spirit to Amazon Web Services, where users control their machines directly. Microsoft still expects Azure customers to code for the platform and not always want hands on control, but it is bowing to pressure to cede control to users at deeper and deeper levels.

One major reason for the shift is that there are vast arrays of legacy Windows applications users expect to be able to run on a Windows platform, and Microsoft doesn’t want to lose potential customers because they can’t run applications they’ve already invested in on Azure. While some users will want to start fresh, most see cloud as a way to extend what they have, not discard it.

This sets the path to allow those enterprise customers running Hyper-V internally to take those VMs and run them on (or in conjunction with) Azure.

Besides the obvious competition with AWS in the public cloud space, there’s also a private cloud element. As it stands now, one of the primary differentiators for VMware from the private-to-public cloud migration/portability/interoperability perspective is the concept that if you run vSphere in your enterprise, you can take the same VMs without modification and move them to a service provider who runs vCloud (based on vSphere.)

This is a very interesting and smart move by Microsoft.

/Hoff


Follow-On: The Audit, Assertion, Assessment, and Assurance API (A6)

August 16th, 2009

Update 2/1/10: The A6 effort is in full-swing.  You can find out more about it at the Google Groups here.

A few weeks ago I penned a blog post discussing an idea I presented at a recent Public Sector Cloud gathering that later inherited the name “Audit, Assertion, Assessment, and Assurance API (A6).”

The case for A6 is straightforward:

…take the capabilities of something like SCAP and embed a standardized and open API layer into each IaaS, PaaS and SaaS offering [Ed: At the API layer of each deployment model] to provide not only a standardized way of scanning for network vulnerabilities, but also configuration management, asset management, patch remediation, compliance, etc.

This way you win two ways: automated audit and security management capability for the customer/consumer, and a streamlined, cost-effective, and responsive way of automating the validation of said controls in relation to compliance, SLA and legal requirements for service providers.
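To give a feel for what an A6-style query could look like from the consumer side, here is a rough sketch: ask a provider’s audit endpoint whether a named control is asserted and get back a machine-readable answer. The control names, endpoint path and response shape are all invented for illustration; Ben’s draft documentation (linked below) describes the actual conceptual REST interface.

```python
# Stand-in for data a provider's hypothetical A6 endpoint might serve.
PROVIDER_ASSERTIONS = {
    "patch-management":  {"asserted": True,  "last_validated": "2009-08-01"},
    "network-vuln-scan": {"asserted": False, "last_validated": None},
}

def query_assertion(control_id: str) -> dict:
    """Mimic a GET /a6/assertions/<control_id> call against a provider."""
    record = PROVIDER_ASSERTIONS.get(control_id)
    if record is None:
        return {"control": control_id, "status": "unknown"}
    return {
        "control": control_id,
        "status": "asserted" if record["asserted"] else "failed",
        **record,
    }

print(query_assertion("patch-management")["status"])  # asserted
```

The point is not the plumbing but the contract: a standardized, queryable assertion beats emailing the provider a 300-question spreadsheet.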

Much discussion ensued on Twitter and via email/blogs explaining A6 in better detail and with more specificity.

The idea has since grown legs and I’ve started to have some serious discussions with “people” (*wink wink*) who are very interested in making this a reality, especially in light of business and technical use cases bubbling to the surface of late.

To that end, Ben (@ironfog) has taken the conceptual mumblings and begun work on a RESTful interface for A6. You can find the draft documentation here.  You can find his blog and awesome work on making A6 a reality here.  Thank you so much, Ben.

NOTE: The documentation/definitions below are conceptual and stale. I’ve left them here because they are important and relevant but are likely not representative of the final work product.

A6 API Documentation – Draft 0.11

I’m thinking of pulling together a more formalized working group for A6 and push hard with some of those “people” above to get better definition around its operational realities as well as understand the best way to create an open and extensible standard going forward.

If you’re interested in participating, please contact me ( choff @ packetfilter . com ) and let’s capitalize on the momentum, need and fortuitous timing to make A6 work.

Thanks,

/Hoff
