Search Results

Keyword: ‘patching the cloud’

Redux: Patching the Cloud

September 23rd, 2009

Back in 2008 I wrote a piece titled “Patching the Cloud” in which I highlighted the issues associated with the black box ubiquity of Cloud and what that means to patching/upgrading processes:

Your application is sitting atop an operating system and underlying infrastructure that is managed by the cloud operator.  This “datacenter OS” may not be virtualized or could actually be sitting atop a hypervisor which is integrated into the operating system (Xen, Hyper-V, KVM) or perhaps reliant upon a third party solution such as VMware.  The notion of cloud implies shared infrastructure and hosting platforms, although it does not imply virtualization.

A patch affecting any one of the infrastructure elements could cause a ripple effect on your hosted applications.  Without understanding the underlying infrastructure dependencies in this model, how does one assess risk and determine what any patch might do up or down the stack?  How does an enterprise that has no insight into the “black box” model of the cloud operator set up a dev/test/staging environment that acceptably mimics the operating environment?

What happens when the underlying CloudOS gets patched (or needs to be) and blows your applications/VMs sky-high (in the PaaS/IaaS models)?

How does one negotiate the process for determining when and how a patch is deployed?  Where does the cloud operator draw the line?   If the cloud fabric is democratized across constituent enterprise customers, however isolated, how does a cloud provider ensure consistent distributed service?  If an application can be dynamically provisioned anywhere in the fabric, consistency of the platform is critical.

I followed this up with a practical example when Microsoft’s Azure services experienced a hiccup due to this very thing.  We also see wholesale changes that can be instantiated on a whim by Cloud providers and that can alter service functionality and service availability, such as this one from Google (Published Google Documents to appear in Google search) — have you thought this through?

So now, as we witness ISPs starting to build Cloud service offerings from common Cloud OS platforms and espouse the portability of workloads (*ahem* VMs) from “internal” Clouds to Cloud providers — and potentially multiple Cloud providers — what happens when the enterprise is at v3.1 of the Cloud OS, ISP A is at v2.1a and ISP B is at v2.9? Portability is a cruel mistress.
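
To make the skew problem concrete, below is a minimal sketch of the kind of pre-flight check a migration tool would need before moving a workload between platforms at different versions. The version strings and the “never land on an older platform” policy are illustrative assumptions on my part, not any vendor’s actual API:

    # Illustrative sketch only: the version strings and the migration
    # policy are assumptions, not a real Cloud OS interface.

    def parse_version(v):
        """Split a version like '2.1a' into a comparable tuple, e.g. (2, 1, 'a')."""
        parts = []
        for chunk in v.split("."):
            digits = "".join(c for c in chunk if c.isdigit())
            suffix = chunk[len(digits):]
            parts.append(int(digits))
            if suffix:
                parts.append(suffix)
        return tuple(parts)

    def safe_to_migrate(source_ver, target_ver):
        """Refuse to land a workload on a Cloud OS older than the one it left."""
        return parse_version(target_ver) >= parse_version(source_ver)

    platforms = {"enterprise": "3.1", "ISP A": "2.1a", "ISP B": "2.9"}
    for name in ("ISP A", "ISP B"):
        ok = safe_to_migrate(platforms["enterprise"], platforms[name])
        print(name + ": " + ("OK" if ok else "version skew -- hold the migration"))

Even a check this crude exposes the negotiation problem: somebody has to publish, and honor, those version numbers across providers.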

Pair that little nugget with the fact that even “global” Cloud providers such as Amazon Web Services have not maintained parity in terms of functionality/services across their regions*. The US has long had features/functions that the European region has not.  Today, in fact, AWS announced bringing infrastructure capabilities to parity for things like elastic load balancing and auto-scale…

It’s important to understand what happens when we squeeze the balloon.

/Hoff

*corrected – I originally said “availability zones” which was in error as pointed out by Shlomo in the comments. Thanks!

Azure Users Seeing Red: When Patching the Cloud Causes Cracks

March 19th, 2009

No, this isn’t one of those posts that suggests we can’t depend on the Cloud just because of one (ok, many) outages of note lately.  That’s so dystopic.  Besides, everyone else is already doing that.

I mean just because Azure was offline for 22 hours isn’t cause for that much concern, right?  It’s a beta community technology preview, anyway… 😉  Just like Google’s a beta.

What I found interesting, however, was what Microsoft reported as the root cause of the outage:

 

The Windows Azure Malfunction This Weekend

First things first: we’re sorry.  As a result of a malfunction in Windows Azure, many participants in our Community Technology Preview (CTP) experienced degraded service or downtime.  Windows Azure storage was unaffected.

In the rest of this post, I’d like to explain what went wrong, who was affected, and what corrections we’re making.

What Happened?

During a routine operating system upgrade on Friday (March 13th), the deployment service within Windows Azure began to slow down due to networking issues.  This caused a large number of servers to time out and fail.

You catch that bit about “…a routine operating system upgrade?”  Sometimes we call those things “patches.”  Even if this wasn’t a patch, let’s call it one for argument’s sake, okay?

As such, I was reminded of a blog post that I wrote last year titled “Patching the Cloud,” in which I squawked about my concerns regarding patching and change management/roll-back in Cloud services.  It seems apropos:

 

Your application is sitting atop an operating system and underlying infrastructure that is managed by the cloud operator.  This “datacenter OS” may not be virtualized or could actually be sitting atop a hypervisor which is integrated into the operating system (Xen, Hyper-V, KVM) or perhaps reliant upon a third party solution such as VMware.  The notion of cloud implies shared infrastructure and hosting platforms, although it does not imply virtualization.

A patch affecting any one of the infrastructure elements could cause a ripple effect on your hosted applications.  Without understanding the underlying infrastructure dependencies in this model, how does one assess risk and determine what any patch might do up or down the stack?  …

Huh.  Go figure.  

/Hoff

 

Patching The Cloud?

October 25th, 2008

Just to confuse you, as a lead-in on this topic, please first read my recent rant titled “Will You All Please Shut-Up About Securing THE Cloud…NO SUCH THING…”

Let’s say the grand vision comes to fruition where enterprises begin to outsource to a cloud operator the hosting of previously internal complex mission-critical enterprise applications.

After all, that’s what we’re being told is the Next Big Thing™.

In this version of the universe, the enterprise no longer owns the operational elements involved in making the infrastructure tick — the lights blink, packets get delivered, data is served up — and it costs less, for what is advertised as the same if not better reliability, performance and resilience.

Oh yes, “Security” is magically provided as an integrated functional delivery of service.

Tastes great, less datacenter filling.

So, in a corner case example, what does a boundary condition like the out-of-cycle patch release of MS08-067 mean when your infrastructure and applications are no longer yours to manage, and the ownership of the “stack” disintermediates you from being able to control how, when or even if vulnerability remediation anywhere in the stack (from the network on up to the app) is assessed, tested or deployed?

Your application is sitting atop an operating system and underlying infrastructure that is managed by the cloud operator.  This “datacenter OS” may not be virtualized or could actually be sitting atop a hypervisor which is integrated into the operating system (Xen, Hyper-V, KVM) or perhaps reliant upon a third party solution such as VMware.  The notion of cloud implies shared infrastructure and hosting platforms, although it does not imply virtualization.

A patch affecting any one of the infrastructure elements could cause a ripple effect on your hosted applications.  Without understanding the underlying infrastructure dependencies in this model, how does one assess risk and determine what any patch might do up or down the stack?  How does an enterprise that has no insight into the “black box” model of the cloud operator set up a dev/test/staging environment that acceptably mimics the operating environment?

What happens when the underlying CloudOS gets patched (or needs to be) and blows your applications/VMs sky-high (in the PaaS/IaaS models)?

How does one negotiate the process for determining when and how a patch is deployed?  Where does the cloud operator draw the line?   If the cloud fabric is democratized across constituent enterprise customers, however isolated, how does a cloud provider ensure consistent distributed service?  If an application can be dynamically provisioned anywhere in the fabric, consistency of the platform is critical.

I hate to get all “Star Trek II: The Wrath of Khan” on you, but as Spock said, “The needs of the many outweigh the needs of the few.”  How, when and if a provider might roll a patch has a broad impact across the entire customer base — as it has had in the hosting markets for years — but again, the types of applications we are talking about here are far different from what we’re used to today, where the applications and the infrastructure are inextricably joined at the hip.

Hosting/SaaS providers today can scale because of one thing: standardization.  Certainly COTS applications can be easily built on standardized tiered models for compute, storage and networking, but again, we’re being told that enterprises will move all their applications to the cloud, and that includes bespoke creations.

If that’s not the case, and we end up with still having to host some apps internally and some apps in the cloud, we’ve gained nothing (from a cost reduction perspective) because we won’t be able to eliminate the infrastructure needed to support either.

Taking it one step further, what happens if there is standardization on the underlying Cloud platform (CloudOS?) and one provider “patches” or updates their Cloud offering but another does not or cannot? If we ultimately talk about VM portability between providers running the “same” platform, what will this mean?  Will things break horribly or be instantiated in an insecure manner?

What about it?  Do you see cloud computing as just an extension of SaaS and hosting of today?  Do you see dramatically different issues arise based upon the types of information and applications that are being described in this model?  We’ve seen issues such as data ownership, privacy and portability bubble up, but these are much more basic operational questions.

This is obviously a loaded set of questions for which I have much to say — some of which is obvious — but I’d like to start a discussion, not a rant.

/Hoff

*This little ditty was inspired by a Twitter exchange with Bob Rudis who was complaining that Amazon’s EC2 service did not have the MS08-067 patch built into the AMI…Check out this forum entry from Amazon, however, as it’s rather apropos regarding the very subject of this blog…

Cloud Security Will NOT Supplant Patching…Qualys Has Its Head Up Its SaaS

May 4th, 2009

“Cloud Security Will Supplant Patching…”

What a sexy-sounding claim in this Network World piece, which is titled with the opposite of the suggestion in the title of my blog post.  We will still need patching.  I agree, however, that how it’s delivered needs to change.

Before we get to the issues I have, I do want to point out that the article — despite its title — is focused on the newest release of Qualys’ Laws of Vulnerability 2.0 report (PDF), which is the latest version of the Half-Lives of Vulnerability study that my friend Gerhard Eschelbeck started some years ago.

In the report, the new author, Qualys’ current CTO Wolfgang Kandek, delivers a really disappointing statistic:

In five years, the average time taken by companies to patch vulnerabilities had decreased by only one day, from 60 days to 59 days, at a time when the number of flaws and the speed at which they are being exploited has accelerated from weeks to, in some cases, days. During the same period, the number of IPs scanned on an anonymous basis by the company from its customer base had increased from 3 million to a statistically significant 80 million, with the number of vulnerabilities uncovered rocketing from 3 million to 680 million. Of the latter, 72 million were rated by Qualys as being of ‘critical’ severity.
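
As an aside, the “half-life” framing from the original study makes that 59-day figure easy to put in perspective. A quick back-of-the-envelope, where the 30-day half-life is purely an illustrative assumption while the 72M-of-680M ratio comes straight from the report:

    # Back-of-the-envelope on the half-life framing. The 30-day half-life
    # is an illustrative assumption; 72M of 680M comes from the report.

    def fraction_unpatched(days, half_life_days=30.0):
        # If half the exposed systems get patched every H days, this is
        # the fraction still exposed after 'days' days.
        return 0.5 ** (days / half_life_days)

    for t in (30, 59, 90):
        print("day %d: %.0f%% still unpatched" % (t, 100 * fraction_unpatched(t)))

    print("critical share of all findings: %.1f%%" % (100 * 72e6 / 680e6))

Under that assumption, roughly a quarter of exposed systems would still be unpatched at the 59-day mark — exactly the window attackers exploiting flaws in days, not weeks, get to enjoy.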

That lack of progress is sobering, right? So far I’m intrigued, but then that article goes off the reservation by quoting Wolfgang as saying:

Taken together, the statistics suggested that a new solution would be needed in order to make further improvement with the only likely candidate on the horizon being cloud computing. “We believe that cloud security providers can be held to a higher standard in terms of security,” said Kandek. “Cloud vendors can come in and do a much better job.”  Unlike corporate admins for whom patching was a sometimes complex burden, in a cloud environment, patching applications would be more technically predictable – the small risk of ‘breaking’ an application after patching it would be nearly removed, he said.

Qualys has its head up its SaaS.  I mean that in the most polite of ways… 😉

Let me make a couple of important observations on the heels of those I’ve already made and an excellent one Lori MacVittie made today in her post titled “The Real Meaning Of Cloud Security Revealed”:

  1. I’d like a better definition of the context of “patching applications.”  I don’t know whether Kandek means applications in the enterprise, applications hosted by a Cloud provider, or both.
  2. There’s a difference between providing security services via the Cloud and securing the Cloud and its applications/data.  The quotes above mix the issues.  A “Cloud Security” provider like Qualys can absolutely provide excellent solutions to many of the problems we have today associated with point-product deployments of security functions across the enterprise. Anti-spam and vulnerability management are excellent examples.  What that does not mean is that the applications that run in an enterprise can be delivered and deployed more “securely” thanks to the efforts of the same providers.
  3. To that point, the Cloud is not all SaaS-based.  Not every application is going to be or can be moved to a SaaS.  Patching legacy applications (or hosting them for that matter) can be extremely difficult.  Virtualization certainly comes into play here, but by definition, that’s an IaaS/PaaS opportunity, not a SaaS one.
  4. While SaaS providers who do “own the entire stack” are in a better position through consolidated multi-tenancy to transfer the responsibility of patching “their” infrastructure and application(s) on your behalf, it doesn’t really mean they do it any better on an application-by-application basis.  If a SaaS provider only has 1-2 apps to manage (with lots of customers) versus an enterprise with hundreds (and lots of customers), the “quality” measurements as they relate to defect management (from any perspective) would likely look better were you the competent SaaS vendor mentioned in this article.  You can see my point here.
  5. If you add in PaaS and IaaS as opposed to simply SaaS (as managed by a third party), then the statement that “…patching applications would be more technically predictable – the small risk of ‘breaking’ an application after patching it would be nearly removed” is false.

It’s really, really important to compare apples to apples here. Qualys is a fantastic company with a visionary leader in Philippe Courtot.  I was an early adopter of his SaaS service.  I was on his Customer Advisory Board.  However, as I pointed out to him at the Jericho event where I was a panelist, delivering a security function via the Cloud is not the same thing as securing the Cloud, and SaaS is merely one piece of the puzzle.

I wrote a couple of other blogs about this topic:

/Hoff

Incomplete Thought: Virtual/Cloud Security and The Potemkin Village Syndrome

August 16th, 2012

A “Potemkin village” is a Russian expression derived from folklore of the 1700s.  The story goes something like this: Grigory Potemkin, a military leader and statesman, erected attractive but completely fake settlements constructed only of facades to impress Catherine the Great (empress of Russia) during a state visit, in order to gain favor and otherwise hype the value of recently subjugated territories.

I’ll get to that (and probably irate comments from actual Russians who will chide me for my hatchet job on their culture…)

Innovation in technology over the last decade has brought fundamental shifts in the way in which we work, live, and play. In the last four years, the manner in which technology products and services are enabled by this “digital supply chain,” and the manner in which they are designed, built and brought to market, have also pivoted.

Virtualization and Cloud computing — the technologies and operational models — have contributed greatly to this.

Interestingly enough, the faster technology evolves, the more lethargic, fragile and fractured security seems to be.

This can be explained in a few ways.

First, the trust models, architecture and operational models surrounding how we’ve “done” security simply are not designed to absorb this much disruption so quickly.  The fact that we’ve relied on physical segregation and static policies that combine locality and service definition, while mobility and (now) highly dynamic application deployment options take hold, means that we’re simply disconnected.

Secondly, fragmentation and specialization within security means that we have no cohesive, integrated or consistent approach in terms of how we define or instantiate “security,” and so customers are left to integrate disparate solutions at multiple layers (think physical and/or virtual firewalls, IDP, DLP, WAF, AppSec, etc.)  What services and “hooks” the operating systems, networks and provisioning/orchestration layers offer largely dictates what we can do using the skills and “best practices” we already have.

Lastly, the (un)natural market consolidation behavior wherein aspiring technology startups are acquired and absorbed into larger behemoth organizations means that innovation cycles in security quickly become victims of stunted periodicity, reduced focus on solving specific problems, cultural subduction and artificially constrained scope, based on P&L models that are detached from reality and customers, and out of step with trends that end up driving more disruption.

I’ve talked about this process as part of the “Security Hamster Sine Wave of Pain.”  It’s not a malicious or evil plan on behalf of vendors to conspire to not solve your problems, it’s an artifact of the way in which the market functions — and is allowed to function.

What this yields is that when new threat models, evolving vulnerabilities and advanced adversarial skill sets are paired with massively disruptive approaches and technology “conquests,” the security industry basically erects facades of solutions, obscuring the fact that in many cases there’s not only a lacking foundation for the house of cards we’ve built, but interestingly, not much more to it than that.

Again, this isn’t a plan masterminded by a consortium of industry “Dr. Evils.”  Actually, it’s quite simple: It’s inertial…if you keep buying it, they’ll keep making it.

We are suffering then from the security equivalent of the Potemkin Village syndrome; our efforts are largely built to impress people who are mesmerized by pretty facades but don’t take the time to recognize that there’s really nothing there.  Those building it, while complicit, find it quite hard to change.

Until the revolution comes.

To wit, we have hardworking members of the proletariat, toiling away behind the scenes struggling to add substance and drive change in the way in which we do what we do.

Adding to this is the good news that those two aforementioned “movements” — virtualization and cloud computing — are exposing the facades for what they are and we’re now busy shining the light on unstable foundations, knocking over walls and starting to build platforms that are fundamentally better suited to support security capabilities rather than simply “patching holes.”

Most virtualization and IaaS cloud platforms are still woefully lacking the native capabilities or interfaces to build security in, but that’s the beauty of platforms (as a service), as you can encourage more “universally” the focus on the things that matter most: building resilient and survivable systems, deploying secure applications, and identifying and protecting information across its lifecycle.

Realistically this is a long view and it is going to take a few more cycles on the Hamster Wheel to drive true results.  It’s frankly less about technology and rather largely a generational concern with the current ruling party who governs operational security awaiting deposition, retirement or beheading.

I’m looking forward to more disruption, innovation and reconstruction.  Let’s fix the foundation and deal with hanging pictures later.  Redecorating security is for the birds…or dead Russian royalty.

/Hoff


Security As A Service: “The Cloud” & Why It’s a Net Security Win

March 19th, 2012

If you’ve been paying attention to the rash of security startups entering the market today, you will no doubt notice the theme wherein the majority of them are, from the get-go, organizing around deployment models which operate from “The Cloud.”

We can argue that “Security as a service” usually refers to security services provided by a third party using the SaaS (software as a service) model, but there’s a compelling set of capabilities that enables companies large and small to be effective, efficient and cost-manageable as we embrace the “new” world of highly distributed applications, content and communications (cloud and mobility combined).

As with virtualization, when one discusses “security” and “cloud computing,” three distinct perspectives often are conflated (from my post “Security: In the Cloud, For the Cloud & By the Cloud…”):

In the same way that I differentiated “Virtualizing Security, Securing Virtualization and Security via Virtualization” in my Four Horsemen presentation, I ask people to consider these three models when discussing security and Cloud:

  1. In the Cloud: Security (products, solutions, technology) instantiated as an operational capability deployed within Cloud Computing environments (up/down the stack).  Think virtualized firewalls, IDP, AV, DLP, DoS/DDoS, IAM, etc.
  2. For the Cloud: Security services that are specifically targeted toward securing OTHER Cloud Computing services, delivered by Cloud Computing providers (see next entry).  Think cloud-based Anti-spam, DDoS, DLP, WAF, etc.
  3. By the Cloud: Security services delivered by Cloud Computing services which are used by providers in option #2, and which often rely on those features described in option #1.  Think, well…basically any service these days that brands itself as Cloud… ;)

What I’m talking about here is really item #3; security “by the cloud,” wherein these services utilize any cloud-based platform (SaaS, PaaS or IaaS) to deliver security capabilities on behalf of the provider or ultimate consumer of services.

For the SMB/SME/Branch, one can expect a hybrid model of on-premises physical (multi-function) devices that also incorporate some sort of redirect or offload to these cloud-based services. Frankly, the same model works for the larger enterprise, but in many cases regulatory issues and privacy/IP concerns arise.  This is where “private” (or dedicated) versions of these services are requested (either on-premises or off, but dedicated).
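
As a sketch of that hybrid split, consider the following toy policy; the categories and the endpoint name are hypothetical, and the point is only the decision, not any product’s configuration:

    # Hypothetical policy sketch for the hybrid model: regulated traffic
    # stays on the on-premises device, the rest offloads to a cloud-based
    # scrubbing service. All names here are made up.

    ON_PREM_CATEGORIES = {"regulated-data", "internal-app"}
    CLOUD_SCRUB = "scrub.cloudsec.example:443"  # hypothetical endpoint

    def route(flow_category):
        if flow_category in ON_PREM_CATEGORIES:
            return "inspect locally (privacy/IP constraints)"
        return "redirect to " + CLOUD_SCRUB

    for cat in ("web-browsing", "regulated-data", "saas-email"):
        print(cat, "->", route(cat))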

Service providers see a large opportunity to finally deliver value-added, scalable and revenue-generating security services atop what they offer today.  This is the realized vision of the long-awaited “clean pipes” and “secure hosting” capabilities.  See this post from 2007: “Clean Pipes – Less Sewerage or More Potable Water?”

If you haven’t noticed your service providers dipping their toes here, you certainly have seen startups (and larger security players) do so.  Here are just a few examples:

  • Qualys
  • Trend Micro
  • Symantec
  • Cisco (Ironport/ScanSafe)
  • Juniper
  • CloudFlare
  • ZScaler
  • Incapsula
  • Dome9
  • CloudPassage
  • Porticor
  • …and many more

As many vendors “virtualize” their offerings and start to realize that through basic networking, APIs, service chaining, traffic steering and security intelligence/analytics these solutions become more scalable, leverageable and interoperable, the services you’ll be able to consume will also increase…and they will become more application- and information-centric in nature.
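
If “service chaining” sounds abstract, the essence fits in a few lines. A toy sketch, where the functions are stubs standing in for real anti-spam/DLP services rather than actual products:

    # Toy service chain: steer a message through an ordered list of
    # security functions, each of which may pass or drop it.

    def anti_spam(msg):
        return None if "lottery" in msg["body"].lower() else msg

    def dlp(msg):
        return None if "ssn:" in msg["body"].lower() else msg

    def chain(msg, services):
        for svc in services:
            msg = svc(msg)
            if msg is None:        # a link in the chain dropped it
                return None
        return msg

    result = chain({"body": "Quarterly report attached"}, [anti_spam, dlp])
    print("delivered" if result else "dropped")

The interesting engineering is everything around that loop: steering the traffic into it, keeping state, and doing it at line rate.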

Again, this doesn’t mean the disappearance of on-premises or host-based security capabilities, but you should expect the cloud (and its derivative offshoots like Big Data) to deliver some really awesome hybrid security capabilities that make your life easier.  Rich Mogull (@rmogull) and I gave about 20 examples of this in our “Grilling Cloudicorns: Mythical CloudSec Tools You Can Use Today” talk at RSA last month.

Get ready because while security folks often eye “The Cloud” suspiciously, it also offers up a set of emerging solutions that will undoubtedly allow for more efficient, effective and affordable security capabilities that will allow us to focus more on the things that matter.

/Hoff


Patching the (Hypervisor) Platform: How Do You Manage Risk?

April 12th, 2010

Hi. Me again.

In 2008 I wrote a blog titled “Patching the Cloud,” which I followed up with material examples in 2009 in another titled “Redux: Patching the Cloud.”

These blogs focused mainly on virtualization-powered IaaS/PaaS offerings, and whilst they targeted “Cloud Computing,” they applied equally to the heavily virtualized enterprise.  To this point, I wrote another in 2008 titled “On Patch Tuesdays For Virtualization Platforms.”

The operational impacts of managing change control, vulnerability management and threat mitigation have always intrigued me, especially at scale.

I was reminded this morning of the importance of the question posed above as VMware released a series of security advisories detailing ten vulnerabilities across many products, some of which are remotely exploitable. While security vulnerabilities in hypervisors are not new, it’s unclear to me how many heavily-virtualized enterprises or Cloud providers actually deal with what it means to patch this critical layer of infrastructure.

Once virtualized, we expect/assume that VMs and the guest OSes within them should operate with functional equivalence when compared to non-virtualized instances. We have, however, seen that this is not the case. It’s rare, but it happens that OSes and applications, once virtualized, suffer from issues that cause faults in the underlying virtualization platform itself.

So here’s the $64,000 question – feel free to answer anonymously:

While virtualization is meant to effectively isolate the hardware from the resources atop it, the VMM/Hypervisor itself maintains a delicate position arbitrating this abstraction.  When the VMM/Hypervisor needs patching, how do you regression test the impact across all your VM images (across test/dev, production, etc.)?  More importantly, how are you assessing/measuring compound risk across shared/multi-tenant environments with respect to patching and its impact?
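
For what it’s worth, here is the shape of the regression matrix that question implies, sketched in Python; boot_on() and run_smoke_tests() are hypothetical placeholders for whatever your virtualization platform’s tooling actually provides:

    # Sketch: every image x environment combination re-tested against the
    # patched VMM before the patch rolls to production. The two helpers
    # below are hypothetical placeholders, not a real API.

    PATCHED_HOST = "canary-host-01"   # isolated host running the new VMM build
    IMAGES = ["web-tier-v12", "db-tier-v7", "legacy-erp-v3"]
    ENVIRONMENTS = ["dev", "test", "prod-clone"]

    def boot_on(host, image, env):
        return True   # placeholder: call your platform's provisioning API

    def run_smoke_tests(image, env):
        return True   # placeholder: boots? serves traffic? no VMM faults?

    results = {}
    for image in IMAGES:
        for env in ENVIRONMENTS:
            results[(image, env)] = (boot_on(PATCHED_HOST, image, env)
                                     and run_smoke_tests(image, env))

    blockers = [combo for combo, ok in results.items() if not ok]
    print("holding patch for:", blockers if blockers else "nothing -- clear to roll")

Now multiply those two lists by a real fleet’s image count and a provider’s tenant count, and the compound-risk question starts to answer itself.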

/Hoff

P.S. It occurs to me that after I wrote the blog last night on ‘high assurance (read: TPM-enabled)’ virtualization/cloud environments with respect to change control, the reference images for trusted launch environments would be impacted by patches like this. How are we going to scale this from a management perspective?
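
Concretely: every VMM patch obsoletes the old “known good” measurements, so the whitelist has to be re-baselined and pushed out before hosts reboot into the new build. A sketch, with hashing standing in for TPM measurement and hypothetical file paths:

    # Sketch: re-baseline an attestation whitelist after a VMM patch.
    # SHA-256 of the image stands in for a TPM-style measurement, and the
    # paths are hypothetical.

    import hashlib

    PATCHED_BUILDS = ["/patches/vmm-build-1184.bin", "/patches/vmm-build-1190.bin"]

    def measure(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(65536), b""):
                h.update(block)
        return h.hexdigest()

    # This map must reach every attestation verifier *before* hosts reboot
    # into the patched build, or trusted launch fails fleet-wide.
    whitelist = {path: measure(path) for path in PATCHED_BUILDS}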


To Achieve True Cloud (X/Z)en, One Must Leverage Introspection

January 6th, 2010

Back in October 2008, I wrote a post detailing efforts around the Xen community to create a standard security introspection API (Xen.Org Launches Community Project To Bring VM Introspection to Xen):

The Xen Introspection Project is a community effort within Xen.org to leverage the existing research presented above with other work not yet public to create a standard API specification and methodology for virtual machine introspection.

That blog was focused on introspection for virtualization proper, but since many of the larger cloud providers utilize Xen virtualization as an underpinning of their service architecture, and since as an industry we’re suffering from a lack of visibility and deployable security capabilities, VM and VMM introspection is quite relevant to cloud computing.

I thought I’d double back and see where we are.

It looks as though there’s been quite a bit of recent activity from the folks at Georgia Tech (XenAccess Project) and the University of Alaska at Fairbanks (Virtual Introspection for Xen) referenced in my previous blog.  The vCloud API proffered via the DMTF seems to also leverage (at least some of) the VMsafe API capabilities present in VMware‘s vSphere virtualization platform.

While details are, for obvious reasons, sketchy, I am encouraged in speaking to representatives from a few cloud providers who are keenly interested in including these capabilities in their offerings.  Wouldn’t that be cool?

Adoption and inclusion of introspection capabilities will overcome some of the inherent security and visibility limitations we face in highly-virtualized multi-tenant environments due to networking constraints for integrating security functionality that I wrote about here.
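
One concrete payoff, sketched below, is cross-view validation: comparing what the guest reports about itself with what the VMM sees from outside. The process lists are hard-coded stand-ins for real introspection output, not any actual API:

    # Toy cross-view validation. Divergence between the guest's own view
    # and the VMM's introspected view suggests something inside the guest
    # is hiding. Both sets are hard-coded stand-ins for real output.

    agent_view = {"init", "sshd", "httpd"}         # what the guest admits to
    vmm_view = {"init", "sshd", "httpd", "xyzzy"}  # walked from guest memory

    hidden = vmm_view - agent_view
    if hidden:
        print("processes hidden from the guest's own tools:", hidden)

No agent inside the guest means no agent for the attacker to subvert, which is precisely what makes the approach attractive.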

I plan a follow-on blog in more detail once I finish some interviews.

/Hoff


The Six Worst Cloud Security Mistakes? I Can Do You One Better…

June 6th, 2009

I recently read a story from Kelly Jackson Higgins of Dark Reading outlining what are described as the “Six Worst Cloud Security Mistakes”:

  1. Assuming the cloud is less secure than your data center.
  2. Not verifying, testing, or auditing the security of your cloud-based service provider.
  3. Failing to vet your cloud provider’s viability as a business.
  4. Assuming you’re no longer responsible for securing data once it’s in the cloud.
  5. Putting insecure apps in the cloud and expecting that to make them more secure.
  6. Having no clue that your business units are already using some cloud-based services.

A very interesting list, for sure, and a reasonable set of potential “mistakes” to ponder, but I’m really having trouble with one in particular.

The one that’s getting my goose honking is #1: Assuming the cloud is less secure than your data center.

Really? I maintain that this generalization about Cloud being more or less secure (in regards to one’s own capabilities) is a silly thing to argue; let’s see why.

We start off with what I think is a strange bit of contradiction:

It’s only natural for security pros to be control freaks. Being charged with securing a company’s data and intellectual property requires a healthy dose of paranoia and protectionism. But sometimes that leads to false impressions about cloud security. “One common mistake is that as soon as you talk about the cloud, [organizations] assume it’s less secure than their own IT security operation,” says Chenxi Wang, principal analyst at Forrester Research. “More control does not necessarily lead to more security.”

Assuming that one of the reasons a company might consider outsourcing its IT security operations to a third-party [Cloud] provider IS the fact that the provider has more control, or at least control equal to what the company can provide itself, it occurs to me this sort of statement can be interpreted many ways.  Here’s one, for example.

I find myself confused by the highlighted sentence regarding control and security within the context of what is written.  In fact, if you read the next paragraph, it seems to imply that because a Cloud provider has more control, they can offer better security:

In fact, with services such as Google’s SaaS, data loss is less likely because the information is accessible from anywhere and anytime without saving it to an easily lost or stolen USB stick or CD, according to Eran Feigenbaum, director of security for Google Apps. And Google’s security-patching process is more streamlined than a typical enterprise because its server architecture is homogeneous, he says. “Many attacks [come from a] lack of patch management and server misconfiguration…For Google, when the time comes to patch, we can do so across the entire platform in a uniform fashion,” he said.

I’ll say it again: SaaS is a convenient way of dumbing down “Cloud Computing” to a singular instance/application/service, but it completely ignores Platform and Infrastructure as a Service offerings, which are wildly different animals, especially from a security perspective.  Please see my latest commentary about this in my response to Bruce Schneier’s equation of SaaS with Cloud Computing to the exclusion of PaaS/IaaS.

I’ve made the point before that comparing managing/patching a single application and its supporting infrastructure in a SaaS offering to an enterprise that would otherwise have to support not only that service but potentially hundreds more is a completely unfair comparison.  If you want to compare apples to apples, I’d maintain that any organization with a mature security program whose only charter was to support (securely) a single application could do it just as well as a SaaS provider, all other things being equal.

The differences here become scale and multi-tenancy in the case of the Cloud provider; I think these issues actually make a Cloud environment more difficult to secure.

Also, suggesting with the Google example that “data loss is less likely” because it’s “accessible from anywhere” and doesn’t involve “…lost or stolen USB stick(s) or CD(s)” seems an awfully arbitrary one given the fact that one of the most interesting data loss/leakage incidents in recent Cloud history came from Google’s Docs offering due to an operator (Google) system misconfiguration.  USB sticks and CDs are also a very narrow definition of data loss/leakage.

Then there’s the more global view SaaS and other cloud providers have, Feigenbaum says. “As an enterprise, you only see a small slice of what’s affecting you [threat-wise],” Feigenbaum said during a panel on cloud security at the RSA Conference in April. “A cloud provider can have the economy of scale for a holistic vision…the cloud shifts security and also makes it better,” he said.

I don’t have anything to argue about here; a wider perspective and better visibility is a good thing.  Again, however, this depends upon the type of service, what is being monitored and protected, on behalf of whom and from whom.

But that doesn’t mean you should blindly trust your cloud provider, though the larger ones do tend to have a better handle on threats due to their size, Forrester’s Wang says. “These people deal with security issues at more complex levels than your own IT team sees on a daily basis,” Wang says. “It’s a misconception to say cloud security is definitely less capable or more problematic.”

No, you shouldn’t blindly trust your providers, but that last statement suggests we should similarly trust that providers do a better job and deal with security issues at more complex levels?  What does that even mean? Please do NOT tell me that a SAS 70 Type II is your answer.  Just as “It’s a misconception to say cloud security is definitely less capable or more problematic,” I can just as easily suggest the converse is true without evidence.

I would like to see the empirical data that backs that set of statements up and the common metrics I can use to measure across providers and enterprises alike.  Thought so.

Thus far, security has been one of the main hurdles to adoption of cloud-based services, says Michelle Dennedy, chief governance officer for cloud computing at Sun Microsystems. “Trust in the cloud, more than technical abilities, has been hindering adoption,” Dennedy says. “But the cloud can be more secure than a private environment in many cases.”

Michelle is definitely correct; trust represents a fundamental issue with Cloud adoption, and it rolls both ways.  Asking us to “trust but verify” when what we’re being asked to verify can’t easily be trusted poses a very difficult scenario indeed.

By the way, I think the worst Cloud Security mistake is not knowing what Cloud Security even means.

/Hoff

The Cloud Is a Fickle Mistress: DDoS&M…

April 2nd, 2009

It’s interesting to see how people react when they are reminded that the “Cloud” still depends upon much of the same infrastructure and underlying protocols that we have been using for years.

BGP, DNS, VPNs, routers, switches, firewalls…

While it’s fun to talk about new attack vectors and sexy exploits, it’s the oldies and goodies that will come back to haunt us:

Simplexity

Building more and more of our business’s ability to remain a going concern on infrastructure that was never designed to support it is a scary proposition.  We’re certainly being afforded more opportunity to fix some of these problems as the technology improves, but it’s a patching solution to an endemic problem, I’m afraid.  We’ve got two ways to look at Cloud:

  • Skipping over the problems we have and “fixing” crappy infrastructure and applications by simply adding mobility and orchestration to move around an issue, or
  • Actually starting to use Cloud as a forcing function to fundamentally change the way we think about, architect, deploy and manage our computing capabilities in a more resilient, reliable and secure fashion

If I were a betting man…

Remember that just because it’s in the “Cloud” doesn’t mean someone’s sprinkled magic invincibility dust on your AppStack…

That web service still has IP addresses and open sockets. It still gets transported over MANY levels of shared infrastructure, from the telcos to the DNS infrastructure…you’re always at someone else’s mercy.

Dan Kaminsky has done a fabulous job reminding us of that.

A more poignant reminder of our dependency on the Same Old Stuff™ is the recent DDoS attacks against Cloud provider GoGrid:

ONGOING DDoS ATTACK

Our network is currently the target of a large, distributed DDoS attack that began on Monday afternoon.   We took action all day yesterday to mitigate the impact of the attack, and its targets, so that we could restore service to GoGrid customers.  Things were stabilized by 4 PM PDT and most customer servers were back online, although some of you continued to experience intermittent loss in network connectivity.

This is an unfortunate thing.  It’s also a good illustration of the sorts of things you ought to ask your Cloud service providers about.  With whom do they peer?  What is their bandwidth?  How many datacenters do they have, and where?  What DoS/DDoS countermeasures do they have in place?  Have they actually dealt with this before?  Do they drill disaster scenarios like this?

We’re told we shouldn’t have to worry about the underlying infrastructure with Cloud, that it’s abstracted and someone else’s problem to manage…until it’s not.

This is where engineering, architecture and security meet the road.  Your provider’s ability to sustain an attack like this is critical.  Further, how you’ve designed your BCP/DR contingency plans is pretty important, too.  Until we get true portability/interoperability between Cloud providers, it’s still up to you to figure out how to make this all work.  Remember that when you’re assuming those TCO calculations accurately reflect reality.
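
At a minimum, “figuring it out” looks something like the sketch below: your own health checks against your own endpoints at each provider, with a failover decision you control. The URLs are hypothetical, and the actual traffic swing (a DNS update, a GSLB call) is whatever your setup provides:

    # Minimal BCP sketch: poll the primary provider, swing to the standby
    # when it stops answering. URLs are hypothetical.

    import urllib.request

    PRIMARY = "https://app.provider-a.example/healthz"
    STANDBY = "https://app.provider-b.example/healthz"

    def healthy(url, timeout=5.0):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status == 200
        except OSError:
            return False

    def choose_active():
        if healthy(PRIMARY):
            return PRIMARY
        # Primary down (outage? DDoS?): fail over and page a human.
        return STANDBY if healthy(STANDBY) else PRIMARY  # nowhere good to go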

Big providers like eBay, Amazon, and Microsoft invest huge sums of money and manpower to ensure they are as survivable as they can be during attacks like this.  Do you?  Does your Cloud provider?  How many providers do you have?

Again, even Amazon goes down.  At this point, it’s largely been operational issues on their end and not the result of a massive attack. Imagine, however, if someday it is.  What would that mean to you?

As more and more of our applications and information are moved from inside our networks to beyond the firewalls and exposed to a larger audience (or even co-mingled with others’ data) the need for innovation and advancement in security is only going to skyrocket to start to deal with many of these problems.

/Hoff
