
Archive for the ‘Cloud Security’ Category

AWS’ New Networking Capabilities – Sucking Less ;)

March 15th, 2011 1 comment

I still haven’t had my coffee and this is far from being complete analysis, but it’s pretty darned exciting…

One of the biggest challenges facing public Infrastructure-as-a-Service cloud providers has been balancing the flexibility and control of their networking capabilities against those present in traditional data center environments.

I’m not talking about complex L2/L3 configurations or converged data/storage networking topologies; I’m speaking of basic addressing and edge functionality (routing, VPN, firewall, etc.)  Furthermore, interconnecting public cloud compute/storage resources in a private, non-Internet-facing role to a corporate datacenter has been less than easy.

Today Jeff Barr asploded another of his famous blog announcements which goes a long way toward solving not only these two issues, but also clearly puts AWS on track to keep pressing VMware on the overlay networking capabilities present in its vCloud Director vShield Edge/App model.

The press release (and Jeff’s blog) were a little confusing because they really focus on VPC, but the reality is that this runs much, much deeper.

I rather liked Shlomo Swidler’s response on Twitter to that same comment of mine 🙂

This announcement is fundamentally about the underlying networking capabilities of EC2:

Today we are releasing a set of features that expand the power and value of the Virtual Private Cloud. You can think of this new collection of features as virtual networking for Amazon EC2. While I would hate to be innocently accused of hyperbole, I do think that today’s release legitimately qualifies as massive, one that may very well change the way that you think about EC2 and how it can be put to use in your environment.

The features include:

  • A new VPC Wizard to streamline the setup process for a new VPC.
  • Full control of network topology including subnets and routing.
  • Access controls at the subnet and instance level, including rules for outbound traffic.
  • Internet access via an Internet Gateway.
  • Elastic IP Addresses for EC2 instances within a VPC.
  • Support for Network Address Translation (NAT).
  • Option to create a VPC that does not have a VPN connection.

You can now create a network topology in the AWS cloud that closely resembles the one in your physical data center including public, private, and DMZ subnets. Instead of dealing with cables, routers, and switches you can design and instantiate your network programmatically. You can use the AWS Management Console (including a slick new wizard), the command line tools, or the APIs. This means that you could store your entire network layout in abstract form, and then realize it on demand.
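
To put the “store your entire network layout in abstract form, and then realize it on demand” bit in concrete terms, here’s a rough sketch of what that might look like from Python using the boto library. The CIDR ranges are made up and I’m assuming a boto build whose VPC module has caught up with today’s announcement, so treat this as the shape of the thing rather than gospel:

    from boto.vpc import VPCConnection

    conn = VPCConnection()  # credentials come from the environment/boto config

    # The topology, in abstract form...
    vpc = conn.create_vpc('10.0.0.0/16')
    public = conn.create_subnet(vpc.id, '10.0.0.0/24')   # DMZ/public subnet
    private = conn.create_subnet(vpc.id, '10.0.1.0/24')  # stays non-Internet-facing

    # ...realized on demand: an Internet Gateway and a default route,
    # associated with the public subnet only.
    igw = conn.create_internet_gateway()
    conn.attach_internet_gateway(igw.id, vpc.id)
    rt = conn.create_route_table(vpc.id)
    conn.create_route(rt.id, '0.0.0.0/0', gateway_id=igw.id)
    conn.associate_route_table(rt.id, public.id)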

That’s pretty bad-ass and goes a long way toward giving enterprises a not-so-gentle kick in the butt regarding getting over the lack of network provisioning flexibility.  This will also shine when combined with the VMDK import capabilities — which are admittedly limited today in terms of preserving networking configuration.  Check out Christian Reilly’s great post “AWS – A Wonka Surprise” regarding how the VMDK-Import and overlay networking elements collide.  This gets right to the heart of what we were discussing.

Granted, I have not dug deeply into the routing capabilities (support for dynamic protocols, multiple next-hop gateways, etc.) or how this maps (if at all) to VLAN configurations. Shlomo did comment that there are limitations around VPC today that are pretty significant: “AWS VPC gotcha: No RDS, no ELB, no Route 53 in a VPC,” “AWS VPC gotcha: multicast and broadcast still doesn’t work inside a VPC,” and “No Spot Instances, no Tiny Instances (t1.micro), and no Cluster Compute Instances (cc1.*).” Still, it’s an awesome first step toward answering the pleas I highlighted in my blog titled “Dear Public Cloud Providers: Please Make Your Networking Capabilities Suck Less. Kthxbye.”

Thank you, Santa. 🙂

On Twitter, Dan Glass’ assessment was concise, more circumspect and slightly less enthusiastic — though I’m not exactly sure I’d describe my own reaction as bordering on fanboi:

…to which I responded that clearly there is room for improvement in L3+ and security.  I expect we’ll see some 😉

In the long term, regardless of how this was framed from an announcement perspective, AWS’ VPC as a standalone “offer” should just go away – it will just become another networking configuration option.

While many of these capabilities are basic in nature, it just shows that AWS is paying attention to the fact that if it wants enterprise business, it’s going to have to start offering service capabilities that make the transition to its platform more like what enterprises are used to.

Great first step.

Now, about security…while outbound filtering via ACLs is cool and all…call me.

/Hoff

P.S. As you’ll likely see emerging in the comments, there are other interesting solutions to this overlay networking/connectivity problem – CohesiveFT and CloudSwitch come to mind…


Incomplete Thought: Cloud Capacity Clearinghouses & Marketplaces – A Security/Compliance/Privacy Minefield?

March 11th, 2011 2 comments

With my focus on cloud security, I’m always fascinated when derivative business models arise that take the issues associated with “mainstream” cloud adoption and really bring issues of security, compliance and privacy into even sharper focus.

To wit, Enomaly recently launched SpotCloud – a Cloud Capacity Clearinghouse & Marketplace in which cloud providers can sell idle compute capacity and consumers may purchase said capacity based upon “…location, cost and quality.”

Got a VM-based workload?  Want to run it cheaply for a short period of time?

…Have any security/compliance/privacy requirements?

To me, “quality” means something different than simply availability…it means service levels, security, privacy, transparency and visibility.

While one can select the geographic location where a VM will run, as part of offering an “opaque inventory” the identity of the cloud provider is not disclosed.  This begs the question of how the suppliers are vetted and assessed for security, compliance and privacy.  According to the SpotCloud FAQ, the answer is only a vague “We fully vet all market participants.”

There are two very interesting question/answer pairings on the SpotCloud FAQ that relate to security and service availability:

How do I secure my SpotCloud VM?

User access to VM should be disabled for increased security. The VM package is typically configured to automatically boot, self configure itself and phone home without the need for direct OS access. VM examples available.

Are there any SLA’s, support or guarantees?

No, to keep the costs as low as possible the service is offered without any SLA, direct support or guarantees. We may offer support in the future. Although we do have a phone and are more than happy to talk to you…

:: shudder ::
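
To be fair, the “phone home” pattern the FAQ describes is simple enough. Here’s a minimal, hypothetical sketch of a boot-time check-in; the endpoint and payload are mine for illustration, not SpotCloud’s:

    import json
    import socket
    import urllib2

    CONTROLLER = 'https://ops.example.com/checkin'  # your own service

    # The VM boots with no interactive access, figures out who it is,
    # and checks in; the controller pushes work to it from there.
    payload = json.dumps({'host': socket.gethostname(), 'status': 'booted'})
    req = urllib2.Request(CONTROLLER, payload,
                          {'Content-Type': 'application/json'})
    urllib2.urlopen(req)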

For now, I would assume that this means that if your workloads are at all mission critical, sensitive, subject to compliance requirements or traffic in any sort of sensitive data, this sort of exchange option may not be for you. I don’t have data on the use cases for the workloads being run using SpotCloud, but perhaps we’ll see Enomaly make this information more available as time goes on.

I would further assume that the criteria for provider selection might be expanded to include certification, compliance and security capabilities — all the more reason for these providers to consider something like CloudAudit which would enable them to provide supporting materials related to their assertions. (*wink*)

To be clear, from a marketplace perspective, I think this is a really nifty idea — sort of the cloud-based SETI-for-cost version of the Mechanical Turk.  It takes the notion of “utility” and really makes one think of the options.  I remember thinking the same thing when Zimory launched their marketplace in 2009.

I think ultimately this further amplifies the message that we need to build survivable systems, write secure code and continue to place an emphasis on the security of information deployed using cloud services. Duh-ja vu.

This sort of use case also begs an interesting set of questions as to what these monolithic apps are intended to provide — surely they traffic in some sort of information, and that information comes from somewhere?  The oft-touted, massively scalable compute “front-end” overlay of public cloud oftentimes means that the scale-out architectures leveraged to deliver service connect back to something else…

You likely see where this is going…

At any rate, I think these marketplace offerings will, for the foreseeable future, serve a specific type of consumer trafficking in specific types of information/service — it’s yet another vertical service offering that cloud can satisfy.

What do you think?

/Hoff


App Stores: From Mobile Platforms To VMs – Ripe For Abuse

March 2nd, 2011 4 comments

This CNN article titled “Google pulls 21 apps in Android malware scare” describes an alarming trend in which malicious code is embedded in applications which are made available for download and use on mobile platforms:

Google has just pulled 21 popular free apps from the Android Market. According to the company, the apps are malware aimed at getting root access to the user’s device, gathering a wide range of available data, and downloading more code to it without the user’s knowledge.

Although Google has swiftly removed the apps after being notified (by the ever-vigilant “Android Police” bloggers), the apps in question have already been downloaded by at least 50,000 Android users.

The apps are particularly insidious because they look just like knockoff versions of already popular apps. For example, there’s an app called simply “Chess.” The user would download what he’d assume to be a chess game, only to be presented with a very different sort of app.

Wow, 50,000 downloads.  Most of those folks are likely blissfully unaware they are owned.

In my Cloudifornication presentation, I highlighted that the same potential for abuse exists for “virtual appliances” which can be uploaded for public consumption to app stores and VM repositories such as those from VMware and Amazon Web Services:

The feasibility of this vector was deftly demonstrated shortly afterward by the guys at SensePost (Clobbering the Cloud, Blackhat), who ran the experiment of uploading a non-malicious “phone home” VM to AWS that was promptly downloaded and launched…

This is going to be a big problem in the mobile space and potentially just as impactful in cloud/virtual datacenters, as people routinely download and put into production virtual machines/virtual appliances whose provenance and integrity are questionable.  Who’s going to police these stores?
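
In the meantime, about the best a consumer can do is verify integrity out of band. A minimal sketch, assuming (generously) that the image author publishes a checksum through a trusted channel; the file name and digest here are made up:

    import hashlib

    EXPECTED = 'ad7f...'  # hypothetical digest, published out of band

    def sha256(path):
        # Hash the image in chunks so large appliances don't eat RAM.
        h = hashlib.sha256()
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(1024 * 1024), b''):
                h.update(chunk)
        return h.hexdigest()

    if sha256('appliance.vmdk') != EXPECTED:
        raise SystemExit('integrity check failed -- do not deploy')

Note that this only proves the bits weren’t tampered with in transit; it says nothing about whether the author was malicious in the first place.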

(update: I loved Christian Reilly’s comment on Twitter regarding this: “Using a public AMI is the equivalent of sharing a syringe”)

/Hoff


Video Of My CSA Presentation: “Commode Computing: Relevant Advances In Toiletry & I.T. – From Squat Pots to Cloud Bots – Waste Management Through Security Automation”

February 19th, 2011 13 comments

This is probably my favorite presentation I have given.  It was really fun.  I got so much positive feedback on what amounts to a load of crap. 😉

This video is from the Cloud Security Alliance Summit at the 2011 RSA Security Conference in San Francisco.  I followed Marc Benioff from Salesforce and Vivek Kundra, CIO of the United States.

Here is a PDF of the slides if you are interested.

Part 1:

Part 2:


My Warm-Up Acts at the RSA/Cloud Security Alliance Summit Are Interesting…

February 8th, 2011 2 comments

Besides a panel or two and another circus-act talk with Rich Mogull, I’m thrilled to be invited to present again at the Cloud Security Alliance Summit at RSA this year.

One of my previous keynotes at a CSA event was well received: Cloudersize – A cardio, strength & conditioning program for a firmer, more toned *aaS

Normally when negotiating to perform at such a venue, I have my people send my diva list over to the conference organizers.  You know, the normal stuff: only red M&M’s, Tupac walkout music, fuzzy blue cashmere slippers and Hoffaccinos on tap in the green room.

This year, understanding we are all under pressure given the tough economic climate, I relaxed my requirements and instead took a deal for a couple of ace warm-up speakers to goose the crowd prior to my arrival.

Here’s who Jim managed to scrape up:

9:00AM – 9:40AM // Keynote: “Cloud 2: Proven, Trusted and Secure”
Speaker: Marc Benioff, CEO, Salesforce.com

9:40AM – 10:10AM // Keynote: Vivek Kundra, CIO, White House

10:10AM – 10:30AM // Presentation: “Commode Computing: Relevant Advances In Toiletry – From Squat Pots to Cloud Bots – Waste Management Through Security Automation”
Presenting: Christofer Hoff, Director, Cloud & Virtualization Solutions, Cisco Systems

I guess I can’t complain 😉

See you there. Bring rose petals and Evian as token gifts to my awesomeness, won’t you?

/Hoff


CloudPassage & Why Guest-Based Footprints Matter Even More For Cloud Security

February 1st, 2011 4 comments

Every day for the last week or so since their launch, I’ve been asked left and right whether I’d spoken to CloudPassage and what my opinion is of their offering.  In full disclosure, I spoke with them when they were in stealth almost a year ago and offered some guidance, as well as the day before their launch last week.

Disappointing as it may be to some, this post isn’t really about my opinion of CloudPassage directly; it is, however, the reaffirmation of the deployment & delivery models for the security solution that CloudPassage has employed.  I’ll let you connect the dots…

Specifically, in public IaaS clouds where homogeneity of packaging, standardization of images and uniformity of configuration enable scale, security has lagged.  This is mostly due to the fact that, for a variety of reasons, security itself does not scale (well.)

In an environment where the underlying platform cannot be counted upon to provide “hooks” to integrate security capabilities in at the “network” level, all that’s left is what lies inside the VM packaging:

  1. Harden and protect the operating system (and thus the stuff atop it),
  2. Write secure applications, and
  3. Enforce strict, policy-driven, information-centric security.

My last presentation, “Cloudinomicon: Idempotent Infrastructure, Building Survivable Systems and Bringing Sexy Back to Information Centricity” addressed these very points. [This one is a version I delivered at the University of Michigan Security Summit]

If we focus on the first item in that list, you’ll notice that generally, to effect policy in the guest, you must have a footprint on said guest — however thin — to provide the hooks needed either to directly effect policy or to redirect back to some engine that offloads this functionality.  There’s a bit of marketing fluff associated with using the word “agentless” in many applications of this methodology today, but at some point the endpoint needs some sort of “agent” to play.*
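
For illustration, the pattern looks something like this: a toy sketch of a thin guest agent pulling firewall policy from an off-box “engine” and effecting it locally. The endpoint and policy format are hypothetical, the shape of the pattern rather than anyone’s actual product:

    import json
    import subprocess
    import time
    import urllib2

    POLICY_URL = 'https://grid.example.com/policy'  # the offloaded engine

    def apply_rules(rules):
        subprocess.check_call(['iptables', '-F', 'INPUT'])  # start clean
        # e.g. rules = [{"proto": "tcp", "port": "22", "action": "ACCEPT"}]
        for r in rules:
            subprocess.check_call(['iptables', '-A', 'INPUT',
                                   '-p', r['proto'], '--dport', r['port'],
                                   '-j', r['action']])

    while True:
        apply_rules(json.load(urllib2.urlopen(POLICY_URL)))
        time.sleep(60)  # thin, but it's still a footprint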

So that’s where we are today.  The abstraction offered by virtualized public IaaS cloud platforms is pushing us back to the guest-centric-based models of yesteryear.

This will bring challenges with scale, management, efficacy, policy convergence between physical and virtual, and the overall API-driven telemetry required by true cloud solutions.

You can read more about this in some of my other posts on the topic:

Finally, since I used them for eyeballs, please do take a look at CloudPassage — their first (free) offerings are based upon leveraging small-footprint Linux agents and a cloud-based SaaS “grid” to provide vulnerability management and firewall/zoning in public cloud environments.

/Hoff

* There are exceptions to this rule depending upon *what* you’re trying to do, such as anti-malware offload via a hypervisor API, but this is not generally available to date in public cloud.  This will, I hope, one day soon change.


Revisiting Virtualization & Cloud Stack Security – Back to the Future (Baked In Or Bolted On?)

January 17th, 2011 No comments

[Like a good w[h]ine, this post goes especially well with a couple of other courses such as Hack The Stack Or Go On a Bender With a Vendor?, Incomplete Thought: Why Security Doesn’t Scale…Yet, What’s The Problem With Cloud Security? There’s Too Much Of It…, Incomplete Thought: The Other Side Of Cloud – Where The (Wild) Infrastructure Things Are… and Where Are the Network Virtual Appliances? Hobbled By the Virtual Network, That’s Where…]

There are generally three schools of thought when it comes to the notion of how much security should fall to the provider of the virtualization or cloud stack versus the consumer of their services or a set of third parties:

  1. The virtualization/cloud stack provider should provide a rich tapestry of robust security capabilities “baked in” to the platform itself, or
  2. The virtualization/cloud stack provider should provide security-enabling hooks to enable an ecosystem of security vendors to provide the bulk of security (beyond isolation) to be “bolted on,” or
  3. The virtualization/cloud stack provider should maximize the security of the underlying virtualization/cloud platform and focus only on API security, isolation and availability of service, while pushing the bulk of security up into the higher-level programmatic/application layers.

So where are we today?  How much security does the stack, itself, need to provide?  The answer, however polarized, is somewhere in the murkiness dictated by the delivery models, deployment models, who owns what part of the real estate, and the use cases of both the virtualization/cloud stack provider and, ultimately, the consumer.

I’ve had a really interesting series of debates with the likes of Simon Crosby (of Xen/Citrix fame) on this topic, and we even had a great debate at RSA with Steve Herrod from VMware.  These two “infrastructure” companies and their solutions typify the diametrically opposed first two approaches to answering this question, while cloud providers who own their respective custom-rolled “stacks” at either end of the IaaS and SaaS spectrums, such as Amazon Web Services and Salesforce, bring up the third.

As with anything, this is about the tenuous balance of “security,” compliance, cost, core competence and maturity of solutions coupled with the sensitivity of the information that requires protection and the risk associated with the lopsided imbalance that occurs in the event of loss.

There’s no single best answer, which explains why we have three very different approaches to what many, unfortunately, view as the same problem.

Today’s “baked in” security capabilities aren’t altogether that mature or differentiated.  The hooks and APIs that allow for diversity and “defense in depth” provide new and interesting ways to instantiate security, but they also add complexity, driving us back to an integration play.  The third approach is looked upon as proprietary and limiting in terms of visibility and transparency, and it doesn’t solve problems such as application and information security any more than the other two do.

Will security get “better” as we move forward with virtualization and cloud computing?  Certainly.  Perhaps because of it, perhaps in spite of it.

One thing’s for sure, it’s going to be messy, despite what the marketing says.


The Cloud As a (Hax0r’s) Calculator. Yawn…

January 11th, 2011 2 comments

If I see another news story about how a “hacker” has shown that by “utilizing the cloud” to harness compute on demand one can do what might otherwise require a botnet or specialized hardware, and which then suggests that this somehow compromises an entire branch of technology, I’m going to…

Meh.

Yeah, cloud makes this cheap and accessible…rainbow table cracking using IaaS images via a cloud provider…passwords, wifi creds, credit card numbers, pi…

Please.
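
The “hack,” such as it is, amounts to arithmetic. A back-of-the-envelope sketch where every number is an assumption on my part, not anyone’s benchmark or AWS’s price list:

    instances = 10        # assumed fleet size
    rate = 70000.0        # assumed WPA-PSK guesses/sec per GPU instance
    price = 2.10          # assumed hourly price per instance
    words = 135e6         # assumed dictionary size

    hours = words / (instances * rate) / 3600
    print('%.2f hours, roughly $%.2f' % (hours, hours * instances * price))

Minutes of work and pocket change.  Cheap and accessible, yes; a new class of vulnerability, no.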

See:

Researcher cracks Wi-Fi passwords with Amazon cloud

Using Cloud Computing To Crack Passwords – Amazon’s EC2


Incomplete Thought: Why Security Doesn’t Scale…Yet.

January 11th, 2011 1 comment

There are lots of reasons one might use to illustrate why operationalizing security — both from the human and technology perspectives — doesn’t scale.

I’ve painted numerous pictures highlighting the cyclical nature of technology transitions, the supply/demand curve related to threats, vulnerabilities, technology and compensating controls, and even relevant anecdotes involving the intersection of Moore’s and Metcalfe’s laws.  This really was a central theme in my Cloudinomicon presentation: “idempotent infrastructure, building survivable systems and bringing sexy back to information centricity.”

Here are some other examples of things I’ve written about in this realm.

Batting around how public “commodity” cloud solutions force us to re-evaluate how, where, why and who “does” security was an interesting journey.  Ultimately, it comes down to architecture and poking at the sanctity of models hinged on an operational premise that may or may not be as relevant as it used to be.

However, I think the most poignant and yet potentially obvious answer to the “why doesn’t security scale?” question is the fact that security products, by design, don’t scale because they have not been created to allow for automation across almost every aspect of their architecture.

Automation and the interfaces (read: APIs) by which security products ought to be provisioned, orchestrated and deployed are simply lacking in most of them.

Yes, there exist security products that are distributed, but they are still managed, provisioned and deployed manually — generally using a hub-and-spoke management model that doesn’t lend itself to automating anything that doesn’t otherwise rely upon bubble-gum and bailing wire scripting…

Sure, we’ve had things like SNMP as a “standard interface” for “management” for a long while 😉  We’ve had common ways of describing threats and vulnerabilities.  Recently we’ve seen XML-based APIs emerge as a function of the latest generation of (mostly virtualized) firewall technologies, but most products still rely upon stand-alone GUIs, CLIs, element managers and a meat cloud of operators to push the go button (or reconfigure.)

Really annoying.
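
For the record, here’s roughly what I mean by API-driven provisioning, a hypothetical sketch in which the endpoint and schema are invented but the XML-over-HTTP style mirrors what the newer virtualized firewalls are starting to expose:

    import urllib2

    RULE = """<firewall-rule>
      <source>10.0.1.0/24</source>
      <destination>10.0.2.10</destination>
      <port>443</port>
      <action>allow</action>
    </firewall-rule>"""

    req = urllib2.Request('https://fw.example.com/api/rules', RULE,
                          {'Content-Type': 'application/xml'})
    urllib2.urlopen(req)  # an orchestration layer can drive this in a loop

No GUI, no CLI session, no meat cloud; just a call any orchestration tool can make.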

Alongside the lack of standard API-based management planes, control planes are largely proprietary, and the output of correlated, event-driven telemetry at all layers of the stack is equally lacking.  Of course, the application and security layers that run atop infrastructure are still largely discrete, which makes the problem more difficult.

The good news is that virtualization in the enterprise and the emergence of the cultural and operational models predicated upon automation are starting to influence product roadmaps in ways that will positively affect the problem space described above but we’ve got a long haul as we make this transition.

Security vendors are starting to realize that they must retool many of their technology roadmaps to deal with the impact of dynamism and automation.  Some, not all, are discovering painfully the fact that simply creating a virtualized version of a physical appliance doesn’t make it a virtual security solution (or cloud security solution) in the same way that moving an application directly to cloud doesn’t necessarily make it a “cloud application.”

In the same way that one must often re-write or specifically design applications for cloud, we have to do the same for security.  Arguably there are things that can and should be preserved; basic underpinnings such as firewalls, for example, don’t need to change at their core, but their “packaging” does.

I’m privy to lots of the underlying mechanics of these activities — from open source to highly-proprietary — and I’m heartened by the fact that we’re beginning to make progress.  We shouldn’t have to make a distinction between crafting and deploying security policies in physical or virtual environments.  We shouldn’t be held hostage by the separation of application logic from the underlying platforms.

In the long term, I’m optimistic we won’t have to.

/Hoff


The Curious Case Of the MBO Cloud

December 23rd, 2010 1 comment

I was speaking to an enterprise account manager the other day regarding strategic engagements in Cloud Computing in very large enterprises.

He remarked on the non-surprising parallelism occurring as these companies build and execute on cloud strategies that involve both public and private cloud initiatives.

Many of them are still trying to leverage the value of virtualization and are thus often conservative about their path forward.  Many are blazing new trails.

We talked about the usual barriers to entry for even small PoC’s: compliance, security, lack of expertise, budget, etc., and then he shook his head solemnly, stared at the ground and mumbled something about a new threat to the overall progress toward enterprise cloud adoption.

MBO Cloud.

We’ve all heard of public, private, virtual private, hybrid, and community clouds, right? But “MBO Cloud?”

I asked. He clarified:

Cloud computing is such a hot topic, especially with its promise of huge cost savings, agility and reduced time-to-market for services and goods, that many large companies who might otherwise be unable or unwilling to pilot using a public cloud provider, and who also don’t want to risk much if any capital outlay for software and infrastructure to test private cloud, are taking an interesting turn.

They’re trying to replicate Amazon or Google but not for the right reasons or workloads. They just look at “cloud” as some generic low-cost infrastructure platform that requires some open source and a couple of consultants — or even a full-time team of “developers” assigned to make it tick.

They rush out, buy 10 off-the-shelf white-label commodity multi-CPU/multi-core servers, acquire a plain vanilla NAS or low-end SAN storage appliance, sprinkle on some Xen or KVM, load on some unremarkable random set of open source software packages to test with a tidy web front-end and call it “cloud.”  No provisioning, no orchestration, no self-service portals, no chargeback, no security, no real scale, no operational re-alignment, no core applications…

It costs them next to nothing and it delivers about the same because they’re not designing for business cases that are at all relevant, they’re simply trying to copy Amazon and point to a shiny new rack as “cloud.”

Why do they do this? To gain experience and expertise? To dabble cautiously in an emerging set of technological and operational models?  To offload critical workloads that scale up and down?

Nope.  They do this for two reasons:

  1. Now that they have proven they can “successfully” spin up a “cloud” — however useless it may be — that costs next to nothing, it gives them leverage to squeeze vendors on pricing when and if they are able to move beyond this pile of junk, and
  2. Management By Objectives (MBO) — or a fancy way of saying, “bonus.”  Many C-levels and their ops staff are compensated via bonus on hitting certain objectives. One of them (for all the reasons stated above) is “deliver on the strategy and execution for cloud computing.” This half-hearted effort sadly qualifies.

So here’s the problem: when these efforts flame out and don’t deliver, they will impact the success of cloud in general — everywhere from private cloud vendors to potentially even public cloud offers like AWS.  Why?  Because, as we already know, *anything* that smells at all like failure gets reflexively blamed on cloud these days, and as these craptastic “cloud” PoC’s fail to deliver — even on minimal cash outlay — it’s going to be hard to get a second chance given the bad taste left in the mouths of the business and management.

The opposite point could also be made in regard to public cloud services — that these truly “false cloud*” trials, based on poorly architected and executed bubble gum and bailing wire, will drive companies to public cloud (however long that may take as compliance and security catch up.)

It will be interesting to see which happens first.

Either way, beware the actual “false cloud” but realize that the motivation behind many of them isn’t the betterment of the business or evolution of IT, it’s the fattening of wallets.

/Hoff

* I’m leveraging “false cloud” here to truly illustrate a point; despite actually useful private cloud initiatives, this is a term unfortunately levied on all private cloud initiatives by certain public cloud providers.
