
Posts Tagged ‘Amazon Elastic Compute Cloud’

On Stacked Turtles & the AWS Outage…

April 29th, 2011

The best summary I could come up with:

Dedicated AWS VPC Compute Instances – Strategically Defensive or Offensive?

March 28th, 2011

Chugging right along on the feature-enhancement locomotive, following last week’s extension of the networking capabilities of their Virtual Private Cloud (VPC) offering (see: AWS’ New Networking Capabilities – Sucking Less 😉), Amazon Web Services today announced the availability of dedicated compute instances (both on-demand and reserved) within a VPC:

Dedicated Instances are Amazon EC2 instances launched within your Amazon Virtual Private Cloud (Amazon VPC) that run on hardware dedicated to a single customer. Dedicated Instances let you take full advantage of the benefits of Amazon VPC and the AWS cloud – on-demand elastic provisioning, pay only for what you use, and a private, isolated virtual network, all while ensuring that your Amazon EC2 compute instances will be isolated at the hardware level.
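
Mechanically, the single-tenant knob is just a tenancy attribute set at launch time. Here’s a minimal sketch using the boto3 SDK; the AMI and subnet IDs are placeholders, not real resources:

```python
# Sketch: launching a Dedicated Instance into a VPC subnet with boto3.
# The AMI and subnet IDs below are placeholders -- substitute your own.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.run_instances(
    ImageId="ami-00000000",              # placeholder AMI ID
    InstanceType="m1.large",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-00000000",          # placeholder VPC subnet
    Placement={"Tenancy": "dedicated"},  # the single-tenant hardware knob
)
print(resp["Instances"][0]["InstanceId"])
```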

That’s interesting, isn’t it?  I remember chuckling when AWS announced VPC back in 2009 and writing the post “Calling All Private Cloud Haters: Amazon Just Peed On Your Fire Hydrant…,” in which I suggested that VPC:

  • Legitimized Private Cloud as a reasonable, needed, and prudent step toward Cloud adoption for enterprises,
  • Substantiated the value proposition of Private Cloud as a way of removing a barrier to Cloud entry for enterprises, and
  • Validated the ultimate vision toward hybrid Clouds and Inter-Cloud

That got some hackles up.

So this morning, people immediately started squawking on Twitter about how this did (or didn’t) look remarkably like private cloud or dedicated hosting.  This is why, about two years ago, I generated this taxonomy that pointed out the gray area of “private cloud” — the notion of who manages it, who owns the infrastructure, where it’s located and who consumes it:

I did a lot of this work well before I utilized it in the original Cloud Security Alliance Guidance architecture chapter I wrote, but that experience refined what I meant a little more clearly. This version was produced PRIOR to the NIST guidance, which is why you don’t see mention of “community cloud”:

  1. Private
    Private Clouds are provided by an organization or their designated service provider and offer a single-tenant (dedicated) operating environment with all the benefits and functionality of elasticity* and the accountability/utility model of Cloud.  The physical infrastructure may be owned by and/or physically located in the organization’s datacenters (on-premise) or that of a designated service provider (off-premise) with an extension of management and security control planes controlled by the organization or designated service provider respectively.
    The consumers of the service are considered “trusted.”  Trusted consumers of service are those who are considered part of an organization’s legal/contractual umbrella including employees, contractors, & business partners.  Untrusted consumers are those that may be authorized to consume some/all services but are not logical extensions of the organization.
  2. Public
    Public Clouds are provided by a designated service provider and may offer either a single-tenant (dedicated) or multi-tenant (shared) operating environment with all the benefits and functionality of elasticity and the accountability/utility model of Cloud.
    The physical infrastructure is generally owned by and managed by the designated service provider and located within the provider’s datacenters (off-premise.)  Consumers of Public Cloud services are considered to be untrusted.
  3. Managed
    Managed Clouds are provided by a designated service provider and may offer either a single-tenant (dedicated) or multi-tenant (shared) operating environment with all the benefits and functionality of elasticity and the accountability/utility model of Cloud.  The physical infrastructure is owned by and/or physically located in the organization’s datacenters with an extension of management and security control planes controlled by the designated service provider.  Consumers of Managed Clouds may be trusted or untrusted.
  4. Hybrid
    Hybrid Clouds are a combination of public and private cloud offerings that allow for transitive information exchange and possibly application compatibility and portability across disparate Cloud service offerings and providers utilizing standard or proprietary methodologies regardless of ownership or location.  This model provides for an extension of management and security control planes.  Consumers of Hybrid Clouds may be trusted or untrusted.

* Note: the benefits of elasticity don’t imply massive scale, which in many cases is not a relevant requirement for an enterprise.  Also, I ultimately deprecated the “managed” designation because it was a variation on a theme, but you can tell that the distinction I was going for between private and hybrid is the notion of OR versus AND designations in the various criteria.
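
To make the OR-versus-AND point concrete, here’s a toy encoding of the taxonomy’s axes (the field names and values are mine, purely illustrative): a given private cloud resolves each criterion to a single value, while a hybrid holds several values at once.

```python
# Toy encoding of the deployment-model criteria above; not a standard.
from dataclasses import dataclass

@dataclass(frozen=True)
class DeploymentModel:
    managed_by: frozenset  # "org" and/or "provider"
    owned_by: frozenset    # "org" and/or "provider"
    location: frozenset    # "on-premise" and/or "off-premise"
    consumers: frozenset   # "trusted" and/or "untrusted"

# Private resolves each axis to ONE of its allowed values (an OR)...
private = DeploymentModel(
    managed_by=frozenset({"org"}),
    owned_by=frozenset({"org"}),
    location=frozenset({"on-premise"}),
    consumers=frozenset({"trusted"}),
)

# ...while Hybrid spans multiple values per axis simultaneously (an AND).
hybrid = DeploymentModel(
    managed_by=frozenset({"org", "provider"}),
    owned_by=frozenset({"org", "provider"}),
    location=frozenset({"on-premise", "off-premise"}),
    consumers=frozenset({"trusted", "untrusted"}),
)
```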

AWS’ dedicated VPC options now give you another ‘OR’ option when thinking about who manages and who owns the infrastructure your workloads run on and, more importantly, where it’s located.  More specifically, the notion of a ‘virtual’ cloud becomes less and less important as the hybrid interconnectedness of resources starts to make more sense, regardless of whether you use overlay solutions like CloudSwitch, “integrated” solutions from vendors like VMware or Citrix, or offerings from AWS itself.  In the long term, the answer will probably be “D) all of the above.”

Providing dedicated compute atop a hypervisor for which you are the only tenant will be attractive to many enterprises who have trouble coming to terms with sharing memory/CPU resources with other customers.  This dedicated functionality costs a pretty penny – $87,600 a year – and, as Simon Wardley pointed out, this has an interesting effect inasmuch as it puts a price tag on isolation.
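
For what it’s worth, the arithmetic suggests that figure is simply a flat $10/hour fee annualized; that basis is my assumption, but the numbers line up exactly:

```python
# Back-of-the-envelope: $87,600/year == a flat $10/hour, 24x365.
# (Assumes the pricing basis is a flat hourly dedicated-tenancy fee.)
hourly_fee_usd = 10
print(hourly_fee_usd * 24 * 365)  # -> 87600
```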

Here’s the interesting thing that goes to the title of this post:

Is this a capability that AWS really expects to be utilized as they further blur the lines between public, private and hybrid cloud models, OR is it a defensive strategy, hinged on exorbitant costs, meant to push enterprises further toward shared compute and overlay security models?

In other words: is this a strategically defensive or an offensive move?

A single tenant atop a hypervisor atop dedicated hardware — that will go a long way toward addressing one concern: noisy (and nosy) neighbors.

Now, keep in mind that if an enterprise’s threat modeling and risk management frameworks are reasonably rational, they’ll realize that this is compute/memory isolation only.  Clearly the network and storage infrastructure is still shared, but today’s “state of the art” in cloud overlay encryption (encrypted file systems and SSL/IPsec VPNs) will likely address those issues.  Shared underlying cloud management/provisioning/orchestration is still an interesting area of concern.
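
By “overlay encryption” I mean the guest encrypting data before it ever touches the shared infrastructure. A minimal sketch, using the third-party Python cryptography package (my choice of library, purely illustrative):

```python
# Sketch: guest-side overlay encryption so shared storage only ever sees
# ciphertext. Requires the 'cryptography' package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, fetch from a key store
f = Fernet(key)

plaintext = b"customer record"
ciphertext = f.encrypt(plaintext)  # write THIS to the shared volume/object store

assert f.decrypt(ciphertext) == plaintext
```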

So this will be an interesting play for AWS. Whether they’re using this to take a hammer to existing private cloud models or just to add another dimension to their service offering (logical, either way), I think in many cases enterprises will pay this tax to further satisfy compliance requirements by removing the compute multi-tenancy boogeyman.

/Hoff


The Security Hamster Sine Wave Of Pain: Public Cloud & The Return To Host-Based Protection…

July 7th, 2010

This is a revisitation of a blog I wrote last year: Incomplete Thought: Cloud Security IS Host-Based…At The Moment

I use my “Security Hamster Sine Wave of Pain” to illustrate the cyclical nature of security investment and deployment models over time and how disruptive innovation and technology impacts the flip-flop across the horizon of choice.

To wit: most mass-market Public Cloud providers such as Amazon Web Services rely on highly-abstracted and limited exposure of networking capabilities.  This means that most traditional network-based security solutions are impractical or non-deployable in these environments.

Network-based virtual appliances, which generally expect to be deployed in-line with the assets they protect, are at a disadvantage given their topological dependency.

So what we see are security solution providers simply re-marketing their network-based solutions as host-based solutions instead…or confusing things with Barney announcements.

Take today’s press release from Sourcefire:

Snort and Sourcefire Vulnerability Research Team(TM) (VRT) rules are now available through the Amazon Elastic Compute Cloud (Amazon EC2) in the form of an Amazon Machine Image (AMI), enabling customers to proactively monitor network activity for malicious behavior and provide automated responses.

Leveraging Snort installed on the AMI, customers of Amazon Web Services can further secure their most critical cloud-based applications with Sourcefire’s leading protection. Snort and Sourcefire(R) VRT rules are also listed in the Amazon Web Services Solution Partner Directory, so that users can easily ensure that their AMI includes the latest updates.

As far as I can tell, this means you can install a “virtual appliance” of Snort/Sourcefire as a standalone AMI, but there’s no real description of how one might actually implement it in an environment that isn’t topologically friendly to this sort of network-based implementation constraint.*

Since you can’t easily “steer traffic” through an IPS in the AWS model and can’t leverage promiscuous mode or taps, what does this packaging actually mean?  Also, if one has a few hundred AMIs containing applications spread across multiple availability zones/regions, how does a solution like this scale (from both a performance and a management perspective)?

I’ve spoken/written about this many times:

Where Are the Network Virtual Appliances? Hobbled By the Virtual Network, That’s Where… and

Dear Public Cloud Providers: Please Make Your Networking Capabilities Suck Less. Kthxbye

Ultimately, expect that Public Cloud will force the return to host-based IDS/IPS (HIDS/HIPS) deployments — the return to agent-based security models.  This poses just as many operational challenges as those I allude to above.  We *must* have better ways of tying together network and host-based security solutions in these Public Cloud environments that make sense from an operational, cost, and security perspective.

/Hoff


* I “spoke” with Marty Roesch on the Twitter and he filled in the gaps associated with how this version of Snort works – there’s a host-based packet capture element with a “network” redirect to a stand-alone AMI:

@Beaker AWS->Snort implementation is IDS-only at the moment, uses software packet tap off customer app instance, not topology-dependent

and…

they install our soft-tap on their AMI and send the traffic to our AMI for inspection/detection/reporting.

It will be interesting to see how performance nets out using this redirect model.
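
To make the redirect model concrete, here’s a hypothetical reconstruction of a host-based soft tap (mine, not Sourcefire’s actual implementation): capture raw frames on the application instance and forward copies to the sensor AMI.

```python
# Hypothetical "soft tap" sketch: copy raw frames off the app instance's wire
# and redirect them over UDP to a standalone sensor AMI for inspection.
# Linux-only (AF_PACKET), requires root; the sensor address is a placeholder.
import socket

SENSOR = ("10.0.0.99", 9999)  # placeholder address of the Snort sensor AMI

ETH_P_ALL = 0x0003            # capture all protocols
tap = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(ETH_P_ALL))
out = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

while True:
    frame, _ = tap.recvfrom(65535)  # one raw frame from the interface
    out.sendto(frame, SENSOR)       # redirect a copy to the sensor
    # NB: a real tap must exclude its own redirect traffic from capture,
    # or it will re-capture the copies it just sent (a feedback loop).
```

The feedback-loop caveat in the sketch hints at exactly that performance question: every byte the instance sees crosses the network a second time on its way to the sensor.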


Where Are the Network Virtual Appliances? Hobbled By the Virtual Network, That’s Where…

January 31st, 2010

Allan Leinwand from GigaOm wrote a great article asking “Where are the network virtual appliances?” This was followed up by another excellent post by Rich Miller.

Allan sets up the discussion describing how we’ve typically plumbed disparate physical appliances into our network infrastructure to provide discrete network and security capabilities such as load balancers, VPNs, SSL termination, firewalls, etc.  He then goes on to describe the stunted evolution of virtual appliances:

To be sure, some networking devices and appliances are now available in virtual form.  Switches and routers have begun to move toward virtualization with VMware’s vSwitch, Cisco’s Nexus 1000v, the open source Open vSwitch and routers and firewalls running in various VMs from the company I helped found, Vyatta.  For load balancers, Citrix has released a version of its Netscaler VPX software that runs on top of its virtual machine, XenServer; and Zeus Systems has an application traffic controller that can be deployed as a virtual appliance on Amazon EC2, Joyent and other public clouds.

Ultimately I think it prudent, for discussion’s sake, to separate routing, switching and load balancing (connectivity) from functions such as DLP, firewalls, and IDS/IPS (security), as lumping them together masks the real problem: the latter is completely dependent upon the capabilities and functionality of the former.  This is what Allan almost gets to when describing his lament with the virtual appliance ecosystem today:

Yet the fundamental problem remains: Most networking appliances are still stuck in physical hardware — hardware that may or may not be deployed where the applications need them, which means those applications and their associated VMs can be left with major gaps in their infrastructure needs. Without a full-featured and stateful firewall to protect an application, it’s susceptible to various Internet attacks.  A missing load balancer that operates at layers three through seven leaves a gap in the need to distribute load between multiple application servers. Meanwhile, the lack of an SSL accelerator to offload processing may lead to performance issues and without an IDS device present, malicious activities may occur.  Without some (or all) of these networking appliances available in a virtual environment, a VM may find itself constrained, unable to take full advantage of the possible economic benefits.

I’ve written about this many, many times. In fact, almost three years ago I created a presentation called “The Four Horsemen of the Virtualization Security Apocalypse,” which described in excruciating detail how network virtual appliances were a big ball of fail and would be for some time. I further suggested that many of the “best-of-breed” products would ultimately become “good enough” features in virtualization vendors’ hypervisor platforms.

Why?  Because there are some very real problems with virtualization (and Cloud) as it relates to connectivity and security:

  1. Most of the virtual network appliances, especially those “ported” from the versions that usually run on dedicated physical hardware (COTS or proprietary), do not provide feature, performance, scale or high-availability parity; most are hobbled or require per-platform customization or re-engineering in order to function.
  2. The resilience and high-availability options of today’s off-the-shelf virtual connectivity do not pair well with the mobility and dynamism of de-coupled virtual machines; VMs are ultimately temporal, and networks don’t like the topological instability caused by key components moving or disappearing.
  3. The performance and scale of virtual appliances still suffer when competing for I/O and resources on the same physical hosts as the guests they attempt to protect.
  4. Virtual connectivity is generally a function of the VMM (or a loadable module/domain therein.) The architecture of the VMM has dramatic impact upon the architecture of the software designed to provide the connectivity, and vice versa.
  5. Security solutions are incredibly topology-sensitive.  Given the scenario in #1, when a VM moves or is distributed across the pooled infrastructure, unless the security capabilities are already present on the physical host or the connectivity and security layers share a control plane (or at least can exchange telemetry,) things will simply break.
  6. Many virtualization (and especially cloud) platforms do not support protocols or topologies that many connectivity and security virtual appliances require to function (such as multicast for load balancing).
  7. It’s very difficult to mimic, in virtual networking environments, the in-line path requirements that would otherwise force traffic passing through the connectivity layers (layers 2 through 7) up through various policy-driven security layers (virtual appliances).
  8. There is no common methodology to express what security requirements the connectivity fabrics should ensure are available prior to allowing a VM to spool up, let alone move (a sketch of what such an expression might look like follows this list).
  9. Virtualization vendors who provide solutions for the enterprise have rich networking capabilities natively as well as with third-party connectivity partners, including VM and VMM introspection capabilities. As I wrote about here, mass-market Cloud providers such as Amazon Web Services or Rackspace Cloud have severely crippled networking.
  10. Virtualization and cloud vendors generally force many security-versus-performance tradeoffs when implementing introspection capabilities in their platforms: third-party code running in the kernel, scheduler prioritization issues, I/O limitations, etc.
  11. Much of the basic networking capability is being pushed lower into silicon (into the CPUs themselves), which makes virtual appliances even further removed from the guts that enable them.
  12. Physical appliances (in the enterprise) exist en masse.  Many of them provide highly scalable solutions to the specific functions that Allan refers to.  The need exists, given the limitations I describe above, to provide for integration/interaction between them, the VMM and any virtual appliances in order to offload certain functions as well as provide coverage between the physical and the logical.
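
Regarding #8 above, here’s a purely hypothetical sketch of what such a methodology might look like: a VM declares the security capabilities it requires, and placement is only permitted on hosts whose connectivity fabric advertises them. Nothing like this exists as a standard; that is exactly the point.

```python
# Hypothetical pre-flight check for point #8: a VM's security manifest is
# matched against the capabilities each host's fabric advertises before the
# VM may spool up (or move) there. Illustrative only; no such standard exists.

VM_REQUIRES = {"stateful_firewall", "inline_ids", "ssl_offload"}

HOST_FABRICS = {
    "host-a": {"stateful_firewall", "inline_ids", "ssl_offload", "l7_lb"},
    "host-b": {"stateful_firewall"},  # e.g. a bare mass-market cloud node
}

def placement_candidates(required, fabrics):
    """Return hosts whose advertised capabilities satisfy the manifest."""
    return [host for host, caps in fabrics.items() if required <= caps]

print(placement_candidates(VM_REQUIRES, HOST_FABRICS))  # -> ['host-a']
```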

What does this mean?  It means that ultimately, to ensure their own survival, virtualization and cloud providers will depend less upon virtual appliances and add more of the basic connectivity AND security capabilities into the VMMs themselves, as it’s the only way to guarantee performance, scalability and resilience and to satisfy the security requirements of customers. New generations of protocols, APIs and control planes will emerge to provide for this capability, but this will drive the same old integration battles we’re supposed to be absolved from with virtualization and Cloud.

Connectivity and security vendors will offer virtual replicas of their physical appliances to gain a foothold in virtualized/cloud environments: intercept traffic (think basic traps/ACLs) and then interact with higher-performing physical appliance security service overlays or embedded line cards in service chassis.  This is especially true in enterprises, but it poses many challenges in software-only, mass-market cloud environments, where what you’ll continue to get is simply basic connectivity and security with limited networking functionality.  This implies that more and more security will be pushed into the guest and application logic layers to deal with this disconnect.

This is exactly where we are today with Cloud providers like Amazon Web Services: basic ingress-only filtering with a very simplistic, limited and abstracted set of both connectivity and security capabilities.  See “Dear Public Cloud Providers: Please Make Your Networking Capabilities Suck Less. Kthxbye.”  Will they add more functionality?  Perhaps. The question is whether they can afford to, in order to limit the impact that connectivity and security variability/instability can bring to an environment.
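
To ground that: “basic ingress-only filtering” is the security group model, in which about all you can express is which sources may reach which ports on an instance. A minimal boto3 sketch (the group ID is a placeholder):

```python
# Sketch of the security group model: stateful ingress rules and nothing more
# (no inline inspection, no path steering). Group ID is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.authorize_security_group_ingress(
    GroupId="sg-00000000",                      # placeholder security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],  # allow HTTPS from anywhere
    }],
)
```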

That said, it’s certainly achievable, if you are willing and able, to construct a completely software-based networking environment.  But such environments require a completely new approach and a stack re-write, along with an operational expertise that will be hard to come by for those who have spent the last 20 years working in a different paradigm, and that’s a huge piece of this problem.

The connectivity layer — however integrated into the virtualized and cloud environments they seem — continues to limit how and what the security layers can do and will for some time, thus limiting the uptake of virtual network and security appliances.

Situation normal.

/Hoff
