
Posts Tagged ‘Virtual private network’

The Curious Case Of Continuous and Consistently Contiguous Crypto…

August 8th, 2013

Here’s an interesting resurgence of a security architecture and an operational deployment model that is making a comeback:

Requiring VPN tunneled and MITM’d access to any resource, internal or external, from any source internal or external.

While mobile devices (laptops, phones and tablets) are often deployed with client or clientless VPN endpoint solutions that enable them to move outside the corporate boundary and still access internal resources, there’s a marked uptake in the requirement that all traffic from all sources utilize VPNs (SSL/TLS, IPsec or both) and that ALL sessions terminate on managed gateways, regardless of ownership or location of either the endpoint or the resource being accessed.

Put more simply: require VPN for (id)entity authentication, access control, and confidentiality and then MITM all the things to transparently or forcibly fork to security infrastructure.

Why?

The reasons are pretty easy to understand.  Here are just a few of them:

  1. The user experience shouldn’t change regardless of the access modality or location of the endpoint consumer; who, what, where, when, how, and why all matter, but the user shouldn’t have to care.
  2. Whether inside or outside, split tunneling on a per-service/per-application basis means we need visibility to understand and correlate traffic patterns and usage.
  3. Because the majority of traffic is encrypted (usually via SSL,) security infrastructure needs the capability to inspect traffic (selectively) using a coverage model that is practical and can give a first-step view of activity.
  4. Information exfiltration (legitimate and otherwise) is a problem.

…so how are folks approaching this?

Easy.  They simply require that all sessions terminate on a set of [read: clustered & scalable] VPN gateways, selectively decrypt based on policy, forward (in serial or parallel) to any number of security apparatus, and in some/many cases, re-encrypt sessions and send them on their way.
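To make that concrete, here’s a minimal sketch of the per-session decision such a gateway cluster makes: terminate, selectively decrypt, fork to the security chain, re-encrypt. The category names, service chain and session shape are hypothetical, illustrating the pattern rather than any product’s API.

```python
# Illustrative sketch only: how a VPN/MITM gateway might decide, per session,
# whether to decrypt and which security apparatus to chain. Categories,
# service names and the Session shape are hypothetical.

from dataclasses import dataclass

DECRYPT_BYPASS = {"healthcare", "banking"}   # traffic we choose to leave opaque

# Security apparatus decrypted traffic gets forked to, in order (serial chaining)
SERVICE_CHAIN = ["dlp", "ips", "av", "netflow-collector"]

@dataclass
class Session:
    src: str            # endpoint, internal or external
    dst: str            # resource, internal or external
    dst_category: str   # e.g., from a URL/IP categorization feed
    protocol: str       # "tls", "ipsec", ...

def plan(session: Session) -> dict:
    """Return a handling plan for a session terminating on the gateway."""
    if session.dst_category in DECRYPT_BYPASS:
        # Still terminate the VPN for authN/authZ, but leave the payload alone
        return {"decrypt": False, "chain": ["netflow-collector"], "reencrypt": True}
    # Decrypt, fork through the full security chain, re-encrypt, send on its way
    return {"decrypt": True, "chain": SERVICE_CHAIN, "reencrypt": True}

print(plan(Session("10.1.2.3", "203.0.113.7", "saas-crm", "tls")))
print(plan(Session("10.1.2.3", "198.51.100.9", "banking", "tls")))
```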

We’ve been doing this “forever” with the “outside-in” model (remote access to internal resources,) but the notion that folks are starting to do this ubiquitously on internal networks is the nuance.  AVC (application visibility and control) is the inside-out component (usually using transparent forward proxies with trusted PAC files on endpoints) with remote access and/or reverse proxies like WAFs and/or ADCs as the outside-in use case.

These two ops models were generally viewed and managed as separate problems.  Now thanks to Cloud, Mobility, virtualization and BYOE (bring your own everything) as well as the more skilled and determined set of adversaries, we’re seeing a convergence of the two.  To make the “inside-out” and “outside-in” more interesting, what we’re really talking about here is extending the use case to include “inside-inside” if you catch my drift.

Merging the use case approach at a fundamental architecture level can be useful; this methodology works regardless of source or destination.  It does require all sorts of incidental changes to things like IdM, AAA, certificate management, etc. but it’s one way that folks are trying to centralize the distributed — if you get what I mean.

I may draw a picture to illustrate what I mean, but do let me know either if you’re doing this (many of the largest customers I know are) or if it makes sense.

/Hoff

P.S. Remember back in the ’80s/’90s when 3Com bundled NICs with integrated IPsec VPN capability?  Yeah, that.


Quick Ping: VMware’s Horizon App Manager – A Big Bet That Will Pay Off…

May 17th, 2011

It is so tempting to write about VMware‘s overarching strategy of enterprise and cloud domination, but this blog entry really speaks to an important foundational element in their stack of offerings which was released today: Horizon App Manager.

Check out @Scobleizer’s interview with Noel Wasmer (Dir. of Product Management for VMware) on the ins-and-outs of HAM.

Frankly, federated identity and application entitlement is not new.

Connecting and extending identities from inside the enterprise using native directory services to external applications (SaaS or otherwise) is also not new.

What’s “new” with VMware’s Horizon App Manager is that we see the convergence and well-sorted integration of a service-driven federated identity capability that ties together enterprise “web” and “cloud” (*cough*)-based SaaS applications with multi-platform device mobility powered by the underpinnings of freshly-architected virtualization and cloud architecture.  All delivered as a service (SaaS) by VMware for $30 per user/per year.

[Update: @reillyusa and I were tweeting back and forth about the inside -> out versus outside -> in integration capabilities of HAM.  The SAML Assertions/OAuth integration seems to suggest this is possible.  Moreover, as I alluded to above, solutions exist today which integrate classical VPN capabilities with SaaS offers that provide SAML assertions and SaaS identity proxying (access control) to well-known applications like SalesForce.  Here’s one, for example.  I simply don’t have any hands-on experience with HAM or any deeper knowledge than what’s publicly available to comment further — hence the “Quick Ping.”]

Horizon App Manager really is a foundational component that will tie together the various components of  VMware’s stack offers for seamless operation including such products/services as Zimbra, Mozy, SlideRocket, CloudFoundry, View, etc.  I predict even more interesting integration potential with components such as elements of the vShield suite — providing identity-enabled security policies and entitlement at the edge to provision services in vCloud Director deployments, for example (esp. now that they’ve acquired NeoAccel for SSL VPN integration with Edge.)

“Securely extending the enterprise to the Cloud” (and vice versa) is a theme we’ll hear more and more from VMware.  Whether it’s thin clients, virtual machines, SaaS applications, PaaS capabilities, etc., fundamentally what we all know is that for the enterprise to be able to assert control to enable “security” and compliance, we need entitlement.
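Since entitlement is the crux, here’s a deliberately simplified sketch of the broker pattern: an enterprise identity checked against an app catalog, with a short-lived signed token minted for the SaaS target. I’m using a bare HMAC token instead of real SAML, and the catalog, key and claim names are invented; this is emphatically not HAM’s API.

```python
# Conceptual sketch of identity-backed entitlement (NOT Horizon App Manager's
# actual mechanism or API): check an enterprise identity against an app
# catalog, then mint a short-lived signed assertion for the SaaS target.

import hashlib
import hmac
import json
import time

BROKER_KEY = b"shared-secret-with-saas-app"   # hypothetical trust anchor

CATALOG = {  # which users are entitled to which catalog apps
    "alice@example.com": {"salesforce", "mozy", "sliderocket"},
}

def mint_assertion(user: str, app: str, ttl: int = 300) -> str:
    if app not in CATALOG.get(user, set()):
        raise PermissionError(f"{user} is not entitled to {app}")
    claims = {"sub": user, "aud": app, "exp": int(time.time()) + ttl}
    body = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(BROKER_KEY, body, hashlib.sha256).hexdigest()
    return f"{body.decode()}.{sig}"

print(mint_assertion("alice@example.com", "salesforce"))
```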

I think VMware — as a trusted component in most enterprises — has the traction to encourage the growth of their supported applications in their catalog ecosystem which will in turn make the enterprise excited about using it.

This may not seem like it’s huge — especially to vendors in the IAM space or even Microsoft — but given the footprint VMware has in the enterprise and where they want to go in the cloud, it’s going to be big.

/Hoff

(P.S. It *is* interesting to note that this is a SaaS offer with an enterprise virtual appliance connector.  It’s rumored this came from the TriCipher acquisition.  I’ll leave that little nugget as a tickle…)

(P.P.S. You know what I want? I want a consumer version of this service so I can use it in conjunction with or in lieu of 1Password. Please.  Don’t need AD integration, clearly)



Dedicated AWS VPC Compute Instances – Strategically Defensive or Offensive?

March 28th, 2011

Chugging right along on the feature enhancement locomotive, following the extension of networking capabilities of their Virtual Private Cloud (VPC) offerings last week (see: AWS’ New Networking Capabilities – Sucking Less 😉 ,) Amazon Web Services today announced the availability of dedicated (both on-demand and reserved) compute instances within a VPC:

Dedicated Instances are Amazon EC2 instances launched within your Amazon Virtual Private Cloud (Amazon VPC) that run on hardware dedicated to a single customer. Dedicated Instances let you take full advantage of the benefits of Amazon VPC and the AWS cloud – on-demand elastic provisioning, pay only for what you use, and a private, isolated virtual network, all while ensuring that your Amazon EC2 compute instances will be isolated at the hardware level.

That’s interesting, isn’t it?  I remember writing the post “Calling All Private Cloud Haters: Amazon Just Peed On Your Fire Hydrant…” and chuckling when AWS announced VPC back in 2009, in which I suggested that VPC:

  • Legitimized Private Cloud as a reasonable, needed, and prudent step toward Cloud adoption for enterprises,
  • Substantiated the value proposition of Private Cloud as a way of removing a barrier to Cloud entry for enterprises, and
  • Validated the ultimate vision toward hybrid Clouds and Inter-Cloud

That got some hackles up.

So this morning, people immediately started squawking on Twitter about how this looked remarkably like (or didn’t) private cloud or dedicated hosting.  This is why, about two years ago, I generated this taxonomy that pointed out the gray area of “private cloud” — the notion of who manages it, who owns the infrastructure, where it’s located and who it’s consumed by:

I did a lot of this work well before I utilized it in the original Cloud Security Alliance Guidance architecture chapter I wrote, but that experience refined what I meant a little more clearly, and this version was produced PRIOR to the NIST guidance, which is why you don’t see mention of “community cloud”:

  1. Private
    Private Clouds are provided by an organization or their designated service provider and offer a single-tenant (dedicated) operating environment with all the benefits and functionality of elasticity* and the accountability/utility model of Cloud.  The physical infrastructure may be owned by and/or physically located in the organization’s datacenters (on-premise) or that of a designated service provider (off-premise) with an extension of management and security control planes controlled by the organization or designated service provider respectively.
    The consumers of the service are considered “trusted.”  Trusted consumers of service are those who are considered part of an organization’s legal/contractual umbrella including employees, contractors, & business partners.  Untrusted consumers are those that may be authorized to consume some/all services but are not logical extensions of the organization.
  2. Public
    Public Clouds are provided by a designated service provider and may offer either a single-tenant (dedicated) or multi-tenant (shared) operating environment with all the benefits and functionality of elasticity and the  accountability/utility model of Cloud.
    The physical infrastructure is generally owned by and managed by the designated service provider and located within the provider’s datacenters (off-premise.)  Consumers of Public Cloud services are considered to be untrusted.
  3. Managed
    Managed Clouds are provided by a designated service provider and may offer either a single-tenant (dedicated) or multi-tenant (shared) operating environment with all the benefits and functionality of elasticity and the accountability/utility model of Cloud.
    The physical infrastructure is owned by and/or physically located in the organization’s datacenters with an extension of management and security control planes controlled by the designated service provider.  Consumers of Managed Clouds may be trusted or untrusted.
  4. Hybrid
    Hybrid Clouds are a combination of public and private cloud offerings that allow for transitive information exchange and possibly application compatibility and portability across disparate Cloud service offerings and providers utilizing standard or proprietary methodologies regardless of ownership or location.  This model provides for an extension of management and security control planes.  Consumers of Hybrid Clouds may be trusted or untrusted.

* Note: the benefits of elasticity don’t imply massive scale, which in many cases is not a relevant requirement for an enterprise.  Also, I ultimately deprecated the “managed” designation because it was a variation on a theme, but you can tell that the distinction I was going for between private and hybrid is the notion of OR versus AND designations in the various criteria.
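For the architecture wonks, those distinctions reduce to a small decision function. Here’s my own rough encoding of the taxonomy (nothing normative about it): the OR shows up in how many different owner/operator/location mixes still classify as Private, and Hybrid is the AND, a combination of classifications rather than a single one.

```python
# Rough encoding of the taxonomy above (my interpretation, not a standard).

def classify(managed_by: str, owned_by: str, located: str, consumers: str) -> str:
    """managed_by/owned_by: "org" or "provider"; located: "on-premise" or
    "off-premise"; consumers: "trusted" or "untrusted"."""
    if managed_by == "provider" and (owned_by == "org" or located == "on-premise"):
        return "Managed"   # provider control plane extended over org infrastructure
    if consumers == "trusted":
        # The OR: any mix of owner/operator/location still counts as Private
        # as long as consumers sit under the org's legal/contractual umbrella.
        return "Private"
    return "Public"

def is_hybrid(deployments) -> bool:
    # The AND: private and public offerings joined with transitive exchange.
    kinds = {classify(*d) for d in deployments}
    return {"Private", "Public"} <= kinds

# Provider-owned, off-premise, provider-managed, trusted consumers -> still Private:
print(classify("provider", "provider", "off-premise", "trusted"))
print(is_hybrid([("org", "org", "on-premise", "trusted"),
                 ("provider", "provider", "off-premise", "untrusted")]))  # True
```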

AWS’ dedicated VPC options now give you another ‘OR’ option when thinking about who manages and who owns the infrastructure your workloads run on, and more importantly where it’s located.  More specifically, the notion of ‘virtual’ cloud becomes less and less important as the hybrid nature of interconnectedness of resources starts to make more sense — regardless of whether you use overlay solutions like CloudSwitch, “integrated” solutions from vendors like VMware or Citrix, or from AWS.  In the long term, the answer will probably be “D) all of the above.”

Providing dedicated compute atop a hypervisor for which you are the only tenant will be attractive to many enterprises who have trouble coming to terms with sharing memory/cpu resources with other customers.  This dedicated functionality costs a pretty penny – $87,600 a year – and as Simon Wardley pointed out, this has an interesting effect inasmuch as it puts a price tag on isolation.
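For those doing the math, that figure corresponds (if memory serves) to the flat $10/hour per-region dedicated fee, billed around the clock on top of the per-instance hours:

```python
# The $87,600/year isolation "tax": a flat per-region dedicated fee (as I
# recall, $10/hour whenever at least one Dedicated Instance runs in a region),
# billed continuously; instance-hours are charged on top.
hourly_region_fee = 10                 # USD per hour, per region
print(hourly_region_fee * 24 * 365)    # -> 87600
```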

Here’s the interesting thing that goes to the title of this post:

Is this a capability that AWS really expects to be utilized as they further blur the lines between public, private and hybrid cloud models OR is it a defensive strategy hinged on the exorbitant costs to further push enterprises into shared compute and overlay security models?

Specifically, one wonders whether this is a strategically defensive or offensive move.

A single tenant atop a hypervisor atop dedicated hardware — that will go a long way toward addressing one concern: noisy (and nosy) neighbors.

Now, keep in mind that if an enterprise’s threat modeling and risk management frameworks are reasonably rational, they’ll realize that this is compute/memory isolation only.  Clearly the network and storage infrastructure is still shared, but the “state of the art” in today’s cloud of overlay encryption (file systems and SSL/IPSec VPNs) will likely address those issues.  Shared underlying cloud management/provisioning/orchestration is still an interesting area of concern.

So this will be an interesting play for AWS. Whether they’re using this to take a hammer to the existing private cloud models or just to add another dimension in service offering (logical, either way) I think in many cases enterprises will pay this tax to further satisfy compliance requirements by removing the compute multi-tenancy boogeyman.

/Hoff


What’s The Problem With Cloud Security? There’s Too Much Of It…

October 17th, 2010

Here’s the biggest challenge I see in Cloud deployment as the topic of security inevitably occurs in conversation:

There’s too much of it.

Huh?

More specifically, much like my points regarding networking in highly-virtualized multi-tenant environments — it’s everywhere — we’ve got the same problem with security.  Security is shot-gunned across the cloud landscape in a haphazard fashion…and the buck (pun intended) most definitely does not stop here.

The reality is that if you’re using IaaS, the lines of demarcation for the responsibility surrounding security may at first take seem blurred but are in fact extremely well-delineated, and that’s the problem.  I’ve seen quite a few validated design documents outlining how to deploy “secure multi-tenant virtualized environments.”  One of them is 800 pages long.

Check out the diagram below.

I quickly mocked up an IaaS stack wherein you have the Cloud provider supplying, operating, managing and securing the underlying cloud hardware and software layers whilst the applications and information (contained within VM boundaries) are maintained by the consumer of these services.  The list of controls isn’t complete, but it gives you a rough idea of what gets focused on. Do you see some interesting overlaps?  How about gaps?

This is the issue: each one of those layers has security controls in it.  There is lots of duplication and lots of opportunity for things to be obscured or simply not accounted for at each layer.
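Here’s a toy way to see it: treat each layer’s control set as data and compute the overlap and the gaps. The layer and control names below are illustrative stand-ins for the diagram, nothing more.

```python
# Toy illustration of the duplication/gap problem: the same control families
# appear at multiple layers (managed by different teams, never
# intercommunicating) while each layer silently assumes someone else covers
# the rest. Names are illustrative.

LAYERS = {
    "provider-hardware": {"fw", "vpn", "ips", "lb"},
    "provider-software": {"fw", "vpn", "ips", "lb"},
    "consumer-vm":       {"fw", "vpn", "ips", "hids", "encryption"},
    "consumer-app":      {"authn", "encryption"},
}

ALL = set().union(*LAYERS.values())

# Controls duplicated across layers that never talk to each other:
dupes = {c for c in ALL if sum(c in ctrls for ctrls in LAYERS.values()) > 1}

# What each layer lacks relative to the union -- a gap, or an unstated
# assumption that another layer/team has it covered:
gaps = {layer: ALL - ctrls for layer, ctrls in LAYERS.items()}

print("duplicated:", sorted(dupes))
print("app-layer gaps:", sorted(gaps["consumer-app"]))
```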

Each of these layers and functional solutions is generally managed by different groups of people.  Each of them is generally managed by different methods and mechanisms.  In the case of IaaS, none of the controls at the hardware and software layers generally intercommunicate and given the abstraction provided as part of the service offering, all those security functions are made invisible to the things running in the VMs.

A practical issue is that the FW, VPN, IPS and LB functions at the hardware layer are completely separate from the FW, VPN, IPS and LB functions at the software layer, which are in turn completely separate from the FW, VPN, IPS and LB functions which might be built into the VMs (or virtual appliances) which sit atop them.

The security in the hardware is isolated from the security in the software which is isolated from the security in the workload.  You can, today, quite literally install the same capabilities up and down the stack without ever meeting in the middle.

That’s not only wasteful in terms of resources but also incredibly prone to error in construction, management and implementation (since at the core it’s all software, and software has defects.)

Keep in mind that at the provider level the majority of these security controls are focused on protecting the infrastructure, NOT the stuff atop it.  By design, these systems are blind to the workloads running atop them (which are often encrypted both at rest and in transit.)  In many cases this is why a provider may not be able to detect an “attack” beyond data such as flows/traffic.

To make things more interesting, in some cases the layer responsible for all that abstraction is now the most significant layer involved in securing the system as a whole and the fundamental security elements associated with the trust model we rely upon.

The hypervisor is an enormous liability; there’s no defense in depth when your primary security controls are provided by the (*ahem*) operating system provider.  How does one provide a compensating control when visibility/transparency [detective] are limited by design and there’s no easy way to provide preventative controls aside from the hooks the thing you’re trying to secure grants access to?

“Trust me” ain’t an appropriate answer.  We need better visibility and capabilities to robustly address this issue.  Unfortunately, there’s no standard for security ecosystem interoperability from a management, provisioning, orchestration or monitoring perspective even within a single stack layer.  There certainly isn’t across them.

In the case of Cloud providers who use commodity hardware with big, flat networks with little or no context for anything other than the flows/IP mappings running over them (thus the hardware layer is portrayed as truly commoditized,) how much better or worse do you think the overall security posture of a consumer’s workload running atop this stack is?  No, that’s not a rhetorical question.  I think the case could be argued from either side of the line in the sand given the points I’ve made above.

This is the big suck.  Cloud security suffers from the exact same siloed security telemetry problems as legacy operational models…except now it does so at scale. This is why I’ve always made the case that one can’t “secure the Cloud” — at least not holistically — given this lego brick problem.   Everyone wants to claim that their technology will be the first to solve this problem.  It ain’t going to happen. Not with the IaaS (or even PaaS) model, it won’t.

However, there is a big opportunity to move forward here.  How?  I’ll give you a hint.  It exists toward the left side of the diagram.

/Hoff


Where Are the Network Virtual Appliances? Hobbled By the Virtual Network, That’s Where…

January 31st, 2010

Allan Leinwand from GigaOm wrote a great article asking “Where are the network virtual appliances?” This was followed up by another excellent post by Rich Miller.

Allan sets up the discussion describing how we’ve typically plumbed disparate physical appliances into our network infrastructure to provide discrete network and security capabilities such as load balancers, VPNs, SSL termination, firewalls, etc.  He then goes on to describe the stunted evolution of virtual appliances:

To be sure, some networking devices and appliances are now available in virtual form.  Switches and routers have begun to move toward virtualization with VMware’s vSwitch, Cisco’s Nexus 1000v, the open source Open vSwitch and routers and firewalls running in various VMs from the company I helped found, Vyatta.  For load balancers, Citrix has released a version of its Netscaler VPX software that runs on top of its virtual machine, XenServer; and Zeus Systems has an application traffic controller that can be deployed as a virtual appliance on Amazon EC2, Joyent and other public clouds.

Ultimately I think it prudent for discussion’s sake to separate routing, switching and load balancing (connectivity) from functions such as DLP, firewalls, and IDS/IPS (security), as lumping them together obscures the real problem: the latter is completely dependent upon the capabilities and functionality of the former.  This is what Allan almost gets to when describing his lament with the virtual appliance ecosystem today:

Yet the fundamental problem remains: Most networking appliances are still stuck in physical hardware — hardware that may or may not be deployed where the applications need them, which means those applications and their associated VMs can be left with major gaps in their infrastructure needs. Without a full-featured and stateful firewall to protect an application, it’s susceptible to various Internet attacks.  A missing load balancer that operates at layers three through seven leaves a gap in the need to distribute load between multiple application servers. Meanwhile, the lack of an SSL accelerator to offload processing may lead to performance issues and without an IDS device present, malicious activities may occur.  Without some (or all) of these networking appliances available in a virtual environment, a VM may find itself constrained, unable to take full advantage of the possible economic benefits.

I’ve written about this many, many times. In fact almost three years ago I created a presentation called  “The Four Horsemen of the Virtualization Security Apocalypse” which described in excruciating detail how network virtual appliances were a big ball of fail and would be for some time. I further suggested that much of the “best-of-breed” products would ultimately become “good enough” features in virtualization vendor’s hypervisor platforms.

Why?  Because there are some very real problems with virtualization (and Cloud) as it relates to connectivity and security:

  1. Most of the virtual network appliances, especially those “ported” from the versions that usually run on dedicated physical hardware (COTS or proprietary) do not provide feature, performance, scale or high-availability parity; most are hobbled or require per-platform customization or re-engineering in order to function.
  2. The resilience and high availability options of today’s off-the-shelf virtual connectivity do not pair well with the mobility and dynamism of de-coupled virtual machines; VMs are ultimately temporal and networks don’t like topological instability due to key components moving or disappearing
  3. The performance and scale of virtual appliances still suffer when competing for I/O and resources on the same physical hosts as the guests they attempt to protect
  4. Virtual connectivity is generally a function of the VMM (or a loadable module/domain therein.) The architecture of the VMM has dramatic impact upon the architecture of the software designed to provide the connectivity and vice versa.
  5. Security solutions are incredibly topology sensitive.  Given the scenario in #1 when a VM moves or is distributed across the pooled infrastructure, unless the security capabilities are already present on the physical host or the connectivity and security layers share a control plane (or at least can exchange telemetry,) things will simply break
  6. Many virtualization (and especially cloud) platforms do not support protocols or topologies that many connectivity and security virtual appliances require to function (such as multicast for load balancing)
  7. It’s very difficult to mimic the in-line path requirements in virtual networking environments that would otherwise force traffic passing through the connectivity layers (layers 2 through 7) up through various policy-driven security layers (virtual appliances)
  8. There is no common methodology to express what security requirements the connectivity fabrics should ensure are available prior to allowing a VM to spool up, let alone move (a sketch of what such an expression might look like follows this list)
  9. Virtualization vendors who provide solutions for the enterprise have rich networking capabilities natively as well as with third party connectivity partners, including VM and VMM introspection capabilities. As I wrote about here, mass-market Cloud providers such as Amazon Web Services or Rackspace Cloud have severely crippled networking.
  10. Virtualization and cloud vendors generally force many security vs. performance tradeoffs when implementing introspection capabilities in their platforms: third party code running in the kernel, scheduler prioritization issues, I/O limitations, etc.
  11. Much of the basic networking capabilities are being pushed lower into silicon (into the CPUs themselves) which makes virtual appliances even further removed from the guts that enable them
  12. Physical appliances (in the enterprise) exist en masse.  Many of them provide highly scalable solutions to the specific functions that Allan refers to.  The need exists, given the limitations I describe above, to provide for integration/interaction between them, the VMM and any virtual appliances in order to offload certain functions as well as provide coverage between the physical and the logical.
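To make #8 concrete, here’s a guess at what expressing a VM’s security requirements to the fabric might look like. I’m inventing this wholesale (no such standard or schema exists), but it shows the shape of the placement check that’s missing today:

```python
# Hypothetical (no such standard exists): a security manifest a VM carries,
# checked against what a candidate host/fabric can actually provide before
# the VM is allowed to spool up or move there.

VM_MANIFEST = {
    "requires": {"stateful-fw", "ips-inline", "l3-7-lb"},
    "in_line_path": ["stateful-fw", "ips-inline"],   # ordering matters
    "protocols": {"multicast"},                      # e.g., for LB clustering
}

HOST_CAPABILITIES = {
    "hostA": {"controls": {"stateful-fw", "ips-inline", "l3-7-lb"},
              "protocols": {"multicast", "unicast"}},
    "hostB": {"controls": {"stateful-fw"},
              "protocols": {"unicast"}},
}

def placement_ok(manifest: dict, host: str) -> bool:
    """True only if the host satisfies every required control and protocol."""
    caps = HOST_CAPABILITIES[host]
    return (manifest["requires"] <= caps["controls"]
            and manifest["protocols"] <= caps["protocols"])

for host in HOST_CAPABILITIES:
    print(host, "->", "allow" if placement_ok(VM_MANIFEST, host) else "deny")
```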

What does this mean?  It means that ultimately, to ensure their own survival, virtualization and cloud providers will depend less upon virtual appliances and add more of the basic connectivity AND security capabilities into the VMMs themselves, as it’s the only way to guarantee performance, scalability, resilience and satisfy the security requirements of customers. There will be new generations of protocols, APIs and control planes that will emerge to provide for this capability, but this will drive the same old integration battles we’re supposed to be absolved from with virtualization and Cloud.

Connectivity and security vendors will offer virtual replicas of their physical appliances to gain a foothold in virtualized/cloud environments, intercept traffic (think basic traps/ACLs) and then interact with higher-performing physical appliance security service overlays or embedded line cards in service chassis.  This is especially true in enterprises, but poses many challenges in software-only, mass-market cloud environments where what you’ll continue to get is simply basic connectivity and security with limited networking functionality.  This implies more and more security will be pushed into the guest and application logic layers to deal with this disconnect.

This is exactly where we are today with Cloud providers like Amazon Web Services: basic ingress-only filtering with a very simplistic, limited and abstracted set of both connectivity and security capability.  See “Dear Public Cloud Providers: Please Make Your Networking Capabilities Suck Less. Kthxbye”  Will they add more functionality?  Perhaps. The question is whether they can afford to in order to limit the impact that connectivity and security variability/instability can bring to an environment.
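For contrast, this is roughly the entirety of what a consumer can express there today: ingress rules on a security group. The sketch below uses the boto library’s EC2 API from memory, so treat the exact call signatures as approximate.

```python
# Roughly the full extent of AWS' consumer-facing network security model at
# the time of writing: ingress-only security group rules. boto EC2 API from
# memory; treat exact signatures as approximate.

import boto

conn = boto.connect_ec2()  # credentials picked up from the environment

web = conn.create_security_group("web", "web tier")
# Allow inbound HTTP from anywhere. No egress policy, no inspection,
# no topology: that's the whole "firewall".
web.authorize(ip_protocol="tcp", from_port=80, to_port=80, cidr_ip="0.0.0.0/0")
```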

That said, it’s certainly achievable, if you are willing and able, to construct a completely software-based networking environment, but such environments require a complete approach and stack re-write, along with an operational expertise that will be hard to support for those who have spent the last 20 years working in a different paradigm, and that’s a huge piece of this problem.

The connectivity layer — however integrated into the virtualized and cloud environments they seem — continues to limit how and what the security layers can do and will for some time, thus limiting the uptake of virtual network and security appliances.

Situation normal.

/Hoff
