Back To The Future: Network Segmentation & More Moaning About Zoning

A Bit Of Context…


The last 3 years have been very interesting when engaging with large enterprises and service providers as they set about designing, selecting and deploying their “next generation” network architecture. These new networks are deployed in timescales that see them collide with disruptive innovation such as fabrics, cloud, big data and DevOps.

In most cases, these network platforms must account for the nuanced impact of virtualized design patterns, refreshes of programmatic architecture and languages, and the operational model differences these things introduce.  What’s often apparent is that no matter how diligent the review, by the time these platforms are chosen, many tradeoffs are made — especially when it comes to security and compliance — and we arrive at the old adage: “You can get fast, cheap or secure…pick two.”

…And In the Beginning, There Was Spanning Tree…

The juxtaposition of ever-flatter physical networks, née “fabrics” (compute, network and storage), with the flip-flop between belief systems and architects pushing for either layer 2 or layer 3 segmentation (or encapsulated versions thereof) at the higher layers is aggravated further by the continued push for security boundary definition, which yields yet more segmentation based on policy at the application and information layers.

So what we end up with are the benefits of flatter, any-to-any connectivity at the physical networking layer, with a “software defined” and virtualized networking context floating both alongside it (Nicira, BigSwitch, OpenFlow) and atop it (VMware, Citrix, OpenStack Quantum, etc.), with a bunch of protocols ladled on like some protocol gravy blanketing the Chicken Fried Steak that represents the modern data center.

Oh!  You Mean the Cloud…

Now, there are many folks who don’t approach it this way, and instead abstract away much of what I just described.  In Amazon Web Services’ case as a service provider, they dumb down the network sufficiently and control the abstracted infrastructure to the point that “flatness” is the only thing customers get; if you’re going to run your applications atop it, you must keep things simple and programmatic in nature or risk introducing unnecessary complexity into the “software stack.”

The customers who depend upon these simplified networking services must then absorb the gaps introduced by the lack of features, either by architecturally engineering around them (becoming more automated, instrumented and programmatic in nature) or by adding yet another layer of virtualized (and generally encrypted) transport and execution above them.

This works if you’re able to engineer your way around these gaps (or make them less relevant), but generally this is where segmentation becomes an issue, due to security and compliance design patterns which depend on the “complexity” introduced by the very flexible networking constructs available in most enterprise or SP networks.

It’s like a layered cake that keeps self-frosting.

Software Defined Architecture…

You can see the extreme opportunity for Software Defined *anything* then, can’t you? With SDN, let the physical networks NOT be complex but rather simple and flat, and then unify the orchestration, traffic steering, service insertion and (even) security capabilities of the physical and virtual networks AND the virtualization/cloud orchestration layers (from the networking perspective) into a single intelligent control plane…

That’s a big old self-frosting cake.

Basically, this is what AWS has done…but all the intelligence that would be provided by that single pane of glass is currently left up to the app owners atop it.  That’s the downside.  Sufficiently enlightened AWS customers are generally aware of this and understand the balance of benefits and limitations of this path.

In an enterprise environment, however, it’s a timing game between the controller vendors, the virtualization/cloud stack providers, the networking vendors and the security vendors…each trying to offer up this capability either as an “integrated” capability or as an overlay…all under the watchful eye of the auditor, who is generally unmotivated, uneducated and unnerved by all this new technology — especially since the compliance frameworks and regulatory elements aren’t designed to account for these dramatic shifts in architecture or operation (let alone the threat curve of advanced adversaries).

Back To The Future…Hey, Look, It’s Token Ring and DMZs!

As I sit with these customers who build these nextgen networks, the moment segmentation comes up, the elegant network and application architectures rapidly crumble into piles of asset-based rubble as what happens next borders on the criminal…

Thanks to compliance initiatives — PCI is a good example — no matter how well scoped, those flat networks become more and more logically hierarchical.  Because SDN is still nascent and we’re lacking that unified virtualized network (and security) control plane, we end up reverting to platform-specific, “less flat” network architectures in both the physical and virtual layers to achieve “enclave”-like segmentation.

But with virtualization the problem gets more complex: in an attempt to be agile and cost efficient, and in order to bring data to the workloads and avoid the heavy lifting of the opposite approach, out-of-scope assets can often (and suddenly) become co-resident with in-scope assets…traversing logical and physical constructs in ways that make it much more difficult to threat model, since the level of virtualized context support differs wildly across these layers.

Architects are then left to think how they can effectively take all the awesome performance, agility, scale and simplicity offered by the underlying fabrics (compute, network and storage) and then layer on — bolt on — security and compliance capabilities.

What they discover is that it’s very, very, very platform specific…which is why we see protocols such as VXLAN and NVGRE pop up to deal with them.
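
For what it’s worth, the reason VXLAN helps here is mechanical: its 8-byte header carries a 24-bit VNI (roughly 16 million segment IDs) over ordinary UDP/IP, so segmentation context can ride across a flat, routed fabric. A tiny illustrative sketch of that header:

```python
# Why VXLAN shows up here: the 24-bit VNI in its 8-byte header carries the
# "segment" identity across a flat, routed (UDP/IP) fabric, so millions of
# logical segments can ride on top of simple L3 transport.
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned destination port for VXLAN

def vxlan_header(vni: int) -> bytes:
    # Flags byte 0x08 sets the "I" bit (VNI is valid); reserved fields are zero.
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!II", 0x08 << 24, vni << 8)

hdr = vxlan_header(vni=5001)
print(len(hdr), hdr.hex())   # 8 bytes; VNI 5001 == 0x001389
```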

Lego Blocks and Pig Farms…

These architects then replicate the design patterns with which they are familiar and start to craft DMZs that are logically segmented in the physical network and then grafted onto the virtual.  So we end up relying on what Gunnar Peterson and I refer to as the “SSL and Firewall” lego block…we front-end collections of “layer 2 connected” assets based on criticality or function, many of which are stretched across these fabrics, and locate them behind layer 3 “firewalls” which provide basic zone-based isolation and often VPN connectivity between “trusted” groups of other assets.
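
Reduced to data, the pattern looks something like the sketch below; the zone names, ports and rules are invented purely for illustration:

```python
# A rough sketch of the "SSL and Firewall lego block": assets bucketed into zones
# by criticality or function, with coarse zone-to-zone rules enforced at a layer 3
# chokepoint. Zone names, hosts and ports are made up for illustration.
ZONES = {
    "dmz":  ["web01", "web02"],   # internet-facing, SSL-terminating front end
    "app":  ["app01", "app02"],
    "data": ["db01"],             # the in-scope / most critical assets
}

# Coarse, zone-based isolation: (src_zone, dst_zone) -> allowed TCP ports
ZONE_POLICY = {
    ("dmz", "app"):  {8443},
    ("app", "data"): {5432},
}

def is_allowed(src_zone: str, dst_zone: str, port: int) -> bool:
    # Everything not explicitly allowed between zones is dropped at the firewall.
    return port in ZONE_POLICY.get((src_zone, dst_zone), set())

print(is_allowed("dmz", "app", 8443))   # True
print(is_allowed("dmz", "data", 5432))  # False: must traverse the app tier
```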

In short, rather than build applications that securely authenticate and communicate — or, worse yet, even when they do — we pigpen our corralled assets and make our estate fatter instead of flatter.  It’s really a shame.

I’ve made the case in my “Commode Computing” presentation that one of the very first things that architects need to embrace is the following:

…by not artificially constraining the way in which we organize, segment and apply policy (i.e. “put it in a DMZ”) we can think about how design “anti-patterns” may actually benefit us…you can call them what you like, but we need to employ a better methodology for “zoning.”

These trust zones or enclaves are reasonable in concept so long as we can ultimately further abstract their “segmentation” and abstract the security and compliance policy requirements by expressing policy programmatically and taking the logical business and functional use-case PROCESSES into consideration when defining, expressing and instantiating said policy.

You know…understand what talks to what and why…
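
A minimal sketch of what expressing that programmatically might look like: policy declared against logical application roles and the business process that justifies each flow, compiled later into whatever the platform can actually enforce (all names here are hypothetical):

```python
# Sketch of "what talks to what and why" as declarative policy: flows are stated
# against logical roles plus a business justification, then compiled down to
# platform constructs (VLAN, VXLAN, security group, firewall rule) separately.
ALLOWED_FLOWS = [
    # (source role, destination role, port, business reason)
    ("storefront",    "order-service",   8443, "customer checkout process"),
    ("order-service", "payment-gateway",  443, "PCI-scoped card authorization"),
    ("order-service", "inventory-db",    5432, "stock decrement on sale"),
]

def compile_to_enforcement(flows):
    # Placeholder "compiler": emit one platform-agnostic rule per justified flow;
    # a real system would render these into per-platform enforcement.
    return [{"allow": f"{src} -> {dst}:{port}", "reason": why}
            for src, dst, port, why in flows]

for rule in compile_to_enforcement(ALLOWED_FLOWS):
    print(rule)
```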

A great way to think about this problem is to apply the notion of application mobility — without VM containers — and how one would instantiate a security “policy” in that context.  In many cases, as we march up the stack to distributed platform application architectures, we’re not able to depend upon the “crutch” that hypervisors or VM packages have begun to give us in legacy architectures that have virtualization grafted onto them.
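
One hedged illustration of that idea: a policy decision keyed off workload identity and labels rather than network location, so the policy travels with the application as it moves (the names and scheme are entirely made up):

```python
# Sketch of "policy follows the application" without leaning on the hypervisor:
# the decision keys off workload identity and labels, not where the workload
# happens to be running. Entirely illustrative.
from dataclasses import dataclass, field

@dataclass
class Workload:
    identity: str                      # e.g. a signed service identity, not an IP
    labels: set = field(default_factory=set)
    location: str = "unknown"          # host / AZ / datacenter; deliberately unused below

def may_connect(client: Workload, server: Workload) -> bool:
    # Policy: only workloads labeled for the same business process may talk,
    # regardless of which host, zone, or cloud either one has moved to.
    return bool(client.labels & server.labels & {"checkout-process"})

web = Workload("svc://acme/web", {"checkout-process"}, location="dc-east/rack-12")
db  = Workload("svc://acme/db",  {"checkout-process"}, location="aws/us-east-1a")
print(may_connect(web, db))   # True, even though they live in different fabrics
```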

Since many enterprises are now just starting to better leverage their virtualized infrastructure, there *are* some good solutions (again, platform specific) that unify the physical and virtual networks from a zoning perspective, but the all-up process-driven, asset-centric (app & information) view of “policy” is still woefully lacking, especially in heterogeneous environments.

Wrapping Up…

In enterprise and SP environments where we don’t have the opportunity to start anew, it often feels like we’re so far off from this sort of capability because it requires a shift that makes software defined networking look like child’s play.  Most enterprises don’t do risk-driven, asset-centric, process-mapped modelling (and SPs are largely disconnected from this), so segmentation falls back to what we know: DMZs with VLANs, NAT, firewalls, SSL, and new protocol band-aids invented to cover gaping arterial wounds.

In environments lucky enough to think about and match the application use cases with the highly-differentiated operational models that virtualized *everything* brings to bear, it’s here today — but be prepared and honest: the vendor(s) you choose must be strategic, and the interfaces between those platforms and external entities VERY well defined…else you risk software defined entropy.

I wish I had more than the 5 minutes it took to scratch this out because there’s SO much to talk about here…

…perhaps later.

  1. Dmitri Kalintsev
    July 16th, 2012 at 13:38 | #1

    So, if I’m reading this right, you’re talking about a distributed “Little Snitch” (application firewall) with a centralised policy engine? 😉

  2. July 17th, 2012 at 17:52 | #2

    Lots to comment on, but specifically with respect to defining boundaries through segmentation and the need to abstract those boundaries and allow them to adapt to changes in the network, applications and services: I agree this is a critical need that is difficult to meet with existing solutions; however, flexible segmentation can be accomplished by separating policy management from policy enforcement and providing a connectionless encryption+authentication network layer that connects the things that should be connected and blocks everything else. Connectedness becomes a matter of policy: group encryption (where members of the same group get their own shared key) connects that which should be connected while cryptographically separating everything else. Logical separation via VLAN, VXLAN, etc. attempts this, but while it may achieve segmentation if the policy is properly enforced throughout the network, security is missing or unnecessarily complex in environments where assets span servers, racks, availability zones or data centers. These connections and switching points are potentially vulnerable even with logical separation, and in each case, if you rely on network separation, your policy may need to be aware of all of these boundaries so that you can define the right behavior for each boundary.

    Connectionless encryption turns this upside down:
    * Connectivity is defined centrally by policy, and policy definition can be separated from the enforcement point, which means “what talks to what” is centrally defined but locally enforced. Because it is centrally defined, it can be connected via APIs to other automation and orchestration tools.
    * Abstracts the network – policies don’t need to be aware of boundaries between servers, racks, availability zones and data centers, and you don’t need to define specific policies for these boundaries, because everything in the group is reachable over a transparent encrypted mesh – policy specifies group membership, and encryption+authentication takes care of everything else.
    * Relies on cryptographic isolation so that a policy failure of one transit node in the network does not expose the information – with logical separation all of the nodes have access to unprotected information, so any misconfiguration or policy slip could result in a breach. While encryption must be enforced correctly at the edges, cryptographic isolation doesn’t rely on intermediate transit nodes to enforce isolation policy. So the problem is still to provide accurate policy enforcement, but it’s a smaller and simpler problem because the number of potential points of exposure is smaller.
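
    A toy sketch of the group-key model described above: central policy assigns workloads to groups, group members share a key, and “connectivity” falls out of being able to decrypt, so transit nodes and non-members see only ciphertext (Fernet stands in for the real wire-level scheme; all names are illustrative):

    ```python
    # Toy sketch: central policy assigns group membership; members share a key;
    # enforcement is local because a node outside the group simply cannot decrypt.
    # Fernet is a stand-in for whatever wire-level scheme is actually used.
    from typing import Optional
    from cryptography.fernet import Fernet, InvalidToken

    # Centrally defined policy, pushed to (locally enforcing) nodes.
    GROUP_KEYS = {"pci-cardholder": Fernet.generate_key()}
    MEMBERSHIP = {"order-service": "pci-cardholder", "payment-svc": "pci-cardholder"}

    def send(src: str, payload: bytes) -> bytes:
        return Fernet(GROUP_KEYS[MEMBERSHIP[src]]).encrypt(payload)

    def receive(dst: str, ciphertext: bytes) -> Optional[bytes]:
        try:
            return Fernet(GROUP_KEYS[MEMBERSHIP[dst]]).decrypt(ciphertext)
        except (KeyError, InvalidToken):
            return None   # not in the group -> unreadable, regardless of network path

    msg = send("order-service", b"PAN data")
    print(receive("payment-svc", msg))       # b'PAN data'
    MEMBERSHIP["rogue-box"] = "other-group"  # group with no key distributed
    print(receive("rogue-box", msg))         # None
    ```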

  3. July 18th, 2012 at 05:38 | #3

    You missed any discussion of enforcement or compliance. That is, if I determine that an application is to receive a specific security posture or policy, how can I reliably enforce that? Today, we use physical controls to prevent operational ignorance or wilful bypass, but in a dynamic cloud, applying equivalent controls remains impossible. And the auditing is costly.

    These problems are contained in the current setup and not handled in software cloud platforms. Except as indistinct promises. Edification on overcoming these challenges?

  4. Donny Parrott
    July 23rd, 2012 at 12:35 | #4

    @EtherealMind

    Can you please identify these physical controls? Is it the separate physical assets within the datacenter? I fail to see how physical controls can prevent a faulty firewall policy or port assignment.

    I do see a trend toward enforcement and compliance moving toward the end node and away from the core systems. These solutions are mobile and can travel with the “VM” while maintaining configuration and operational compliance. However, these solutions do not address the network layer due to the viewpoint that the network is becoming a bus/highway.

    Again, I would like to understand the specifics around the enforcement and compliance that are addressable through software solutions.

  5. beaker
    January 27th, 2013 at 20:38 | #5

    @EtherealMind

    I didn’t see this comment…and it’s embarrassing to respond to it months later, but I will 😉

    No, I didn’t “miss” the issues of enforcement or compliance, I simply left them out.

    Why? Because I’ve written about them hundreds of times and allude to both in the “Pig Farms” section, wherein I refer to my “Commode Computing” presentation that covers this ad nauseam.

    …and besides which, this is a “security” blog, so what else would I be referring to? 🙂
