The Curious Case Of Continuous and Consistently Contiguous Crypto…

Here’s an interesting one: a security architecture and an operational deployment model that is making a comeback:

Requiring VPN tunneled and MITM’d access to any resource, internal or external, from any source internal or external.

While mobile devices (laptops, phones and tablets) are often deployed with client or client-less VPN endpoint solutions that enable them to move outside the corporate boundary and still access internal resources, there’s a marked uptake in the requirement that all traffic from all sources traverse VPNs (SSL/TLS, IPsec or both) and that ALL sessions terminate on controlled gateways, regardless of ownership or location of either the endpoint or the resource being accessed.

Put more simply: require VPN for (id)entity authentication, access control, and confidentiality and then MITM all the things to transparently or forcibly fork to security infrastructure.

Why?

The reasons are pretty easy to understand.  Here are just a few of them:

  1. The user experience shouldn’t change regardless of the access modality or location of the endpoint consumer; the notions of who, what, where, when, how, and why matter, but the user shouldn’t have to care
  2. Whether inside or outside, the notion of split tunneling on a per-service/per-application basis means that we need visibility to understand and correlate traffic patterns and usage
  3. Because the majority of traffic is encrypted (usually via SSL), security infrastructure needs the capability to inspect traffic (selectively) using a coverage model that is practical and can give a first-step view of activity
  4. Information exfiltration (legitimate and otherwise) is a problem.

…so how are folks approaching this?

Easy.  They simply require that all sessions terminate on a set of [read: clustered & scalable] VPN gateways, selectively decrypt based on policy, forward (in serial or parallel) to any number of security apparatus, and in some/many cases, re-encrypt sessions and send them on their way.
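
To make that concrete, here’s a rough sketch of the per-session decision such a gateway makes before choosing to decrypt, bypass, or block. It’s plain Python with made-up category names and dispositions, not any vendor’s policy engine:

```python
# Rough sketch of per-session policy at a terminating VPN/TLS gateway.
# Category names and dispositions are illustrative placeholders; a real
# deployment drives this from the vendor's policy engine, not hand-rolled code.

from dataclasses import dataclass


@dataclass
class Session:
    user: str          # identity asserted during VPN authentication
    src_zone: str      # "internal", "remote", "partner", ...
    dst_host: str      # destination name (e.g. from SNI)
    dst_category: str  # e.g. "saas", "banking", "healthcare", "unknown"


# Categories deliberately NOT decrypted (privacy/regulatory carve-outs).
BYPASS_CATEGORIES = {"banking", "healthcare"}
# Destinations refused outright, decrypted or not.
BLOCKED_CATEGORIES = {"known-c2"}


def disposition(s: Session) -> str:
    """Return 'block', 'bypass' (re-encrypt untouched), or 'decrypt'
    (terminate, hand plaintext to inspection gear in serial/parallel,
    then re-encrypt and forward)."""
    if s.dst_category in BLOCKED_CATEGORIES:
        return "block"
    if s.dst_category in BYPASS_CATEGORIES:
        return "bypass"
    # Everything else gets terminated and inspected, regardless of whether
    # the source and destination are inside or outside.
    return "decrypt"


if __name__ == "__main__":
    print(disposition(Session("alice", "internal", "intranet.example", "unknown")))  # decrypt
    print(disposition(Session("bob", "remote", "mybank.example", "banking")))        # bypass
```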

We’ve been doing this “forever” with the “outside-in” model (remote access to internal resources), but the notion that folks are starting to do this ubiquitously on internal networks is the nuance.  AVC (application visibility and control) is the inside-out component (usually using transparent forward proxies with trusted PAC files on endpoints) with remote access and/or reverse proxies like WAFs and/or ADCs as the outside-in use case.

These two ops models were generally viewed and managed as separate problems.  Now, thanks to Cloud, Mobility, virtualization and BYOE (bring your own everything), as well as a more skilled and determined set of adversaries, we’re seeing a convergence of the two.  To make the “inside-out” and “outside-in” more interesting, what we’re really talking about here is extending the use case to include “inside-inside” if you catch my drift.

Merging the use case approach at a fundamental architecture level can be useful; this methodology works regardless of source or destination.  It does require all sorts of incidental changes to things like IdM, AAA, certificate management, etc. but it’s one way that folks are trying to centralize the distributed — if you get what I mean.

I may draw a picture to illustrate what I mean, but do let me know if you’re doing this (many of the largest customers I know are) and whether it makes sense.

/Hoff

P.S. Remember back in the 80’s/90’s when 3Com bundled NIC cards with integrated IPSec VPN capability?  Yeah, that.

  1. David Mortman
    August 9th, 2013 at 06:20 | #1

    I like this trend. It addresses large chunks of the crunchy on the outside soft and squishy on the inside problem with lots of (most?) networks. There’s no good reason these days (where these days has been the last what 15 years at least?) to trust the internal network any more than the external network for anything other than quality of service right? It’s kind of cool to see the continuing progress of the predictions of the Jericho Forum from 10 years ago as well.

  2. Mike
    August 9th, 2013 at 06:27 | #2

    Chris, while the capability to tunnel all traffic back to the enterprise before forwarding to the ultimate destination (internal, external) has existed in VPN products since the mid-90’s, the main reason cited for not using it (in survey research way back when and in talks with IT folks) was that the performance hit for the user–having to wait while the traffic trombones through the VPN gateway–was unacceptable, and when the decision was made for split-tunneling it was invariably driven by reducing help desk calls because of slow Internet access.

    Do you find that is the case or what reasons have you heard for split-tunneling?

    • beaker
      August 9th, 2013 at 07:58 | #3

      Hey Mike.

      That may very well have been the case in years past, but with generally decent high-speed connectivity, caching, and client-side performance increases thanks to faster processors, etc., it doesn’t seem to be that much of an impediment any longer.

      Also, since so much infrastructure support traffic like DNS is now suspect, and exfiltration and C&C channels across common protocols are so pervasive, the only way folks often see to get a handle on this is using the method I discuss.
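
      As a toy illustration of the kind of check that visibility enables (not any product’s detection logic, and the thresholds below are arbitrary), think of flagging the long, high-entropy DNS labels typical of tunneling:

```python
# Toy heuristic for spotting DNS tunneling/exfiltration once you can
# actually see the queries. Thresholds are arbitrary; real detection
# uses far more signal (query volume, NXDOMAIN rates, client history).

import math
from collections import Counter


def shannon_entropy(label: str) -> float:
    counts = Counter(label)
    total = len(label)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())


def looks_like_tunneling(qname: str, max_label_len: int = 40,
                         entropy_threshold: float = 3.5) -> bool:
    labels = qname.rstrip(".").split(".")
    return any(len(label) > max_label_len or
               (len(label) > 12 and shannon_entropy(label) > entropy_threshold)
               for label in labels)


print(looks_like_tunneling("www.example.com"))                       # False
print(looks_like_tunneling("4e6f2b9a1c8d7e3f5a0b.badhost.example"))  # True
```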

      It does place additional burden on staff for connectivity, troubleshooting, etc. (as I allude) but it also offers many benefits.

      Also, keep in mind that while the capability to essentially turn off split-tunneling from the outside has existed forever, the notion of internal VPNs is not something that has gained much traction. As David alludes in his comment, treating the “inside” as just as hostile as the “outside” is driving this model.

      Wait until you see how I map this to SDN…tunnels, tunnels everywhere…it’s the revenge of PKI and VPNs…again 🙂

      /Hoff

  3. August 9th, 2013 at 07:50 | #4

    We are very much in favor of this trend here in China. We feel that the need for policy implemented at the router level is very high, and encourage everyone to feel safe putting all their eggs in high quality routers, like ones made by Cisco or Huawei. It’s very hard to break into those, after all. Very, very tricky. Would require far more persistence than we have.

  4. Mike
    August 9th, 2013 at 08:31 | #5

    Yeah, one of the first use cases of SDN (encapsulation, TRILL/SPB, pick your poison) I thought of was client isolation and control. The idea of NAC without all those pesky appliances, whacky control methods like DHCP and VLAN scope steering, and what not. SDN in the campus can actually get network control down to a per-user, per-device, and perhaps per-application level (think of the integration NEMs are doing with Microsoft Lync to identify applications that would otherwise be opaque).

    Looking forward to your follow up.

  5. Kenn
    August 9th, 2013 at 14:18 | #6

    Hoff,
    Great post. I’m aware of DLP appliances like Palo Alto Networks that do silent MITM SSL inspection & logging, and your post implies this is now a fairly standard (?) option in the enterprise. But one thing I’ve never quite grokked is how orgs deal with watching the watchers.
    That is, how do people deal with, say, financial/banking traffic or other sensitive content? I get the need for certain AVC, but doesn’t having 100% visibility at certain (logged/reviewable) choke points, by definition, break many of our assumptions about non-repudiation, etc., around encrypted architectures?
    In other words, having people in an organization able to see plaintext of, say, arbitrary TLS passwords, Controller wire transfers, insurance claims, M&A IP, and so on just seems to imply all sorts of complications.
    (Forgive me if this is naïve/old hat in the F500 world – I’ve just never seen it addressed in discussions such as these).

    Cheers.

  6. TheDarkSide
    August 12th, 2013 at 06:18 | #7

    @Kenn

    Yes, this! The statement above, “There’s no good reason these days (where these days has been the last what 15 years at least?) to trust the internal network any more than the external network”, is critical.

    Server/data administrators have challenged the network-centric security model, and specifically MITM implementations, for years. Yet, when they mention private key pairs and encryption between servers, the network & security teams fight vehemently against this.

    So, if all networks are to be “untrusted”, those responsible for the data must develop a communications method whereby no one but the sender/receiver is privy to the contents.

    Conversely, what if the VPN were terminated at the host and signed by a user certificate? Web hosts, RDP, etc. would all require encrypted communications signed by a trusted certificate. There are risks (portability of the certificate), but it might allow better tracking of individual activity.
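
    As a rough sketch of what I mean (Python’s standard ssl module, with placeholder file names standing in for whatever the internal PKI issues), “terminate at the host and require a user certificate” is essentially mutual TLS:

```python
# Rough sketch: a host-terminated TLS listener that refuses any client
# not presenting a certificate issued by the internal CA. File names
# (server.pem, server.key, ca.pem) are placeholders for the PKI's artifacts.

import socket
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(certfile="server.pem", keyfile="server.key")
ctx.load_verify_locations(cafile="ca.pem")  # internal CA that issued the user certs
ctx.verify_mode = ssl.CERT_REQUIRED         # no client certificate, no connection

with socket.create_server(("0.0.0.0", 8443)) as listener:
    with ctx.wrap_socket(listener, server_side=True) as tls_listener:
        conn, addr = tls_listener.accept()
        # The verified user certificate ties the session to an individual,
        # which is where the "better tracking of individual activity" comes from.
        print(addr, conn.getpeercert().get("subject"))
        conn.close()
```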

    I think the bigger issue is the losing battle of technology against morality. Humankind is devious and nefarious. For every increase in technology to combat these security risks, another gap arises. The only remedy for that gap is morality. So, the law of diminishing returns applies. But, to this end, discipline would greatly cut down on these infractions. Firing, jail time, hangings (j/k) would cause many to steer clear “just in case”.

  7. Dave Walker
    August 12th, 2013 at 12:09 | #8

    This approach continues to be pushed by Sun^WOracle, and has been since the UltraSPARC T series got all those spiffy crypto accelerator cores.

    As other commenters mention, one of the issues is what NIDS can do in such an environment; some will take copies of keys and do the necessary decryption on the fly.

    Of course, the biggest issue is minting and managing all the necessary keys. To build and run this kind of architecture properly, you really need to have your own internal PKI. Back when infrastructure was (almost) all hosted internally, this would just be overhead; Cloud changes the rules, of course, especially if you have different bits of your infrastructure being served up from different providers (such as having the DB which backs the web and app servers in a different cloud from a different provider).

    “Managing the necessary keys” also gets to be fun, when it comes to getting appropriate secrets to appropriate environments at provisioning time, or (the flipside of the same problem) ascribing trustworthiness to public keys in self-signed certs minted locally on the provisioned environment, in order to avoid having to ship private keys around in one form or another. I haven’t cracked this problem, yet.
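
    For illustration only (a rough Python sketch with a placeholder host name and pin value), one direction is fingerprint pinning: let the provisioned environment mint its self-signed cert locally, record the cert’s fingerprint over the already-authenticated provisioning channel, and have clients pin that value, though this really just relocates the secure-distribution problem to the pin itself:

```python
# Rough sketch of pinning a locally-minted, self-signed certificate by its
# SHA-256 fingerprint instead of shipping private keys around. The host
# name and EXPECTED_PIN are placeholders; the real pin would be recorded
# out-of-band at provisioning time.

import hashlib
import ssl


def remote_cert_sha256(host: str, port: int) -> str:
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest()


EXPECTED_PIN = "0" * 64  # placeholder; captured via the provisioning channel

observed = remote_cert_sha256("app.internal.example", 8443)
if observed != EXPECTED_PIN:
    raise RuntimeError("certificate pin mismatch: " + observed)
```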

  8. stefan avgoustakis
    August 13th, 2013 at 05:33 | #9

    I agree with the requirement that the inside network should not be trusted and that confidentiality of traffic should be a requirement – however – it’s hard to see how this would work in any organisation today where QoS, VoIP, low-latency communications, etc. are required.

    I don’t really see VPN gateways scaling to the number of connections required within large organisations, and what about security enforcement? Are you really going to direct all traffic through these gateways?

    I would argue that a technology such as IEEE 802.1AE is much more scalable and provides the same outcome when it comes down to providing confidentiality – furthermore, it also allows the network infrastructure to retain visibility and security policy enforcement capability, as the “tunnels” are not client-to-gateway but “hop-to-hop” (http://en.wikipedia.org/wiki/IEEE_802.1AE).
