Archive

Archive for the ‘Identity Management’ Category

The Curious Case Of Continuous and Consistently Contiguous Crypto…

August 8th, 2013 9 comments

Here’s an interesting resurgence of a security architecture and an operational deployment model that is making a comeback:

Requiring VPN tunneled and MITM’d access to any resource, internal or external, from any source internal or external.

While mobile devices (laptops, phones and tablets) are often deployed with client or client-less VPN endpoint solutions that enable them to move outside the corporate boundary and access internal resources, there’s a marked uptake in the requirement that ALL traffic from ALL sources utilize VPNs (SSL/TLS, IPsec or both) and that ALL sessions terminate on corporate VPN gateways, regardless of ownership or location of either the endpoint or the resource being accessed.

Put more simply: require a VPN for (id)entity authentication, access control, and confidentiality, and then MITM all the things in order to transparently or forcibly fork traffic to security infrastructure.

Why?

The reasons are pretty easy to understand.  Here are just a few of them:

  1. The user experience shouldn’t change regardless of the access modality or location of the endpoint consumer; the notion of who, what, where, when, how, and why matter, but the user shouldn’t have to care
  2. Whether inside or outside, the notion of split tunneling on a per-service/per-application basis means that we need visibility to understand and correlate traffic patterns and usage
  3. Because the majority of traffic is encrypted (usually via SSL,) security infrastructure needs the capability to inspect traffic (selectively) using a coverage model that is practical and can give a first-step view of activity
  4. Information exfiltration (legitimate and otherwise) is a problem.

…so how are folks approaching this?

Easy.  They simply require that all sessions terminate on a set of [read: clustered & scalable] VPN gateways, selectively decrypt based on policy, forward (in serial or parallel) to any number of security apparatus, and in some/many cases, re-encrypt sessions and send them on their way.
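To make that concrete, here’s a minimal, illustrative sketch in Python (my own toy example; the category names, inspection chain, and handler logic are hypothetical and not any vendor’s API) of the per-session decision such a gateway cluster might make:

```python
# Illustrative per-session policy on a VPN/TLS termination gateway:
# selectively decrypt based on policy, hand cleartext to inspection
# services in serial, then re-encrypt and forward.

from dataclasses import dataclass

@dataclass
class Session:
    src: str
    dst_category: str      # e.g. from a URL/IP categorization feed
    payload: bytes

DECRYPT_EXEMPT = {"healthcare", "banking"}   # hypothetical privacy carve-outs
INSPECTION_CHAIN = ("dlp", "ips", "av")      # security apparatus, applied in serial

def inspect(service: str, cleartext: bytes) -> str:
    # Stand-in for forwarding cleartext to a real DLP/IPS/AV device.
    return "block" if service == "dlp" and b"SSN:" in cleartext else "allow"

def handle(session: Session) -> str:
    if session.dst_category in DECRYPT_EXEMPT:
        return "forwarded-encrypted"         # policy says: don't MITM this one
    for service in INSPECTION_CHAIN:         # serial hand-off; could be parallel
        if inspect(service, session.payload) == "block":
            return f"dropped-by-{service}"
    return "re-encrypted-and-forwarded"

print(handle(Session("10.1.1.5", "saas", b"quarterly report, SSN: 078-05-1120")))
```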

We’ve been doing this “forever” with the “outside-in” model (remote access to internal resources,) but the notion that folks are starting to do this ubiquitously on internal networks is the nuance.  AVC (application visibility and control) is the inside-out component (usually using transparent forward proxies with trusted PAC files on endpoints) with remote access and/or reverse proxies like WAFs and/or ADCs as the outside-in use case.

These two ops models were generally viewed and managed as separate problems.  Now thanks to Cloud, Mobility, virtualization and BYOE (bring your own everything) as well as the more skilled and determined set of adversaries, we’re seeing a convergence of the two.  To make the “inside-out” and “outside-in” more interesting, what we’re really talking about here is extending the use case to include “inside-inside” if you catch my drift.

Merging the use case approach at a fundamental architecture level can be useful; this methodology works regardless of source or destination.  It does require all sorts of incidental changes to things like IdM, AAA, certificate management, etc. but it’s one way that folks are trying to centralize the distributed — if you get what I mean.

I may draw a picture to illustrate what I mean, but do let me know whether you’re doing this (many of the largest customers I know are) and whether it makes sense.

/Hoff

P.S. Remember back in the 80’s/90’s when 3Com bundled NIC cards with integrated IPSec VPN capability?  Yeah, that.


The Soylent Green of “Epic Hacks” – It’s Made of PEOPLE!

August 7th, 2012 3 comments

Allow me to immediately state that I am, in no way, attempting to blame or shame the victim in my editorial below.

However, the recent rash of commentary from security wonks on Twitter and blogs regarding who is to “blame” in Mat Honan’s unfortunate experience leaves me confused and misses an important point.

Firstly, the title of the oft-referenced article documenting the series of events is at the root of my discontent:

How Apple and Amazon Security Flaws Led to My Epic Hacking

As I tweeted, my assessment and suggestion for a title would be:

How my poor behavior led to my epic hacking & flawed trust models & bad luck w/Apple and Amazon assisted

…especially when coupled with what is clearly an admission by Mr. Honan, that he is, fundamentally, responsible for enabling the chained series of events that took place:

In the space of one hour, my entire digital life was destroyed. First my Google account was taken over, then deleted. Next my Twitter account was compromised, and used as a platform to broadcast racist and homophobic messages. And worst of all, my AppleID account was broken into, and my hackers used it to remotely erase all of the data on my iPhone, iPad, and MacBook.

In many ways, this was all my fault. My accounts were daisy-chained together. Getting into Amazon let my hackers get into my Apple ID account, which helped them get into Gmail, which gave them access to Twitter. Had I used two-factor authentication for my Google account, it’s possible that none of this would have happened, because their ultimate goal was always to take over my Twitter account and wreak havoc. Lulz.

Had I been regularly backing up the data on my MacBook, I wouldn’t have had to worry about losing more than a year’s worth of photos, covering the entire lifespan of my daughter, or documents and e-mails that I had stored in no other location.

Those security lapses are my fault, and I deeply, deeply regret them.

The important highlighted snippets above are obscured by the salacious title and by the bulk of the article, which focuses on the services — which he enabled and relied upon, however flawed certain components of that trust and process may have been — as though they are *really* at the center of the debate here.  Or ought to be.

There’s clearly a bit of emotional transference occurring.  It’s easier to associate causality with a faceless big corporate machine rather than swing the light toward the victim, even if he, himself, self-identifies.

Before you think I’m madly defending and/or suggesting that there weren’t breakdowns with any of the vendors — especially Apple — let me assure you I am not.  There are many things that can and should be addressed here, but leaving out the human element, the root of it all here, is dangerous.

I am concerned that as a community there is often an air of suggestion that consumers are incapable and inculpable with respect to understanding the risks associated with the clicky-clicky-connect syndrome that all of these interconnected services bring.

People give third party applications and services unfettered access to services like Twitter and Facebook every day — even when messages surrounding the potential incursion of privacy and security are clearly stated.

When something does fail — and it does and always will — we vilify the suppliers (sometimes rightfully so for poor practices) but we never really look at what we need to do to prevent having to see this again: “Those security lapses are my fault, and I deeply, deeply regret them.”

The more interconnected things become, the more dependent we shall be upon flawed trust models and the expectation that users aren’t responsible.

This is the point I made in my presentations: Cloudifornication and Cloudinomicon.

There’s a lot of interesting discussion regarding the effectiveness of security awareness training.  Dave Aitel started a lively one here: “Why you shouldn’t train employees for security awareness.”

It’s unfortunate that the only real way people learn is through misfortune; any way you look at it, that’s what drives awareness.

There are many lessons we can learn from Mr. Honan’s unfortunate experience…I urge you to focus less on blaming one link in the chain and instead guide the people you can influence to reconsider decisions of convenience in light of the potential tradeoffs they incur.

/Hoff

P.S. For you youngsters who don’t get the Soylent Green reference, see here.  Better yet, watch it. It’s awesome. Charlton Heston, FTW.



Quick Ping: VMware’s Horizon App Manager – A Big Bet That Will Pay Off…

May 17th, 2011 2 comments

It is so tempting to write about VMware‘s overarching strategy of enterprise and cloud domination, but this blog entry really speaks to an important foundational element in their stack of offerings which was released today: Horizon App Manager.

Check out @Scobleizer’s interview with Noel Wasmer (Dir. of Product Management for VMware) on the ins-and-outs of HAM.

Frankly, federated identity and application entitlement is not new.

Connecting and extending identities from inside the enterprise using native directory services to external applications (SaaS or otherwise) is also not new.

What’s “new” with VMware’s Horizon App Manager is that we see the convergence and well-sorted integration of a service-driven federated identity capability that ties together enterprise “web” and “cloud” (*cough*)-based SaaS applications with multi-platform device mobility powered by the underpinnings of freshly-architected virtualization and cloud architecture.  All delivered as a service (SaaS) by VMware for $30 per user/per year.

[Update: @reillyusa and I were tweeting back and forth about the inside -> out versus outside -> in integration capabilities of HAM.  The SAML Assertions/OAuth integration seems to suggest this is possible.  Moreover, as I alluded to above, solutions exist today which integrate classical VPN capabilities with SaaS offers that provide SAML assertions and SaaS identity proxying (access control) to well-known applications like SalesForce.  Here’s one, for example.  I simply don’t have any hands-on experience with HAM or any deeper knowledge than what’s publicly available to comment further — hence the “Quick Ping.”]
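For flavor, here’s a tiny, illustrative sketch (Python standard library only; the assertion, audience URL, and checks are simplified stand-ins, and this is emphatically not HAM’s API) of the kind of validation a federation broker performs on a SAML assertion before mapping a SaaS-bound user onto an entitlement.  A real implementation must also verify the assertion’s XML signature, which I’ve omitted here.

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}

# A deliberately minimal, hand-rolled example assertion (values are made up).
ASSERTION = """<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
  <saml:Subject><saml:NameID>alice@example.com</saml:NameID></saml:Subject>
  <saml:Conditions NotBefore="2011-05-17T00:00:00Z" NotOnOrAfter="2031-05-17T00:00:00Z">
    <saml:AudienceRestriction>
      <saml:Audience>https://saas.example.com</saml:Audience>
    </saml:AudienceRestriction>
  </saml:Conditions>
</saml:Assertion>"""

def parse_ts(value):
    # SAML timestamps are UTC with a trailing 'Z'
    return datetime.fromisoformat(value.replace("Z", "+00:00"))

def accept(assertion_xml, expected_audience):
    """Return the subject if the assertion is currently valid for this audience."""
    root = ET.fromstring(assertion_xml)
    cond = root.find("saml:Conditions", NS)
    now = datetime.now(timezone.utc)
    in_window = parse_ts(cond.get("NotBefore")) <= now < parse_ts(cond.get("NotOnOrAfter"))
    audience_ok = root.find(".//saml:Audience", NS).text == expected_audience
    subject = root.find(".//saml:NameID", NS).text
    return subject if in_window and audience_ok else None

print(accept(ASSERTION, "https://saas.example.com"))  # -> alice@example.com
```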

Horizon App Manager really is a foundational component that will tie together the various components of VMware’s stack of offers for seamless operation, including such products/services as Zimbra, Mozy, SlideRocket, CloudFoundry, View, etc.  I predict even more interesting integration potential with components such as elements of the vShield suite — providing identity-enabled security policies and entitlement at the edge to provision services in vCloud Director deployments, for example (esp. now that they’ve acquired NeoAccel for SSL VPN integration with Edge.)

“Securely extending the enterprise to the Cloud” (and vice versa) is a theme we’ll hear more and more from VMware.  Whether this means thin clients, virtual machines, SaaS applications, PaaS capabilities, etc., fundamentally what we all know is that for the enterprise to be able to assert control to enable “security” and compliance, we need entitlement.

I think VMware — as a trusted component in most enterprises — has the traction to encourage the growth of their supported applications in their catalog ecosystem which will in turn make the enterprise excited about using it.

This may not seem like it’s huge — especially to vendors in the IAM space or even Microsoft — but given the footprint VMware has in the enterprise and where they want to go in the cloud, it’s going to be big.

/Hoff

(P.S. It *is* interesting to note that this is a SaaS offer with an enterprise virtual appliance connector.  It’s rumored this came from the TriCipher acquisition.  I’ll leave that little nugget as a tickle…)

(P.P.S. You know what I want? I want a consumer version of this service so I can use it in conjunction with or in lieu of 1Password. Please.  Don’t need AD integration, clearly)



Endpoint Security vs. DLP? That’s Part Of the Problem…

March 31st, 2008 6 comments

Larry Walsh wrote something (Defining the Difference Between Endpoint Security and Data Loss Prevention) that sparked an interesting debate based upon a vendor presentation given to him on "endpoint security" by SanDisk.

SanDisk is bringing to market a set of high-capacity USB flash drives that feature built-in filesystem encryption as well as strong authentication and access control.  If the device gets lost with the data on it, it’s "safe and secure" because it’s encrypted.  They are positioning this as an "endpoint security" solution.

I’m not going to debate the merits/downsides of that approach because I haven’t seen their pitch, but suffice it to say, I think it’s missing a "couple" of pieces to solve anything other than a very specific set of business problems.

Larry’s dilemma stems from the fact that he maintains that this capability and functionality is really about data loss protection and doesn’t have much to do with "endpoint security" at all:

We debated that in my office for a few minutes. From my perspective, this solution seems more like a data loss prevention solution than endpoint security. Admittedly, there are many flavors of endpoint security. When I think of endpoint security, I think of network access control (NAC), configuration management, vulnerability management and security policy enforcement. While this solution is designed for the endpoint client, it doesn’t do any of the above tasks. Rather, it forces users to use one type of portable media and transparently applies security protection to the data. To me, that’s DLP.

In today’s market taxonomy, I would agree with Larry.  However, what Larry is struggling with is not really the current state of DLP versus "endpoint security," but rather the future state of converged information-centric governance.  He’s describing the problem that will drive the solution as well as the inevitable market consolidation to follow.

This is actually the whole reason Mogull and I are talking about the evolution of DLP as it exists today to a converged solution we call CMMP — Content Management, Monitoring and Protection. {Yes, I just added another M for Management in there…}

What CMMP represents is the evolved and converged end-state technology integration of solutions that today provide a point solution but "tomorrow" will be combined/converged into a larger suite of services.

Off the cuff, I’d expect that we will see at a minimum the following technologies being integrated to deliver CMMP as a pervasive function across the information lifecycle and across platforms in flight/motion and at rest:

  • Data leakage/loss protection (DLP)
  • Identity and access management (IAM)
  • Network Admission/Access Control (NAC)
  • Digital rights/Enterprise rights management (DRM/ERM)
  • Seamless encryption based upon "communities of interest"
  • Information classification and profiling
  • Metadata
  • Deep Packet Inspection (DPI)
  • Vulnerability Management
  • Configuration Management
  • Database Activity Monitoring (DAM)
  • Application and Database Monitoring and Protection (ADMP)
  • etc…

That’s not to say they’ll all end up as a single software install or network appliance, but rather a consolidated family of solutions from a few top-tier vendors who have coverage across the application, host and network space. 

If you were to look at any enterprise today struggling with this problem, they likely have or are planning to have most of the point solutions above anyway.  The difficulty is that they’re all from different vendors.  In the future, we’ll see larger suites from fewer vendors providing a more cohesive solution.

This really gives us the "cross domain information protection" that Rich talks about.

We may never achieve the end-state described above in its entirety, but it’s safe to say that the more we focus on the "endpoint" rather than the "information on the endpoint," the bigger the problem we will have.

/Hoff

For Data to Survive, It Must ADAPT…

June 1st, 2007 2 comments


Now that I’ve annoyed you by suggesting that network security will over time become irrelevant given lost visibility due to advances in OS protocol transport and operation, allow me to give you another nudge towards the edge and further reinforce my theories with some additionally practical data-centric security perspectives.

If any form of network-centric security solution is to succeed in adding value over time, the mechanics of applying policy and effecting disposition on flows as they traverse the network must be based on content in context.  That means we must get to a point where we can make “security” decisions based upon information and its “value” and classification as it moves about.

It’s not good enough to only make decisions on how flows/data should be characterized and acted on with the criteria being focused on the 5-tuple (header,) signature-driven profiling or even behavioral analysis that doesn’t characterize the content in context of where it’s coming from, where it’s going and who (machine, virtual machine and “user”) or what (application, service) intends to access and consume it.

In the best of worlds, we’d like to be able to classify data before it makes its way through the IP stack and enters the network, and use this metadata as an attached descriptor of the ‘type’ of content that this data represents.  We could do this as the data is created by applications (thick or thin, rich or basic) either using the application itself or by using an agent (client-side) that profiles the data prior to storage or transmission.
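As a trivial, illustrative sketch (in Python, with made-up matching rules; real classifiers are far more sophisticated than a couple of regexes), such an agent could profile content at creation time and wrap it with a descriptor that travels with it:

```python
import json
import re

# Hypothetical classification rules; real profiling would use dictionaries,
# fingerprints and statistical/behavioral analysis rather than two regexes.
RULES = {
    "HIPAA": re.compile(r"\b(patient|diagnosis|mrn)\b", re.IGNORECASE),
    "PCI": re.compile(r"\b\d{4}(?:[ -]\d{4}){3}\b"),  # crude card-number pattern
}

def classify(text):
    labels = [label for label, rx in RULES.items() if rx.search(text)]
    return labels or ["public"]

def wrap(text):
    """Attach the classification as metadata that follows the content around."""
    return json.dumps({"classification": classify(text), "content": text})

print(wrap("Patient MRN 12345 paid with card 4111-1111-1111-1111"))
```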

Since I’m on my Jericho Forum kick lately, here’s how they describe how data ought to be controlled:

Access to data should be controlled by security attributes of the data itself.

  • Attributes can be held within the data (DRM/Metadata) or could be a separate system.
  • Access / security could be implemented by encryption.
  • Some data may have “public, non-confidential” attributes.
  • Access and access rights have a temporal component.

You would probably need client-side software to provide this functionality.  As an example, we do this today with email compliance solutions that have primitive versions of this sort of capability, forcing users to declare the classification of an email before they can hit the send button, or with the document info that can be created when one authors a Word document.

There are a bunch of ERM/DRM solutions in play today that are bandied about and sold as “compliance” solutions, but their value goes much deeper than that.  IP Leakage/Extrusion prevention systems (with or without client-side tie-ins) try to do similar things also.

Ideally, this metadata would be used as a fixed descriptor of the content that permanently attaches itself and follows that content around so it can be used to decide what content should be “routed” based upon policy.

If we’re not able to use this file-oriented static metadata, we’d like then for the “network” (or something in/on it) to be able to dynamically profile content at wirespeed and characterize the data as it moves around the network from origin to destination in the same way.

So, this is where Applied Data & Application Policy Tagging (ADAPT) comes in.  ADAPT is an approach that can make use of existing and new technology to profile and characterize content (by using content matching, signatures, regular expressions and behavioral analysis in hardware or software) and then apply policy-driven information “routing” functionality as flows traverse the network, by using 802.1Q q-in-q VLAN tags (the open approach) or by applying a proprietary ADAPT tag-header as a descriptor to each flow as it moves around the network.

Think of it like a VLAN tag that describes the data within the packet/flow, defined however you see fit.

The ADAPT tag/VLAN is user defined and can use any taxonomy that best suits the types of content that are interesting; one might use asset classifications such as “confidential” or taxonomies such as “HIPAA” or “PCI” to describe what is contained in the flows.  One could combine and/or stack the tags, too.  The tag maps to one of these arbitrary categories, which could be fed by interpreting metadata attached to the data itself (if in file form) or dynamically by on-the-fly profiling at the network level.

As data moves across the network and across what we call boundaries (zones) of trust, the policy tags are parsed and disposition effected based upon the language governing the rules.  If you use the “open” version with q-in-q VLANs, you have something on the order of 4096 VLAN IDs to choose from…more than enough to accommodate most asset classifications and still leave room for normal VLAN usage.  Enforcing the ACLs can be done by pretty much ANY modern switch that supports q-in-q, very quickly.

Just like an ACL for IP addresses or VLAN policies, ADAPT does the same thing for content routing, but using VLAN ID’s (or the proprietary ADAPT header) to enforce it.
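Here’s a rough, illustrative sketch using Scapy, purely to show the framing idea; the taxonomy-to-VLAN mapping, addresses and payload are arbitrary inventions of mine, and a production 802.1ad S-tag would carry EtherType 0x88a8 rather than the stacked 0x8100 tags Scapy emits by default:

```python
# Stack an ADAPT-style outer tag (what the data *is*) on top of the existing
# service VLAN (where it lives). Mapping and addresses are hypothetical.

from scapy.all import Ether, Dot1Q, IP, Raw

ADAPT_TAG = {"public": 3000, "confidential": 3001, "HIPAA": 3002, "PCI": 3003}

def tag_flow(classification: str, service_vlan: int, payload: bytes):
    """Prepend the content-classification tag (outer) ahead of the service VLAN (inner)."""
    return (Ether(src="00:11:22:33:44:55", dst="66:77:88:99:aa:bb")
            / Dot1Q(vlan=ADAPT_TAG[classification])   # outer: content classification
            / Dot1Q(vlan=service_vlan)                # inner: the regular VLAN (A-D)
            / IP(src="10.1.1.5", dst="10.4.1.10")
            / Raw(payload))

frame = tag_flow("HIPAA", 10, b"patient record ...")
frame.show2()   # dump the stacked-tag frame that a boundary switch would parse
```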

To enable this sort of functionality, every switch/router in the network would need to be either q-in-q capable (which is most switches these days) or ADAPT enabled (which would be difficult, since you’d need every network vendor to support the protocols.)  Alternatively, you could use an overlay UTM security services switch sitting on top of the network plumbing through which all traffic moving from one zone to another would be subject to the ADAPT policy, since each flow has to go through said device.

Since the only device that needs to be ADAPT aware is this UTM security services switch (see the example below,) you can let the network do what it does best and utilize this solution to enforce the policy for you across these boundary transitions.  Said UTM security services switch needs to have an extremely high-speed content security engine that is able to characterize the data at wirespeed, add a tag to the frame as it moves through the switching fabric, and process it prior to popping out onto the network.

Clearly this switch would have to have coverage across every network segment.  It wouldn’t work well in virtualized server environments or any topology where zoned traffic is not subject to transit through the UTM switch.

I’m going to be self-serving here and demonstrate this “theoretical” solution using a Crossbeam X80 UTM security services switch plumbed into a very fast, reliable, and resilient L2/L3 Cisco infrastructure.  It just so happens to have a wire-speed content security engine installed in it.  The reason the X-Series can do this is because once the flow enters its switching fabric, I own the ultimate packet/frame/cell format and can prepend any header functionality I like onto the structure to determine how it gets “routed.”

Take the example below where the X80 is connected to the layer-3 switches using 802.1q VLAN trunked interfaces.  I’ve made this an intentionally simple network using VLANs and L3 routing; you could envision a much more complex segmentation and routing environment, obviously.

This network is chopped up into 4 VLAN segments:

  1. General Clients (VLAN A)
  2. Finance & Accounting Clients (VLAN B)
  3. Financial Servers (VLAN C)
  4. HR Servers (VLAN D)

Each of the clients/servers in the respective VLANs default-routes out to one of the firewall cluster IP addresses proffered by the firewall application modules providing service in the X80.

Thus, to get from one VLAN to another, one must pass through the X80 and be profiled by the content security engine and whatever additional UTM services are installed in the chassis (such as firewall, IDP, AV, etc.)

Let’s say then that a user in VLAN A (General Clients) attempts to access one or more resources in VLAN D (HR Servers.)

Using solely IP addresses and/or L2 VLANs, let’s say the firewall and IPS policies allow this behavior as the clients in that VLAN have a legitimate need to access the HR Intranet server.  However, let’s say that this user tries to access data that exists on the HR Intranet server but contains personally identifiable information that falls under the governance/compliance mandates of HIPAA.

Let us further suggest that the ADAPT policy states the following:

Rule   Source             Destination        ADAPT Descriptor       Action
===========================================================================
1      VLAN A (IP.1.1)    VLAN D (IP.3.1)    HIPAA, Confidential    Deny
2      VLAN B (IP.2.1)    VLAN C (IP.4.1)    PCI                    Allow

Using rule 1 above, as the client makes the request, he transits from VLAN A to VLAN D.  The reply containing the requested information is profiled by the content security engine which is able to  characterize the data as containing information that matches our definition of either “HIPAA or Confidential” (purely arbitrary for the sake of this example.)

This could be done by reading the metadata if it exists as an attachment to the content’s file structure, in cooperation with an extrusion prevention application running in the chassis, or in the case of ad-hoc web-based applications/services, done dynamically.

According to the ADAPT policy above, this data would then be either silently dropped, depending upon what “deny” means, or perhaps the user would be redirected to a webpage that informs them of a policy violation.

Rule 2 above would allow authorized IPs in VLAN B to access PCI-classified data in VLAN C.
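A minimal sketch of how a policy engine might evaluate that table: the rule contents mirror the example above, while the evaluation logic and the default disposition are my own illustration, not a product’s behavior.

```python
# First-match evaluation of the ADAPT rule table from the example.

RULES = [
    {"src": "VLAN A", "dst": "VLAN D", "descriptors": {"HIPAA", "Confidential"}, "action": "deny"},
    {"src": "VLAN B", "dst": "VLAN C", "descriptors": {"PCI"}, "action": "allow"},
]

def disposition(src_vlan, dst_vlan, flow_descriptors):
    """First matching rule wins; anything unmatched falls through to a default."""
    for rule in RULES:
        if (rule["src"] == src_vlan and rule["dst"] == dst_vlan
                and rule["descriptors"] & set(flow_descriptors)):
            return rule["action"]
    return "allow"   # the default disposition is itself a policy choice

# The example session: a VLAN A client pulling HIPAA-tagged content from VLAN D.
print(disposition("VLAN A", "VLAN D", {"HIPAA"}))   # -> deny
```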

You can imagine how one could integrate IAM and extend the policies to include pseudonymity/identity as a function of access, also.  Or, one could profile the requesting application (browser, for example) to define whether or not this is an authorized application.  You could extend the actions to lots of stuff, too.

In fact, I alluded to it in the first paragraph, but if we back up a step and look at where consolidation of functions/services are being driven with virtualization, one could also use the principles of ADAPT to extend the ACL functionality that exists in switching environments to control/segment/zone access to/from virtual machines (VMs) of different asset/data/classification/security zones.

What this translates to is a workflow/policy instantiation that would use the same logic to prevent VM1 from communicating with VM2 if there was a “zone” mis-match; as we add data classification in context, you could have various levels of granularity that defines access based not only on VM but VM and data trafficked by them.

Furthermore, assuming this service was deployed internally and you could establish a trusted CA with certs that would support transparent MITM SSL decrypts, you could do this (with appropriate scale) with encrypted traffic also.

This is data-centric security that uses the network when needed, the host when it can and the notion of both static and dynamic network-borne data classification to enforce policy in real-time.

/Hoff

[Comments/Blogs on this entry you might be interested in but have no trackbacks set:

MCWResearch Blog

Rob Newby’s Blog

Alex Hutton’s Blog

Security Retentive Blog]

A Funny Thing Happened at the Museum Of Science…

February 21st, 2007 No comments

One of the benefits of living near Boston is the abundance of amazing museums and historic sites available to visit within 50 miles of my homestead.

This weekend the family and I decided to go hit the Museum of Science for a day of learning and fun.

As we were about to leave, I spied an XP-based computer sitting in the corner of one of the wings and was intrigued by the sign on top of the monitor instructing any volunteers to login:

[Photo: the sign on top of the monitor instructing volunteers to log in]

Then I noticed the highlighted instruction sheet taped to the wall next to the machine:

[Photo: the instruction sheet taped to the wall next to the machine]

If you’re sharp enough, you’ll notice that the sheet instructs the volunteer how to remember their login credentials — and what their password is (‘1234’) unless they have changed it!

"So?" you say, "That’s not a risk.  You don’t have any usernames!"

Looking to the right I saw a very interesting plaque.  It contained the first and last names of the museum’s most diligent volunteers who had served hundreds of hours on behalf of the Museum.  You can guess where this is going…

I tried for 30 minutes to find someone (besides Megan Crosby on the bottom of the form) to whom I could suggest a more appropriate method of secure sign-on instructions.  The best I could do was one of the admission folks who stamped my hand upon entry and ended up with a manager’s phone number written on the back of a stroller rental slip.

(In)Security is everywhere…even at the Museum of Science.  Sigh.

/Hoff

More debate on SSO/Authentication

August 2nd, 2006 1 comment

Mike Farnum and I continue to debate the merits of single-sign-on and his opinion that deploying same makes you more secure. 

Rothman’s stirring the pot, saying this is a cat fight.  To me, it’s just two dudes having a reasonable debate…unless you know something I don’t [but thanks, Mike R., because nobody would ever read my crap unless you linked to it! ;)]

Mike’s position is that SSO does make you more secure and when combined with multi-factor authentication adds to defense-in-depth.   

It’s the first part I have a problem with, not so much the second, and I figured out why.  It’s the order of things that bugged me when Mike said the following:

But here’s s [a] caveat, no matter which way you go: you really need a single-signon solution backing up a multi-factor authentication implementation.

If he had suggested that multi-factor authentication should back up an SSO solution, I’d agree.  But he didn’t, and he continues not to by maintaining (I think) that SSO itself is secure and SSO + multi-factor authentication is more secure.

My opinion is a little different.  I believe that strong authentication *does* add to defense-in-depth, but SSO adds only depth of complexity, obfuscation and more moving parts, but with a single password on the front end.  More on that in a minute.

Let me clarify a point which is that I think from a BUSINESS and USER EXPERIENCE perspective, SSO is a fantastic idea.  However, I still maintain that SSO by itself does not add to defense-in-depth (just the opposite, actually) and does not, quantifiably, make you more "secure."  SSO is about convenience, ease of use and streamlined efficiency.

You may cut down on password resets, sure.  If someone locks themselves out, however, most of the time resets/unlocks then involve self-service portals or telephone resets, which are just as prone to brute force and social engineering as calling the helpdesk, but that’s a generalization and I would rather argue through analogy… 😉

Here’s the sticky part of why I think SSO does not make you more secure: it merely transfers the risks involved with passwords from one compartment to the next.

While that’s a valid option, it is *very* important to recognize that managing risk does not, by definition, make you more secure…sometimes managing risk means you accept or transfer it.  It doesn’t mean you’ve solved the problem, just acknowledged it and chosen to accept the fact that the impact does not justify the cost involved in mitigating it. 😉

SSO just says "passwords are a pain in the ass to manage. I’m going to find a better solution for managing them that makes my life easier."  SSO Vendors claim it makes you more secure, but these systems can get very complex when implementing them across an Enterprise with 200 applications, multiple user repositories and the need to integrate or federate identities and it becomes difficult to quantify how much more secure you really are with all of these moving parts.

Again, SSO adds depth (of complexity, obfuscation and more moving parts) but with a single password on the front end.  Complex passwords on the back-end managed by the SSO system don’t do you a damned bit of good when some monkey writes the single password that unlocks the entire enterprise down on a sticky note.

Let’s take the fancy “SSO” title out of the mix for a second and consider today’s Active Directory/LDAP proxy functions, which more and more applications tie into.  This relies on a single password, via your domain credentials, to authenticate directly to an application.  This is a form of SSO, and the reality is that all we’re doing when adding on an SSO system is supporting web and legacy applications that can’t use AD and proxying that function through SSO.

It’s the same problem all over again except now you’ve just got an uber:proxy.
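To illustrate that “uber:proxy” point, here’s a minimal sketch (using the ldap3 library; the host, DN layout, and credentials are placeholders of mine, not anyone’s real deployment) of an application simply delegating authentication to the directory with the user’s one domain password:

```python
# The "AD/LDAP proxy" pattern: the application keeps no credential store of
# its own, it just attempts a simple bind with the user's domain password.

from ldap3 import Server, Connection

def authenticate(username: str, password: str) -> bool:
    server = Server("ldaps://dc01.example.com")
    user_dn = f"CN={username},OU=Users,DC=example,DC=com"
    conn = Connection(server, user=user_dn, password=password)
    ok = conn.bind()          # one password unlocks this app... and every other one
    conn.unbind()
    return ok

if authenticate("jdoe", "Summer2006!"):
    print("access granted")   # the same credential is reused by every LDAP-integrated app
```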

Now, if you separate SSO from the multi-factor/strong authentication argument, I will agree that strong authentication (not necessarily multi-factor — read George Ou’s blog) helps mitigate some of the password issue, but they are mutually exclusive.

Maybe we’re really saying the same thing, but I can’t tell.

Just to show how fair and balanced I am (ha!) I will let you know that prior to leaving my last employ, I was about to deploy an Enterprise-wide SSO solution.  The reason?  Convenience and cost.

Transference of risk from the AD password policies to the SSO vendor’s and transparency of process and metrics collection for justifying more heads.    It wasn’t going to make us any more secure, but would make the users and the helpdesk happy and let us go figure out how we were going to integrate strong authentication to make the damned thing secure.

Chris

On two-factor authentication and Single-Sign-On…

August 1st, 2006 2 comments

I’ve been following with some hand-wringing the on-going debates regarding the value of two-factor and strong authentication systems in addition to, or supplementing, traditional passwords.

I am very intent on seeing where the use cases that best fit strong authentication ultimately surface in the long term.  We’ve seen where they are used today, but I wonder if we, in the U.S., will ever be able to satisfy the privacy concerns raised by something like a smart-card-based national ID system in order to realize the benefits of this technology.

Today, we see multi-factor authentication utilized for:  Remote-access VPN, disk encryption, federated/authenticated/encrypted identity management and access control, the convergence of physical and logical/information security…

[Editor’s Note: George Ou from ZDNet just posted a really interesting article on his blog relating how banks are "…cheating their way to [FFIEC] web security guidelines" by just using multiple instances of "something the user knows" and passing it off as "multifactor authentication."  His argument regarding multi-factor (supplemental) vs. strong authentication is also very interesting.]

I’ve owned/implemented/sold/evaluated/purchased every kind of two-factor / extended-factor / strong authentication system you can think of:

  • Tokens
  • SMS Messaging back to phones
  • Turing/image fuzzing
  • Smart Cards
  • RFID
  • Proximity
  • Biometrics
  • Passmark-like systems

…and there’s very little consistency in how they are deployed, managed and maintained.  Those pesky little users always seemed to screw something up…and it usually involved losing something, washing something, flushing something or forgetting something.

The technology’s great, but like Chandler Howell says, there are a lot of issues that need reconsideration when it comes to their implementation, issues that go well beyond what we think of today as simply the tenets of "strong" authentication and the models of trust we surround them with:

So here are some Real World goals I suggest we should be looking at.

  1. Improved authentication should focus on (cryptographically) strong Mutual Authentication, not just improved assertion of user Identity. This may mean shifts in protocols, it may mean new technology. Those are implementation details at this level.
  2. We need to break the relationship between location & security assumption, including authentication. Do we need to find a replacement for “somewhere you are?” And if so, is it another authentication factor?
  3. How does improved authentication get protection closer to the data? We’re still debating types of deadbolts for our screen door rather than answering this question.

All really good points, and ones that I think we’re just at the tip of discussing. 

Taking these first steps is usually an ugly and painful experience, and I’d say that the first footprints planted along this continuum do belong to the token authentication models of today.  They don’t work for every application, and there’s a lack of cross-pollination when you use one vendor’s token solution and wish to authenticate across boundaries (this is what OATH tries to solve.)

For some reason, people tend to evaluate solutions and technology in a very discrete and binary modality: either it’s the "end-all, be-all, silver bullet" or it’s a complete failure.  It’s quite an odd assertion really, but I suppose folks always try to corral security into absolutes instead of relativity.

That explains a lot.

At any rate, there’s no reason to re-hash the fact that passwords suck and that two-factor authentication can provide challenges, because I’m not going to add any value there.  We all understand the problem.  It’s incomplete and it’s not the only answer. 

Defense in depth (or should it be wide and narrow?) is important and any DID strategy of today includes the use of some form of strong authentication — from the bowels of the Enterprise to the eCommerce applications used in finance — driven by perceived market need, "better security," regulations, or enhanced privacy.

However, I did read something on Michael Farnum’s blog here that disturbed me a little.  In his blog, Michael discusses the pros/cons of passwords and two-factor authentication and goes on to introduce another element in the Identity Management, Authentication and Access Control space: Single-Sign-On.

Michael states:

But here’s s [a] caveat, no matter which way you go: you really need a single-signon solution backing up a multi-factor authentication implementation.  This scenario seems to make a lot of sense for a few reasons:

  • It eases the administrative burdens for the IT department because, if implemented correctly, your password reset burden should go down to almost nil
  • It eases (possibly almost eliminates) password complaints and written down passwords
  • It has the bonus of actually easing the login process to the network and the applications

I know it is not the end-all-be-all, but multi-factor authentication is definitely a strong layer in your defenses.  Think about it.

Okay, so I’ve thought about it and playing Devil’s Advocate, I have concluded that my answer is: "Why?"

How does Single-Sign-On contribute to defense-in-depth (besides adding another piece of hyphenated industry slang) beyond lending itself to convenience for the user and the help desk?  Security is usually 1/convenience, so by that algorithm it doesn’t.

Now instead of writing down 10 passwords, the users only need one sticky — they’ll write that one down too!

Does SSO make you more secure?  I’d argue that in fact it does not — not now that the user has a singular login to every resource on the network via one password. 

Yes, we can shore that up with a strong-authentication solution, and that’s a good idea, but I maintain that SA and SSO are mutually exclusive and not a must.  The complexity of these systems can be mind-boggling, especially when you consider the types of privileges these mechanisms often require in order to reconcile this ubiquitous access.  It becomes another attack surface.

There’s a LOT of "kludging" that often goes on with these SSO systems in order to support web and legacy applications and in many cases, there’s no direct link between the SSO system, the authentication mechanism/database/directory and ultimately the control(s) protecting as close to the data as you can.

This cumbersome process still relies on the underlying OS functionality and some additional add-ons to mate the authentication piece with the access control piece with the encryption piece with the DRM piece…

Yet I digress.

I’d like to see the RISKS of SSO presented along with the benefits if we’re going to consider the realities of the scenario in terms of this discussion.

That being said, just because it’s not the "end-all-be-all" (what the hell is with all these hyphens!?) doesn’t mean it’s not helpful… 😉

Chris