
Archive for the ‘Endpoint Security’ Category

The Curious Case Of Continuous and Consistently Contiguous Crypto…

August 8th, 2013 9 comments

Here’s an interesting resurgence of a security architecture and an operational deployment model that is making a comeback:

Requiring VPN tunneled and MITM’d access to any resource, internal or external, from any source internal or external.

While mobile devices (laptops, phones and tablets) are often deployed with client or client-less VPN endpoint solutions that enable them to move outside the corporate boundary to access internal resources, there’s a marked uptick in requirements that all traffic from all sources utilize VPNs (SSL/TLS, IPsec or both) and that ALL sessions be terminated centrally, regardless of ownership or location of either the endpoint or the resource being accessed.

Put more simply: require VPN for (id)entity authentication, access control, and confidentiality and then MITM all the things to transparently or forcibly fork to security infrastructure.

Why?

The reasons are pretty easy to understand.  Here are just a few of them:

  1. The user experience shouldn’t change regardless of the access modality or location of the endpoint consumer; the notion of who, what, where, when, how, and why matter, but the user shouldn’t have to care
  2. Whether inside or outside, the notion of split tunneling on a per-service/per-application basis means that we need visibility to understand and correlate traffic patterns and usage
  3. Because the majority of traffic is encrypted (usually via SSL,) security infrastructure needs the capability to inspect traffic (selectively) using a coverage model that is practical and can give a first-step view of activity
  4. Information exfiltration (legitimate and otherwise) is a problem.

…so how are folks approaching this?

Easy.  They simply require that all sessions terminate on a set of [read: clustered & scalable] VPN gateways, selectively decrypt based on policy, forward (in serial or parallel) to any number of security apparatus, and in some/many cases, re-encrypt sessions and send them on their way.
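For the sake of illustration, here's a minimal sketch in Python of the per-session decision such a terminating gateway has to make: decrypt and fork to inspection, bypass, or drop. The category names and the policy table are assumptions for the example, not any particular vendor's schema.

```python
# Minimal sketch of the per-session decision a terminating VPN/TLS gateway
# might make: decrypt and hand off to inspection, bypass, or drop.
# The categories and policy table below are illustrative assumptions.

INSPECT, BYPASS, DROP = "inspect", "bypass", "drop"

POLICY = {
    "corporate-internal": INSPECT,   # "inside-inside" gets the same treatment
    "saas-sanctioned":    INSPECT,
    "financial":          BYPASS,    # privacy carve-outs stay encrypted end-to-end
    "healthcare":         BYPASS,
    "known-malicious":    DROP,
}

def disposition(dest_category: str, default: str = INSPECT) -> str:
    """What should the gateway do with a session after terminating it?"""
    return POLICY.get(dest_category, default)

def handle_session(dest_category: str) -> str:
    action = disposition(dest_category)
    if action == INSPECT:
        # Decrypt, fork a copy (serially or in parallel) to IDS/DLP/etc.,
        # then re-encrypt and forward toward the destination.
        return "decrypt -> inspect -> re-encrypt -> forward"
    if action == BYPASS:
        return "forward without decryption"
    return "reset session"

if __name__ == "__main__":
    for category in ("corporate-internal", "financial", "unknown-destination"):
        print(f"{category}: {handle_session(category)}")
```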

We’ve been doing this “forever” with the “outside-in” model (remote access to internal resources,) but the notion that folks are starting to do this ubiquitously on internal networks is the nuance.  AVC (application visibility and control) is the inside-out component (usually using transparent forward proxies with trusted PAC files on endpoints) with remote access and/or reverse proxies like WAFs and/or ADCs as the outside-in use case.

These two ops models were generally viewed and managed as separate problems.  Now thanks to Cloud, Mobility, virtualization and BYOE (bring your own everything) as well as the more skilled and determined set of adversaries, we’re seeing a convergence of the two.  To make the “inside-out” and “outside-in” more interesting, what we’re really talking about here is extending the use case to include “inside-inside” if you catch my drift.

Merging the use case approach at a fundamental architecture level can be useful; this methodology works regardless of source or destination.  It does require all sorts of incidental changes to things like IdM, AAA, certificate management, etc. but it’s one way that folks are trying to centralize the distributed — if you get what I mean.

I may draw a picture to illustrate what I mean, but do let me know if you’re doing this (many of the largest customers I know are) and whether it makes sense.

/Hoff

P.S. Remember back in the 80’s/90’s when 3Com bundled NIC cards with integrated IPSec VPN capability?  Yeah, that.


Elemental: Leveraging Virtualization Technology For More Resilient & Survivable Systems

June 21st, 2012 Comments off

Yesterday saw the successful launch of Bromium at GigaOM’s Structure conference in San Francisco.

I was privileged to spend some time on stage with Stacey Higginbotham and Simon Crosby (co-founder, CTO, mentor and good friend) after Simon’s big reveal of Bromium‘s operating model and technology approach.

While product specifics weren’t disclosed, we spent some time chatting about Bromium’s approach to solving a particularly tough set of security challenges with a focus on realistic outcomes given the advanced adversaries and attack methodologies in use today.

At the heart of our discussion* was the notion that in many cases one cannot detect let alone prevent specific types of attacks and this requires a new way of containing the impact of exploiting vulnerabilities (known or otherwise) that are as much targeting the human factor as they are weaknesses in underlying operating systems and application technologies.

I think Kurt Marko did a good job summarizing Bromium in his article here, so if you’re interested in learning more, check it out. I can tell you that as a technology advisor to Bromium and someone who is using the technology preview, it lives up to the hype and gives me hope that we’ll see even more novel approaches to usable security leveraging technology like this.  More will be revealed as time goes on.

That said, with productization details purposely left vague, Bromium’s leveraged implementation of Intel’s VT technology and its “microvisor” approach brought about comments yesterday from many folks who were reminded of what they called “similar approaches” (however right/wrong they may be) to using virtualization technology and/or “sandboxing” to provide more “secure” systems.  I recall the following coming up in passing conversation yesterday:

  • Determina (VMware acquired)
  • Green Borders (Google acquired)
  • Trusteer
  • Invincea
  • DeepSafe (Intel/McAfee)
  • Intel TXT w/MLE & hypervisors
  • Self Cleansing Intrusion Tolerance (SCIT)
  • PrivateCore (Newly launched by Oded Horovitz)
  • etc…

I don’t think Simon would argue that the underlying approach of utilizing virtualization for security (even for an “endpoint” application) is new, but the approach toward making it invisible and transparent from a user experience perspective certainly is.  Operational simplicity and not making security the user’s problem is a beautiful thing.
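To illustrate only the pattern being discussed, here's a conceptual sketch in Python of "disposable by default" task isolation. This is emphatically not Bromium's microvisor or Intel VT; it's a process-level stand-in showing the idea of giving every untrusted task a throwaway sandbox that is destroyed, along with anything it touched, when the task ends.

```python
# Conceptual analogy only: every untrusted task runs in a throwaway worker with
# its own scratch directory, and everything it touched is destroyed when the
# task ends. A microvisor does this with hardware (VT) isolation per task; this
# process-level stand-in just illustrates the "disposable by default" pattern.
import shutil
import subprocess
import tempfile

def run_untrusted(cmd: list[str], timeout: int = 30) -> int:
    scratch = tempfile.mkdtemp(prefix="sandbox-")
    try:
        # The task's working directory is the scratch area; no state is shared
        # or reused between tasks, so a compromise dies with the sandbox.
        result = subprocess.run(cmd, cwd=scratch, capture_output=True, timeout=timeout)
        return result.returncode
    finally:
        shutil.rmtree(scratch, ignore_errors=True)   # discard the sandbox

if __name__ == "__main__":
    rc = run_untrusted(["python3", "-c", "open('dropped.txt', 'w').write('x')"])
    print(rc)  # 'dropped.txt' no longer exists anywhere; the sandbox is gone
```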

Here is a video of Simon’s and my session, “Secure Everything.”

What’s truly of interest to me — based on what Simon said yesterday — is that the application of this approach could be just as at home in a “server,” cloud or mobile application as it is in a classical desktop environment.  There are certainly dependencies (such as VT) today, but the notion that we can leverage virtualization for better resilience, survivability and assurance for more “trustworthy” systems is exciting.

I for one am very excited to see how we’re progressing from “bolt on” to more integrated approaches in our security models. This will bear fruit as we become more platform and application-centric in our approach to security, allowing us to leverage fundamentally “elemental” security components for more meaningfully trustworthy computing.

/Hoff

* The range of topics was rather hysterical; from the Byzantine General’s problem to K/T Boundary extinction-class events to the Mexican/U.S. border fence, it was chock full of analogs 😉

 


CloudPassage & Why Guest-Based Footprints Matter Even More For Cloud Security

February 1st, 2011 4 comments

Every day for the last week or so after their launch, I’ve been asked left and right about whether I’d spoken to CloudPassage and what my opinion was of their offering.  In full disclosure, I spoke with them when they were in stealth almost a year ago and offered some guidance, and again the day before their launch last week.

Disappointing as it may be to some, this post isn’t really about my opinion of CloudPassage directly; it is, however, the reaffirmation of the deployment & delivery models for the security solution that CloudPassage has employed.  I’ll let you connect the dots…

Specifically, in public IaaS clouds where homogeneity of packaging, standardization of images and uniformity of configuration enables scale, security has lagged.  This is mostly due to the fact that for a variety of reasons, security itself does not scale (well.)

In an environment where the underlying platform cannot be counted upon to provide “hooks” to integrate security capabilities in at the “network” level, all that’s left is what lies inside the VM packaging:

  1. Harden and protect the operating system [and thus the stuff atop it,]
  2. Write secure applications and
  3. Enforce strict, policy-driven information-centric security.

My last presentation, “Cloudinomicon: Idempotent Infrastructure, Building Survivable Systems and Bringing Sexy Back to Information Centricity” addressed these very points. [This one is a version I delivered at the University of Michigan Security Summit]

If we focus on the first item in that list, you’ll notice that generally to effect policy in the guest, you must have a footprint on said guest — however thin — to provide the hooks that are needed to either directly effect policy or redirect back to some engine that offloads this functionality.  There’s a bit of marketing fluff associated with using the word “agentless” in many applications of this methodology today, but at some point, the endpoint needs some sort of “agent” to play.*
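As a rough illustration of what such a thin footprint might look like, here's a minimal sketch in Python of a guest-resident agent that gathers a bit of local state and reports it to a cloud-hosted "grid" for analysis and policy. The endpoint URL and the payload schema are hypothetical assumptions, not CloudPassage's actual API.

```python
# Minimal sketch of a thin, guest-resident agent: gather a little local state
# and ship it to a cloud-hosted policy/analysis "grid". The endpoint URL and
# the payload schema are hypothetical, not any vendor's actual API.
import json
import platform
import subprocess
import urllib.request

GRID_URL = "https://grid.example.com/v1/telemetry"   # hypothetical endpoint

def collect_state() -> dict:
    try:
        fw = subprocess.run(["iptables", "-S"], capture_output=True, text=True).stdout
    except OSError:
        fw = ""   # no local firewall tooling available
    return {
        "hostname": platform.node(),
        "os": platform.platform(),
        "firewall_rules": fw.splitlines(),
    }

def report(state: dict) -> int:
    req = urllib.request.Request(
        GRID_URL,
        data=json.dumps(state).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status

if __name__ == "__main__":
    # Print locally rather than phoning home, since the grid above is made up.
    print(json.dumps(collect_state(), indent=2))
```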

So that’s where we are today.  The abstraction offered by virtualized public IaaS cloud platforms is pushing us back to the guest-centric-based models of yesteryear.

This will bring challenges with scale, management, efficacy, policy convergence between physical and virtual, and the overall API-driven telemetry demanded by true cloud solutions.

You can read more about this in some of my other posts on the topic:

Finally, since I used them for eyeballs, please do take a look at CloudPassage — their first (free) offerings are based upon leveraging small footprint Linux agents and a cloud-based SaaS “grid” to provide vulnerability management, and firewall/zoning in public cloud environments.

/Hoff

* There are exceptions to this rule depending upon *what* you’re trying to do, such as anti-malware offload via a hypervisor API, but this is not generally available to date in public cloud.  This will, I hope, one day soon change.


Endpoint Security vs. DLP? That’s Part Of the Problem…

March 31st, 2008 6 comments

Larry Walsh wrote something (Defining the Difference Between Endpoint Security and Data Loss Prevention) that sparked an interesting debate based upon a vendor presentation given to him on "endpoint security" by SanDisk.

SanDisk is bringing to market a set of high-capacity USB flash drives that feature built-in filesystem encryption as well as strong authentication and access control.  If the device gets lost with the data on it, it’s "safe and secure" because it’s encrypted.  They are positioning this as an "endpoint security" solution.

I’m not going to debate the merits/downsides of that approach because I haven’t seen their pitch, but suffice it to say, I think it’s missing a "couple" of pieces to solve anything other than a very specific set of business problems.

Larry’s dilemma stems from the fact that he maintains that this capability and functionality is really about data loss protection and doesn’t have much to do with "endpoint security" at all:

We debated that in my office for a few minutes. From my perspective, this solution seems more like a data loss prevention solution than endpoint security. Admittedly, there are many flavors of endpoint security. When I think of endpoint security, I think of network access control (NAC), configuration management, vulnerability management and security policy enforcement. While this solution is designed for the endpoint client, it doesn’t do any of the above tasks. Rather, it forces users to use one type of portable media and transparently applies security protection to the data. To me, that’s DLP.

In today’s market taxonomy, I would agree with Larry.  However, what Larry is struggling with is not really the current state of DLP versus "endpoint security," but rather the future state of converged information-centric governance.  He’s describing the problem that will drive the solution as well as the inevitable market consolidation to follow.

This is actually the whole reason Mogull and I are talking about the evolution of DLP as it exists today to a converged solution we call CMMP — Content Management, Monitoring and Protection. {Yes, I just added another M for Management in there…}

What CMMP represents is the evolved and converged end-state technology integration of solutions that today provide a point solution but "tomorrow" will be combined/converged into a larger suite of services.

Off the cuff, I’d expect that we will see at a minimum the following technologies being integrated to deliver CMMP as a pervasive function across the information lifecycle and across platforms in flight/motion and at rest:

  • Data leakage/loss protection (DLP)
  • Identity and access management (IAM)
  • Network Admission/Access Control (NAC)
  • Digital rights/Enterprise rights management (DRM/ERM)
  • Seamless encryption based upon "communities of interest"
  • Information classification and profiling
  • Metadata
  • Deep Packet Inspection (DPI)
  • Vulnerability Management
  • Configuration Management
  • Database Activity Monitoring (DAM)
  • Application and Database Monitoring and Protection (ADMP)
  • etc…

That’s not to say they’ll all end up as a single software install or network appliance, but rather a consolidated family of solutions from a few top-tier vendors who have coverage across the application, host and network space. 

If you were to look at any enterprise today struggling with this problem, they likely have or are planning to have most of the point solutions above anyway.  The difficulty is that they’re all from different vendors.  In the future, we’ll see larger suites from fewer vendors providing a more cohesive solution.

This really gives us the "cross domain information protection" that Rich talks about.

We may never achieve the end-state described above in its entirety, but it’s safe to say that the more we focus on the "endpoint" rather than the "information on the endpoint," the bigger the problem we will have.

/Hoff

Thin Clients: Does This Laptop Make My Ass(ets) Look Fat?

January 10th, 2008 11 comments

Juicy Fat Assets, Ripe For the Picking…

So here’s an interesting spin on de/re-perimeterization…if people think we cannot achieve and cannot afford to wait for secure operating systems, secure protocols and self-defending information-centric environments but need to "secure" their environments today, I have a simple question supported by a simple equation for illustration:

For the majority of mobile and internal users in a typical corporation who use the basic set of applications:

  1. Assume a company that:
    …fits within the 90% of those who still have data centers, isn’t completely outsourced/off-shored for IT and supports a remote workforce that uses Microsoft OS and the usual suspect applications and doesn’t plan on utilizing distributed grid computing and widespread third-party SaaS
  2. Take the following:
    Data Breaches.  Lost Laptops.  Non-sanitized corporate hard drives on eBay.  Malware.  Non-compliant asset configurations.  Patching woes.  Hardware failures.  Device Failure.  Remote Backup issues.  Endpoint Security Software Sprawl.  Skyrocketing security/compliance costs.  Lost Customer Confidence.  Fines.  Lost Revenue.  Reduced budget.
  3. Combine With:
    Cheap Bandwidth.  Lots of types of bandwidth/access modalities.  Centralized Applications and Data. Any Web-enabled Computing Platform.  SSL VPN.  Virtualization.  Centralized Encryption at Rest.  IAM.  DLP/CMP.  Lots of choices to provide thin-client/streaming desktop capability.  Offline-capable Web Apps.
  4. Shake Well, Re-allocate Funding, Streamline Operations and "Security"…
  5. You Get:
    Less Risk.  Less Cost.  Better Control Over Data.  More "Secure" Operations.  Better Resilience.  Assurance of Information.  Simplified Operations. Easier Backup.  One Version of the Truth (data.)

I really just don’t get why we continue to deploy and are forced to support remote platforms we can’t protect, allow our data to inhabit islands we can’t control and at the same time admit the inevitability of disaster while continuing to spend our money on solutions that can’t possibly solve the problems.

If we’re going to be information centric, we should take the first rational and reasonable steps toward doing so. Until the operating systems are more secure and the data can self-describe and cause the compute and network stacks to "self-defend," why do we continue to focus on the endpoint, which is a waste of time?

If we can isolate and reduce the number of avenues of access to data and leverage dumb presentation platforms to do it, why aren’t we?

…I mean besides the fact that an entire industry has been leeching off this mess for decades…


I’ll Gladly Pay You Tuesday For A Secure Solution Today…

The technology exists TODAY to centralize the bulk of our most important assets and allow our workforce to accomplish their goals and the business to function just as well (perhaps better) without the need for data to actually "leave" the data centers in whose security we have already invested so much money.

Many people are already doing that with their servers via the adoption of virtualization.  Now they need to do it with their clients.

The only reason we’re now going absolutely stupid and spending money on securing endpoints in their current state is because we’re CAUSING (not just allowing) data to leave our enclaves.  In fact with all this blabla2.0 hype, we’ve convinced ourselves we must.

Hogwash.  I’ve posted on the consumerization of IT where companies are allowing their employees to use their own compute platforms.  How do you think many of them do this?

Relax, Dude…Keep Your Firewalls…

In the case of centralized computing and streamed desktops to dumb/thin clients, the "perimeter" still includes our data centers and security castles/moats, but also encapsulates a streamed, virtualized, encrypted, and authenticated thin-client session bubble.  The endpoint itself is nothing more than a flickering display with a keyboard/mouse, so there’s very little left to worry about on it.

Let your kid use Limewire.  Let Uncle Bob surf pr0n.  Let wifey download spyware.  If my data and applications don’t live on the machine and all the clicks/mouseys are just screen updates, what do I care?

Yup, you can still use a screen scraper or a camera phone to use data inappropriately, but this is where balancing risk comes into play.  Let’s keep the discussion within the 80% of reasonable factored arguments.  We’ll never eliminate 100% and we don’t have to in order to be successful.

Sure, there are exceptions and corner cases where data *does* need to leave our embrace, but we can eliminate an entire class of problem if we take advantage of what we have today and stop this endpoint madness.

This goes for internal corporate users who are chained to their desks and not just mobile users.

What’s preventing you from doing this today?

/Hoff

Thinning the Herd & Chlorinating the Malware Gene Pool…

December 28th, 2007 3 comments

Alan Shimel pointed us to an interesting article written by Matt Hines in his post here regarding the "herd intelligence" approach toward security.  He followed it up here. 

All in all, I think both the original article that Andy Jaquith was quoted in as well as Alan’s interpretations shed an interesting light on a problem solving perspective.

I’ve got a couple of comments on Matt and Alan’s scribbles.

I like the notion of swarms/herds.  The picture to the right from Science News describes the notion of "rapid response," wherein "mathematical modeling is explaining how a school of fish can quickly change shape in reaction to a predator."  If you’ve ever seen this in the wild or even in film, it’s an incredible thing to see in action.

It should then come as no surprise that I think that trying to solve the "security problem" is more efficiently performed (assuming one preserves the current construct of detection and prevention mechanisms) by distributing both functions and coordinating activity as part of an intelligent "groupthink" even when executed locally.  This is exactly what I was getting at in my "useful predictions" post for 2008:

Grid and distributed utility computing models will start to creep into security

A really interesting by-product of the "cloud compute" model is that as data, storage, networking, processing, etc. get distributed, so shall security.  In the grid model, one doesn’t care where the actions take place so long as service levels are met and the experiential and business requirements are delivered.  Security should be thought of in exactly the same way.

The notion that you can point to a physical box and say it performs function ‘X’ is so last Tuesday.  Virtualization already tells us this.  So, imagine if your security processing isn’t performed by a monolithic appliance but instead is contributed to in a self-organizing fashion wherein the entire ecosystem (network, hosts, platforms, etc.) all contribute in the identification of threats and vulnerabilities as well as function to contain, quarantine and remediate policy exceptions.

Sort of sounds like that "self-defending network" schpiel, but not focused on the network and with common telemetry and distributed processing of the problem.  Check out Red Lambda’s cGrid technology for an interesting view of this model.

This basically means that we should distribute the sampling, detection and prevention functions across the entire networked ecosystem, not just to dedicated security appliances; each of the end nodes should communicate using a standard signaling and telemetry protocol so that common threat, vulnerability and effective disposition can be communicated up and downstream to one another and one or more management facilities.
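To make that concrete, here's a sketch in Python of what a common telemetry record might look like; the field names and values are assumptions for illustration, since no such standard schema exists today. The point is simply that every node and management facility parses the same machine-readable record.

```python
# Sketch of a common telemetry record that any participating node (host,
# switch, gateway) could emit and consume. Field names and values are
# illustrative; the point is a shared, machine-readable schema so that a
# detection made in one place can drive disposition everywhere else.
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class ThreatTelemetry:
    reporting_node: str
    observable: str               # e.g. a file hash, URL or source IP
    observable_type: str          # "sha256" | "url" | "ip"
    verdict: str                  # "malicious" | "suspicious" | "benign"
    confidence: float             # 0.0 - 1.0
    suggested_disposition: str    # "block" | "quarantine" | "monitor"
    observed_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_wire(self) -> str:
        return json.dumps(asdict(self))

if __name__ == "__main__":
    msg = ThreatTelemetry(
        reporting_node="host-42",
        observable="d2c0...e1a9",           # truncated hash, purely illustrative
        observable_type="sha256",
        verdict="malicious",
        confidence=0.9,
        suggested_disposition="quarantine",
    )
    print(msg.to_wire())   # any other node or manager parses the same record
```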

This is what Andy was referring to when he said:

As part of the effort, security vendors may also need to begin sharing more of that information with their rivals to create a larger network effect for thwarting malware on a global basis, according to the expert.

It may be hard to convince rival vendors to work together because of the perception that it could lessen differentiation between their respective products and services, but if the process clearly aids on the process of quelling the rising tide of new malware strains, the software makers may have little choice other than to partner, he said.

Secondly, Andy suggested that basically every end-node would effectively become its own honeypot:

"By
turning every endpoint into a malware collector, the herd network
effectively turns into a giant honeypot that can see more than existing
monitoring networks," said Jaquith. "Scale enables the herd to counter
malware authors’ strategy of spraying huge volumes of unique malware
samples with, in essence, an Internet-sized sensor network."

I couldn’t agree more!  This is the sort of thing that I was getting at back in August when I was chatting with Lance Spitzner regarding using VM’s for honeypots on distributed end nodes:

I clarified that what I meant was actually integrating a HoneyPot running in a VM on a production host as part of a standardized deployment model for virtualized environments.  I suggested that this would integrate into the data collection and analysis models the same way as a "regular" physical HoneyPot machine, but could utilize some of the capabilities built into the VMM/HV’s vSwitch to effectively virtualize a single HoneyPot across an entire collection of VM’s on a single physical host.

Thirdly, the notion of information sharing across customers has been implemented cross-sectionally in industry verticals with the advent of the ISAC’s such as the Financial Services Information Sharing and Analysis Center, which seeks to inform and ultimately leverage distributed information gathering and sharing to protect its subscribing members.  Generally-available services like Symantec’s DeepSight have also tried to accomplish similar goals.

Unfortunately, these offerings generally lack the capacity to garner ubiquitous data gathering and real-time enforcement capabilities.

As Matt pointed out in his article, gaining actionable intelligence on the monstrous amount of telemetric data from participating end nodes means that there is a need to really prune for false positives.  This is the trade-off between simply collecting data and actually applying intelligence at the end-node and effecting disposition. 

This requires technology that we’re starting to see emerge with a small enough footprint when paired with the compute power we have in endpoints today. 

Finally, as the "network" (which means the infrastructure as well as the "extrastructure" delivered by services in the cloud) gains more intelligence and information-centric granularity, it will pick up some of the slack — at least from the perspective of sloughing off the low-hanging fruit by using similar concepts.

I am hopeful that as we gain more information-centric footholds, we shouldn’t actually be worried about responding to every threat but rather only those that might impact the most important assets we seek to protect. 

Ultimately the end-node is really irrelevant from a protection perspective as it should really be little more than a presentation facility; the information is what matters.  As we continue to make progress toward more resilient operating systems leveraging encryption and mutual authentication within communities of interest/trust, we’ll start to become more resilient and information assured.

The sharing of telemetry to allow these detective and preventative/protective capabilities to self-organize and perform intelligent offensive/evasive actions will evolve naturally as part of this process.

Mooooooo.

/Hoff

On Bandwidth and Botnets…

October 3rd, 2007 No comments

An interesting story in this morning’s New York Times titled "Unlike U.S., Japanese Push Fiber Over Profit" talked about Japan’s long term investment efforts to build the world’s first all-fiber national network and how Japan leads the world’s other industrialized nations, including the U.S., in low-cost, high speed services centered around Internet access.  Check out this illustration:

The article states that approximately 8 million Japanese subscribe to the fiber-enabled service offerings, which provide performance at roughly 30 times that of a corresponding xDSL offering.

For about $55 a month, subscribers have access to up to 100Mb/s download capacity.

France Telecom is rumored to be rolling out services that offer 2.5Gb/s downloads!

I have Verizon FIOS which is delivered via fiber to my home and subscribe at a 20Mb/s download tier.

What I find very interesting about the emergence of this sort of service is that if you look at a typical consumer’s machine, it’s not well hardened, not monitored and usually easily compromised.  At this rate, the bandwidth of some of these compromise-ready consumers’ home connections is eclipsing that of mid-tier ISP’s!

This is even more true, through anecdotal evidence gathering, of online gamers who are typically also P2P filesharing participants and early adopters of new shiny kit — it’s a Bot Herder’s dream come true.

At xDSL speeds of a few Mb/s, a couple of infected machines as participants in a targeted synchronized fanning DDoS attack can easily take down a corporate network connected to the Internet via a DS3 (45Mb/s.)  Imagine what a botnet of a couple of 60Mb/s connected endpoints could do — how about a couple of thousand?  Hundreds of thousands?
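The back-of-the-envelope arithmetic looks something like this; the usable-fraction fudge factor is an assumption on my part, since real attack throughput depends on upstream rates, which are usually lower than the advertised downstream figures.

```python
# Back-of-the-envelope: aggregate attack capacity versus a DS3 (45 Mb/s).
DS3_MBPS = 45

def aggregate_mbps(bots: int, per_bot_mbps: float, usable_fraction: float = 0.5) -> float:
    """Assume each bot can sustain only a fraction of its rated line speed."""
    return bots * per_bot_mbps * usable_fraction

if __name__ == "__main__":
    for bots in (2, 100, 2_000, 100_000):
        total = aggregate_mbps(bots, per_bot_mbps=60)
        print(f"{bots:>7} bots @ 60 Mb/s -> ~{total:>12,.0f} Mb/s "
              f"(~{total / DS3_MBPS:,.0f}x a DS3)")
```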

This is great news for some, as this sort of capacity will be economically beneficial to cyber-criminals since it reduces the exposure risk of Botnet Herders; they don’t have to infect nearly the same number of machines to deliver exponentially higher attack yields given the size of the pipes.  Scary.

I’d suggest that using the lovely reverse DNS entries that service providers use to annotate logical hop connectivity will be even more freely used to target these high-speed users; you know, like (fictional):

bigass20MbpsPipe.vzFIOS-05.bstnma.verizon-gni.net (7x.y4.9z.1)
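As a toy illustration of why those PTR records are so handy for targeting, a scanner could cheaply tag hosts whose reverse DNS hints at a fat residential fiber pipe; the keyword list in the sketch below is a guess for illustration, not anything a real bot herder is known to use.

```python
# Toy illustration: tag addresses whose reverse DNS hints at a fat residential
# fiber pipe. The keyword list is an assumption for illustration only.
import socket

FIBER_HINTS = ("fios", "ftth", "fibre", "fiber")   # assumed naming conventions

def looks_like_fat_pipe(ip: str) -> bool:
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)
    except OSError:
        return False                 # no PTR record, nothing to go on
    return any(hint in hostname.lower() for hint in FIBER_HINTS)

if __name__ == "__main__":
    # RFC 5737 documentation address; the lookup simply fails harmlessly.
    print(looks_like_fat_pipe("192.0.2.1"))
```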

As an interesting anecdote from the service provider perspective, the need for "Clean Pipes" becomes even more important, and the providers will be even more financially motivated to prevent abuse of their backbone long-hauls by infected machines.

This, in turn, will drive the need for much more intelligent, higher-throughput infrastructure and security service layers to mitigate the threat, which is forcing folks to take a very hard look at how they architect their networks and apply security.

/Hoff

Security Interoperability: Standards Are Great, Especially When They’re Yours…

September 19th, 2007 6 comments

Wow, this is a rant and a half…grab a beer, you’re going to need it…

Jon Robinson pens a lovely summary of the endpoint security software sprawl discussion we’ve been chatting about lately.

My original post on the matter is here. 

Specifically, he isolates what might appear to be diametrically-opposed perspectives on the matter; mine and Amrit Williams’ from BigFix.

No good story flows without a schism-inducing polarizing galvanic component, so Jon graciously obliges by proposing to slice the issue in half with the introduction of what amounts to a discussion of open versus proprietary approaches to security interoperability between components. 

I’m not sure that this is the right starting point to frame this discussion, and I’m not convinced that Amrit and I are actually at polar ends of the discussion.  I think we’re actually both describing the same behavior in the market, and whilst Amrit works for a company that produces endpoint agents, I think he’s discussing the issue at hand in a reasonably objective manner. 

We’ll get back to this in a second.  First, let’s peel back the skin from the skeleton a little.

Dissecting the Frog
Just like in high school, this is the messy part of science class where people either reveal their darksides as they take deep lung-fulls of formaldehyde vapor and hack the little amphibian victim to bits…or run shrieking from the room.

Jon comments on Amrit’s description of the "birth of the endpoint protection platform," while I prefer to describe it as the unnatural (but predictable) abortive by-product of industrial economic consolidation. The notion here — emotionally blanketed by the almost-unilateral hatred for anti-virus — is that we’ll see a:

"…convergence of desktop security functionality into a single product that delivers antivirus, antispyware, personal firewall and other styles of host intrusion prevention (for example, behavioral blocking) capabilities into a single and cohesive policy-managed solution."

I acknowledge this and agree that it’s happening.  I’m not very happy about *how* it’s manifesting itself, however.  We’re just ending up with endpoint oligopolies that still fail to provide a truly integrated and holistic security solution, and when a new class of threat or vulnerability arises, we get another agent — or chunky bit grafted onto the Super Agent from some acquisition that clumsily  ends up as a product roadmap feature due to market opportunism. 

You know, like DLP, NAC, WAF… 😉

One might suggest that if the "platform" as described was an open, standards-based framework that defined how to operate and communicate, acted as a skeleton upon which to hang the muscular offerings of any vendor, and provided a methodology and communications protocol that allowed them all to work together and intercommunicate using a common nervous system, that would be excellent.

We would end up with a much lighter-weight intelligent threat defense mechanism.  Adaptive and open, flexible and accommodating.  Modular and a cause of little overhead.

But it isn’t, and it won’t be.

Unfortunately, all the "Endpoint Protection Platform" illustrates, as I pointed out previously, is that the same consolidation issues pervasive in the network world are happening now at the endpoint.  All those network-based firewalls, IPS’s, AV gateways, IDS’s, etc. are smooshing into UTM platforms (you can call it whatever you want) and what we’re ending up with is the software equivalent of UTM on the endpoint.

SuperAgents or "Endpoint Protection Platforms" represent the desperately selfish grasping by security vendors (large and small) to remain relevant in an ever-shrinking marketspace.  Just like most UTM offerings at the network level.  Since piling up individual endpoint software hasn’t solved the problem, it must hold true that one is better than many, right?

Each of these vendors producing "Super Agent" frameworks has its own standards.  Each of them is battling furiously to be "THE" standard, and we’re still not solving the problem.

Man, that stinks
Jon added some color to my point that the failure to interoperate is really an economic issue, not a technical one (I described "greed" as the cause).  I got a chuckle out of his response:

Hoff goes on to say that he doesn’t think we will ever see this type of interoperability among vendors because of greed. I wouldn’t blame greed though, unless by greed he means an unwillingness to collaborate because they believe their value lies in their micro-monopoly patents and their ability to lock customers in their solution. (Little do they know, that they are making themselves less valuable by doing so.) No, there isn’t any interoperability because customers aren’t demanding it.

Some might suggest that my logic is flawed and that the market demonstrates it with examples like GE booting out Symantec in favor of 350,000 seats of Sophos:

Seeking to improve manageability and reduce costs which arise from managing multiple solutions, GE will introduce Network Access Control (NAC) as well as antivirus and client firewall protection which forms part of the Sophos Security and Control solution.

Sophos CEO, Steve Munford, said companies want a single integrated agent that handles all aspects of endpoint security on each PC.                     

"Other vendors offer security suites that are little more than a bunch of separate applications bundled together, all vying for resources on the user’s computer," he said.    

"Enterprises tell us that the tide has turned, and the place for NAC and integrated security is at the endpoint."

While I philosophically don’t agree with the CEO’s comment relating the need for a Super Agent,  the last line is the most important "…the place for…integrated security is at the endpoint."  They didn’t say Super Agent, they said "integrated."  If we had integration and interoperability, the customer wouldn’t care about how many "components" it would take so long as it was cost-effective and easily managed.  That’s the rub because we don’t. 

So I get the point here.  Super Agents are our only logical choice, right?  No!

I suggest that while we make progress toward secure OS’s and applications, instead of moving from tons of agents to a Super Agent, the more intelligent approach would be a graceful introduction of an interoperable framework of open-standards based protocols that allow these components to work together as the "natural" consolidation effect takes its course and markets become features.  Don’t go from one extreme to the other.

I have yet to find anyone that actually believes that deploying a monolithic magnum malware mediator that maintains a modality of mediocrity masking a monoculture  is a good idea.

…unless, of course, all you care about is taking the cost out of operationalizing security and not actually reducing risk.  For some reason, these are being positioned by people as mutually-exclusive.  The same argument holds true in the network space; in some regards we’re settling for "good enough" instead of pushing to fix the problem and not the symptoms.

If people would vote with their wallets (which *is* what the Jericho Forum does, Rich) we wouldn’t waste our time yapping about this, we’d be busy solving issues relevant to the business, not the sales quotas of security company sales reps. I guess that’s what GE did, but they had a choice.  As the biggest IT consumer on the planet (so I’ve been told,) they could have driven their vendors together instead of apart.

People are loath to think that progress can be made in this regard.  That’s a shame, because it can, and it has.  It may not be as public as you think, but there are people really working hard behind the scenes to make the operating systems, applications and protocols more secure.

As Jon points out, and many others like Ranum have said thousands of times before, we wouldn’t need half of these individual agents — or even Super Agents — if the operating systems and software were secure in the first place. 

Run, Forrest, Run!
This is where people roll their eyes and suggest that I’m copping out because I’m describing a problem that’s not going to be fixed anytime soon.  This is where they stop reading.  This is where they just keep plodding along on the Hamster Wheel of Pain and add that line item for either more endpoint agents or a roll-up to a Super Agent.

I suggest that those of you who subscribe to this theory are wrong (and probably have huge calves from all that running.)  The first evidence of this is already showing up on shelves.  It’s not perfect, but it’s a start. 

Take Vista, as an example.  Love it or hate it, it *is* a more secure operating system and it features a slew of functionality that is causing dread and panic in the security industry — especially from folks like Symantec, hence the antitrust suits in the EU.  If the OS becomes secure, how will we sell our Super Agents?  ANTI-TRUST!

Let me reiterate that while we make progress toward secure OS’s and applications, instead of going from tons of agents to a Super Agent, the more intelligent approach is a graceful introduction of an interoperable framework of open-standards based protocols that allow these components to work together as the "natural" consolidation effect takes its course and markets become features.  Don’t go from one extreme to the other.

Jon sums it up with the following describing solving the interoperability problem:

In short, let the market play out, rather than relying on and hoping for central planning. If customers demand it, it will emerge. There is no reason why there can’t be multiple standards competing for market share (look at all the different web syndication standards for example). Essentially, a standard would be collaboration between vendors to make their stuff play well together so they can win business. They create frameworks and APIs to make that happen more easily in the future so they can win business easier. If customers like it, it becomes a “standard”.

At any rate, I’m sitting in the Starbucks around the corner from tonight’s BeanSec! event.  We’re going to solve world hunger tonight — I wonder if a Super Agent will do that one day, too?

/Hoff


We Used to Worry About Security Appliance Sprawl – Now It’s Endpoint Security Software Sprawl!

September 1st, 2007 8 comments

Over the last 2-3 years, we’ve seen a lot of consolidation take place in the security industry; physically, functionally and from a business perspective.

Security practitioners have struggled to gain a foothold against the rising tide of threats washing their way while having to deploy and manage security box after security box in response to each new class of threat and vulnerability.

Each of these boxes provides a specialized function and each expects to be in-line with one another, sitting in front of the asset(s) to be protected.

It’s a nasty self-perpetuating game of security dominoes with each in-line device adding its own latency tax and contributing to an ever-expanding cascading failure domain by adding to the burden of operational complexity, cost and a weaker security posture.

Why a weaker security posture?  Doesn’t "defense in depth" make me more secure?  Well, since it’s  extremely difficult to understand the effect one device’s security policy has on another, this truly exemplifies the notion of a chain only being as strong as its weakest link.  And with everything being encrypted, we can’t even make decent decisions on content.  Don’t even get me started on the effect this has on change control.

As we’ve continued to deal with this security appliance sprawl across our networks, the natural evolution of the consolidation effect is to combine security functions at arterial points across or in the network, taking the form of UTM devices or embedded in routers and switches.

However, we’ve also come to realize that the locus of the threats and vulnerabilities demands that we get as close to the assets and data we seek to protect, so now, in an ironic twist, the industry has turned instead to sprinkling software-based agents directly on the machines.

After all, the endpoint is the closest thing to the data, so the more endpoint control the better, right?

What we’ve done now is transferred the management of a few gateways to that of potentially thousands of endpoints.

Based on an informal survey of security practitioners, the diagram I whipped up above demonstrates various end-point agents that reside on a typical enterprise client and/or server.  It’s absurdly obscene:

  • Software Management/Helpdesk Remote Control
  • Patch Management (OS)
  • Asset Management (Inventory)
  • HIDS/HIPS
  • Firewall
  • VPN
  • Anti-Virus
  • Anti-Spyware
  • Anti-Spam
  • Browser Security Plug-ins
  • Encryption (Disk/eMail)
  • 802.1x Supplicants
  • NAC Agents
  • DLP
  • Single Sign-On
  • Forensic Agents
  • Device Control (USB, CD, etc…)

You generally have to manage each of those pieces of software independently.  So while you may not have a separate "appliance" to manage, these things live on your host; they take memory, processor, disk and sometimes step rather rudely on one another.  They represent the most basic premise of software appliances and they each generally have their own administrative console to manage them.

Granted, we’re seeing the same sort of consolidation occur on the software side with "super agent endpoints," but these pieces of bloatware can be worse than stacking individual agents up, one against the other.  Security in width (not in depth) will become our undoing, and the benefits of consolidation wear off when you end up with a "single vendor’s version of the truth" that ends up being a jack of all trades and a master of none.

To point out the obvious, this is madness.

We all know that what we need is robust protocols, strong mutual authentication, encryption, resilient operating systems and applications that don’t suck.

But because we can’t wait until the Sun explodes to get this, we need a way for these individual security components to securely communicate and interoperate using a common protocol based upon open standards.

We need to push for an ecosystem in which any device that has visibility into our data, and the network that interconnects them, can tap this messaging bus to enact a disposition, describe how one should be enacted, and communicate appropriately when doing so.
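A toy sketch of the kind of bus being described: any component (AV, HIPS, DLP, NAC agent, switch, gateway) publishes observations to named topics, and anything else on the bus can subscribe and enact a disposition. This in-process version is purely illustrative; the real thing would be a standard wire protocol, which is exactly what's missing.

```python
# Toy, in-process stand-in for the kind of open messaging bus described above:
# any component publishes observations/dispositions to named topics, and any
# other component can subscribe and act on them. A real version would be a
# standard wire protocol rather than a Python dict.
from collections import defaultdict
from typing import Callable

class SecurityBus:
    def __init__(self) -> None:
        self._subs: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        for handler in self._subs[topic]:
            handler(event)

if __name__ == "__main__":
    bus = SecurityBus()
    # The NAC agent enacts a disposition it did not produce itself...
    bus.subscribe("disposition", lambda e: print("NAC: isolating", e["host"]))
    # ...because the AV engine published what it saw to the shared bus.
    bus.publish("disposition", {"host": "10.0.0.12", "action": "quarantine",
                                "reason": "malware detected by AV engine"})
```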

We have the technology, we have the ability, we have the need.  Now all we need is the vendor gene pool to get off their duff and work together to get it done.  The only thing preventing this is GREED.

It’s a shame it seems we’ll have to wait for the Sun to explode for this, too.

/Hoff
