
Archive for the ‘Open Standards’ Category

Incomplete Thought: Why We Need Open Source Security Solutions More Than Ever…

July 17th, 2010
[Image via Wikipedia: a rightward shift in the demand curve]

I don’t have time to write a big blog post and quite frankly, I don’t need to. Not on this topic.

I do, however, feel that it’s important to bring back into consciousness how very important open source security solutions are to us — at least those of us who actually expect to make an impact in our organizations and work toward making a dent in our security problem pile.

Why do open source solutions matter so much in our approach to dealing with securing the things that matter most to us?

It comes down to things we already know but are often paralyzed to do anything about:

  1. The threat curve and the attackers’ pace of innovation outpace those of the defenders by orders of magnitude (duh)
  2. Disruptive technology and innovation dramatically impact the operational, threat and risk modeling we have to deal with (duh duh)
  3. The security industry is not in the business of solving security problems that don’t have a profit motive/margin attached to them (ugh)

We can’t do much about #1 and #2 except be early adopters, be agile/dynamic and plan for change. I’ve written about this many times and built an entire series of presentations (Security and Disruptive Innovation) that Rich Mogull and I have taken to updating over the last few years.

We can do something about #3 and we can do it by continuing to invest in the development, deployment, support, and perhaps even the eventual commercialization of open source security solutions.

To be clear, it’s not that commercialization is required for success; often it just indicates that a solution has become mainstream and valued and that money *can* be made.

When you look at why most open source project creators bring a solution to market, it’s because the solution generally is not commercially available, it solves an immediate need and it’s contributed to by a community. These are all fantastic reasons to use, support, extend and contribute back to the open source movement. Even if you don’t code, you can help by improving the roadmaps of these projects, making suggestions and promoting their use.

Open source security solutions deliver and they deliver quickly, because their roadmaps and feature integration evolve in an agile, meritocratic and vetted manner that oftentimes lacks polish but delivers immediate value, especially given their cost.

We’re stuck in a loop (or a Hamster Sine Wave of Pain) because the solutions to the problems we really need solved are not being developed by the companies that are in the best position to develop them in a timely manner. Why? Because when these emerging solutions are evaluated, they live or die by one thing: TAM (total addressable market.)

If there’s no big $$$ attached and someone can’t make the case within an organization that this is a strategic (read: revenue generating) big bet, the big companies wait for a small innovative startup to develop the technology (or an open source tool,) see if it lives long enough for market demand to drive revenues and then buy it…or sometimes develop a competitive solution.

Classic Crossing the Chasm/Geoffrey Moore stuff.

The problem here is that this cycle is horribly broken and we see perfectly awesome solutions die on the vine. Sometimes they come back to life years later, cyclically, when the pain gets big enough (and there’s money to be made) or when the “market” of products and companies consolidates, commoditizes and the capability ultimately becomes a feature.

I’ve got hundreds of examples I can give of this phenomenon — and I bet you do, too.

That’s not to say we don’t have open-source-derived success stories (Snort, Metasploit, ClamAV, Nessus, OSSEC, etc.), but we just don’t have enough of them. Further, disruptions such as virtualization and cloud computing fundamentally change the game; harnessed in conjunction with open source solutions, these platform shifts can accelerate the delivery and velocity of solutions.

I’ve also got dozens of awesome ideas that could/would fundamentally solve many attendant issues we have in security, but the timing, economics, culture, politics and readiness/appetite for adoption aren’t there commercially…they can be, however, via open source.

I’m going to start a series that identifies and highlights solutions, either available as kernel-nugget technology or as past-life approaches, that I think can and should be taken on as open source projects that could fundamentally help our cause as a community.

Maybe someone can code/create open source solutions out of them that can help us all.  We should encourage this behavior.

We need it more than ever now.

/Hoff


Redux: Patching the Cloud

September 23rd, 2009

Back in 2008 I wrote a piece titled “Patching the Cloud” in which I highlighted the issues associated with the black box ubiquity of Cloud and what that means to patching/upgrading processes:

Your application is sitting atop an operating system and underlying infrastructure that is managed by the cloud operator.  This “datacenter OS” may not be virtualized or could actually be sitting atop a hypervisor which is integrated into the operating system (Xen, Hyper-V, KVM) or perhaps reliant upon a third party solution such as VMware.  The notion of cloud implies shared infrastructure and hosting platforms, although it does not imply virtualization.

A patch affecting any one of the infrastructure elements could cause a ripple effect on your hosted applications.  Without understanding the underlying infrastructure dependencies in this model, how does one assess risk and determine what any patch might do up or down the stack?  How does an enterprise that has no insight into the "black box" model of the cloud operator set up a dev/test/staging environment that acceptably mimics the operating environment?

What happens when the underlying CloudOS gets patched (or needs to be) and blows your applications/VMs sky-high (in the PaaS/IaaS models?)

How does one negotiate the process for determining when and how a patch is deployed?  Where does the cloud operator draw the line?   If the cloud fabric is democratized across constituent enterprise customers, however isolated, how does a cloud provider ensure consistent distributed service?  If an application can be dynamically provisioned anywhere in the fabric, consistency of the platform is critical.
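To make that dependency question a bit more concrete, here’s a rough sketch, with entirely invented component names, of how one might model the patch "ripple effect" as a dependency graph so you can at least reason about what a change at one layer could touch:

```python
# A hypothetical sketch (component names invented) of modeling the patch
# "ripple effect" across a cloud stack as a simple dependency graph.
from collections import deque

# component -> the things stacked on top of it that inherit its changes
DEPENDENTS = {
    "hypervisor": ["cloud_os"],
    "cloud_os": ["guest_vm_image", "paas_runtime"],
    "paas_runtime": ["customer_app"],
    "guest_vm_image": ["customer_app"],
    "customer_app": [],
}

def blast_radius(patched_component):
    """Return every component potentially affected by patching one layer."""
    affected, queue = set(), deque([patched_component])
    while queue:
        for dependent in DEPENDENTS.get(queue.popleft(), []):
            if dependent not in affected:
                affected.add(dependent)
                queue.append(dependent)
    return affected

# Patching the provider's "cloud OS" touches everything stacked above it.
print(sorted(blast_radius("cloud_os")))
# ['customer_app', 'guest_vm_image', 'paas_runtime']
```

A real provider’s graph is far messier, of course, and the customer rarely gets to see it, which is exactly the problem.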

I followed this up with a practical example when Microsoft’s Azure services experienced a hiccup due to this very thing.  We also see wholesale changes, instantiated on a whim by Cloud providers, that can alter service functionality and availability, such as this one from Google (Published Google Documents to appear in Google search). Have you thought this through?

So now, as we witness ISPs starting to build Cloud service offerings from common Cloud OS platforms and espouse the portability of workloads (*ahem* VMs) from "internal" Clouds to Cloud Providers (and potentially multiple Cloud providers), what happens when the enterprise is at v3.1 of the Cloud OS, ISP A is at version 2.1a and ISP B is at v2.9? Portability is a cruel mistress.

Pair that little nugget with the fact that even "global" Cloud providers such as Amazon Web Services have not maintained parity in functionality/services across their regions*. The US has long had features/functions that the European region has not.  Today, in fact, AWS announced bringing infrastructure capabilities to parity for things like elastic load balancing and auto-scaling…
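As a trivial illustration of why that parity matters for workload placement, here’s a hedged sketch; the region names are real, but the feature sets are invented for the example and are not a statement of what AWS actually offered at any given time:

```python
# Hypothetical sketch of a feature-parity check before placing a workload.
# The feature sets below are invented for illustration only.
REQUIRED = {"elastic_load_balancing", "auto_scaling"}

REGION_FEATURES = {
    "us-east-1": {"elastic_load_balancing", "auto_scaling", "monitoring"},
    "eu-west-1": {"elastic_load_balancing"},  # lagging region, per the point above
}

def eligible_regions(required, regions):
    """Return the regions that offer everything the workload depends on."""
    return [name for name, features in regions.items() if required <= features]

print(eligible_regions(REQUIRED, REGION_FEATURES))  # ['us-east-1']
```

If your deployment tooling can’t answer that question, your "portable" workload isn’t.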

It’s important to understand what happens when we squeeze the balloon.

/Hoff

*corrected – I originally said “availability zones” which was in error as pointed out by Shlomo in the comments. Thanks!

Security Interoperability: Standards Are Great, Especially When They’re Yours…

September 19th, 2007

Wow, this is a rant and a half…grab a beer, you’re going to need it…

Jon Robinson pens a lovely summary of the endpoint security software sprawl discussion we’ve been chatting about lately.

My original post on the matter is here. 

Specifically, he isolates what might appear to be diametrically-opposed perspectives on the matter; mine and Amrit Williams’ from BigFix.

No good story flows without a schism-inducing polarizing galvanic component, so Jon graciously obliges by proposing to slice the issue in half with the introduction of what amounts to a discussion of open versus proprietary approaches to security interoperability between components. 

I’m not sure that this is the right starting point to frame this discussion, and I’m not convinced that Amrit and I are actually at polar ends of the discussion.  I think we’re actually both describing the same behavior in the market, and whilst Amrit works for a company that produces endpoint agents, I think he’s discussing the issue at hand in a reasonably objective manner. 

We’ll get back to this in a second.  First, let’s peel back the skin from the skeleton a little.

Dissecting the Frog
Just like in high school, this is the messy part of science class where people either reveal their dark sides as they take deep lungfuls of formaldehyde vapor and hack the little amphibian victim to bits…or run shrieking from the room.

Jon comments on Amrit’s description of the "birth of the endpoint protection platform," while I prefer to describe it as the unnatural (but predictable) abortive by-product of industrial economic consolidation. The notion here (emotionally blanketed by the almost-universal hatred for anti-virus) is that we’ll see a:

"…convergence of desktop security functionality into a single product that delivers antivirus, antispyware, personal firewall and other styles of host intrusion prevention (for example, behavioral blocking) capabilities into a single and cohesive policy-managed solution."

I acknowledge this and agree that it’s happening.  I’m not very happy about *how* it’s manifesting itself, however.  We’re just ending up with endpoint oligopolies that still fail to provide a truly integrated and holistic security solution, and when a new class of threat or vulnerability arises, we get another agent — or chunky bit grafted onto the Super Agent from some acquisition that clumsily  ends up as a product roadmap feature due to market opportunism. 

You know, like DLP, NAC, WAF… 😉

One might suggest that if the "platform" as described were an open, standards-based framework that defined how to operate and communicate, acted as a skeleton upon which to hang the muscular offerings of any vendor, and provided a methodology and communications protocol that allowed them all to work together and intercommunicate using a common nervous system, that would be excellent.

We would end up with a much lighter-weight, intelligent threat defense mechanism.  Adaptive and open, flexible and accommodating.  Modular, and imposing little overhead.
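To make that hypothetical slightly more concrete, here’s a minimal sketch of the kind of component contract such a framework might define; every name and interface here is invented, since no such standard actually exists:

```python
# Hypothetical sketch of the "common nervous system" idea: a tiny contract any
# vendor's endpoint component could implement so detection events flow over
# one shared bus. Every name and interface here is invented.
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import List


@dataclass
class SecurityEvent:
    source: str    # which component saw it, e.g. "av", "hips", "fw"
    category: str  # e.g. "malware", "anomalous_behavior"
    severity: int  # 0 (noise) to 10 (hair on fire)
    detail: str


class EndpointComponent(ABC):
    """What any vendor's module would implement to plug into the framework."""

    @abstractmethod
    def name(self) -> str:
        """A unique, stable identifier for this component."""

    @abstractmethod
    def handle(self, event: SecurityEvent) -> None:
        """React to events published by other components."""


class EventBus:
    """The shared backbone: one component publishes, the others all hear it."""

    def __init__(self) -> None:
        self._components: List[EndpointComponent] = []

    def register(self, component: EndpointComponent) -> None:
        self._components.append(component)

    def publish(self, event: SecurityEvent) -> None:
        for component in self._components:
            if component.name() != event.source:
                component.handle(event)
```

With a contract like that, a host firewall module could tighten its policy the moment the behavioral-blocking module publishes a high-severity event, regardless of which vendor wrote either piece.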

But it isn’t, and it won’t be.

Unfortunately, all the "Endpoint Protection Platform" illustrates, as I pointed out previously, is that the same consolidation issues pervasive in the network world are happening now at the endpoint.  All those network-based firewalls, IPS’s, AV gateways, IDS’s, etc. are smooshing into UTM platforms (you can call it whatever you want) and what we’re ending up with is the software equivalent of UTM on the endpoint.

SuperAgents or "Endpoint Protection Platforms" represent the desperately selfish grasping by security vendors (large and small) to remain relevant in an ever-shrinking marketspace.  Just like most UTM offerings at the network level.  Since piling up individual endpoint software hasn’t solved the problem, it must hold true that one is better than many, right?

Each of these vendors producing "Super Agent" frameworks has its own standards.  Each is battling furiously to be "THE" standard, and we’re still not solving the problem.

Man, that stinks
Jon added some color to my point that the failure to interoperate is really an economic issue, not a technical one, and to my describing "greed" as the cause.  I got a chuckle out of his response:

Hoff goes on to say that he doesn’t think we will ever see this type of interoperability among vendors because of greed. I wouldn’t blame greed though, unless by greed he means an unwillingness to collaborate because they believe their value lies in their micro-monopoly patents and their ability to lock customers in their solution. (Little do they know, that they are making themselves less valuable by doing so.) No, there isn’t any interoperability because customers aren’t demanding it.

Some might suggest that my logic is flawed and that the market demonstrates it with examples like GE booting out Symantec in favor of 350,000 seats of Sophos:

Seeking to improve manageability and reduce costs which arise from managing multiple solutions, GE will introduce Network Access Control (NAC) as well as antivirus and client firewall protection which forms part of the Sophos Security and Control solution.

Sophos CEO, Steve Munford, said companies want a single integrated agent that handles all aspects of endpoint security on each PC.                     

"Other vendors offer security suites that are little more than a bunch of separate applications bundled together, all vying for resources on the user’s computer," he said.    

"Enterprises tell us that the tide has turned, and the place for NAC and integrated security is at the endpoint."

While I philosophically don’t agree with the CEO’s comment relating to the need for a Super Agent, the last line is the most important: "…the place for…integrated security is at the endpoint."  They didn’t say Super Agent, they said "integrated."  If we had integration and interoperability, the customer wouldn’t care how many "components" it took so long as it was cost-effective and easily managed.  That’s the rub, because we don’t.

So I get the point here.  Super Agents are our only logical choice, right?  No!

I suggest that while we make progress toward secure OS’s and applications, instead of moving from tons of agents to a Super Agent, the more intelligent approach would be a graceful introduction of an interoperable framework of open-standards based protocols that allow these components to work together as the "natural" consolidation effect takes its course and markets become features.  Don’t go from one extreme to the other.

I have yet to find anyone that actually believes that deploying a monolithic magnum malware mediator that maintains a modality of mediocrity masking a monoculture  is a good idea.

…unless, of course, all you care about is taking the cost out of operationalizing security and not actually reducing risk.  For some reason, these are being positioned by people as mutually-exclusive.  The same argument holds true in the network space; in some regards we’re settling for "good enough" instead of pushing to fix the problem and not the symptoms.

If people would vote with their wallets (which *is* what the Jericho Forum does, Rich) we wouldn’t waste our time yapping about this; we’d be busy solving issues relevant to the business, not the sales quotas of security company sales reps. I guess that’s what GE did, but they had a choice.  As the biggest IT consumer on the planet (so I’ve been told,) they could have driven their vendors together instead of apart.

People are loath to think that progress can be made in this regard.  That’s a shame, because it can, and it has.  It may not be as public as you think, but there are people working really hard behind the scenes to make the operating systems, applications and protocols more secure.

As Jon points out, and many others like Ranum have said thousands of times before, we wouldn’t need half of these individual agents — or even Super Agents — if the operating systems and software were secure in the first place. 

Run, Forrest, Run!
This is where people roll their eyes and suggest that I’m copping out because I’m describing a problem that’s not going to be fixed anytime soon.  This is where they stop reading.  This is where they just keep plodding along on the Hamster Wheel of Pain and add that line item for either more endpoint agents or a roll-up to a Super Agent.

I suggest that those of you who subscribe to this theory are wrong (and probably have huge calves from all that running.)  The first evidence of this is already showing up on shelves.  It’s not perfect, but it’s a start. 

Take Vista as an example.  Love it or hate it, it *is* a more secure operating system and it features a slew of functionality that is causing dread and panic in the security industry, especially for folks like Symantec (hence the antitrust suits in the EU).  If the OS becomes secure, how will we sell our Super Agents?  ANTI-TRUST!

Let me reiterate that while we make progress toward secure OS’s and applications, instead of going from tons of agents to a Super Agent, the more intelligent approach is a graceful introduction of an interoperable framework of open-standards based protocols that allow these components to work together as the "natural" consolidation effect takes its course and markets become features.  Don’t go from one extreme to the other.

Jon sums it up with the following take on solving the interoperability problem:

In short, let the market play out, rather than relying on and hoping for central planning. If customers demand it, it will emerge. There is no reason why there can’t be multiple standards competing for market share (look at all the different web syndication standards for example). Essentially, a standard would be collaboration between vendors to make their stuff play well together so they can win business. They create frameworks and APIs to make that happen more easily in the future so they can win business easier. If customers like it, it becomes a “standard”.

At any rate, I’m sitting in the Starbucks around the corner from tonight’s BeanSec! event.  We’re going to solve world hunger tonight — I wonder if a Super Agent will do that one day, too?

/Hoff

Categories: Endpoint Security, Open Standards