
Archive for March, 2007

A Spectacular Risk Management Blog

March 1st, 2007

It's not often that I will read back through every post archived on a blog, but I must tell you that I have found kindred spirits (even if they don't know it) in Alex and Jack's RiskAnalys.is blog. Fantastic stuff. The work they have done bringing FAIR (Factor Analysis of Information Risk) to the world at large is awesome.

Something I'd like to do is relate FAIR to OCTAVE, which I have used to feed SRM systems like Skybox (because I'm obviously not busy enough…)

I’m not usually at a loss for words, but these guys really, really have an amazing grasp of the realities, vagaries and challenges of assessing, communicating and managing risk. 

Please do yourself a favor and read/subscribe to their blog and better yet, check out FAIR and Risk Management Insight’s (RMI) website.

Really great stuff.

Categories: Risk Management

My Take on the future of Vulnerability Management

March 1st, 2007

I've followed Alan Shimel's musings on the future of vulnerability assessment (VA) and found myself nodding along for the most part about where Alan sees the VA market and technology heading:

    "Over the past year, many have asked what is next for   
    VA.  I think we
are seeing the answer.  The answer is VA
    is morphing into security
configuration management."

Alan preceded this conclusion by illustrating the progression VA has taken over the lifecycle of offerings, wherein pure "scanning" VA toolsets evolved into vulnerability management (VM) suites that added reporting, remediation and integration, and ultimately into a compliance measurement mechanism.

So Alan's alluded that ultimately VA/VM was really a risk management play, and I wholeheartedly agree with this. However, I am confused as to how broadly the definition of "security configuration management (SCM)" spreads its arms around the definition of "risk management"; it seems to me that SCM is a subset of an overall risk management framework and not vice versa. Perhaps this is already clear enough to folks reading his post, but it wasn't to me.

So, to the punchline:

My vision for where VA is going aligns with Alan's except it doesn't end (or even next-step to) configuration management. It leapfrogs directly to security risk management (SRM). It's also already available in products such as Skybox and RedSeal. (Disclosure: I am on Skybox's Customer Advisory Board.)

Before you dismiss this as an ad for Skybox, please realize that I've purchased and implemented this solution in conjunction with the other tools I describe. It represents an incredible tool and methodology that provided a level of transparency and accuracy that allowed me to communicate and make decisions that were totally aligned to the most important elements within my business, which is exactly what security should do.

Skybox is the best-kept secret in the risk manager’s arsenal.  It’s an amazing product that solves some very difficult business problems that stymie security professionals due to their inability to truly communicate (in real time) the risk posture of their organization.  What do I focus on first?

SRM is defined thusly (pasted from Skybox’s website because it’s the perfect definition that I couldn’t improve upon):

IT Security Risk Management is the complete process of understanding threats, prioritizing vulnerabilities, limiting damage from potential attacks, and understanding the impact of proposed changes or patches on the target systems.
  – IT SRM Solution for Vulnerability Management, Gartner, 2005

Security Risk Management collects network infrastructure and security configurations, evaluates vulnerability scan results, maps dependencies among security devices and incorporates the business value of critical assets. SRM calculates all possible access paths, and highlights vulnerabilities that can be exploited by internal and external attackers as well as malicious worms.

By using Security Risk Management, the information overload associated with thousands of network security policies, control devices and vulnerability scans can be demystified and automated. This is accomplished by prioritizing tens of thousands of vulnerabilities into just the few that should be mitigated in order to prevent cyber attacks. The benefit is a more secure network, higher operational efficiency and reduced IT workload.

That being said, starting some 3 years ago I saw where VA/VM was headed and it was down a rathole that provided very little actionable intelligence in terms of managing risk because the VA/VM tools knew nothing of the definition or value of the assets against which the VA was performed. 

We got 600-page reports of vulnerability dumps with massive amounts of false positives. While the technology has improved, the underlying "evolution" of VA is occurring only because the information it conveys is not valuable if you want to make an informed decision. If you manage purely threats and vulnerabilities, you'll be patching forever.

Qualys, Foundstone (now McAfee) and nCircle (for example) all started to evolve their products by attaching qualitative or quantitative weighting to IT assets (or groups of them), which certainly allowed folks to dashboard the relative impact a vulnerability would have should it be exploited.
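For illustration, the arithmetic behind that kind of dashboarding is roughly what the toy sketch below does. The host names, business values, severities and vulnerability labels are invented, and this is only a sketch of the general approach, not any vendor's actual scoring algorithm.

    # Toy sketch of per-host, asset-weighted vulnerability scoring.
    # Hypothetical data; not any particular vendor's algorithm.

    # Qualitative business value assigned to each asset (1-10)
    asset_value = {
        "crm-db01": 10,   # revenue-critical database
        "web-dmz02": 6,   # public web server
        "dev-test07": 2,  # disposable test box
    }

    # Scanner findings: (host, vulnerability, severity 0-10, e.g. a CVSS-style base score)
    findings = [
        ("dev-test07", "vuln-A", 9.8),
        ("crm-db01",   "vuln-B", 6.5),
        ("web-dmz02",  "vuln-C", 7.5),
    ]

    # "Risk" = severity x business value of the affected asset
    ranked = sorted(
        ((sev * asset_value[host], host, vuln) for host, vuln, sev in findings),
        reverse=True,
    )

    for score, host, vuln in ranked:
        print(f"{score:6.1f}  {host:10s}  {vuln}")

With these made-up numbers, the medium-severity finding on the CRM database outranks the "critical" one on the disposable test box, which is exactly the dashboarding win those products delivered.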

The problem is that these impact or "risk" statements (and the ensuing compliance reporting) were still disconnected from the linked dependencies and cascading failure modalities that occur when these assets are interconnected via a complex network. These tools measure compliance and impact one vulnerability at a time, within the diameter of a single host. They don't have the context of the network, actors, skill sets or hierarchical infrastructure dependencies. Throw in dynamic routing and numerous controls and network components in between, and these models prove unrealistic and unreliable.
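To make the missing-network-context point concrete, here's a minimal sketch with an entirely hypothetical topology: a breadth-first search over the access the firewalls actually permit tells you whether an attacker can reach the vulnerable host at all, a question per-host scoring never asks.

    # Minimal sketch: can the attacker's zone actually reach the vulnerable
    # host through the permitted-access graph? (Hypothetical topology/rules.)
    from collections import deque

    # Edges represent access permitted by firewalls/ACLs between zones and hosts.
    permitted = {
        "internet":   ["web-dmz02"],
        "web-dmz02":  ["app-tier01"],
        "app-tier01": ["crm-db01"],
        "dev-test07": [],  # lab box: nothing routes to it, it reaches nothing
    }

    def reachable(src, dst):
        """Breadth-first search over the permitted-access graph."""
        seen, queue = {src}, deque([src])
        while queue:
            node = queue.popleft()
            if node == dst:
                return True
            for nxt in permitted.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return False

    for host in ("crm-db01", "dev-test07"):
        status = "exposed" if reachable("internet", host) else "not reachable"
        print(f"{host}: {status} from the internet")

Even this crude model changes the priority list: the "critical" finding on the unreachable lab box falls off the top of the queue, while a moderate one sitting on an exposed path climbs. The real products do this against thousands of rules, routes and address translations rather than a ten-line dictionary.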

These tools also assume that somehow you're able to apply the results of a risk assessment (RA) and translate groups of assets into a singular "asset" against which impact can be measured based upon the existence of a vulnerability, but not necessarily the potential for exploit; VA tools have no concept of whether a control is in place that mitigates the risk. You also have to understand and map the sub-components of impact against known elements such as confidentiality, integrity and availability.
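A rough sketch of that last point, with invented weights: impact gets decomposed against the asset's confidentiality, integrity and availability requirements, and exploit likelihood gets discounted when a compensating control already covers the vulnerability.

    # Rough sketch (invented weights): decompose impact against C/I/A and
    # discount likelihood when a mitigating control is already in place.

    # How much the business cares about each property of this asset (0-1)
    cia_requirements = {"confidentiality": 1.0, "integrity": 0.8, "availability": 0.4}

    # What the vulnerability threatens per property if exploited (0-1)
    vuln_impact = {"confidentiality": 0.9, "integrity": 0.2, "availability": 0.1}

    exploit_likelihood = 0.7    # assumed likelihood absent any controls
    mitigating_control = True   # e.g. an inline IPS signature covers this exploit

    impact = sum(cia_requirements[p] * vuln_impact[p] for p in cia_requirements)
    likelihood = exploit_likelihood * (0.2 if mitigating_control else 1.0)

    print(f"impact={impact:.2f}  likelihood={likelihood:.2f}  risk={impact * likelihood:.2f}")

A pure VA tool only sees the vulnerability; it has no slot for the compensating control or the asset's C/I/A profile, which is exactly the gap described above.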

That's exactly where Skybox and SRM come in. [From their materials]

The four-step process of SRM is a continuous, consistent, automated and repeatable framework:

  • Model the IT environment
  • Simulate access scenarios and attack paths
  • Analyze network connectivity, business risk and regulatory compliance
  • Plan optimal mitigation strategies and safe network changes

What this means is that you can take all that nifty VA/VM data, understand and model the network and the steady-state risk posture of your organization, perform "what-ifs" and ultimately understand what a change will do to your environment from not only a pure threat/vulnerability perspective, but also a risk/impact one. The accuracy of your output depends on how good your risk input data is and how up-to-date the network and vulnerability results are. Automate this and it's as close as you can get to "real-time." Let's call it "near-time."
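To give a feel for the "what-if" part, here's one more toy sketch (hypothetical hosts and rules, nothing like Skybox's actual engine): re-run the reachability analysis under a proposed firewall change and diff which vulnerable hosts become exposed.

    # Toy "what-if": which vulnerable hosts become internet-reachable if a
    # proposed firewall rule change goes in? (Hypothetical hosts and rules.)
    from collections import deque

    vulnerable_hosts = {"crm-db01", "hr-app03"}

    current_rules = {
        "internet":   ["web-dmz02"],
        "web-dmz02":  ["app-tier01"],
        "app-tier01": ["crm-db01"],
    }

    # Proposed change: open the DMZ web tier directly to the HR application.
    proposed_rules = {**current_rules, "web-dmz02": ["app-tier01", "hr-app03"]}

    def exposed(rules, src="internet"):
        """Return the vulnerable hosts reachable from src under a rule set."""
        seen, queue = {src}, deque([src])
        while queue:
            node = queue.popleft()
            for nxt in rules.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return vulnerable_hosts & seen

    before, after = exposed(current_rules), exposed(proposed_rules)
    print("newly exposed by the change:", after - before)  # -> {'hr-app03'}

That delta, priced against the business value of the assets involved, is the risk/impact view of a change described above, and it's what makes the answer to "what do I focus on first?" defensible.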

You can communicate these results at any level up and down the stack of your organization and truly show compliance as it matters not only to the spirit of a regulation or law, but also to your business (and sometimes those are different things). It slots into configuration management/security configuration management programs and interfaces with CMM models and quality and governance management frameworks such as CobiT and ITIL.

This, in my opinion, is where VA is headed — as a vital component of an intelligent risk management portfolio that is called Security Risk Management.

/Hoff