Archive for the ‘Innovation’ Category

Security Analyst Sausage Machine Firms Quash Innovation

July 10th, 2008 15 comments

Quis custodiet ipsos custodes? Who will watch the watchers?

Short and sweet and perhaps a grumpy statement of the obvious: Security Analyst Sausage Machine Firms quash innovation in vendors’ development cycles and in many cases prevent the consumer — their customers — from receiving actual solutions to real problems because of the stranglehold they maintain on what defines and categorizes a "solution."

What do I mean?

If you’re a vendor — emerging or established — and create a solution that is fantastic and solves real business problems but doesn’t fit neatly within an existing "quadrant," "cycle," "scope," or "square," you’re SCREWED.  You may sell a handful of your widgets to early adopters, but your product isn’t real unless an analyst says it is — and unless you still have money in the bank a few years later to deliver it.

If you’re a customer, you may never see that product see the light of day — and you’re the one paying membership dues to the same analyst firms to advise you on what to do!

I know that we’ve all basically dropped trow and given in to the fact that we’ve got to follow the analyst hazing rituals, but that doesn’t make it right.  It really sucks monkey balls.

What’s funny to me is that we have these huge lawsuits filed against corporations for anti-trust and unfair business practices, and there’s nobody who contests this oligopoly from the sausage machine analysts — except for other former analysts who form their own analyst firms to do battle with their former employers…but in a kinder, gentler, "advisory" capacity, of course…

Speaking of which, some of these folks who lead these practices oftentimes have never used, deployed, tested, or sometimes even seen the products they take money for and advise their clients on.  Oh, and objectivity?  Yeah, right.  If an analyst doesn’t like your idea, your product, your philosophy, your choice in clothing or you, you’re done.

This crappy system stifles innovation.  It grinds real solutions into the dirt, such that small startups that really could be "the next big thing" are now often forced to be born as seed-technology starters for larger companies to buy for M&A pennies, so the acquirers can slow-roll the IP into their roadmaps over a long time and smooth the curve once markets are "mature."

Guess who defines them as being "mature?"  Right.

Crossing the chasm?  Reaching the tipping point?  How much of that even matters anymore?

Ah, the innovator’s dilemma…

If you have a product that well and truly does X, Y and Z, where X is a feature that conforms and fits into a defined category but Y and Z — while truly differentiating and powerful — do not, you’re forced to focus on, develop around and hype X, label your product as being X, and not invest as much in Y and Z.

If you miss the market timing and can’t afford to schmooze effectively and don’t look forward enough with a business model that allows for flexibility, you may make the world’s best X, but when X commoditizes and Y and Z are now the hottest "new" square, chances are you won’t matter anymore, even if you’ve had it for years.

The product managers, marketing directors and salesfolk are forced to fit a product within an analyst’s arbitrary product definition or risk not getting traction, missing competitive analyses/comparisons, or even losing out on funding; ever try to convince a VC that they should fund you when you’re the "only one" in the space and there’s no analyst recognition of a "market?"

Yech.

A vendor’s excellent solution can simply wither and die on the vine in a battle of market definition attrition because the vendor is forced to conform and neuter a product in order to make a buck and can’t actually differentiate or focus on the things that truly make it a better solution.

Who wins here? 

Not the vendors.  Not the customers. The analysts do. 

The vendor pays them a shitload of kowtowing and money for the privilege of showing up in a box so they get recognized — and not necessarily for the things that truly matter — until the same analyst changes his/her mind and recognizes that perhaps Y and Z are "real" or creates category W, and the vicious cycle starts anew.

So while you’re a vendor struggling to make a great solution or a customer trying to solve real business problems, who watches the watchers?

/Hoff

Notes from the IBM Global Innovation Outlook: Security and Society

June 12th, 2008 No comments

This week I had the privilege to attend IBM’s Global Innovation Outlook in Chicago which focused this go-round on the topic of security and society.   This was the last in the security and society series with prior sessions held in Moscow, Berlin, and Tokyo.

The mission of the GIO is as follows:

The GIO is rooted in the belief that if we are to surface the truly revolutionary innovations of our time, the ones that will change the world for the better, we are going to need everyone’s help. So for the past three years IBM has gathered together the brightest minds on the planet — from the worlds of business, politics, academia, and non-profits – and challenged them to work collaboratively on tackling some of the most vexing challenges on earth. Healthcare, the environment, transportation.

We do this through a global series of open and candid conversations called “deep dives.” These deep dives are typically done on location. Already, 25 GIO deep dives have brought together more than 375 influencers from three dozen countries on four continents. But this year we’re taking the conversation digital, and I’m going to help make that happen.

The focus on security and society seeks to address the following:

The 21st Century has brought with it a near total redefining of the notion of security. Be it identity theft, border security, or corporate espionage, the security of every nation, business, organization and individual is in constant flux thanks to sophisticated technologies and a growing global interdependence. All aspects of security are being challenged by both large and small groups — even individuals — that have a disruptive capability disproportionate to their size or resources.

At the same time, technology is providing unprecedented ways to sense and deter theft and other security breaches.  Businesses are looking for innovative ways to better protect their physical and digital assets, as well as the best interests of their customers. Policy makers are faced with the dilemma of enabling socioeconomic growth while mitigating security threats. And each of us is charged with protecting ourselves and our assets in this rapidly evolving, increasingly confusing, global security landscape.

The mixture of skill sets, backgrounds, passions and agendas of those in attendance was intriguing and impressive.  Some of the folks we had in attendance were:

  • Michael Barrett, the CISO of PayPal
  • Chris Kelly, the CPO of Facebook
  • Ann Cavoukian, the Information & Privacy Commissioner of Ontario
  • Dave Trulio, special assistant to the president/homeland security council
  • Carol Rizzo, CTO of Kaiser Permanente
  • Mustaque Ahamad, Director, Georgia Tech Information Security Center
  • Julie Ferguson, VP of Emerging Technology, Debix
  • Linda Foley, Founder of the Identity Theft Resource Center
  • Andrew Mack, Director, Human Security Report Project, Simon Fraser University

The 24 of us, with the help of a moderator, spent the day discussing, ideating and debating various elements of security and society as we clawed our way through pressing issues and events, both current and some focused on the future state.

What was interesting to me — but not necessarily surprising — was that the discussions almost invariably found their way back to the issue of privacy, almost to the exclusion of anything else.

I don’t mean to suggest that privacy is not important — far from it — but I found that it became a black hole into which much of the potential for innovation was gravitationally lured.   Security is, and likely always will be, locked in a delicate (or not-so-delicate) struggle with the need for privacy, and privacy should certainly not take a back seat.

However, given what we experienced — where privacy became the "yeah, but" that almost kept discussions of innovation from starting — one might play devil’s advocate (and I did) and ask how we balance the issues at hand.  It was interesting to poke and prod to hear people’s reactions.

Given the backgrounds of many of the attendees, it’s not hard to see why things trended in this direction, but I don’t think we ever really got into the mode of discussing solutions instead of staying focused on the problems.

I certainly was responsible for some of that, as Dan Briody, the event’s official blogger, highlighted a phrase I used to apologize in advance for the dour note I wanted to ground us all with when I said, “I know this conversation is supposed to be about rainbows and unicorns, but the Internet is horribly, horribly broken."

My goal was to ensure we talked about the future whilst also being mindful of the past and present — I didn’t expect we’d get stuck there, however.  I was hopeful that we could get past the way things were/are in the morning and move to the way things could be in the afternoon, but it didn’t really materialize.

There was a shining moment, as Dan wrote in the blog, that I found to be the most interesting portion of the discussion, and it came from Andrew Mack.  Rather than paraphrase, I’m going to quote from Dan, who summed it up perfectly:

Andrew Mack, the Director of the Human Security Report Project at the Simon Fraser University School for International Studies in Vancouver has a long list of data that supports the notion that, historically speaking, the planet is considerably more secure today than at any time. For example, the end of colonialism has created a more stable political environment. Likewise, the end of the Cold War has removed one of the largest sources of ideological tension and aggression from the global landscape. And globalization itself is building wealth in developing countries, increasing income per capita, and mitigating social unrest.

All in all, Mack reasons, we are in a good place. There have been sharp declines in political violence, global terrorism, and authoritarian states. Human nature is to worry. And as such, we often believe that the most dangerous times are the ones in which we live. Not true. Despite the many current and gathering threats to our near- and long-term security, we are in fact a safer, more secure global society.

I really wish we had been able to spend more time exploring these social issues more deeply, in balance with the privacy and technology elements that dominated the discussion, and actually unloading the baggage to start thinking about novel ways of dealing with things 5 or 10 years out.

My feedback would be to split the sessions into two-day events.  Day one could be spent framing the problem sets and exploring the past and present.  This allows everyone to clearly define the problem space.  Day two would then focus on clearing the slate and mindmapping the opportunities for innovation and change to solve the challenges defined in day one.

In all, it was a great venue and I met some fantastic people and had great conversation.  I plan to continue to stay connected and work towards proposing and crafting solutions to some of the problems we discussed.

I hope I made a difference in a good way.

/Hoff


It’s Virtualization March Madness! Up First, Montego Networks

March 27th, 2008 No comments

If you want to read about Montego Networks right off the bat, you can skip the Hoff-Tax and scroll down to the horizontal rule and start reading.  Though I’ll be horribly offended, I’ll understand…

I like being contradictory, even when it appears that I’m contradicting myself.  I like to think of it as giving a balanced perspective on my schizophrenic self…

You will likely recall that my latest post suggested that the real challenge for virtualization at this stage in the game is organizational and operational and not technical. 

Well, within the context of this post, that’s obviously half right, but it’s an incredibly overlooked fact that is causing distress in most organizations, and it’s something that technology — as a symptom of the human condition — cannot remedy.

But back to the Tech.

The reality is that for reasons I’ve spoken of many times, our favorite ISV’s have been a little handicapped by what the virtualization platforms offer up in terms of proper integration against which we can gain purchase from a security perspective.  They have to sell what they’ve got while trying to remain relevant all the while watching the ground drop out beneath them.

These vendors have a choice: employ some fancy marketing messaging to make it appear as though the same products you run on a $50,000+ dedicated security appliance will actually perform just as well in a virtual form.

Further, tell you that you’ll enjoy just as much visibility without disclosing limitations when interfaced to a virtual switch that makes it next to impossible to replicate most complex non-virtualized topologies. 

Or, just wait it out and see what happens hoping to sell more appliances in the meantime.

Some employ all three strategies (with a fourth being a little bit of hope.)

Some of that hoping is over and is on its way to being remedied with enablers like VMware’s VMsafe initiative.  It’s a shame that we’ll probably end up with a battle of API’s with ISV’s having to choose which virtualization platform providers’ API to support rather than a standard across multiple platforms.

Simon Crosby from Xen/Citrix made a similar comment in this article.

While I totally agree with his sentiment, I’m not sure Simon would be as vocal or egalitarian had Citrix been first out of the gate with their own VMsafe equivalent.  It’s always sad when one must plead for standardization when one isn’t in control of the standards…and by the way, Simon, nobody held a gun to the heads of the 20 companies that rushed for the opportunity to be the first out of the gate with VMsafe as it’s made available.

While that band marches on, some additional measure of aid may come from innovative youngbloods looking to build and sell you the next better mousetrap.


As such, in advance of the RSA Conference in a couple of weeks, the security world’s all aflutter with the sounds of start-ups being born out of stealth as well as new-fangled innovation clawing its way out of up-starts seeking to establish a beachhead in the attack on your budget.

With the normal blitzkrieg of press releases that will undoubtedly make their way to your doorstep, I thought I’d comment on a couple of these companies in advance of the noise.

A lot of what I want to say is sadly under embargo, but I’ll get further in-depth later when I’m told I can take the wraps off.  You should know that almost all of these emerging solutions, as with the one below, operate as virtual appliances inside your hosts and require close and careful configuration of the virtual networking elements therein.

If you go back to the meat of the organization/operational issue I describe above, who do you think has access and control over the virtual switch configurations?  The network team?  The security team?  How about the virtual server admin. team…are you concerned yet?

Here’s my first Virtualized March Madness (VMM, get it!) ISV:

  • Montego Networks – John Peterson used to be the CTO at Reflex, so he knows a thing or two about switching, virtualization and security.  I very much like Montego’s approach to solving some of the networking issues associated with vSwitch integration and better yet, they’ve created a very interesting business model that actually is something like VMsafe in reverse. 

    Essentially Montego’s HyperSwitch works in conjunction with the integrated vSwitch in the VMM and uses some reasonably elegant networking functionality to classify traffic and either enforce dispositions natively using their own "firewall" technologies (L2-L4) or — and this is the best part — redirect traffic to other named security software partners to effect disposition.  (I’ve sketched the general classify-and-redirect pattern in code just after this list.)

    If you look on Montego’s website, you’ll see that they show StillSecure and BlueLane as candidates for what they call HyperVSecurity partners.  They also do some really cool stuff with NetFlow.

    Neat model.  When VMsafe is available, Montego should then allow these other third party ISV’s to take advantage of VMsafe (by virtue of the HyperSwitch) without the ISV’s having to actually modify their code to do so – Montego will build that to suit.  There’s a bunch of other stuff that I will write about once the embargo is lifted.

    I’m not sure how much runway and strategic differentiation Montego will have from a purely technical perspective as VMsafe ought to level the playing field for some of the networking functionality with competitors, but the policy partnering is a cool idea. 

    We’ll have to see what the performance implications are given the virtual appliance model Montego (and everyone else) has employed.  There’s lots of software in them thar hills doing the flow/packet processing and enacting dispositions…and remember, that’s all virtualized too.

    In the long term, I expect we’ll see some of this functionality appear natively in other virtualization platforms.

    We’ll see how well that prediction works out over time as well as keep an eye out for that Cisco virtual switch we’ve all been waiting for…*
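
Since the interesting details are under embargo, here’s a purely hypothetical Python sketch — emphatically not Montego’s actual implementation — of the general classify-and-redirect pattern described in the list above: classify a flow on L2-L4 attributes, enforce allow/drop dispositions natively, and hand anything else off to a named partner engine.  Every rule, name and field here is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Flow:
    """A toy L2-L4 flow tuple; real classification would come from the vSwitch."""
    src_ip: str
    dst_ip: str
    proto: str     # "tcp", "udp", ...
    dst_port: int

# Hypothetical policy table: (match predicate, disposition).
# "allow"/"drop" are enforced natively; anything else names a
# partner inspection engine the flow gets redirected to.
POLICY = [
    (lambda f: f.proto == "tcp" and f.dst_port == 22,  "drop"),
    (lambda f: f.proto == "tcp" and f.dst_port == 80,  "partner:ips_engine"),
    (lambda f: f.dst_port == 443,                      "allow"),
]

def classify(flow: Flow) -> str:
    """First-match classification; unmatched flows fall through to a default."""
    for predicate, disposition in POLICY:
        if predicate(flow):
            return disposition
    return "allow"  # a real product would make the default configurable

def dispose(flow: Flow) -> None:
    d = classify(flow)
    if d in ("allow", "drop"):
        print(f"natively enforced: {d} -> {flow}")
    else:
        engine = d.split(":", 1)[1]
        print(f"redirected to partner '{engine}' -> {flow}")

if __name__ == "__main__":
    dispose(Flow("10.0.0.5", "10.0.0.9", "tcp", 80))   # redirected to partner
    dispose(Flow("10.0.0.5", "10.0.0.9", "tcp", 22))   # dropped natively
```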

I’ll be shortly talking about Altor Networks and Blue Lane’s latest goodies.

If you’ve got a mousetrap you’d like to see in lights here, feel free to ping me, tell me why I should care, and we’ll explore your offering.  I guarantee that if it passes the sniff test here it will likely mean someone else will want a whiff.

/Hoff

* Update: Alan over at the Virtual Data Center Blog did a nice write-up on his impressions and asks why this functionality isn’t in the vSwitch natively.  I’d pile onto that query, too.  Also, I sort of burned myself by speaking to Montego: the details of how they do what they do are under embargo for a little while longer based on my conversation, so I can’t respond to Alan…

I Love the Smell of Big Iron In the Morning…

March 9th, 2008 1 comment

Does Not Compute…

I admit that I’m often fascinated by the development of big iron and I also see how to some this seems at odds with my position that technology isn’t the answer to the "security" problem.  Then again, it really depends on what "question" is being asked, what "problem" we’re trying to solve and when we expect to solve them.

It’s pretty clear that we’re still quite some time off from having secure code, solid protocols, brokered authentication and encryption and information-centric based controls that provide the assurance dictated by the policies described by the information itself. 

In between now and then, we see the evolution of some very interesting "solutions" from those focused on the network and host perspectives.  It’s within this bubble that things usually get heated between those proponents who argue that innovation in networking and security is constrained to software versus those who maintain that the way to higher performance, efficacy and coverage can only be achieved with horsepower found in hardware.

I always find it interesting that the networking front prompts argument in this vein, but nobody seems to blink when we see continued development in mainframes — even in this age of Web3.0, etc.  Take IBM’s Z10, for example.  What’s funny is that a good amount of the world still ticks due to big iron in the compute arena despite the advances of distributed systems, SOA, etc., so why do we get so wrapped up when it comes to big iron in networking or security?

I dare you to say "value." 😉

I’ve had this argument on many fronts with numerous people and realized that in most cases what we were missing was context.  There is really no argument to "win" here, but rather a need for examination of what most certainly is a manifest destiny of our own design and the "natural" phenomena associated with punctuated equilibrium.

An Example: Cisco’s New Hardware…and Software to Boot [it.]

Both camps in the above debate would do well to consider the amount of time and money a bellwether in this space — Cisco —  is investing in a balanced portfolio of both hardware and software. 

If we start to see how the pieces are being placed on Cisco’s chess board, it makes for some really interesting scenarios.

Many will look at these developments and simply dismiss them as platforms that will only solve the very most demanding of high-end customers and that COTS hardware trumps the price/performance index when compared with specialty high-performance iron such as this. 

This is a rather short-sighted perspective and one that cyclically has proven inaccurate.   

The notion of hardware versus software superiority is a short term argument which requires context.  It’s simply silly to argue one over the other in generalities.  If you’d like to see what I mean, I refer you once again to Bob Warfield’s "Multi-Core Crisis" meme.  Once we hit cycle limits on processors we always find that memory, bus and other I/O contention issues arise.  It ebbs and flows based upon semiconductor fabrication breakthroughs and the evolution and ability of software and operating systems to take advantage of them.

Toss a couple of other disruptive and innovative technologies into the mix and the landscape looks a little more interesting. 

It’s All About the Best Proprietary Open Platforms…

I don’t think anyone — including me at this point — will argue that a good amount of "security" will just become a checkbox in (and I’ll use *gasp* Stiennon’s language) the "fabric."  There will always be point solutions to new problems that will get bolted on, but most of the security solutions out there today are becoming features before they mature to markets due to this behavior.

What’s interesting to me is where the "fabric" is and in what form it will take. 

If we look downrange and remember that Cisco has openly discussed its strategy of de-coupling its operating systems from hardware in order to provide for a more modular and adaptable platform strategy, all this investment in hardware may indeed seem to support this supposition.

If we also understand Cisco’s investment in virtualization (a-la VMware and IOS-XE) as well as how top-side investment trickles down over time, one could easily see how circling the wagons around both hardware for high-end core/service provider platforms [today] and virtualized operating systems for mid-range solutions will ultimately yield greater penetration and coverage across markets.

We’re experiencing a phase shift in the periodic oscillation associated with where in the stack networking vendors see an opportunity to push their agenda, and if you look at where virtualization and re-perimeterization are pushing us, the "network is the computer" axiom is beginning to take shape again. 

I find the battle for the datacenter OS between the software-based virtualization players and the hardware-based networking and security giants absolutely delicious, especially when you consider that the biggest in the latter camp (Cisco) is investing in the biggest of the former (VMware.)

They’re both right.  In the long term, we’re all going to end up with 4-5 hypervisors in our environments supporting multiple modular, virtualized and distributed "fabrics."  I’m not sure that any of that is going to get us close to solving the real problems, but if you’re in the business of selling tin or the wrappers that go on it, you can smile…

Imagine a blade server from your favorite vendor with embedded virtualization capabilities coupled with dedicated network processing hardware supporting your favorite routing/switching vendor’s networking code and running any set of applications you like — security or otherwise — with completely virtualized I/O functions forming a grid/utility compute model.*

Equal parts hardware, software, and innovation.  Cool, huh?

Now, about that Information-Centricity Problem…

*The reality is that this is what attracted me to Crossbeam: custom-built high-speed networking hardware, generic compute stacks based on Intel-reference designs, both coupled with a Linux-based operating system that supports security applications from multiple sources as an on-demand scalable security services layer virtualized across the network.

Trouble is, others have caught on now…

VMware’s VMsafe: Security Industry Defibrillator… Making Dying Muscle Twitch Again.

March 2nd, 2008 6 comments

Nurse, 10 cc’s of Adrenalin, stat!

As I mentioned in a prior posting, VMware’s VMsafe has the potential to inject life back into the atrophied and withering heart muscle of the security industry and raise the prognosis from DOA to the potential for a vital economic revenue stream once more.

How?  Well, the answer to this question really comes down to whether you believe that keeping a body on assisted life support means that the patient is living or simply alive, and the same perspective goes for the security industry.

With the inevitable consolidation of solutions and offerings in the security industry over the last few years, we have seen the commoditization of many markets as well as the natural emergence of others in response to the ebb and flow of economic, technological, cultural and political forces.

One of the most impacting disruptive and innovative forces that is causing arrhythmia in the pulse of both consumers and providers and driving the emergence of new market opportunities is virtualization. 

For the last two years, I’ve been waving my hands about the fact that virtualization changes everything across the information lifecycle.  From cradle to grave, the evolution of virtualization will profoundly change what, where, why and how we do what we do.

I’m not claiming that I’m the only one, but it was sure lonely from a general security practitioner’s perspective up until about six months ago.  In the last four months, I’ve given two keynotes and three decently visible talks on VirtSec, and I have 3-4 more teed up over the next 3 months, so somebody’s interested…better late than never, I suppose.

How’s the patient?

For the purpose of this post, I’m going to focus on the security implications of virtualization and simply summarize by suggesting that virtualization up until now has quietly marked a tipping point where we see the disruption stretch security architectures and technologies to their breaking point and in many cases make much of our invested security portfolio redundant and irrelevant.

I’ve discussed why and how this is the case in numerous posts and presentations, but it’s clear (now) to most that the security industry has been clearly out of phase with what has plainly been a well-signaled (r)evolution in computing.

Is anyone really surprised that we are caught flat-footed again?  Sorry to rant, but…

This is such a sorry indicator of why things are so terribly broken with "IT/Information Security" as it stands today; we continue to try and solve short term problems with even shorter term "solutions" that do nothing more than perpetuate the problem — and we do so in such a horrific display of myopic dissonance that it’s a wonder we function at all.   Actually, it’s a perfectly wonderful explanation as to why criminals are always 5 steps ahead — they plan strategically while acting tactically against their objectives, and they aren’t afraid to respond to their customers proactively.

So, we’ve got this fantastic technological, economic, and cultural transformation occurring over the last FIVE YEARS (at least), and the best we’ve seen as a response from most traditional security vendors is to thinly market their solutions as "virtualization ready" or "virtualization aware" when in fact these are simply hollow words for how to make their existing "square" products fit into the "round" holes of a problem space that virtualization exposes and creates.

Firewalls, IDS/IPSs, UTM, NAC, DLP — all of them have limited visibility in this rapidly "re-perimeterized" universe in which our technology operates, and in most cases we’re busy looking at uninteresting and practically non-actionable things anyway.  As one of my favorite mentors used to say, "we’re data rich, but information poor."

The vendors in these example markets — with or without admission — are all really worried about what virtualization will do to their already shrinking relevance.  So we wait.

Doctor, it hurts when I do this…

VMSafe represents a huge opportunity for these vendors to claw their way back to life, making their solutions relevant once more, and perhaps even more so.

Most of the companies who have so far signed on to VMsafe will, as I mentioned previously, need to align roadmaps and release new or modified versions of their product lines to work with the new API’s and management planes. 

This is obviously a big deal, but one that is unavoidable for these companies — most of which are clumsy and generally not agile or responsive to third parties.  However, you don’t get 20 of some of the biggest "monoliths" of the security world scrambling to sign up for a program like VMsafe just for giggles — and the reality is that the platform version of VMware’s virtualization products that will support this technology isn’t even available yet.

I am willing to wager that you will, in extremely short time given VMware’s willingness to sign on new partners, see many more vendors flock to the program.  I further maintain that despite their vehement denial, NAC vendors (with pressure already from the oncoming tidal wave of Microsoft’s NAP) will also adapt their wares to take advantage of this technology for reasons I’ve outlined here.

They literally cannot afford not to.

I am extremely interested in what other virtualization vendors’ responses will be — especially Citrix.  It’s pretty clear what Microsoft has in mind.  It’s going to further open up opportunities for networking vendors such as Cisco, f5, etc., and we’re going to see the operational, technical, administrative, "security" and governance lines  blur even further.

Welcome back from the dead, security vendors, you’ve got a second chance in life.  I’m not sure it’s warranted, but it’s "natural" even though we’re going to end up with a very interesting Frankenstein of a "solution" over the long term.

The Doctor prescribes an active lifestyle, healthy marketing calisthenics, a diet with plenty of roughage, and jumping back on the hamster wheel of pain for exercise.

/Hoff

Pondering Implications On Standards & Products Due To Cold Boot Attacks On Encryption Keys

February 22nd, 2008 4 comments

You’ve no doubt seen the latest handiwork of Ed Felten and his team from the Princeton Center for Information Technology Policy regarding cold boot attacks on encryption keys:

Abstract: Contrary to popular assumption, DRAMs used in most modern computers retain their contents for seconds to minutes after power is lost, even at operating temperatures and even if removed from a motherboard. Although DRAMs become less reliable when they are not refreshed, they are not immediately erased, and their contents persist sufficiently for malicious (or forensic) acquisition of usable full-system memory images. We show that this phenomenon limits the ability of an operating system to protect cryptographic key material from an attacker with physical access. We use cold reboots to mount attacks on popular disk encryption systems — BitLocker, FileVault, dm-crypt, and TrueCrypt — using no special devices or materials. We experimentally characterize the extent and predictability of memory remanence and report that remanence times can be increased dramatically with simple techniques. We offer new algorithms for finding cryptographic keys in memory images and for correcting errors caused by bit decay. Though we discuss several strategies for partially mitigating these risks, we know of no simple remedy that would eliminate them.
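
The team’s actual key-finding algorithms exploit structure (e.g., verifying AES key-schedule relationships) rather than anything this naive, but as an illustration of why key material sitting in RAM is findable at all, here’s a hedged Python sketch of a crude entropy scan over a memory image.  Treat it as a toy, not the paper’s method:

```python
import math

def shannon_entropy(window: bytes) -> float:
    """Bits per byte of a window; uniform random data approaches the max."""
    counts = [0] * 256
    for b in window:
        counts[b] += 1
    total = len(window)
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

def candidate_keys(dump: bytes, width: int = 32, threshold: float = 4.5, step: int = 16):
    """Yield offsets whose contents look 'too random' to be code or text.

    A 32-byte window can hold at most log2(32) = 5 bits/byte of empirical
    entropy, so the threshold sits just under that ceiling.  Real tools use
    structural tests (e.g., checking for a valid AES key schedule) to cut
    the false positives this naive filter inevitably produces.
    """
    for off in range(0, len(dump) - width, step):
        if shannon_entropy(dump[off:off + width]) >= threshold:
            yield off

if __name__ == "__main__":
    import os
    # Toy "memory image": zeros, a 32-byte random key, then boring text.
    image = b"\x00" * 4096 + os.urandom(32) + b"ASCII text..." * 100
    for off in candidate_keys(image):
        print(f"candidate key material at offset {off:#x}")
```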

Check out the accompanying video.  Fascinating and scary stuff.

Would a TPM implementation mitigate this if the keys weren’t stored (even temporarily) in RAM?

Given the surge lately toward full disk encryption products, I wonder how the market will react to this.  I am interested in both the broad industry impact and response from vendors.  I won’t be surprised if we see new products crop up in a matter of days advertising magical defenses against such attacks as well as vendors scrambling to do damage control.

This might be a bit of a reach, but equally as interesting to me are the potential implications upon DoD/Military crypto standards such as FIPS 140-2 (I believe the draft of 140-3 is circulating…)  In the case of certain products at specific security levels, it’s obvious based on the video that one wouldn’t necessarily need physical access to a crypto module (or RAM) in order to potentially attack it.

It’s always amazing to me when really smart people think of really creative, innovative and (in some cases) obvious ways of examining what we all take for granted.

Security Innovation & the Bendy Hammer

February 17th, 2008 4 comments

See that odd-looking hammer to the left?  It’s called the MaxiStrike from Redback Tools.

No, it hasn’t been run over by a Panzer, nor was there grease on the lens  during the photography session. 

Believe it or not, that odd little bend endows this 20-ounce hammer with the following features:

     > maximize strike force

     > reduce missed hits

     > leave clearance for nailing in cramped areas

All from that one little left hand turn from linear thought in product design.

You remember that series of posts I did on Disruptive Innovation?

This is a perfect illustration of how innovation can be "evolutionary" as opposed to revolutionary.

Incrementalism can have just as much impact as one of those tipping-point "big-bang" events that have desensitized us to some of the really cool things that pop up and can actually make a difference.

So I know this hammer isn’t going to cure cancer, but it makes for easier, more efficient and more accurate nailing.  Sometimes that’s worth a hell of a lot to someone who does a lot of hammering…

Things like this happen around us all the time — even in our little security puddle of an industry. 

It’s often quite fun when you spot them.

I bet if you tried, you could come up with some examples in security.

Well?

Security Today == Shooting Arrows Through Sunroofs of Cars?

February 7th, 2008 14 comments

In this Dark Reading post, Peter Tippett, described as the inventor of what is now Norton Anti-virus, suggests that the bulk of InfoSec practices are "…outmoded or outdated concepts that don’t apply to today’s computing environments."

As I read through this piece, I found myself flip-flopping between violent agreement and incredulous eye-rolling from one paragraph to the next, caused somewhat by the overuse of hyperbole in some of his analogies.  This was disappointing, but overall, I enjoyed the piece.

Let’s take a look at Peter’s comments:

For example, today’s security industry focuses way too much time on vulnerability research, testing, and patching, Tippett suggested. "Only 3 percent of the vulnerabilities that are discovered are ever exploited," he said. "Yet there is huge amount of attention given to vulnerability disclosure, patch management, and so forth."

I’d agree that the "industry" certainly focuses their efforts on these activities, but that’s exactly the mission of the "industry" that he helped create.  We, as consumers of security kit, have perpetuated a supply-driven demand security economy.

There’s a huge amount of attention paid to vulnerabilities, patching and prevention that doesn’t prevent, because at this point, that’s all we’ve got.  Until we start focusing on the root cause rather than the symptoms, this is a cycle we won’t break.  See my post titled "Sacred Cows, Meatloaf, and Solving the Wrong Problems" for an example of what I mean.


Tippett compared vulnerability research with automobile safety research. "If I sat up in a window of a building, I might find that I could shoot an arrow through the sunroof of a Ford and kill the driver," he said. "It isn’t very likely, but it’s possible.

"If I disclose that vulnerability, shouldn’t the automaker put in some sort of arrow deflection device to patch the problem? And then other researchers may find similar vulnerabilities in other makes and models," Tippett continued. "And because it’s potentially fatal to the driver, I rate it as ‘critical.’ There’s a lot of attention and effort there, but it isn’t really helping auto safety very much."

What this really means — and Peter doesn’t really ever state — is that mitigating vulnerabilities in the absence of threat, impact or probability is a bad thing.  This is why I make such a fuss about managing risk instead of mitigating vulnerabilities.  If there were millions of malicious archers firing arrows through the sunroofs of unsuspecting Ford Escort drivers, then the ‘critical’ rating would be relevant given the probability and impact of all those slings and arrows of thine enemies…

Tippett also suggested that many security pros waste time trying to buy or invent defenses that are 100 percent secure. "If a product can be cracked, it’s sometimes thrown out and considered useless," he observed. "But automobile seatbelts only prevent fatalities about 50 percent of the time. Are they worthless? Security products don’t have to be perfect to be helpful in your defense."

I like his analogy and the point he’s trying to underscore.  What I find in many cases is that the binary evaluation of security efficacy — in products and programs — still exists.  In the absence of measuring the actual impact that something has on one’s risk posture, people revert to a non-gradient scale of 0% or 100%, insecure or secure.  Is being "secure" really important, or is managing to a level of risk that is acceptable — with or without losses — the really relevant measure of success?

This concept also applies to security processes, Tippett said. "There’s a notion out there that if I do certain processes flawlessly, such as vulnerability patching or updating my antivirus software, that my organization will be more secure. But studies have shown that there isn’t necessarily a direct correlation between doing these processes well and the frequency or infrequency of security incidents.

"You can’t always improve the security of something by doing it better," Tippett said. "If we made seatbelts out of titanium instead of nylon, they’d be a lot stronger. But there’s no evidence to suggest that they’d really help improve passenger safety."

I would like to see these studies.  I think that companies who have rigorous, mature and transparent processes that they execute "flawlessly" may not be more "secure" (a measurement I’d love to see quantified), but they are in a much better position to respond and recover when (not if) an event occurs.  Based upon the established corollary that we can’t be 100% "secure" in the first place, we know we’re going to have incidents.

Being able to recover from them or continue to operate while under duress is more realistic and important in my view.  That’s the point of information survivability.


Security teams need to rethink the way they spend their time, focusing on efforts that could potentially pay higher security dividends, Tippett suggested. "For example, only 8 percent of companies have enabled their routers to do ‘default deny’ on inbound traffic," he said. "Even fewer do it on outbound traffic. That’s an example of a simple effort that could pay high dividends if more companies took the time to do it."

I agree.  Focusing on efforts that eliminate entire classes of problems based upon reducing risk is a more appropriate use of time, money and resources.
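
To make the "default deny" point concrete, here’s a tiny Python model of the posture Tippett is advocating — enumerate the handful of inbound services you actually expose and let everything else fall through to an implicit deny.  The rule set is invented for illustration and obviously isn’t any particular router’s configuration language:

```python
# Hypothetical inbound ruleset: (protocol, port) pairs explicitly permitted.
PERMITTED_INBOUND = {
    ("tcp", 25),   # mail
    ("tcp", 443),  # https
}

def inbound_allowed(protocol: str, port: int) -> bool:
    """Permit list with an implicit 'deny any any' at the end.

    This inverts the common default-permit posture: instead of enumerating
    what to block, you enumerate the few services you actually expose, and
    everything else — including whatever attack shows up next year — is
    dropped for free.
    """
    return (protocol, port) in PERMITTED_INBOUND

for pkt in [("tcp", 443), ("tcp", 3389), ("udp", 31337)]:
    verdict = "permit" if inbound_allowed(*pkt) else "deny (default)"
    print(pkt, "->", verdict)
```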

Security awareness programs also offer a high rate of return, Tippett said. "Employee training sometimes gets a bad rap because it doesn’t alter the behavior of every employee who takes it," he said. "But if I can reduce the number of security incidents by 30 percent through a $10,000 security awareness program, doesn’t that make more sense than spending $1 million on an antivirus upgrade that only reduces incidents by 2 percent?"

Nod.  That was the point of the portfolio evaluation process I gave in my disruptive innovation presentation:

24. Provide Transparency in portfolio effectiveness

I didn’t invent this graph, but it’s one of my favorite ways of visualizing my investment portfolio by measuring in three dimensions: business impact, security impact and monetized investment.  All of these definitions are subjective within your organization (as well as how you might measure them.)

The Y-axis represents the "security impact" that the solution provides.  The X-axis represents the "business impact" that the solution provides, while the size of the dot represents the capex/opex investment made in the solution.

Each of the dots represents a specific solution in the portfolio.

If you have a solution that is a large dot toward the bottom-left of the graph, one has to question the reason for continued investment since it provides little in the way of perceived security and business value with high cost.   On the flipside, if a solution is represented by a small dot in the upper-right, the bang for the buck is high, as is the impact it has on the organization.

The goal would be to get as many of your investments in your portfolio from the bottom-left to the top-right with the smallest dots possible.

This transparency and the process by which the portfolio is assessed is delivered as an output of the strategic innovation framework, which is really comprised of part art and part science.
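
If you want to draw this graph yourself, it’s a short exercise; here’s a hedged matplotlib sketch with completely made-up portfolio data (the solution names, scores and spend are placeholders — substitute however your organization actually measures these):

```python
import matplotlib.pyplot as plt

# Invented example portfolio: (name, business impact, security impact,
# annualized capex+opex in $K).  Scores are 0-10 on whatever subjective
# scale your organization agrees on.
portfolio = [
    ("AV suite",        2, 3, 900),
    ("Email filtering", 6, 5, 250),
    ("Awareness prog.", 7, 6,  10),
    ("NAC rollout",     3, 4, 600),
    ("DLP pilot",       8, 7, 150),
]

fig, ax = plt.subplots()
for name, biz, sec, spend in portfolio:
    ax.scatter(biz, sec, s=spend, alpha=0.5)        # dot size ~ investment
    ax.annotate(name, (biz, sec), ha="center", va="center", fontsize=8)

ax.set_xlabel("Business impact")
ax.set_ylabel("Security impact")
ax.set_xlim(0, 10)
ax.set_ylim(0, 10)
ax.set_title("Security investment portfolio (bubble size = spend)")
plt.show()
```

Big bubbles stuck in the lower-left are your conversation starters at budget time.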

All in all, a good read from someone who helped create the monster and is now calling it ugly…

/Hoff

Thinning the Herd & Chlorinating the Malware Gene Pool…

December 28th, 2007 3 comments

Alan Shimel pointed us to an interesting article written by Matt Hines in his post here regarding the "herd intelligence" approach toward security.  He followed it up here. 

All in all, I think both the original article that Andy Jaquith was quoted in and Alan’s interpretations shed an interesting light on a problem-solving perspective.

I’ve got a couple of comments on Matt and Alan’s scribbles.

I like the notion of swarms/herds.  The picture to the right from Science News describes the notion of "rapid response," wherein "mathematical modeling is explaining how a school of fish can quickly change shape in reaction to a predator."  If you’ve ever seen this in the wild or even in film, it’s an incredible thing to see in action.

It should then come as no surprise that I think that trying to solve the "security problem" is more efficiently performed (assuming one preserves the current construct of detection and prevention mechanisms) by distributing both functions and coordinating activity as part of an intelligent "groupthink" even when executed locally.  This is exactly what I was getting at in my "useful predictions" post for 2008:

Grid and distributed utility computing models will start to creep into security

A really interesting by-product of the "cloud compute" model is that as data, storage, networking, processing, etc. get distributed, so shall security.  In the grid model, one doesn’t care where the actions take place so long as service levels are met and the experiential and business requirements are delivered.  Security should be thought of in exactly the same way.

The notion that you can point to a physical box and say it performs function ‘X’ is so last Tuesday.  Virtualization already tells us this.  So, imagine if your security processing isn’t performed by a monolithic appliance but instead is contributed to in a self-organizing fashion wherein the entire ecosystem (network, hosts, platforms, etc.) all contribute in the identification of threats and vulnerabilities as well as function to contain, quarantine and remediate policy exceptions.

Sort of sounds like that "self-defending network" schpiel, but not focused on the network and with common telemetry and distributed processing of the problem.

Check out Red Lambda’s cGrid technology for an interesting view of this model.

This basically means that we should distribute the sampling, detection and prevention functions across the entire networked ecosystem, not just to dedicated security appliances; each of the end nodes should communicate using a standard signaling and telemetry protocol so that common threat, vulnerability and effective disposition can be communicated up and downstream to one another and one or more management facilities.
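
As a strawman of what that standard signaling and telemetry protocol might carry, here’s a hedged Python sketch of a minimal per-node telemetry record.  The field names are invented; any real effort would presumably ride on an existing schema and transport rather than this toy:

```python
import json
import time
import uuid

def make_telemetry_record(node_id: str, indicator: str,
                          disposition: str, confidence: float) -> str:
    """Serialize one observation an end node would publish to the 'herd'.

    disposition: what this node already did locally ("observed",
    "quarantined", "blocked"), so peers can weigh the report when
    deciding on their own dispositions.
    """
    record = {
        "record_id": str(uuid.uuid4()),
        "node_id": node_id,
        "timestamp": time.time(),
        "indicator": indicator,        # e.g., hash or URL of the sample
        "disposition": disposition,
        "confidence": confidence,      # 0.0-1.0, this node's own estimate
    }
    return json.dumps(record)

# A node seeing a suspicious sample shares it up/downstream:
print(make_telemetry_record(
    node_id="host-042",
    indicator="sha1:da39a3ee5e6b4b0d3255bfef95601890afd80709",
    disposition="quarantined",
    confidence=0.83,
))
```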

This is what Andy was referring to when he said:

As part of the effort, security vendors may also need to begin sharing more of that information with their rivals to create a larger network effect for thwarting malware on a global basis, according to the expert.

It may be hard to convince rival vendors to work together because of the perception that it could lessen differentiation between their respective products and services, but if the process clearly aids on the process of quelling the rising tide of new malware strains, the software makers may have little choice other than to partner, he said.

Secondly, Andy suggested that basically every end-node would effectively become its own honeypot:

"By
turning every endpoint into a malware collector, the herd network
effectively turns into a giant honeypot that can see more than existing
monitoring networks," said Jaquith. "Scale enables the herd to counter
malware authors’ strategy of spraying huge volumes of unique malware
samples with, in essence, an Internet-sized sensor network."

I couldn’t agree more!  This is the sort of thing that I was getting at back in August when I was chatting with Lance Spitzner regarding using VM’s for honeypots on distributed end nodes:

I clarified that what I meant was actually integrating a HoneyPot running in a VM on a production host as part of a standardized deployment model for virtualized environments.  I suggested that this would integrate into the data collection and analysis models the same way as a "regular" physical HoneyPot machine, but could utilize some of the capabilities built into the VMM/HV’s vSwitch to actually virtualize a single HoneyPot across an entire collection of VM’s on a single physical host.

Thirdly, the notion of information sharing across customers has been implemented cross-sectionally in industry verticals with the advent of the ISAC’s, such as the Financial Services Information Sharing and Analysis Center, which seeks to inform and ultimately leverage distributed information gathering and sharing to protect its subscribing members.  Generally-available services like Symantec’s DeepSight have also tried to accomplish similar goals.

Unfortunately, these offerings generally lack the capacity to garner ubiquitous data gathering and real-time enforcement capabilities.

As Matt pointed out in his article, gaining actionable intelligence on the monstrous amount of telemetric data from participating end nodes means that there is a need to really prune for false positives.  This is the trade-off between simply collecting data and actually applying intelligence at the end-node and effecting disposition. 

This requires technology that we’re starting to see emerge with a small enough footprint when paired with the compute power we have in endpoints today. 

Finally, as the "network" (which means the infrastructure as well as the "extrastructure" delivered by services in the cloud) gains more intelligence and information-centric granularity, it will pick up some of the slack — at least from the perspective of sloughing off the low-hanging fruit by using similar concepts.

I am hopeful that as we gain more information-centric footholds, we shouldn’t actually be worried about responding to every threat but rather only those that might impact the most important assets we seek to protect. 

Ultimately the end-node is really irrelevant from a protection perspective as it should really be little more than a presentation facility; the information is what matters.  As we continue to make progress toward more resilient operating systems leveraging encryption and mutual authentication within communities of interest/trust, we’ll start to become more resilient and information assured.

The sharing of telemetry to allow these detective and preventative/protective capabilities to self-organize and perform intelligent offensive/evasive actions will evolve naturally as part of this process.

Mooooooo.

/Hoff

And Now Some Useful 2008 Information Survivability Predictions…

December 7th, 2007 1 comment

So, after the obligatory dispatch of gloom and doom as described in my 2008 (in)Security Predictions, I’m actually going to highlight some of the more useful things in the realm of Information Security that I think are emerging as we round the corner toward next year.

They’re not really so much predictions as rather some things to watch.

Unlike folks who can only seem to talk about desperation, futility and manifest destiny or (worse yet) "anti-pundit pundits" who try to suggest that predictions and forecasting are useless (usually because they suck at it), I gladly offer a practical roundup of impending development, innovation and some incremental evolution for your enjoyment.

You know, good news.

As Mogull mentioned, I don’t require a Cray X-MP/48, chicken bones & voodoo or a prehensile tail to make my picks.  Rather I grab a nice cold glass of Vitamin G (Guinness) and sit down and think for a minute or two, dwelling on my super l33t powers of common sense and pragmatism with just a pinch of futurist wit.

Many of these items have been underway for some time, but 2008 will be a banner year for these topics as well as the previously-described "opportunities for improvement…"

That said, let’s roll with some of the goodness we can look forward to in the coming year.  This is not an exhaustive list by any means, but some examples I thought were important and interesting:

  1. More robust virtualization security toolsets with more native hypervisor/vmm accessibility

    Though it didn’t start with the notion of security baked in, virtualization for all of its rush-to-production bravado will actually yield some interesting security solutions that help tackle some very serious challenges.  As the hypervisors become thinner, we’re going to see the management and security toolsets gain increased access to the guts of the sausage machine in order to effect security appropriately, and this will be the year we see the virtual switch open up to third parties and more robust APIs for security visibility and disposition appear.
     
  2. The focus on information centric security survivability graduates from v1.0 to v1.1

    Trying to secure the network and the endpoint is like herding cats, and folks are tired of dumping precious effort on deploying kitty litter around the Enterprise to soak up the stinky spots.  Rather, we’re going to see folks really start to pay attention to information classification, extensible and portable policy definition, cradle-to-grave lifecycle management, and invest in technology to help get them there.

    Interestingly, the current maturity of features/functions such as NAC and DLP has actually helped us get closer to managing our information and information-related risks.  The next generation of these offerings, in combination with many of the other elements I describe herein and their consolidation into the larger landscape of management suites, will actually start to deliver on the promise of focusing on what matters — the information.
     

  3. Robust role-based policy, identity and access management coupled with entitlement, geo-location and federation…oh, and infrastructure, too!

    We’re getting closer to being able to affect policy not only based upon just source/destination IP address, switch and router topology and the odd entry in Active Directory on a per-application basis, but rather holistically, based upon robust lifecycle-focused role-based policy engines that allow us to tie in all of the major enterprise components that sit along the information supply-chain.

    Who, what, where, when, how and ultimately why will be the decision points considered with the next generation of solutions in this space.  Combine the advancements here with item #2 above, and someone might actually start smiling.

    If you need any evidence of the convergence/collision of the application-oriented with the network-oriented approach and a healthy overlay of user entitlement provisioning, just look at the about-face Cisco just made regarding TrustSec.  Of course, we all know that it’s not a *real* security concern/market until Cisco announces they’ve created the solution for it 😉
     

  4. Next Generation Networks gain visibility as they redefine the compute model of today

    Just as there exists a Moore’s curve for computing, there exists an overlapping version for networking; it just moves slower given the footprint.  We’re seeing the slope of this curve starting to trend up this coming year, and it’s much more than bigger pipes, although that doesn’t hurt either…

    These next generation networks will really start to emerge visibly in the next year as the existing networking models start to stretch the capabilities and capacities of existing architecture and new paradigms drive requirements that dictate a much more modular, scalable, resilient, high-performance, secure and open transport upon which to build distributed service layers.

    How networks and service layers are designed, composed, provisioned, deployed and managed — and how that intersects with virtualization and grid/utility computing — will start to really sink home the message that "in the cloud" computing has arrived.  Expect service providers and very large enterprises to adopt these new computing climates first, with a trickle-down to smaller business via SaaS and hosted service operators to follow.

    BT’s 21CN (21st Century Network) is a fantastic example of what we can expect from NGN as the demand for higher speed, more secure, more resilient and more extensible interconnectivity really takes off.
     

  5. Grid and distributed utility computing models will start to creep into security

    A really interesting by-product of the "cloud compute" model is that as data, storage, networking, processing, etc. get distributed, so shall security.  In the grid model, one doesn’t care where the actions take place so long as service levels are met and the experiential and business requirements are delivered.  Security should be thought of in exactly the same way.

    The notion that you can point to a physical box and say it performs function ‘X’ is so last Tuesday.  Virtualization already tells us this.  So, imagine if your security processing isn’t performed by a monolithic appliance but instead is contributed to in a self-organizing fashion wherein the entire ecosystem (network, hosts, platforms, etc.) all contribute in the identification of threats and vulnerabilities as well as function to contain, quarantine and remediate policy exceptions.

    Sort of sounds like that "self-defending network" schpiel, but not focused on the network and with common telemetry and distributed processing of the problem.

    Check out Red Lambda’s cGrid technology for an interesting view of this model.
     

  6. Precision versus accuracy will start to legitimize prevention as
    the technology starts to allow us the confidence to start turning the
    corner beyond detection

    In a sad commentary on the last few years of the security technology grind, we've seen the prognostication that intrusion detection is dead, with the security vendor cesspool deadpan urging us to deploy intrusion prevention in its stead.
       
    Since there really aren't many pure-play intrusion detection systems left anyway, the reality is that most folks who have purchased IPSs seldom put them in in-line mode, and when they do, they seldom turn on the "prevention" policies and instead just have them detect attacks, blink a bit and get on with it.

    Why?  Mostly because while the threats have evolved, the technology implemented to mitigate them hasn't: we're either stuck with giant port/protocol colanders or signature-driven IPSs that are nothing more than IDSs with the ability to send RST packets.

    So the "new" generation of technology has arrived and may offer some hope of bridging that gap, thanks not only to really good COTS hardware but also to really good network processors and better software written (or re-written) to take advantage of both.  Performance, efficacy and efficiency have begun to give us greater visibility as we get away from making decisions based on ports/protocols (feel free to debate proxies vs. ACLs vs. stateful inspection…) and move to identifying application usage, getting close to making "real time" decisions on content in context by examining the payload and data.  See #2 above.

    The precision versus accuracy discussion is focused on being able to genuinely trust prevention technology to detect, defend and deter against "bad things" with a fidelity and resolution that yields very low false positive rates.

    We're getting closer with the arrival of technology such as Palo Alto Networks' solutions.  You can call them whatever you like, but enforcing both detection and prevention using easy-to-define policies based on application (and telling the difference between any number of apps all using port 80/443) is a step in the right direction.
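
    Here's a deliberately naive sketch of payload-based application identification in Python (the byte signatures and names are illustrative inventions, not any vendor's classifier): classify a flow seen on port 80 by its first payload bytes rather than trusting the port number.

      # Naive first-bytes application fingerprints (illustrative, not exhaustive)
      SIGNATURES = {
          b"GET ": "http",
          b"POST ": "http",
          b"\x16\x03": "tls",              # TLS record header: handshake, version 3.x
          b"SSH-": "ssh",                  # SSH banner, even when run over port 80
          b"\x13BitTorrent": "bittorrent",
      }

      def classify(payload: bytes, port: int) -> str:
          """Identify the application from payload content, not the port number."""
          for magic, app in SIGNATURES.items():
              if payload.startswith(magic):
                  return app
          return f"unknown-on-port-{port}"  # fall through; never infer from port alone

      # SSH tunneled over the "web" port is identified as SSH, not HTTP:
      print(classify(b"SSH-2.0-OpenSSH_4.7\r\n", 80))  # -> 'ssh'

    Real products do this statefully across the whole flow, with decoders and heuristics; the sketch only shows why port 80 stops being a meaningful label.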
     

  7. The consumerization of IT will cause security and IT as we know it to die, er, radically change
    I know it's heretical, but 2008 is going to push the limits of the existing IT and security architectures to their breaking points, which means that instead of saying "no," we're going to have to focus on how to say "yes, but with this incremental risk" and find solutions for an ever more mobile and consumerist enterprise.

    We've talked about this before, and most security folks curl up into a fetal position when you start mentioning enterprise adoption of social networking, powerful smartphones, collaboration tools, etc.  The fact is that the favorable economics, agility, flexibility and efficiencies gained by adopting the consumerization of IT outweigh the downsides in the long run.  Let's not forget the new generation of workers entering the workforce.

    So, since information is going to be leaking from our enterprises like a sieve, on all manner of devices and by all manner of methods, it's going to force our hand: we'll have to become information-centric, stop worrying about the "perimeter problem," stop focusing on the network and the host, and start managing the truly important assets while allowing our employees to do their jobs in the most effective, collaborative and efficient ways possible.

    This disruption will be a good thing, I promise.  If you don't believe me, ask BP, one of the largest enterprises on the planet.  Since 2006 they've put some amazing initiatives into play, like this little gem:

    Oil giant BP is pioneering a "digital consumer" initiative that will give some employees an allowance to buy their own IT equipment and take care of their own support needs.

    The project, which is still at the pilot stage, gives select BP staff an annual allowance — believed to be around $1,000 — to buy their own computing equipment and use their own expertise and the manufacturer's warranty and support instead of using BP's IT support team.

    Access to the scheme is tightly controlled and those employees taking part must demonstrate a certain level of IT proficiency through a computer driving licence-style certification, as well as signing a diligent use agreement.

    …combined with this:

    Rather than rely on a strong network perimeter to secure its systems, BP has decided that these laptops have to be capable of coping with the worst that malicious hackers can throw at them, without relying on a network firewall.

    Ken Douglas, technology director of BP, told the UK Technology Innovation & Growth Forum in London on Monday that 18,000 of BP's 85,000 laptops now connect straight to the internet even when they're in the office.

  8. Desktop Operating Systems become even more resilient
    The first steps taken by Microsoft and Apple in Vista and OS X (Leopard), as examples, have begun to chip away at some of the security holes that have plagued them thanks to the architectural "feature" that an open execution runtime model delivers.  Honestly, nothing short of a do-over will ultimately mitigate this problem, so instead of suggesting that incremental improvement is worthless, we should recognize that our dark overlords are trying to make things better.

    Elements in Vista such as ASLR, NX and UAC, combined with integrated firewalling, anti-spyware/anti-phishing, disk encryption, integrated rights management, protected-mode IE, etc., are all good steps in a "more right" direction than previous offerings.  They're in response to lessons learned.

    On the Mac, we also see ASLR, sandboxing, input management, better firewalling and better disk encryption, which are also notable improvements.  Yes, we've got a long way to go, but this means that OS vendors are paying more attention, which will lead to more stable and secure platforms upon which developers can write more secure code.
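
    If you want to see one of these mitigations for yourself, here's a small Python sketch (for a POSIX system; the approach is mine for illustration, not any vendor's diagnostic) that launches the same child process twice and prints the load address of a libc symbol in each.  Under ASLR the two addresses should usually differ between runs.

      import subprocess
      import sys

      # Child snippet: print the address of libc's malloc in that process
      CHILD = ("import ctypes;"
               "libc = ctypes.CDLL(None);"
               "print(ctypes.cast(libc.malloc, ctypes.c_void_p).value)")

      def sample_address() -> int:
          out = subprocess.run([sys.executable, "-c", CHILD],
                               capture_output=True, text=True, check=True)
          return int(out.stdout.strip())

      a, b = sample_address(), sample_address()
      print(f"run 1: {a:#x}\nrun 2: {b:#x}")
      print("ASLR looks", "active" if a != b else "inactive (or libraries are pinned)")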

    It will be interesting to see how these "more secure" OSes intersect with the virtualization security discussed in #1 above.

    Vista SP1 is due to ship in 2008 and will include APIs through which third-party security products can work with kernel patch protection on Vista x64, more secure BitLocker drive encryption and a better elliptic curve cryptography PRNG (pseudo-random number generator).  Follow-on releases to Leopard will likely feature security enhancements beyond those delivered this year.
     

  9. Compliance stops being a dirty word & Risk Management moves beyond buzzword
    Today we typically see the role of information security described as blocking and tackling: focused on managing threats and vulnerabilities, balanced against the need to be "compliant" with some arbitrary set of internal and external policies.  In many people's assessment, then, compliance equals security.  This is an inaccurate and unfortunate misunderstanding.

    In 2008, we'll see many of the functions of security (administrative, policy and operational) become much more visible and transparent to the business, and we'll see a renewed effort placed on compliance within the scope of managing risk, because the former is actually a by-product of a well-executed risk management strategy.

    We have compliance as an industry today because we manage technology threats and vulnerabilities and don’t manage risk.  Compliance is actually nothing more than a way of forcing transparency and plugging a gap between the two.  For most, it’s the best they’ve got.

    What has traditionally prevented the transition from threat/vulnerability management to risk management is a principal focus on technology, combined with the lack of a good risk assessment framework and thus a lack of understanding of business impact.

    The availability of mature risk assessment frameworks (OCTAVE, FAIR, etc.), combined with the maturity of IT and governance frameworks (CoBIT, ITIL) and the readiness of the business and IT/security cultures to accept risk management as a language and action set with which they need to be conversant, will yield huge benefits this year.

    Couple that with solutions like Skybox and you've got the makings of a strategic risk management program that can bring security into closer alignment with the business.
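
    To show what "risk as a language" can look like in numbers, here's a back-of-the-envelope, FAIR-flavored Monte Carlo sketch in Python (the frequency and magnitude ranges are invented placeholders, not real assessment data): estimate annualized loss from a range of loss event frequencies and per-event loss magnitudes.

      import random

      def simulate_annual_loss(freq_range, loss_range, trials=10_000):
          """FAIR-style toy: annual loss = loss event frequency * loss magnitude."""
          losses = sorted(
              random.uniform(*freq_range) * random.uniform(*loss_range)
              for _ in range(trials)
          )
          return losses[trials // 2], losses[int(trials * 0.95)]

      # Invented inputs: 0.1-2 loss events/year, $50k-$500k per event
      median, p95 = simulate_annual_loss((0.1, 2.0), (50_000, 500_000))
      print(f"median annualized loss ~${median:,.0f}; 95th percentile ~${p95:,.0f}")

    Even a crude distribution like this gives the business something to argue about in dollars, which is precisely the conversation compliance checklists never start.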
     

  10. Rich Mogull will, indeed, move in with his mom and start speaking Klingon
    ’nuff said.

So, there we have it.  A little bit of sunshine in your otherwise gloomy day.

/Hoff