
Archive for the ‘Disruptive Innovation’ Category

The Walls Are Collapsing Around Information Centricity

March 10th, 2008

Since Mogull and I collaborate quite a bit on projects and share many thoughts and beliefs, I wanted to make a couple of comments on his last post on Information Centricity and remind the audience at home of a couple of really important points.

Rich’s post was short and sweet regarding the need for Information-Centric solutions with some profound yet subtle guideposts:

For information-centric security to become a reality, in the long term it needs to follow the following principles:

  1. Information (data) must be self describing and defending.
  2. Policies and controls must account for business context.
  3. Information must be protected as it moves from structured to unstructured, in and out of applications, and changing business context.
  4. Policies must work consistently through the different defensive layers and technologies we implement.

I’m not convinced this is a complete list, but I’m trying to keep to
my new philosophy of shorter and simpler. A key point that might not be
obvious is that while we have self-defending data solutions, like DRM
and label security, for success they must grow to account for business
context. That’s when static data becomes usable information.

Mike Rothman gave an interesting review of Rich’s post:


The Mogull just laid out your work for the next 10 years. You just
probably don’t know it yet. Yes, it’s all about ensuring that the
fundamental elements of your data are protected, however and wherever
they are used. Rich has broken it up into 4 thoughts. The first one
made my head explode: "Information (data) must be self-describing and
defending."

Now I have to clean up the mess. Sure things like DRM are a
bad start, and have tarnished how we think about information-centric
security, but you do have to start somewhere. The reality is this is a
really long term vision of a problem where I’m not sure how you get
from Point A to Point B. We all talk about the lack of innovation in
security. And how the market just isn’t exciting anymore. What Rich
lays out here is exciting. It’s also a really really really big
problem. If you want a view of what the next big security company does,
it’s those 4 things. And believe me, if I knew how to do it, I’d be
doing it – not talking about the need to do it.

The comments I want to make are three-fold:

  1. Rich is re-stating (and Mike’s head is exploding around) the exact concepts that Information Survivability represents and that the Jericho Forum trumpets in its Ten Commandments.  In fact, you can read all about that in prior posts I made on the subjects of the Jericho Forum, re-perimeterization, information survivability and information centricity.  I particularly like this post on a process I call ADAPT (Applied Data and Application Policy Tagging); see the sketch at the end of this list.

    For reference, see the Jericho Forum’s Ten Commandments, and in particular #9.


  2. As Mike alluded, DRM/ERM has received a bad rap because of how it’s been implemented, which has left a sour taste in consumers’ mouths.  As a business tool, it is the precursor of information-centric policy and will become the lynchpin in how we ultimately gain a foothold on solving the information resiliency/assurance/survivability problem.
  3. As to the innovation and dialog that Mike suggests is lacking in this space, I’d suggest he’s suffering from a bit of Shiitake-ism (a la mushroom-itis).  The next generation of DLP solutions that are becoming CMP (Content Monitoring and Protection — a term I coined) are evolving to deal with just this very thing.  It’s happening.  Now.

    Further to that, I have been briefed by some very, very interesting companies, still in stealth mode, that are looking to shake this space up as we speak.
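
To make the "self-describing and defending" idea (and the ADAPT notion above) a little more concrete, here is a minimal, purely illustrative Python sketch of data wrapped in an envelope that carries its own classification, business context and allowed actions. The field names, the shared-secret integrity check and the toy policy test are assumptions for illustration only; they do not correspond to ADAPT, the Jericho commandments or any shipping product.

```python
import hashlib
import hmac
import json

SECRET = b"demo-key"  # illustrative only; real keys would come from enterprise key management

def tag(payload: bytes, classification: str, business_context: str, allowed: list) -> dict:
    """Wrap raw data in a self-describing envelope that carries its own policy."""
    meta = {
        "classification": classification,      # e.g. "confidential"
        "business_context": business_context,  # e.g. "M&A diligence"
        "allowed_actions": allowed,            # e.g. ["view", "print"]
    }
    # Bind the policy to the payload so the metadata can't be silently stripped or altered
    digest = hmac.new(SECRET, payload + json.dumps(meta, sort_keys=True).encode(),
                      hashlib.sha256).hexdigest()
    return {"meta": meta, "payload": payload.decode(), "integrity": digest}

def permitted(envelope: dict, requester_context: str, action: str) -> bool:
    """Toy enforcement: the requester's context must match and the action must be allowed."""
    meta = envelope["meta"]
    return requester_context == meta["business_context"] and action in meta["allowed_actions"]

doc = tag(b"Q3 acquisition target list", "confidential", "M&A diligence", ["view"])
print(permitted(doc, "M&A diligence", "view"))   # True
print(permitted(doc, "marketing", "view"))       # False: wrong business context
print(permitted(doc, "M&A diligence", "print"))  # False: action not allowed
```

The code itself is beside the point; what matters is that the policy travels with the information and gets evaluated against business context wherever the information lands.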

So, prepare for Information Survivability, increased Information Resilience and assurance.  Coming to a solution near you…

/Hoff

I Love the Smell of Big Iron In the Morning…

March 9th, 2008

Does Not Compute…

I admit that I’m often fascinated by the development of big iron, and I see how, to some, this seems at odds with my position that technology isn’t the answer to the "security" problem.  Then again, it really depends on what "question" is being asked, what "problem" we’re trying to solve and when we expect to solve them.

It’s pretty clear that we’re still quite some time off from having secure code, solid protocols, brokered authentication and encryption and information-centric based controls that provide the assurance dictated by the policies described by the information itself. 

In between now and then, we see the evolution of some very interesting "solutions" from those focused on the network and host perspectives.  It’s within this bubble that things usually get heated between those proponents who argue that innovation in networking and security is constrained to software versus those who maintain that the way to higher performance, efficacy and coverage can only be achieved with horsepower found in hardware.

I always find it interesting that the networking front prompts argument in this vein, but nobody seems to blink when we see continued development in mainframes — even in this age of Web3.0, etc.  Take IBM’s Z10, for example.  What’s funny is that a good amount of the world still ticks due to big iron in the compute arena despite the advances of distributed systems, SOA, etc., so why do we get so wrapped up when it comes to big iron in networking or security?

I dare you to say "value." 😉

I’ve had this argument on many fronts with numerous people and realized that in most cases what we were missing was context.  There is really no argument to "win" here, but rather a need to examine what is most certainly a manifest destiny of our own design and the "natural" phenomenon associated with punctuated equilibrium.

An Example: Cisco’s New Hardware…and Software to Boot [it.]

Both camps in the above debate would do well to consider the amount of time and money a bellwether in this space — Cisco —  is investing in a balanced portfolio of both hardware and software. 

If we start to see how the pieces are being placed on Cisco’s chess board, some really interesting scenarios emerge.

Many will look at these developments and simply dismiss them as platforms that will only serve the most demanding high-end customers, arguing that COTS hardware trumps specialty high-performance iron such as this on the price/performance index.

This is a rather short-sighted perspective and one that cyclically has proven inaccurate.   

The notion of hardware versus software superiority is a short term argument which requires context.  It’s simply silly to argue one over the other in generalities.  If you’d like to see what I mean, I refer you once again to Bob Warfield’s "Multi-Core Crisis" meme.  Once we hit cycle limits on processors we always find that memory, bus and other I/O contention issues arise.  It ebbs and flows based upon semiconductor fabrication breakthroughs and the evolution and ability of software and operating systems to take advantage of them.

Toss a couple of other disruptive and innovative technologies into the mix and the landscape looks a little more interesting. 

It’s All About the Best Proprietary Open Platforms…

I don’t think anyone — including me at this point — will argue that a
good amount of "security" will just become a checkbox in (and I’ll use *gasp* Stiennon’s language) the "fabric."  There will always be point
solutions to new problems that will get bolted on, but most of the
security solutions out there today are becoming features before they
mature to markets due to this behavior.

What’s interesting to me is where the "fabric" is and in what form it will take. 

If we look downrange and remember that Cisco has openly discussed its strategy of de-coupling its operating systems from hardware in order to provide for a more modular and adaptable platform strategy, all this investment in hardware may indeed seem to support this supposition.

If we also understand Cisco’s investment in virtualization (a-la VMware and IOS-XE) as well as how top-side investment trickles down over time, one could easily see how circling the wagons around both hardware for high-end core/service provider platforms [today] and virtualized operating systems for mid-range solutions will ultimately yield greater penetration and coverage across markets.

We’re experiencing a phase shift in the periodic oscillation associated with where in the stack networking vendors see an opportunity to push their agenda, and if you look at where virtualization and re-perimeterization are pushing us, the "network is the computer" axiom is beginning to take shape again. 

I find the battle for the datacenter OS between the software-based virtualization players and the hardware-based networking and security giants absolutely delicious, especially when you consider that the biggest of the latter (Cisco) is investing in the biggest of the former (VMware).

They’re both right.  In the long term, we’re all going to end up with 4-5 hypervisors in our environments supporting multiple modular, virtualized and distributed "fabrics."  I’m not sure that any of that is going to get us close to solving the real problems, but if you’re in the business of selling tin or the wrappers that go on it, you can smile…

Imagine a blade server from your favorite vendor with embedded virtualization capabilities coupled with dedicated network processing hardware supporting your favorite routing/switching vendor’s networking code and running any set of applications you like — security or otherwise — with completely virtualized I/O functions forming a grid/utility compute model.*

Equal parts hardware, software, and innovation.  Cool, huh?

Now, about that Information-Centricity Problem…

*The reality is that this is what attracted me to Crossbeam:
custom-built high-speed networking hardware, generic compute stacks
based on Intel-reference designs, both coupled with a Linux-based
operating system that supports security applications from multiple
sources as an on-demand scalable security services layer virtualized
across the network.

Trouble is, others have caught on now…

VMware’s VMsafe: Security Industry Defibrillator… Making Dying Muscle Twitch Again.

March 2nd, 2008

Nurse, 10 cc’s of Adrenalin, stat!

As I mentioned in a prior posting, VMware’s VMsafe has the potential to inject life back into the atrophied and withering heart muscle of the security industry and raise the prognosis from DOA to the potential for a vital economic revenue stream once more.

How?  Well, the answer to this question really comes down to whether you believe that keeping a body on assisted life support means that the patient is living or simply alive, and the same perspective goes for the security industry.

With the inevitable consolidation of solutions and offerings in the security industry over the last few years, we have seen the commoditization of many markets as well as the natural emergence of others in response to the ebb and flow of economic, technological, cultural and political forces.

One of the most disruptive and innovative forces causing arrhythmia in the pulse of both consumers and providers and driving the emergence of new market opportunities is virtualization.

For the last two years, I’ve been waving my hands about the fact that virtualization changes everything across the information lifecycle.  From cradle to grave, the evolution of virtualization will profoundly change what, where, why and how we do what we do.

I’m not claiming that I’m the only one, but it was sure lonely from a general security practitioner’s perspective up until about six months ago.  In the last four months, I’ve given two keynotes and three decently visible talks on VirtSec, and I have 3-4 more tee’d up over the next 3 months, so somebody’s interested…better late than never, I suppose.

How’s the patient?

For the purpose of this post, I’m going to focus on the security implications of virtualization and simply summarize by suggesting that virtualization up until now has quietly marked a tipping point where we see the disruption stretch security architectures and technologies to their breaking point and in many cases make much of our invested security portfolio redundant and irrelevant.

I’ve discussed why and how this is the case in numerous posts and presentations, but it’s clear (now) to most that the security industry has been out of phase with what has plainly been a well-signaled (r)evolution in computing.

Is anyone really surprised that we are caught flat-footed again?  Sorry to rant, but…

This is such a sorry indicator of why things are so terribly broken with "IT/Information Security" as it stands today; we continue to try and solve short-term problems with even shorter-term "solutions" that do nothing more than perpetuate the problem — and we do so in such a horrific display of myopic dissonance that it’s a wonder we function at all.  Actually, it’s a perfectly wonderful explanation as to why criminals are always 5 steps ahead: they plan strategically while acting tactically against their objectives and aren’t afraid to respond to their customers proactively.

So, we’ve had this fantastic technological, economic, and cultural transformation occurring over the last FIVE YEARS (at least), and the best we’ve seen as a response from most traditional security vendors is to thinly market their solutions as "virtualization ready" or "virtualization aware" when, in fact, these are simply hollow words for how to make their existing "square" products fit into the "round" holes of a problem space that virtualization exposes and creates.

Firewalls, IDS/IPSs, UTM, NAC, DLP — all of them have limited visibility in this rapidly "re-perimeterized" universe in which our technology operates, and in most cases we’re busy looking at uninteresting and practically non-actionable things anyway.  As one of my favorite mentors used to say, "we’re data rich, but information poor."

The vendors in these example markets — with or without admission — are all really worried about what virtualization will do to their already shrinking relevance.  So we wait.

Doctor, it hurts when I do this…

VMsafe represents a huge opportunity for these vendors to claw their way back to life, making their solutions relevant once more, and perhaps even more so.

Most of the companies who have so far signed on to VMsafe will, as I mentioned previously, need to align roadmaps and release new or modified versions of their product lines to work with the new APIs and management planes.

This is obviously a big deal, but one that is unavoidable for these companies — most of which are clumsy and generally not agile or responsive to third parties.  However, you don’t get 20 of some of the biggest "monoliths" of the security world scrambling to sign up for a program like VMsafe just for giggles — and the reality is that the platform version of VMware’s virtualization products that will support this technology isn’t even available yet.

I am willing to wager that you will, in extremely short time given VMware’s willingness to sign on new partners, see many more vendors flock to the program.  I further maintain that despite their vehement denial, NAC vendors (with pressure already from the oncoming tidal wave of Microsoft’s NAP) will also adapt their wares to take advantage of this technology for reasons I’ve outlined here.

They literally cannot afford not to.

I am extremely interested in what other virtualization vendors’ responses will be — especially Citrix.  It’s pretty clear what Microsoft has in mind.  It’s going to further open up opportunities for networking vendors such as Cisco, f5, etc., and we’re going to see the operational, technical, administrative, "security" and governance lines  blur even further.

Welcome back from the dead, security vendors, you’ve got a second chance in life.  I’m not sure it’s warranted, but it’s "natural" even though we’re going to end up with a very interesting Frankenstein of a "solution" over the long term.

The Doctor prescribes an active lifestyle, healthy marketing calisthenics, a diet with plenty of roughage, and jumping back on the hamster wheel of pain for exercise.

/Hoff

Pondering Implications On Standards & Products Due To Cold Boot Attacks On Encryption Keys

February 22nd, 2008

You’ve no doubt seen the latest handiwork of Ed Felten and his team from the Princeton Center for Information Technology Policy regarding cold boot attacks on encryption keys:

Abstract: Contrary to popular assumption, DRAMs used in
most modern computers retain their contents for seconds to minutes
after power is lost, even at operating temperatures and even if removed
from a motherboard. Although DRAMs become less reliable when they are
not refreshed, they are not immediately erased, and their contents
persist sufficiently for malicious (or forensic) acquisition of usable
full-system memory images. We show that this phenomenon limits the
ability of an operating system to protect cryptographic key material
from an attacker with physical access. We use cold reboots to mount
attacks on popular disk encryption systems — BitLocker, FileVault,
dm-crypt, and TrueCrypt — using no special devices or materials. We
experimentally characterize the extent and predictability of memory
remanence and report that remanence times can be increased dramatically
with simple techniques. We offer new algorithms for finding
cryptographic keys in memory images and for correcting errors caused by
bit decay. Though we discuss several strategies for partially
mitigating these risks, we know of no simple remedy that would
eliminate them.

Check out the video below (if you have scripting disabled, here’s the link.)  Fascinating and scary stuff.

Would a TPM implementation mitigate this if the keys weren’t stored (even temporarily) in RAM?
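
As a back-of-the-napkin illustration of why key material sitting in RAM is so findable, here is a toy Python scan over a captured memory image that flags regions which look too random to be code or text. To be clear, this is a gross simplification and an assumption on my part: the Princeton team’s actual technique reconstructs AES/RSA key schedules and corrects for bit decay, which is far more precise than this entropy heuristic, and the window size and threshold below are arbitrary.

```python
import math
import os
from collections import Counter

def shannon_entropy(window: bytes) -> float:
    """Bits of entropy per byte over a small window."""
    counts = Counter(window)
    total = len(window)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def candidate_keys(image: bytes, window: int = 32, threshold: float = 4.5, step: int = 16):
    """Flag offsets whose contents look random enough to be key material."""
    for offset in range(0, len(image) - window, step):
        chunk = image[offset:offset + window]
        if shannon_entropy(chunk) >= threshold:
            yield offset, chunk.hex()

# Toy "memory image": mostly zeroes with 32 random bytes planted in the middle
image = bytearray(64 * 1024)
image[32768:32800] = os.urandom(32)
for offset, hexbytes in candidate_keys(bytes(image)):
    print(f"possible key material at offset {offset:#x}: {hexbytes}")
```

Narrow a multi-gigabyte dump down to a handful of candidates like this and the remanence the paper describes does the rest.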

Given the surge lately toward full disk encryption products, I wonder how the market will react to this.  I am interested in both the broad industry impact and response from vendors.  I won’t be surprised if we see new products crop up in a matter of days advertising magical defenses against such attacks as well as vendors scrambling to do damage control.

This might be a bit of a reach, but equally as interesting to me are the potential implications for DoD/Military crypto standards such as FIPS 140-2 (I believe the draft of 140-3 is circulating…)  In the case of certain products at specific security levels, it’s obvious based on the video that one wouldn’t necessarily need physical access to a crypto module (or RAM) in order to potentially attack it.

It’s always amazing to me when really smart people think of really creative, innovative and (in some cases) obvious ways of examining what we all take for granted.

A Worm By Any Other Name Is…An Information Epidemic?

February 18th, 2008

Martin McKeay took exception to some interesting Microsoft research suggesting that the methodologies and tactics used by malicious software such as worms and viruses could also be used as an effective distributed defense against them:

Microsoft researchers are hoping to use "information epidemics" to distribute software patches more efficiently.

Milan Vojnović
and colleagues from Microsoft Research in Cambridge, UK, want to make
useful pieces of information such as software updates behave more like
computer worms: spreading between computers instead of being downloaded
from central servers.

The research may also help defend against malicious types of worm, the researchers say.

Software
worms spread by self-replicating. After infecting one computer they
probe others to find new hosts. Most existing worms randomly probe
computers when looking for new hosts to infect, but that is
inefficient, says Vojnović, because they waste time exploring groups or
"subnets" of computers that contain few uninfected hosts.

Despite the really cool moniker (information epidemic), this isn’t a particularly novel distribution approach; in fact, we’ve seen malware do this.  However, it is interesting to see that an OS vendor (Microsoft) is continuing to actively engage in research to explore this approach despite the opinions of others who simply claim it’s a bad idea.  I’m not convinced either way, however.

I, for one, am all for resilient computing environments that are aware of their vulnerabilities and can actively defend against them.  I will be interested to see how this new paper builds off of work previously produced on the subject and its corresponding criticism.

Vojnović’s team have designed smarter strategies that can exploit the way some subnets provide richer pickings than others.

The
ideal approach uses prior knowledge of the way uninfected computers are
spread across different subnets. A worm with that information can focus
its attention on the most fruitful subnets – infecting a given
proportion of a network using the smallest possible number of probes.

But
although prior knowledge could be available in some cases – a company
distributing a patch after a previous worm attack, for example –
usually such perfect information will not be available. So the
researchers have also developed strategies that mean the worms can
learn from experience.

In
the best of these, a worm starts by randomly contacting potential new
hosts. After finding one, it uses a more targeted approach, contacting
only other computers in the same subnet. If the worm finds plenty of
uninfected hosts there, it keeps spreading in that subnet, but if not,
it changes tack.
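
Stripped of the loaded word "worm," the adaptive strategy described above is simple enough to sketch as a toy simulation. This is only a caricature of the behavior described in the article, not Vojnović’s actual algorithm; the subnet counts, infection probabilities and give-up threshold are invented for illustration.

```python
import random

def build_network(num_subnets: int = 20, hosts_per_subnet: int = 100) -> dict:
    """A few subnets are 'rich' in susceptible hosts; most are nearly barren."""
    return {s: {(s, h) for h in range(hosts_per_subnet)
                if random.random() < (0.4 if s < 3 else 0.02)}
            for s in range(num_subnets)}

def spread(network: dict, budget: int = 2000, give_up_after: int = 25):
    """Probe randomly until a hit, then stay in that subnet while it keeps paying off."""
    susceptible = {h for hosts in network.values() for h in hosts}
    infected, probes, focus, misses = set(), 0, None, 0
    while probes < budget and len(infected) < len(susceptible):
        if focus is None:
            target = (random.randrange(len(network)), random.randrange(100))  # random probe
        else:
            target = (focus, random.randrange(100))  # focused probe within the current subnet
        probes += 1
        if target in susceptible and target not in infected:
            infected.add(target)
            focus, misses = target[0], 0             # keep working this subnet
        else:
            misses += 1
            if misses >= give_up_after:              # subnet looks barren: change tack
                focus, misses = None, 0
    return len(infected), len(susceptible), probes

random.seed(1)
hit, total, used = spread(build_network())
print(f"reached {hit} of {total} susceptible hosts using {used} probes")
```

Of course, nothing in that toy prevents the same logic from being pointed at malicious ends, which is exactly where the objections start.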

That being the case, here’s some of Martin’s heartburn:

But the problem is, if both beneficial and malign
software show the same basic behavior patterns, how do you
differentiate between the two? And what’s to stop the worm from being
mutated once it’s started, since bad guys will be able to capture the
worms and possibly subverting their programs.

The article isn’t clear on how the worms will secure their network,
but I don’t believe this is the best way to solve the problem that’s
being expressed. The problem being solved here appears to be one of
network traffic spikes caused by the download of patches. We already
have a widely used protocols that solve this problem, bittorrents and
P2P programs. So why create a potentially hazardous situation using
worms when a better solution already exists. Yes, torrents can be
subverted too, but these are problems that we’re a lot closer to
solving than what’s being suggested.

I don’t want something that’s viral infecting my computer, whether
it’s for my benefit or not. The behavior isn’t something to be
encouraged. Maybe there’s a whole lot more to the paper, which hasn’t
been released yet, but I’m not comfortable with the basic idea being
suggested. Worm wars are not the way to secure the network.

I think that some of the points that Martin raises are valid, but I also think that he’s reacting mostly out of fear to the word ‘worm.’  What if we called it "distributed autonomic shielding?" 😉

Some features/functions of our defensive portfolio are going to need to become more self-organizing, autonomic and intelligent and that goes for the distribution of intelligence and disposition, also.  If we’re not going to advocate being offensive, then we should at least be offensively defensive.  This is one way of potentially doing this.

Interestingly, this dovetails into some discussions we’ve had recently with Andy Jaquith and Amrit Williams; the notion of herds and biotic propagation and response is really quite fascinating.  See my post titled "Thinning the Herd & Chlorinating the Malware Gene Pool."

I’ve left out most of the juicy bits of the story so you should go read it and churn on some of the very interesting points raised as part of the discussion.

/Hoff

Update: Schneier thinks this is a lousy idea. That doesn’t move me one direction or the other, but I think this is cementing my opinion that had the author not used the word ‘worm’ in his analog the idea might not be dismissed so quickly…

Also, Wismer via a comment on Martin’s blog pointed to an interesting read from Vesselin Bontchev titled "Are "Good" Computer Viruses Still a Bad Idea?"

Update #2: See the comments section about how I think the use case argued by Schneier et al. is, um, slightly missing the point.  Strangely enough, check out the Network World article that just popped up, which quotes a statement: "This was not the primary scenario targeted for this research."

Duh.

Security Innovation & the Bendy Hammer

February 17th, 2008

See that odd-looking hammer to the left?  It’s called the MaxiStrike from Redback Tools.

No, it hasn’t been run over by a Panzer, nor was there grease on the lens  during the photography session. 

Believe it or not, that odd little bend lets this 20-ounce mallet:

     > maximize strike force

     > reduce missed hits

     > leave clearance for nailing in cramped areas

All from that one little left hand turn from linear thought in product design.

You remember that series of posts I did on Disruptive Innovation?

This is a perfect illustration of how innovation can be "evolutionary" as opposed to revolutionary.

Incrementalism can be just as impactful as one of those tipping-point "big-bang" events that have desensitized us to some of the really cool things that pop up and can actually make a difference.

So I know this hammer isn’t going to cure cancer, but it makes for easier, more efficient and more accurate nailing.  Sometimes that’s worth a hell of a lot to someone who does a lot of hammering…

Things like this happen around us all the time — even in our little security puddle of an industry. 

It’s often quite fun when you spot them.

I bet that if you tried, you could come up with some examples in security.

Well?

Thinning the Herd & Chlorinating the Malware Gene Pool…

December 28th, 2007

[Image: a swarming school of anchovies, via Science News]
Alan Shimel pointed us to an interesting article written by Matt Hines in his post here regarding the "herd intelligence" approach toward security.  He followed it up here. 

All in all, I think both the original article that Andy Jaquith was quoted in as well as Alan’s interpretations shed an interesting light on a problem solving perspective.

I’ve got a couple of comments on Matt and Alan’s scribbles.

I like the notion of swarms/herds.  The picture to the right from Science News describes the
notion of "rapid response," wherein "mathematical modeling is
explaining how a school of fish can quickly change shape in reaction to
a predator."  If you’ve ever seen this in the wild or even in film,
it’s an incredible thing to see in action.

It should then come as no surprise that I think that trying to solve the "security problem" is more efficiently performed (assuming one preserves the current construct of detection and prevention mechanisms) by distributing both functions and coordinating activity as part of an intelligent "groupthink" even when executed locally.  This is exactly what I was getting at in my "useful predictions" post for 2008:

Grid and distributed utility computing models will start to creep into security
A
really interesting by-product of the "cloud compute" model is that as
data, storage, networking, processing, etc. get distributed, so shall
security.  In the grid model, one doesn’t care where the actions take
place so long as service levels are met and the experiential and
business requirements are delivered.  Security should be thought of in
exactly the same way. 

The notion that you can point to a
physical box and say it performs function ‘X’ is so last Tuesday.
Virtualization already tells us this.  So, imagine if your security
processing isn’t performed by a monolithic appliance but instead is
contributed to in a self-organizing fashion wherein the entire
ecosystem (network, hosts, platforms, etc.) all contribute in the
identification of threats and vulnerabilities as well as function to
contain, quarantine and remediate policy exceptions.

Sort of sounds like that "self-defending network" spiel, but not focused on the network and with common telemetry and distributed processing of the problem.

Check out Red Lambda’s cGrid technology for an interesting view of this model.

This basically means that we should distribute the sampling, detection and prevention functions across the entire networked ecosystem, not just to dedicated security appliances; each of the end nodes should communicate using a standard signaling and telemetry protocol so that common threat, vulnerability and effective disposition can be communicated up and downstream to one another and one or more management facilities.
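
No such common signaling protocol exists today, which is rather the point, but the kind of message I have in mind is simple. The sketch below is hypothetical; the field names and the JSON wire format are assumptions of mine, not a proposal from any vendor or standards body.

```python
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone
import json

@dataclass
class TelemetryEvent:
    """One observation shared up/downstream between cooperating nodes in the 'herd'."""
    node_id: str       # reporting endpoint, switch, or cloud service
    observed: str      # what was seen, e.g. "sha256:<hash of the sample>"
    verdict: str       # "suspicious", "confirmed-malicious", "benign"
    confidence: float  # 0.0 - 1.0, so receivers can weigh the source
    disposition: str   # action already taken locally: "quarantined", "blocked", "none"
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def to_wire(self) -> str:
        return json.dumps(asdict(self))

# A host that quarantined something tells the rest of the herd (and management) about it
event = TelemetryEvent(
    node_id="host-4217",
    observed="sha256:<hash of the quarantined sample>",
    verdict="confirmed-malicious",
    confidence=0.92,
    disposition="quarantined",
)
print(event.to_wire())
```

Whether the transport ends up gossip-based or hub-and-spoke matters far less than agreeing on common semantics for verdicts and dispositions.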

This is what Andy was referring to when he said:

As part of the effort, security vendors may also need to begin sharing more of that information with their rivals to create a larger network effect for thwarting malware on a global basis, according to the expert.

It
may be hard to convince rival vendors to work together because of the
perception that it could lessen differentiation between their
respective products and services, but if the process clearly aids on
the process of quelling the rising tide of new malware strains, the
software makers may have little choice other than to partner, he said.

Secondly, Andy suggested that basically every end-node would effectively become its own honeypot:

"By
turning every endpoint into a malware collector, the herd network
effectively turns into a giant honeypot that can see more than existing
monitoring networks," said Jaquith. "Scale enables the herd to counter
malware authors’ strategy of spraying huge volumes of unique malware
samples with, in essence, an Internet-sized sensor network."

I couldn’t agree more!  This is the sort of thing that I was getting at back in August when I was chatting with Lance Spitzner about using VMs for honeypots on distributed end nodes:

I clarified that what I meant was actually integrating a HoneyPot running in a VM on a production host as part of a standardized deployment model for virtualized environments.  I suggested that this would integrate into the data collection and analysis models the same way as a "regular" physical HoneyPot machine, but could utilize some of the capabilities built into the VMM/HV’s vSwitch to actually virtualize a single HoneyPot across an entire collection of VMs on a single physical host.

Thirdly, the notion of information sharing across customers has been implemented cross-sectionally in industry verticals with the advent of the ISACs, such as the Financial Services Information Sharing and Analysis Center, which seeks to inform and ultimately leverage distributed information gathering and sharing to protect its subscribing members.  Generally-available services like Symantec’s DeepSight have also tried to accomplish similar goals.

Unfortunately, these offerings generally lack the capacity to garner ubiquitous data gathering and real-time enforcement capabilities.

As Matt pointed out in his article, gaining actionable intelligence on the monstrous amount of telemetric data from participating end nodes means that there is a need to really prune for false positives.  This is the trade-off between simply collecting data and actually applying intelligence at the end-node and effecting disposition. 

This requires technology that we’re starting to see emerge with a small enough footprint when paired with the compute power we have in endpoints today. 

Finally, as the "network" (which means the infrastructure as well as the "extrastructure" delivered by services in the cloud) gains more intelligence and information-centric granularity, it will pick up some of the slack — at least from the perspective of sloughing off the low-hanging fruit by using similar concepts.

I am hopeful that as we gain more information-centric footholds, we won’t actually need to worry about responding to every threat but rather only to those that might impact the most important assets we seek to protect.

Ultimately the end-node is really irrelevant from a protection perspective as it should really be little more than a presentation facility; the information is what matters.  As we continue to make progress toward more resilient operating systems leveraging encryption and mutual authentication within communities of interest/trust, we’ll start to become more resilient and information assured.

The sharing of telemetry to allow these detective and preventative/protective capabilities to self-organize and perform intelligent offensive/evasive actions will evolve naturally as part of this process.

Mooooooo.

/Hoff

Security and Disruptive Innovation Part IV: Embracing Disruptive Innovation by Mapping to a Strategic Innovation Framework

November 29th, 2007

This is the last of the series on the topic of "Security and Disruptive Innovation."

In Part I we talked about the definition of innovation, cited some examples of general technology innovation/disruption, discussed technology taxonomies and lifecycles, and looked at what initiatives and technologies CIOs are investing in.

In Parts II and III we started to drill down and highlight some very specific disruptive technologies that were impacting Information Security.

In this last part, we will explore how to take these and future examples of emerging disruptive innovation and map them to a framework which will allow you to begin embracing disruptive innovation rather than reacting to it after the fact.

21. So How Can We Embrace Disruptive Technology?
Most folks in an InfoSec role find themselves overwhelmed juggling the day-to-day operational requirements of the job against the onslaught of evolving technology, business, culture, and economic "progress"  thrown their way.

In most cases this means that they’re rather busy mitigating the latest threats and remediating vulnerabilities in a tactical fashion and find it difficult to think strategically and across the horizon.

What’s missing in many cases is the element of business impact: in conjunction with those threats and vulnerabilities, the resultant impact should drive the decision on what to focus on and how to prioritize actions according to whether they actually matter to your most important assets.

Rather than managing threats and vulnerabilities without context and just deploying more technology blindly, we need to find a way to better manage risk.

We’ll talk about getting closer to assessing and managing risk in a short while, but if we look at what entails managing threats and vulnerabilities as described above, we usually end up in a discussion focused on technology.  Accepting this common practice today, we need a way to effectively leverage our investment in that technology to get the best bang for our buck.

That means we need to actively invest in and manage a strategic security portfolio — like an investor might buy/sell stocks.  Some items you identify and invest in for the short term and others for the long term.  Accordingly, the taxonomy of those investments would also align to the "foundational, commoditizing, distinguished" model previously discussed so that the diversity of the solution sets can be associated, timed and managed across the continuum of investment.

This means that we need to understand how technology, business, culture and economics intersect to affect the behavior of adopters of disruptive innovation so we can understand where, when, how and if to invest.

If this is done rationally, we will be able to demonstrate how a formalized innovation lifecycle management process delivers transparency and provides a RROI (reduction of risk on investment) over the life of the investment strategy. 

It means we will have a much more leveraged ability to proactively invest in the necessary people, process and technology ahead of the mainstream emergence of the disruptor by building a business case to do so.

Let’s see how we can do that…

22. Understand Technology Adoption Lifecycle

This model is what we use to map the classical adoption cycle of disruptive innovation/technology and align it to a formalized strategic innovation lifecycle management process.

If you look at the model on the top/right, it shows how innovators initially adopt "bleeding edge" technologies/products which through uptake ultimately drive early adopters to pay attention.

It’s at this point within the strategic innovation framework that we identify and prioritize investment in these technologies as they begin to evolve and mature.  As business opportunities avail themselves and these identified and screened disruptive technologies are vetted, certain of them are incubated and seeded as they become emerging solutions which add value and merit further investment.

As they mature and "cross the chasm" then the early majority begins to adopt them and these technologies become part of the portfolio development process.  Some of these solutions will, over time, go away due to natural product and market behaviors, while others go through the entire area under the curve and are managed accordingly.

Pairing the appetite of the "consumer" against the maturity of the product/technology is a really important point.  Constantly reassessing the value the solution brings to the table, and whether a better, faster, cheaper mousetrap may already be on your radar, is critical.

This isn’t rocket science, but it does take discipline and a formal process.  Understanding how the dynamics of culture, economy, technology and business are changing will only make your decisions more informed and accurate and your investments more appropriately aligned to the business needs.

23. Manage Your Innovation Pipeline

This slide is another example of the various mechanisms of managing your innovation pipeline.  It is a representation of how one might classify and describe the maturation of a technology over time as it matures into a portfolio solution:

     * Sensing
     * Screening
     * Developing
     * Commercializing

In a non-commercial setting, the last stage might be described as "blessed" or something along those lines.

The inputs to this pipeline are just as important as the outputs; taking cues from customers and from internal and external market elements is critical for a rounded decision fabric.  This is where that intersection of forces comes into play again.  Formally looking at all the elements and evaluating your efforts, the portfolio and the business needs yields a really interesting by-product: Transparency…

24. Provide Transparency in Portfolio Effectiveness

I didn’t invent this graph, but it’s one of my favorite ways of visualizing my investment portfolio by measuring in three dimensions: business impact, security impact and monetized investment.  All of these definitions are subjective within your organization (as well as how you might measure them.)

The Y-axis represents the "security impact" that the solution provides.  The X-axis represents the "business impact" that the  solution provides while the size of the dot represents the capex/opex investment made in the solution.

Each of the dots represents a specific solution in the portfolio.

If you have a solution that is a large dot toward the bottom-left of the graph, one has to question the reason for continued investment since it provides little in the way of perceived security and business value with high cost.   On the flipside, if a solution is represented by a small dot in the upper-right, the bang for the buck is high as is the impact it has on the organization.

The goal would be to get as many of your investments in your portfolio from the bottom-left to the top-right with the smallest dots possible.
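
If you want to play with this yourself, the chart takes roughly a dozen lines of matplotlib. The solutions and scores below are entirely invented, and the 0-10 scales stand in for whatever your organization uses to measure business and security impact.

```python
import matplotlib.pyplot as plt

# (name, business impact, security impact, annual spend in $K) -- invented numbers
portfolio = [
    ("Legacy IDS",       2, 3, 400),
    ("Email filtering",  6, 5, 150),
    ("DLP/CMP",          7, 8, 250),
    ("Full-disk crypto", 8, 7,  90),
]

fig, ax = plt.subplots()
for name, biz, sec, spend in portfolio:
    ax.scatter(biz, sec, s=spend, alpha=0.5)  # bubble area tracks capex/opex investment
    ax.annotate(name, (biz, sec))

ax.set_xlabel("Business impact")
ax.set_ylabel("Security impact")
ax.set_xlim(0, 10)
ax.set_ylim(0, 10)
ax.set_title("Security portfolio: aim for small dots, top-right")
plt.savefig("portfolio.png")
```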

This transparency and the process by which the portfolio is assessed is delivered as an output of the strategic innovation framework which is really comprised of part art and part science.

25. Balancing Art and Science

Andy Jaquith, champion of all things measured, who is now at Yankee and was previously at security consultancy @Stake, wrote a very interesting paper suggesting that we might learn quite a bit about managing a security portfolio from the investment community on Wall Street.

Andy suggested, as I alluded to above, that this portfolio management concept — while not exactly aligned — is indeed as much art as it is science, and he elegantly suggested that using a framework to define a security strategy over time is enabled by a mature process:

"While the analogy is imperfect, security managers should be able to use the tools of unique and systematic management to create more-balanced security strategies."

I couldn’t agree more 😉

26. How Are You Doing?


If your CEO/CIO/CFO came to you today and put in front of you this list of disruptive innovation/technology and asked how these might impact your existing security strategy and what you were doing about it, what would your answer be?

Again, many of the security practitioners I have spoken to can articulate in some form how their existing technology investments might be able to absorb some of the impact this disruption delivers, but many have no formalized process to describe why or how.

Luck?  Serendipity?  Good choices?  Common sense?

Unfortunately, without a formalized process that provides the transparency described above, it becomes very difficult to credibly demonstrate that the appropriate amount of long-term strategic planning has been done, and that will likely cause angst and concern in the next budget cycle when money for new technology is requested.

27. Ranum for President
At a minimum, what the business wants to know is whether, given the investment made, they are more or less at risk than they were before the investment was made (see here for what they really want to know.)

That’s a heady question, and without transparency and process it’s one most folks would have a difficult time answering without relying purely on instinct.  "I guess" doesn’t count.

To make matters worse, people often confuse being "secure" with being less at risk, and I’m not sure that’s always a good thing.  You can be very secure but make it very difficult for the business to conduct business.  This elevates risk, which is bad.

What we really seek to do is balance information sharing with the need to manage risk to an acceptable level.  So when folks ask if the future will be more "secure," I love to refer them to Marcus Ranum’s quote in the slide above: "…it will be just as insecure as it possibly can, while still continuing to function.  Just like it is today."

What this really means is that if we’re doing our job in the world of security, we’ll use the lens that a strategic innovation framework provides and pair it with the needs of the business to deliver a "security supply chain" that is just-in-time and provides no less and no more than what is needed to manage risk to an acceptable level.

I do hope that this presentation gives you some ideas as to how you might take a longer term approach to delivering a strategic service even in the face of disruptive innovation/technology.

/Hoff


Security and Disruptive Innovation Part III: Examples of Disruptive Innovation/Technology in the Security Space

November 24th, 2007

Continuing on from my last post, Security and Disruptive Innovation Part II: Examples of Disruptive Innovation/Technology in the Security Space, we’re going to finish up the tour of some security-specific examples, reflecting upon security practices, movements and methodologies and how disruptors, market pressures and technology are impacting what we do and how.

16.  Software as a Service (SaaS)
SaaS
is a really interesting disruptive element to the traditional approach of
deploying applications and services; so much so that in many cases, the
business has the potential to realize an opportunity to sidestep IT and
Security altogether by being able to spin up a new offering without involving either group. 

There’s no complex infrastructure to buy and install, no obstruction to the business process.  Point, click,
deploy.  The rationalization of reduced time to market, competitive
advantage and low costs are very, very sexy concepts.

On the one hand, we have the agility, flexibility and innovation that SaaS brings, but we also need to recognize how SaaS intersects with the lifecycle management of applications.  The natural commoditization of software functionality yielded as a by-product of the "webification" of many older applications makes SaaS even more attractive, as it offers a more cost-effective alternative.  Take WebEx, Microsoft Live, Salesforce.com and Google Apps as examples.

There are a number of other interesting collision spaces that impact information security.  Besides issues surrounding the application of general security controls in a hosted model, since the application and data are hosted offsite, understanding how and where data is stored, backed up, consumed, re-used and secured throughout is very important.  As security practitioners, we lose quite a bit of visibility from an operational perspective in the SaaS model.

Furthermore, one of the most important issues surrounding data security and SaaS is the issue of portability; can you take the data and transpose its use from one service to another?  Who owns it?  What format is it in?  If the investment wager in service from a SaaS company does not pay off, what happens to the information?

SaaS is one of the elements in combination with virtualization and utility/grid computing that will have a profound impact on the way in which we secure our assets.  See the section on next generation centers of data and information centricity below.

17.  Virtualization
Virtualization is a game-changing technology enabler that provides economic, operational and resilience benefits to the business.  The innovation delivered by this disruptor is plainly visible.

Virtualization in servers today allows us to realize the first of many foundational building blocks of future operating system architectures and next-generation computing platforms, such as the promises offered by grid and utility computing models.

While many of the technology advancements related to these "sidelined futures" have been in the works for many years, most have failed to achieve mainstream adoption because, despite being technically feasible, they were not economically viable.  This is changing.  Grid and utility computing are starting to really take hold thanks to low-cost compute stacks, high-speed I/O, and distributed processing/virtualization capabilities.

Virtualization is not constrained simply to the physical consolidation of server iron; it extends to all elements of the computing experience: desktops, data, networks, applications, storage, provisioning, deployment and security.

It’s very clear that, as with most emerging technologies, we are in the position of playing catch-up with securing the utility that virtualization delivers.  We’re seeing wholesale shifts in the operationalization of IT resources, and it will continue to radically impact the way in which we think about how to secure the assets most important to us.

In many cases, those who were primarily responsible for the visibility and security of information across well-defined boundaries of trust, classification, and distribution, will find themselves in need of new methods, tools and skillsets when virtualization is adopted in their enterprise.

  To generally argue whether virtualization provides "more" or "less" security as compared to non-virtualized environments is an interesting debate, but one that offers little in the way of relevant assistance to those faced with securing virtualized environments today. 

Any emerging technology yields new attack surfaces, exposes vulnerabilities and provides new opportunities related to managing risk when threats arise.  However, how much "more" or "less" secure one is when implementing virtualization is just as subjective a measurement, dependent upon business impact, how one provisions, administers and deploys solutions, and how one ultimately applies security controls to the environment.

Realistically, if your security is not up to par in non-virtualized, physically-isolated infrastructure, you will be comforted by the lack of change when deploying virtualization; it will be equally as good…

There are numerous resources available now discussing the "security" things we should think about when deploying virtualization.  You can find many on my blog here.

18.  De-/Re-Perimeterization
This topic is near and dear to my heart and inspires some very passionate discussion when raised amongst our community. 

Some of the reasons for heated commentary come from the poor marketing of the underlying message as well as the name of the concept. 

Whether you call it de-perimeterization, re-perimeterization or radical externalization, this concept argues that the way in which security is practiced today is outdated and outmoded, requiring a new model that banishes the notion that the inside and outside of our companies are in any way distinguishable today, and thus that our existing solutions are ineffective at defending them.

De-/Re-perimeterization does not mean that you should scrap your security program or controls in favor of a new-fangled dogma and set of technology.  It doesn’t mean that one should throw away the firewalls so abundantly prevalent at the "perimeter" borders of the network.

It does, however, suggest you should redefine the notion of the perimeter.  The perimeter, despite its many holes, is like a colander — filtering out the big chunks at the edge.  However, the problem doesn’t lie with an arbitrary line in the sand; it permeates the computing paradigm and access modalities we’ve adopted to provide access to our most important assets.

Trying to draw a "perimeter" box around an amorphous and dynamic abstraction of our intellectual property in any form is a losing proposition.

However, the perimeter isn’t disappearing.  In fact, I maintain that it’s multiplying, but the diameter is collapsing.

Every element in the network is becoming its own "micro-perimeter," and we have to think about how we can manage and secure hundreds or thousands of these micro-perimeters.  That means re-thinking how we focus on solving the problems we face today, and what those problems actually are, without being held hostage by vendors who constantly push the equivalent of vinyl siding, in the name of "defense in depth," while the foundations of our houses silently rot away.

"Defense in depth" has really become "defense in width."  As we deploy more and more security "solutions" all wishing to be in-line with one another and do not interoperate, intercommunicate or integrate, we’re not actually solving the problem, we’re treating the symptoms.

We really need endpoints that can self-survive in assuredly hostile environments using mutual authentication and encryption of data which can self-describe the nature of the security controls needed to protect it.  This is the notion of information survivability versus information security.

This is very much about driving progress through pressure on
developers and vendors to produce more secure operating systems,
applications and protocols.  It will require — in the long term —
wholesale architectural changes to our infrastructure and architecture. 

The reality is that these changes are arriving in the form of things like virtualization, SaaS, and even the adoption of consumer technologies as they force us to examine what, how and why we do what we do.

Progress is being made and will require continued effort to realize the benefits that are to come.

19.  Information Centricity
Building off the themes of SaaS and the de-/re-perimeterization concepts, the notion of what and how we protect our information really comes to light in the topic of information centricity.

You may have heard the term "data-centric" security, but I despise this term because quite frankly, most individuals and companies are overloaded; we’re data rich and information poor.

What we need to do is allow ourselves not to be overwhelmed by the sheer mountains of "data" but rather determine what "information" matters to us most and organize our efforts around protecting it in context.

Today we have networks which cannot provide context and hosts that cannot be trusted to report their status, so it’s no wonder we’re in a heap of trouble.

We need to look at the tenets described in the de-/re-perimeterization topics above and recognize the wisdom of the notion that "…access to data should be controlled by the security attributes of the data itself." 

If we think of controlling the flow or "routing" of information by putting in place classification systems that work (content in context…) we have a fighting chance of ensuring that the right data gets to only the right people at the right time.

Without blurring the discussion with the taglines of ERM/DRM, controlling information flow and becoming information centric rather than host or network centric is critically important, especially when you consider the fact that your data is not where you think it is…
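
For a sense of what "content in context" flow control could look like in its most naive form, here is a deliberately simple sketch: an ordered label lattice and a check that only releases information to destinations cleared for it. The labels, clearances and single-dimension lattice are assumptions for illustration; a real classification system also has to track the changing business context described above.

```python
# Ordered sensitivity labels; higher index = more sensitive
LEVELS = ["public", "internal", "confidential", "restricted"]

def may_flow(data_label: str, destination_clearance: str) -> bool:
    """Information may only move to destinations cleared at or above its label."""
    return LEVELS.index(destination_clearance) >= LEVELS.index(data_label)

def route(message: dict, destinations: dict) -> list:
    """Return only the destinations this piece of information is allowed to reach."""
    return [name for name, clearance in destinations.items()
            if may_flow(message["label"], clearance)]

msg = {"label": "confidential", "body": "Q3 board deck"}
dests = {"partner-extranet": "internal",
         "finance-team": "restricted",
         "public-wiki": "public"}
print(route(msg, dests))  # ['finance-team']
```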

20.  Next Generation Centers of Data
This concept is clear and concise.

Today the notion of a "data center" is a place where servers go to die.

A "center of data" on the other hand, is an abstraction that points to anywhere where data is created, processed, stored, secured and consumed.  That doesn’t mean a monolithic building with a keypad out front and a chunk of A/C and battery backup.

In short, thanks to innovation such as virtualization, grid/utility services, SaaS, de-/re-perimeterization and the consumerization of IT, can you honestly tell me that you know where your data is and why?  No.

The next-generation centers of data really become the steam that feeds the "data pumps" that power information flow.  Even if the compute stacks become physically consolidated in one sense, the processing and information flow become more distributed.

Processing architectures and operational realities are starting to provide radically different approaches to the traditional data center.  Take Sun’s Project Blackbox or Google’s distributed processing clusters, for example.  Combined with grid/utility computing models, instead of fixed resource affinity, one looks at pooled sets of resources and distributed computing capacity which are not constrained by the physical brick and mortar wallspaces of today.

If applications, information, processes, storage, backup, and presentation are all distributed across these pools of resources, how can the security of today provide what we need to ensure even the very basic constructs of confidentiality, integrity and availability?

Next we will explore how to take these and future examples of emerging disruptive innovation and map them to a framework which will allow you to begin embracing them rather than reacting to them after the fact.


Hypervisors Are Becoming a Commodity…Virtualization Is a Feature?

November 14th, 2007

A couple of weeks ago I penned a blog entry titled "The Battle for the HyperVisor Heats Up" in which I highlighted an announcement from Phoenix Technologies detailing their entry into the virtualization space with their BIOS-enabled VMM/Hypervisor offering called HyperCore.

This drew immediate parallels (no pun intended) to VMware and Xen’s plans to embed virtualization capabilities into hardware.

The marketing continues this week with interesting announcements from Microsoft, Oracle and VMware:

  1. VMware offering VMware Server 2 as a free virtualization product to do battle against…
  2. Oracle offering "Oracle VM" for free (with paid support if you like), which claims to be three times as efficient as VMware — based on Xen.
  3. Microsoft officially re-badging its server virtualization technology as Hyper-V (née Viridian), detailing both a stand-alone Hyper-V Server as well as technology integrated into W2K8 Server.

It seems that everyone and their mother is introducing a virtualization platform, and the commonality of basic functionality among them demonstrates how the underlying virtualization enabler — the VMM/Hypervisor — is becoming a commodity.

We are sure to see fatter, thinner, faster, "more secure" or more open Hypervisors, but this will be an area with less and less differentiation.  Table stakes.  Everything’s becoming virtualized, so a VMM/Hypervisor will be the underlying "OS" enabling that transformation.

To illustrate the commoditization trend as well as a rather fractured landscape of strategies, one need only look at the diversity in existing and emerging VMM/Hypervisor solutions.   Virtualization strategies are beginning to revolve around a set of distinct approaches where virtualization is:

  1. Provided for and/or enhanced in hardware (Intel, AMD, Phoenix)
  2. A function of the operating system (Linux, Unix, Microsoft)
  3. Delivered by means of an enabling software layer (nee
    platform) that is deployed across your entire infrastructure (VMware, Oracle)
  4. Integrated into the larger Data Center "Fabric" or Data Center OS (Cisco)
  5. Transformed into a Grid/Utility Computing model for service delivery

The challenge for a customer is making the decision on whom to invest in now.  Given the fact that there is not a widely-adopted common format for VM standardization, the choice today of a virtualization vendor (or vendors) could profoundly affect one’s business in the future, since we’re talking about a fundamental shift in how your "centers of data" manifest.

What is so very interesting is that if we accept virtualization as a feature defined as an abstracted platform isolating software from hardware then the next major shift is the extensibility, manageability and flexibility of the solution offering as well as how partnerships knit out between the "platform" providers and the purveyors of toolsets.

It’s clear that VMware’s lead in the virtualization market is right inline with how I described the need for differentiation and extensibility both internally and via partnerships. 

VMotion is a classic example; it’s clearly an internally-generated killer app that the other players do not currently have, and it really speaks to being able to integrate virtualization as a "feature" into the combined fabric of the data center.  Binding networking, storage and computing together is critical.  VMware has a slew of partnerships (and potential acquisitions) that enable even greater utility from their products.

Cisco has already invested in VMware and a recent demo I got of Cisco’s VFrame solution shows they are serious about being able to design, provision, deploy, secure and manage virtualized infrastructure up and down the stack, including servers, networking, storage, business process and logic.

In the next 12 months or so, you’ll be able to buy a Dell or HP server using Intel or AMD virtualization-enabled chipsets pre-loaded with multiple VMM/Hypervisors in either flash or BIOS.  How you manage, integrate and secure it with the rest of your infrastructure — well, that’s the fun part, isn’t it?

I’ll bet we’ll see more and more "free" commoditized virtualization platforms with the wallet ding coming from the support and licenses to enable third party feature integration and toolsets.

/Hoff