
Archive for the ‘Security Innovation & Imagination’ Category

It’s Virtualization March Madness! Up First, Montego Networks

March 27th, 2008 No comments

If you want to read about Montego Networks right off the bat, you can skip the Hoff-Tax and scroll down to the horizontal rule and start reading.  Though I’ll be horribly offended, I’ll understand…

I like being contradictory, even when it appears that I’m contradicting myself.  I like to think of it as giving a balanced perspective on my schizophrenic self…

You will likely recall that my latest post suggested that the real challenge for virtualization at this stage in the game is organizational and operational and not technical. 

Well, within the context of this post, that’s obviously half right, but it’s an incredibly overlooked fact that is causing distress in most organizations, and it’s something that technology — as a symptom of the human condition — cannot remedy.

But back to the Tech.

The reality is that, for reasons I’ve spoken of many times, our favorite ISVs have been a little handicapped by what the virtualization platforms offer up in terms of the proper integration points against which we can gain purchase from a security perspective.  They have to sell what they’ve got while trying to remain relevant, all the while watching the ground drop out from beneath them.

These vendors have a choice: employ some fancy marketing messaging to make it appear as though the same products you run on a $50,000+ dedicated security appliance will actually perform just as well in a virtual form.

Further, tell you that you’ll enjoy just as much visibility without disclosing the limitations of interfacing with a virtual switch that makes it next to impossible to replicate most complex non-virtualized topologies.

Or, just wait it out and see what happens, hoping to sell more appliances in the meantime.

Some employ all three strategies (with a fourth being a little bit of hope).

Some of that hoping is over and is on its way to being remedied with enablers like VMware’s VMsafe initiative.  It’s a shame that we’ll probably end up with a battle of APIs, with ISVs having to choose which virtualization platform provider’s API to support rather than a standard across multiple platforms.

Simon Crosby from Xen/Citrix made a similar comment in this article:

While I totally agree with his sentiment, I’m not sure Simon would be as vocal or egalitarian had Citrix been first out of the gate with their own VMsafe equivalent.  It’s always sad when one must plead for standardization when one isn’t in control of the standards…and by the way, Simon, nobody held a gun to the heads of the 20 companies that rushed for the opportunity to be first out of the gate with VMsafe as it becomes available.

While that band marches on, some additional measure of aid may come from innovative youngbloods looking to build and sell you the next better mousetrap.


As such, in advance of the RSA Conference in a couple of weeks, the security world’s all aflutter with the sounds of start-ups being born out of stealth as well as new-fangled innovation clawing its way out of up-starts seeking to establish a beachhead in the attack on your budget.

With the normal blitzkrieg of press releases that will undoubtedly make their way to your doorstep, I thought I’d comment on a couple of these companies in advance of the noise.

A lot of what I want to say is sadly under embargo, but I’ll get further in-depth later when I’m told I can take the wraps off.  You should know that almost all of these emerging solutions, as with the one below, operate as virtual appliances inside your hosts and require close and careful configuration of the virtual networking elements therein.

If you go back to the meat of the organizational/operational issue I describe above, who do you think has access to and control over the virtual switch configurations?  The network team?  The security team?  How about the virtual server admin team…are you concerned yet?

Here’s my first Virtualized March Madness (VMM, get it!) ISV:

  • Montego Networks – John Peterson used to be the CTO at Reflex, so he knows a thing or two about switching, virtualization and security.  I very much like Montego’s approach to solving some of the networking issues associated with vSwitch integration and, better yet, they’ve created a very interesting business model that is actually something like VMsafe in reverse.

    Essentially Montego’s HyperSwitch works in conjunction with the integrated vSwitch in the VMM and uses some reasonably elegant networking functionality to classify traffic and either enforce dispositions natively using their own "firewall" technologies (L2-L4) or — and this is the best part — redirect traffic to other named security software partners to effect disposition (see the sketch below).

    If you look at Montego’s website, you’ll see that they list StillSecure and BlueLane as candidates for what they call HyperVSecurity partners.  They also do some really cool stuff with NetFlow.

    Neat model.  When VMsafe is available, Montego should then allow these other third-party ISVs to take advantage of VMsafe (by virtue of the HyperSwitch) without the ISVs having to actually modify their code to do so – Montego will build that to suit.  There’s a bunch of other stuff that I will write about once the embargo is lifted.

    I’m not sure how much runway and strategic differentiation Montego will have from a purely technical perspective as VMsafe ought to level the playing field for some of the networking functionality with competitors, but the policy partnering is a cool idea. 

    We’ll have to see what the performance implications are given the virtual appliance model Montego (and everyone else) has employed.  There’s lots of software in them thar hills doing the flow/packet processing and enacting dispositions…and remember, that’s all virtualized too.

    In the long term, I expect we’ll see some of this functionality appear natively in other virtualization platforms.

    We’ll see how well that prediction works out over time as well as keep an eye out for that Cisco virtual switch we’ve all been waiting for…*
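
Conceptually, the classify-then-disposition model reads something like the sketch below.  This is purely my own illustration of the idea as I understand it; none of the names, ports or rules come from Montego’s actual product.

```python
from dataclasses import dataclass

@dataclass
class Flow:
    src_ip: str
    dst_ip: str
    dst_port: int
    protocol: str  # "tcp", "udp", ...

# Hypothetical L2-L4 policy enforced natively at the virtual switch layer
NATIVE_RULES = [
    {"dst_port": 25, "action": "drop"},   # e.g. no outbound SMTP from guests
]

# Hypothetical traffic classes handed off to partner security VMs
PARTNER_REDIRECTS = {
    "http": "partner-ips-vm",    # e.g. an IPS virtual appliance
    "sql":  "partner-dbfw-vm",   # e.g. a database firewall appliance
}

def classify(flow: Flow) -> str:
    if flow.dst_port in (80, 443):
        return "http"
    if flow.dst_port == 1433:
        return "sql"
    return "other"

def disposition(flow: Flow) -> str:
    # 1) native L2-L4 enforcement first
    for rule in NATIVE_RULES:
        if rule["dst_port"] == flow.dst_port and rule["action"] == "drop":
            return "dropped natively"
    # 2) otherwise, redirect interesting classes to a partner appliance
    partner = PARTNER_REDIRECTS.get(classify(flow))
    return f"redirected to {partner}" if partner else "forwarded"

print(disposition(Flow("10.0.0.5", "10.0.0.9", 80, "tcp")))  # redirected to partner-ips-vm
```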

I’ll shortly be talking about Altor Networks and Blue Lane’s latest goodies.

If you’ve got a mousetrap you’d like to see in lights here, feel free to ping me, tell me why I should care, and we’ll explore your offering.  I guarantee that if it passes the sniff test here it will likely mean someone else will want a whiff.

/Hoff

* Update: Alan over at the Virtual Data Center Blog did a nice write-up on his impressions and asks why this functionality isn’t in the vSwitch natively.  I’d pile onto that query, too.  Also, I sort of burned myself by speaking to Montego because the details of how they do what they do are under embargo for a little while longer based on my conversation, so I can’t respond to Alan…

VMware’s VMsafe: Security Industry Defibrillator…Making Dying Muscle Twitch Again.

March 2nd, 2008 6 comments

Nurse, 10 cc’s of Adrenalin, stat!

As I mentioned in a prior posting, VMware’s VMsafe has the potential to inject life back into the atrophied and withering heart muscle of the security industry and raise the prognosis from DOA to the potential for a vital economic revenue stream once more.

How?  Well, the answer to this question really comes down to whether you believe that keeping a body on assisted life support means that the patient is living or simply alive, and the same perspective goes for the security industry.

With the inevitable consolidation of solutions and offerings in the security industry over the last few years, we have seen the commoditization of many markets as well as the natural emergence of others in response to the ebb and flow of economic, technological, cultural and political forces.

One of the most disruptive and innovative forces, one that is causing arrhythmia in the pulse of both consumers and providers and driving the emergence of new market opportunities, is virtualization.

For the last two years, I’ve been waving my hands about the fact that virtualization changes everything across the information lifecycle.  From cradle to grave, the evolution of virtualization will profoundly change what, where, why and how we do what we do.

I’m not claiming that I’m the only one, but it was sure lonely from a general security practitioner’s perspective up until about six months ago.  In the last four months, I’ve given two keynotes and three decently visible talks on VirtSec, and I have 3-4 more teed up over the next 3 months, so somebody’s interested…better late than never, I suppose.

How’s the patient?

For the purpose of this post, I’m going to focus on the security implications of virtualization and simply summarize by suggesting that virtualization has, up until now, quietly marked a tipping point where we see the disruption stretch security architectures and technologies to their breaking point, in many cases rendering much of our invested security portfolio redundant and irrelevant.

I’ve discussed why and how this is the case in numerous posts and presentations, but it’s clear (now) to most that the security industry has been out of phase with what has plainly been a well-signaled (r)evolution in computing.

Is anyone really surprised that we are caught flat-footed again?  Sorry to rant, but…

This is such a sorry indicator of why things are so terribly broken with "IT/Information Security" as it stands today; we continue to try and solve short-term problems with even shorter-term "solutions" that do nothing more than perpetuate the problem — and we do so in such a horrific display of myopic dissonance that it’s a wonder we function at all.  Actually, it’s a perfectly wonderful explanation as to why criminals are always 5 steps ahead — they plan strategically while acting tactically against their objectives, and they aren’t afraid to respond to the customers proactively.

So, we’ve got this fantastic technological, economic, and cultural transformation occurring over the last FIVE YEARS (at least), and the best we’ve seen as a response from most traditional security vendors is that they have simply marketed their solutions thinly as "virtualization ready" or "virtualization aware" when, in fact, these are simply hollow words for how to make their existing "square" products fit into the "round" holes of a problem space that virtualization exposes and creates.

Firewalls, IDS/IPSs, UTM, NAC, DLP — all of them have limited visibility in this rapidly "re-perimeterized" universe in which our technology operates, and in most cases we’re busy looking at uninteresting and practically non-actionable things anyway.  As one of my favorite mentors used to say, "we’re data rich, but information poor."

The vendors in these example markets — with or without admission — are all really worried about what virtualization will do to their already shrinking relevance.  So we wait.

Doctor, it hurts when I do this…

VMsafe represents a huge opportunity for these vendors to claw their way back to life, making their solutions relevant once more, and perhaps even more so.

Most of the companies who have so far signed on to VMsafe will, as I mentioned previously, need to align roadmaps and release new or modified versions of their product lines to work with the new APIs and management planes.

This is obviously a big deal, but one that is unavoidable for these companies — most of which are clumsy and generally not agile or responsive to third parties.  However, you don’t get 20 of some of the biggest "monoliths" of the security world scrambling to sign up for a program like VMsafe just for giggles — and the reality is that the platform version of VMware’s virtualization products that will support this technology isn’t even available yet.

I am willing to wager that you will, in extremely short order given VMware’s willingness to sign on new partners, see many more vendors flock to the program.  I further maintain that despite their vehement denials, NAC vendors (with pressure already mounting from the oncoming tidal wave of Microsoft’s NAP) will also adapt their wares to take advantage of this technology for reasons I’ve outlined here.

They literally cannot afford not to.

I am extremely interested in what other virtualization vendors’ responses will be — especially Citrix’s.  It’s pretty clear what Microsoft has in mind.  It’s going to further open up opportunities for networking vendors such as Cisco, F5, etc., and we’re going to see the operational, technical, administrative, "security" and governance lines blur even further.

Welcome back from the dead, security vendors, you’ve got a second chance in life.  I’m not sure it’s warranted, but it’s "natural" even though we’re going to end up with a very interesting Frankenstein of a "solution" over the long term.

The Doctor prescribes an active lifestyle, healthy marketing calisthenics, a diet with plenty of roughage, and jumping back on the hamster wheel of pain for exercise.

/Hoff

A Worm By Any Other Name Is…An Information Epidemic?

February 18th, 2008 2 comments

Martin McKeay took exception to some interesting Microsoft research suggesting that the same methodologies and tactics used by malicious software such as worms/viruses could also be used as an effective distributed defense against them:

Microsoft researchers are hoping to use "information epidemics" to distribute software patches more efficiently.

Milan Vojnović and colleagues from Microsoft Research in Cambridge, UK, want to make useful pieces of information such as software updates behave more like computer worms: spreading between computers instead of being downloaded from central servers.

The research may also help defend against malicious types of worm, the researchers say.

Software worms spread by self-replicating. After infecting one computer they probe others to find new hosts. Most existing worms randomly probe computers when looking for new hosts to infect, but that is inefficient, says Vojnović, because they waste time exploring groups or "subnets" of computers that contain few uninfected hosts.

Despite the really cool moniker (information epidemic), this isn’t a particularly novel distribution approach; in fact, we’ve seen malware do this.  However, it is interesting to see that an OS vendor (Microsoft) is continuing to actively engage in research to explore this approach despite the opinions of others who simply claim it’s a bad idea.  I’m not convinced either way, however.

I, for one, am all for resilient computing environments that are aware of their vulnerabilities and can actively defend against them.  I will be interested to see how this new paper builds off of work previously produced on the subject and its corresponding criticism.

Vojnović’s team have designed smarter strategies that can exploit the way some subnets provide richer pickings than others.

The ideal approach uses prior knowledge of the way uninfected computers are spread across different subnets. A worm with that information can focus its attention on the most fruitful subnets – infecting a given proportion of a network using the smallest possible number of probes.

But although prior knowledge could be available in some cases – a company distributing a patch after a previous worm attack, for example – usually such perfect information will not be available. So the researchers have also developed strategies that mean the worms can learn from experience.

In the best of these, a worm starts by randomly contacting potential new hosts. After finding one, it uses a more targeted approach, contacting only other computers in the same subnet. If the worm finds plenty of uninfected hosts there, it keeps spreading in that subnet, but if not, it changes tack.
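
For the curious, the learn-as-you-go strategy described above boils down to something like the following sketch.  It’s entirely my own toy version; the threshold and window values are invented and are not from the researchers’ paper.

```python
import random

def distribute_patch(hosts_by_subnet, hit_threshold=0.2, window=10):
    """Probe randomly until an unpatched host is found, then focus on that
    host's subnet, and change tack when the subnet stops paying off."""
    all_hosts = [(s, h) for s, hosts in hosts_by_subnet.items() for h in hosts]
    patched, probes = set(), 0
    focus_subnet, recent_hits = None, []

    while len(patched) < len(all_hosts):
        if focus_subnet is not None:
            candidates = [(focus_subnet, h) for h in hosts_by_subnet[focus_subnet]]
        else:
            candidates = all_hosts
        subnet, host = random.choice(candidates)
        probes += 1
        hit = host not in patched
        if hit:
            patched.add(host)
            focus_subnet = subnet          # fruitful subnet: keep spreading here
        recent_hits = (recent_hits + [hit])[-window:]
        # if recent probes yield too few new hosts, go back to random probing
        if focus_subnet is not None and len(recent_hits) == window \
                and sum(recent_hits) / window < hit_threshold:
            focus_subnet, recent_hits = None, []
    return probes

subnets = {f"10.0.{i}.0/24": [f"10.0.{i}.{j}" for j in range(1, 21)] for i in range(5)}
print(distribute_patch(subnets), "probes to patch", 5 * 20, "hosts")
```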

That being the case, here’s some of Martin’s heartburn:

But the problem is, if both beneficial and malign software show the same basic behavior patterns, how do you differentiate between the two? And what’s to stop the worm from being mutated once it’s started, since bad guys will be able to capture the worms and possibly subvert their programs.

The article isn’t clear on how the worms will secure their network, but I don’t believe this is the best way to solve the problem that’s being expressed. The problem being solved here appears to be one of network traffic spikes caused by the download of patches. We already have widely used protocols that solve this problem, bittorrents and P2P programs. So why create a potentially hazardous situation using worms when a better solution already exists. Yes, torrents can be subverted too, but these are problems that we’re a lot closer to solving than what’s being suggested.

I don’t want something that’s viral infecting my computer, whether it’s for my benefit or not. The behavior isn’t something to be encouraged. Maybe there’s a whole lot more to the paper, which hasn’t been released yet, but I’m not comfortable with the basic idea being suggested. Worm wars are not the way to secure the network.

I think that some of the points that Martin raises are valid, but I also think that he’s reacting mostly out of fear to the word ‘worm.’  What if we called it "distributed autonomic shielding?" 😉

Some features/functions of our defensive portfolio are going to need to become more self-organizing, autonomic and intelligent and that goes for the distribution of intelligence and disposition, also.  If we’re not going to advocate being offensive, then we should at least be offensively defensive.  This is one way of potentially doing this.

Interestingly, this dovetails into some discussions we’ve had recently with Andy Jaquith and Amrit Williams; the notion of herds or biotic propagation and response is really quite fascinating.  See my post titled "Thinning the Herd & Chlorinating the Gene Pool."

I’ve left out most of the juicy bits of the story so you should go read it and churn on some of the very interesting points raised as part of the discussion.

/Hoff

Update: Schneier thinks this is a lousy idea. That doesn’t move me one direction or the other, but I think this is cementing my opinion that had the author not used the word ‘worm’ in his analogy, the idea might not be dismissed so quickly…

Also, Wismer via a comment on Martin’s blog pointed to an interesting read from Vesselin Bontchev titled "Are "Good" Computer Viruses Still a Bad Idea?"

Update #2: See the comments section about how I think the use case argued by Schneier et al. is, um, slightly missing the point.  Strangely enough, check out the Network World article that just popped up, which says "This was not the primary scenario targeted for this research," according to a statement.

Duh.

Security Innovation & the Bendy Hammer

February 17th, 2008 4 comments

See that odd-looking hammer to the left?  It’s called the MaxiStrike from Redback Tools.

No, it hasn’t been run over by a Panzer, nor was there grease on the lens  during the photography session. 

Believe it or not, that odd little bend gives this 20-ounce mallet the following features:

     > maximize strike force

     > reduce missed hits

     > leave clearance for nailing in cramped areas

All from that one little left-hand turn from linear thought in product design.

You remember that series of posts I did on Disruptive Innovation?

This is a perfect illustration of how innovation can be "evolutionary" as opposed to revolutionary.

Incrementalism can be just as impactful as one of those tipping-point "big-bang" events that have desensitized us to some of the really cool things that pop up and can actually make a difference.

So I know this hammer isn’t going to cure cancer, but it makes for easier, more efficient and more accurate nailing.  Sometimes that’s worth a hell of a lot to someone who does a lot of hammering…

Things like this happen around us all the time — even in our little security puddle of an industry. 

It’s often quite fun when you spot them.

I bet if you tried, you can come up with some examples in security.

Well?

Ginko Financial Collapse Ultimately Yields Real Virtual Risk (Huh?)

January 15th, 2008 5 comments

I’m feeling old lately.  First it was my visceral reaction to the paranormal super-poking goings-on on Facebook and now it’s this news regarding Linden Lab’s Second Life that has my head spinning. 

It seems that the intersection between the virtual and physical worlds is continuing to inch ever closer.

In fact, it’s hitting people where it really counts, their (virtual) wallets. 

We first saw something like this bubble up with in-world gambling issues and now Linden announced in their blog today that any virtual "in-world banks" must be registered with real-world financial/banking regulatory agencies:

As of January 22, 2008, it will be prohibited to offer interest or any direct return on an investment (whether in L$ or other currency) from any object, such as an ATM, located in Second Life, without proof of an applicable government registration statement or financial institution charter. We’re implementing this policy after reviewing Resident complaints, banking activities, and the law, and we’re doing it to protect our Residents and the integrity of our economy.

Why?  It seems there’s more bad juju brewin’.  A virtual bank shuts down and defaults.  What’s next?  A virtual sub-prime loan scandal?

Since the collapse of Ginko Financial in August 2007, Linden Lab has received complaints about several in-world “banks” defaulting on their promises. These banks often promise unusually high rates of L$ return, reaching 20, 40, or even 60 percent annualized.

Usually, we don’t step in the middle of Resident-to-Resident conduct – letting Residents decide how to act, live, or play in Second Life.

But these “banks” have brought unique and substantial risks to Second Life, and we feel it’s our duty to step in. Offering unsustainably high interest rates, they are in most cases doomed to collapse – leaving upset “depositors” with nothing to show for their investments. As these activities grow, they become more likely to lead to destabilization of the virtual economy. At least as important, the legal and regulatory framework of these non-chartered, unregistered banks is unclear, i.e., what their duties are when they offer “interest” or “investments.”

There is no workable alternative. The so-called banks are not operated, overseen or insured by Linden Lab, nor can we predict which will fail or when. And Linden Lab isn’t, and can’t start acting as, a banking regulator.

Some may argue that Residents who deposit L$ with these “banks” must know they’re assuming a big risk – the high interest rates promised aren’t guaranteed, and the banks aren’t overseen by Linden Lab or anyone else. That may be true. But for all of the other reasons we’ve set out above, we can’t let this activity continue.

Thus, as we did in the past with gambling, as of January 22, 2008 we will begin removing any virtual ATMs or other objects that facilitate the operation or facilitation of in-world “banking,” i.e., the offering of interest or a rate of return on L$ invested or deposited. We ask that between now and then, those who operate these “banks” settle up on any promises they have made to other Residents and, of course, honor valid withdrawals. After that date, we may sanction those who continue to offer these services with suspension, termination of accounts, and loss of land.

Wow.  Loss of land!  I thought overdraft fees were harsh!?

Ed Felten from Freedom to Tinker summed it up nicely:

This was inevitable, given the ever-growing connections between the virtual economy of Second Life and the real-world economy. In-world Linden Dollars are exchangeable for real-world dollars, so financial crime in Second Life can make you rich in the real world. Linden doesn’t have the processes in place to license “banks” or investigate problems. Nor does it have the enforcement muscle to put bad guys in jail.

Expect this trend to continue. As virtual world “games” are played for higher and higher stakes, the regulatory power of national governments will look more and more necessary.

So far I’ve stayed away from Second Life; I’ve got enough to manage in my First one.  Perhaps it’s time to take a peek and see what all the fuss is about?

/Hoff

Thinning the Herd & Chlorinating the Malware Gene Pool…

December 28th, 2007 3 comments

Alan Shimel pointed us to an interesting article written by Matt Hines in his post here regarding the "herd intelligence" approach toward security.  He followed it up here. 

All in all, I think both the original article that Andy Jaquith was quoted in as well as Alan’s interpretations shed an interesting light on a problem solving perspective.

I’ve got a couple of comments on Matt and Alan’s scribbles.

I like the notion of swarms/herds.  The picture to the right from Science News describes the notion of "rapid response," wherein "mathematical modeling is explaining how a school of fish can quickly change shape in reaction to a predator."  If you’ve ever seen this in the wild or even in film, it’s an incredible thing to see in action.

It should then come as no surprise that I think that trying to solve the "security problem" is more efficiently performed (assuming one preserves the current construct of detection and prevention mechanisms) by distributing both functions and coordinating activity as part of an intelligent "groupthink" even when executed locally.  This is exactly what I was getting at in my "useful predictions" post for 2008:

Grid and distributed utility computing models will start to creep into security

A really interesting by-product of the "cloud compute" model is that as data, storage, networking, processing, etc. get distributed, so shall security.  In the grid model, one doesn’t care where the actions take place so long as service levels are met and the experiential and business requirements are delivered.  Security should be thought of in exactly the same way.

The notion that you can point to a physical box and say it performs function ‘X’ is so last Tuesday.  Virtualization already tells us this.  So, imagine if your security processing isn’t performed by a monolithic appliance but instead is contributed to in a self-organizing fashion wherein the entire ecosystem (network, hosts, platforms, etc.) all contribute in the identification of threats and vulnerabilities as well as function to contain, quarantine and remediate policy exceptions.

Sort of sounds like that "self-defending network" schpiel, but not focused on the network and with common telemetry and distributed processing of the problem.

Check out Red Lambda’s cGrid technology for an interesting view of this model.

This basically means that we should distribute the sampling, detection and prevention functions across the entire networked ecosystem, not just to dedicated security appliances; each of the end nodes should communicate using a standard signaling and telemetry protocol so that common threat, vulnerability and effective disposition can be communicated up and downstream to one another and one or more management facilities.
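
To make that a bit more concrete, here’s a rough sketch of the kind of telemetry exchange I mean.  It’s entirely my own toy illustration; the message fields and the confidence math are invented rather than any existing standard.

```python
from dataclasses import dataclass, asdict
import json
import time

@dataclass
class ThreatTelemetry:
    """Hypothetical telemetry record an end node might emit."""
    node_id: str
    observed_at: float
    indicator: str        # e.g. a file hash, CIDR block or URL
    classification: str   # e.g. "worm", "c2", "policy-violation"
    disposition: str      # e.g. "blocked", "quarantined", "observed"
    confidence: float

class HerdNode:
    """Toy end node that shares what it sees with its peers (and, in a real
    deployment, one or more management facilities) and raises its local
    confidence in an indicator as the rest of the herd corroborates it."""
    def __init__(self, node_id, peers=None):
        self.node_id = node_id
        self.peers = peers if peers is not None else []
        self.intel = {}   # indicator -> aggregated confidence

    def observe(self, indicator, classification, disposition, confidence=0.5):
        msg = ThreatTelemetry(self.node_id, time.time(), indicator,
                              classification, disposition, confidence)
        self._merge(msg)
        for peer in self.peers:                 # gossip up/downstream
            peer.receive(json.dumps(asdict(msg)))

    def receive(self, raw):
        self._merge(ThreatTelemetry(**json.loads(raw)))

    def _merge(self, msg):
        prior = self.intel.get(msg.indicator, 0.0)
        # corroboration from the herd pushes confidence toward 1.0
        self.intel[msg.indicator] = min(1.0, prior + msg.confidence * (1 - prior))

    def should_block(self, indicator, threshold=0.8):
        return self.intel.get(indicator, 0.0) >= threshold

a, b = HerdNode("node-a"), HerdNode("node-b")
a.peers, b.peers = [b], [a]
a.observe("bad-hash-example", "worm", "quarantined", confidence=0.6)
b.observe("bad-hash-example", "worm", "blocked", confidence=0.6)
print(b.should_block("bad-hash-example"))   # True once the herd corroborates
```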

This is what Andy was referring to when he said:

As part of the effort, security vendors may also need to begin sharing more of that information with their rivals to create a larger network effect for thwarting malware on a global basis, according to the expert.

It may be hard to convince rival vendors to work together because of the perception that it could lessen differentiation between their respective products and services, but if the process clearly aids on the process of quelling the rising tide of new malware strains, the software makers may have little choice other than to partner, he said.

Secondly, Andy suggested that basically every end-node would effectively become its own honeypot:

"By
turning every endpoint into a malware collector, the herd network
effectively turns into a giant honeypot that can see more than existing
monitoring networks," said Jaquith. "Scale enables the herd to counter
malware authors’ strategy of spraying huge volumes of unique malware
samples with, in essence, an Internet-sized sensor network."

I couldn’t agree more!  This is the sort of thing that I was getting at back in August when I was chatting with Lance Spitzner about using VMs for honeypots on distributed end nodes:

I clarified that what I meant was actually integrating a HoneyPot running in a VM on a production host as part of a standardized deployment model for virtualized environments.  I suggested that this would integrate into the data collection and analysis models the same way as a "regular" physical HoneyPot machine, but could utilize some of the capabilities built into the VMM/HV’s vSwitch to actually enable the virtualization of a single HoneyPot across an entire collection of VMs on a single physical host.

Thirdly, the notion of information sharing across customers has been implemented cross-sectionally in industry verticals with the advent of the ISACs, such as the Financial Services Information Sharing and Analysis Center, which seeks to inform and ultimately leverage distributed information gathering and sharing to protect its subscribing members.  Generally-available services like Symantec’s DeepSight have also tried to accomplish similar goals.

Unfortunately, these offerings generally lack the capacity to garner ubiquitous data gathering and real-time enforcement capabilities.

As Matt pointed out in his article, gaining actionable intelligence on the monstrous amount of telemetric data from participating end nodes means that there is a need to really prune for false positives.  This is the trade-off between simply collecting data and actually applying intelligence at the end-node and effecting disposition. 

This requires technology that we’re starting to see emerge with a small enough footprint when paired with the compute power we have in endpoints today. 

Finally, as the "network" (which means the infrastructure as well as the "extrastructure" delivered by services in the cloud) gains more intelligence and information-centric granularity, it will pick up some of the slack — at least from the perspective of sloughing off the low-hanging fruit by using similar concepts.

I am hopeful that as we gain more information-centric footholds, we won’t actually have to worry about responding to every threat, but rather only to those that might impact the most important assets we seek to protect.

Ultimately the end-node is really irrelevant from a protection perspective as it should really be little more than a presentation facility; the information is what matters.  As we continue to make progress toward more resilient operating systems leveraging encryption and mutual authentication within communities of interest/trust, we’ll start to become more resilient and information assured.

The sharing of telemetry to allow these detective and preventative/protective capabilities to self-organize and perform intelligent offensive/evasive actions will evolve naturally as part of this process.

Mooooooo.

/Hoff

Take5 (Episode #5) – Five Questions for Allwyn Sequeira, SVP of Product Operations, Blue Lane

August 21st, 2007 18 comments

This fifth episode of Take5 interviews Allwyn Sequeira, SVP of Product Operations for Blue Lane.  

First a little background on the victim:

Allwyn Sequeira is Senior Vice President of Product Operations at Blue Lane Technologies, responsible for managing the overall product life cycle, from concept through research, development and test, to delivery and support. He was previously the Senior Vice President of Technology and Operations at netVmg, an intelligent route control company acquired by InterNap in 2003, where he was responsible for the architecture, development and deployment of the industry-leading flow control platform. Prior to netVmg, he was founder, Chief Technology Officer and Executive Vice President of Products and Operations at First Virtual Corporation (FVC), a multi-service networking company that had a successful IPO in 1998. Prior to FVC, he was Director of the Network Management Business Unit at Ungermann-Bass, the first independent local area network company. Mr. Sequeira has previously served as a Director on the boards of FVC and netVmg.

Mr. Sequeira started his career as a software developer at HP in the Information Networks Division, working on the development of TCP/IP protocols. During the early 1980s, he worked on the CSNET project, an early realization of the Internet concept. Mr. Sequeira is a recognized expert in data networking, with twenty-five years of experience in the industry, and has been a featured speaker at industry-leading forums like Networld+Interop, Next Generation Networks, ISP Con and RSA Conference.

Mr. Sequeira holds a Bachelor of Technology degree in Computer Science from the Indian Institute of Technology, Bombay, and a Master of Science in Computer Science from the University of Wisconsin, Madison.

Allwyn, despite all this good schoolin’, forgot to send me a picture, so he gets what he deserves 😉
(Ed: Yes, those of you quick enough were smart enough to detect that the previous picture was of Brad Pitt and not Allwyn.  I apologize for the unnecessary froth-factor.)

 Questions:

1) Blue Lane has two distinct product lines, VirtualShield and PatchPoint.  The former is a software-based solution which provides protection for VMware Infrastructure 3 virtual servers as an ESX VM plug-in, whilst the latter offers a network appliance-based solution for physical servers.  How are these products different from either virtual switch IPSs like Virtual Iron’s or in-line network-based IPSs?

IPS technologies have been charged with the incredible mission of trying to protect everything from anything.  Overall they’ve done well, considering how much the perimeter of the network has changed and how sophisticated hackers have become. Much of their core technology, however, was relevant and useful when hackers could be easily identified by their signatures. As many have proclaimed, those days are coming to an end.

A defense department official recently quipped, "If you offer the same protection for your toothbrushes and your diamonds you are bound to lose fewer toothbrushes and more diamonds."  We think that data center security similarly demands specialized solutions.  The concept of an enterprise network has become so ambiguous when it comes to endpoints, devices, supply chain partners, etc. that we think it’s time to think more realistically in terms of trusted, yet highly available, zones within the data center.

It seems clear at this point that different parts of the network need very different security capabilities.  Servers, for example need highly accurate solutions that do not block or impede good traffic and can correct bad traffic, especially when it comes to closing network-facing vulnerability windows.  They need to maintain availability with minimal latency for starters; and that has been a sort of Achilles heel for signature-based approaches.  Of course, signatures also bring considerable management burdens over and beyond their security capabilities.

No one is advocating turning off the IPS, but rather approaching servers with more specialized capabilities.  We started focusing on servers years ago and established very sophisticated application and protocol intelligence, which has allowed us to correct traffic inline without the noise, suspense and delay that general purpose network security appliance users have come to expect.

IPS solutions depend on deep packet inspection typically at the perimeter based on regexp pattern matching for exploits.  Emerging challenges with this approach have made alert and block modes absolutely necessary as most IPS solutions aren’t accurate enough to be trusted in full library block. 

Blue Lane uses a vastly different approach.  We call it deep flow inspection/correction for known server vulnerabilities based on stateful decoding up to layer 7.  We can alert, block and correct, but most of our deployments are in correct mode, with our full capabilities enabled. From an operational standpoint we have substantially different impacts.

A typical IPS may have 10K signatures while experts recommend turning on just a few hundred.  That kind of marketing shell game (find out what really works) means that there will be plenty of false alarms, false positives and negatives and plenty of tuning.  With polymorphic attacks signature libraries can increase exponentially while not delivering meaningful improvements in protection. 

Blue Lane supports about 1000 inline security patches across dozens of very specific server vulnerabilities, applications and operating systems.  We generate very few false alarms and minimal latency.  We don’t require ANY tuning.  Our customers run our solution in automated, correct mode.

The traditional static signature IPS category has evolved into an ASIC war between some very capable players for the reasons we just discussed. Exploding variations of exploits and vectors mean that exploit-centric approaches will require more processing power.

Virtualization is pulling the data center into an entirely different direction, driven by commodity processors.  So of course our VirtualShield solution was a much cleaner setup with a hypervisor; we can plug into the hypervisor layer and run on top of existing hardware, again with minimal latency and footprint.

You don’t have to be a Metasploit genius to evade IPS signatures.  Our higher layer 7 stateful decoding is much more resilient. 
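
(Ed: To make the signature-versus-decoding distinction concrete, here’s a toy sketch of my own. It is not Blue Lane’s code, and the signatures, field names and limits are all invented.)

```python
import re

# Exploit-centric: match known attack payload patterns. Trivially evaded by
# re-encoding the exploit, and every new variant needs another signature.
EXPLOIT_SIGNATURES = [
    re.compile(rb"\x90{20,}"),        # long NOP sled
    re.compile(rb"cmd\.exe", re.I),   # suspicious payload string
]

def signature_ips(packet: bytes) -> str:
    return "block" if any(s.search(packet) for s in EXPLOIT_SIGNATURES) else "allow"

# Vulnerability-centric: decode the protocol field the flaw lives in and
# enforce what a patched server would enforce, correcting the traffic inline
# rather than dropping the whole flow.  The 64-byte limit is illustrative.
MAX_USERNAME = 64

def flow_correcting_ips(decoded_request: dict) -> dict:
    user = decoded_request.get("username", "")
    if len(user) > MAX_USERNAME:                           # the vulnerable condition
        decoded_request["username"] = user[:MAX_USERNAME]  # "correct" mode
    return decoded_request

print(signature_ips(b"\x41" * 200 + b"CMD.EXE"))                      # block
print(flow_correcting_ips({"username": "A" * 500, "password": "x"}))  # truncated
```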

2) With zero-days on the rise, pay-for-play vulnerability research and now Zero-Bay (WabiSabiLabi) vulnerability auctions and the like, do you see an uptake in customer demand for vulnerability shielding solutions?

Exploit-signature technologies are meaningless in the face of evanescent, polymorphic threats, resulting in 0-day exploits. Slight modifications to signatures can bypass IPSes, even against known vulnerabilities.  Blue Lane technology provides 0-day protection for any variant of an exploit against known vulnerabilities.  No technology can provide ultimate protection against 0-day exploits based on 0-day vulnerabilities. However, this requires a different class of hacker.

3) As large companies start to put their virtualization strategies in play, how do you see customers addressing securing their virtualized infrastructure?  Do they try to adapt existing layered security methodologies and where do these fall down in a virtualized world?

I explored this topic in depth at the Next Generation Data Center conference last week. Also, your readers might be interested in listening to a recent podcast: The Myths and Realities of Virtualization Security: An Interview.

To summarize, there are a few things that change with virtualization, that folks need to be aware of.  It represents a new architecture.  The hypervisor layer represents the un-tethering and clustering of VMs, and centralized control.  It introduces a new virtual network layer.  There are entirely new states of servers, not anticipated by traditional static security approaches (like instant create, destroy, clone, suspend, snapshot and revert to snapshot). 

Then you’ll see unprecedented levels of mobility and new virtual appliances and black-boxing of complex stacks, including embedded databases.  Organizations will have to work out who is responsible for securing this very fluid environment.  We’ll also see unprecedented scalability, with InfiniBand cores attaching LAN/SAN out to hundreds of ESX hypervisors and thousands of VMs.

Organizations will need the capability to shield these complex, fluid environments, because trying to keep track of individual VMs, states, patch levels and locations will make tuning an IPS for polymorphic attacks look like child’s play in comparison.  Effective solutions will need to be highly accurate, low-latency solutions deployed in correct mode. Gone will be the days of man-to-man blocking and tuning.  Here to stay are the days of zone defense.
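
(Ed: A toy illustration, mine rather than Allwyn’s, of the point about fluidity: a policy keyed to a host or IP address silently breaks the moment a VM is cloned or migrated, while one keyed to the VM’s identity can follow it through these new lifecycle states. Every name here is hypothetical.)

```python
# New lifecycle states that static, box-oriented controls never anticipated
VM_STATES = {"created", "running", "suspended", "snapshotted",
             "reverted", "cloned", "migrated", "destroyed"}

policy_by_ip = {"10.0.1.15": {"allow": ["tcp/443"]}}      # brittle: tied to a location
policy_by_vm = {"vm-payroll-01": {"allow": ["tcp/443"]}}  # follows the VM itself

def effective_policy(vm_id: str, current_ip: str, state: str) -> dict:
    assert state in VM_STATES
    return {
        # the IP-keyed lookup loses the policy after a live migration or clone
        "ip_keyed": policy_by_ip.get(current_ip, {"allow": []}),
        # the identity-keyed lookup still applies wherever the VM lands
        "vm_keyed": policy_by_vm.get(vm_id, {"allow": []}),
    }

# after a migration the VM picks up a new address on another host
print(effective_policy("vm-payroll-01", "10.0.7.42", "migrated"))
```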

4) VMware just purchased Determina and intends to integrate their memory firewall IPS product as an ESX VM plug-in.  Given your early partnership with VMware, are you surprised by this move?  Doesn’t this directly compete with the VirtualShield offering?

I wouldn’t read too much into this. Determina hit the wall on sales, primarily because its original memory wall technology was too intrusive and fell short of handling new vulnerabilities/exploits.

This necessitated the LiveShield product, which required ongoing updates, destroying the value proposition of not having to touch servers, once installed. So, this is a technology/people acquisition, not a product line/customer-base acquisition.

VMware was smart to get a very bright set of folks, with deep memory/paging/OS, and a core technology that would do well to be integrated into the hypervisor for the purpose of hypervisor hardening, and interVM isolation. I don’t see VMware entering the security content business soon (A/V, vulnerabilities, etc.). I see Blue Lane’s VirtualShield technology integrated into the virtual networking layer (vSwitch), as a perfect complement to anything that will come out of the Determina acquisition.

5) Citrix just acquired XenSource.  Do you have plans to offer VirtualShield for Xen? 

A smart move on Citrix’s part to get back into the game. Temporary market caps don’t matter. Virtualization matters. If Citrix can make this a two or three horse race, it will keep the VMware, Citrix, Microsoft triumvirate on their toes, delivering better products, and net good for the customer.

Regarding Blue Lane and Citrix/XenSource, we will continue to pay attention to what customers are buying as they virtualize their data centers. For now, this is a one-horse show 🙂

Oh SNAP! VMware acquires Determina! Native Security Integration with the Hypervisor?

August 19th, 2007 12 comments

Hot on the heels of becoming gigagillionaires, the folks at VMware make my day with this.  Congrats to the folks @ Determina.

Methinks that for the virtualization world, it’s a very, very good thing.  A step in the right direction.

I’m going to prognosticate that this means that Citrix will buy Blue Lane or Virtual Iron next (see bottom of the post) since their acquisition of XenSource leaves them with the exact same problem that this acquisition for VMware tries to solve:

VMware Inc., the market leader in virtualization software, has acquired Determina Inc., a Silicon Valley maker of host intrusion prevention products.

…the security of virtualized environments has been something of an unknown quantity due to the complexity of the technology and the ways in which hypervisors interact with the host OS.  Determina’s technology is designed specifically to protect the OS from malicious code, regardless of the origin of the attack, so it would seem to be a sensible fit for VMware, analysts say.

In his analysis of the deal, Gartner’s MacDonald sounded many of the same notes. "By potentially integrating Memory Firewall into the ESX hypervisor, the hypervisor itself can provide an additional level of protection against intrusions. We also believe the memory protection will be extended to guest OSs as well: VMware’s extensive use of binary emulation for virtualization puts the ESX hypervisor in an advantageous position to exploit this style of protection," he wrote.

I’ve spoken a lot recently about how much I’ve been dreading the notion that security was doomed to repeat itself with the accelerated takeoff of server virtualization, since we haven’t solved many of the most basic security problem classes.  Malicious code is getting more targeted and more intelligent, and when you combine an emerging market using hot technology without an appropriate level of security…

Basically, my concerns have stemmed from the observation that if we can’t do a decent job protecting physically separate yet interconnected network elements with all the security fu we have, what’s going to happen when the "…network is the computer" (or vice versa)?  Just search for "virtualization" via the Lijit Widget above for more posts on this…

Some options for securing virtualized guest OSs in a VM are pretty straightforward:

  1. Continue to deploy layered virtualized security services across VLAN segments of which each VM is a member (via IPSs, routers, switches, UTM devices…)
  2. Deploy software like Virtual Iron’s, which looks like a third-party vSwitch IPS on each VM
  3. Integrate something like Blue Lane’s ESX plug-in, which interacts with and operates at the VMM level
  4. As chipset-level security improves, enable it
  5. Deploy HIPS as part of every guest OS.

Each of these approaches has its own set of pros and cons, and quite honestly, we’ll probably see people doing all five at the same time…layered defense-in-depth.  Ugh.

What was really annoying to me, however, is that it really seemed that in many cases, the VM solution providers were again expecting that we’d just be forced to bolt security ON TO our VM environments instead of BAKING IT IN.  This was looking like a sad reality.

I’ll get into details in another post about Determina’s solution, but I am encouraged by VMware’s acquisition of a security company which will be integrated into their underlying solution set.  I don’t think it’s a panacea, but quite honestly, the roadmap for solving these sorts of problems was blowing in the wind for VMware up until this point.

"Further, by
using the LiveShield capabilities, the ESX hypervisor could be used
‘introspectively’ to shield the hypervisor and guest OSs from attacks
on known vulnerabilities in situations where these have not yet been
patched. Both Determina technologies are fairly OS- and
application-neutral, providing VMware with an easy way to protect ESX
as well as Linux- and Windows-based guest OSs."

Quite honestly, I hoped they would have bought Blue Lane since the ESX Hypervisor is now going to be a crowded space for them…

We’ll see how well this gets integrated, but I smiled when I read this.

Oh, and before anyone gets excited, I’m sure it’s going to be 100% undetectable! 😉

/Hoff

Secure Services in the Cloud (SSaaS/Web2.0) – InternetOS Service Layers

July 13th, 2007 2 comments

The last few days of activity involving Google and Microsoft have really catalyzed some thinking and demonstrated some very intriguing indicators as to how the delivery of applications and services is dramatically evolving. 

I don’t mean the warm and fuzzy marketing fluff.  I mean some real anchor technology investments by the big-boys putting their respective stakes in the ground as they invest hugely in redefining their business models to setup for the future.

Enterprises large and small are really starting to pay attention to the difference between infrastructure and architecture and this has a dramatic effect on the service providers and supply chain who interact with them.

It’s become quite obvious that there is huge business value associated with divorcing the need for "IT" to focus on physically instantiating and locating "applications" on "boxes" and instead  delivering "services" with the Internet/network as the virtualized delivery mechanism.

Google v. Microsoft – Let’s Get Ready to Rumble!

My last few posts on Google’s move to securely deliver a variety of applications and services represent the uplift of the "traditional" perspective of backoffice SaaS offerings such as Salesforce.com, but they also highlight the migration of desktop applications and utility services to the "cloud."

This is really executing on the thin-client, Internet-centric vision from back in the day o’ the bubble, when we saw a ton of Internet-borne services such as storage, backup, etc. using the "InternetOS" as the canvas for service.

So we’ve talked about Google.  I maintain that their strategy is to ultimately take on Microsoft — including backoffice, utility and desktop applications.  So let’s look @ what the kids from Redmond are up to.

What Microsoft is working toward with its vision of a CloudOS was just recently expounded upon by one Mr. Ballmer.

Not wanting to lose mindshare or share of wallet, Microsoft is maneuvering to give the customer control over how they want to use applications and more importantly how they might be delivered.  Microsoft Live bridges the gap between the traditional desktop and puts that capability into the "cloud."

Let’s explore that a little:

In addition to making available its existing services, such as mail and instant messaging, Microsoft also will create core infrastructure services, such as storage and alerts, that developers can build on top of. It’s a set of capabilities that have been referred to as a "Cloud OS," though it’s not a term Microsoft likes to use publicly.

Late last month, Microsoft introduced two new Windows Live Services, one for sharing photos and the other for all types of files. While those services are being offered directly by Microsoft today, they represent the kinds of things that Microsoft is now promising will be also made available to developers.

Among the other application and infrastructure components Microsoft plans to open are its systems for alerts, contact management, communications (mail and messenger) and authentication.

As it works to build out the underlying core services, Microsoft is also offering up applications to partners, such as Windows Live Hotmail, Windows Live Messenger and the Spaces blogging tool.

Combine the advent of "thinner" endpoints (read: mobility products) with high-speed, lower-latency connectivity and we can see why this model is attractive and viable.  I think this battle is heating up and the consumer will benefit.

A Practical Example of SaaS/InternetOS Today?

So if we take a step back from Google and Microsoft for a minute, let’s take a snapshot of how one might compose, provision, and deploy applications and data as a service using a similar model over the Internet with tools other than Live or Google Gears.

Let me give you a real-world example — deliverable today — of this capability with a functional articulation of this strategy: on-demand services and applications provided via virtualized datacenter delivery architectures using the Internet as the transport.  I’m going to use a mashup of two technologies: Yahoo Pipes and 3Tera’s AppLogic.

Yahoo Pipes is "…an interactive data aggregator and manipulator that lets you mashup your favorite online data sources."  Assuming you have data from various sources that you want to present, an application environment such as Pipes will allow you to dynamically access, transform and present this information any way you see fit.

This means that you can create what amounts to applications and services on demand.

Let’s agree, however, that while you have the data integration/presentation layer, in many cases you would traditionally require a complex collection of infrastructure in which this source data is housed, accessed, maintained and secured.

However, rather than worry about where and how the infrastructure is physically located, let’s use the notion of utility/grid computing to dynamically make available an on-demand architecture that is modular, reusable and flexible enough to make this service delivery a reality — using the Internet as a transport.

Enter 3Tera’s AppLogic:

3Tera’s AppLogic is used by hosting providers to offer true utility computing. You get all the control of having your own virtual datacenter, but without the need to operate a single server.

  • Deploy and operate applications in your own virtual private datacenter
  • Set up infrastructure, deploy apps and manage operations with just a browser
  • Scale from a fraction of a server to hundreds of servers in days
  • Deploy and run any Linux software without modifications
  • Get your life back: no more late night rushes to replace failed equipment
In fact, BT is using them as part of the 21CN project which I’ve written about many times before.

So check out this vision, assuming the InternetOS as a transport.  It’s the drag-and-drop, point-and-click Metaverse of virtualized application and data combined with on-demand infrastructure.

You first define the logical service composition and provisioning through 3Tera with a visual drag-and-drop canvas, defining firewalls, load-balancers, switches, web servers, app servers, databases, etc.  Then you click the "Go" button.  AppLogic provisions the entire thing for you without you even necessarily knowing where these assets are.

Then, use something like Pipes to articulate how data sources can be accessed, consumed and transformed to deliver the requisite results.  All over the Internet, transparently and securely.
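
To make the idea a bit more tangible, here’s a purely hypothetical sketch of what such a composition might look like if you expressed it programmatically.  This is neither AppLogic’s nor Pipes’ actual syntax; every name below is invented.

```python
# Hypothetical "describe the topology, then provision it" model
INFRASTRUCTURE = {
    "firewall":      {"type": "firewall",      "allow": ["tcp/80", "tcp/443"]},
    "load_balancer": {"type": "load_balancer", "backends": ["web1", "web2"]},
    "web1":          {"type": "web_server",    "image": "linux-apache"},
    "web2":          {"type": "web_server",    "image": "linux-apache"},
    "app":           {"type": "app_server",    "image": "linux-tomcat"},
    "db":            {"type": "database",      "image": "linux-mysql"},
}

PIPELINE = [                                    # the Pipes-style data layer on top
    {"fetch":  "https://example.com/feed.xml"},
    {"filter": "item.category == 'security'"},
    {"render": "rss"},
]

def provision(infrastructure: dict) -> None:
    """Stand-in for the 'Go' button: a real provider would place each
    component somewhere in its grid; the caller never learns where."""
    for name, spec in infrastructure.items():
        print(f"provisioning {spec['type']:<13} as '{name}'")

if __name__ == "__main__":
    provision(INFRASTRUCTURE)
    print(f"data pipeline with {len(PIPELINE)} stages wired on top")
```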

Very cool stuff.

Here are some screen-caps of Pipes and 3Tera.


I see your “More on Data Centralization” & Raise You One “Need to Conduct Business…”

June 19th, 2007 1 comment

Bejtlich continues to make excellent points regarding his view on centralizing data within an enterprise.  He cites the increase in litigation stemming from inadequate eDiscovery investment and the mounting pressures of compliance.

All good points, but I’d like to bring the discussion back to the point I was trying to make initially and here’s the perfect perch from which to do it.  Richard wrote:

"Christofer Hoff used the term "agile" several times in his good blog post. I think "agile" is going to be thrown out the window when corporate management is staring at $50,000 per day fines for not being able to produce relevant documents during ediscovery. When a company loses a multi-million dollar lawsuit because the judge issued an adverse inference jury instruction, I guarantee data will be centralized from then forward."

…how about when a company loses the ability to efficiently and effectively conduct business because they spend so much money and time on "insurance policies" against which a balanced view of risk has not been applied?  Oh, wait.  That’s called "information security." 😉

Fear.  Uncertainty.  Doubt.  Compliance.  Ugh.  Rinse, lather, repeat.

I’m not taking what you’re proposing lightly, Richard, but the notion of agility, time to market, cost transformation and enhancing customer experience are being tossed out with the bathwater here. 

Believe it or not, we have to actually have a sustainable business in order to "secure" it. 

It’s fine to be advocating Google Gears and all these other Web 2.0 applications and systems. There’s one force in the universe that can slap all that down, and that’s corporate lawyers. If you disagree, whom do you think has a greater influence on the CEO: the CTO or the corporate lawyer? When the lawyer is backed by stories of lost cases, fines, and maybe jail time, what hope does a CTO with plans for "agility" have?

But going back to one of your own mantras, if you bake security into your processes and SDLC in the first place, then the CEO/CTO/CIO and legal counsel will already have assessed the company’s position and balanced the risk scorecard to ensure that they have exercised the appropriate due care.

The uncertainty and horrors associated with the threat of punitive legal impacts always have been, are, and always will be there…and they will continue to be exploited by those in the security industry to buy more stuff and justify a paycheck.

Given the business we’re in, it’s not a surprise that the perspective presented is very, very siloed and focused on the potential "security" outcomes of what happens if we don’t start centralizing data now; everything looks like a nail when you’re a hammer.

However, you still didn’t address the other two critical points I made previously:

  1. The underlying technology associated with decentralization of data and applications is at complete odds with the "curl up in a fetal position and wait for the sky to fall" approach
  2. The only reason we have security in the first place is to ensure survivability and availability of service — and to make sure that we stay in business.  That isn’t really a technical issue at all; it’s a business one.  I find it interesting that you referenced this issue as the CTO’s problem and not the CIO’s.

As to your last point, I’m convinced that GE — with the resources, money and time it has to bear on a problem — can centralize its data and resources…they can probably get cold fusion out of a tuna fish can and a blow pop, but for the rest of us on planet Earth, we’re going to have to struggle along trying to cram all the ‘agility’ and enablement we’ve just spent the last 10 years giving to users back into the compliance bottle.

/Hoff