Archive

Archive for the ‘Vulnerability Assessment / Vulnerability Management’ Category

Patching the (Hypervisor) Platform: How Do You Manage Risk?

April 12th, 2010 7 comments

Hi. Me again.

In 2008 I wrote a blog titled “Patching the Cloud” which I followed up with material examples in 2009 in another titled “Redux: Patching the Cloud.”

These blogs focused mainly on virtualization-powered IaaS/PaaS offerings and whilst they targeted “Cloud Computing,” they applied equally to the heavily virtualized enterprise.  To that point, I wrote another in 2008 titled “On Patch Tuesdays For Virtualization Platforms.”

The operational impacts of managing change control, vulnerability management and threat mitigation have always intrigued me, especially at scale.

I was reminded this morning of the importance of the question posed above as VMware released a series of security advisories detailing ten vulnerabilities across many products, some of which are remotely exploitable. While security vulnerabilities in hypervisors are not new, it’s unclear to me how many heavily-virtualized enterprises or Cloud providers actually deal with what it means to patch this critical layer of infrastructure.

Once virtualized, we expect/assume that VMs and the guest OSes within them should operate with functional equivalence when compared to non-virtualized instances. We have, however, seen that this is not the case. It’s rare, but it happens that OSes and applications, once virtualized, suffer from issues that cause faults in the underlying virtualization platform itself.

So here’s the $64,000 question – feel free to answer anonymously:

While virtualization is meant to effectively isolate the hardware from the resources atop it, the VMM/Hypervisor itself maintains a delicate position arbitrating this abstraction.  When the VMM/Hypervisor needs patching, how do you regression test the impact across all your VM images (across test/dev, production, etc.)?  More importantly, how are you assessing/measuring compound risk across shared/multi-tenant environments with respect to patching and its impact?

/Hoff

P.S. It occurs to me that, having written last night’s blog on ‘high assurance (read: TPM-enabled)’ virtualization/cloud environments with respect to change control, the reference images for trusted launch environments would be impacted by patches like this. How are we going to scale this from a management perspective?


Extending the Concept: A Security API for Cloud Stacks

July 24th, 2009 7 comments

Please See the follow-on to this post: http://www.rationalsurvivability.com/blog/?p=1276

Update: Wow, did this ever stir up an amazing set of commentary on Twitter. No hash tag, unfortunately, but comments from all angles.  Most of the SecTwits dropped into “fire in the hole” mode, but it’s understandable.  Thank you @rybolov (who was there when I presented this to the gub’mint) and @shrdlu (who was the voice of, gulp, reason) 😉

The Audit, Assertion, Assessment, and Assurance API (A6) (Title credited to @CSOAndy)

It started innocently enough with a post I made on the crushing weight of companies executing “right to audit clauses” in their contracts.  Craig Balding followed that one up with an excellent post of his own.

This led to Craig’s excellent idea around solving a problem related to not being able to perform network-based vulnerability scans of Cloud-hosted infrastructure due to contractual and technical concerns related to multi-tenancy.  Specifically, Craig lobbied to create an open standard for vulnerability scanning APIs (a scenario I’ve been using in my talks for quite some time to illustrate challenges in ToS.)  It’s an excellent idea.

So I propose — as I did to a group of concerned government organizations yesterday — that we take this concept a step further, beyond just “vulnerability scanning.”

Let’s solve BOTH of the challenges above with one solution.

Specifically, let’s take the capabilities of something like SCAP and embed a standardized and open API layer into each IaaS, PaaS and SaaS offering (see the API blocks in the diagram below) to provide not only a standardized way of scanning for network vulnerabilities, but also configuration management, asset management, patch remediation, compliance, etc.

Further (HT to @davidoberry who reminded me about my posts on the topic) we could use TCG IF-MAP as a communications protocol for telemetry.

[Diagram: A6 API blocks embedded across the IaaS/PaaS/SaaS stack (mappingmetal_compliance.044)]
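To make the concept a bit more concrete, here's a minimal sketch of what a single “A6-style” provider query might look like. Everything in it (the endpoint path, the bearer-token scheme, the response fields) is invented for illustration; no such standard API exists yet, which is exactly the point.

```python
# Hypothetical sketch of an "A6-style" provider query. The endpoint path,
# token scheme, and response fields are assumptions for illustration only.
import json
import urllib.request

A6_BASE = "https://provider.example.com/a6/v1"  # hypothetical provider endpoint


def get_assessment(tenant_id: str, token: str, check: str = "scap-oval") -> dict:
    """Ask the provider to assert the current state of a named control set
    (e.g. an SCAP/OVAL evaluation) for a single tenant's assets."""
    req = urllib.request.Request(
        f"{A6_BASE}/tenants/{tenant_id}/assessments?check={check}",
        headers={"Authorization": f"Bearer {token}",
                 "Accept": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Hypothetical response: {"tenant": "acme", "check": "scap-oval",
    #                         "passed": 412, "failed": 9, "asserted_at": "..."}
    print(get_assessment("acme", token="example-token"))
```

The particular shape of the call doesn't matter; what matters is that the provider, not the tenant, asserts the state of the controls, and does so over an open, authenticated interface.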

This way you win two ways: automated audit and security management capability for the customer/consumer, and a streamlined, cost-effective, and responsive way of automating the validation of said controls in relation to compliance, SLA and legal requirements for service providers.

Since we just saw a story today titled “Feds May Come Up With Cloud Security Standards,” why not leverage a standard they already have in SCAP and get even better bang for the buck from a security perspective?  This concept extends well beyond the public sector and it doesn’t have to be SCAP, but it seems like a good example.

Of course we would engineer in authentication/authorization to interface via the APIs, and then you could essentially get ISVs who already support things like SCAP, etc. to provide the capability in their offerings — physical or virtual — to enable it.

We’re not reinventing the wheel and we have lots of technology and standardized solutions we can already use to engineer into the stack.

Whaddya thunk?

/Hoff


Cloud Security Will NOT Supplant Patching…Qualys Has Its Head Up Its SaaS

May 4th, 2009 4 comments

“Cloud Security Will Supplant Patching…”

What a sexy-sounding claim in this Network World piece which is titled with the opposite suggestion from the title of my blog post.  We will still need patching.  I agree, however, that how it’s delivered needs to change.

Before we get to the issues I have, I do want to point out that the article — despite its title — is focused on the newest release of Qualys’ Laws of Vulnerability 2.0 report (pdf), which is the latest version of the Half Lives of Vulnerability study that my friend Gerhard Eschelbeck started some years ago.

In the report, the new author, Qualys’ current CTO Wolfgang Kandek, delivers a really disappointing statistic:

In five years, the average time taken by companies to patch vulnerabilities had decreased by only one day, from 60 days to 59 days, at a time when the number of flaws and the speed at which they are being exploited has accelerated from weeks to, in some cases, days. During the same period, the number of IPs scanned on an anonymous basis by the company from its customer base had increased from 3 million to a statistically significant 80 million, with the number of vulnerabilities uncovered rocketing from 3 million to 680 million. Of the latter, 72 million were rated by Qualys as being of ‘critical’ severity.
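To put those numbers in perspective, a quick bit of back-of-the-envelope math using only the figures quoted above:

```python
# Back-of-the-envelope math on the figures quoted from the report.
patch_window_then, patch_window_now = 60, 59        # days to patch, five years apart
improvement = (patch_window_then - patch_window_now) / patch_window_then

critical = 72_000_000      # findings rated 'critical'
total = 680_000_000        # total vulnerabilities uncovered

print(f"Patch-window improvement over five years: {improvement:.1%}")   # ~1.7%
print(f"Share of findings rated 'critical':       {critical / total:.1%}")  # ~10.6%
```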

That lack of progress is sobering, right? So far I’m intrigued, but then that article goes off the reservation by quoting Wolfgang as saying:

Taken together, the statistics suggested that a new solution would be needed in order to make further improvement with the only likely candidate on the horizon being cloud computing. “We believe that cloud security providers can be held to a higher standard in terms of security,” said Kandek. “Cloud vendors can come in and do a much better job.”  Unlike corporate admins for whom patching was a sometimes complex burden, in a cloud environment, patching applications would be more technically predictable – the small risk of ‘breaking’ an application after patching it would be nearly removed, he said.

Qualys has its head up its SaaS.  I mean that in the most polite of ways… 😉

Let me make a couple of important observations on the heels of those I’ve already made and an excellent one Lori MacVittie made today in her post titled “The Real Meaning Of Cloud Security Revealed”:

  1. I’d like a better definition of the context of “patching applications.”  I don’t know whether Kandek means applications in an enterprise, those hosted by a Cloud provider, or both.
  2. There’s a difference between providing security services via the Cloud versus securing Cloud and its application/data.  The quotes above mix the issues.  A “Cloud Security” provider like Qualys can absolutely provide excellent solutions to many of the problems we have today associated with point product deployments of security functions across the enterprise. Anti-spam and vulnerability management are excellent examples.  What that does not mean is that the applications that run in an enterprise can be delivered and deployed more “securely” thanks to the efforts of the same providers.
  3. To that point, the Cloud is not all SaaS-based.  Not every application is going to be or can be moved to a SaaS.  Patching legacy applications (or hosting them for that matter) can be extremely difficult.  Virtualization certainly comes into play here, but by definition, that’s an IaaS/PaaS opportunity, not a SaaS one.
  4. While SaaS providers who do “own the entire stack” are in a better position through consolidated multi-tenancy to transfer the responsibility of patching “their” infrastructure and application(s) on your behalf, it doesn’t really mean they do it any better on an application-by-application basis.  If a SaaS provider only has 1-2 apps to manage (with lots of customers) versus an enterprise with hundreds (and lots of customers), the “quality” measurements as they relate to defect management (from any perspective) would likely look better were you the competent SaaS vendor mentioned in this article.  You can see my point here.
  5. If you add in PaaS and IaaS as opposed to simply SaaS (as managed by a third party), then the statement that “…patching applications would be more technically predictable – the small risk of ‘breaking’ an application after patching it would be nearly removed” is false.

It’s really, really important to compare apples to apples here. Qualys is a fantastic company with a visionary leader in Philippe Courtot.  I was an early adopter of his SaaS service.  I was on his Customer Advisory Board.  However, as I pointed out to him at the Jericho event where I was a panelist, delivering a security function via the Cloud is not the same thing as securing it, and SaaS is merely one piece of the puzzle.

I wrote a couple of other blogs about this topic:

/Hoff

Patching The Cloud?

October 25th, 2008 2 comments

Just to confuse you, as a lead-in on this topic, please first read my recent rant titled “Will You All Please Shut-Up About Securing THE Cloud…NO SUCH THING…”

Let’s say the grand vision comes to fruition where enterprises begin to outsource to a cloud operator the hosting of previously internal complex mission-critical enterprise applications.

After all, that’s what we’re being told is the Next Big Thing™

In this version of the universe, the enterprise no longer owns the operational elements involved in making the infrastructure tick — the lights blink, packets get delivered, data is served up and it costs less for what is advertised as the same, if not better, reliability, performance and resilience.

Oh yes, “Security” is magically provided as an integrated functional delivery of service.

Tastes great, less datacenter filling.

So, in a corner case example, what does a boundary condition like the out-of-cycle patch release of MS08-067 mean when your infrastructure and applications are no longer yours to manage, and the ownership of the “stack” disintermediates you from being able to control how, when or even if vulnerability remediation anywhere in the stack (from the network on up to the app) is assessed, tested or deployed?

Your application is sitting atop an operating system and underlying infrastructure that is managed by the cloud operator.  This “datacenter OS” may not be virtualized or could actually be sitting atop a hypervisor which is integrated into the operating system (Xen, Hyper-V, KVM) or perhaps reliant upon a third party solution such as VMware.  The notion of cloud implies shared infrastructure and hosting platforms, although it does not imply virtualization.

A patch affecting any one of the infrastructure elements could cause a ripple effect on your hosted applications.  Without understanding the underlying infrastructure dependencies in this model, how does one assess risk and determine what any patch might do up or down the stack?  How does an enterprise that has no insight into the “black box” model of the cloud operator set up a dev/test/staging environment that acceptably mimics the operating environment?
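To make the dependency problem concrete, here's a toy sketch of the question above; every layer name and tenant in it is hypothetical:

```python
# Toy model of the dependency question above: given a patch to one layer of
# the provider's stack, which hosted applications inherit the change and need
# regression testing? Layer names and tenants are hypothetical.
STACK = {
    "app-acme-billing":   ["guest-os-linux", "hypervisor", "hardware"],
    "app-acme-crm":       ["guest-os-windows", "hypervisor", "hardware"],
    "app-initech-portal": ["guest-os-linux", "hypervisor", "hardware"],
}


def impacted_apps(patched_layer: str) -> list:
    """Return every hosted application that sits on top of the patched layer."""
    return [app for app, deps in STACK.items() if patched_layer in deps]


if __name__ == "__main__":
    # A hypervisor or "CloudOS" patch touches every tenant at once -- the
    # compound, multi-tenant risk the questions above are getting at.
    print(impacted_apps("hypervisor"))        # all three applications
    print(impacted_apps("guest-os-windows"))  # only app-acme-crm
```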

What happens when the underlying CloudOS gets patched (or needs to be) and blows your applications/VMs sky-high (in the PaaS/IaaS models)?

How does one negotiate the process for determining when and how a patch is deployed?  Where does the cloud operator draw the line?   If the cloud fabric is democratized across constituent enterprise customers, however isolated, how does a cloud provider ensure consistent distributed service?  If an application can be dynamically provisioned anywhere in the fabric, consistency of the platform is critical.

I hate to get all “Star Trek II: The Wrath of Khan” on you, but as Spock said, “The needs of the many outweigh the needs of the few.”  How, when and if a provider might roll a patch has a broad impact across the entire customer base — as it has had in the hosting markets for years — but again the types of applications we are talking about here are far different than what we’re used to today, where the applications and the infrastructure are inextricably joined at the hip.

Hosting/SaaS providers today can scale because of one thing: standardization.  Certainly COTS applications can be easily built on standardized tiered models for compute, storage and networking, but again, we’re being told that enterprises will move all their applications to the cloud, and that includes bespoke creations.

If that’s not the case, and we end up with still having to host some apps internally and some apps in the cloud, we’ve gained nothing (from a cost reduction perspective) because we won’t be able to eliminate the infrastructure needed to support either.

Taking it one step further, what happens if there is standardization on the underlying Cloud platform (CloudOS?) and one provider “patches” or updates their Cloud offering but another does not or cannot? If we ultimately talk about VM portability between providers running the “same” platform, what will this mean?  Will things break horribly or be instantiated in an insecure manner?

What about it?  Do you see cloud computing as just an extension of SaaS and hosting of today?  Do you see dramatically different issues arise based upon the types of information and applications that are being described in this model?  We’ve seen issues such as data ownership, privacy and portability bubble up, but these are much more basic operational questions.

This is obviously a loaded set of questions for which I have much to say — some of which is obvious — but I’d like to start a discussion, not a rant.

/Hoff

*This little ditty was inspired by a Twitter exchange with Bob Rudis who was complaining that Amazon’s EC2 service did not have the MS08-067 patch built into the AMI…Check out this forum entry from Amazon, however, as it’s rather apropos regarding the very subject of this blog…

On Patch Tuesdays for Virtualization Platforms…

January 14th, 2008 2 comments

In December I made note of an interesting post on the virtualization.info blog titled "Patch Tuesday for VMware."  This issue popped up today in conversation with a customer and I thought to bubble it back up for discussion.

The post focused on some work done by Ronald Oglesby and Dan Pianfetti from GlassHouse Technologies regarding the volume, frequency and distribution of patches across VMware’s ESX platform.

When you combine Ronald and Dan’s data with Kris Lamb’s from ISS that I wrote about a few months ago, it’s quite interesting.

The assertion that Ronald/Dan are making in their post is that platforms like VMware’s ESX have to date required just as much care and feeding from a patching/vulnerability management perspective as a common operating system such as a Windows Server:

So why make this chart and look at the time between patches? Let’s take a hypothetical server built on July 2nd of 2007, 5 months ago almost exactly. Since being built on that day and put into production that server would have been put into maintenance mode and patched/updated eight times. That’s right eight (8) times in 5 months. How did this happen? Let’s look at the following timeline:

Maybe it’s time to slow down and look at this as a QA issue? Maybe it’s time to stop thinking about these platforms as rock solid, few moving parts systems? Maybe it’s better for us not to draw attention to it, and instead let it play out and the markets decide whether all this patching is a good thing or not. Obviously patching is a necessary evil, and maybe because we are so used to it in the Windows world, we have ignored this so far. But a patch every 18.75 days for our "hypothetical" server is a bit much, don’t you think?

I think this may come as a shock to some who have long held the belief that bare-metal, Type 1 virtualization platforms require little or no patching and that because of this, the "security" and availability of virtualized hosts was greater than that of their non-virtualized counterparts.

The reality of the situation and the effort and potential downtime (despite tools that help) have led to unexpected service level deviance, hidden costs and latent insecurity in deployed virtualized environments.  I think Ronald/Dan said it best:

If a client is buying into the idea of server virtualization as a piece of infrastructure (like a SAN or a switch) only to see the types of patching we see in Windows, they are going to get smacked in the face with the reality that these are SERVERS. The reality that the vendors are sticking so much into the OS that patches are going to happen just as often as with Windows Servers? Or, if the client believes the stability/rock solidness and skips a majority of general patches, they wind up with goofy time issues or other problems with iSCSI, until they catch up.

As a counterpoint to this argument I had hoped to convince Kris Lamb to extend his patch analysis of VMware’s releases and see if he could tell how many patched vulnerabilities existed in the service console (the big ol’ fat Linux OS globbed onto the side of ESX) versus the actual VMM implementation itself.  For some reason, he’s busy with his day job. 😉 This is really an important data point.  I guess I’ll have to do that myself ;(

The reason why this is important is exactly the reason that you’re seeing VMware and other industry virtualization players moving to embedded hypervisors; skinnying down the VMMs to yield less code, less attack surface and hopefully fewer vulnerabilities.  So, to be fair, the evolution of the virtualization platforms is really on-par with what one ought to expect with a technology that’s still fairly nascent.

In fact, that’s exactly what Nand Mulchandani, VMware’s Sr. Director of Security Product Management & Marketing, said in response to Ronald/Dan’s post:

As the article points out, "patching is a necessary evil" – and that the existence of ESX patches should not come as a shock to anyone. So let’s talk about the sinister plan behind the increase in ESX patches. Fortunately, the answer is in the article itself. Our patches contain a lot of different things, from hardware compatibility updates, feature enhancements, security fixes, etc.

We also want customers to view ESX as an appliance – or more accurately, as a product that has appliance-like characteristics.

Speaking of appliances, another thing to consider is that we are now offering ESX in a number of different form-factors, including the brand new ESX Server 3i. 3i will have significantly different patch characteristics – it does not have a Console OS and has a different patching mechanism than ESX that will be very attractive to customers.

I see this as a reasonable and rational response to the issue, but it does point out that whether you use VMware or any other vendor’s virtualization platform, you should make sure to recognize that patching and vulnerability management of the underlying virtualization platforms is another — if not really critical — issue that will require operational attention and potential cost allocation.

/Hoff

P.S. Mike D. does a great job of stacking up other vendors in this vein, such as Microsoft, Virtual Iron, and SWSoft.

Virtualization Threat Surface Expands: We Weren’t Kidding…

September 21st, 2007 No comments

First the Virtualization Security Public Service Announcement:

By now you’ve no doubt heard that Ryan Smith and Neel Mehta from IBM/ISS X-Force have discovered vulnerabilities in VMware’s DHCP implementation that could allow for "…specially crafted packets to gain system-level privileges" and allow an attacker to execute arbitrary code on the system with elevated privileges thereby gaining control of the system.   

Further, Dark Reading details that Rafal Wojtczuk (whose last name’s spelling is a vulnerability in and of itself!) from McAfee discovered the following vulnerability:

A vulnerability that could allow a guest operating system user with administrative privileges to cause memory corruption in a host process, and potentially execute arbitrary code on the host. Another fix addresses a denial-of-service vulnerability that could allow a guest operating system to cause a host process to become unresponsive or crash.

…and yet another from the Goodfellas Security Research Team:

An additional update, according to the advisory, addresses a security vulnerability that could allow a remote hacker to exploit the library file IntraProcessLogging.dll to overwrite files in a system. It also fixes a similar bug in the library file vielib.dll.

It is important to note that these vulnerabilities have been mitigated by VMware at the time of this announcement.  Further information regarding mitigation of all of these vulnerabilities can be found here.


You can find details regarding these vulnerabilities via the National Vulnerability Database here:

CVE-2007-0061 – The DHCP server in EMC VMware Workstation before 5.5.5 Build 56455 and 6.x before 6.0.1 Build 55017, Player before 1.0.5 Build 56455 and Player 2 before 2.0.1 Build 55017, ACE before 1.0.3 Build 54075 and ACE 2 before 2.0.1 Build 55017, and Server before 1.0.4 Build 56528 allows remote attackers to execute arbitrary code via a malformed packet that triggers "corrupt stack memory."

CVE-2007-0062 – Integer overflow in the DHCP server in EMC VMware Workstation before 5.5.5 Build 56455 and 6.x before 6.0.1 Build 55017, Player before 1.0.5 Build 56455 and Player 2 before 2.0.1 Build 55017, ACE before 1.0.3 Build 54075 and ACE 2 before 2.0.1 Build 55017, and Server before 1.0.4 Build 56528 allows remote attackers to execute arbitrary code via a malformed DHCP packet that triggers a stack-based buffer overflow.

CVE-2007-0063 – Integer underflow in the DHCP server in EMC VMware Workstation before 5.5.5 Build 56455 and 6.x before 6.0.1 Build 55017, Player before 1.0.5 Build 56455 and Player 2 before 2.0.1 Build 55017, ACE before 1.0.3 Build 54075 and ACE 2 before 2.0.1 Build 55017, and Server before 1.0.4 Build 56528 allows remote attackers to execute arbitrary code via a malformed DHCP packet that triggers a stack-based buffer overflow.

CVE-2007-4496 – Unspecified vulnerability in EMC VMware Workstation before 5.5.5 Build 56455 and 6.x before 6.0.1 Build 55017, Player before 1.0.5 Build 56455 and Player 2 before 2.0.1 Build 55017, ACE before 1.0.3 Build 54075 and ACE 2 before 2.0.1 Build 55017, and Server before 1.0.4 Build 56528 allows authenticated users with administrative privileges on a guest operating system to corrupt memory and possibly execute arbitrary code on the host operating system via unspecified vectors.

CVE-2007-4155 – Absolute path traversal vulnerability in a certain ActiveX control in vielib.dll in EMC VMware 6.0.0 allows remote attackers to execute arbitrary local programs via a full pathname in the first two arguments to the (1) CreateProcess or (2) CreateProcessEx method.
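If you want to sanity-check whether a given install falls into the affected ranges, the advisories boil down to simple build-number comparisons. A rough sketch follows; the product keys and input format are my own invention, while the build numbers come straight from the CVE text above.

```python
# Rough sketch: decide whether an installed VMware product build predates the
# first fixed builds named in the advisories above. Product keys and the shape
# of the "installed" tuple are assumptions for illustration.
FIXED_BUILDS = {
    "workstation-5.5": (5, 5, 5, 56455),
    "workstation-6.0": (6, 0, 1, 55017),
    "player-1.0":      (1, 0, 5, 56455),
    "player-2.0":      (2, 0, 1, 55017),
    "ace-1.0":         (1, 0, 3, 54075),
    "ace-2.0":         (2, 0, 1, 55017),
    "server-1.0":      (1, 0, 4, 56528),
}


def is_vulnerable(product: str, installed: tuple) -> bool:
    """True if the installed (major, minor, patch, build) tuple is older than
    the first fixed release for that product line."""
    return installed < FIXED_BUILDS[product]  # tuple comparison is lexicographic


if __name__ == "__main__":
    print(is_vulnerable("workstation-6.0", (6, 0, 0, 45731)))  # True: pre-fix build
    print(is_vulnerable("server-1.0", (1, 0, 4, 56528)))       # False: at the fixed build
```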


I am happy to see that VMware moved on these vulnerabilities (I do not have the timeframe of this disclosure and mitigation available.)  I am convinced that their security team and product managers truly take this sort of thing seriously.

However, this just goes to show you that as virtualization platforms move further into mainstream adoption, exploitable vulnerabilities will continue to follow as those who follow the money begin to pick up the scent.

This is another phrase that’s going to make me a victim of my own Captain Obvious Award, but it seems like we’ve been fighting this premise for too long now.  I recognize that this is not the first set of security vulnerabilities we’ve seen from VMware, but I’m going to highlight them for a reason.

It seems that due to a lack of well-articulated vulnerabilities that extended beyond theoretical assertions or PoCs, the sensationalism of research such as Blue Pill has desensitized folks to the emerging realities of virtualization platform attack surfaces.

I’ve blogged about this over the last year and a half, with the latest found here and an interview here.  It’s really just an awareness campaign.  One I’m more than willing to wage given the stakes.  If that makes me the noisy canary in the coal mine, so be it.

These very real examples are why I feel it’s ludicrous to take seriously any comments that suggest by generalization that virtualized environments are "more secure" by design; it’s software, just like anything else, and it’s going to be vulnerable.

I’m not trying to signal that the sky is falling, just the opposite.  I do, however, want to make sure we bring these issues to your attention.

Happy Patching!

/Hoff

Take5 (Episode #5) – Five Questions for Allwyn Sequeira, SVP of Product Operations, Blue Lane

August 21st, 2007 18 comments

This fifth episode of Take5 interviews Allwyn Sequeira, SVP of Product Operations for Blue Lane.  

First a little background on the victim:

Allwyn Sequeira is Senior Vice President of Product Operations at Blue Lane Technologies, responsible for managing the overall product life cycle, from concept through research, development and test, to delivery and support. He was previously the Senior Vice President of Technology and Operations at netVmg, an intelligent route control company acquired by InterNap in 2003, where he was responsible for the architecture, development and deployment of the industry-leading flow control platform. Prior to netVmg, he was founder, Chief Technology Officer and Executive Vice President of Products and Operations at First Virtual Corporation (FVC), a multi-service networking company that had a successful IPO in 1998. Prior to FVC, he was Director of the Network Management Business Unit at Ungermann-Bass, the first independent local area network company. Mr. Sequeira has previously served as a Director on the boards of FVC and netVmg.

Mr. Sequeira started his career as a software developer at HP in the Information Networks Division, working on the development of TCP/IP protocols. During the early 1980’s, he worked on the CSNET project, an early realization of the Internet concept. Mr. Sequeira is a recognized expert in data networking, with twenty five years of experience in the industry, and has been a featured speaker at industry leading forums like Networld+Interop, Next Generation Networks, ISP Con and RSA Conference.

Mr. Sequeira holds a Bachelor of Technology degree in Computer Science from the Indian Institute of Technology, Bombay, and a Master of Science in Computer Science from the University of Wisconsin, Madison.

Allwyn, despite all this good schoolin’ forgot to send me a picture, so he gets what he deserves 😉
(Ed: Yes, those of you quick enough were smart enough to detect that the previous picture was of Brad Pitt and not Allwyn.  I apologize for the unnecessary froth-factor.)

 Questions:

1) Blue Lane has two distinct product lines, VirtualShield and PatchPoint.  The former is a software-based solution which provides protection for VMware Infrastructure 3 virtual servers as an ESX VM plug-in whilst the latter offers a network appliance-based solution for physical servers.  How are these products different than either virtual switch IPSs like Virtual Iron or in-line network-based IPSs?

IPS technologies have been charged with the incredible mission of trying to protect everything from anything.  Overall they’ve done well, considering how much the perimeter of the network has changed and how sophisticated hackers have become. Much of their core technology, however, was relevant and useful when hackers could be easily identified by their signatures. As many have proclaimed, those days are coming to an end.

A defense department official recently quipped, "If you offer the same protection for your toothbrushes and your diamonds you are bound to lose fewer toothbrushes and more diamonds."  We think that data center security similarly demands specialized solutions.  The concept of an enterprise network has become so ambiguous when it comes to endpoints and devices and supply chain partners, etc. that we think it’s time to think more realistically in terms of trusted, yet highly available zones within the data center.

It seems clear at this point that different parts of the network need very different security capabilities.  Servers, for example need highly accurate solutions that do not block or impede good traffic and can correct bad traffic, especially when it comes to closing network-facing vulnerability windows.  They need to maintain availability with minimal latency for starters; and that has been a sort of Achilles heel for signature-based approaches.  Of course, signatures also bring considerable management burdens over and beyond their security capabilities.

No one is advocating turning off the IPS, but rather approaching servers with more specialized capabilities.  We started focusing on servers years ago and established very sophisticated application and protocol intelligence, which has allowed us to correct traffic inline without the noise, suspense and delay that general purpose network security appliance users have come to expect.

IPS solutions depend on deep packet inspection typically at the perimeter based on regexp pattern matching for exploits.  Emerging challenges with this approach have made alert and block modes absolutely necessary as most IPS solutions aren’t accurate enough to be trusted in full library block. 

Blue Lane uses a vastly different approach.  We call it deep flow inspection/correction for known server vulnerabilities based on stateful decoding up to layer 7.  We can alert, block and correct, but most of our deployments are in correct mode, with our full capabilities enabled. From an operational standpoint we have substantially different impacts.

A typical IPS may have 10K signatures while experts recommend turning on just a few hundred.  That kind of marketing shell game (find out what really works) means that there will be plenty of false alarms, false positives and negatives and plenty of tuning.  With polymorphic attacks signature libraries can increase exponentially while not delivering meaningful improvements in protection. 

Blue Lane supports about 1000 inline security patches across dozens of very specific server vulnerabilities, applications and operating systems.  We generate very few false alarms and minimal latency.  We don’t require ANY tuning.  Our customers run our solution in automated, correct mode.

The traditional static signature IPS category has evolved into an ASIC war between some very capable players for the reasons we just discussed. Exploding variations of exploits and vectors means that exploit-centric approaches will require more processing power.

Virtualization is pulling the data center into an entirely different direction, driven by commodity processors.  So of course our VirtualShield solution was a much cleaner setup with a hypervisor; we can plug into the hypervisor layer and run on top of existing hardware, again with minimal latency and footprint.

You don’t have to be a Metasploit genius to evade IPS signatures.  Our higher layer 7 stateful decoding is much more resilient. 
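(Ed: To make the signature-evasion point concrete, here's a toy sketch; the "signature" and payloads are invented for illustration and are not anyone's actual product or technology.)

```python
# Toy illustration of why byte-pattern ("signature") matching is brittle
# against trivial mutation, which is the argument above for protocol-aware,
# stateful inspection. The signature and payloads are made up.
import re

# A naive exploit signature: look for a specific command string on the wire.
SIGNATURE = re.compile(rb"GET /cgi-bin/vulnerable\.cgi\?cmd=/bin/sh")

original = b"GET /cgi-bin/vulnerable.cgi?cmd=/bin/sh HTTP/1.0\r\n\r\n"
# Same request after trivially URL-encoding one character -- the target's
# HTTP/CGI stack decodes it identically, but the regexp no longer matches.
mutated  = b"GET /cgi-bin/vulnerable.cgi?cmd=%2fbin/sh HTTP/1.0\r\n\r\n"

for payload in (original, mutated):
    verdict = "BLOCKED" if SIGNATURE.search(payload) else "passed"
    print(verdict, payload)

# A protocol-aware inspector would normalize (URL-decode, reassemble, track
# state) before deciding, which is the "deep flow inspection" argument here.
```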

2) With zero-days on the rise, pay-for-play vulnerability research and now Zero-Bay (WabiSabiLabi) vulnerability auctions and the like, do you see an uptake in customer demand for vulnerability shielding solutions?

Exploit-signature technologies are meaningless in the face of evanescent, polymorphic threats, resulting in 0-day exploits. Slight modifications to signatures can bypass IPSes, even against known vulnerabilities.  Blue Lane technology provides 0-day protection for any variant of an exploit against known vulnerabilities.  No technology can provide ultimate protection against 0-day exploits based on 0-day vulnerabilities. However, this requires a different class of hacker.

3) As large companies start to put their virtualization strategies in play, how do you see customers addressing securing their virtualized infrastructure?  Do they try to adapt existing layered security methodologies and where do these fall down in a virtualized world?

I’ve explored this topic in depth at the Next Generation Data Center conference last week. Also, your readers might be interested in listening to a recent podcast: The Myths and Realities of Virtualization Security: An Interview. 

To summarize, there are a few things that change with virtualization, that folks need to be aware of.  It represents a new architecture.  The hypervisor layer represents the un-tethering and clustering of VMs, and centralized control.  It introduces a new virtual network layer.  There are entirely new states of servers, not anticipated by traditional static security approaches (like instant create, destroy, clone, suspend, snapshot and revert to snapshot). 

Then you’ll see unprecedented levels of mobility and new virtual appliances and black boxing of complex stacks including embedded databases.  Organizations will have to work out who is responsible for securing this very fluid environment.  We’ll also see unprecedented scalability with Infiniband cores attaching LAN/SAN out to 100’s of ESX hypervisors and thousands of VMs.

Organizations will need the capability to shield these complex, fluid environments; because trying to keep track of individual VMs, states, patch levels, locations will make tuning an IPS for polymorphic attacks look like child’s play in comparison.  Effective solutions will need to be highly accurate, low latency solutions deployed in correct mode. Gone will be the days of man-to-man blocking and tuning.  Here to stay are the days of zone defense.

4) VMware just purchased Determina and intends to integrate their memory firewall IPS product as an ESX VM plug-in.  Given your early partnership with VMware, are you surprised by this move?  Doesn’t this directly compete with the VirtualShield offering?

I wouldn’t read too much into this. Determina hit the wall on sales, primarily because its original memory wall technology was too intrusive, and fell short of handling new vulnerabilities/exploits.

This necessitated the LiveShield product, which required ongoing updates, destroying the value proposition of not having to touch servers, once installed. So, this is a technology/people acquisition, not a product line/customer-base acquisition.

VMware was smart to get a very bright set of folks, with deep memory/paging/OS, and a core technology that would do well to be integrated into the hypervisor for the purpose of hypervisor hardening, and interVM isolation. I don’t see VMware entering the security content business soon (A/V, vulnerabilities, etc.). I see Blue Lane’s VirtualShield technology integrated into the virtual networking layer (vSwitch), as a perfect complement to anything that will come out of the Determina acquisition.

5) Citrix just acquired XenSource.  Do you have plans to offer VirtualShield for Xen? 

A smart move on Citrix’s part to get back into the game. Temporary market caps don’t matter. Virtualization matters. If Citrix can make this a two or three horse race, it will keep the VMware, Citrix, Microsoft triumvirate on their toes, delivering better products, and net good for the customer.

Regarding BlueLane, and Citrix/Xensource, we will continue to pay attention to what customers are buying as they virtualize their data centers. For now, this is a one horse show 🙂

Remotely Exploitable Dead Frog with Embedded Web Server – The “Anatomy” of a Zero-Day Threat Surface

July 25th, 2007 No comments

You think I make this stuff up, don’t you?

Listen, I’m a renaissance man and I look for analogs to the security space anywhere and everywhere I can find them.

I maintain that next to the iPhone, this is the biggest thing to hit the security world since David Maynor found Jesus (in a pool hall, no less.)

I believe InfoSec Sellout already has produced a zero-day for this using real worms.  No Apple products were harmed during the production of this webserver, but I am sad to announce that there is no potential for adding your own apps to the KermitOS…an SDK is available, however.

The frog’s dead.  Suspended in a liquid.  In a Jar.  Connected to the network via an Ethernet cable.  You can connect to the embedded webserver wired into its body parts.  When you do this, you control which one of its legs twitches.  pwned!

You can find the pertinent information here.

A Snort signature will be available shortly.

/Hoff

(Image and text below thanks to Boing Boing)

The Experiments in Galvanism frog floats in mineral oil, a webserver installed in its guts, with wires into its muscle groups. You can access the frog over the network and send it galvanic signals that get it to kick its limbs.

Experiments in Galvanism is the culmination of studio and gallery experiments in which a miniature computer is implanted into the dead body of a frog specimen. Akin to Damien Hirst’s bodies in formaldehyde, the frog is suspended in clear liquid contained in a glass cube, with a blue ethernet cable leading into its splayed abdomen. The computer stores a website that enables users to trigger physical movement in the corpse: the resulting movement can be seen in gallery, and through a live streaming webcamera.

    – Risa Horowitz

Garnet Hertz has implanted a miniature webserver in the body of a frog specimen, which is suspended in a clear glass container of mineral oil, an inert liquid that does not conduct electricity. The frog is viewable on the Internet, and on the computer monitor across the room, through a webcam placed on the wall of the gallery. Through an Ethernet cable connected to the embedded webserver, remote viewers can trigger movement in either the right or left leg of the frog, thereby updating Luigi Galvani’s original 1786 experiment causing the legs of a dead frog to twitch simply by touching muscles and nerves with metal.

Experiments in Galvanism is both a reference to the origins of electricity, one of the earliest new media, and, through Galvani’s discovery that bioelectric forces exist within living tissue, a nod to what many theorists and practitioners consider to be the new new media: bio(tech) art.

    – Sarah Cook and Steve Dietz

Fat Albert Marketing and the Monetizing of Vulnerability Research

July 8th, 2007 No comments

Over the last couple of years, we’ve seen the full spectrum of disclosure and "research" portals arrive on the scene; examples range from the Malware Distribution Project to 3Com/TippingPoint’s Zero Day Initiative.  Both of these examples illustrate ways of monetizing the output of vulnerability research.

Good, bad or indifferent, one would be blind not to recognize that these services are changing the landscape of vulnerability research and pushing the limits which define "responsible disclosure."

It was only a matter of time until we saw the mainstream commercial emergence of the open vulnerability auction which is just another play on the already contentious marketing efforts blurring the lines between responsible disclosure for purely "altruistic" reasons versus commercial gain.

Enter Wabisabilabi, the eBay of Zero Day vulnerabilities.

This auction marketplace for vulnerabilities is marketed as a Swiss "…Laboratory & Marketplace Platform for Information Technology Security" which "…helps customers defend their databases, IT infrastructure, network, computers, applications, Internet offerings and access."

Despite a name which sounds like Mushmouth from Fat Albert created it (it’s Japanese in origin, according to the website) I am intrigued by this concept and whether or not it will take off.

I am, however, a little unclear on how customers are able to purchase a vulnerability and then become more secure in defending their assets. 

A vulnerability without an exploit, some might suggest, is not a vulnerability at all — or at least it poses little temporal risk.  This is a fundamental debate of the definition of a Zero-Day vulnerability. 

Further, a vulnerability that has a corresponding exploit but without a countermeasure (patch, signature, etc.) is potentially just as useless to a customer if you have no way of protecting yourself.

If you can’t manufacture a countermeasure, even if you hoard the vulnerability and/or exploit, how is that protection?  I suggest it’s just delaying the inevitable.

I am wondering how long until we see the corresponding auctioning off of the exploit and/or countermeasure?  Perhaps by the same party that purchased the vulnerability in the first place?

Today, in the closed loop subscription services offered by vendors who buy vulnerabilities, the subscribing customer gets the benefit of protection against a threat they may not even know they have. But for those who can’t or won’t pony up the money for this sort of subscription (which is usually tied to owning a corresponding piece of hardware to enforce it), there exists a point in time between when the vulnerability is published and when this knowledge is made available universally.

Depending upon this delta, these services may be doing more harm than good to the greater populous.

In fact, Dave G. over at Matasano argues quite rightly that by publishing even the basic details of a vulnerability that "researchers" will be able to more efficiently locate the chunks of code wherein the vulnerability exists and release this information publicly — code that was previously not known to even have a vulnerability.

Each of these example vulnerability service offerings describes how the vulnerabilities are kept away from the "bad guys" by qualifying their intentions based upon the ability to pay for access to the malicious code (we all know that criminals are poor, right?)  Here’s what the Malware Distribution Project describes as the gatekeeper function:

Why Pay?

Easy; it keeps most, if not all of the malicious intent, outside the gates. While we understand that it may be frustrating to some people with the right intentions not allowed access to MD:Pro, you have to remember that there are a lot of people out there who want to get access to malware for malicious purposes. You can’t be responsible on one hand, and give open access to everybody on the other, knowing that there will be people with expressly malicious intentions in that group.

ZDI suggests that by not reselling the vulnerabilities but rather protecting their customers and ultimately releasing the code to other vendors, they are giving back:

The Zero Day Initiative (ZDI) is unique in how the acquired vulnerability information is used. 3Com does not re-sell the vulnerability details or any exploit code. Instead, upon notifying the affected product vendor, 3Com provides its customers with zero day protection through its intrusion prevention technology. Furthermore, with the altruistic aim of helping to secure a broader user base, 3Com later provides this vulnerability information confidentially to security vendors (including competitors) who have a vulnerability protection or mitigation product.

As if you haven’t caught on yet, it’s all about the Benjamins. 

We’ve seen the arguments ensue regarding third party patching.  I think that this segment will heat up because in many cases it’s going to be the fastest route to protecting oneself from these rapidly emerging vulnerabilities you didn’t know you had.

/Hoff

Take5- Five Questions for Chris Wysopal, CTO Veracode

June 19th, 2007 No comments

In this first installment of Take5, I interview Chris Wysopal, the CTO of Veracode about his new company, secure coding, vulnerability research and the recent forays into application security by IBM and HP.

This entire interview was actually piped over a point-to-point TCP/IP connection using command-line redirection through netcat.  No packets were harmed during the making of this interview…

First, a little background on the victim, Chris Wysopal:

Chris Wysopal is co-founder and CTO of Veracode. He has testified on Capitol Hill on the subjects of government computer security and how vulnerabilities are discovered in software. Chris co-authored the password auditing tool L0phtCrack, wrote the Windows version of netcat, and was a researcher at the security think tank, L0pht Heavy Industries, which was acquired by @stake. He was VP of R&D at @stake and later director of development at Symantec, where he led a team developing binary static analysis technology.

He was influential in the creation of responsible vulnerability disclosure guidelines and a founder of the Organization for Internet Safety. Chris wrote "The Art of Software Security Testing: Identifying Security Flaws", published by Addison Wesley and Symantec Press in December 2006. He earned his Bachelor of Science degree in Computer and Systems Engineering from Rensselaer Polytechnic Institute.

1) You’re a founder of Veracode which is described as the industry’s first provider of automated, on-demand application security solutions.  What sort of application security services does Veracode provide?  Binary analysis, Web Apps?

Veracode currently offers binary static analysis of C/C++ applications for Windows and Solaris and for Java applications.  This allows us to find the classes of vulnerabilities that source code analysis tools can find but on the entire codebase including the libraries which you probably don’t have source code for. Our product roadmap includes support for C/C++ on Linux and C# on .Net.  We will also be adding additional analysis techniques to our flagship binary static analysis.
 
2) Is this a SaaS model?  How do you charge for your services?  Do you see manufacturers using your services or enterprises?

Yes. Customers upload their binaries to us and we deliver an analysis of their security flaws via our web portal.  We charge by the megabyte of code.  We have both software vendors and enterprises who write or outsource their own custom software using our services.  We also have enterprises who are purchasing software ask the software vendors to submit their binaries to us for a 3rd party analysis.  They use this analysis as a factor in their purchasing decision. It can lead to a "go/no go" decision, a promise by the vendor to remediate the issues found, or a reduction in price to compensate for the cost of additional controls or the cost of incident response that insecure software necessitates.
 
3) I was a Qualys customer — a VA/VM SaaS company.  Qualys had to spend quite a bit of time convincing customers that allowing for the storage of their VA data was secure.  How does Veracode address a customer’s security concerns when uploading their applications?

We are absolutely fanatical about the security of our customers’ data.  I look back at the days when I was a security consultant where we had vulnerability data on laptops and corporate file shares and I say, "what were we thinking?"  All customer data at Veracode is encrypted in storage and at rest with a unique key per application and customer.  Everyone at Veracode uses 2 factor authentication to log in and 2 factor is the default for customers.  Our data center is a SAS 70 Type II facility. All data access is logged so we know exactly who looked at what and when. As security people we are professionally paranoid and I think it shows through in the system we built.  We also believe in 3rd party verification so we have had a top security boutique do a security review of our portal application.
 
4) With IBM’s acquisition of Watchfire and today’s announcement that HP will buy SPI Dynamics, how does Veracode stand to play in this market of giants who will be competing to drive service revenues?

We have designed our solution from the ground up to have the Web 2.0 ease of use and experience, and we have the quality of analysis that I feel is the best in the market today.  An advantage is that Veracode is an independent assessment company that customers can trust not to play favorites with other software companies because of partnerships or alliances. Would Moody’s or Consumer Reports be trusted as a 3rd party if they were part of a big financial or technology conglomerate? We feel a 3rd party assessment is important in the security world.
 
5) Do you see the latest developments in vulnerability research, with the drive for pay-for-zeroday initiatives, pressuring developers to produce secure code out of the box for fear of exploit, or is it driving the activity to companies like yours?

I think the real driver for developers to produce secure code, and for developers and customers to seek code assessments, is the reality that the cost of insecure code goes up every day and it’s adding to the operational risk of companies that use software.  People exploiting vulnerabilities are not going away and there is no way to police the internet of vulnerability information.  The only solution is for customers to demand more secure code, and proof of it, and for developers to deliver more secure code in response.