Archive

Archive for July, 2007

Freaky Friday Post: How to Enforce Security in the Penitentiary…Filipino Style

July 20th, 2007 2 comments

Many nation states find innovative ways of enforcing security in their stockades and jails.  We’ve got that crazy sheriff in Arizona who enables the prisoners’ fashion sense by outfitting them with pink jumpsuit couture, and then there’s Gitmo.

Honestly, in the U.S., jails have become an effective method of obtaining 3 squares and a warm bed (albeit sometimes with somebody else in it) and frankly provide little in the way of taxpayer payback.

I say enough!  Let’s look abroad for inspiration.  Let’s see…the Taliban’s a little extreme, Brazil…that whole waxing thing is over the top…ummmm…

Aha! The Philippines have solved this problem.  It seems they have made an investment that truly demonstrates ROI in the security space.  It allows for both freedom of expression and entertainment whilst repaying the monarchy in a tribute to the King Of Pop.

Ladies and germs, I present to you the ensemble cast of inmates from the Cebu Provincial Detention and Rehabilitation Center, Cebu, Philippines.  Hey, you slackers at Pelican Bay, nut up!

This truly is the "Thriller in Manila!"

Wow.

Blue Man Group is teh pwned!

/Hoff

Categories: Jackassery Tags:

The Evolution of Bipedal Locomotion & the Energetic Economics of Virtualization

July 17th, 2007 5 comments

By my own admission, this is a stream-of-consciousness, wacky, caffeine-inspired rant that came about while I was listening to a conference call.   It’s my ode to paleoanthropology and how we, the knuckledraggers of IT/Security, evolve.

My apologies to anyone who actually knows anything about or makes an honest living from science; I’ve quite possibly offended all of you with this post…

I came across this interesting article posted today on the ScienceDaily website, which discusses a hypothesis by a University of Arizona professor, David Raichlen, who suggests that bipedalism, or walking on two legs, evolved simply because it used less energy than quadrupedal knuckle-walking.  The energy expended while knuckle-walking on all fours is roughly four times that of walking upright.  That's huge.

I’m always looking for good tangential analogs for points I want to reinforce within the context of my line of work, and I found this fantastic fodder for such an exercise.

So without a lot of work on my part, I'm going to post some salient points from the article and leave it up to you to determine how, if at all, the "energetic" evolution of virtualization parallels this intriguing hypothesis: that the heretofore-theorized complexity associated with this crucial element of human evolution was, in fact, simply a matter of energy efficiency which ultimately led to sustainable survivability, and not necessarily a result of ecological, behavioral or purely anatomical pressures:

From Here:

The origin of bipedalism, a defining feature of hominids, has been attributed to several competing hypotheses. The postural feeding hypothesis (Hunt 1996) is an ecological model. The behavioral model (Lovejoy 1981) attributes bipedality to the social, sexual and reproductive conduct of early hominids. The thermoregulatory model (Wheeler 1991) views the increased heat loss, increased cooling, reduced heat gain and reduced water requirements conferred by a bipedal stance in a hot, tropical climate as the selective pressure leading to bipedalism.

At its core, server virtualization might be described as a manifestation of how we rationalize and deal with the sliding-window impacts of time and the operational costs associated with keeping pace with the transformation and adaptation of technology in compressed footprints.  One might describe this as the "energy" (figuratively and literally) that it takes to operate our IT infrastructure.

It's about doing more with less and being more efficient such that the "energy" used to produce and deliver services is small in comparison to the output mechanics of what is consumed.  Once the efficiency gains (or savings?) are realized, that energy can be allocated to other, more enabling abilities.  Using the ape-to-human bipedalism analog, one could suggest that bipedalism led to bigger brains, better hunting/gathering skills, fashioning tools, etc.  Basically, the initial step of efficiency gains leads to exponential capabilities over the long term.

So that’s my Captain Obvious declaration relating bipedalism with virtualization.  Ta Da!

From the article as sliced & diced by the fine folks at ScienceDaily:

Raichlen and his colleagues will publish the article, "Chimpanzee locomotor energetics and the origin of human bipedalism," in the online early edition of the Proceedings of the National Academy of Sciences (PNAS) during the week of July 16. The print issue will be published on July 24.

Bipedalism marks a critical divergence between humans and other apes and is considered a defining characteristic of human ancestors. It has been hypothesized that the reduced energy cost of walking upright would have provided evolutionary advantages by decreasing the cost of foraging.

"For decades now researchers have debated the role of energetics and the evolution of bipedalism," said Raichlen. "The big problem in the study of bipedalism was that there was little data out there."

The researchers collected metabolic, kinematic and kinetic data from five chimpanzees and four adult humans walking on a treadmill. The chimpanzees were trained to walk quadrupedally and bipedally on the treadmill.

Humans walking on two legs only used one-quarter of the energy that chimpanzees who knuckle-walked on four legs did. On average, the chimpanzees used the same amount of energy using two legs as they did when they used four legs. However, there was variability among chimpanzees in how much energy they used, and this difference corresponded to their different gaits and anatomy.

"We were able to tie the energetic cost in chimps to their anatomy," said Raichlen. "We were able to show exactly why certain individuals were able to walk bipedally more cheaply than others, and we did that with biomechanical modeling."

The biomechanical modeling revealed that more energy is used with shorter steps or more active muscle mass. Indeed, the chimpanzee with the longest stride was the most efficient walking upright.

"What those results allowed us to do was to look at the fossil record and see whether fossil hominins show adaptations that would have reduced bipedal energy expenditures," said Raichlen. "We and many others have found these adaptations [such as slight increases in hindlimb extension or length] in early hominins, which tells us that energetics played a pretty large role in the evolution of bipedalism."

The point here is not that I’m trying to be especially witty, but rather to illustrate that when we cut through the FUD and marketing surrounding server virtualization and focus on evolution versus revolution, some very interesting discussion points emerge regarding why folks choose to virtualize their server infrastructure.

After I attended the InterOp Data Center Summit, I walked away with a very different view of the benefits and costs of virtualization than I had before.  I think that as folks approach this topic, the realities of how the game changes once we start "walking upright" will have a profound impact on how we view infrastructure and what the next step might bring.

Server virtualization at its most basic is about economic efficiency (read: energy == power + cooling…) plain and simple.  However, if we look beyond this as the first "step," we’ll see grid and utility computing paired with Web2.0/SaaS take us to a whole different level.  It’s going to push security to its absolute breaking point.
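
To put some toy numbers behind the "energy == power + cooling" point, here's a back-of-the-envelope sketch. Every figure in it (watts per box, cooling overhead, consolidation ratio, utility rate) is a made-up placeholder rather than anything from the article or a vendor:

```python
# Back-of-the-envelope math on the "energy == power + cooling" claim.
# All numbers below are hypothetical placeholders, not measurements.

WATTS_PER_PHYSICAL_SERVER = 400      # assumed average draw per box
COOLING_OVERHEAD = 0.8               # assume ~0.8 W of cooling per 1 W of IT load
HOURS_PER_YEAR = 24 * 365
COST_PER_KWH = 0.10                  # assumed utility rate in dollars

def annual_energy_cost(server_count: int) -> float:
    """Yearly power + cooling cost for a given number of physical servers."""
    total_watts = server_count * WATTS_PER_PHYSICAL_SERVER * (1 + COOLING_OVERHEAD)
    return (total_watts / 1000.0) * HOURS_PER_YEAR * COST_PER_KWH

before = annual_energy_cost(100)        # 100 lightly-loaded physical boxes
after = annual_energy_cost(100 // 10)   # consolidated 10:1 onto 10 hosts

print(f"Before virtualization: ${before:,.0f}/yr")
print(f"After 10:1 consolidation: ${after:,.0f}/yr")
print(f"Energy freed up for 'bigger brains': ${before - after:,.0f}/yr")
```

The exact numbers don't matter; the point is that consolidation attacks the power-and-cooling term directly, and whatever gets freed up is the "energy" available for the next evolutionary step.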

I liked the framing of the problem set with the bipedal analog.  I can’t wait until we come full circle, grow wings and start using mainframes again 😉

Did that make any bloody sense at all?

/Hoff

P.S. I liked Jeremiah’s evolution picture, too:

[Image: Jeremiah's evolution picture]


Categories: Virtualization Tags:

BeanSec! 11 – July 18th – 6PM to ?

July 16th, 2007 No comments

Yo!  BeanSec! 11 is upon us.  Wednesday, July 18th, 2007.

BeanSec! is an informal meetup of information security professionals, researchers and academics in the Greater Boston area that meets the third Wednesday of each month.

Come get your grub on.  Lots of good people show up.  Really.

Unlike other meetings, you will not be expected to pay dues, "join up," present a zero-day exploit, or defend your dissertation to attend.

Map to the Enormous Room in Cambridge.

Enormous Room: 567 Mass Ave, Cambridge 02139

Categories: BeanSec! Tags:

Secure Services in the Cloud (SSaaS/Web2.0) – InternetOS Service Layers

July 13th, 2007 2 comments

The last few days of activity involving Google and Microsoft have really catalyzed some thinking and demonstrated some very intriguing indicators as to how the delivery of applications and services is dramatically evolving. 

I don't mean the warm and fuzzy marketing fluff.  I mean real anchor technology investments by the big boys, putting their respective stakes in the ground as they redefine their business models to set up for the future.

Enterprises large and small are really starting to pay attention to the difference between infrastructure and architecture and this has a dramatic effect on the service providers and supply chain who interact with them.

It’s become quite obvious that there is huge business value associated with divorcing the need for "IT" to focus on physically instantiating and locating "applications" on "boxes" and instead  delivering "services" with the Internet/network as the virtualized delivery mechanism.

Google v. Microsoft – Let’s Get Ready to Rumble!

My last few posts on Google's move to securely deliver a variety of applications and services represent the uplift of the "traditional" perspective of backoffice SaaS offerings such as Salesforce.com, but they also highlight the migration of desktop applications and utility services to the "cloud."

This is really executing on the thin-client, Internet-centric vision from back in the day o' the bubble, when we saw a ton of Internet-borne services such as storage, backup, etc. using the "InternetOS" as the canvas for service delivery.

So we’ve talked about Google.  I maintain that their strategy is to ultimately take on Microsoft — including backoffice, utility and desktop applications.  So let’s look @ what the kids from Redmond are up to.

What Microsoft is working toward with its vision of a CloudOS was just recently expounded upon by one Mr. Ballmer.

Not wanting to lose mindshare or share of wallet, Microsoft is maneuvering to give the customer control over how they want to use applications and, more importantly, how those applications might be delivered.  Microsoft Live bridges the gap between the traditional desktop and the "cloud."

Let’s explore that a little:

In addition to making available its existing services, such as mail and instant messaging, Microsoft also will create core infrastructure services, such as storage and alerts, that developers can build on top of. It's a set of capabilities that have been referred to as a "Cloud OS," though it's not a term Microsoft likes to use publicly.

Late last month, Microsoft introduced two new Windows Live Services, one for sharing photos and the other for all types of files. While those services are being offered directly by Microsoft today, they represent the kinds of things that Microsoft is now promising will also be made available to developers.

Among the other application and infrastructure components Microsoft plans to open are its systems for alerts, contact management, communications (mail and messenger) and authentication.

As it works to build out the underlying core services, Microsoft is also offering up applications to partners, such as Windows Live Hotmail, Windows Live Messenger and the Spaces blogging tool.

Combine the advent of "thinner" endpoints (read: mobility products) with high-speed, lower-latency connectivity and we can see why this model is attractive and viable.  I think this battle is heating up, and the consumer will benefit.

A Practical Example of SaaS/InternetOS Today?

So if we take a step back from Google and Microsoft for a minute, let's take a snapshot of how one might compose, provision, and deploy applications and data as a service using a similar model over the Internet with tools other than Live or Google Gears.

Let me give you a real-world example — deliverable today — of this capability with a functional articulation of this strategy: on-demand services and applications provided via virtualized datacenter delivery architectures using the Internet as the transport.  I'm going to use a mashup of two technologies: Yahoo Pipes and 3Tera's AppLogic.

Yahoo Pipes is "…an interactive data aggregator and manipulator that lets you mashup your favorite online data sources."  Assuming you have data from various sources you want to present, an environment such as Pipes will allow you to dynamically access, transform and present this information any way you see fit.

This means that you can create what amounts to applications and services on demand.
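
For anyone who hasn't played with Pipes, here's a minimal, hand-rolled sketch of the same access/transform/present pattern in Python. The feed URLs and keyword are placeholders, and this is obviously not how Pipes itself is built; it just shows the shape of the idea:

```python
# A hand-rolled, Pipes-style mashup: fetch, filter, merge, re-present.
# Feed URLs and the keyword below are illustrative placeholders.
import feedparser  # third-party: pip install feedparser

FEEDS = [
    "http://example.com/security/rss.xml",
    "http://example.org/virtualization/atom.xml",
]

def aggregate(feed_urls, keyword):
    """Pull entries from several feeds and keep only those matching a keyword."""
    items = []
    for url in feed_urls:
        parsed = feedparser.parse(url)
        for entry in parsed.entries:
            title = entry.get("title", "")
            if keyword.lower() in title.lower():
                items.append((entry.get("published", ""), title, entry.get("link", "")))
    # Sort newest-ish first by the published string; good enough for a sketch.
    return sorted(items, reverse=True)

if __name__ == "__main__":
    for published, title, link in aggregate(FEEDS, "virtualization"):
        print(f"{published} | {title} | {link}")
```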

Let's agree, however, that while you have the data integration/presentation layer, in many cases you would traditionally require a complex collection of infrastructure in which this source data is housed, accessed, maintained and secured.

However, rather than worry about where and how the infrastructure is physically located, let's use the notion of utility/grid computing to dynamically make available an on-demand architecture that is modular, reusable and flexible enough to make service delivery a reality, using the Internet as a transport.

Enter 3Tera’s AppLogic:

3Tera’s AppLogic is used by hosting providers to offer true utility computing. You get all the control of having your own virtual datacenter, but without the need to operate a single server.

Deploy and operate applications in your own virtual private datacenter:

- Set up infrastructure, deploy apps and manage operations with just a browser
- Scale from a fraction of a server to hundreds of servers in days
- Deploy and run any Linux software without modifications
- Get your life back: no more late night rushes to replace failed equipment

In fact, BT is using them as part of the 21CN project which I’ve written about many times before.

So check out this vision, assuming the InternetOS as a transport.  It's the drag-and-drop, point-and-click Metaverse of virtualized applications and data combined with on-demand infrastructure.

You first define the logical service composition and provisioning through 3Tera with a visual drag-and-drop canvas, defining firewalls, load-balancers, switches, web servers, app servers, databases, etc.  Then you click the "Go" button.  AppLogic provisions the entire thing for you without you even necessarily knowing where these assets are.

Then, use something like Pipes to articulate how data sources can be accessed, consumed and transformed to deliver the requisite results, all over the Internet, transparently and securely.
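
To make the "define the composition, then hit Go" step a bit more concrete, here's a hypothetical sketch of a service composition described as plain data. To be clear, this is not AppLogic's actual catalog, API or file format; it's just an illustration of declaring a topology and letting an engine worry about placement:

```python
# Hypothetical declarative topology -- illustrative only, not AppLogic syntax.
TOPOLOGY = {
    "name": "web-storefront",
    "components": [
        {"type": "firewall",      "name": "fw1"},
        {"type": "load_balancer", "name": "lb1",  "behind": "fw1"},
        {"type": "web_server",    "name": "web1", "behind": "lb1", "count": 3},
        {"type": "app_server",    "name": "app1", "behind": "web1", "count": 2},
        {"type": "database",      "name": "db1",  "behind": "app1"},
    ],
}

def provision(topology):
    """Pretend 'Go' button: walk the description and report what would be built."""
    print(f"Provisioning virtual datacenter '{topology['name']}'...")
    for component in topology["components"]:
        instances = component.get("count", 1)
        placement = component.get("behind", "edge")
        print(f"  {instances} x {component['type']} '{component['name']}' behind {placement}")
    print("Done. (In a real utility-computing grid, you never learn which physical boxes these landed on.)")

provision(TOPOLOGY)
```

The separation of concerns is the whole point: the topology is pure description, and the grid decides where (and on what) it actually runs.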

Very cool stuff.

Here are some screen-caps of Pipes and 3Tera.

[Screenshot: Yahoo Pipes]

[Screenshot: 3Tera AppLogic]


More on GoogleTini…(Google/Postini Acquisition) by Way of Shimel’s Post

July 10th, 2007 8 comments

Yesterday's post regarding my prognostication of the Google/Postini M&A activity yielded a ton of off-line feedback/opinion/queries.  I had three press/analyst calls yesterday on my opinion, so either I'm tickling somebody's interest funny bone or I'm horribly wrong 😉

Either way, Alan Shimel piped up today with his perspective.  It’s not often I disagree with Alan, but the root of his comment leaves me puzzled.  Alan said:

I do not think that Google's acquisition of Postini is a shot across the bow of Microsoft.  I think Google goes about its business of delivering on its vision.  I think its vision is rather simple really. Google believes that the future belongs to Software as a Service (SaaS).  As part of their SaaS strategy, they need to secure their web based apps, as well as offer security as a service.  This is not really much different than Microsoft's "Live" program, also a Software as a Service play.  That is where the competition is.

It appears that Alan's really re-stating what I said yesterday regarding SaaS, especially the security aspects thereof, but his statements are strangely contradictory within the scope of this single paragraph.

To wit, if Google is indeed focused on SSaaS (Secure Software as a Service) and is looking to displace, at least for certain markets, the traditional "Office" applications that are Microsoft's cash cow ($12B business?), how is this not a "shot across the bow of Microsoft"?

Further, if Microsoft is engaging in SaaS with Live, it only underscores the direct competition and demonstrates that Microsoft (et al.) are firmly in the crosshairs.

What am I missing here?

/Hoff

(EDIT: Added a link to an interview I did with TheStreet.com here.)

Categories: Google Tags:

Tell Me Again How Google Isn’t Entering the Security Market? GooglePOPs will Bring Clean Pipes…

July 9th, 2007 2 comments

Not to single out Jeremiah, but in my Take5 interview with him, I asked him the following:

3) What do you make of Google’s foray into security?  We’ve seen them crawl sites and index malware.  They’ve launched a security  blog.  They acquired GreenBorder.  Do you see them as an emerging force to be reckoned with in the security space?

…to which he responded:

I doubt Google has plans to make this a direct revenue generating  exercise. They are a platform for advertising, not a security company. The plan is probably to use the malware/solution research  for building in better security in Google Toolbar for their users.  That would seem to make the most sense. Google could monitor a user’s  surfing habits and protect them from their search results at the same time.

To be fair, this was a loaded question because my opinion is diametrically opposed to his.   I believe Google *is* entering the security space and will do so in many vectors and it *will* be revenue generating. 

This morning's news that Google is acquiring Postini for $625 million doesn't surprise me at all, and I believe it proves the point. 

In fact, I reckon that in the long term we'll see the Google Toolbar morph into a much more intelligent and rich client-side security application proxy service, whereby Google pairs the client-side security of the Toolbar with the GreenBorder browsing environment and tunnels/proxies all outgoing requests to GooglePOPs.

What’s a GooglePOP?

These GooglePOPs (Google Points of Presence) will house large search and caching repositories that will — in conjunction with services such as those from Postini — provide a "clean pipes" service to the consumer.  Don't forget the utility services that recent acquisitions such as GrandCentral and FeedBurner provide…it's too bad that eBay snatched up Skype…

Google will, in fact, become a monster ASP.  Note that I said ASP and not ISP.  ISP is a commoditized function.  Serving applications and content as close to the user as possible is fantastic.  So pair all the client side goodness with security functions AND add GoogleApps and you’ve got what amounts to a thin client version of the Internet.
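
If you want a feel for how trivial the client-side plumbing for that could be, here's a sketch of routing all outbound requests through a filtering/caching POP. The proxy address is a placeholder and none of this reflects anything Google has actually shipped; it's speculation rendered as code:

```python
# Hypothetical sketch of the client-side piece: every outbound request gets
# tunneled through a filtering/caching POP instead of going direct.
# The POP address and the "clean pipes" behavior are pure speculation.
import requests  # third-party: pip install requests

GOOGLEPOP_PROXY = "http://pop.example.net:3128"  # placeholder, not a real endpoint

session = requests.Session()
session.proxies = {"http": GOOGLEPOP_PROXY, "https": GOOGLEPOP_PROXY}

try:
    # From the user's perspective nothing changes; the POP in the middle gets to
    # cache, scrub and (per the darker scenario above) inspect everything.
    response = session.get("http://www.example.com/", timeout=5)
    print(response.status_code, len(response.content), "bytes served via the POP")
except requests.exceptions.RequestException as exc:
    print("No such POP exists (yet), so this fails here:", exc)
```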

Remember all those large sealed shipping containers (not unlike Sun's Project Blackbox) that Google is rumored to place strategically around the world — in conjunction with their mega datacenters?  I think it was Cringely who talked about this back in 2005:

In one of Google's underground parking garages in Mountain View … in a secret area off-limits even to regular GoogleFolk, is a shipping container. But it isn't just any shipping container. This shipping container is a prototype data center.

Google hired a pair of very bright industrial designers to figure out how to cram the greatest number of CPUs, the most storage, memory and power support into a 20- or 40-foot box. We're talking about 5000 Opteron processors and 3.5 petabytes of disk storage that can be dropped-off overnight by a tractor-trailer rig.

The idea is to plant one of these puppies anywhere Google owns access to fiber, basically turning the entire Internet into a giant processing and storage grid.

Imagine that.  Buy a ton of dark fiber, sprout hundreds of these PortaPOPs/GooglePOPs, and you've got Internet v3.0.

Existing transit folks that aren’t Yahoo/MSN will ultimately yield to the model because it will reduce their costs for service and they will basically pay Google to lease these services for resale back to their customers (with re-branding?) without the need to pay for all the expensive backhaul.

Your Internet will be served out of cache…"securely."  So now instead of just harvesting your search queries, Google will have intimate knowledge of ALL of your browsing — scratch that — all of your network-based activity.   This will provide for not only much more targeted ads, but also the potential for ad insertion and traffic prioritization to preferred Google advertisers, all the while offering "protection" to the consumer.

SMBs and average Joe consumers will be the first to embrace this as cost-based S^2aaS (Secure Software as a Service) becomes mainstream, and this will then yield a trickle-up to the Enterprise and service providers as demand pressures them into providing like levels of service…for free.

It’s not all scary, but think about it…

Akamai ought to be worried.  Yahoo and MSN should be worried.  The ISPs of the world investing in clean pipes technologies ought to be worried (I've blogged about Clean Pipes here.)

Should you be worried?  Methinks the privacy elements of all this will spur some very interesting discussions.

Talk amongst yourselves.

/Hoff

(Didn’t see Newby’s post here prior to writing this…good on-topic commentary.  Dennis Fisher over at the SearchSecurity Blog has an interesting Microsoft == Google perspective.)

Take5 (Episode #4) – Five Questions for Shlomo Kramer, Founder/CEO of Imperva

July 8th, 2007 No comments

This fourth installment of Take5 features Shlomo Kramer, Founder and CEO of Imperva.

First a little background on the victim:

In 2006, Shlomo Kramer was selected by Network World magazine as one of 20 luminaries who changed the network industry.

Prior to founding Imperva, Mr. Kramer co-founded Check Point Software Technologies Ltd. in 1993. At Check Point, he served in various executive roles through 1998 and as a member of the board of directors through 2003. While at Check Point, Mr. Kramer played a key role in defining and creating several category-defining products and solutions, including FireWall-1, VPN-1, FloodGate-1, Check Point's OPSEC alliance, and Check Point's security appliance program.

Mr. Kramer has participated as an early investor and board member in a number of security and enterprise software companies including Palo Alto Networks, Serendipity Technologies, and Trusteer. Mr. Kramer received a Masters degree in Computer Science from Hebrew University of Jerusalem and a Bachelor of Science degree in Mathematics and Computer Science from Tel Aviv University.

Questions:

1) As most people know, you are a co-founder of Check Point and the CEO of Imperva.  You’re a serial entrepreneur who has made a career of bringing innovation to the security market.  What are you working on now that is new and exciting?

All my time has been devoted in the last few years to Imperva. This project continues to excite me. After five years of hard work, it is very rewarding to see Imperva being recognized as the leader in application data security and compliance. Imperva delivers data governance and protection solutions for monitoring, audit, and security of business applications and databases. This is really a hot issue for organizations given the new threat landscape, regulations such as PCI and SOX and the ever increasing privacy legislation. I have always believed what we do at Imperva will define a new product category and the last couple of years have been a big step towards that.

I am also involved as an investor and board member in a number of other great security startups.  One example is Palo Alto Networks (www.paloaltonetworks.com), a next-generation firewall company. Their products provide full visibility and policy control over applications across all ports, all protocols, all the time–with no performance degradation. We’ve just launched the company, it’s an exciting time for Palo Alto Networks.

Another great company I am involved with is Trusteer (www.trusteer.com). Trusteer addresses the critical problem of protecting on-line transactions. Trusteer came up with a revolutionary way to protect online businesses from any "client-side" identity threat such as phishing, pharming, and crimeware. Helping businesses strengthen consumer trust, reduce costs, and differentiate online services is a big challenge, and Trusteer has a very interesting and unique solution.

2) So tell us more about Palo Alto Networks on whose Board you sit.   The company has assembled an absolutely amazing group of heavy hitters from industry.  Either you’ve already got the company sold to Cisco and everyone’s signing on for the options or this is really going to be huge.  What’s so different  about what PAN is doing?

Existing firewalls are based on Stateful Inspection, which employs a port and protocol approach to traffic classification. The problem existing firewall vendors face is that much of their core technology (Stateful Inspection) is over a dozen years old, and new applications have found a variety of ways to evade or bypass it with relative ease. Attempts by firewall vendors to fix the problem by 'bolting on' Intrusion Prevention (IPS) or Deep Packet Inspection as an additional feature have proven unsuccessful, resulting in significant issues with accuracy, performance and management complexity.

Starting with a blank slate, the Palo Alto Networks founders took an application-centric approach to traffic classification, thereby enabling visibility into, and control over, Internet applications running on enterprise networks. The PA-4000 Series is a next-generation firewall that classifies traffic based on the accurate identification of the application, irrespective of the port, protocol, SSL encryption or evasive tactic used.
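
To illustrate the difference Shlomo is describing, here's a toy contrast between port-based and payload-aware classification. The "signatures" are contrived for the example, and this is in no way how the PA-4000 (or any real DPI engine) is implemented:

```python
# Toy contrast between port-based and application-aware classification.
# Contrived signatures; a teaching sketch, not a real classifier.

PORT_MAP = {80: "http", 443: "ssl", 25: "smtp"}

APP_SIGNATURES = {
    b"BitTorrent protocol": "bittorrent",
    b"SSH-2.0": "ssh",
    b"GET /": "http",
}

def classify_by_port(dst_port: int) -> str:
    """Stateful-inspection-era logic: the port *is* the application."""
    return PORT_MAP.get(dst_port, "unknown")

def classify_by_payload(payload: bytes, dst_port: int) -> str:
    """Application-centric logic: look at what's inside, regardless of port."""
    for signature, app in APP_SIGNATURES.items():
        if signature in payload:
            return app
    return classify_by_port(dst_port)  # fall back if nothing matches

# A peer-to-peer client hiding on TCP/80 fools the port-based check...
sneaky = b"\x13BitTorrent protocol..."
print(classify_by_port(80))               # -> "http"  (wrong)
print(classify_by_payload(sneaky, 80))    # -> "bittorrent"  (caught)
```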

3) Having been an early adopter of Check Point, Imperva, Vidius, Skybox, Sanctum, etc. I clued in long ago to the power of the Israeli influence in the security industry.   Why are so many of the market leading technologies coming out of Israel? What’s in the water over there?

Really, the start was the IDF-based incubation of security know-how some 20 years ago. That for sure was the case when we started Check Point. Over the years, an independent security community has emerged, and by now it is very much a self-perpetuating ecosystem. I am very proud of being one of the founders not only of Check Point and Imperva but also of this broader Israeli security community.

4) We haven’t had a big worm outbreak in the last couple of years and some would argue it’s quiet out there. While identity theft leads the headlines these days, what’s the silent killer lurking in the background that people aren’t talking  about in the security industry?

When we started Imperva in 2002, security was all about worms – it was about a “my attack is bigger than yours” hacker mentality. We believed that future threats would be different and would be focused on targeted attacks.  We placed a bet that the motive of hackers would shift from ego to profit.  We’ve definitely seen that trend materialize over the last couple of years. On the server side, 50% of data leakage involves SQL-injection attacks and XSS is increasingly a leading threat, especially with the added complexity of Web 2.0 applications. Additionally, on the client side we are seeing many more targeted attacks, all the way down to the specific brokerage and on-line banking system you are using. The crimeware infecting your laptop cannot be addressed by a generic, negative logic solution, like anti-virus or anti-spyware, nor will strong authentication help circumvent its malice.
These targeted attacks on business data and on-line transactions are the focus of both Imperva and Trusteer. Imperva focuses on the server side of the transaction while Trusteer focuses on the client side.
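
As an aside on the SQL-injection statistic above: the entire attack class comes down to mixing untrusted input into query text. Here's a minimal illustration using Python's built-in sqlite3 module (the schema and data are invented for the demo):

```python
# Minimal illustration of the SQL-injection class referenced above.
# Uses the standard-library sqlite3 module; the schema is invented for the demo.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, card TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '4111-1111-1111-1111')")

attacker_input = "nobody' OR '1'='1"

# Vulnerable: untrusted input concatenated straight into the query text.
leaky = conn.execute(
    "SELECT card FROM users WHERE name = '" + attacker_input + "'"
).fetchall()
print("string concatenation leaks:", leaky)   # returns every card in the table

# Safer: parameterized query; the input is treated as data, not SQL.
safe = conn.execute(
    "SELECT card FROM users WHERE name = ?", (attacker_input,)
).fetchall()
print("parameterized query returns:", safe)   # returns nothing
```

Fixing every such query in every packaged application isn't realistic, which is exactly the gap the database-side monitoring Shlomo describes in the next answer is meant to cover.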

5) With Imperva, you're in the Web Application Security business.  What's your take on the recent acquisitions by IBM and HP and how they are approaching the problem?  For companies whose core competencies are not focused on security, will this sort of activity really serve the interest of the customer, or is it just opportunism?

Just to clarify, Imperva is actually in the application data security and compliance business, a major component of which is Web application security.  Securing databases and big enterprise applications are also part of that picture, as well as addressing regulatory mandates around data usage.  It’s all interrelated.

I think the moves by HP & IBM validate a general trend that we at Imperva have been evangelizing for some time — that application security is a huge issue, and we as an industry really need to get serious about protecting business applications and data.

I would argue that they won’t solve application security and compliance issues with these acquisitions alone.  The reason is that these solutions are only scratching the surface of the issues.  For one, most organizations use packaged applications and don’t have access to modify the source code to fix the issues they might find.  And lots of organizations take a long time to fix code errors even if they do have the capability to modify the code.  This argues for an independent mechanism to implement protections outside the code development / fix process. 

But the larger issue is scope – the data that organizations ultimately want to protect usually lives in a database and is accessed by a variety of mechanisms – applications are one, but direct access by internal users and other internal systems is another huge area of risk.  So addressing only one part of the application's relationship to this data is not enough.  In my opinion, addressing the whole application data system is ultimately the way to address the core application and data security issue.

Categories: Uncategorized Tags:

Fat Albert Marketing and the Monetizing of Vulnerability Research

July 8th, 2007 No comments

Over the last couple of years, we've seen the full spectrum of disclosure and "research" portals arrive on the scene; examples range from the Malware Distribution Project to 3Com/TippingPoint's Zero Day Initiative.  Both illustrate ways of monetizing the output of vulnerability research.

Good, bad or indifferent, one would be blind not to recognize that these services are changing the landscape of vulnerability research and pushing the limits which define "responsible disclosure."

It was only a matter of time until we saw the mainstream commercial emergence of the open vulnerability auction, which is just another play on the already contentious marketing efforts blurring the line between responsible disclosure for purely "altruistic" reasons and disclosure for commercial gain.

Enter Wabisabilabi, the eBay of Zero Day vulnerabilities.

This auction marketplace for vulnerabilities is marketed as a Swiss "…Laboratory & Marketplace Platform for Information Technology Security" which "…helps customers defend their databases, IT infrastructure, network, computers, applications, Internet offerings and access."

Despite a name which sounds like Mushmouth from Fat Albert created it (it's Japanese in origin, according to the website), I am intrigued by this concept and whether or not it will take off.

I am, however, a little unclear on how customers are able to purchase a vulnerability and then become more secure in defending their assets. 

A vulnerability without an exploit, some might suggest, is not a vulnerability at all — or at least it poses little temporal risk.  This is a fundamental debate over the definition of a Zero-Day vulnerability.

Further, a vulnerability that has a corresponding exploit but no countermeasure (patch, signature, etc.) is potentially just as useless to a customer who has no way of protecting against it.

If you can’t manufacture a countermeasure, even if you hoard the vulnerability and/or exploit, how is that protection?  I suggest it’s just delaying the inevitable.

I am wondering how long until we see the corresponding auctioning off of the exploit and/or countermeasure?  Perhaps by the same party that purchased the vulnerability in the first place?

Today, in the closed-loop subscription services offered by vendors who buy vulnerabilities, the subscribing customer gets the benefit of protection against a threat they may not even know they have.  But for those who can't or won't pony up the money for this sort of subscription (which is usually tied to owning a corresponding piece of hardware to enforce it), there exists a window of time between when the vulnerability is published and when this knowledge is made available universally.

Depending upon this delta, these services may be doing more harm than good to the greater populace.

In fact, Dave G. over at Matasano argues quite rightly that by publishing even the basic details of a vulnerability, "researchers" will be able to more efficiently locate the chunks of code wherein the vulnerability exists and release this information publicly — code that was previously not even known to have a vulnerability.

Each of these example vulnerability service offerings describes how the vulnerabilities are kept away from the "bad guys" by qualifying buyers' intentions based upon their ability to pay for access to the malicious code (we all know that criminals are poor, right?).  Here's what the Malware Distribution Project describes as the gatekeeper function:

Why Pay?

Easy; it keeps most, if not all of the malicious intent, outside the gates. While we understand that it may be frustrating to some people with the right intentions not allowed access to MD:Pro, you have to remember that there are a lot of people out there who want to get access to malware for malicious purposes. You can't be responsible on one hand, and give open access to everybody on the other, knowing that there will be people with expressly malicious intentions in that group.

ZDI suggests that by not reselling the vulnerabilities, but rather protecting their customers and ultimately releasing the vulnerability information to other vendors, they are giving back:

The Zero Day Initiative (ZDI) is unique in how the acquired vulnerability information is used. 3Com does not re-sell the vulnerability details or any exploit code. Instead, upon notifying the affected product vendor, 3Com provides its customers with zero day protection through its intrusion prevention technology. Furthermore, with the altruistic aim of helping to secure a broader user base, 3Com later provides this vulnerability information confidentially to security vendors (including competitors) who have a vulnerability protection or mitigation product.

As if you haven’t caught on yet, it’s all about the Benjamins. 

We've seen the arguments ensue regarding third-party patching.  I think this segment will heat up because in many cases it's going to be the fastest route to protecting oneself from these rapidly emerging vulnerabilities you didn't know you had.

/Hoff