Archive for the ‘Vulnerability Research’ Category

Brood Parasitism: A Cuckoo Discussion Of Smart Device Insecurity By Way Of Robbing the NEST…

July 18th, 2012 No comments
(Photo: Eastern Phoebe (Sayornis phoebe) nest. Credit: Wikipedia)


I’m doing some research, driven by recent groundswells of some awesome security activity focused on so-called “smart meters.”  Specifically, I am interested in the emerging interconnectedness, consumerization and prevalence of more generic smart devices and home automation systems and what that means from a security, privacy and safety perspective.

I jokingly referred to something like this way back in 2007…who knew it would be more reality than fiction.

You may think this is interesting.  You may think this is overhyped and boorish.  You may even think this is cuckoo…

Speaking of which, back to the title of the blog…

Brood parasitism is defined as:

A method of reproduction seen in birds that involves the laying of eggs in the nests of other birds. The eggs are left under the parental care of the host parents. Brood parasitism may occur between species (interspecific) or within a species (intraspecific).

A great example is that of the female European Cuckoo, which lays an egg that mimics those of a host species.  After hatching, the young Cuckoo may actually dispose of the host eggs by shoving them out of the nest using an evolved physical adaptation: a depression in its back.  Once hatched, the forced-adoptive parent birds, tricked into thinking the hatchling is legitimate, care for the imposter, which may actually grow larger than they are, and then struggle to keep up with its care and feeding.

What does this have to do with “smart device” security?

I’m a huge fan of my NEST thermostat. 🙂 It’s a fantastic device which, using self-learning concepts, manages the heating and cooling of my house.  It does so by understanding how my family and I use the controls over time, combined with knowing when we’re at home and when we’re away.  It communicates with, and allows control over, my household temperature management over the Internet.  It also has an API <wink wink>.  It uses an ARM Cortex A8 CPU and has both Wifi and Zigbee radios <wink wink>

…so it knows how I use power.  It knows when I’m at home and when I’m not.  It allows for remote, out-of-band, Internet connectivity.  It uses my Wifi network to communicate.  It will, I am sure, one day intercommunicate with OTHER devices on my network (which, btw, is *loaded* with other devices already).

So back to my cuckoo analog of brood parasitism and the bounty of “robbing the NEST…”

I am working on researching the potential for subverting the control plane for my NEST (amongst other devices) and using that to gain access to information regarding occupancy, usage, etc.  I have some ideas for how this information might be (mis)used.
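To make the privacy exposure concrete, here's a minimal sketch of how leaked state could be turned into an occupancy schedule. Everything here is hypothetical for illustration (this is not NEST's API or telemetry format); it just assumes an attacker has obtained a time series of the device's away/home flag:

```python
from datetime import datetime

def occupancy_windows(samples):
    """Collapse (timestamp, away) samples into contiguous at-home windows.

    `samples` is a list of (ISO-8601 timestamp, away: bool) tuples,
    assumed sorted by time. Returns (start, end) pairs; end is None
    if the occupant was still home at the last sample.
    """
    windows = []
    start = None
    for ts, away in samples:
        t = datetime.fromisoformat(ts)
        if not away and start is None:
            start = t                    # occupant appears
        elif away and start is not None:
            windows.append((start, t))   # occupant leaves
            start = None
    if start is not None:
        windows.append((start, None))
    return windows

# Fabricated samples: home in the morning, away all day, home in the evening.
samples = [
    ("2012-07-18T07:00:00", False),
    ("2012-07-18T08:30:00", True),
    ("2012-07-18T17:45:00", False),
]
print(occupancy_windows(samples))
```

Two boolean samples per day are enough to reconstruct when a house sits empty, which is exactly the kind of information a burglar would pay for.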

Essentially, I’m calling the tool “Cuckoo” and its job is that of its nest-robbing namesake: to be fed illegitimately and outgrow its surrogate trust model to do bad things™.

This will dovetail with work that has been done in the classical “smart meter” space, such as the research presented at CCC in 2011 wherein the researchers were able to do things like identify what TV show someone was watching, and what capabilities like that mean for privacy and safety.

If anyone would like to join in on the fun, let me know.




Joanna Rutkowska: Making Invisible Things Visible…

March 25th, 2009 No comments

I’ve had my issues in the past with Joanna Rutkowska; the majority of which have had nothing to do with technical content of her work, but more along the lines of how it was marketed.  That was then, this is now.

Recently, the Invisible Things Lab team released some really interesting work regarding attacking the SMM.  What I’m really happy about is that Joanna and her team are really making an effort to communicate the relevance and impact of the team’s research and exploits in ways they weren’t doing before.  As much as I was critical previously, I must acknowledge and thank her for that, too.

I’m reasonably sure that Joanna couldn’t care less what I think, but I think the latest work is great and really does indicate the profoundly shaky foundation upon which we’ve built our infrastructure, and I am thankful for what this body of work points out.

Here’s a copy of Joanna’s latest blog titled “The Sky is Falling?” explaining such:

A few reporters asked me if our recent paper on SMM attacking via CPU cache poisoning means the sky is really falling now?

Interestingly, not many people seem to have noticed that this is the 3rd attack against SMM our team has found in the last 10 months. OMG 😮

But anyway, does the fact we can easily compromise the SMM today, and write SMM-based malware, does that mean the sky is falling for the average computer user?

No! The sky has actually fallen many years ago… Default users with admin privileges, monolithic kernels everywhere, most software unsigned and downloadable over plaintext HTTP — these are the main reasons we cannot trust our systems today. And those pathetic attempts to fix it, e.g. via restricting admin users on Vista, but still requiring full admin rights to install any piece of stupid software. Or selling people illusion of security via A/V programs, that cannot even protect themselves properly…

It’s also funny how so many people focus on solving the security problems by “Security by Correctness” or “Security by Obscurity” approaches — patches, patches, NX and ASLR — all good, but it is not gonna work as an ultimate protection (if it could, it would have worked out already).

On the other hand, there are some emerging technologies out there that could allow us to implement effective “Security by Isolation” approach. Such technologies as VT-x/AMD-V, VT-d/IOMMU or Intel TXT and TPM.

So we, at ITL, focus on analyzing those new technologies, even though almost nobody uses them today. Because those technologies could actually make the difference. Unlike A/V programs or Patch Tuesdays, those technologies can change the level of sophistication required for the attacker dramatically.

The attacks we focus on are important for those new technologies — e.g. today Intel TXT is pretty much useless without protection from SMM attacks. And currently there is no such protection, which sucks. SMM rootkits sound sexy, but, frankly, the bad guys are doing just fine using traditional kernel mode malware (due to the fact that A/V is not effective). Of course, SMM rootkits are just yet another annoyance for the traditional A/V programs, which is good, but they might not be the most important consequence of SMM attacks.

So, should the average Joe Dow care about our SMM attacks? Absolutely not!

I really appreciate the way this is being discussed; I think the ITL work is (now) moving the discussion forward by framing the issues instead of merely focusing on sensationalist exploits that whip people into a frenzy and cause them to worry about things they can’t control instead of the things they unfortunately choose not to 😉 

I very much believe that we can and will see advancements with the “security by isolation” approach; a lot of other bad habits and classes of problems can be eliminated (or at least significantly reduced) with the benefit of virtualization technology. 


News Flash: If You Don’t Follow Suggested Security Hardening Guidelines, Bad Things Can Happen…

February 26th, 2008 10 comments

The virtualization security (VirtSec) FUD meter is in overdrive this week…

Part I:
     So, I was at a conference a couple of weeks ago.  I sat in on a lot of talks.
     Some of them had demos.  What amazed me about these demos is that in
     many cases, in order for the attacks to work, it was disclosed that the
     attack target was configured by a monkey with all defaults enabled and no
     security controls in place.  "…of course, if you checked this one box, the
     exploit doesn’t work…" *gulp*

Part II:
     We’ve observed a lot of interesting PoC attack demonstrations, such as those at shows, being picked
     up by the press and covered in blogs and such.  Many of these stories simply ham it up for the
     sensational title.  Some of the artistic license and inaccuracies are just plain recockulous.
     That’s right.  There’s ridiculous, then there’s recockulous.

     Here’s a by-line from an article which details the PoC attack/code that Jon Oberheide used to show
     how, if you don’t follow VMware’s (and the CIS benchmark) recommendations for securing your
     VMotion network, you might be susceptible to interception of traffic and bad things since — as
     VMware clearly states — VMotion traffic (and machine state) is sent in the clear.

     This was demonstrated at BlackHat DC and here’s how the article portrayed it:

Jon Oberheide, a researcher and PhD candidate at the
University of Michigan, is releasing a proof-of-concept tool called
Xensploit that lets an attacker take over the VM’s hypervisor and
applications, and grab sensitive data from the live VMs.

     Really?  Take over the hypervisor, eh?  Hmmmm.  That sounds super-serious!  Oh, the humanity!

However, here’s how the VMTN blog rationally describes the situation in a measured response that does it better than I could:

Recently a researcher published a proof-of-concept called
Xensploit which allows an attacker to view or manipulate a VM undergoing live
migration (i.e. VMware’s VMotion) from one server to
another. This was shown to work with
both VMware’s and Xen’s version of live migration. Although impressive, this work by no means
represents any new security risk in the datacenter. It should be emphasized this proof-of-concept
does NOT “take over the hypervisor” nor present
unencrypted traffic as a vulnerability needing patching, as some news
reports incorrectly assert. Rather, it is a
reminder of how an already-compromised network, if left unchecked, could be
used to stage additional severe attacks in any environment, virtual or
physical. …

Encryption of all data-in-transit is certainly one well-understood mitigation
for man-in-the-middle attacks.  But the fact
that plenty of data flows unencrypted within the enterprise – indeed perhaps
the majority of data – suggests that there are other adequate mitigations. Unencrypted VMotion traffic is not a flaw,
but allowing VMotion to occur on a compromised network can be. So this is a good time to re-emphasize hardening best practices for VMware
Infrastructure and what benefit they serve in this scenario.

I’m going to give you one guess as to why this traffic is unencrypted…see if you can guess right in the comments.

Now, I will concede that this sort of thing represents a new risk in the datacenter if you happen to not pay attention to what you’re doing, but I think Jon’s PoC is a great example of why you should follow common sense security hardening recommendations and NOT BELIEVE EVERYTHING YOU READ.
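To illustrate the class of exposure (not Xensploit itself), here's a toy sketch of what an attacker on an already-compromised VMotion segment can do with a captured cleartext stream: simply sift the raw bytes for printable runs, the way strings(1) would. The "capture" below is fabricated for illustration:

```python
import string

# Printable ASCII we consider interesting, minus control whitespace.
PRINTABLE = set(string.printable) - set("\x0b\x0c\r\n\t")

def extract_strings(blob, min_len=4):
    """Pull runs of printable ASCII out of a raw byte capture."""
    out, run = [], []
    for b in blob:
        ch = chr(b)
        if ch in PRINTABLE:
            run.append(ch)
        else:
            if len(run) >= min_len:
                out.append("".join(run))
            run = []
    if len(run) >= min_len:
        out.append("".join(run))
    return out

# A fake "migration stream": binary noise with secrets sitting in the clear.
capture = b"\x00\x01root:s3cr3t\xff\xfe\x03session=abc123\x00"
print(extract_strings(capture))  # ['root:s3cr3t', 'session=abc123']
```

No hypervisor compromise required: if machine state crosses the wire unencrypted, credentials and session material in guest memory are readable by anyone positioned on that network, which is exactly why the hardening guides say to isolate the VMotion segment.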

If you don’t stand for something, you’ll fall for anything.


Grab the Popcorn: It’s the First 2008 “Ethical Security Marketing” (Oxymoron) Dust-Up…

January 5th, 2008 15 comments

Robert Hansen (RSnake / SecTheory) created a little challenge (pun intended) a couple of days ago titled "The Diminutive XSS worm replication contest":

The diminutive XSS worm replication contest
is a week long contest to get some good samples of the smallest amount
of code necessary for XSS worm propagation. I’m not interested in
payloads for this contest, but rather, the actual methods of
propagation themselves. We’ve seen the live worm code
and all of it is muddied by obfuscation, individual site issues, and
the payload itself. I’d rather think cleanly about the most efficient
method for propagation where every character matters.

Kurt Wismer (anti-virus rants blog) thinks this is a lousy idea:

yes, folks… robert hansen (aka rsnake), the founder and ceo of
sectheory, felt it would be a good idea to hold a contest to see who
could create the smallest xss worm
ok, so there’s no money changing hands this time, but that doesn’t mean
the winner isn’t getting rewarded – there are absolutely rewards to be
had for the winner of a contest like this and that’s a big problem
because lots of people want rewards and this kind of contest will make
people think about and create xss worms when they wouldn’t have

Here’s where Kurt diverges from simply highlighting the potential for misuse of the contest’s output.  He suggests that RSnake is being unethical and is encouraging this contest not for academic purposes, but rather to reap personal gain from it:

would you trust your security to a person who makes or made malware?
how about a person or company that intentionally motivates others to do
so? why do you suppose the anti-virus industry works so hard to fight
the conspiracy theories that suggest they are the cause of the viruses?
at the very least mr. hansen is playing fast and loose with the publics
trust and ultimately harming security in the process, but there’s a
more insidious angle too…

while the worms he’s soliciting from others are supposed to be merely
proof of concept, the fact of the matter is that proof of concept worms
can still cause problems (the recent orkut worm
was a proof of concept)… moreover, although the winner of the contest
doesn’t get any money, at the end of the day there will almost
certainly be a windfall for mr. hansen – after all, what do you suppose
happens when you’re one of the few experts on some relatively obscure
type of threat and that threat is artificially made more popular? well,
demand for your services goes up of course… this is precisely the
type of shady marketing model i described before
where the people who stand to gain the most out of a problem becoming
worse directly contribute to that problem becoming worse… it made
greg hoglund and jamie butler household names in security circles, and
it made john mcafee (pariah though he may be) a millionaire…

I think the following exchange in the comments section of the contest forum offers an interesting position from RSnake’s perspective:                   

Re: Diminutive XSS Worm Replication Contest
Posted by: Gareth Heyes (IP Logged)
Date: January 04, 2008 04:56PM

This contest is just asking for trouble 🙂

Are there any legal issues for creating such a worm in the uk?



Re: Diminutive XSS Worm Replication Contest
Posted by: rsnake (IP Logged)
Date: January 04, 2008 05:11PM


@Gareth Heyes – perhaps, but trouble is my middle name. So is danger.
Actually I have like 40 middle names it turns out. 😉 No, I’m not
worried, this is academic – it won’t work anywhere without modification
of variables, and has no payload. The goal is to understand worm
propagation and get to the underlying important pieces of code.

I’m not in the UK and am not a lawyer so I can’t comment on the
laws. I’m not suggesting anyone should try to weaponize the code (they
could already do that with the existing worm code if they wanted anyway).

So, we’ve got Wismer’s perspective and (indirectly) RSnake’s. 

What’s yours?  Do you think holding a contest to build a PoC for a worm is a good idea?  Do the benefits of research and understanding the potential attacks so one can defend against them outweigh the potential for malicious use?  Do you think there are, or will be, legal ramifications from these sorts of activities?
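For contrast with the propagation methods the contest solicits, the canonical site-side mitigation is output encoding: if user-supplied text is entity-encoded before being echoed back into a page, injected markup is displayed rather than executed, and the worm's propagation vector dies. A minimal Python sketch (the rendering function is invented for illustration):

```python
import html

def render_comment(user_input):
    """Render user-supplied text into a page fragment with entity
    encoding applied, so any markup in the input is inert."""
    return '<div class="comment">%s</div>' % html.escape(user_input)

hostile = "<script>propagate()</script>"
print(render_comment(hostile))
# prints: <div class="comment">&lt;script&gt;propagate()&lt;/script&gt;</div>
```

The worm samples in the contest all depend on some sink that skips this step; that's what makes the propagation methods worth studying defensively.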


Virtualization Threat Surface Expands: We Weren’t Kidding…

September 21st, 2007 No comments

First the Virtualization Security Public Service Announcement:

By now you’ve no doubt heard that Ryan Smith and Neel Mehta from IBM/ISS X-Force have discovered vulnerabilities in VMware’s DHCP implementation that could allow for "…specially crafted packets to gain system-level privileges" and allow an attacker to execute arbitrary code on the system with elevated privileges thereby gaining control of the system.   

Further, Dark Reading details that Rafal Wojtczuk (whose last name’s spelling is a vulnerability in and of itself!) from McAfee discovered the following vulnerability:

A vulnerability
that could allow a guest operating system user with administrative
privileges to cause memory corruption in a host process, and
potentially execute arbitrary code on the host. Another fix addresses a
denial-of-service vulnerability that could allow a guest operating
system to cause a host process to become unresponsive or crash.

…and yet another from the Goodfellas Security Research Team:

An additional update, according to the advisory, addresses a
security vulnerability that could allow a remote hacker to exploit the
library file IntraProcessLogging.dll to overwrite files in a system. It
also fixes a similar bug in the library file vielib.dll.

It is important to note that these vulnerabilities have been mitigated by VMware at the time of this announcement.  Further information regarding mitigation of all of these vulnerabilities can be found here.

You can find details regarding these vulnerabilities via the National Vulnerability Database here:

CVE-2007-0061 – The DHCP server in EMC VMware Workstation before 5.5.5 Build 56455 and 6.x before 6.0.1 Build 55017, Player before 1.0.5 Build 56455 and Player 2 before 2.0.1 Build 55017, ACE before 1.0.3 Build 54075 and ACE 2 before 2.0.1 Build 55017, and Server before 1.0.4 Build 56528 allows remote attackers to execute arbitrary code via a malformed packet that triggers "corrupt stack memory."

CVE-2007-0062 – Integer overflow in the DHCP server in EMC VMware Workstation before
5.5.5 Build 56455 and 6.x before 6.0.1 Build 55017, Player before 1.0.5
Build 56455 and Player 2 before 2.0.1 Build 55017, ACE before 1.0.3
Build 54075 and ACE 2 before 2.0.1 Build 55017, and Server before 1.0.4
Build 56528 allows remote attackers to execute arbitrary code via a
malformed DHCP packet that triggers a stack-based buffer overflow.

CVE-2007-0063 – Integer underflow in the DHCP server in EMC VMware Workstation before
5.5.5 Build 56455 and 6.x before 6.0.1 Build 55017, Player before 1.0.5
Build 56455 and Player 2 before 2.0.1 Build 55017, ACE before 1.0.3
Build 54075 and ACE 2 before 2.0.1 Build 55017, and Server before 1.0.4
Build 56528 allows remote attackers to execute arbitrary code via a
malformed DHCP packet that triggers a stack-based buffer overflow.

CVE-2007-4496 – Unspecified vulnerability in EMC
VMware Workstation before 5.5.5 Build 56455 and 6.x before 6.0.1 Build
55017, Player before 1.0.5 Build 56455 and Player 2 before 2.0.1 Build
55017, ACE before 1.0.3 Build 54075 and ACE 2 before 2.0.1 Build 55017,
and Server before 1.0.4 Build 56528 allows authenticated users with
administrative privileges on a guest operating system to corrupt memory
and possibly execute arbitrary code on the host operating system via
unspecified vectors.

CVE-2007-4155 – Absolute path traversal vulnerability in a certain ActiveX control in
vielib.dll in EMC VMware 6.0.0 allows remote attackers to execute
arbitrary local programs via a full pathname in the first two arguments
to the (1) CreateProcess or (2) CreateProcessEx method.
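The three DHCP entries above are variants of one pattern: doing arithmetic on an attacker-controlled length field before checking it against the buffer. Here's a hedged Python sketch (not VMware's code) of a DHCP option parser that makes the missing bounds check explicit; a vulnerable C parser omits exactly this comparison and copies past the end of a stack buffer:

```python
def parse_dhcp_options(data):
    """Walk a DHCP options buffer of (tag, length, value) records,
    rejecting any length field that would read past the buffer."""
    options, i = {}, 0
    while i < len(data):
        tag = data[i]
        if tag == 255:              # end option
            break
        if tag == 0:                # pad option
            i += 1
            continue
        if i + 1 >= len(data):
            raise ValueError("truncated option header")
        length = data[i + 1]
        if i + 2 + length > len(data):   # the check a vulnerable parser skips
            raise ValueError("option length %d overruns buffer" % length)
        options[tag] = data[i + 2:i + 2 + length]
        i += 2 + length
    return options

# Option 53 (DHCP message type), length 1, value 1, then end marker.
print(parse_dhcp_options(bytes([53, 1, 1, 255])))  # {53: b'\x01'}
```

Feed the same parser an option claiming 200 bytes in a 3-byte buffer and it refuses; the "specially crafted packets" in the advisories are precisely such over-claiming records.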

I am happy to see that VMware moved on these vulnerabilities (I do not have the timeframe of this disclosure and mitigation available.)  I am convinced that their security team and product managers truly take this sort of thing seriously.

However, this just goes to show that as virtualization platforms see further mainstream adoption, exploitable vulnerabilities will continue to follow as those who follow the money begin to pick up the scent.

This is another phrase that’s going to make me a victim of my own Captain Obvious Award, but it seems like we’ve been fighting this premise for too long now.  I recognize that this is not the first set of security vulnerabilities we’ve seen from VMware, but I’m going to highlight them for a reason.

It seems that, due to a lack of well-articulated vulnerabilities extending beyond theoretical assertions or PoCs, the sensationalism of research such as Blue Pill has desensitized folks to the emerging realities of virtualization platform attack surfaces.

I’ve blogged about this over the last year and a half, with the latest found here and an interview here.  It’s really just an awareness campaign.  One I’m more than willing to wage given the stakes.  If that makes me the noisy canary in the coal mine, so be it.

These very real examples are why I feel it’s ludicrous to take seriously any comments that suggest by generalization that virtualized environments are "more secure" by design; it’s software, just like anything else, and it’s going to be vulnerable.

I’m not trying to signal that the sky is falling, just the opposite.  I do, however, want to make sure we bring these issues to your attention.

Happy Patching!


Remotely Exploitable Dead Frog with Embedded Web Server – The “Anatomy” of a Zero-Day Threat Surface

July 25th, 2007 No comments

You think I make this stuff up, don’t you?

Listen, I’m a renaissance man and I look for analogs to the security space anywhere and everywhere I can find them.

I maintain that next to the iPhone, this is the biggest thing to hit the security world since David Maynor found Jesus (in a pool hall, no less.)

I believe InfoSec Sellout already has produced a zero-day for this using real worms.  No Apple products were harmed during the production of this webserver, but I am sad to announce that there is no potential for adding your own apps to the KermitOS…an SDK is available, however.

The frog’s dead.  Suspended in a liquid.  In a Jar.  Connected to the network via an Ethernet cable.  You can connect to the embedded webserver wired into its body parts.  When you do this, you control which one of its legs twitches.  pwned!

You can find the pertinent information here.

A Snort signature will be available shortly.


(Image and text below thanks to Boing Boing)

The Experiments in Galvanism frog floats in mineral oil, a webserver
installed in its guts, with wires into its muscle groups. You can
access the frog over the network and send it galvanic signals that get
it to kick its limbs.

Experiments in Galvanism is the culmination of studio and gallery
experiments in which a miniature computer is implanted into the dead
body of a frog specimen. Akin to Damien Hirst’s bodies in formaldehyde,
the frog is suspended in clear liquid contained in a glass cube, with a
blue ethernet cable leading into its splayed abdomen. The computer
stores a website that enables users to trigger physical movement in the
corpse: the resulting movement can be seen in gallery, and through a
live streaming webcamera.

    – Risa Horowitz

Garnet Hertz has implanted a miniature webserver in the body of a
frog specimen, which is suspended in a clear glass container of mineral
oil, an inert liquid that does not conduct electricity. The frog is
viewable on the Internet, and on the computer monitor across the room,
through a webcam placed on the wall of the gallery. Through an Ethernet
cable connected to the embedded webserver, remote viewers can trigger
movement in either the right or left leg of the frog, thereby updating
Luigi Galvani’s original 1786 experiment causing the legs of a dead
frog to twitch simply by touching muscles and nerves with metal.

Experiments in Galvanism is both a reference to the origins of
electricity, one of the earliest new media, and, through Galvani’s
discovery that bioelectric forces exist within living tissue, a nod to
what many theorists and practitioners consider to be the new new media:
bio(tech) art.

    – Sarah Cook and Steve Dietz

Fat Albert Marketing and the Monetizing of Vulnerability Research

July 8th, 2007 No comments

Over the last couple of years, we’ve seen the full spectrum of disclosure and "research" portals arrive on scene; examples stem from the Malware Distribution Project to 3Com/TippingPoint’s Zero Day Initiative.  Both of these examples illustrate ways of monetizing the output trade of vulnerability research.   

Good, bad or indifferent, one would be blind not to recognize that these services are changing the landscape of vulnerability research and pushing the limits which define "responsible disclosure."

It was only a matter of time until we saw the mainstream commercial emergence of the open vulnerability auction which is just another play on the already contentious marketing efforts blurring the lines between responsible disclosure for purely "altruistic" reasons versus commercial gain.

Enter Wabisabilabi, the eBay of Zero Day vulnerabilities.

This auction marketplace for vulnerabilities is marketed as a Swiss "…Laboratory & Marketplace Platform for Information Technology Security" which "…helps customers defend their databases, IT infrastructure, network, computers, applications, Internet offerings and access."

Despite a name which sounds like Mushmouth from Fat Albert created it (it’s Japanese in origin, according to the website) I am intrigued by this concept and whether or not it will take off.

I am, however, a little unclear on how customers are able to purchase a vulnerability and then become more secure in defending their assets. 

A vulnerability without an exploit, some might suggest, is not a vulnerability at all — or at least it poses little temporal risk.  This is a fundamental debate of the definition of a Zero-Day vulnerability. 

Further, a vulnerability that has a corresponding exploit but no countermeasure (patch, signature, etc.) is potentially just as useless to a customer who has no way of protecting themselves.

If you can’t manufacture a countermeasure, even if you hoard the vulnerability and/or exploit, how is that protection?  I suggest it’s just delaying the inevitable.

I am wondering how long until we see the corresponding auctioning off of the exploit and/or countermeasure?  Perhaps by the same party that purchased the vulnerability in the first place?

Today, in the closed-loop subscription services offered by vendors who buy vulnerabilities, the subscribing customer gets the benefit of protection against a threat they may not even know they have.  But for those who can’t or won’t pony up the money for this sort of subscription (which is usually tied to owning a corresponding piece of hardware to enforce it), there exists a window between when the vulnerability is published and when this knowledge is made available universally.

Depending upon this delta, these services may be doing more harm than good to the greater populace.

In fact, Dave G. over at Matasano argues quite rightly that by publishing even the basic details of a vulnerability, "researchers" will be able to more efficiently locate the chunks of code wherein the vulnerability exists and release that information publicly — code that was previously not known to even have a vulnerability.

Each of these example vulnerability service offerings describes how the vulnerabilities are kept away from the "bad guys" by qualifying their intentions based upon the ability to pay for access to the malicious code (we all know that criminals are poor, right?)  Here’s what the Malware Distribution Project describes as the gatekeeper function:

Why Pay?

Easy; it keeps most, if not all of the malicious intent, outside the
gates. While we understand that it may be frustrating to some people
with the right intentions not allowed access to MD:Pro, you have to
remember that there are a lot of people out there who want to get
access to malware for malicious purposes. You can’t be responsible on
one hand, and give open access to everybody on the other, knowing that
there will be people with expressly malicious intentions in that group.

ZDI suggests that by not reselling the vulnerabilities but rather protecting their customers and ultimately releasing the code to other vendors, they are giving back:

The Zero Day Initiative (ZDI) is unique in how the acquired
vulnerability information is used. 3Com does not re-sell the
vulnerability details or any exploit code. Instead, upon notifying the
affected product vendor, 3Com provides its customers with zero day
protection through its intrusion prevention technology. Furthermore,
with the altruistic aim of helping to secure a broader user base, 3Com
later provides this vulnerability information confidentially to
security vendors (including competitors) who have a vulnerability
protection or mitigation product.

As if you haven’t caught on yet, it’s all about the Benjamins. 

We’ve seen the arguments ensue regarding third party patching.  I think that this segment will heat up because in many cases it’s going to be the fastest route to protecting oneself from these rapidly emerging vulnerabilities you didn’t know you had.


Take5- Five Questions for Chris Wysopal, CTO Veracode

June 19th, 2007 No comments

In this first installment of Take5, I interview Chris Wysopal, the CTO of Veracode about his new company, secure coding, vulnerability research and the recent forays into application security by IBM and HP.

This entire interview was actually piped over a point-to-point TCP/IP connection using command-line redirection through netcat.  No packets were harmed during the making of this interview…

First, a little background on the victim, Chris Wysopal:

Chris Wysopal is
co-founder and CTO of Veracode. He has testified on Capitol Hill on the subjects of government
computer security and how vulnerabilities are discovered in software. Chris
co-authored the password auditing tool L0phtCrack, wrote the windows version of
netcat, and was a researcher at the security think tank, L0pht Heavy
Industries, which was acquired by @stake. He was VP of R&D at @stake
and later director of development at Symantec, where he led a
team developing binary static analysis technology.

He was influential in
the creation of responsible vulnerability disclosure guidelines and a founder of
the Organization for Internet Safety.  Chris wrote "The Art of
Software Security Testing: Identifying Security Flaws", published by Addison
Wesley and Symantec Press in December 2006. He earned his Bachelor of Science
degree in Computer and Systems Engineering from Rensselaer Polytechnic Institute.

1) You’re a founder of Veracode, which is described as the industry’s first provider of automated, on-demand application security solutions.  What sort of application services does Veracode provide?  Binary analysis?  Web apps?

Veracode currently offers binary static analysis of C/C++ applications
for Windows and Solaris and for Java applications.  This allows us to find
the classes of vulnerabilities that source code analysis tools can find but on
the entire codebase including the libraries which you probably don’t have source
code for. Our product roadmap includes support for C/C++ on Linux and C# on
.Net.  We will also be adding additional analysis techniques to our
flagship binary static analysis.
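As a crude illustration of one signal a binary static analyzer can use: flagging notoriously unsafe libc routines referenced in a binary's import data. Real analyzers like the one Chris describes disassemble the code and model dataflow across the whole codebase; this toy, with a fabricated import table, just pattern-matches bytes:

```python
# libc routines with no bounds checking; their presence is a classic
# (if crude) indicator worth deeper inspection.
RISKY = [b"strcpy", b"strcat", b"sprintf", b"gets"]

def scan_binary(blob):
    """Report which known-unsafe routine names appear in a binary blob.
    A toy stand-in for one heuristic of binary static analysis."""
    return sorted(name.decode() for name in RISKY if name in blob)

# A fake import table as it might appear inside a stripped binary.
fake_binary = b"\x7fELF...\x00strcpy\x00printf\x00gets\x00"
print(scan_binary(fake_binary))  # ['gets', 'strcpy']
```

The point of working on binaries rather than source is visible even here: the scan covers linked libraries and third-party objects for which no source is available.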
2) Is this a SaaS model?  How do you charge for your services?  Do you see software vendors using your services, or enterprises?

Customers upload their binaries to us and we deliver an analysis of their
security flaws via our web portal.  We charge by the megabyte of
code.  We have both software vendors and enterprises who write or outsource
their own custom software using our services.  We also have
enterprises who are purchasing software ask the software vendors to submit their
binaries to us for a 3rd party analysis.  They use this analysis as a
factor in their purchasing decision. It can lead to a "go/no go" decision, a
promise by the vendor to remediate the issues found, or a reduction in price to
compensate for the cost of additional controls or the cost of incident
response that insecure software necessitates.
3) I was a Qualys customer (a VA/VM SaaS company).  Qualys had to spend quite a bit of time convincing customers that the storage of their VA data was secure.  How does Veracode address a customer’s security concerns when uploading their binaries?

We are absolutely fanatical about the security of our customers’ data.  I look back at the days when I was a security consultant, when we had vulnerability data on laptops and corporate file shares, and I say, "what were we thinking?"  All customer data at Veracode is encrypted in transit and at rest with a unique key per application and customer.  Everyone at Veracode uses 2-factor authentication to log in and 2-factor is the default for customers.  Our data center is a SAS 70 Type II facility. All data access is logged so we know exactly who looked at what and when. As security people we are professionally paranoid, and I think it shows through in the system we built.  We also believe in 3rd-party verification, so we have had a top security boutique do a security review of our portal.
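As a rough illustration of what "a unique key per application and customer" can look like in practice, here is a minimal key-derivation sketch. The master secret, identifiers, and iteration count are hypothetical; this shows one common design (deriving distinct per-tenant keys from a single master secret), not a description of Veracode's actual system.

```python
import hashlib

# Hypothetical master secret; in a real system this would live in an
# HSM or key-management service, never in source code.
MASTER_SECRET = b"example-master-secret-kept-in-an-hsm"

def derive_key(customer_id: str, app_id: str) -> bytes:
    """Derive a distinct 256-bit key for each (customer, application) pair."""
    salt = ("%s/%s" % (customer_id, app_id)).encode()
    return hashlib.pbkdf2_hmac("sha256", MASTER_SECRET, salt, 100000)

k1 = derive_key("acme", "billing-app")
k2 = derive_key("acme", "hr-app")
assert k1 != k2 and len(k1) == 32  # distinct 32-byte keys per application
```

A compromise of one derived key then exposes only one customer's application data, which is the property the per-application/per-customer scheme described above is after.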
4) With IBM’s acquisition of Watchfire and today’s announcement that HP will buy SPI Dynamics, how does Veracode stand to play in this market of giants who will be competing to drive service revenues?

We have designed our solution from the ground up to have the Web 2.0 ease of use and experience, and we have the quality of analysis that I feel is the best in the market today.  An advantage is that Veracode is an independent assessment company that customers can trust not to play favorites with other software companies because of partnerships or alliances. Would Moody’s or Consumer Reports be trusted as a 3rd party if they were part of a big financial or technology conglomerate? We feel a 3rd party assessment is important in the security world.
5) Do you see the latest developments in vulnerability research, with the drive for initiatives pressuring developers to produce secure code out of the box for fear of exploit, or is it driving the activity to companies like yours?

I think the real driver for developers to produce secure code, and for developers and customers to seek code assessments, is the reality that the cost of insecure code goes up every day, and it’s adding to the operational risk of companies that use software.  People exploiting vulnerabilities are not going away and there is no way to police the internet of vulnerability information.  The only solution is for customers to demand more secure code, and proof of it, and for developers to deliver more secure code in the first place.

CSI Working Group on Web Security Research Law Concludes…Nothing

June 14th, 2007 1 comment

In May I blogged what I thought was an interesting question regarding the legality and liability of reverse engineering in security vulnerability research.  That discussion focused on the reverse engineering and vulnerability research of hardware and software products that were performed locally.

I continued with a follow-on discussion and extended the topic to include security vulnerability research from the web-based perspective in which I was interested to see how different the opinions on the legality and liability were from many of the top security researchers as it relates to the local versus remote vulnerability research and disclosure perspectives.

As part of the last post, I made reference to a working group organized by CSI whose focus and charter were to discuss web security research law.  This group is made up of some really smart people and I was looking forward to the conclusions reached by them on the topic and what might be done to potentially solve the obvious mounting problems associated with vulnerability research and disclosure.

The first report of this group was published yesterday. 

Unfortunately, the conclusion of the working group is an indictment of the sad state of affairs related to the security space and further underscores the sense of utter hopelessness many in the security community experience.

What the group concluded after 14 extremely interesting and well-written pages was absolutely nothing:

The meeting of minds that took place over the past two months advanced the group’s collective knowledge on the issue of Web security research law.  Yet if one assumed that the discussion advanced the group’s collective understanding of this issue, one might be mistaken.

Informative though the work was, it raised more questions than answers.  In the pursuit of clarity, we found, instead, turbidity.

Thus it follows, that there are many opportunities for further thought, further discussion, further research and further stirring up of murky depths.  In the short term, the working group has plans to pursue the following endeavors:

  • Creating disclosure policy guidelines — both to help site owners write disclosure policies, and for security researchers to understand them.
  • Creating guidelines for creating a "dummy" site.
  • Creating a more complete matrix of Web vulnerability research methods, written with the purpose of helping attorneys, lawmakers and law enforcement officers understand the varying degrees of invasiveness.

Jeremiah Grossman, a friend and one of the working group members summarized the report and concluded with the following: "…maybe within the next 3-5 years as more incidents like TJX occur, we’ll have both remedies."  Swell.

Please don’t misunderstand my cynical tone and disappointment as a reflection on any of the folks who participated in this working group — many of whom I know and respect.  It is, however, sadly another example of the hamster wheel of pain we’re all on when the best and brightest we have can’t draw meaningful conclusions against issues such as this.

I was really hoping we’d be further down the path towards getting our arms around the problem so we could present meaningful solutions that would make a dent in the space.  Unfortunately, I think where we are is the collective shoulder-shrug shrine of cynicism perched perilously on the cliff overlooking the chasm of despair which drops off into the trough of disillusionment.

Gartner needs a magic quadrant for hopelessness. <sigh>  I feel better now, thanks.


Redux: Liability of Security Vulnerability Research…The End is Nigh!

June 10th, 2007 3 comments

I posited the potential risks of vulnerability research in this blog entry here.   Specifically I asked about reverse engineering and implications related to IP law/trademark/copyright, but the focus was ultimately on the liabilities of the researchers engaging in such activities.

Admittedly I’m not a lawyer and my understanding of some of the legal and ethical dynamics is amateur at best, but what was very interesting to me was the breadth of the replies from both the on- and off-line responses to my request for opinion on the matter.

I was contacted by white, gray and blackhats regarding this meme and the results were divergent across legal, political and ideological lines.

KJH (Kelly Jackson Higgins — hey, Kel!) from Dark Reading recently posted an interesting collateral piece titled "Laws Threaten Security Researchers" in which she outlines the results of a CSI working group chartered to investigate and explore the implications that existing and pending legislation would have on vulnerability research and those who conduct it.  Folks like Jeremiah Grossman (who comments on this very story, here) and Billy Hoffman participate on this panel.

What is interesting is the contrast in commentary between how folks responded to my post versus these comments based upon the CSI working group’s findings:

In the report, some Web researchers say that even if they find a bug accidentally on a site, they are hesitant to disclose it to the Website’s owner for fear of prosecution. "This opinion grew stronger the more they learned during dialogue with working group members from the Department of Justice," the report says.

I believe we’ve all seen the results of some overly-litigious responses on behalf of companies against whom disclosures related to their products or services have been released — for good or bad.

Ask someone like Dave Maynor if the pain is ultimately worth it.  Depending upon your disposition, your mileage may vary. 

That revelation is unnerving to Jeremiah Grossman, CTO and founder of WhiteHat Security and a member of the working group. "That means only people that are on the side of the consumer are being silenced for fear of prosecution," and not the bad guys.

"[Web] researchers are terrified about what they can and can’t do, and whether they’ll face jail or fines," says Sara Peters, CSI editor and author of the report. "Having the perspective of legal people and law enforcement has been incredibly valuable. [And] this is more complicated than we thought."

This sort of response didn’t come across at all from the folks who responded to my blog, whether privately or publicly; most responses were just the opposite, stated with somewhat of a sense of entitlement and immunity.  I expect to query those same folks again on the topic.

Check this out:

The report discusses several methods of Web research, such as gathering information off-site about a Website or via social engineering; testing for cross-site scripting by sending HTML mail from the site to the researcher’s own Webmail account; purposely causing errors on the site; and conducting port scans and vulnerability scans.

Interestingly, DOJ representatives say that using just one of these methods might not be enough for a solid case against a [good or bad] hacker. It would take several of these activities, as well as evidence that the researcher tried to "cover his tracks," they say. And other factors — such as whether the researcher discloses a vulnerability, writes an exploit, or tries to sell the bug — may factor in as well, according to the report.
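For readers unfamiliar with the mechanics, "testing for cross-site scripting" at its simplest means checking whether input you supply comes back in a page without HTML-escaping. The sketch below runs entirely offline against made-up page strings; given the legal exposure discussed above, a check like this should only ever be pointed at sites you own or have explicit written permission to test.

```python
import html

# A distinctive probe string containing characters that should be escaped.
PROBE = '<xss-probe-31337>'

def is_reflected_unescaped(probe: str, page_html: str) -> bool:
    """True if the probe appears verbatim (i.e. unescaped) in the page."""
    return probe in page_html and probe != html.escape(probe)

# A site that escapes user input vs. one that reflects it verbatim.
safe_page = "<p>You searched for: " + html.escape(PROBE) + "</p>"
vuln_page = "<p>You searched for: " + PROBE + "</p>"
print(is_reflected_unescaped(PROBE, safe_page))  # False
print(is_reflected_unescaped(PROBE, vuln_page))  # True
```

The low invasiveness of such a probe, versus, say, port scanning or deliberately causing errors, is exactly the kind of distinction the working group wants the matrix above to convey to attorneys and law enforcement.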

Full disclosure and to whom you disclose it and when could mean the difference between time in the spotlight or time in the pokey!