
Redux: Liability of Security Vulnerability Research…The End is Nigh!

I posited the potential risks of vulnerability research in this blog entry here. Specifically, I asked about reverse engineering and the implications related to IP/trademark/copyright law, but the focus was ultimately on the liabilities of the researchers engaging in such activities.

Admittedly, I'm not a lawyer and my understanding of some of the legal and ethical dynamics is amateur at best, but what was very interesting to me was the breadth of the replies, both on- and off-line, to my request for opinions on the matter.

I was contacted by white, gray and blackhats regarding this meme and the results were divergent across legal, political and ideological lines.

KJH (Kelly Jackson Higgins — hey, Kel!) from Dark Reading recently posted an interesting collateral piece titled "Laws Threaten Security Researchers" in which she outlines the results of a CSI working group chartered to investigate and explore the implications that existing and pending legislation would have on vulnerability research and those who conduct it.  Folks like Jeremiah Grossman (who comments on this very story, here) and Billy Hoffman participate on this panel.

What is interesting is the contrast between how folks responded to my post and these comments based upon the CSI working group's findings:

In the report, some Web researchers say that even if they find a bug accidentally on a site, they are hesitant to disclose it to the Website’s owner for fear of prosecution. "This opinion grew stronger the more they learned during dialogue with working group members from the Department of Justice," the report says.

I believe we've all seen the results of some overly litigious responses from companies against whom disclosures related to their products or services have been released, for good or bad.

Ask someone like Dave Maynor if the pain is ultimately worth it.  Depending upon your disposition, your mileage may vary. 

That revelation is unnerving to Jeremiah Grossman, CTO and founder of WhiteHat Security and a member of the working group. "That means only people that are on the side of the consumer are being silenced for fear of prosecution," and not the bad guys.

"[Web] researchers are terrified about what they can and
can’t do, and whether they’ll face jail or fines," says Sara Peters,
CSI editor and author of the report. "Having the perspective of legal
people and law enforcement has been incredibly valuable. [And] this is
more complicated than we thought."

This sort of response didn't come across at all from the folks who responded to my blog, whether privately or publicly; most responses were just the opposite, stated with something of a sense of entitlement and immunity. I expect to query those same folks again on the topic.

Check this out:

The report discusses several methods of Web research, such as gathering information off-site about a Website or via social engineering; testing for cross-site scripting by sending HTML mail from the site to the researcher’s own Webmail account; purposely causing errors on the site; and conducting port scans and vulnerability scans.

Interestingly, DOJ representatives say that using just one of these methods might not be enough for a solid case against a [good or bad] hacker. It would take several of these activities, as well as evidence that the researcher tried to "cover his tracks," they say. And other factors — such as whether the researcher discloses a vulnerability, writes an exploit, or tries to sell the bug — may factor in as well, according to the report.
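As an aside, it's worth noting just how mundane some of these enumerated activities are. Below is a minimal sketch, purely my own illustration and not from the report, of the sort of plain TCP connect port scan the working group discusses; the host and port range are assumptions, and the point is simply that a handful of lines like these could constitute one of the "several activities" described:

```python
# Minimal, hypothetical sketch of a plain TCP connect() port scan --
# the kind of activity the CSI/DOJ discussion enumerates. Only run
# this against hosts you own or are explicitly authorized to test.
import socket

def connect_scan(host, ports, timeout=0.5):
    """Return the subset of ports that accepted a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex() returns 0 on success instead of raising
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # "localhost" and the port range are illustrative assumptions.
    print(connect_scan("localhost", range(20, 1025)))
```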

Full disclosure, to whom you disclose, and when you disclose could mean the difference between time in the spotlight and time in the pokey!

/Hoff

  1. June 10th, 2007 at 06:56 | #1

    I guess there are two general problems to consider.
    1. The language of the CFAA can be used (and has been used) to call criminal anything you do to a website that the owner didn't "authorize" you to do, regardless of whether they advertised said fact, had technical measures in place to protect against it, etc.
    2. It isn't clear under normal conditions what sorts of actions would or should *always* be considered unauthorized, even if we changed the definition in the CFAA.
    The situation is simple in the case of software you own and completely unclear in the case of online software. The sticky point is that the CFAA is poorly constructed with respect to intent and to means/mechanisms. I tried unsuccessfully just now to find a legal paper I'd read some time ago about ways of reforming the CFAA to require a technical control implementing the policy being enforced before an action can be called unauthorized.
    Even then, in this case you'd potentially be trying to show the ineffectiveness of that control, and so presumably you'd still be breaking the law.
    I guess this is just a subtle twist on the age-old debate about security research, hacking other people's systems, etc. The difference I suppose in the webappsec case is that in many types of testing there isn't anything that could be considered "harm." Or, at least in some cases the only damage is reputational for the software/site in question.
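    To make the "technical control" distinction concrete, here's a toy sketch (my own hypothetical, not from any of the legal papers) contrasting a policy that is merely stated in prose with one that is technically enforced; the endpoint names and the API key are illustrative assumptions:

```python
# Toy, hypothetical illustration of "stated policy" vs. "technical
# control": the /public path only announces terms of use, while the
# rest of the site actually refuses unauthenticated requests.
from http.server import BaseHTTPRequestHandler, HTTPServer

API_KEY = "secret"  # illustrative only; a real control would be stronger

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/public":
            # Policy stated only in prose, with no enforcement --
            # arguably no technical control under the reform proposal.
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"Terms of use: testing not authorized.\n")
        elif self.headers.get("X-Api-Key") == API_KEY:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"Authorized content.\n")
        else:
            # An enforced control: unauthorized requests are refused.
            self.send_response(403)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), Handler).serve_forever()
```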

  2. June 10th, 2007 at 07:05 | #2

    Ok, I did a little more digging and found a legal article worth reading to understand the trespass portions of the CFAA and some proposals for how to rethink the law: http://cyber.law.harvard.edu/courses/ltac98/final
    This isn't the article I was trying to find, but if I do find it I'll send a note and maybe post commentary on my site.
    In searching I did find Felten and Halderman's blog: http://www.freedom-to-tinker.com/
    It looks interesting and is on exactly this topic.

  3. June 10th, 2007 at 20:20 | #3

    Thanks, Andy!
    After all this time, this whole issue still comes down to the analog of a beat-cop's mentality/perspective of a burglar rattling the doorknobs of houses, regardless of whether he/she actually commits trespass.
    The act of rattling the knob begets an assumption of intent, and that intent is almost always presumed to be malign…
    It's going to be tough going for those researchers who choose, out of conscience (or otherwise), to report a vulnerability to a vendor who wishes not to have that happen.
    Balance that with the monetary incentives now offered to find these vulnerabilities and perhaps we'll see this sort of corporate bullet-proof shielding becoming the norm to protect the individual freelancers?
    Strange times they are a brewin'
    /hoff
