Rothman Speaks!

June 15th, 2007 4 comments

…You need Flash to see this…

Firefox also seems to have intermittent trouble rendering this.

If you only see 1/2 of Mr. Rothman, right-click and select "show all" and then click on the little purple play icon.  IE has no issue.

Turn the volume WAY up…I had to whisper this @ work.

/Hoff

Categories: Uncategorized Tags:

McAfee’s Bolin suggests Virtual “Sentinel” Security Model for Uncompromised Security

June 15th, 2007 2 comments

Christopher Bolin, McAfee’s EVP & CTO, blogged an interesting perspective on utilizing virtualization technology to instantiate security functionality in an accompanying VM on a host to protect one or more VMs on the same host.  This approach differs from the standard approach of placing host-based security controls on each VM or the intra-VS IPS models from companies such as Reflex (blogged about that here.)

I want to flesh out some of these concepts with a little more meat attached.

He defines the concept of running security software alongside the operating system it is protecting as "the sentinel":

In this approach, the security software itself resides in its own virtual machine outside and parallel to the system it is meant to protect, which could be another virtual machine running an operating system such as Windows. This enables the security technology to look omnisciently into the subject OS and its operation and take appropriate action when malware or anomalous behavior is detected.

Understood so far…with some caveats, below.

The security software would run in an uncompromised environment monitoring in real-time, and could avoid being disabled, detected or deceived (or make the bad guys work a lot harder.)

While this supposedly uncompromised/uncompromisable OS could exist, how are you going to ensure that the underlying "routing" traffic flow control actually forces the traffic through the Sentinel VM in the first place? If the house of cards rests on this design element, we’ll be waiting a while…and adding latency.  See below.

This kind of security is not necessarily a one-to-one relationship between sentinel and OSs. One physical machine can run several virtual machines, so one virtual sentinel could watch and service many virtual machines.

I think this is a potentially valid and interesting alternative to deploying more and more host-based security products (which seems odd coming from McAfee) or additional physical appliances, but there are a couple of issues with this premise, some of which Bolin points out, others I’ll focus on here:

  1. Unlike other applications which run in a VM and just require a TCP/IP stack, security applications are extremely topology sensitive.  The ability to integrate sentinels in a VM environment with other applications/VMs at layer 2 is extremely difficult, especially if these security applications are to act "in-line" (see the toy sketch after this list). 

    Virtualizing transport while maintaining topology context is difficult, and when you need to then virtualize the policies based upon this topology, it gets worse.  Blade servers have this problem; they have integrated basic switch/load balancing modules, but implementing policy-driven "serialization" and "parallelization" (which is what we call it @ Crossbeam) is very, very hard.

  2. The notion that the sentinel can "…look omnisciently into the subject OS and its operation and take appropriate action when malware or anomalous behavior is detected" from a network perspective is confusing.  If you’re outside the VM/Hypervisor, I don’t understand the feasibility of this approach.  This is where Blue Lane’s VirtualShield ESX plug-in kicks ass — it plugs into the Hypervisor and protects not only directed traffic to the VM but also intra-VM traffic with behavioral detection, not just signatures.

  3. Resource allocation of the sentinel security control as a VM poses a threat vector inasmuch as one could overwhelm/DoS the Sentinel VM and the security/availability of the entire system could be compromised; the controls protecting the VMs are competing for the same virtualized resources that the VMs they protect are after.
  4. As Bolin rightfully suggests, a vulnerability in the VM/VMM/Chipsets could introduce a serious set of modeling problems.
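
To make the in-line steering concern in point #1 concrete (the toy sketch referenced above), here is a minimal, purely illustrative Python model of a virtual switch that "serializes" every frame through a sentinel before delivery. All of the names here (Sentinel, VirtualSwitch, inspect) are hypothetical and correspond to no vendor's API; the point is simply that the whole model collapses if any forwarding path can reach a protected VM without crossing the inspection hop.

    # Toy model only: "serialization" of VM traffic through a sentinel VM.
    # Class and method names are hypothetical, not any hypervisor or vendor API.

    class Sentinel:
        """Stand-in for the security VM that inspects traffic for its peers."""

        def inspect(self, frame: dict) -> bool:
            # Placeholder policy: drop anything the (imaginary) engine flags.
            return not frame.get("malicious", False)

    class VirtualSwitch:
        """Toy layer-2 forwarder that forces every frame through the sentinel."""

        def __init__(self, sentinel: Sentinel):
            self.sentinel = sentinel
            self.vms = {}

        def attach(self, name: str, handler) -> None:
            self.vms[name] = handler

        def forward(self, frame: dict) -> str:
            # The whole design rests on this step: if any path reaches a VM
            # without passing through the sentinel, the protection is moot.
            if not self.sentinel.inspect(frame):
                return "dropped by sentinel"
            return self.vms[frame["dst"]](frame)

    if __name__ == "__main__":
        vswitch = VirtualSwitch(Sentinel())
        vswitch.attach("web-vm", lambda f: f"delivered to web-vm: {f['payload']!r}")
        print(vswitch.forward({"dst": "web-vm", "payload": "GET /", "malicious": False}))
        print(vswitch.forward({"dst": "web-vm", "payload": "sploit", "malicious": True}))

What the sketch glosses over is exactly the hard part Bolin's model inherits: doing this steering at layer 2, in-line, while preserving topology and policy context.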

I maintain that securing virtualization by virtualizing security is nascent at best, but as Bolin rightfully demonstrates, there are many innovative approaches being discussed to address these new technologies.

/Hoff

CSI Working Group on Web Security Research Law Concludes…Nothing

June 14th, 2007 1 comment

In May I blogged what I thought was an interesting question regarding the legality and liability of reverse engineering in security vulnerability research.  That discussion focused on the reverse engineering and vulnerability research of hardware and software products that were performed locally.

I continued with a follow-on discussion that extended the topic to include security vulnerability research from the web-based perspective; I was interested to see how differently many of the top security researchers viewed the legality and liability of local versus remote vulnerability research and disclosure.

As part of the last post, I made reference to a working group organized by CSI whose focus and charter were to discuss web security research law.  This group is made up of some really smart people and I was looking forward to the conclusions reached by them on the topic and what might be done to potentially solve the obvious mounting problems associated with vulnerability research and disclosure.

The first report of this group was published yesterday. 

Unfortunately, the conclusions of the working group are an indictment of the sad state of affairs related to the security space and further underscore the sense of utter hopelessness many in the security community experience.

What the group concluded after 14 extremely interesting and well-written pages was absolutely nothing:

The meeting of minds that took place over the past two months advanced the group’s collective knowledge on the issue of Web security research law.  Yet if one assumed that the discussion advanced the group’s collective understanding of this issue, one might be mistaken.

Informative though the work was, it raised more questions than answers.  In the pursuit of clarity, we found, instead, turbidity.

Thus it follows, that there are many opportunities for further thought, further discussion, further research and further stirring up of murky depths.  In the short term, the working group has plans to pursue the following endeavors:

  • Creating disclosure policy guidelines — both to help site owners write disclosure policies, and for security researchers to understand them.
  • Creating guidelines for creating a "dummy" site.
  • Creating a more complete matrix of Web vulnerability research methods, written with the purpose of helping attorneys, lawmakers and law enforcement officers understand the varying degrees of invasiveness.

Jeremiah Grossman, a friend and one of the working group members summarized the report and concluded with the following: "…maybe within the next 3-5 years as more incidents like TJX occur, we’ll have both remedies."  Swell.

Please don’t misunderstand my cynical tone and disappointment as a reflection on any of the folks who participated in this working group — many of whom I know and respect.  It is, however, sadly another example of the hamster wheel of pain we’re all on when the best and brightest we have can’t draw meaningful conclusions against issues such as this.

I was really hoping we’d be further down the path towards getting our arms around the problem so we could present meaningful solutions that would make a dent in the space.  Unfortunately, I think where we are is the collective shoulder shrug shrine of cynicism perched perilously on the cliff overlooking the chasm of despair which drops off into the trough of disillusionment.

Gartner needs a magic quadrant for hopelessness. <sigh>  I feel better now, thanks.

/Hoff

Evan Kaplan and Co. (Aventail) Take the Next Step

June 12th, 2007 2 comments

So Aventail’s being acquired by SonicWall?  I wish Evan Kaplan and his team well and trust that SonicWall will do their best to integrate the best of Aventail’s technology into their own.  It’s interesting that this news pops up today, because I was just thinking about Aventail’s CEO as part of a retrospective of security over the last 10+ years.

I’ve always admired Evan Kaplan’s messaging from afar, and a couple of months ago I got to speak with him for an hour or so.  For someone who has put his stake in the ground for the last 11 years as a leader in the SSL VPN market, you might be surprised to know that Evan’s perspective on the world of networking and security isn’t limited to "tunnel vision" as one might expect.

One of my favorite examples of Evan’s beliefs is this article in Network World back in 2005 that resonated so very much with me then and still does today.  The title of the piece is "Smart Networks are Not a Good Investment" and was a "face off" feature between Evan and Cisco’s Rob Redford.

Evan’s point here — and what resonates at the core of what I believe should happen to security — is that the "network" ought to be separated into two strata, the "plumbing" (routers, switches, etc.) and intelligent "service layers" (security being one of them.)

Evan calls these layers "connectivity" and "intelligence."

The plumbing should be fast, resilient, reliable, and robust, providing the connectivity; the service layers should be agile, open, interoperable, flexible and focused on delivering service as a core competency.

Networking vendors who want to leverage the footprint they already have in port density and extend their stranglehold with a single-vendor version of the truth obviously disagree with this approach.  So do those who ultimately suggest that "good enough" is good enough.

Evan bangs the drum:

Network intelligence as promoted by the large network vendors is the Star Wars defense system of our time – monolithic, vulnerable and inherently unreliable. Proponents of smart networks want to extend their hegemony by incorporating application performance and security into a unified, super-intelligent infrastructure. They want to integrate everything into the network and embed security into every node. In theory, you would then have centralized control and strong perimeter defense.

Yup.  As I blogged recently, "Network Intelligence is an Oxymoron."  The port puppets will have you believe that you can put all this intelligence in the routers and switches and solve all the problems these platforms were never designed to solve whilst simultaneously scaling performance and features against skyrocketing throughput requirements, extreme latency thresholds, emerging technologies and an avalanche of compounding threats and vulnerabilities…all from one vendor, of course.

While on the surface this sounds reasonable, a deeper look reveals that this kind of approach presents significant risk for users and service providers. It runs counter to the clear trends in network communication, such as today’s radical growth in broadband and wireless networks, and increased virtualization of corporate networks through use of public infrastructure. As a result of these trends, much network traffic is accessing corporate data centers from public networks rather than the private LAN, and the boundaries of the enterprise are expanding. Companies must grow by embracing these trends and fully leveraging public infrastructure and the power of the Internet.

Exactly.  Look at BT’s 21CN network architecture as a clear and unequivocal demonstration of this strategy; a fantastic high-performance, resilient and reliable foundational transport coupled with an open, agile, flexible and equally high-performance and scalable security service layer.  If BT is investing 18 billion pounds of its own money in a strategy like this and doesn’t reckon it can rely on "embedded" security, why would you?

Network vendors are right in recognizing and trying to address the two fundamental challenges of network communications: application performance and security. However, they are wrong in believing the best way to address these concerns is to integrate application performance and security into the underlying network.

The alternative is to avoid building increasing intelligence into the physical network, which I call the connectivity plane, and building it instead into a higher-level plane I call the intelligence plane.

The connectivity plane covers end-to-end network connectivity in its broadest sense, leveraging IPv4 and eventually IPv6. This plane’s characteristics are packet-level performance and high availability. It is inherently insecure but incredibly resilient. The connectivity plane should be kept highly controlled and standardized, because it is heavy to manage and expensive to build and update. It should also be kept dumb, with change happening slowly.

He’s on the money here again.  Let the network evolve at its pace using standards-based technology and allow innovation to deliver service at the higher levels.  The network evolves much more slowly and at a pace that demands stability.  The experientially-focused intelligence layer needs to be much more nimble and agile, taking advantage of the opportunities and the requirements to respond to rapidly emerging technologies and threats/vulnerabilities.

Look at how quickly solutions like DLP and NAC have stormed onto the market.  If we had to wait for Cisco to get their butt in gear and deliver solutions that actually work as an embedded function within the "network," we’d be out of business by now.

I don’t have the time to write it again, but the security implications of having the fox guarding the henhouse by embedding security into the "fabric" are scary.  Just look at the number of security vulnerabilities Cisco has had in their routing, switching, and security products in the last 6 months.  Guess what happens when they’re all one?   I sum it up here and here as examples.

Conversely, the intelligence plane is application centric and policy driven, and is an overlay to the connectivity plane. The intelligence plane is where you build relationships, security and policy, because it is flexible and cost effective. This plane is network independent, multi-vendor and adaptive, delivering applications and performance across a variety of environments, systems, users and devices. The intelligence plane allows you to extend the enterprise boundary using readily available public infrastructure. Many service and product vendors offer products that address the core issues of security and performance on the intelligence plane.

Connectivity vendors should focus their efforts on building faster, easier to manage and more reliable networks. Smart networks are good for vendors, not customers.

Wiser words have not been spoken…except by me agreeing with them, of course 😉  Not too shabby for an SSL VPN vendor way back in 2005.

Evan, I do hope you won’t disappear and will continue to be an outspoken advocate of flushing the plumbing…best of luck to you and your team as you integrate into SonicWall.

/Hoff

Congratulations to Richard Bejtlich – Good for him, bad for us…

June 11th, 2007 2 comments

Congratulations go to Richard Bejtlich as he accepts a position as GE’s Director of Incident Response.  It’s a bittersweet moment: while GE gains an amazing new employee, the public loses one of our best champions, a fantastic teacher, a great wealth of monitoring Tao knowledge and a prolific blogger.

While I am sure Richard won’t stop all elements of what he does, he’s going to have his hands full.  I always privately wondered how he maintained that schedule.  Mine is crazy, his is pure lunacy.

I am grateful that I’m scheduled to attend one of Richard’s last classes — the TCP/IP Weapons School @ Blackhat.  I’ve attended Richard’s classes before and they are excellent.  Get ’em while you still can.

Again, congratulations, Richard.

/Hoff

Categories: Uncategorized Tags:

A Poke in the eEye – First a DoS with Ross and now…?

June 11th, 2007 No comments

The drama continues?

Mitchell blogged last week about the release of the new eEye Preview Service, a "three-tiered security intelligence program" from eEye, and what he describes as an apparent change in focus for the company.

With Ross Brown’s departure (and subsequent blog p0wnership) and Mitchell’s interesting observation of what appears to be a company migrating from a product to a service orientation with continued focus on research, my eyebrows (sorry) went up today when I perused one of my syndicated intelligence gathering sites (OK, it’s a rumor mill, but it’s surprisingly reliable) and found this entry:

Is the end in sight for eEye?
Rumor has it more layoffs went down at eEye this week. Rumor has it the company fired most of their senior developers, most of the QA staff and demoted their VP of Engineering.
When: 6/8/2007
Company: eEye Digital Security
Severity: 70
Points: 170

Look, this is from f’d Company, so before we start singing hymnals, let’s take a breath and digest a few notes.

This was posted on the 8th. I can’t reasonably tell whether or not this round of RIF is the same one to which the InfoSecSellout(s) [they appear to have admitted — accidentally or not — in a post to be plural] referred on their blog.  I’d hate not to reference them as even a potential source, lest I be skewered in the court of Blogdom…

This appears to be the second round of layoffs in the last few months or so for eEye, and it indicates interesting changes in the VA landscape as well as the overall move to SaaS (Security as a Service) for those companies who are looking for differentiation in a crowded room.

Of course, this could also be a realignment of the organization to focus on the new service offerings, so please take the prescribed dose of NaCl with all of this.  We all know how accurate the Internet is…It would be interesting to reconcile this against any substantiated confirmations of this activity.

I hate to see any company thrash.  If the rumors are true, I wish anyone that might have been let go a soft landing and a quick recovery.

This story leads into a forthcoming post on SaaS (Security as a Service.)

/Hoff

Categories: Uncategorized Tags:

Redux: Liability of Security Vulnerability Research…The End is Nigh!

June 10th, 2007 3 comments

I posited the potential risks of vulnerability research in this blog entry here.   Specifically I asked about reverse engineering and implications related to IP law/trademark/copyright, but the focus was ultimately on the liabilities of the researchers engaging in such activities.

Admittedly I’m not a lawyer and my understanding of some of the legal and ethical dynamics is amateur at best, but what was very interesting to me was the breadth of the replies from both the on- and off-line responses to my request for opinion on the matter.

I was contacted by white, gray and blackhats regarding this meme and the results were divergent across legal, political and ideological lines.

KJH (Kelly Jackson Higgins — hey, Kel!) from Dark Reading recently posted an interesting collateral piece titled "Laws Threaten Security Researchers" in which she outlines the results of a CSI working group chartered to investigate and explore the implications that existing and pending legislation would have on vulnerability research and those who conduct it.  Folks like Jeremiah Grossman (who comments on this very story, here) and Billy Hoffman participate on this panel.

What is interesting is the contrast in commentary between how folks responded to my post versus these comments based upon the CSI working group’s findings:

In the report, some Web researchers say that even if they find a bug accidentally on a site, they are hesitant to disclose it to the Website’s owner for fear of prosecution. "This opinion grew stronger the more they learned during dialogue with working group members from the Department of Justice," the report says.

I believe we’ve all seen the results of some overly-litigious responses on behalf of companies against whom disclosures related to their products or services have been released — for good or bad.

Ask someone like Dave Maynor if the pain is ultimately worth it.  Depending upon your disposition, your mileage may vary. 

That revelation is unnerving to Jeremiah Grossman, CTO and founder of WhiteHat Security and a member of the working group. "That means only people that are on the side of the consumer are being silenced for fear of prosecution," and not the bad guys.

"[Web] researchers are terrified about what they can and
can’t do, and whether they’ll face jail or fines," says Sara Peters,
CSI editor and author of the report. "Having the perspective of legal
people and law enforcement has been incredibly valuable. [And] this is
more complicated than we thought."

This sort of response didn’t come across that way at all from folks who both privately or publicly responded to my blog; most responses were just the opposite, stated with somewhat of a sense of entitlement and immunity.   I expect to query those same folks again on the topic. 

Check this out:

The report discusses several methods of Web research, such as gathering information off-site about a Website or via social engineering; testing for cross-site scripting by sending HTML mail from the site to the researcher’s own Webmail account; purposely causing errors on the site; and conducting port scans and vulnerability scans.

Interestingly, DOJ representatives say that using just one of these methods might not be enough for a solid case against a [good or bad] hacker. It would take several of these activities, as well as evidence that the researcher tried to "cover his tracks," they say. And other factors — such as whether the researcher discloses a vulnerability, writes an exploit, or tries to sell the bug — may factor in as well, according to the report.

Full disclosure and to whom you disclose it and when could mean the difference between time in the spotlight or time in the pokey!

/Hoff

Gartner Solutions Expo a Good Gauge of the Security Industry?

June 9th, 2007 No comments

Mark Wood from nCircle blogged about his recent experience at the Gartner IT Security Summit in D.C.  Alan Shimel commented on Mark’s summary and both of them make an interesting argument about how Gartner operates as the overall gauge of the security industry.  Given that I was  also there, I thought I’d add some color to Mark’s commentary:

In 2006, there were two types of solutions that seemed to dominate the floor: network admission control and data leakage (with the old reliable identity and access management coming in a strong third). This year, the NAC vendors were almost all gone and there were many fewer data leakage vendors than I had expected. Nor was there any one type of solution that really seemed to dominate.

…that’s probably because both of those "markets" are becoming "features" (see here and here), and given how Gartner proselytizes to their clients, features and those who sell them need to spend their hype-budgets wisely.  Depending upon where one is on the hype cycle (and what I say below,) you’ll see fewer vendors participating when the $ per lead isn’t stellar.  Lots and lots of vendors in a single quadrant makes it difficult to differentiate.

The question is: What does this mean? On the one hand, I continue to be staggered by the number of new vendors in the security space. They seem to be like ants in the kitchen — acquire one and two more crawl out of the cracks in the window sill. It’s madness, I tell you! There were a good half a dozen names I had never seen before and I wonder if the number of companies that continue to pop up is good or bad for our industry. It’s certainly good that technological innovation continues, but I wonder about the financial status of these companies as funding for security startups continues to be more difficult to get. There sure is a lot of money that’s been poured into security and I’m not sure how investors are going to get it back.

Without waxing philosophical about the subconscious of the security market, let me offer a far more simple and unfortunate explanation:

Booth space at the Gartner show is some of the most expensive, if not the most expensive, on the planet when you consider how absolutely miserable the scheduling of the expo hours is for the vendors.  They open the vendor expo at lunch time and during track sessions, when everyone is usually eating, checking email, or attending the conference sessions!  It’s a purely economic issue, not some great temperature taking of the industry.

I suppose one could argue that if the industry were flush with cash, everyone showing up here would indicate overall "health," but I really do think it’s not such a complex interdependency.  Gartner is a great place for a booth if you’re one of those giant, hamster wheel confab "We Do Everything" vendors like Verisign, IBM or BT.

I spoke to about 5 vendors who had people at the show but no booth.  Why?  Because they would get sucked dry on booth costs and given the exposure (unless you’re a major sponsor with speaking opportunities or a party sponsor) it’s just not worth it.  I spoke with Ted Julian prior to his guest Matasano blog summary, and we looked at each other shaking our heads.

While the quality of the folks visiting are usually decision makers, the foot traffic is limited in the highly-compressed windows of availability.  The thing you really want to do is get some face time with the analysts and key customers and stick and move. 

The best bang for the exposure buck @ Gartner is the party at the end of the second day.  Crossbeam was a platinum sponsor this year; we had a booth (facing a wall in the back,) had two speaking sessions and sponsored a party.  The booth position and visibility sucked for us (and others) while the party had folks lined out the door for food, booze and (believe it or not) temporary tattoos with grown men and women stripping off clothing to get inked.  Even Stiennon showed up to our party! 😉

On the other hand, it seemed that there was much less hysteria than in years past. No "we-can-make-every-one-of-your-compliance-problems-vanish-overnight" or "confidential-data-is-seeping-through-the-cracks-in-your-network-while-you-sleep-Run!-Run!" pitches this year. There seems to be more maturity in how the industry is addressing its buying audience and I find this fairly encouraging. Despite the number of companies, maybe the industry is slowly growing up after all. It’ll be interesting to see how this plays out.

Well, given the "Security 3.0" theme, which apparently trends overall toward mitigating and managing "risk," a bunch of technology box-sprinkling hype doesn’t work well in that arena.  I would also ask whether this really does represent maturity or just the "natural" byproduct of survival of the fittest — or of those with the biggest marketing budgets?  Maybe it’s the same thing?

/Hoff

Alright Kids…It’s a Security Throughput Math Test! Step Right Up!

June 9th, 2007 6 comments

(Figure: test rig with an Avalanche sender and a Reflector receiver on either side of the device under test.)
I’ve got a little quiz for you.  I’ve asked this question 30 times over the last week and received an interesting set of answers.   One set of answers represents "real world" numbers; the other is a set of "marketing" numbers.

Here’s the deal:

Take an appliance of your choice (let’s say a security appliance like an IPS) that has 10 x 1Gb/s Ethernet interfaces.

Connect five of those interfaces to the test rig that generates traffic and connect the remaining five interfaces to the receiver.

Let’s say that you send 5 Gb/s from the sender (Avalanche in the example above) across interfaces 1-5.

The traffic passes from the MACs up the stack and through the appliance under test and then out through interfaces 6-10, where the traffic is received by the receiver (Reflector in the example above.)

So you’ve got 5Gb/s of traffic into the DUT and 5Gb/s of traffic out of the DUT with zero percent loss.

Your question is as follows:

Using whatever math you desire (Cisco or otherwise,) what is the throughput of the traffic going through the DUT?

I ask this question because of the recent sets of claims by certain vendors over the last few weeks.   Let’s not get into stacking/manipulating the test traffic patterns — I don’t want to cloud the issue.

{Ed: Let me give you some guidance on the two most widely applicable answers to this question that I have received thus far: 85% of those surveyed said that the answer was 5Gb/s, while a smaller minority asserts that it’s 10Gb/s.  It comes down to how one measures "aggregate" throughput.  Please read the comments below regarding measurement specifics.}
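
As a purely arithmetic illustration of the two camps (and not an endorsement of either), here is a tiny Python sketch; the variable names are mine and the figures come straight from the example above.

    # Two ways respondents counted "throughput" for the same zero-loss test.
    # Figures are from the example above; nothing here favors either answer.

    offered_gbps = 5.0    # generated by the sender across interfaces 1-5
    received_gbps = 5.0   # received across interfaces 6-10 with zero loss

    # Camp 1 (the ~85%): throughput is what the DUT forwarded end to end,
    # so ingress and egress describe the same 5 Gb/s of traffic.
    forwarded_gbps = min(offered_gbps, received_gbps)

    # Camp 2: "aggregate" throughput sums the bits crossing the DUT's ports
    # in both directions, which is how the figure often appears on datasheets.
    aggregate_gbps = offered_gbps + received_gbps

    print(f"Forwarded (one-way) throughput: {forwarded_gbps:g} Gb/s")   # 5 Gb/s
    print(f"Port-aggregate throughput:      {aggregate_gbps:g} Gb/s")   # 10 Gb/s

The gap between those two numbers is, of course, exactly the gap between the "real world" and "marketing" figures mentioned at the top.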

So, what’s your answer?  Please feel free to ‘splain your logic.  I will comment with my response once comments show up so as not to color the results.

/Hoff

Categories: General Rants & Raves Tags:

BeanSec! 10 – June 20th – 6PM to ?

June 7th, 2007 No comments

Yo!  BeanSec! 10 is upon us.

BeanSec! is an informal meetup of information security professionals, researchers and academics in the Greater Boston area that meets the third Wednesday of each month.

Come get your grub on.  Lots of good people show up.  Really.

Unlike other meetings, you will not be expected to pay dues, “join up”, present a zero-day exploit, or defend your dissertation to attend.  Map to the Enormous Room in Cambridge.

Enormous Room: 567 Mass Ave, Cambridge 02139


ALERT!:  I apologize, as once again I will not be able to attend.  I am traveling tomorrow and won’t be able to make it.  If you’re looking for free food/booze like when I’m there…um, well…next time? ;(

Categories: BeanSec! Tags: