Archive for the ‘Take5’ Category

Got Cloud [Security]? I’d Like To Talk To You…

October 29th, 2010 No comments

Blogging is very much a broadcast medium.  Sure, people comment every now and then, but I like talking to people; I like to understand what *they* think.

I have some folks I’d like to “interview” for my blog on the topic of cloud – specifically practitioners with cloud computing experience relevant to ops, compliance, risk, and security. I don’t want anecdotes or ill-defined polls, and I don’t want to regurgitate my interpretation of what someone else said. I want to hear you say it and let others know directly what you said.

Not interested in vendor pitches, thanks.

The structure would be somewhat similar to my Take 5 interviews.  I’d prefer folks in an architect or CISO/CSO role who have broad exposure to initiatives in their large enterprise or service provider companies.

We can keep it anonymous.

Email me [choff @] if you’re interested.



Categories: Cloud Computing, Cloud Security, Take5 Tags:

Take5 (Episode #7) – Five Questions for Nir Zuk, Founder & CTO Palo Alto Networks

November 26th, 2007 7 comments

It’s been a while since I’ve done a Take5, and this seventh episode interviews Nir Zuk, Founder & CTO of upstart "next-generation firewall" company Palo Alto Networks.

There’s been quite a bit of hubbub lately about PAN and I thought I’d see what all the frothing was about.  I reached out to Nir and sent him a couple of questions via email which he was kind enough to answer.  PAN is sending me a box to play with so we’ll see how well it holds up on the Rack.  I’m interested in seeing how this approach addresses the current and the next generation network security concerns.

Despite my soapbox antics regarding technology in the security space, spending the last two years at a network security startup put me at the cutting edge of some of the most unique security hardware and software in the business, and the PAN solution has some very interesting technology and some very interesting people at its core.

If you’ve used market-leading security kit in your day, you’ve probably appreciated some of Nir’s handiwork:

First a little background on the victim:

Nir Zuk brings a wealth of network security expertise and industry experience to Palo Alto Networks. Prior to co-founding Palo Alto Networks, Nir was CTO at NetScreen Technologies, which was acquired by Juniper Networks in 2004. Prior to NetScreen, Nir was co-founder and CTO at OneSecure, a pioneer in intrusion prevention and detection appliances. Nir was also a principal engineer at Check Point Software Technologies and was one of the developers of stateful inspection technology.

Just to reiterate the Take5 ground-rules: I have zero interest in any
of the companies who are represented by the folks I interview, except
for curiosity.  I send the questions via email and what I get back, I post.  There are no clarifying attempts at messaging or do-overs.  It’s sort of like live radio, but without sound…


1) Your background in the security space is well known and as we take a
look out at the security industry and the breadth of technologies and
products balanced against the needs of the enterprise and service
providers, why did you choose to build another firewall product?

Don't we have a mature set of competitors in this space?  What need is
Palo Alto Networks fulfilling?  Isn't this just UTM?

The reason I decided to build a new firewall product is quite
similar to the reason Check Point (one of my previous employers)
decided to build a new firewall product back in the early 90's, when
people were using packet filters embedded in routers - that reason
being that existing firewalls are ineffective. Throughout the years,
application developers have learnt how to bypass existing firewalls
using various techniques such as port hopping, tunneling and encryption.
Retrofitting existing firewalls, which use ports to classify traffic,
turned out to be impossible; hence a new product had to be developed from
the ground up.

2) As consolidation of security technologies into fewer boxes continues
to heat up, vendors in the security space add more and more
functionality to their appliances so as not to be replaced as the
box-sprinkling madness continues.  Who do you see as a competitive
threat and who do you see your box replacing/consolidating in the long run?

I think that a more important trend in network security today is the
move from port-centric to application-centric classification
technologies. This will make most of the existing products obsolete,
similar to the way stateful inspection has made its predecessors
disappear from the world... As for device consolidation, I think that
existing firewall architectures are too old to support real
consolidation, which today is limited to bolting multiple segregated
products on the same device with minimal integration. A new
architecture, which allows multiple network security technologies to
share the same engines, has to emerge before real consolidation happens.
The Palo Alto Networks PA-4000 series is, I believe, the first device to
offer this kind of architecture.

3) The PA-4000 Series uses some really cutting-edge technologies, can
you tell us more about some of them and how the appliance is
differentiated from multi-core x86 based COTS appliances? Why did you go
down the proprietary hardware route instead of just using standard Intel
reference designs and focus on software?

Intel CPUs are very good at crunching numbers, running Excel
spreadsheets and playing high-end 3D games. They are not so good at
handling packets. For example, the newest quad-core Intel CPU can
handle, maybe, 1,500,000 packets per second, which amounts to about 1
Gbps with small packets. A single network processor, such as one of the
many that we have in the PA-4000 series, can handle 10 times that -
15,000,000 packets per second. Vendors that claim 10 Gbps throughput
with Intel CPUs do so with large packet sizes, which do not represent
the real world.
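Nir's figures are easy to sanity-check. A minimal back-of-the-envelope sketch (Python here; the 20-byte per-frame wire overhead of preamble plus inter-frame gap is a standard Ethernet assumption, not something from the interview):

```python
# Wire overhead per Ethernet frame: 8-byte preamble + 12-byte inter-frame gap.
WIRE_OVERHEAD_BYTES = 8 + 12

def wire_gbps(packets_per_second: int, frame_bytes: int) -> float:
    """Line-rate throughput in Gbps for a given packet rate and frame size."""
    bits_per_packet = (frame_bytes + WIRE_OVERHEAD_BYTES) * 8
    return packets_per_second * bits_per_packet / 1e9

# 1.5M pps of minimum-size (64-byte) frames -- the Intel CPU figure quoted above:
print(round(wire_gbps(1_500_000, 64), 2))   # ~1.01 Gbps
# 15M pps -- the network-processor figure -- at the same frame size:
print(round(wire_gbps(15_000_000, 64), 2))  # ~10.08 Gbps
```

Which is the point of the "small packets" caveat: at minimum frame size, 1.5M pps really is only about 1 Gbps, while hitting 10 Gbps with small packets takes roughly the 15M pps he attributes to the network processor.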

4) Your technology focuses on providing extreme levels of application
granularity to be able to identify and control the use of specific applications.   

Application specificity is important as more and more applications use well-known
ports (such as port 80), encryption, or other methods to obfuscate
themselves to bypass firewalls.  Is this going deep enough?  Don't you need
to inspect and enact dispositions at the content level?  After all, it's the
information that's being transmitted that is important.

Inspection needs to happen at two levels. The first one is used to
identify the application. This, usually, does not require going into the
information that's being transmitted but rather merely looking at the
enclosing protocol. Once the application is identified, it needs to be
controlled and secured, both of which require much deeper inspection
into the information itself. Note that simply blocking the application
is not enough - applications need to be controlled - some are always
allowed, some are always blocked but most require granular policy. The
PA-4000 products perform both inspections, on two different
purpose-built hardware engines.

5)  You've architected the PA-4000 Series to depend upon signatures and
you don't use behavioral analysis or behavioral anomaly detection in the
decision fabric to determine how to enact a disposition.  Given the
noise associated with poorly constructed expressions based upon
signatures in products like IDS/IPS systems that don't use context as a
decision point, are you losing anything by relying just on signatures?

The PA-4000 is not limited to signature-based classification of
applications. It is using other techniques as well. As for
false-positive issues, these are usually not associated with traffic
classification but rather with attack detection. Generally, traffic
classification is a very deterministic process that does not suffer from
false positives. As for the IDS/IPS functionality in the PA-4000 product
line, it provides full context for the IDS/IPS signatures for better
accuracy. But the most important reason the PA-4000 products
have better accuracy is that Palo Alto Networks is not a pure IPS
vendor and therefore does not need to play the "who has more signatures"
game, which leads to competing products having thousands of useless
signatures that only create false positives.


6)  The current version of the software really positions your solution as
a client-facing, forward proxy that inspects outbound traffic from an end-user.

Given this positioning which one would imagine is done mostly at a "perimeter"
choke point, can you elaborate on adding features like DLP or NAC?  Also, if
you're at the "perimeter" what about reverse proxy functionality to inspect
inbound traffic to servers on a DMZ?

The current shipping version of PAN-OS provides NAC-like functionality
with seamless integration with Active Directory and domain controllers.
DLP is not currently a function that our product provides even though
the product architecture does not preclude it. We are evaluating adding
reverse proxy functionality in one of our upcoming software releases.

Categories: Take5 Tags:

Take5 (Episode #6) – Five Questions for Andy Jaquith, Yankee Group Analyst and Metrician…

September 13th, 2007 3 comments

This sixth episode of Take5 interviews Andy Jaquith, Yankee Group analyst and champion of all things Metric…I must tell you that Andy’s answers to my interview questions were amazing to read and I really appreciate the thought and effort he put into this.

First a little background on the victim:

Andrew Jaquith is a program manager in Yankee Group’s Enabling
Technologies Enterprise group with expertise in portable digital
identity and web application security. As Yankee Group’s lead security
analyst, Jaquith drives the company’s security research agenda and
researches disruptive technologies that enable tomorrow’s Anywhere
Enterprise™ to secure its information assets.


Jaquith has 15 years of IT experience. Before joining Yankee Group, he
co-founded and served as program director at @stake, Inc., a security
consulting pioneer, which Symantec Corporation acquired in 2004. Before
@stake, Jaquith held project manager and business analyst positions at
Cambridge Technology Partners and FedEx Corporation.

His application security and metrics research has been featured in publications such as CIO, CSO and IEEE Security & Privacy.
In addition, Jaquith is the co-developer of a popular open source wiki
software package. He is also the author of the recently released
Pearson Addison-Wesley book, Security Metrics: Replacing Fear, Uncertainty and Doubt, which reviewers have praised as both “sparkling and witty” and “one of the best written security books ever.”
Jaquith holds a B.A. degree in economics and political science from Yale University.


1) Metrics.  Why is this such a contentious topic?  Isn't the basic axiom of "you can't 
manage what you don't measure" just common sense?  A discussion on metrics evokes very
passionate discussion amongst both proponents and opponents alike.  Why are we still
debating the utility of measurement?

The arguments over metrics are overstated, but to the extent they are 
contentious, it is because "metrics" means different things to 
different people. For some people, who take a risk-centric view of 
security, metrics are about estimating risk based on a model. I'd put 
Pete Lindstrom, Russell Cameron Thomas and Alex Hutton in this camp. 
For those with an IT operations background, metrics are what you get 
when you measure ongoing activities. Rich Bejtlich and I are 
probably closer to this view of the world. And there is a third camp 
that feels metrics should be all about financial measures, which 
brings us into the whole "return on security investment" topic. A lot 
of the ALE crowd thinks this is what metrics ought to be about. Just 
about every security certification course (SANS, CISSP) talks about 
ALE, for reasons I cannot fathom.

Once you understand that a person's point of view of "metrics" is 
going to be different depending on the camp they are in -- risk, 
operations or financial -- you can see why there might be some 
controversy between these three camps. There's also a fourth group 
that takes a look at the fracas and says, "I know why measuring 
things matter, but I don't believe a word any of you are talking 
about." That's Mike Rothman's view, I suspect.

Personally, I have always taken the view that metrics should measure 
things as they are (the second perspective), not as you imagine, 
model or expect them to be. That's another way of saying that I am an 
empiricist. If you collect data on things and swirl them around in a 
blender, interesting things will stratify out.

Putting it another way: I am a measurer rather than a modeler. I 
don't claim to know what the most important security metrics are. But 
I do know that people measure certain things, and that those things 
give them insights into their firm's performance. To that end, I've 
got about 100 metrics documented in my book; these are largely based 
on what people tell me they measure. Dan Geer likes to say, "it almost 
doesn't matter what you measure, but get started and measure 
something." The point of my book, largely, is to give some ideas 
about what those somethings might be, and to suggest techniques for 
analyzing the data once you have them.

Metrics aren't really that contentious. Just about everyone in the community is pretty friendly and courteous. It's 
a "big tent." Most of the differences are with respect to 
inclination. But, outside of the "metrics community" it really comes 
down to a basic question of belief: you either believe that security 
can be measured or you don't.

The way you phrased your question, by the way, implies that you 
probably align a little more closely with my operational/empiricist 
view of metrics. But I'd expect that, Chris -- you've been a CSO, and 
in charge of operational stuff before. :)

2) You've got a storied background from FedEx to @Stake to the Yankee Group. 
I see your experience trending from the operational to the analytical.  How much of your
operational experience lends  itself to the practical collection and presentation of
metrics -- specifically security metrics?  Does your broad experience help you in
choosing what to measure and how?

That's a keen insight, and one I haven't thought of before. You've 
caused me to get all introspective all of a sudden. Let me see if I 
can unravel the winding path that's gotten me to where I am today.

My early career was spent as an IT analyst at Roadway, a serious, 
operationally-focused trucking firm. You know those large trailers 
you see on the highways with "ROADWAY" on them? That's the company I 
was with. They had a reputation as being like the Marines. Now, I 
wasn't involved in the actual day-to-day operations side of the 
business, but when you work in IT for a company like that you get to 
know the business side. As part of my training I had to do "ride 
alongs," morning deliveries and customer visits. Later, I moved to 
the contract logistics side of the house, where I helped plan IT 
systems for transportation brokerage services and contract warehouses 
the company ran. The logistics division was the part of Roadway that 
was actually acquired by FedEx.

I think warehouses are just fascinating. They are one hell of a lot 
more IT intensive than you might think. I don't just mean bar code 
readers, forklifts and inventory control systems; I mean also the 
decision support systems that produce metrics used for analysis. For 
example, warehouses measure an overall metric for efficiency called 
"inventory turns" that describes how fast your stock moves through 
the warehouse. If you put something in on January 1 and move it out 
on December 31 of the same year, that part has a "velocity" of 1 turn 
per year. Because warehouses are real estate like any other, you can 
spread out your fixed costs by increasing the number of turns through 
the warehouse.

For example, one of the reasons why Dell -- a former customer of mine 
at Roadway-- was successful was that they figured out how to make 
their suppliers hold their inventory for them and deliver it to final 
assembly on a "just-in-time" (JIT) basis, instead of keeping lots of 
inventory on hand themselves. That enabled them to increase the 
number of turns through their warehouses to something like 40 per 
year, when the average for manufacturing was like 12. That efficiency 
gain translated directly to profitability. (Digression: Apple, by the 
way, has lately been doing about 50 turns a year through their 
warehouses. Any wonder why they make as much money on PCs as HP, who 
has 6 times more market share? It's not *just* because they choose 
their markets so that they don't get suckered into the low margin 
part of the business; it's also because their supply chain operations 
are phenomenally efficient.)
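The turns arithmetic above is simple enough to sketch (Python; the 40 and 12 figures are the ones quoted in the answer, and using cost of goods sold over average inventory value is the standard proxy, not something specific to this interview):

```python
# Inventory turns: how many times stock cycles through the warehouse per year.
# Common proxy: annual cost of goods sold / average inventory value.
def inventory_turns(annual_cogs: float, avg_inventory_value: float) -> float:
    return annual_cogs / avg_inventory_value

# The flip side of turns: how many days of stock sit on hand at any moment.
def days_on_hand(turns_per_year: float) -> float:
    return 365.0 / turns_per_year

# The figures quoted above: ~40 turns (Dell) vs. ~12 (manufacturing average).
print(round(days_on_hand(40), 1))  # 9.1 days of stock on hand
print(round(days_on_hand(12), 1))  # 30.4 days
```

Seen this way, the Dell story is concrete: 40 turns means carrying barely nine days of inventory, against a month for the average manufacturer, and every day of inventory you don't carry is fixed cost you don't pay.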

Another thing I think is fascinating about warehouses is how you 
account for inventory. Most operators divide their inventory into 
"A", "B" and "C" goods based on how fast they turn through the 
warehouse. The "A" parts might circulate 10-50x faster than the C 
parts. So, a direct consequence is that when you lay out a warehouse 
you do it so that you can pick and ship your A parts fastest. The 
faster you do that, the more efficient your labor force and the less 
it costs you to run things. Neat, huh?
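That A/B/C layout logic can be sketched as a toy classifier (Python; the 20/30/50 percentile cutoffs are an assumption for illustration only — real operators typically cut on cumulative value or velocity share rather than SKU count):

```python
def classify_abc(turns_by_sku: dict) -> dict:
    """Bucket SKUs into A/B/C velocity classes, fastest movers first.

    Illustrative cutoffs: fastest 20% of SKUs -> "A", next 30% -> "B",
    the slow-moving remainder -> "C".
    """
    ranked = sorted(turns_by_sku, key=turns_by_sku.get, reverse=True)
    n = len(ranked)
    labels = {}
    for i, sku in enumerate(ranked):
        if i < n * 0.2:
            labels[sku] = "A"
        elif i < n * 0.5:
            labels[sku] = "B"
        else:
            labels[sku] = "C"
    return labels

# Ten hypothetical SKUs with turn rates from 10 down to 1 per year:
demo = {f"sku{i}": 10.0 - i for i in range(10)}
print(classify_abc(demo))  # sku0-1 -> A, sku2-4 -> B, sku5-9 -> C
```

The warehouse-layout point follows directly: once every SKU carries an A/B/C label, you slot the "A" bins closest to the pick-and-ship lanes.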

Now, I mention these things not strictly speaking to show you what a 
smartypants I am about supply chain operations. The real point is to 
show how serious operational decisions are made based on deep 
analytics. Everything I just mentioned can be modeled and measured: 
where you site the warehouses themselves, how you design the 
warehouse to maximize your ability to pick and ship the highest-
velocity items, and what your key indicators are. There's a virtuous 
feedback loop in place that helps operators understand where they are 
spending their time and money, and that in turn drives new 
innovations that increase efficiencies.

In supply chain, the analytics are the key to absolutely everything. 
And they are critical in an industry where costs matter. In that 
regard, manufacturing shares a lot with investment banking: leading 
firms are willing to invest a phenomenal amount of time and money to 
shave off a few basis points on a Treasury bill derivative. But this 
is done with experiential, analytical data used for decision support 
that is matched with a model. You'd never be able to justify 
redesigning a warehouse or hiring all of Yale's graduating math 
majors unless you could quantify and measure the impact of those 
investments on the processes themselves. Experiential data feeds, and 
improves, the model.

In contrast, when you look at security you see nothing like this. 
Fear and uncertainty rule, and almost every spending decision is made 
on the basis of intuition rather than facts. Acquisition costs 
matter, but operational costs don't. Can you imagine what would 
happen if you plucked a seasoned supply-chain operations manager out 
of a warehouse, plonked him down in a chair, and asked him to watch 
his security counterpart in action? Bear in mind this is a world 
where his CSO friend is told that the answer to everything is "buy 
our software" or "install our appliance." The warehouse guy would 
look at him like he had two heads. Because in his world, you don't 
spray dollars all over the place until you have a detailed, grounded, 
empirical view of what your processes are all about. You simply don't 
have the budget to do it any other way.

But in security, the operations side of things is so immature, and 
the gullibility of CIOs and CEOs so high, that they are willing to 
write checks without knowing whether the things they are buying 
actually work. And by "work," I don't mean "can be shown to stop a 
certain number of viruses at the border"; I mean "can be shown to 
decrease time required to respond to a security incident" or "has 
increased the company's ability to share information without 
incurring extra costs," or "has cut the pro-rata share of our IT 
operations spent on rebuilding desktops."

Putting myself in the warehouse manager's shoes again: for security, 
I'd like to know why nobody talks about activity-based costing. Or 
about process metrics -- that is, cycle times for everyday security 
activities -- in a serious way. Or benchmarking -- does my firm have 
twice as many security defects in our web applications as yours? Are 
we in the first, second, third or fourth quartiles?

If the large security companies were serious, we'd have a firmer grip 
on the activity, impact and cost side of the ledger. For example, why 
won't AV companies disclose how much malware is actually circulating 
within their customer bases, despite their promises of "total 
protection"? When the WMF zero-day exploit came out, how come none of 
the security companies knew how many of their customers were 
infected? And how much the cleanup efforts cost? Either nobody knew, 
or nobody wanted to tell. I think it's the former. If I were in the 
shoes of my Roadway operational friend, I'd be pissed off about the 
complete lack of feedback between spending, activities, impact and cost.

If this sounds like a very odd take on security, it is. My mentor and 
former boss, Dan Geer, likes to say that there are a lot of people 
who don't have classical security training, but who bring "hybrid 
vigor" to the field. I identify with that. With my metrics research, 
I just want to see if we can bring serious analytic rigor to a field 
that has resisted it for so long. And I mean that in an operational 
way, not a risk-equation way.

So, that's an exceptionally long-winded way of saying "yes" to your 
question -- I've trended from operational to analytical. I'm not sure 
that my past experience has necessarily helped me pick particular 
security metrics per se, but it has definitely biased me towards 
those that are operational rather than risk-based.

3) You've recently published your book.  I think it was a great appetite whetter but
I was left -- as were I think many of us who are members of the "lazy guild"-- wanting
more.   Do you plan to follow-up with a metrics toolkit of sorts?  You know, a templated
guide -- Metrics for Dummies?

You know, that's a great point. The fabulous blogger Layer 8, who 
gave my book an otherwise stunning review that I am very grateful for 
("I tucked myself into bed, hoping to sleep—but I could not sleep 
until I had read Security Metrics cover to cover. It was That 
Good."), also had that same reservation. Her comment was, "that final 
chapter just stopped short and dumped me off the end of the book, 
without so much as a fare-thee-well Final Overall Summary. It just 
stopped, and without another word, put on its clothes and went home". 
Comparing my prose to a one-night stand is pretty funny, and a fair criticism.

Ironically, as the deadline for the book drew near, I had this great 
idea that I'd put in a little cheat-sheet in the back, either as an 
appendix or as part of the endpapers. But as with many things, I 
simply ran out of time. I did what Microsoft did to get Vista out the 
door -- I had to cut features and ship the fargin' bastid.

One of the great things about writing a book is that people write you 
letters when they like or loathe something they read. Just about all 
of my feedback has been very positive, and I have received a number 
of very thoughtful comments that shed light on what readers' 
companies are doing with metrics. I hope to use the feedback I've 
gotten to help me put together a "cheat sheet" that will boil the 
metrics I discuss in the book into something easier to digest.

4) You've written about the impending death of traditional Anti-Virus technology and its
evolution to combat the greater threats from adaptive Malware.  What role do you think
virtualization technology that provides a sandboxed browsing environment will have in
this space, specifically on client-side security?

It's pretty obvious that we need to do something to shore up the 
shortcomings of signature-based anti-malware software. I regularly 
check out a few of the anti-virus benchmarking services, like the 
OITC site that aggregates the Virustotal scans. And I talk to a 
number of anti-malware companies who tell me things they are seeing. 
It's pretty clear that current approaches are running out of gas. All 
you have to do is look at the numbers: unique malware samples are 
doubling every year, and detection rates for previously-unseen 
malware range from the single digits to the 80% mark. For an industry 
that has long said they offered "total protection," anything less 
than 100% is a black eye.

Virtualization is one of several alternative approaches that vendors 
are using to help boost detection rates. The idea with virtualization 
is to run a piece of suspected malware in a virtual machine to see 
what it does. If, after the fact, you determine that it did something 
naughty, you can block it from running in the real environment. It 
sounds like a good approach to me, and is best used in combination 
with other technologies.

Now, I'm not positive how pervasive this is going to be on the 
desktop. Existing products are already pretty resource-hungry. 
Virtualization would add to the burden. You've probably heard people 
joke: "thank God computers are dual-core these days, because we need 
one of 'em to run the security software on." But I do think that 
virtualized environments used for malware detection will become a 
fixture in gateways and appliances.

Other emergent ideas that complement virtualization are behavior 
blocking and herd intelligence. Herd intelligence -- a huge malware 
blacklist-in-the-sky -- is a natural services play, and I believe all 
successful anti-malware companies will have to embrace something like 
this to survive.

5) We've seen the emergence of some fairly important back-office critical applications
make their way to the Web (CRM, ERP, Financials) and now GoogleApps is staking a claim
for the SMB.  How do you see the SaaS model affecting the management of security -- 
and ultimately risk --  over time?

Software as a service for security is already here. We've already 
seen fairly pervasive managed firewall service offerings -- the 
carriers and companies like IBM Global Services have been offering 
them for years. Firewalls still matter, but they are nowhere near as 
important to the overall defense posture as before. That's partly 
because companies need to put a lot of holes in the firewall. But 
it's also because some ports, like HTTP/HTTPS, are overloaded with 
lots of other things: web services, instant messaging, VPN tunnels 
and the like. It's a bit like the old college prank of filling a 
paper bag with shaving cream, sliding it under a shut door, then jumping 
on it and spraying the payload all over the room's occupants. HTTP is 
today's paper bag.

In the services realm, for more exciting action, look at what 
MessageLabs and Postini have done with the message hygiene space. At 
Yankee we've been telling our customers that there's no reason why an 
enterprise should bother to build bespoke gateway anti-spam and 
anti-malware infrastructures any more. That's not just because we like 
MessageLabs or Postini. It's also because the managed services have a 
wider view of traffic than a single enterprise will ever have, and 
benefit from economies of scale on the research side, not to mention 
the actual operations.

Managed services have another hidden benefit: you can change 
services pretty easily if you're unhappy. It puts the service 
provider's incentives in the right place. Qualys, for example, 
understands this point very well; they know that customers will leave 
them in an instant if they stop innovating. And, of course, whenever 
you accumulate large amounts of performance data across your customer 
base, you can benchmark things. (A subject near and dear to my heart, 
as you know.)

With regards to the question about risk, I think managed services do 
change the risk posture a bit. On the one hand, the act of 
outsourcing an activity to an external party moves a portion of the 
operational risk to that party. This is the "transfer" option of the 
classic "ignore, mitigate, transfer" set of choices that risk 
management presents. Managed services also reduce political risk in a 
"cover your ass" sense, too, because if something goes wrong you can 
always point out that, for instance, lots of other people use the 
same vendor you use, which puts you all in the same risk category. 
This is, if you will, the "generally accepted practice" defense.

That said, particular managed services with large customer bases 
could accrue more risk by virtue of the fact that they are bigger 
targets for mischief. Do I think, for example, that spammers target 
some of their evasive techniques towards Postini and MessageLabs? I 
am sure they do. But I would still feel safer outsourcing to them 
rather than maintaining my own custom infrastructure.

Overall, I feel that managed services will have a "smoothing" or 
dampening effect on the risk postures of enterprises taken in 
aggregate, in the sense that they will decrease the volatility in 
risk relative to the broader set of enterprises (the "alpha", if you 
will). Ideally, this should also mean a decrease in the *absolute* 
amount of risk. Putting this another way: if you're a rifle shooter, 
it's always better to see your bullets clustered closely together, 
even if they don't hit near the bull's eye, rather than seeing them 
near the center, but dispersed. Managed services, it seems to me, can 
help enterprises converge their overall levels of security -- put the 
bullets a little closer together instead of all over the place. 
Regulation, in cases where it is prescriptive, tends to do that too.

Bonus Question:
6) If you had one magical dashboard that could display 5 critical security metrics
to the Board/Executive Management, regardless of industry, what would those elements be?

I would use the Balanced Scorecard, a creation of Harvard professors 
Kaplan and Norton. It divides executive management metrics into four 
perspectives: financial, internal operations, customer, and learning 
and growth. The idea is to create a dashboard that incorporates 6-8 
metrics into each perspective. The Balanced Scorecard is well known 
to the corner office, and is something that I think every security 
person should learn about. With a little work, I believe quite 
strongly that security metrics can be made to fit into this framework.

Now, you might ask yourself, I've spent all of this work organizing 
my IT security policies along the lines of ISO 17799/2700x, or COBIT, 
or ITIL. So why can't I put together a dashboard that organizes the 
measurements in those terms? What's wrong with the frameworks I've 
been using? Nothing, really, if you are a security person. But if you 
really want a "magic dashboard" that crosses over to the business 
units, I think basing scorecards on security frameworks is a bad 
idea. That's not because the frameworks are bad (in fact most of them 
are quite good), but because they aren't aligned with the business. 
I'd rather use a taxonomy the rest of the executive team can 
understand. Rather than make them understand a security or IT 
framework, I'd rather try to meet them halfway and frame things in 
terms of the way they think.

So, for example: for Financial metrics, I'd measure how much my IT 
security infrastructure is costing, straight up, and on an activity-
based perspective. I'd want to know how much it costs to secure each 
revenue-generating transaction; quick-and-dirty risk scores for 
revenue-generating and revenue/cost-accounting systems; DDOS downtime 
costs. For the Customer perspective I'd want to know the percentage 
and number of customers who have access to internal systems; cycle 
times for onboarding/offloading customer accounts; "toxicity rates" 
of customer data I manage; the number of privacy issues we've had; 
the percentage of customers who have consulted with the security 
team; number and kind of remediation costs of audit items that are 
customer-related; number and kind of regulatory audits completed per 
period, etc. The Internal Process perspective has some of the really easy 
things to measure, and is all about security ops: patching 
efficiency, coverage and control metrics, and the like. For Learning 
and Growth, it would be about threat/planning horizon metrics, 
security team consultations, employee training effectiveness and 
latency, and other issues that measure whether we're getting 
employees to exhibit the right behaviors and acquire the right skills.

That's meant to be an illustrative list rather than definitive, and I 
confess it is rather dense. At the risk of getting all Schneier on 
you, I'd refer your readers to the book for more details. Readers can 
pick and choose from the "catalog" and find metrics that work for 
their organizations.
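As a toy illustration of the four-perspective grouping described above, the scorecard can be modeled as a simple nested structure. The metric names and values here are invented for the sketch, drawn loosely from the examples in the answer, and are not a prescription from Kaplan and Norton or the interviewee:

```python
# Security metrics bucketed into the four Balanced Scorecard perspectives.
# All names and values are illustrative placeholders.
scorecard = {
    "financial": {
        "security_cost_per_transaction_usd": 0.042,
        "ddos_downtime_cost_usd": 18_500,
    },
    "customer": {
        "pct_customers_with_internal_access": 12.5,
        "privacy_incidents_this_quarter": 1,
    },
    "internal_process": {
        "patch_latency_days_median": 9,
        "control_coverage_pct": 87.0,
    },
    "learning_growth": {
        "training_completion_pct": 94.0,
        "security_consultations_per_month": 11,
    },
}

def summarize(card):
    """Count the metrics tracked under each perspective, dashboard-style."""
    return {perspective: len(metrics) for perspective, metrics in card.items()}

print(summarize(scorecard))
# {'financial': 2, 'customer': 2, 'internal_process': 2, 'learning_growth': 2}
```

The point of the structure is that the top-level keys are business perspectives the corner office already knows, not security frameworks.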

Overall, I do think that we need to think a whole lot less about 
things like ISO and a whole lot more about things like the Balanced 
Scorecard. We need to stop erecting temples to Securityness that 
executives don't give a damn about and won't be persuaded to enter. 
And when we focus just on dollars, ALE and "security ROI", we make 
things too simple. We obscure the richness of the data that we can 
gather, empirically, from the systems we already own. Ironically, the 
Balanced Scorecard itself was created to encourage executives to move 
beyond purely financial measures. Fifteen years later, you'd think we 
security practitioners would have taken the hint.
Categories: Risk Management, Security Metrics, Take5 Tags:

Take5 (Episode #5) – Five Questions for Allwyn Sequeira, SVP of Product Operations, Blue Lane

August 21st, 2007 18 comments

This fifth episode of Take5 interviews Allwyn Sequeira, SVP of Product Operations for Blue Lane.  

First a little background on the victim:

Allwyn Sequeira is Senior Vice President of Product Operations at Blue
Lane Technologies, responsible for managing the overall product life
cycle, from concept through research, development and test, to delivery
and support. He was previously the Senior Vice President of Technology
and Operations at netVmg, an intelligent route control company acquired
by InterNap in 2003, where he was responsible for the architecture,
development and deployment of the industry-leading flow control
platform. Prior to netVmg, he was founder, Chief Technology Officer and
Executive Vice President of Products and Operations at First Virtual
Corporation (FVC), a multi-service networking company that had a
successful IPO in 1998. Prior to FVC, he was Director of the Network
Management Business Unit at Ungermann-Bass, the first independent local
area network company. Mr. Sequeira has previously served as a Director
on the boards of FVC and netVmg.

Mr. Sequeira started his career as a software developer at HP in the
Information Networks Division, working on the development of TCP/IP
protocols. During the early 1980’s, he worked on the CSNET project, an
early realization of the Internet concept. Mr. Sequeira is a recognized
expert in data networking, with twenty five years of experience in the
industry, and has been a featured speaker at industry leading forums
like Networld+Interop, Next Generation Networks, ISP Con and RSA

Mr. Sequeira holds a Bachelor of Technology degree in Computer
Science from the Indian Institute of Technology, Bombay, and a Master
of Science in Computer Science from the University of Wisconsin,

Allwyn, despite all this good schoolin’ forgot to send me a picture, so he gets what he deserves 😉
(Ed: Yes, those of you quick enough were smart enough to detect that the previous picture was of Brad Pitt and not Allwyn.  I apologize for the unnecessary froth-factor.)


1) Blue Lane has two distinct product lines, VirtualShield and PatchPoint.  The former is a software-based solution which provides protection for VMware Infrastructure 3 virtual servers as an ESX VM plug-in whilst the latter offers a network appliance-based solution for physical servers.  How are these products different from either virtual switch IPSs like Virtual Iron or in-line network-based IPSs?

IPS technologies have been charged with the incredible mission of trying to protect everything from anything.  Overall they’ve done well, considering how much the perimeter of the network has changed and how sophisticated hackers have become. Much of their core technology, however, was relevant and useful when hackers could be easily identified by their signatures. As many have proclaimed, those days are coming to an end.

A defense department official recently quipped, "If you offer the same protection for your toothbrushes and your diamonds you are bound to lose fewer toothbrushes and more diamonds."  We think that data center security similarly demands specialized solutions.  The concept of an enterprise network has become so ambiguous when it comes to endpoints, devices, supply chain partners, etc., that we think it's time to think more realistically in terms of trusted, yet highly available zones within the data center.

It seems clear at this point that different parts of the network need very different security capabilities.  Servers, for example, need highly accurate solutions that do not block or impede good traffic and can correct bad traffic, especially when it comes to closing network-facing vulnerability windows.  They need to maintain availability with minimal latency for starters; and that has been a sort of Achilles heel for signature-based approaches.  Of course, signatures also bring considerable management burdens over and beyond their security capabilities.

No one is advocating turning off the IPS, but rather approaching servers with more specialized capabilities.  We started focusing on servers years ago and established very sophisticated application and protocol intelligence, which has allowed us to correct traffic inline without the noise, suspense and delay that general purpose network security appliance users have come to expect.

IPS solutions depend on deep packet inspection typically at the perimeter based on regexp pattern matching for exploits.  Emerging challenges with this approach have made alert and block modes absolutely necessary as most IPS solutions aren’t accurate enough to be trusted in full library block. 

Blue Lane uses a vastly different approach.  We call it deep flow inspection/correction for known server vulnerabilities based on stateful decoding up to layer 7.  We can alert, block and correct, but most of our deployments are in correct mode, with our full capabilities enabled. From an operational standpoint we have substantially different impacts.

A typical IPS may have 10K signatures while experts recommend turning on just a few hundred.  That kind of marketing shell game (find out what really works) means that there will be plenty of false alarms, false positives and negatives and plenty of tuning.  With polymorphic attacks signature libraries can increase exponentially while not delivering meaningful improvements in protection. 

Blue Lane supports about 1000 inline security patches across dozens of very specific server vulnerabilities, applications and operating systems.  We generate very few false alarms and minimal latency.  We don’t require ANY tuning.  Our customers run our solution in automated, correct mode.

The traditional static signature IPS category has evolved into an ASIC war between some very capable players for the reasons we just discussed. Exploding variations of exploits and vectors mean that exploit-centric approaches will require more processing power.

Virtualization is pulling the data center into an entirely different direction, driven by commodity processors.  So of course our VirtualShield solution was a much cleaner setup with a hypervisor; we can plug into the hypervisor layer and run on top of existing hardware, again with minimal latency and footprint.

You don’t have to be a Metasploit genius to evade IPS signatures.  Our higher layer 7 stateful decoding is much more resilient. 
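The contrast Nir and Allwyn draw here, exploit-signature matching versus vulnerability-centric checking of decoded traffic, can be shown with a deliberately naive sketch. The regex, URL, and the "field longer than 64 bytes" rule below are all made up for illustration; they are not real IPS or Blue Lane rules:

```python
import re

# An exploit-centric signature: matches one exact known payload.
SIGNATURE = re.compile(rb"GET /login\?user=A{128}")

def signature_match(packet: bytes) -> bool:
    """Flag only packets matching the literal known-exploit pattern."""
    return SIGNATURE.search(packet) is not None

def vulnerability_check(packet: bytes) -> bool:
    """Decode the request field and enforce the protocol-level constraint
    (here: a toy rule that the user field must not exceed 64 bytes)."""
    m = re.search(rb"GET /login\?user=([^ ]*)", packet)
    return m is not None and len(m.group(1)) > 64

known   = b"GET /login?user=" + b"A" * 128 + b" HTTP/1.0"
mutated = b"GET /login?user=" + b"B" * 128 + b" HTTP/1.0"  # same bug, new bytes

print(signature_match(known), signature_match(mutated))          # True False
print(vulnerability_check(known), vulnerability_check(mutated))  # True True
```

The trivially mutated payload sails past the signature but still trips the check written against the vulnerability itself, which is the resilience argument being made above.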

2) With zero-days on the rise, pay-for-play vulnerability research and now Zero-Bay (WabiSabiLabi) vulnerability auctions and the like, do you see an uptake in customer demand for vulnerability shielding solutions?

Exploit-signature technologies are meaningless in the face of evanescent, polymorphic threats, resulting in 0-day exploits. Slight modifications to signatures can bypass IPSes, even against known vulnerabilities.  Blue Lane technology provides 0-day protection for any variant of an exploit against known vulnerabilities.  No technology can provide ultimate protection against 0-day exploits based on 0-day vulnerabilities. However, this requires a different class of hacker.

3) As large companies start to put their virtualization strategies in play, how do you see customers addressing securing their virtualized infrastructure?  Do they try to adapt existing layered security methodologies and where do these fall down in a virtualized world?

I explored this topic in depth at the Next Generation Data Center conference last week. Also, your readers might be interested in listening to a recent podcast: The Myths and Realities of Virtualization Security: An Interview. 

To summarize, there are a few things that change with virtualization, that folks need to be aware of.  It represents a new architecture.  The hypervisor layer represents the un-tethering and clustering of VMs, and centralized control.  It introduces a new virtual network layer.  There are entirely new states of servers, not anticipated by traditional static security approaches (like instant create, destroy, clone, suspend, snapshot and revert to snapshot). 

Then you’ll see unprecedented levels of mobility and new virtual appliances and black boxing of complex stacks including embedded databases.  Organizations will have to work out who is responsible for securing this very fluid environment.  We’ll also see unprecedented scalability with Infiniband cores attaching LAN/SAN out to 100’s of ESX hypervisors and thousands of VMs.

Organizations will need the capability to shield these complex, fluid environments, because trying to keep track of individual VMs, states, patch levels and locations will make tuning an IPS for polymorphic attacks look like child's play in comparison.  Effective solutions will need to be highly accurate, low-latency solutions deployed in correct mode. Gone will be the days of man-to-man blocking and tuning.  Here to stay are the days of zone defense.
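The new server states Allwyn lists (instant create, destroy, clone, suspend, snapshot, revert) can be sketched as a toy lifecycle model. The states and transition table below are illustrative only, not any hypervisor's actual API; the security-relevant wrinkle is that a revert can silently return a VM to an old patch level:

```python
from enum import Enum

# Toy model of VM lifecycle states; static security tooling historically
# assumed little more than a running/stopped world.
class VMState(Enum):
    CREATED = "created"
    RUNNING = "running"
    SUSPENDED = "suspended"
    SNAPSHOTTED = "snapshotted"   # running, with a point-in-time image kept
    DESTROYED = "destroyed"

ALLOWED = {
    VMState.CREATED:     {VMState.RUNNING, VMState.DESTROYED},
    VMState.RUNNING:     {VMState.SUSPENDED, VMState.SNAPSHOTTED,
                          VMState.DESTROYED},
    VMState.SUSPENDED:   {VMState.RUNNING, VMState.DESTROYED},
    VMState.SNAPSHOTTED: {VMState.RUNNING},  # revert resumes the old image,
                                             # possibly at an old patch level
    VMState.DESTROYED:   set(),
}

def can_transition(src: VMState, dst: VMState) -> bool:
    """True if the toy model permits moving from src to dst."""
    return dst in ALLOWED[src]

print(can_transition(VMState.SNAPSHOTTED, VMState.RUNNING))  # True
```

Even this crude model makes the tracking problem visible: every extra state multiplies what an agent-per-VM or signature-tuning approach has to follow.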

4) VMware just purchased Determina and intends to integrate their memory firewall IPS product as an ESX VM plug-in.  Given your early partnership with VMware, are you surprised by this move?  Doesn’t this directly compete with the VirtualShield offering?

I wouldn’t read too much into this. Determina hit the wall on sales, primarily because its original memory firewall technology was too intrusive and fell short of handling new vulnerabilities/exploits.

This necessitated the LiveShield product, which required ongoing updates, destroying the value proposition of not having to touch servers, once installed. So, this is a technology/people acquisition, not a product line/customer-base acquisition.

VMware was smart to get a very bright set of folks, with deep memory/paging/OS, and a core technology that would do well to be integrated into the hypervisor for the purpose of hypervisor hardening, and interVM isolation. I don’t see VMware entering the security content business soon (A/V, vulnerabilities, etc.). I see Blue Lane’s VirtualShield technology integrated into the virtual networking layer (vSwitch), as a perfect complement to anything that will come out of the Determina acquisition.

5) Citrix just acquired XenSource.  Do you have plans to offer VirtualShield for Xen? 

A smart move on Citrix’s part to get back into the game. Temporary market caps don’t matter. Virtualization matters. If Citrix can make this a two or three horse race, it will keep the VMware, Citrix, Microsoft triumvirate on their toes, delivering better products, and net good for the customer.

Regarding Blue Lane and Citrix/XenSource, we will continue to pay attention to what customers are buying as they virtualize their data centers. For now, this is a one-horse show 🙂

Take5 – Five Questions for Chris Wysopal, CTO of Veracode

June 19th, 2007 No comments

In this first installment of Take5, I interview Chris Wysopal, the CTO of Veracode about his new company, secure coding, vulnerability research and the recent forays into application security by IBM and HP.

This entire interview was actually piped over a point-to-point TCP/IP connection using command-line redirection through netcat.  No packets were harmed during the making of this interview…

First, a little background on the victim, Chris Wysopal:

Chris Wysopal is
co-founder and CTO of Veracode. He has testified on Capitol Hill on the subjects of government
computer security and how vulnerabilities are discovered in software. Chris
co-authored the password auditing tool L0phtCrack, wrote the windows version of
netcat, and was a researcher at the security think tank, L0pht Heavy
Industries, which was acquired by @stake. He was VP of R&D at @stake
and later director of development at Symantec, where he led a
team developing binary static analysis technology.

He was influential in
the creation of responsible vulnerability disclosure guidelines and a founder of
the Organization for Internet Safety.  Chris wrote "The Art of
Software Security Testing: Identifying Security Flaws", published by Addison
Wesley and Symantec Press in December 2006. He earned his Bachelor of Science
degree in Computer and Systems Engineering from Rensselaer Polytechnic

1) You’re a founder of Veracode
which is described as the industry’s first provider
of automated, on-demand
application security solutions.  What sort of application
services does Veracode provide?  Binary analysis, Web Apps?

Veracode currently offers binary static analysis of C/C++ applications
for Windows and Solaris and for Java applications.  This allows us to find
the classes of vulnerabilities that source code analysis tools can find but on
the entire codebase including the libraries which you probably don’t have source
code for. Our product roadmap includes support for C/C++ on Linux and C# on
.Net.  We will also be adding additional analysis techniques to our
flagship binary static analysis.

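One intuition for what analysis of a compiled binary can see without source: the unsafe libc calls an executable links against, including those buried in third-party libraries. The naive byte scan below is a toy standing in for that idea and bears no resemblance to Veracode's actual static analysis:

```python
# Classic unsafe C string functions worth flagging in a binary image.
RISKY = [b"strcpy", b"sprintf", b"gets"]

def risky_imports(binary: bytes) -> list[bytes]:
    """Return the risky symbol names that appear in the binary's bytes."""
    return [name for name in RISKY if name in binary]

# A synthetic "binary" standing in for a real ELF/PE symbol table.
fake_binary = b"\x7fELF...printf\x00strcpy\x00memcpy\x00gets\x00"
print(risky_imports(fake_binary))  # [b'strcpy', b'gets']
```

A real binary analyzer parses the executable format and models data flow rather than grepping bytes, but the example shows why source code is not a prerequisite for finding classes of flaws.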
2) Is this a SaaS model?
How do you charge for your services?  Do you see software vendors
using your services, or enterprises?

Customers upload their binaries to us and we deliver an analysis of their
security flaws via our web portal.  We charge by the megabyte of
code.  We have both software vendors and enterprises who write or outsource
their own custom software using our services.  We also have
enterprises who are purchasing software ask the software vendors to submit their
binaries to us for a 3rd party analysis.  They use this analysis as a
factor in their purchasing decision. It can lead to a "go/no go" decision, a
promise by the vendor to remediate the issues found, or a reduction in price to
compensate for the cost of additional controls or the cost of incident
response that insecure software necessitates.
3) I was a Qualys customer
— a VA/VM SaaS company.  Qualys had to spend quite
a bit of time
convincing customers that the storage of their VA data was
secure.  How does Veracode address a customer’s security concerns when
uploading their binaries?

We are
absolutely fanatical about the security of our customers data.  I look back
at the days when I was a security consultant where we had vulnerability
data on laptops and corporate file shares and I say, "what were we
thinking?"  All customer data at Veracode is encrypted in storage and at
rest with a unique key per application and customer.  Everyone at Veracode
uses 2 factor authentication to log in and 2 factor is the default for
customers.  Our data center is a SAS 70 Type II facility. All data
access is logged so we know exactly who looked at what and when. As security
people we are professionally paranoid and I think it shows through in the system
we built.  We also believe in 3rd party verification so we have had a top 
security boutique do a security review of our portal.
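The "unique key per application and customer" idea Chris mentions is commonly built by deriving per-tenant data keys from a master secret. The sketch below uses HMAC-SHA-256 as a generic key-derivation pattern; the master key value and identifiers are placeholders, and this is not a description of Veracode's actual scheme:

```python
import hashlib
import hmac

# Placeholder master secret; in practice this would live in an HSM or KMS.
MASTER_KEY = b"master-secret-held-in-an-hsm"

def derive_key(customer_id: str, application_id: str) -> bytes:
    """Derive a distinct 256-bit data key per (customer, application) pair."""
    label = f"{customer_id}/{application_id}".encode()
    return hmac.new(MASTER_KEY, label, hashlib.sha256).digest()

k_billing = derive_key("acme", "billing-app")
k_hr = derive_key("acme", "hr-app")
print(k_billing != k_hr)  # True: each pair gets its own key
```

The operational benefit is blast-radius limitation: compromising one derived key exposes one customer's one application, not the whole data store.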
4) With IBM’s acquisition
of Watchfire and today’s announcement that HP will buy
SPI Dynamics, how does
Veracode stand to play in this market of giants who will
be competing to
drive service revenues?

We have designed our solution from the ground up to have the Web 2.0 ease of
use and experience and we have the quality of analysis that I feel is the best
in the market today.  Another advantage is that Veracode is an independent
assessment company that customers can trust to not play favorites to other
software companies because of partnerships or alliances. Would Moody’s or
Consumer Reports be trusted as a 3rd party if they were part of a big financial
or technology conglomerate? We feel a 3rd party assessment is important in the
security world.
5) Do you see the latest
developments in vulnerability research, with the drive for such
initiatives, pressuring developers to produce secure code out of the box for
fear of exploit, or is it driving the activity to companies like yours?

I think the real driver for developers to produce secure code and for developers
and customers to seek code assessments is the reality that the costs of insecure
code go up every day and it’s adding to the operational risk of companies that
use software.  People exploiting vulnerabilities are not going away
and there is no way to police the internet of vulnerability
information.  The only solution is for customers to demand more secure
code, and proof of it, and for developers to deliver more secure code in