
Take5 (Episode #6) – Five Questions for Andy Jaquith, Yankee Group Analyst and Metrician…

September 13th, 2007

This sixth episode of Take5 interviews Andy Jaquith, Yankee Group analyst and champion of all things Metric… I must tell you that Andy's answers to my interview questions were amazing to read, and I really appreciate the thought and effort he put into them.

First a little background on the victim:

Andrew Jaquith is a program manager in Yankee Group’s Enabling
Technologies Enterprise group with expertise in portable digital
identity and web application security. As Yankee Group’s lead security
analyst, Jaquith drives the company’s security research agenda and
researches disruptive technologies that enable tomorrow’s Anywhere
Enterprise™ to secure its information assets.


Jaquith
has 15 years of IT experience. Before joining Yankee Group, he
co-founded and served as program director at @stake, Inc., a security
consulting pioneer, which Symantec Corporation acquired in 2004. Before
@stake, Jaquith held project manager and business analyst positions at
Cambridge Technology Partners and FedEx Corporation.

His application security and metrics research has been featured in publications such as CIO, CSO and IEEE Security & Privacy. In addition, Jaquith is the co-developer of a popular open source wiki software package. He is also the author of the recently released Pearson Addison-Wesley book, Security Metrics: Replacing Fear, Uncertainty and Doubt, which reviewers have praised as both “sparkling and witty” and “one of the best written security books ever.”

Jaquith holds a B.A. degree in economics and political science from Yale University.

Questions:

1) Metrics. Why is this such a contentious topic? Isn't the basic axiom of "you can't manage what you don't measure" just common sense? Metrics evoke passionate debate amongst proponents and opponents alike. Why are we still debating the utility of measurement?


The arguments over metrics are overstated, but to the extent they are 
contentious, it is because "metrics" means different things to 
different people. For some people, who take a risk-centric view of 
security, metrics are about estimating risk based on a model. I'd put 
Pete Lindstrom, Russell Cameron Thomas and Alex Hutton in this camp. 
For those with an IT operations background, metrics are what you get 
when you measure ongoing activities. Rich Bejtlich and I are 
probably closer to this view of the world. And there is a third camp 
that feels metrics should be all about financial measures, which 
brings us into the whole "return on security investment" topic. A lot 
of the ALE crowd thinks this is what metrics ought to be about. Just 
about every security certification course (SANS, CISSP) talks about 
ALE, for reasons I cannot fathom.

Once you understand that a person's point of view of "metrics" is 
going to be different depending on the camp they are in -- risk, 
operations or financial -- you can see why there might be some 
controversy between these three camps. There's also a fourth group 
that takes a look at the fracas and says, "I know why measuring 
things matter, but I don't believe a word any of you are talking 
about." That's Mike Rothman's view, I suspect.

Personally, I have always taken the view that metrics should measure 
things as they are (the second perspective), not as you imagine, 
model or expect them to be. That's another way of saying that I am an 
empiricist. If you collect data on things and swirl them around in a
blender, interesting things will stratify out.

Putting it another way: I am a measurer rather than a modeler. I 
don't claim to know what the most important security metrics are. But 
I do know that people measure certain things, and that those things give them insights into their firms' performance. To that end, I've got about 100 metrics documented in my book; these are largely based on what people tell me they measure. Dan Geer likes to say, "it almost
doesn't matter what you measure, but get started and measure 
something." The point of my book, largely, is to give some ideas 
about what those somethings might be, and to suggest techniques for 
analyzing the data once you have them.

Metrics aren't really that contentious. Just about everyone in the 
securitymetrics.org community is pretty friendly and courteous. It's 
a "big tent." Most of the differences are with respect to 
inclination. But, outside of the "metrics community" it really comes 
down to a basic question of belief: you either believe that security 
can be measured or you don't.

The way you phrased your question, by the way, implies that you 
probably align a little more closely with my operational/empiricist 
view of metrics. But I'd expect that, Chris -- you've been a CSO, and 
in charge of operational stuff before. :)

2) You've got a storied background from FedEx to @Stake to the Yankee Group. 
I see your experience trending from the operational to the analytical.  How much of your
operational experience lends itself to the practical collection and presentation of
metrics -- specifically security metrics?  Does your broad experience help you in
choosing what to measure and how?


That's a keen insight, and one I haven't thought of before. You've 
caused me to get all introspective all of a sudden. Let me see if I 
can unravel the winding path that's gotten me to where I am today.

My early career was spent as an IT analyst at Roadway, a serious, 
operationally-focused trucking firm. You know those large trailers 
you see on the highways with "ROADWAY" on them? That's the company I 
was with. They had a reputation as being like the Marines. Now, I 
wasn't involved in the actual day-to-day operations side of the 
business, but when you work in IT for a company like that you get to 
know the business side. As part of my training I had to do "ride 
alongs," morning deliveries and customer visits. Later, I moved to 
the contract logistics side of the house, where I helped plan IT 
systems for transportation brokerage services and contract warehouses 
the company ran. The logistics division was the part of Roadway that 
was actually acquired by FedEx.

I think warehouses are just fascinating. They are one hell of a lot 
more IT intensive than you might think. I don't just mean bar code 
readers, forklifts and inventory control systems; I mean also the 
decision support systems that produce metrics used for analysis. For 
example, warehouses measure an overall metric for efficiency called 
"inventory turns" that describes how fast your stock moves through 
the warehouse. If you put something in on January 1 and move it out 
on December 31 of the same year, that part has a "velocity" of 1 turn 
per year. Because warehouses are real estate like any other, you can 
spread out your fixed costs by increasing the number of turns through 
the warehouse.
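To make the arithmetic concrete, here is a minimal sketch of the turns calculation described above; the function name and figures are invented for illustration, not drawn from Roadway or Dell.

```python
# Illustrative sketch: computing inventory turns, the warehouse efficiency
# metric described above. All figures are hypothetical.

def inventory_turns(cost_of_goods_sold: float, average_inventory_value: float) -> float:
    """How many times the stock on hand cycles through the warehouse per year."""
    return cost_of_goods_sold / average_inventory_value

# A part received January 1 and shipped December 31 cycles once: 1 turn per year.
print(inventory_turns(cost_of_goods_sold=1_000_000, average_inventory_value=1_000_000))  # 1.0

# Holding less stock for the same throughput raises the turn rate, spreading the
# fixed cost of the real estate over more shipments.
print(inventory_turns(cost_of_goods_sold=1_000_000, average_inventory_value=25_000))     # 40.0
```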

For example, one of the reasons why Dell -- a former customer of mine 
at Roadway -- was successful was that they figured out how to make
their suppliers hold their inventory for them and deliver it to final 
assembly on a "just-in-time" (JIT) basis, instead of keeping lots of 
inventory on hand themselves. That enabled them to increase the 
number of turns through their warehouses to something like 40 per 
year, when the average for manufacturing was like 12. That efficiency 
gain translated directly to profitability. (Digression: Apple, by the 
way, has lately been doing about 50 turns a year through their 
warehouses. Any wonder why they make as much money on PCs as HP, who 
has 6 times more market share? It's not *just* because they choose 
their markets so that they don't get suckered into the low margin 
part of the business; it's also because their supply chain operations 
are phenomenally efficient.)

Another thing I think is fascinating about warehouses is how you 
account for inventory. Most operators divide their inventory into 
"A", "B" and "C" goods based on how fast they turn through the 
warehouse. The "A" parts might circulate 10-50x faster than the C 
parts. So, a direct consequence is that when you lay out a warehouse 
you do it so that you can pick and ship your A parts fastest. The 
faster you do that, the more efficient your labor force and the less 
it costs you to run things. Neat, huh?
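A rough sketch of that A/B/C split follows, with cutoffs invented purely for illustration; real operators derive them from their own data.

```python
# Hypothetical ABC classification by turn rate. The cutoffs are arbitrary
# illustrations, not industry standards.

def abc_class(turns_per_year: float) -> str:
    if turns_per_year >= 20:
        return "A"   # fast movers: slot them for the quickest pick-and-ship
    if turns_per_year >= 5:
        return "B"
    return "C"       # slow movers: cheaper, less accessible slots

skus = {"widget": 42.0, "gasket": 8.5, "spare-panel": 0.7}  # turns/year, invented
for sku, turns in skus.items():
    print(sku, abc_class(turns))
```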

Now, I mention these things not strictly speaking to show you what a 
smartypants I am about supply chain operations. The real point is to 
show how serious operational decisions are made based on deep 
analytics. Everything I just mentioned can be modeled and measured: 
where you site the warehouses themselves, how you design the 
warehouse to maximize your ability to pick and ship the highest-
velocity items, and what your key indicators are. There's a virtuous 
feedback loop in place that helps operators understand where they are 
spending their time and money, and that in turn drives new 
innovations that increase efficiencies.

In supply chain, the analytics are the key to absolutely everything. 
And they are critical in an industry where costs matter. In that 
regard, manufacturing shares a lot with investment banking: leading 
firms are willing to invest a phenomenal amount of time and money to 
shave off a few basis points on a Treasury bill derivative. But this 
is done with experiential, analytical data used for decision support 
that is matched with a model. You'd never be able to justify 
redesigning a warehouse or hiring all of Yale's graduating math 
majors unless you could quantify and measure the impact of those 
investments on the processes themselves. Experiential data feeds, and 
improves, the model.

In contrast, when you look at security you see nothing like this. 
Fear and uncertainty rule, and almost every spending decision is made 
on the basis of intuition rather than facts. Acquisition costs 
matter, but operational costs don't. Can you imagine what would 
happen if you plucked a seasoned supply-chain operations manager out 
of a warehouse, plonked him down in a chair, and asked him to watch 
his security counterpart in action? Bear in mind this is a world 
where his CSO friend is told that the answer to everything is "buy 
our software" or "install our appliance." The warehouse guy would 
look at him like he had two heads. Because in his world, you don't 
spray dollars all over the place until you have a detailed, grounded, 
empirical view of what your processes are all about. You simply don't 
have the budget to do it any other way.

But in security, the operations side of things is so immature, and 
the gullibility of CIOs and CEOs so high, that they are willing to 
write checks without knowing whether the things they are buying 
actually work. And by "work," I don't mean "can be shown to stop a 
certain number of viruses at the border"; I mean "can be shown to 
decrease time required to respond to a security incident" or "has 
increased the company's ability to share information without 
incurring extra costs," or "has cut the pro-rata share of our IT 
operations spent on rebuilding desktops."

Putting myself in the warehouse manager's shoes again: for security, 
I'd like to know why nobody talks about activity-based costing. Or 
about process metrics -- that is, cycle times for everyday security 
activities -- in a serious way. Or benchmarking -- does my firm have 
twice as many security defects in our web applications as yours? Are 
we in the first, second, third or fourth quartiles?
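As a hedged sketch of what that benchmarking question could look like, assuming you had peer figures for web-application defects per application (all numbers below are invented):

```python
# Hypothetical quartile benchmark: where does my firm's web-app defect count
# fall relative to peers? Peer figures are invented for illustration.
import statistics

peer_defects_per_app = [3, 5, 6, 8, 9, 11, 14, 22]   # illustrative peer data
my_defects_per_app = 12

q1, q2, q3 = statistics.quantiles(peer_defects_per_app, n=4)  # quartile cut points

if my_defects_per_app <= q1:
    quartile = "first (best)"
elif my_defects_per_app <= q2:
    quartile = "second"
elif my_defects_per_app <= q3:
    quartile = "third"
else:
    quartile = "fourth (worst)"

print(f"My firm is in the {quartile} quartile for web-app defects.")
```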

If the large security companies were serious, we'd have a firmer grip 
on the activity, impact and cost side of the ledger. For example, why 
won't AV companies disclose how much malware is actually circulating 
within their customer bases, despite their promises of "total 
protection"? When the WMF zero-day exploit came out, how come none of 
the security companies knew how many of their customers were 
infected? And how much the cleanup efforts cost? Either nobody knew, 
or nobody wanted to tell. I think it's the former. If I were in the 
shoes of my Roadway operational friend, I'd be pissed off about the 
complete lack of feedback between spending, activities, impact and cost.

If this sounds like a very odd take on security, it is. My mentor and 
former boss, Dan Geer, likes to say that there are a lot of people 
who don't have classical security training, but who bring "hybrid 
vigor" to the field. I identify with that. With my metrics research, 
I just want to see if we can bring serious analytic rigor to a field 
that has resisted it for so long. And I mean that in an operational 
way, not a risk-equation way.

So, that's an exceptionally long-winded way of saying "yes" to your 
question -- I've trended from operational to analytical. I'm not sure 
that my past experience has necessarily helped me pick particular 
security metrics per se, but it has definitely biased me towards 
those that are operational rather than risk-based.

3) You've recently published your book. I think it was a great appetite whetter, but I was left -- as were, I think, many of us who are members of the "lazy guild" -- wanting more. Do you plan to follow up with a metrics toolkit of sorts? You know, a templated guide -- Metrics for Dummies?


You know, that's a great point. The fabulous blogger Layer 8, who 
gave my book an otherwise stunning review that I am very grateful for 
("I tucked myself into bed, hoping to sleep—but I could not sleep 
until I had read Security Metrics cover to cover. It was That 
Good."), also had that same reservation. Her comment was, "that final 
chapter just stopped short and dumped me off the end of the book, 
without so much as a fare-thee-well Final Overall Summary. It just 
stopped, and without another word, put on its clothes and went home". 
Comparing my prose to a one night stand is pretty funny, and a 
compliment.

Ironically, as the deadline for the book drew near, I had this great 
idea that I'd put in a little cheat-sheet in the back, either as an 
appendix or as part of the endpapers. But as with many things, I 
simply ran out of time. I did what Microsoft did to get Vista out the 
door -- I had to cut features and ship the fargin' bastid.

One of the great things about writing a book is that people write you 
letters when they like or loathe something they read. Just about all 
of my feedback has been very positive, and I have received a number 
of very thoughtful comments that shed light on what readers' 
companies are doing with metrics. I hope to use the feedback I've 
gotten to help me put together a "cheat sheet" that will boil the 
metrics I discuss in the book into something easier to digest.

4) You've written about the impending death of traditional Anti-Virus technology and its
evolution to combat the greater threats from adaptive Malware.  What role do you think
virtualization technology that provides a sandboxed browsing environment will have in
this space, specifically on client-side security?


It's pretty obvious that we need to do something to shore up the 
shortcomings of signature-based anti-malware software. I regularly 
check out a few of the anti-virus benchmarking services, like the 
OITC site that aggregates the Virustotal scans. And I talk to a 
number of anti-malware companies who tell me things they are seeing. 
It's pretty clear that current approaches are running out of gas. All 
you have to do is look at the numbers: unique malware samples are 
doubling every year, and detection rates for previously-unseen 
malware range from the single digits to the 80% mark. For an industry 
that has long said they offered "total protection," anything less 
than 100% is a black eye.
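The arithmetic behind those detection-rate figures is simple enough; here is a rough sketch assuming you had per-scanner verdicts for a batch of previously-unseen samples (scanner names and verdicts are made up):

```python
# Hypothetical sketch of per-scanner detection rates over previously-unseen
# samples, the kind of figure multi-scanner aggregators report. Data invented.

scan_results = {
    "sample-001": {"scannerA": True,  "scannerB": False, "scannerC": True},
    "sample-002": {"scannerA": False, "scannerB": False, "scannerC": True},
    "sample-003": {"scannerA": True,  "scannerB": False, "scannerC": False},
}

for scanner in ("scannerA", "scannerB", "scannerC"):
    detected = sum(1 for verdicts in scan_results.values() if verdicts[scanner])
    rate = 100.0 * detected / len(scan_results)
    print(f"{scanner}: {rate:.0f}% of unseen samples detected")
```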

Virtualization is one of several alternative approaches that vendors 
are using to help boost detection rates. The idea with virtualization 
is to run a piece of suspected malware in a virtual machine to see 
what it does. If, after the fact, you determine that it did something 
naughty, you can block it from running in the real environment. It 
sounds like a good approach to me, and is best used in combination 
with other technologies.
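Conceptually, that approach looks something like the sketch below. The helper function and the behavior list are entirely hypothetical stand-ins for a real instrumented sandbox and policy engine, not any vendor's API:

```python
# Conceptual sketch of sandbox-assisted detection. run_in_sandbox() and the
# behavior set are hypothetical placeholders; a real product instruments a VM
# and applies far richer heuristics, alongside signatures and other layers.

SUSPICIOUS_BEHAVIORS = {
    "writes_to_system_dir",
    "modifies_autorun_keys",
    "injects_into_other_process",
}

def run_in_sandbox(sample_name: str) -> set[str]:
    """Placeholder for running the sample in an isolated VM and collecting
    the file, registry, process and network behaviors it exhibits."""
    fake_observations = {   # invented results, for illustration only
        "invoice.exe": {"writes_to_system_dir", "modifies_autorun_keys"},
        "notes.exe": {"reads_own_config"},
    }
    return fake_observations.get(sample_name, set())

def verdict(sample_name: str) -> str:
    observed = run_in_sandbox(sample_name)
    if observed & SUSPICIOUS_BEHAVIORS:
        return "block"    # did something naughty in the VM: keep it off the real host
    return "allow"        # unremarkable in the sandbox; other defenses still apply

for sample in ("invoice.exe", "notes.exe"):
    print(sample, "->", verdict(sample))
```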

Now, I'm not positive how pervasive this is going to be on the 
desktop. Existing products are already pretty resource-hungry. 
Virtualization would add to the burden. You've probably heard people 
joke: "thank God computers are dual-core these days, because we need 
one of 'em to run the security software on." But I do think that 
virtualized environments used for malware detection will become a 
fixture in gateways and appliances.

Other emergent ideas that complement virtualization are behavior 
blocking and herd intelligence. Herd intelligence -- a huge malware 
blacklist-in-the-sky -- is a natural services play, and I believe all 
successful anti-malware companies will have to embrace something like 
this to survive.

5) We've seen some fairly important back-office critical applications (CRM, ERP, Financials) make their way to the Web, and now GoogleApps is staking a claim for the SMB. How do you see the SaaS model affecting the management of security -- and ultimately risk -- over time?


Software as a service for security is already here. We've already 
seen fairly pervasive managed firewall service offerings -- the 
carriers and companies like IBM Global Services have been offering 
them for years. Firewalls still matter, but they are nowhere near as 
important to the overall defense posture as before. That's partly 
because companies need to put a lot of holes in the firewall. But 
it's also because some ports, like HTTP/HTTPS, are overloaded with 
lots of other things: web services, instant messaging, VPN tunnels 
and the like. It's a bit like the old college prank of filling a 
paper bag with shaving cream, sliding it under a shut door, then jumping 
on it and spraying the payload all over the room's occupants. HTTP is 
today's paper bag.

In the services realm, for more exciting action, look at what 
MessageLabs and Postini have done with the message hygiene space. At 
Yankee we've been telling our customers that there's no reason why an 
enterprise should bother to build bespoke gateway anti-spam and anti-
malware infrastructures any more. That's not just because we like 
MessageLabs or Postini. It's also because the managed services have a 
wider view of traffic than a single enterprise will ever have, and 
benefit from economies of scale on the research side, not to mention 
the actual operations.

Managed services have another hidden benefit; you can also change 
services pretty easily if you're unhappy. It puts the service 
provider's incentives in the right place. Qualys, for example, 
understands this point very well; they know that customers will leave 
them in an instant if they stop innovating. And, of course, whenever 
you accumulate large amounts of performance data across your customer 
base, you can benchmark things. (A subject near and dear to my heart, 
as you know.)

With regards to the question about risk, I think managed services do 
change the risk posture a bit. On the one hand, the act of 
outsourcing an activity to an external party moves a portion of the 
operational risk to that party. This is the "transfer" option of the 
classic "ignore, mitigate, transfer" set of choices that risk 
management presents. Managed services also reduce political risk in a 
"cover your ass" sense, too, because if something goes wrong you can 
always point out that, for instance, lots of other people use the 
same vendor you use, which puts you all in the same risk category. 
This is, if you will, the "generally accepted practice" defense.

That said, particular managed services with large customer bases 
could accrue more risk by virtue of the fact that they are bigger 
targets for mischief. Do I think, for example, that spammers target 
some of their evasive techniques towards Postini and MessageLabs? I 
am sure they do. But I would still feel safer outsourcing to them 
rather than maintaining my own custom infrastructure.

Overall, I feel that managed services will have a "smoothing" or 
dampening effect on the risk postures of enterprises taken in 
aggregate, in the sense that they will decrease the volatility in 
risk relative to the broader set of enterprises (the "alpha", if you 
will). Ideally, this should also mean a decrease in the *absolute* 
amount of risk. Putting this another way: if you're a rifle shooter, 
it's always better to see your bullets clustered closely together, 
even if they don't hit near the bull's eye, rather than seeing them 
near the center, but dispersed. Managed services, it seems to me, can 
help enterprises converge their overall levels of security -- put the 
bullets a little closer together instead of all over the place. 
Regulation, in cases where it is prescriptive, tends to do that too.
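To put some toy numbers behind the rifle-shooter image (everything here is invented; it only illustrates the volatility point, not real risk data):

```python
# Illustrative sketch of the smoothing effect: managed services trade a wide
# spread of risk postures for a tighter cluster. All scores are invented.
import statistics

# "Risk scores" for ten enterprises (lower is better), purely hypothetical.
do_it_yourself = [2, 9, 3, 8, 1, 10, 4, 9, 2, 7]   # dispersed: some great, some awful
managed        = [5, 6, 5, 4, 6, 5, 5, 6, 4, 5]    # clustered around a middling level

for label, scores in (("DIY", do_it_yourself), ("Managed", managed)):
    print(label, "mean:", statistics.mean(scores),
          "stdev:", round(statistics.stdev(scores), 2))

# The managed population is less volatile (lower stdev); ideally its mean drops
# too, but the tighter clustering itself is the "smoothing" described above.
```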

Bonus Question:
6) If you had one magical dashboard that could display 5 critical security metrics to the Board/Executive Management, regardless of industry, what would those elements be?


I would use the Balanced Scorecard, a creation of Harvard professors 
Kaplan and Norton. It divides executive management metrics into four 
perspectives: financial, internal operations, customer, and learning 
and growth. The idea is to create a dashboard that incorporates 6-8 
metrics into each perspective. The Balanced Scorecard is well known 
to the corner office, and is something that I think every security 
person should learn about. With a little work, I believe quite 
strongly that security metrics can be made to fit into this framework.

Now, you might ask yourself, I've spent all of this work organizing 
my IT security policies along the lines of ISO 17799/2700x, or COBIT, 
or ITIL. So why can't I put together a dashboard that organizes the 
measurements in those terms? What's wrong with the frameworks I've 
been using? Nothing, really, if you are a security person. But if you 
really want a "magic dashboard" that crosses over to the business 
units, I think basing scorecards on security frameworks is a bad 
idea. That's not because the frameworks are bad (in fact most of them 
are quite good), but because they aren't aligned with the business. 
I'd rather use a taxonomy the rest of the executive team can 
understand. Rather than make them understand a security or IT 
framework, I'd rather try to meet them halfway and frame things in 
terms of the way they think.

So, for example: for Financial metrics, I'd measure how much my IT security infrastructure is costing, straight up and from an activity-based perspective. I'd want to know how much it costs to secure each revenue-generating transaction; quick-and-dirty risk scores for revenue-generating and revenue/cost-accounting systems; DDOS downtime costs. For the Customer perspective I'd want to know the percentage and number of customers who have access to internal systems; cycle times for onboarding/offboarding customer accounts; "toxicity rates" of customer data I manage; the number of privacy issues we've had; the percentage of customers who have consulted with the security team; number and kind of remediation costs of audit items that are customer-related; number and kind of regulatory audits completed per period, etc. The Internal Process perspective covers some of the easiest things to measure, and is all about security ops: patching efficiency, coverage and control metrics, and the like. For Learning 
and Growth, it would be about threat/planning horizon metrics, 
security team consultations, employee training effectiveness and 
latency, and other issues that measure whether we're getting 
employees to exhibit the right behaviors and acquire the right skills.

That's meant to be an illustrative list rather than definitive, and I 
confess it is rather dense. At the risk of getting all Schneier on 
you, I'd refer your readers to the book for more details. Readers can 
pick and choose from the "catalog" and find metrics that work for 
their organizations.
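As a rough sketch of how a few of the example metrics above might be organized into the four scorecard perspectives (the grouping and layout are my own illustration, not taken from the book or from Kaplan and Norton):

```python
# Hypothetical skeleton of a Balanced Scorecard-style security dashboard,
# using a handful of the example metrics mentioned above. Structure is illustrative.

security_scorecard = {
    "Financial": [
        "Cost to secure each revenue-generating transaction",
        "DDoS downtime costs",
    ],
    "Customer": [
        "Percentage of customers with access to internal systems",
        "Cycle time to onboard/offboard customer accounts",
        "Toxicity rate of customer data under management",
    ],
    "Internal Process": [
        "Patching efficiency",
        "Coverage and control metrics",
    ],
    "Learning and Growth": [
        "Security team consultations per period",
        "Employee security-training effectiveness and latency",
    ],
}

for perspective, metrics in security_scorecard.items():
    print(perspective)
    for metric in metrics:
        print("  -", metric)
```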

Overall, I do think that we need to think a whole lot less about 
things like ISO and a whole lot more about things like the Balanced 
Scorecard. We need to stop erecting temples to Securityness that 
executives don't give a damn about and won't be persuaded to enter. 
And when we focus just on dollars, ALE and "security ROI", we make 
things too simple. We obscure the richness of the data that we can 
gather, empirically, from the systems we already own. Ironically, the 
Balanced Scorecard itself was created to encourage executives to move 
beyond purely financial measures. Fifteen years later, you'd think we 
security practitioners would have taken the hint.
Categories: Risk Management, Security Metrics, Take5