Search Results

Keyword: ‘taxonomy’

Cloud Computing Not Ready For Prime Time?

March 9th, 2009

I just read another in a never-ending series of articles that takes a polarized view of Cloud Computing and its readiness for critical applications and data.

In the ComputerWorld article titled "Cloud computing not ready for critical apps," Craig Steadman and Patrick Thibodeau present some very telling quotes from the CIOs of some large enterprises regarding their reticence toward utilizing "Cloud Computing" and its readiness for their mission-critical needs.

The reasons are actually quite compelling, and I speak to them (and more) in my latest Cloud Computing presentation, which I am giving at Source Boston this week.


Reliability, availability and manageability are all potential show-stoppers for the CIOs in this article, but these are issues of economic and adoptive context that don't present the entire picture.

What do I mean?

At the New England Cloud Computing Users' Group, a Cloud-based startup called Pixily presented on their use of Amazon's AWS services. They painted an eye-opening business case which detailed the agility and tremendous cost savings that the "Cloud" offers.  "The Cloud" provides them with reduced time-to-market, no up-front capital expenditures and allows them to focus on their core competencies. 

All awesome stuff.

I asked them what their use of AWS, which amounted to a sole-source service provider, did to their disaster recovery, redundancy/resiliency and risk management processes.  They had to admit that the day they went live, with feature coverage on the front page of several newspapers, also happened to be the day that Amazon suffered an eight-hour outage, and thus, so did they.

Now, for a startup, the benefits often outweigh the risks associated with downtime and vendor lock-in. For an established enterprise with cutthroat service levels, regulatory pressures and demanding customers who won't or can't tolerate outages, this is not the case.
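The Pixily anecdote above makes a simple point concrete: a service that depends serially on a sole-source provider can be no more available than that provider. A back-of-the-envelope sketch (the figures are illustrative assumptions, not Amazon's or Pixily's actual numbers):

```python
# Availability of a serial dependency chain is the product of the
# availabilities of its links; a sole-source provider caps the total.
# All figures below are illustrative assumptions.

def chain_availability(*links: float) -> float:
    """Composite availability of serially dependent components."""
    total = 1.0
    for a in links:
        total *= a
    return total

# An app that is itself 99.9% available, hosted on a provider at 99.5%:
print(f"{chain_availability(0.999, 0.995):.4%}")  # 99.4005%
```

Adding an independent second provider changes the math entirely, which is exactly the redundancy/resiliency question the startup had no good answer to.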

Today we're suffering from the fact that emerging Cloud Computing offerings are simply not mature if what you're looking for is holistic, cohesive management, reliability, resilience and transparency across suppliers of Cloud services.

We will get there as adoption increases and businesses start to lean on providers to create and adopt standards that answer the issues above, but today, if you're an enterprise that needs five 9's, you may come to the same conclusion as the CIOs in the CW article.  If you're an SME/SMB/startup, you may find everything you need in the Cloud.
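For scale, "five 9's" is a brutally small downtime budget. A quick calculation (assuming a 365-day year; the tiers are the standard availability targets, not figures from the article):

```python
# Annual downtime budget implied by common availability targets.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability: float) -> float:
    """Minutes of allowed downtime per year at a given availability."""
    return (1.0 - availability) * MINUTES_PER_YEAR

for label, a in [("three 9's", 0.999),
                 ("four 9's", 0.9999),
                 ("five 9's", 0.99999)]:
    print(f"{label}: {downtime_minutes_per_year(a):.1f} minutes/year")
```

At five 9's the budget is roughly 5.3 minutes a year; a single eight-hour outage like the one described above burns through about ninety years' worth of it.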

It's important, however, to keep a balanced, realistic and contextual perspective when addressing Cloud Computing and its readiness — and yours — for critical applications.  Polarizing the discussion to one hyperbolic end or the other is not really helpful.


Categories: Cloud Computing, Cloud Security

What People REALLY Mean When They Say “THE Cloud” Is More Secure…

February 20th, 2009

Over the last two days, I've seen a plethora (yes, Jefe, a plethora) of trade rag and blog articles espousing that The Cloud is more secure than an enterprise's datacenter and that Cloud security concerns are overblown.  I'd pick these things apart, but honestly, I've got work to do.


Here's the problem with these generalizations, even when some of the issues these people describe are actually reasonably good points:

Almost all of these references to "better security through Cloudistry" are drawn against examples of Software as a Service (SaaS) offerings.  SaaS is not THE Cloud to the exclusion of everything else.  Keep defining SaaS as THE Cloud and you're being intellectually dishonest (and ignorant).

But since people continue to attest to SaaS==Cloud, let me point out something relevant.

There are two classes of SaaS vendors: those that own the entire stack, including the platform and underlying infrastructure, and those that don't.

Those that have control/ownership over the entire stack naturally have the opportunity for much tighter control over the "security" of their offerings.  Why?  Because they run their business, and the datacenters and applications housed in them, with the same level of diligence that an enterprise would.

They have context.  They have visibility.  They have control.  They have ownership of the entire stack.  

The HUGE difference is that in many cases they only have to deal with supporting a limited number of applications.  This reflects positively on those who say Cloud SaaS providers are "more secure," mostly because they have less to secure.

Meanwhile, those SaaS providers that simply run their appstack atop someone else's platform and infrastructure are, in turn, at the mercy of their providers.  The information and applications are abstracted from the underlying platforms and infrastructure to the point that there is no unified telemetry or context between the two.  Further, add in the multi-tenancy issue and we're now talking about trust boundaries that get very fuzzy and hard to define: who is responsible for securing what?

Just. Like. An. Enterprise. 🙁

Check out the Cloud model below which shows the demarcation between the various layers of the SPI model of which SaaS is but ONE:

The further up the offering stack you go, the more control you have over your information and the security thereof. Oh, and just one other thing.  The notion that Cloud offerings diminish attack surfaces can be as much a boon for sophisticated attackers as it is a deterrent.  Why?  Because now they have a more clearly defined set of attack surfaces — usually at the application layer — which makes their job easier.
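The demarcation between the SPI layers can be sketched in code. The layer names and the responsibility split below are my illustrative assumptions about a typical breakdown, not a standard mapping:

```python
# Rough SPI-model demarcation: which layers the provider runs versus
# which the consumer must secure. Layer names and split points are
# illustrative assumptions, not a formal standard.

STACK = ["facilities", "hardware", "network", "hypervisor",
         "os", "middleware", "application", "data"]

# Number of layers (counting from the bottom) the provider is responsible for.
PROVIDER_LAYERS = {"IaaS": 4, "PaaS": 6, "SaaS": 7}

def responsibility(model: str) -> dict:
    """Map each stack layer to whoever typically secures it under a model."""
    n = PROVIDER_LAYERS[model]
    return {layer: ("provider" if i < n else "consumer")
            for i, layer in enumerate(STACK)}

print(responsibility("IaaS")["os"])    # consumer patches the OS under IaaS
print(responsibility("SaaS")["data"])  # even under SaaS, the data is yours
```

The fuzzy trust boundaries described above show up here as disputes over exactly where those split points sit, and that is precisely the part no two providers define the same way.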

Next time one of these word monkeys makes a case for how much more secure The Cloud is and references a single-application SaaS vendor in comparison to an enterprise running (and securing) hundreds of applications, remind them about this and this, both Cloud providers. I wrote about this last year in an article humorously titled "Cloud Providers Are Better At Securing Your Data Than You Are."

Like I said on Twitter this morning "I *love* the Cloud. I just don't trust it.  Sort of like why I don't give my wife the keys to my motorcycles."

We done now?


Categories: Cloud Computing, Cloud Security

Incomplete Thought: Separating Virtualization From Cloud?

February 18th, 2009

I was referenced recently in a CSO article titled "Four Questions On Google App Security." I wasn't interviewed for the story directly; Bill Brenner simply referenced our prior interviews and my skepticism toward virtualization security and Cloud security as a discussion point.

Google's response was interesting and a little tricky given how they immediately set about driving a wedge between virtualization and Cloud.  I think I understand why, but if the article featured someone like Amazon, I'm not convinced it would go the same way…

As I understand it, Google doesn't really leverage much in the way of virtualization (from the classical compute/hypervisor perspective) for its "cloud" offerings as compared to Amazon. That may be due in large part to differences in model and classification: Amazon AWS is an IaaS play while GoogleApps is a SaaS offering.

You can see why I made the abstraction layer in the cloud taxonomy/ontology model "optional."

This post dovetails nicely with Lori MacVittie's article today titled "Dynamic Infrastructure: The Cloud Within the Cloud" wherein she highlights how the obfuscation of infrastructure isn't always a good thing. Given my role, what's in that cloudy bubble *does* matter.

So here's my incomplete thought — a question, really:

How many of you assume that virtualization is an integral part of cloud computing? From your perspective do you assume one includes the other?  Should you care?

Yes, it's intentionally vague.  Have at it.


Dear Mr. Oberlin: Here’s Your Sign…

February 11th, 2009

No Good Deed Goes Unpunished…

I've had some fantastic conversations with folks over the last couple of weeks as we collaborated from the perspective of how a network and security professional might map/model/classify various elements of Cloud Computing.

I just spent several hours with folks at ShmooCon (a security conference) winding through the model with my peers getting excellent feedback.  

Prior to that, I've had many people say that the collaboration has yielded a much simpler view on what the Cloud means to them and how to align solutions sets they already have and find gaps with those they don't.

My goal was to share my thinking in a way which helps folks with a similar bent get a grasp on what this means to them.  I'm happy with the results.

And then….one day at Cloud Camp…

However, it seems I chose an unfortunate way of describing what I was doing in calling it a taxonomy/ontology, despite what I still feel is a clear definition of these words as they apply to the work.

I say unfortunate because I came across a post by Steve Oberlin, Cassatt's Chief Scientist, on his "Cloudology" blog titled "Cloud Burst" that resonates with me as one of the most acerbic, condescending and pompous contributions to nothingness I have read in a long time.

Steve took 9 paragraphs and 7,814 characters to basically say that he doesn't like people using the words taxonomy or ontology to describe efforts to discuss and model Cloud Computing and that we're all idiots and have provided nothing of use.

The most egregiously offensive comment was one of his last points:

I do think some blame (a mild chastisement) is owed to anyone participating in the cloud taxonomy conversation that is not exercising appropriately-high levels of skepticism and insisting on well-defined and valid standards in their frameworks.  Taxonomies are thought-shaping tools and bad tools make for bad thinking.   One commenter on one of the many blogs echoing/amplifying the taxonomy conversation remarked that some of the diagrams were mere “marketecture” and others warned against special interests warping the framework to suit their own ends.  We should all be such critical thinkers.

What exactly about any of my efforts (since I'm not speaking for anyone else), in collaborating and opening up the discussion for unfettered review and critique, constitutes anything other than a high level of skepticism?  The reason I built the model in the first place was that I didn't feel the others accurately conveyed what was relevant and important from my perspective.  I was, gasp!, skeptical.

We definitely don't want to have discussions that might "shape thought."  That would be dangerous.  Shall we start burning books too?

From the Department of I've Had My Digits Trampled…

So what I extracted from Oberlin's whine is that we are all to be chided because somehow only he possesses the yardstick against which critical thought can be measured?  I loved this bit as he reviewed my contribution:

I might find more constructive criticism to offer, but the dearth of description and discussion of what it really means (beyond the blog’s comments, which were apparently truncated by TypePad) make the diagram something of a Rorschach test.  Anyone discussing it may be revealing more about themselves than what the concepts suggested by the diagram might actually mean.

Interestingly, over 60 other people have stooped low enough to add their criticism and input without me "directing" their interpretation so as not to be constraining, but again, somehow this is a bad thing.

So after sentencing to death all those poor electrons that go into rendering his rant about how the rest of us are pissing into the wind, what did Oberlin do to actually help clarify Cloud Computing?  What wisdom did he impart to set us all straight?  How did he contribute to the community effort — no matter how misdirected we may be — to make sense of all this madness?

Let me be much more concise than the 7,814 characters Oberlin needed and sum it up in 8:


So it is with an appropriate level of reciprocity that I thank him for it accordingly.


P.S. Not to be outdone, William Vambenepe has decided to bestow upon Oberlin a level of credibility due not to his credentials or his conclusions, but because (and I quote) "...[he] just love[s] sites that don't feel the need to use decorative pictures. His doesn't have a single image file which means that even if he didn't have superb credentials (which he does) he'd get my respect by default."

Yup, we bottom feeders who have to resort to images really are only in it for the decoration. Nice, jackass.

Update: The reason for the strikethrough above — and my public apology here — is that William contacted me and clarified he was not referring to me and my pretty drawings (my words), although in context it appeared as though he was.  I apologize, William, and instead of simply deleting it, I am admitting my error, apologizing and hanging it out to dry for all to see.  William is not a jackass. As is readily apparent, however, I am. 😉

Categories: Cloud Computing, Cloud Security

Virtual Jot Pad: The Cloud As a Fluffy Offering In the Consumerization Of IT?

December 2nd, 2008

This is a post that's bigger than a thought on Twitter but almost doesn't deserve a full blog entry, yet for some reason I just felt the need to write it down.  This may be one of those "well, duh" sorts of posts, but I can't quite verbalize what is tickling my noggin here.

As far as I can tell, the juicy bits stem from the intersection of cloud cost models, cloud adopter profile by company size/maturity and the concept of the consumerization of IT.

I think 😉

This thought was spawned by a couple of interesting blog posts:

  1. James Urquhart's blog titled "The Enterprise barrier-to-exit in cloud computing" and "What is the value of IT convenience" which led me to…
  2. Billy Marshall from rPath and his blog titled "The Virtual Machine Tsunami."

These blogs are about different things entirely but come full circle around to the same point.

James first shed some interesting light on the business taxonomy: the sorts of IT use cases and classes of applications and operations that drive businesses and their IT operations to the cloud. In his discussion with George Reese from O'Reilly via Twitter, he distinguished between the economically-driven early adopters of the cloud among SMBs and mature, larger enterprises:

George and I were coming at the problem from two different angles. George was talking about many SMB organizations, which really can't justify the cost of building their own IT infrastructure, but have been faced with a choice of doing just that, turning to (expensive and often rigid) managed hosting, or putting a server in a colo space somewhere (and maintaining that server). Not very happy choices.

Enter the cloud. Now these same businesses can simply grab capacity on demand, start and stop billing at their leisure and get real world class power, virtualization and networking infrastructure without having to put an ounce of thought into it. Yeah, it costs more than simply running a server would cost, but when you add the infrastructure/managed hosting fees/colo leases, cloud almost always looks like the better deal.

I, on the other hand, was thinking of medium to large enterprises which already own significant data center infrastructure, and already have sunk costs in power, cooling and assorted infrastructure. When looking at this class of business, these sunk costs must be added to server acquisition and operation costs when rationalizing against the costs of gaining the same services from the cloud. In this case, these investments often tip the balance, and it becomes much cheaper to use existing infrastructure (though with some automation) to deliver fixed capacity loads. As I discussed recently, the cloud generally only gets interesting for loads that are not running 24X7.

This existing investment in infrastructure therefore acts almost as a "barrier-to-exit" for these enterprises when considering moving to the cloud. It seems to me highly ironic, and perhaps somewhat unique, that certain aspects of the cloud computing market will be blazed not by organizations with multiple data centers and thousands upon thousands of servers, but by the little mom-and-pop shop that used to own a couple of servers in a colo somewhere that finally shut them down and turned to Amazon. How cool is that?

That's a really interesting differentiation that hasn't been made as much as it should, quite honestly.  In the marketing madness that has ensued, you get the feeling that everyone, including large enterprises, is rushing willy-nilly to the cloud and outsourcing the majority of their compute loads, not just the cloudbursting overflow.

Billy Marshall's post offers some profound points including one that highlights the oft-reported and oft-harder-to-prove concept of VM sprawl and the so-called "frictionless" model of IT, but with a decidedly cloud perspective. 

What was really interesting was the little incandescent bulb that began to glow when I read the following after reading James' post:

Amazon EC2 demand continues to skyrocket. It seems that business units are quickly sidestepping those IT departments that have not yet found a way to say "yes" to requests for new capacity due to capital spending constraints and high friction processes for getting applications into production (i.e. the legacy approach of provisioning servers with a general purpose OS and then attempting to install/configure the app to work on the production implementation, which is no doubt different than the development environment).

I heard a rumor that a new datacenter in Oregon was underway to support this burgeoning EC2 demand. I also saw our most recent EC2 bill, and I nearly hit the roof. Turns out when you provide frictionless capacity via the hypervisor, virtual machine deployment, and variable cost payment, demand explodes. Trust me.

I've yet to figure out whether the notion of frictionless capacity is a good thing if your ability to capacity plan is outpaced by a consumption model whose yield can simply continue to climb without constraint.  At what point do the cost savings of infrastructure, whose costs were once bounded by the resource constraints of physical servers, become eclipsed by runaway use?

I guess I'll have to wait to see his bill 😉
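The crossover James and Billy are circling can be roughed out as a break-even utilization between owned, fixed-cost capacity and pay-per-hour cloud capacity. All pricing below is invented purely for illustration:

```python
# Break-even utilization: below it, on-demand cloud capacity is cheaper;
# above it, owned capacity wins. All figures are invented assumptions,
# not real AWS or datacenter pricing.

OWNED_COST_PER_MONTH = 300.0   # amortized server + power + space + ops
CLOUD_COST_PER_HOUR = 0.80     # hypothetical on-demand instance-hour
HOURS_PER_MONTH = 730

def breakeven_hours() -> float:
    """Hours per month above which owning beats renting."""
    return OWNED_COST_PER_MONTH / CLOUD_COST_PER_HOUR

util = breakeven_hours() / HOURS_PER_MONTH
print(f"Break-even at {breakeven_hours():.0f} hours/month "
      f"(~{util:.0%} utilization)")
```

With these made-up numbers the break-even lands near half-time utilization, which is the arithmetic behind "the cloud generally only gets interesting for loads that are not running 24X7": steady loads favor sunk-cost infrastructure, bursty loads favor the cloud.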

Back to James' post, he references an interchange on Twitter with George Reese (whose post on "20 Rules for Amazon Cloud Security" I am waiting to fully comment on) in which George commented:

"IT is a barrier to getting things done for most businesses; the Cloud reduces or eliminates that barrier."

…which is basically the same thing Billy said, in a Nick Carr kind of way.  The key question here is: for whom?  As it relates to the SMB, I'd agree with this statement, but it just doesn't yet jibe for larger enterprises.  In James' second post, he drives this home:

I think these examples demonstrate an important decision point for IT organizations, especially during these times of financial strife. What is the value of IT convenience? When is it wise to choose to pay more dollars (or euros, or yen, or whatever) to gain some level of simplicity or focus or comfort? In the case of virtualization, is it always wise to leverage positive economic changes to expand service coverage? In the case of cloud computing, is it always wise to accept relatively high price points per CPU hour over managing your own cheaper compute loads?

Is the cloud about convenience or true business value?  Is any opportunity to eliminate a barrier — whether that barrier actually acts as a logical check and balance within the system — simply enough to drive business to the cloud?

I know the side-stepping-IT bit has been discussed ad nauseam within the context of cloud, namely when describing agility, flexibility and economics, but it never really occurred to me that the cloud — much in the way you might talk about an iPhone — is now being marketed as another instantiation of the democratization, commoditization and consumerization of IT — almost as an application — and not just a means to an end.

I think the thing that was interesting to me in looking at this issue from two perspectives is the differentiation between the SMB and the larger enterprise: their respective "how, what and why" cloud use cases are very much different.  That's probably old news to most, but I usually don't think about the SMB in my daily goings-on.

Just like the iPhone and its adoption for "business use," the larger enterprise is exercising discretion in what's being dumped onto the cloud with a more measured approach due, in part, to managing risk and existing sunk costs, while the SMB is running to embrace it at full speed, not necessarily realizing the hidden costs.


Categories: Cloud Computing

Endpoint Security vs. DLP? That’s Part Of the Problem…

March 31st, 2008

Larry Walsh wrote something (Defining the Difference Between Endpoint Security and Data Loss Prevention) that sparked an interesting debate based upon a vendor presentation given to him on "endpoint security" by SanDisk.

SanDisk is bringing to market a set of high-capacity USB flash drives that feature built-in filesystem encryption as well as strong authentication and access control.  If the device gets lost with the data on it, it’s "safe and secure" because it’s encrypted.  They are positioning this as an "endpoint security" solution.

I’m not going to debate the merits/downsides of that approach because I haven’t seen their pitch, but suffice it to say, I think it’s missing a "couple" of pieces to solve anything other than a very specific set of business problems.

Larry’s dilemma stems from the fact that he maintains that this capability and functionality is really about data loss protection and doesn’t have much to do with "endpoint security" at all:

We debated that in my office for a few minutes. From my perspective, this solution seems more like a data loss prevention solution than endpoint security. Admittedly, there are many flavors of endpoint security. When I think of endpoint security, I think of network access control (NAC), configuration management, vulnerability management and security policy enforcement. While this solution is designed for the endpoint client, it doesn’t do any of the above tasks. Rather, it forces users to use one type of portable media and transparently applies security protection to the data. To me, that’s DLP.

In today’s market taxonomy, I would agree with Larry.  However, what Larry is struggling with is not really the current state of DLP versus "endpoint security," but rather the future state of converged information-centric governance.  He’s describing the problem that will drive the solution as well as the inevitable market consolidation to follow.

This is actually the whole reason Mogull and I are talking about the evolution of DLP as it exists today to a converged solution we call CMMP — Content Management, Monitoring and Protection. {Yes, I just added another M for Management in there…}

What CMMP represents is the evolved and converged end-state technology integration of solutions that today provide a point solution but "tomorrow" will be combined/converged into a larger suite of services.

Off the cuff, I’d expect that we will see at a minimum the following technologies being integrated to deliver CMMP as a pervasive function across the information lifecycle and across platforms in flight/motion and at rest:

  • Data leakage/loss protection (DLP)
  • Identity and access management (IAM)
  • Network Admission/Access Control (NAC)
  • Digital rights/Enterprise rights management (DRM/ERM)
  • Seamless encryption based upon "communities of interest"
  • Information classification and profiling
  • Metadata
  • Deep Packet Inspection (DPI)
  • Vulnerability Management
  • Configuration Management
  • Database Activity Monitoring (DAM)
  • Application and Database Monitoring and Protection (ADMP)
  • etc…

That’s not to say they’ll all end up as a single software install or network appliance, but rather a consolidated family of solutions from a few top-tier vendors who have coverage across the application, host and network space. 

If you were to look at any enterprise today struggling with this problem, they likely have or are planning to have most of the point solutions above anyway.  The difficulty is that they’re all from different vendors.  In the future, we’ll see larger suites from fewer vendors providing a more cohesive solution.

This really gives us the "cross domain information protection" that Rich talks about.

We may never achieve the end-state described above in its entirety, but it’s safe to say that the more we focus on the "endpoint" rather than the "information on the endpoint," the bigger the problem we will have.


Security and Disruptive Innovation Part IV: Embracing Disruptive Innovation by Mapping to a Strategic Innovation Framework

November 29th, 2007

This is the last of the series on the topic of "Security and Disruptive Innovation."

In Part I we talked about the definition of innovation, cited some examples of general technology innovation/disruption, discussed technology taxonomies and lifecycles, and looked at what initiatives and technologies CIOs are investing in.

In Parts II and III we started to drill down and highlight some very specific disruptive technologies that were impacting Information Security.

In this last part, we will explore how to take these and future examples of emerging disruptive innovation and map them to a framework which will allow you to begin embracing them rather than reacting to disruptive innovation after the fact.

21. So How Can we embrace disruptive technology?
Most folks in an InfoSec role find themselves overwhelmed juggling the day-to-day operational requirements of the job against the onslaught of evolving technology, business, culture, and economic "progress"  thrown their way.

In most cases this means that they’re rather busy mitigating the latest threats and remediating vulnerabilities in a tactical fashion and find it difficult to think strategically and across the horizon.

What’s missing in many cases is the element of business impact: in conjunction with those threats and vulnerabilities, the resultant impact should drive the decision on what to focus on and how to prioritize actions based on whether they actually matter to your most important assets.

Rather than managing threats and vulnerabilities without context and blindly deploying more technology, we need to find a way to better manage risk.

We’ll talk about getting closer to assessing and managing risk in a short while, but if we look at what managing threats and vulnerabilities entails, as described above, we usually end up in a discussion focused on technology.  Accepting this common practice today, we need a way to effectively leverage our investment in that technology to get the best bang for our buck.

That means we need to actively invest in and manage a strategic security portfolio — much as an investor buys and sells stocks.  Some items you identify and invest in for the short term and others for the long term.  Accordingly, the taxonomy of those investments would also align to the "foundational, commoditizing, distinguished" model previously discussed so that the diversity of the solution sets can be associated, timed and managed across the continuum of investment.

This means that we need to understand how technology, business, culture and economics intersect to affect the behavior of adopters of disruptive innovation, so we can understand where, when, how and if to invest.

If this is done rationally, we will be able to demonstrate how a formalized innovation lifecycle management process delivers transparency and provides a RROI (reduction of risk on investment) over the life of the investment strategy. 

It means we will have a much more leveraged ability to proactively invest in the necessary people, process and technology ahead of the mainstream emergence of the disruptor by building a business case to do so.

Let’s see how we can do that…

22. Understand Technology Adoption Lifecycle

This model is what we use to map the classical adoption cycle of disruptive innovation/technology and align it to a formalized strategic innovation lifecycle management process.

If you look at the model on the top/right, it shows how innovators initially adopt "bleeding edge" technologies/products which through uptake ultimately drive early adopters to pay attention.

It’s at this point that within the strategic innovation framework that we identify and prioritize investment in these technologies as they begin to evolve and mature.  As business opportunities avail themselves and these identified and screened disruptive technologies are vetted, certain of them are incubated and seeded as they become an emerging solution which adds value and merits further investment.

As they mature and "cross the chasm," the early majority begins to adopt them and these technologies become part of the portfolio development process.  Some of these solutions will, over time, go away due to natural product and market behaviors, while others travel the entire area under the curve and are managed accordingly.

Pairing the appetite of the "consumer" against the maturity of the product/technology is a really important point.  Constantly reassessing the value the solution brings to the table, and whether a better, faster, cheaper mousetrap may already be on your radar, is critical.

This isn’t rocket science, but it does take discipline and a formal process.  Understanding how the dynamics of culture, economy, technology and business are changing will only make your decisions more informed and accurate and your investments more appropriately aligned to the business needs.

23. Manage Your Innovation Pipeline

This slide is another example of the various mechanisms of managing your innovation pipeline.  It is a representation of how one might classify and describe the maturation of a technology over time as it matures into a portfolio solution:

     * Sensing
     * Screening
     * Developing
     * Commercializing

In a non-commercial setting, the last stage might be described as "blessed" or something along those lines.

The inputs to this pipeline are just as important as the outputs; taking cues from customers and from internal and external market elements is critical for a rounded decision fabric.  This is where that intersection of forces comes into play again.  Formally evaluating all the elements — your efforts, the portfolio and the business needs — yields a really interesting by-product: transparency…

24. Provide Transparency In Portfolio Effectiveness

I didn’t invent this graph, but it’s one of my favorite ways of visualizing my investment portfolio, measuring in three dimensions: business impact, security impact and monetized investment.  All of these definitions are subjective within your organization (as is how you might measure them).

The Y-axis represents the "security impact" that the solution provides.  The X-axis represents the "business impact" that the solution provides, while the size of the dot represents the capex/opex investment made in the solution.

Each of the dots represents a specific solution in the portfolio.

If a solution shows up as a large dot toward the bottom-left of the graph, one has to question the reason for continued investment, since it provides little in the way of perceived security and business value at high cost.  On the flip side, if a solution is represented by a small dot in the upper-right, the bang for the buck is high, as is the impact it has on the organization.

The goal would be to get as many of your investments in your portfolio from the bottom-left to the top-right with the smallest dots possible.
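To make the quadrant logic concrete, here's a toy sketch of that assessment. The solution names, the 0–10 impact scales and the $100K "big dot" threshold are all illustrative assumptions; substitute whatever scoring your organization actually uses:

```python
# Flag large, low-impact investments (big dots, bottom-left) and small,
# high-impact ones (small dots, top-right) in a security portfolio.
def assess(portfolio, impact_floor=5.0, big_spend=100_000):
    """Each entry: (name, business_impact, security_impact, annual_spend)."""
    divest, leverage = [], []
    for name, biz, sec, spend in portfolio:
        low_impact = biz < impact_floor and sec < impact_floor
        high_impact = biz >= impact_floor and sec >= impact_floor
        if low_impact and spend >= big_spend:      # large dot, bottom-left
            divest.append(name)
        elif high_impact and spend < big_spend:    # small dot, top-right
            leverage.append(name)
    return divest, leverage

portfolio = [
    ("Legacy NIDS",      2.0, 3.5, 250_000),  # costly, little impact
    ("Web app firewall", 7.5, 8.0,  60_000),  # cheap, high impact
    ("AV suite",         6.0, 5.5, 120_000),
]
divest, leverage = assess(portfolio)
```

Even this crude bucketing makes the budget conversation easier: the "divest" list is where the hard questions get asked first.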

This transparency, and the process by which the portfolio is assessed, is delivered as an output of the strategic innovation framework, which is really part art and part science.

25. Balancing Art and Science

Andy Jaquith, champion of all things measured (now at Yankee Group, previously at the security consultancy @stake), wrote a very interesting paper suggesting that we might learn quite a bit about managing a security portfolio from the investment community on Wall Street.

Andy suggested, as I alluded to above, that this portfolio management concept, while not exactly aligned, is indeed as much art as it is science, and elegantly argued that using a framework to define a security strategy over time is enabled by a mature process:

"While the analogy is imperfect, security managers should be able to use the tools of unique and systematic management to create more-balanced security strategies."

I couldn’t agree more 😉

26. How Are You Doing?


If your CEO/CIO/CFO came to you today and put in front of you this list of disruptive innovation/technology and asked how these might impact your existing security strategy and what you were doing about it, what would your answer be?

Again, many of the security practitioners I have spoken to can articulate in some form how their existing technology investments might absorb some of the impact this disruption delivers, but many have no formalized process to describe why or how.

Luck?  Serendipity?  Good choices?  Common sense?

Unfortunately, without a formalized process that provides the transparency described above, it becomes very difficult to credibly demonstrate that the appropriate amount of long-term strategic planning has been done, which will likely cause angst and concern in the next budget cycle when monies for new technology are requested.

27. Ranum for President

At a minimum, what the business wants to know is whether, given the investment made, they are more or less at risk than they were before the investment was made (see here for what they really want to know.)

That’s a heady question, and without transparency and process it’s one most folks would have a difficult time answering without relying purely on instinct.  "I guess" doesn’t count.

To make matters worse, people often confuse being "secure" with being less at risk, and I’m not sure that’s always a good thing.  You can be very secure yet make it very difficult for the business to conduct business.  This elevates risk, which is bad. 

What we really seek to do is balance information sharing with the need to manage risk to an acceptable level.  So when folks ask if the future will be more "secure," I love to refer them to Marcus Ranum’s quote in the slide above: "…it will be just as insecure as it possibly can, while still continuing to function.  Just like it is today."

What this really means is that if we’re doing our job in the world of security, we’ll use the lens that a strategic innovation framework provides and pair it with the needs of the business to deliver a "security supply chain" that is just-in-time and provides no less and no more than what is needed to manage risk to an acceptable level.

I do hope that this presentation gives you some ideas as to how you might take a longer term approach to delivering a strategic service even in the face of disruptive innovation/technology.


Categories: Disruptive Innovation Tags:

Security and Disruptive Innovation Part I: The Setup

November 8th, 2007 14 comments

As a follow-on to my post on security and innovation here, I’m going to do a series based upon my keynote from ISD titled "Why Security Should Embrace Disruptive Technology," with a brief narrative of each slide’s talking points.

The setup for the talk was summarized nicely:

IT departments have spent the last 10+ years enabling users by delivering revolutionary technology and delegating ownership and control of intellectual property and information in order to promote agility, innovation and competitive advantage on behalf of the business. Meanwhile, IT Security has traditionally focused on reining in the limits of this technology in a belated compliance-driven game of tug-of-war to apply control over the same sets of infrastructure, intellectual property and data that is utilized freely by the business.

Christofer Hoff, chief architect for Security Innovation at Unisys and former Security 7 winner, will highlight several areas of emerging and disruptive technologies and practices that should be embraced, addressed, and integrated into the security portfolios and strategic dashboards of all forward-looking, business-aligned risk managers. Many of these topics are contentious when discussing their impact on security:

  • Outsourcing of Security
  • Consumerization of IT
  • Software as a Service (SaaS)
  • Virtualization
  • De-perimeterization
  • Information Centricity
  • Next Generation Distributed Data Centers

Hoff will discuss what you ought to already have thought about and how to map these examples to predict what is coming next, and explore this classical illustration of the cyclical patterns of how history, evolving business requirements, technology and culture repeatedly intersect on a never-ending continuum and how this convergence ought to be analyzed as part of the strategic security program of any company.

I will be highlighting each of the seven examples above as a series on how we should embrace disruptive innovation and integrate it into our strategic planning process so we can manage it as opposed to the other way around.  First the setup of the presentation:

1. What is Innovation?

Innovation can simply be defined as people implementing new ideas to creatively solve problems and add value.

How you choose to define "value" really depends upon your goal and how you choose to measure the impact on the business you serve.

Within the context of this discussion, while there is certainly technical innovation in the security field (how to make security "better," "faster," or "cheaper"), rather than focus on the latest piece of kit I’m interested in exploring how disruptive technologies and innovative drivers at the intersection of business, culture, and economics can profoundly impact how, what, why and when you do what you do.

We are going to discuss how Security can and should embrace disruptive technology and innovation in a formulaic and process-oriented way with the lovely side effect of becoming more innovative in the process.

2. What is Disruptive Technology/Innovation?

Clayton Christensen coined this term and is known for his series of work in this realm.  He is perhaps best known for his books: The Innovator’s Solution and The Innovator’s Dilemma.

Christensen defined disruptive technology/innovation as "a technology, product or service that ultimately overturns the dominant market leader, technology or product."

This sort of event can happen quickly or gradually and can be evolutionary or revolutionary in execution.  In many cases, the technology itself is not the disruptive catalyst; rather, the strategy, business model or marketing/messaging creates the disruptive impact.

3. Examples of Disruptive Technology

Here are some examples from a general technology perspective that highlight disruptive technologies/innovation.

Mainframe computing was disrupted by minicomputers and ultimately client-server desktop computing.  Long-distance telephony has been broadly impacted by Internet telephony such as Skype and Vonage.  Apple’s iTunes has dramatically impacted the way music is purchased and enjoyed.  The list goes on.

The key takeaway here is that the dominant technologies and industries on the left oftentimes didn’t see the forces on the right coming, and when they did, it was already too late.   What’s really important is that we find a framework and a process by which we can understand how disruptive technology/innovation emerges.  This will allow us to tame the impact and harness disruption positively by managing it and our response to it.

4. Technology Evolution: The Theory of Punctuated Equilibrium

I’m a really visual person, so I like to model things by analogy that spark non-linear connections for me to reinforce a point.  When I was searching for an analogy that described the evolution of technology and innovation, it became clear to me that this process was not linear at all.

Bob Warfield over at the SmoothSpan blog gave me this idea for an evolution analogy called the Theory of Punctuated Equilibrium that describes how development and evolution of reproducing species actually happens in big bursts followed by periods of little change rather than constant, gradual transformation.

This matters because innovation happens in spurts and is then absorbed and assimilated, but forecasting the timing of these events is really difficult.

5.  Mobius Strips and the Cyclic Security Continuum (aka the Hamster Wheel of Pain)

If we look at innovation within the Information Security space as an example, we see evidence of this punctuated equilibrium distributed across what appears to be a never-ending continuum.  Some might suggest that it’s like a never-ending Mobius strip.

Security innovation (mostly in technology) has manifested itself over time by offering a diverse set of solutions for a particular problem, which ultimately settles down with solution conformity and functional democratization.  Classic examples are NAC and DLP: lots of vendors spool up in a frenzy, and the field ultimately thins out once the problem becomes well defined.

Warfield described this as a classic damped oscillation where big swings in thinking ultimately settle down until everything looks and sounds the same…until the next "big thing" occurs.

What is problematic, however, is when we have overlapping timing curves of technology, economics, business requirements and culture.  Take, for example, the (cyclic) evolution of compute models: we started with the mainframe, which was displaced by minis, desktops and mobile endpoints.  This changed the models of computing and how data was produced, consumed, stored and managed.

Interestingly, as data has become more and more distributed, we’re now trending back to centralizing the computing experience with big honking centralized virtualized servers, storage and desktops.  The applications and protocols remain somewhere in between…

So while one set of oscillations is dampening, another is peaking.  It’s no wonder we find it difficult to arrive at a static model in such a dynamic system.

6. Using Projections/Studies/Surveys to Gain Clarified Guidance

Trying to visualize this intersection of curves can be very taxing, so I like to use industry projections/surveys/studies to help clear the fog. Some folks love these things, others hate them.  We all use them for budget, however 😉

I like the thematic consistency of Gartner’s presentations, so I’m going to use several of their example snippets to highlight a more business-focused, logical presentation of how impending business requirements will drive innovation and disruptive technology right to your doorstep.

As security practitioners we can use this information to stay ahead of the curve and avoid getting caught flat-footed when disruptive innovation shows up, because we’ll be prepared for it.

7. What CIO’s see as the Top 10 Strategic Technologies for 2008-2011

Gartner defines a strategic technology as "…one with the potential for significant impact on the enterprise in the next three years. Factors that denote significant impact include a high potential for disruption to IT or the business, the need for a major dollar investment, or the risk of being late to adopt."

Check out this list of technologies that your CIO has said are the technology categories that will provide significant impact to their enterprise.  How many of them can you identify as being addressed in alignment with the business as part of your security strategy for the next three years?

Of the roughly 50 security professionals I have queried thus far, most can only honestly answer that they are doing their best to get in front of at most one or two of them…rot roh.

8. What those same CIO’s see as their Top 10 Priorities for 2007

If we drill down a level and investigate what business-focused priorities CIO’s have for 2007, the lump in most security managers’ throats becomes bigger.

Of these top ten business priorities, almost all of those same 50 CISO’s I polled had real difficulty demonstrating how their efforts aligned with them, except as a menial "insurance purchase" acting as a grudge-based cost of doing business.

It becomes readily apparent to most that being a cost of business does not put one in the light of being strategic.  In fact, the bottom line impact caused by the never-ending profit draining by security is often in direct competition with some of these initiatives.  Security contributing to revenue growth, customer retention, controlling operating costs?


9. And here’s how those CIO’s are investing their Technology Dollars in 2007…

So now the story gets even more interesting.  If we take the Top 10 Strategic Technologies and hold that up against the Top 10 CIO Priorities, what we should see is a business-focused alignment of how one supports the other.

This is exactly what we get when we take a look at the investments in technology that CIO’s are making in 2007.

By the way, last year, "Security" was number one.  Now it’s number six.  I bet that next year, it may not even make the top ten.

This means that security is being classified as being less and less strategically important and is being seen as a feature being included in these other purchase/cost centers.  That means that unless you start thinking differently about how and what you do, you run the risk of becoming obsolete from a stand-alone budget perspective.

That lump in your throat’s getting pretty big now, huh?

10.  How Do I Start to Think About What/How My Security Investment Maps to the Business?  Cajun Food, Of Course!

This is my patented demonstration of how I classify my security investments into a taxonomy that is based upon Cajun food recipes.

It’s referred to as "Hoff’s Jambalaya Model" by those who have been victimized by its demonstration.  Mock it if you must, but it recently helped secure $21MM in late-stage VC funding…

Almost all savory Cajun dishes are made up of three classes of ingredients which I call: Foundational, Commodities and Distinguished.

Foundational ingredients are mature, high-quality and time-tested items that are used as the base for a dish.  You can’t make a recipe without using them and your choice of ingredients, preparation and cooking precision matter very much. 

Commodity ingredients are needed because without them, a dish would be bland.  However, the source of these ingredients is less of a concern given the diversity of choice and availability.  Furthermore, salt is salt — sure, you could use Fleur de Sel or Morton’s Kosher, but there’s not a lot of difference here.  One supplier could vanish and you’d have an alternative without much thought.

Distinguished ingredients are really what set a dish off.  If you’ve got a fantastic foundation combined with the requisite seasoning of commodity spices, adding a specific distinguished ingredient to the mix will complete the effort.  Andouille sausage, Crawfish, Alligator, Tasso or (if you’re from the South) Squirrel are excellent examples.  Some of these ingredients are hard to find and for certain dishes, very specific ingredients are needed for that big bang.

Bear with me now…

11. So What the Hell Does Jambalaya Have to Do with Security Technology?

Our recipes for deploying security technology are just like making a pot of Jambalaya, of course! 

Today when we think about how we organize our spending and our deployment methodologies for security solutions, we’re actually following a recipe…even if it’s not conscious.

I’m going to use two large markets in intersection to demonstrate this.  Let’s overlay the service provider/mobile operator/telco market and their security needs with those of the common commercial enterprise.

As with the Cajun recipe example, the go-to foundational ingredients that we base our efforts on are the mature, end-to-end, time-tested firewall and intrusion detection/prevention suites.  These ingredients have benefited from years of evolution and are stable, mature and well understood.  Quality is important, as is the source.

In the case of either market space, short of scaling requirements, the SP/MSSP/MO/Telco and Enterprise markets both utilize common approaches and choices to satisfy their needs.

Both markets also have many overlapping requirements and solution choices for the commoditizing ingredients.  In this case, aside from scale and performance, there’s little difference in the AV, anti-spam, or URL-filtering functionality offered by the many vendors who supply these functions.  Vendor A could go out of business tomorrow and, for the most part, Vendor B’s product could be substituted with the same functionality without much fuss.

Now, when we look at distinguished "ingredients," this is where we witness a bit of a divergence.  In the SP/MSSP/MO/Telco space, they have very specific requirements for solutions that are unique beyond just scale and performance.  Session Border Controllers and DDoS tools are an example.  In the enterprise, XML gateways and web application firewalls are key.  The point here is that these solutions are quite unique and are often the source of innovation and disruption.

Properly classifying your solutions into these categories allows one to demonstrate an investment strategy in line with the value it brings.  Some of these solutions start off being distinguished and can either become commoditized quickly or ultimately make their way as features into the more stable and mature foundational ingredient class.

Keep this model handy…

12.  Mapping the Solution Classes (Ingredients) to a Technology/Innovation Curve: The Hype Cycle!

So, remember the Theory of Punctuated Equilibrium and its damped-oscillation visual?  Check out Gartner’s Hype Cycle…it’s basically the same waveform.

I use the Hype Cycle slightly differently than Gartner does.  The G-Men use this to demonstrate how technology can appear and transform in terms of visibility and maturity over time.  Technology can appear almost anywhere along this curve; some are born commoditized and/or never make it.  Some take a long time to become recognized as a mature technology for adoption.

Ultimately, you’d like to see a new set of innovative or disruptive solutions/technologies appear on the left, get an uptake, mellow out over time and ultimately transform from diversity to conformity.  You can use the cute little names for the blips and bunkers if you like, but keep this motion across the curve top of mind.

Now, I map the classifications of Foundational, Commodities and Distinguished across this map and lo and behold, what we see is that most of the examples I gave (and that you can come up with) can be classified and qualified across this curve.  This allows a security manager/CISO to take technology hype cycle overlays and map them to an easily demonstrated/visualized class of solutions and investment strategies that also can speak to their lifecycle.

The things you really need to keep an eye on from an emerging innovation/disruption perspective are those distinguished solutions over on the left, climbing the "Technology Trigger" and aiming for the "Peak of Inflated Expectations" prior to sliding down to the "Trough of Disillusionment."  I think Gartner missed a perfect opportunity by not including the "Chasm of Eternal Despair" 😉

We’re going to talk more about this later, but you can essentially take your portfolio of technology solutions, start to map the business drivers/technologies prioritized by your CIO, and see how you measure up.  When you need to talk budget, you can easily demonstrate how you’re keeping pace with the dynamics of the industry and managing innovation, and how that translates to your spend and depreciation cycles. 

You shore up your investment in Foundational components, manage the Commodities over time (they should get cheaper) and as business sees fit, put money into incubating emerging technologies and innovation.
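The classification-to-curve mapping above can be sketched in a few lines. The class and phase labels come from the model and the Hype Cycle; the specific entries and the "watch list" rule are illustrative assumptions, not recommendations:

```python
# Map portfolio items onto (solution class, hype-cycle phase) and pull
# out the Distinguished items early on the curve -- the ones to watch.
PORTFOLIO = [
    ("Firewall",          "Foundational",  "Plateau of Productivity"),
    ("Anti-spam gateway", "Commodity",     "Slope of Enlightenment"),
    ("XML gateway",       "Distinguished", "Technology Trigger"),
    ("DLP",               "Distinguished", "Peak of Inflated Expectations"),
]

def watch_list(portfolio):
    """Distinguished solutions climbing the early curve need the closest eye."""
    early = {"Technology Trigger", "Peak of Inflated Expectations"}
    return [name for name, cls, phase in portfolio
            if cls == "Distinguished" and phase in early]
```

Run against the sample portfolio, the watch list surfaces exactly the emerging, potentially disruptive items, which is the budget conversation you want to be having.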

Up Next…Some Really Interesting Examples of Disruptive Technology/Innovation and how they impact Security…

Categories: Disruptive Innovation Tags:

Take5 (Episode #6) – Five Questions for Andy Jaquith, Yankee Group Analyst and Metrician…

September 13th, 2007 3 comments

This sixth episode of Take5 interviews Andy Jaquith, Yankee Group analyst and champion of all things metric.  I must tell you that Andy’s answers to my interview questions were amazing to read, and I really appreciate the thought and effort he put into them.

First a little background on the victim:

Andrew Jaquith is a program manager in Yankee Group’s Enabling Technologies Enterprise group with expertise in portable digital identity and web application security. As Yankee Group’s lead security analyst, Jaquith drives the company’s security research agenda and researches disruptive technologies that enable tomorrow’s Anywhere Enterprise™ to secure its information assets.

Jaquith has 15 years of IT experience. Before joining Yankee Group, he co-founded and served as program director at @stake, Inc., a security consulting pioneer, which Symantec Corporation acquired in 2004. Before @stake, Jaquith held project manager and business analyst positions at Cambridge Technology Partners and FedEx Corporation.

His application security and metrics research has been featured in publications such as CIO, CSO and IEEE Security & Privacy. In addition, Jaquith is the co-developer of a popular open source wiki software package. He is also the author of the recently released Pearson Addison-Wesley book, Security Metrics: Replacing Fear, Uncertainty and Doubt, which reviewers have praised as "sparkling and witty" and "one of the best written security books ever."

Jaquith holds a B.A. degree in economics and political science from Yale University.


1) Metrics.  Why is this such a contentious topic?  Isn't the basic axiom of "you can't manage what you don't measure" just common sense?  A discussion on metrics evokes very passionate discussion amongst both proponents and opponents alike.  Why are we still debating the utility of measurement?

The arguments over metrics are overstated, but to the extent they are contentious, it is because "metrics" means different things to different people. For some people, who take a risk-centric view of security, metrics are about estimating risk based on a model. I'd put Pete Lindstrom, Russell Cameron Thomas and Alex Hutton in this camp. For those with an IT operations background, metrics are what you get when you measure ongoing activities. Rich Bejtlich and I are probably closer to this view of the world. And there is a third camp that feels metrics should be all about financial measures, which brings us into the whole "return on security investment" topic. A lot of the ALE crowd thinks this is what metrics ought to be about. Just about every security certification course (SANS, CISSP) talks about ALE, for reasons I cannot fathom.

Once you understand that a person's point of view of "metrics" is going to be different depending on the camp they are in -- risk, operations or financial -- you can see why there might be some controversy between these three camps. There's also a fourth group that takes a look at the fracas and says, "I know why measuring things matters, but I don't believe a word any of you are talking about." That's Mike Rothman's view, I suspect.

Personally, I have always taken the view that metrics should measure things as they are (the second perspective), not as you imagine, model or expect them to be. That's another way of saying that I am an empiricist. If you collect data on things and swirl them around in a blender, interesting things will stratify out.

Putting it another way: I am a measurer rather than a modeler. I don't claim to know what the most important security metrics are. But I do know that people measure certain things, and that those things give them insights into their firms' performance. To that end, I've got about 100 metrics documented in my book; these are largely based on what people tell me they measure. Dan Geer likes to say, "it almost doesn't matter what you measure, but get started and measure something." The point of my book, largely, is to give some ideas about what those somethings might be, and to suggest techniques for analyzing the data once you have them.

Metrics aren't really that contentious. Just about everyone in the community is pretty friendly and courteous. It's a "big tent." Most of the differences are with respect to inclination. But outside of the "metrics community" it really comes down to a basic question of belief: you either believe that security can be measured or you don't.

The way you phrased your question, by the way, implies that you probably align a little more closely with my operational/empiricist view of metrics. But I'd expect that, Chris -- you've been a CSO, and in charge of operational stuff before. :)

2) You've got a storied background from FedEx to @stake to the Yankee Group.  I see your experience trending from the operational to the analytical.  How much of your operational experience lends itself to the practical collection and presentation of metrics -- specifically security metrics?  Does your broad experience help you in choosing what to measure and how?

That's a keen insight, and one I haven't thought of before. You've caused me to get all introspective all of a sudden. Let me see if I can unravel the winding path that's gotten me to where I am today.

My early career was spent as an IT analyst at Roadway, a serious, operationally-focused trucking firm. You know those large trailers you see on the highways with "ROADWAY" on them? That's the company I was with. They had a reputation as being like the Marines. Now, I wasn't involved in the actual day-to-day operations side of the business, but when you work in IT for a company like that you get to know the business side. As part of my training I had to do "ride alongs," morning deliveries and customer visits. Later, I moved to the contract logistics side of the house, where I helped plan IT systems for transportation brokerage services and contract warehouses the company ran. The logistics division was the part of Roadway that was actually acquired by FedEx.

I think warehouses are just fascinating. They are one hell of a lot more IT intensive than you might think. I don't just mean bar code readers, forklifts and inventory control systems; I mean also the decision support systems that produce metrics used for analysis. For example, warehouses measure an overall metric for efficiency called "inventory turns" that describes how fast your stock moves through the warehouse. If you put something in on January 1 and move it out on December 31 of the same year, that part has a "velocity" of 1 turn per year. Because warehouses are real estate like any other, you can spread out your fixed costs by increasing the number of turns through the warehouse.

For example, one of the reasons why Dell -- a former customer of mine at Roadway -- was successful was that they figured out how to make their suppliers hold their inventory for them and deliver it to final assembly on a "just-in-time" (JIT) basis, instead of keeping lots of inventory on hand themselves. That enabled them to increase the number of turns through their warehouses to something like 40 per year, when the average for manufacturing was like 12. That efficiency gain translated directly to profitability. (Digression: Apple, by the way, has lately been doing about 50 turns a year through their warehouses. Any wonder why they make as much money on PCs as HP, who has 6 times more market share? It's not *just* because they choose their markets so that they don't get suckered into the low-margin part of the business; it's also because their supply chain operations are phenomenally efficient.)
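[Ed.: the turns arithmetic Andy describes is simple enough to sketch. The figures below are illustrative only, not Dell's or Apple's actual numbers:]

```python
# Inventory turns: how many times stock cycles through the warehouse
# per year, relative to what's held on hand at any moment.
def inventory_turns(annual_throughput, average_inventory):
    return annual_throughput / average_inventory

# A part that sits from January 1 to December 31 turns once a year:
slow = inventory_turns(annual_throughput=1_000, average_inventory=1_000)

# The same throughput held as a ~9-day supply turns ~40x -- the JIT case:
jit = inventory_turns(annual_throughput=1_000, average_inventory=25)
```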

Another thing I think is fascinating about warehouses is how you account for inventory. Most operators divide their inventory into "A", "B" and "C" goods based on how fast they turn through the warehouse. The "A" parts might circulate 10-50x faster than the C parts. So, a direct consequence is that when you lay out a warehouse you do it so that you can pick and ship your A parts fastest. The faster you do that, the more efficient your labor force and the less it costs you to run things. Neat, huh?
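[Ed.: the A/B/C split is equally easy to sketch; the part names and turn thresholds here are illustrative assumptions, not industry-standard cutoffs:]

```python
# Bucket parts by annual turns: "A" = fast movers, "C" = slow movers.
def abc_classify(turns_by_part, a_cutoff=20, b_cutoff=5):
    buckets = {"A": [], "B": [], "C": []}
    for part, turns in sorted(turns_by_part.items(),
                              key=lambda kv: kv[1], reverse=True):
        if turns >= a_cutoff:
            buckets["A"].append(part)
        elif turns >= b_cutoff:
            buckets["B"].append(part)
        else:
            buckets["C"].append(part)
    return buckets

buckets = abc_classify({"widget": 42, "gasket": 8, "flange": 1})
```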

Now, I mention these things not strictly speaking to show you what a smartypants I am about supply chain operations. The real point is to show how serious operational decisions are made based on deep analytics. Everything I just mentioned can be modeled and measured: where you site the warehouses themselves, how you design the warehouse to maximize your ability to pick and ship the highest-velocity items, and what your key indicators are. There's a virtuous feedback loop in place that helps operators understand where they are spending their time and money, and that in turn drives new innovations that increase efficiencies.

In supply chain, the analytics are the key to absolutely everything. And they are critical in an industry where costs matter. In that regard, manufacturing shares a lot with investment banking: leading firms are willing to invest a phenomenal amount of time and money to shave off a few basis points on a Treasury bill derivative. But this is done with experiential, analytical data used for decision support that is matched with a model. You'd never be able to justify redesigning a warehouse or hiring all of Yale's graduating math majors unless you could quantify and measure the impact of those investments on the processes themselves. Experiential data feeds, and improves, the model.

In contrast, when you look at security you see nothing like this. Fear and uncertainty rule, and almost every spending decision is made on the basis of intuition rather than facts. Acquisition costs matter, but operational costs don't. Can you imagine what would happen if you plucked a seasoned supply-chain operations manager out of a warehouse, plonked him down in a chair, and asked him to watch his security counterpart in action? Bear in mind this is a world where his CSO friend is told that the answer to everything is "buy our software" or "install our appliance." The warehouse guy would look at him like he had two heads. Because in his world, you don't spray dollars all over the place until you have a detailed, grounded, empirical view of what your processes are all about. You simply don't have the budget to do it any other way.

But in security, the operations side of things is so immature, and the gullibility of CIOs and CEOs so high, that they are willing to write checks without knowing whether the things they are buying actually work. And by "work," I don't mean "can be shown to stop a certain number of viruses at the border"; I mean "can be shown to decrease time required to respond to a security incident" or "has increased the company's ability to share information without incurring extra costs," or "has cut the pro-rata share of our IT operations spent on rebuilding desktops."

Putting myself in the warehouse manager's shoes again: for security, 
I'd like to know why nobody talks about activity-based costing. Or 
about process metrics -- that is, cycle times for everyday security 
activities -- in a serious way. Or benchmarking -- does my firm have 
twice as many security defects in our web applications as yours? Are 
we in the first, second, third or fourth quartiles?
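That kind of quartile benchmarking is mechanically trivial once you have peer data; here is a minimal sketch (the peer defect rates and the firm's own rate are invented for illustration):

```python
from statistics import quantiles

# Hypothetical web-app security defect counts per 100 KLOC, one per peer firm
peer_defect_rates = [3, 5, 8, 11, 14, 18, 22, 30]
my_rate = 22

# Cut points dividing the peer population into four quartiles
q1, q2, q3 = quantiles(peer_defect_rates, n=4)

if my_rate <= q1:
    quartile = 1   # best quartile: fewest defects
elif my_rate <= q2:
    quartile = 2
elif my_rate <= q3:
    quartile = 3
else:
    quartile = 4   # worst quartile

print(f"Firm falls in quartile {quartile} among {len(peer_defect_rates)} peers")
```

The hard part, of course, is not the arithmetic but getting enough peers to share comparable numbers in the first place.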

If the large security companies were serious, we'd have a firmer grip 
on the activity, impact and cost side of the ledger. For example, why 
won't AV companies disclose how much malware is actually circulating 
within their customer bases, despite their promises of "total 
protection"? When the WMF zero-day exploit came out, how come none of 
the security companies knew how many of their customers were 
infected? And how much the cleanup efforts cost? Either nobody knew, 
or nobody wanted to tell. I think it's the former. If I were in the 
shoes of my Roadway operational friend, I'd be pissed off about the 
complete lack of feedback between spending, activities, impact and cost.

If this sounds like a very odd take on security, it is. My mentor and 
former boss, Dan Geer, likes to say that there are a lot of people 
who don't have classical security training, but who bring "hybrid 
vigor" to the field. I identify with that. With my metrics research, 
I just want to see if we can bring serious analytic rigor to a field 
that has resisted it for so long. And I mean that in an operational 
way, not a risk-equation way.

So, that's an exceptionally long-winded way of saying "yes" to your 
question -- I've trended from operational to analytical. I'm not sure 
that my past experience has necessarily helped me pick particular 
security metrics per se, but it has definitely biased me towards 
those that are operational rather than risk-based.

3) You've recently published your book.  I think it was a great appetite whetter but
I was left -- as were I think many of us who are members of the "lazy guild"-- wanting
more.   Do you plan to follow-up with a metrics toolkit of sorts?  You know, a templated
guide -- Metrics for Dummies?

You know, that's a great point. The fabulous blogger Layer 8, who 
gave my book an otherwise stunning review that I am very grateful for 
("I tucked myself into bed, hoping to sleep—but I could not sleep 
until I had read Security Metrics cover to cover. It was That 
Good."), also had that same reservation. Her comment was, "that final 
chapter just stopped short and dumped me off the end of the book, 
without so much as a fare-thee-well Final Overall Summary. It just 
stopped, and without another word, put on its clothes and went home". 
Comparing my prose to a one-night stand is pretty funny, and a fair point.

Ironically, as the deadline for the book drew near, I had this great 
idea that I'd put in a little cheat-sheet in the back, either as an 
appendix or as part of the endpapers. But as with many things, I 
simply ran out of time. I did what Microsoft did to get Vista out the 
door -- I had to cut features and ship the fargin' bastid.

One of the great things about writing a book is that people write you 
letters when they like or loathe something they read. Just about all 
of my feedback has been very positive, and I have received a number 
of very thoughtful comments that shed light on what readers' 
companies are doing with metrics. I hope to use the feedback I've 
gotten to help me put together a "cheat sheet" that will boil the 
metrics I discuss in the book into something easier to digest.

4) You've written about the impending death of traditional Anti-Virus technology and its
evolution to combat the greater threats from adaptive Malware.  What role do you think
virtualization technology that provides a sandboxed browsing environment will have in
this space, specifically on client-side security?

It's pretty obvious that we need to do something to shore up the 
shortcomings of signature-based anti-malware software. I regularly 
check out a few of the anti-virus benchmarking services, like the 
OITC site that aggregates the Virustotal scans. And I talk to a 
number of anti-malware companies who tell me things they are seeing. 
It's pretty clear that current approaches are running out of gas. All 
you have to do is look at the numbers: unique malware samples are 
doubling every year, and detection rates for previously-unseen 
malware range from the single digits to the 80% mark. For an industry 
that has long said they offered "total protection," anything less 
than 100% is a black eye.

Virtualization is one of several alternative approaches that vendors 
are using to help boost detection rates. The idea with virtualization 
is to run a piece of suspected malware in a virtual machine to see 
what it does. If, after the fact, you determine that it did something 
naughty, you can block it from running in the real environment. It 
sounds like a good approach to me, and is best used in combination 
with other technologies.
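The detect-then-block loop described above can be reduced to a toy sketch; every name and "behavior" below is invented, and a real sandbox would of course observe syscalls, file writes and network traffic rather than consult a canned table:

```python
# Toy sketch of sandbox-style behavioral detection: "detonate" a sample in an
# isolated environment, record what it does, and block it if any observed
# behavior is on a naughty list. All identifiers here are illustrative.

NAUGHTY_BEHAVIORS = {
    "writes_to_system32",
    "disables_antivirus",
    "opens_bulk_smtp_connections",
}

def sandbox_run(sample_id: str) -> set:
    """Stand-in for running a sample in a VM and logging its behavior."""
    observed = {
        "benign.exe": {"reads_registry"},
        "dropper.exe": {"reads_registry", "writes_to_system32"},
    }
    return observed.get(sample_id, set())

def verdict(sample_id: str) -> str:
    behaviors = sandbox_run(sample_id)
    return "block" if behaviors & NAUGHTY_BEHAVIORS else "allow"

print(verdict("benign.exe"))   # allow
print(verdict("dropper.exe"))  # block
```

The expensive part is the detonation itself, which is why this approach fits gateways better than resource-starved desktops.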

Now, I'm not positive how pervasive this is going to be on the 
desktop. Existing products are already pretty resource-hungry. 
Virtualization would add to the burden. You've probably heard people 
joke: "thank God computers are dual-core these days, because we need 
one of 'em to run the security software on." But I do think that 
virtualized environments used for malware detection will become a 
fixture in gateways and appliances.

Other emergent ideas that complement virtualization are behavior 
blocking and herd intelligence. Herd intelligence -- a huge malware 
blacklist-in-the-sky -- is a natural services play, and I believe all 
successful anti-malware companies will have to embrace something like 
this to survive.

5) We've seen some fairly important, business-critical back-office applications
make their way to the Web (CRM, ERP, Financials), and now GoogleApps is staking a claim
in the SMB space.  How do you see the SaaS model affecting the management of security -- 
and ultimately risk --  over time?

Software as a service for security is already here. We've already 
seen fairly pervasive managed firewall service offerings -- the 
carriers and companies like IBM Global Services have been offering 
them for years. Firewalls still matter, but they are nowhere near as 
important to the overall defense posture as before. That's partly 
because companies need to put a lot of holes in the firewall. But 
it's also because some ports, like HTTP/HTTPS, are overloaded with 
lots of other things: web services, instant messaging, VPN tunnels 
and the like. It's a bit like the old college prank of filling a 
paper bag with shaving cream, sliding it under a shut door, then jumping 
on it and spraying the payload all over the room's occupants. HTTP is 
today's paper bag.

In the services realm, for more exciting action, look at what 
MessageLabs and Postini have done with the message hygiene space. At 
Yankee we've been telling our customers that there's no reason why an 
enterprise should bother to build bespoke gateway anti-spam and anti-
malware infrastructures any more. That's not just because we like 
MessageLabs or Postini. It's also because the managed services have a 
wider view of traffic than a single enterprise will ever have, and 
benefit from economies of scale on the research side, not to mention 
the actual operations.

Managed services have another hidden benefit; you can also change 
services pretty easily if you're unhappy. It puts the service 
provider's incentives in the right place. Qualys, for example, 
understands this point very well; they know that customers will leave 
them in an instant if they stop innovating. And, of course, whenever 
you accumulate large amounts of performance data across your customer 
base, you can benchmark things. (A subject near and dear to my heart, 
as you know.)

With regards to the question about risk, I think managed services do 
change the risk posture a bit. On the one hand, the act of 
outsourcing an activity to an external party moves a portion of the 
operational risk to that party. This is the "transfer" option of the 
classic "ignore, mitigate, transfer" set of choices that risk 
management presents. Managed services also reduce political risk in a 
"cover your ass" sense, too, because if something goes wrong you can 
always point out that, for instance, lots of other people use the 
same vendor you use, which puts you all in the same risk category. 
This is, if you will, the "generally accepted practice" defense.

That said, particular managed services with large customer bases 
could accrue more risk by virtue of the fact that they are bigger 
targets for mischief. Do I think, for example, that spammers target 
some of their evasive techniques towards Postini and MessageLabs? I 
am sure they do. But I would still feel safer outsourcing to them 
rather than maintaining my own custom infrastructure.

Overall, I feel that managed services will have a "smoothing" or 
dampening effect on the risk postures of enterprises taken in 
aggregate, in the sense that they will decrease the volatility in 
risk relative to the broader set of enterprises (the "beta", if you 
will). Ideally, this should also mean a decrease in the *absolute* 
amount of risk. Putting this another way: if you're a rifle shooter, 
it's always better to see your bullets clustered closely together, 
even if they don't hit near the bull's eye, rather than seeing them 
near the center, but dispersed. Managed services, it seems to me, can 
help enterprises converge their overall levels of security -- put the 
bullets a little closer together instead of all over the place. 
Regulation, in cases where it is prescriptive, tends to do that too.
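The rifle analogy translates directly into a spread statistic. In this minimal sketch (all scores invented), the two portfolios have the same mean risk score, but the managed-service population is far less dispersed:

```python
from statistics import mean, stdev

# Hypothetical risk scores (lower is better) for ten enterprises each.
# Self-managed shops are all over the place; managed-service customers cluster.
self_managed    = [2, 9, 1, 8, 3, 10, 2, 7, 4, 9]
managed_service = [5, 6, 5, 6, 5, 6, 5, 6, 5, 6]

print(f"self-managed:    mean={mean(self_managed):.1f}, stdev={stdev(self_managed):.2f}")
print(f"managed service: mean={mean(managed_service):.1f}, stdev={stdev(managed_service):.2f}")
```

Same bull's eye on average, but the second group's bullets land much closer together.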

Bonus Question:
6) If you had one magical dashboard that could display 5 critical security metrics
to the Board/Executive Management, regardless of industry, what would those elements be?
I would use the Balanced Scorecard, a creation of Harvard professors 
Kaplan and Norton. It divides executive management metrics into four 
perspectives: financial, internal operations, customer, and learning 
and growth. The idea is to create a dashboard that incorporates 6-8 
metrics into each perspective. The Balanced Scorecard is well known 
to the corner office, and is something that I think every security 
person should learn about. With a little work, I believe quite 
strongly that security metrics can be made to fit into this framework.

Now, you might ask yourself, I've spent all of this work organizing 
my IT security policies along the lines of ISO 17799/2700x, or COBIT, 
or ITIL. So why can't I put together a dashboard that organizes the 
measurements in those terms? What's wrong with the frameworks I've 
been using? Nothing, really, if you are a security person. But if you 
really want a "magic dashboard" that crosses over to the business 
units, I think basing scorecards on security frameworks is a bad 
idea. That's not because the frameworks are bad (in fact most of them 
are quite good), but because they aren't aligned with the business. 
I'd rather use a taxonomy the rest of the executive team can 
understand. Rather than make them understand a security or IT 
framework, I'd rather try to meet them halfway and frame things in 
terms of the way they think.

So, for example: for Financial metrics, I'd measure how much my IT 
security infrastructure is costing, straight up, and on an activity-
based perspective. I'd want to know how much it costs to secure each 
revenue-generating transaction; quick-and-dirty risk scores for 
revenue-generating and revenue/cost-accounting systems; DDOS downtime 
costs. For the Customer perspective I'd want to know the percentage 
and number of customers who have access to internal systems; cycle 
times for onboarding/offloading customer accounts; "toxicity rates" 
of customer data I manage; the number of privacy issues we've had; 
the percentage of customers who have consulted with the security 
team; number and kind of remediation costs of audit items that are 
customer-related; number and kind of regulatory audits completed per 
period, etc. The Internal Process perspective covers some of the really 
easy things to measure, and is all about security ops: patching 
efficiency, coverage and control metrics, and the like. For Learning 
and Growth, it would be about threat/planning horizon metrics, 
security team consultations, employee training effectiveness and 
latency, and other issues that measure whether we're getting 
employees to exhibit the right behaviors and acquire the right skills.
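The four perspectives above map naturally onto a simple data structure. This sketch organizes a handful of the metrics from the answer into one dashboard; the specific values (and some metric names) are invented for illustration:

```python
# Minimal Balanced Scorecard sketch for security metrics. The perspective
# names come from Kaplan and Norton; the values are illustrative only.
scorecard = {
    "Financial": {
        "security cost per revenue-generating transaction ($)": 0.04,
        "DDoS downtime cost this quarter ($K)": 120,
    },
    "Customer": {
        "customers with internal-system access (%)": 12,
        "customer-account onboarding cycle time (days)": 3,
        "customer-related audit items remediated": 7,
    },
    "Internal Process": {
        "servers patched within SLA (%)": 94,
        "hosts under configuration control (%)": 88,
    },
    "Learning and Growth": {
        "employees completing security training (%)": 76,
        "security-team consultations this quarter": 22,
    },
}

def render(card: dict) -> None:
    """Print the scorecard one perspective at a time."""
    for perspective, metrics in card.items():
        print(perspective)
        for name, value in metrics.items():
            print(f"  {name}: {value}")

render(scorecard)
```

The point of the structure is the top-level keys: an executive recognizes the four perspectives immediately, which is precisely what an ISO- or COBIT-shaped dashboard fails to achieve.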

That's meant to be an illustrative list rather than definitive, and I 
confess it is rather dense. At the risk of getting all Schneier on 
you, I'd refer your readers to the book for more details. Readers can 
pick and choose from the "catalog" and find metrics that work for 
their organizations.

Overall, I do think that we need to think a whole lot less about 
things like ISO and a whole lot more about things like the Balanced 
Scorecard. We need to stop erecting temples to Securityness that 
executives don't give a damn about and won't be persuaded to enter. 
And when we focus just on dollars, ALE and "security ROI", we make 
things too simple. We obscure the richness of the data that we can 
gather, empirically, from the systems we already own. Ironically, the 
Balanced Scorecard itself was created to encourage executives to move 
beyond purely financial measures. Fifteen years later, you'd think we 
security practitioners would have taken the hint.
Categories: Risk Management, Security Metrics, Take5

Wells Fargo System “Crash” Spools Up Phishing Attempts But Did It Also Allow for Bypassing Credit/Debit Card Anti-Fraud Systems?

August 22nd, 2007 3 comments

Serendipity is a wonderful thing.  I was in my local MA bank branch on Monday arranging for a wire transfer from my local account to a Wells Fargo account I maintain in CA.  I realized that I didn’t have the special ABA Routing Code that WF uses for wire transfers so I hopped on the phone to call customer service to get it.  We don’t use this account much at all but wanted to put some money in it to keep up the balance which negates the service fee.

The wait time for customer service was higher than normal and I sat for about 20 minutes until I was connected to a live operator.  I told him what I wanted and he was able to give me the routing code but I also needed the physical address of the branch that my account calls home.  He informed me that he couldn’t give me that information.

The reason he couldn’t give me that information was that the WF "…computer systems have been down for the last 18 hours."  He also told me that "…we lost a server somewhere; people couldn’t even use their ATM cards yesterday."

This story was covered here on Computerworld and was followed up with another article which described how Phishers and the criminal element were spooling up their attacks to take advantage of this issue:

August 21, 2007   (IDG News Service)  — Wells Fargo & Co.
customers may have a hard time getting an up-to-date balance statement
today, as the nation’s fifth-largest bank continues to iron out service
problems related to a Sunday computer failure.

The outage knocked the company’s Internet, telephone and ATM banking
services offline for several hours, and Wells Fargo customers continued
to experience problems today.

Wells Fargo didn’t offer many details about the system failure, but
it was serious enough that the company had to restore from backup.

"Using our backup facilities, we restored Internet banking service in about one hour and 40 minutes," the company said in a statement today. "We thank the hundreds of team members in our technology group for working so hard to resolve this problem."

Other banking services such as point-of-sale transactions, loan
processing and wire transfers were also affected by the outage, and
while all systems are now fully operational, some customers may
continue to see their Friday bank balances until the end of the day,
Wells Fargo said.

I chuckled uneasily because I continue to be directly impacted by critical computer system failures, including two airline-related failures (the United Airlines outage and the TSA/ICE failure at LAX), the Skype outage, and now this one.  I didn’t get a chance to blog about it other than a comment on another blog, but if I were you, I’d not stand next to me in a lightning storm anytime soon!  I guess this is what happens when you’re a convenient subscriber to World 2.0?

I’m sure WF will suggest this is because of Microsoft and Patch Tuesday, too… 😉

So I thought this would be the end of this little story (until the next time.)  However, the very next day, my wife came to me alarmed because she found a $375 charge on the same account as she was validating that the wire went through.

She asked me if I made a purchase on the WF account recently and I had not as we don’t use this account much.  Then I asked her who the vendor was.  The charge was from

Huh?  I asked her to show me the statement; there was no reference transaction number, no phone number and the purchase description was "general merchandise."

My wife immediately called WF anti-fraud and filed a fraudulent activity report.  The anti-fraud representative described the transaction as "odd" because there was no contact information available for the vendor.

She mentioned that she was able to see that the vendor executed both an auth (testing to see that funds were available) and then a capture (actually charging), but told us that unfortunately she couldn’t get any more details because the computer systems were experiencing issues due to the recent outage!

This is highly suspicious to me.

Whilst the charge has been backed out, I am concerned that this is a little more than serendipity and coincidence. 

Were the WF anti-fraud and charge validation processes compromised during this "crash" and/or did their failure allow for fraudulent activity to occur?

Check your credit/debit card bills if you are a Wells Fargo customer!