
Archive for the ‘Information Security’ Category

The Seesaw CISO…Changing Places But Similar Faces…

December 8th, 2007 1 comment

Seesaw_shadow
…from geek to business speak…

Dennis Fisher has a nice writeup over at the SearchSecurity Security Bytes Blog about the changing role and reporting structure of the CISO.

Specifically, Dennis notes that he was surprised by the number of CISOs who recently told him that they no longer report to the CIO and aren’t part of IT at all.  Moreover, these same CISOs noted that the skill set and focus are also changing from a technical to a business role:

In the last few months I’ve been hearing more and more from CEOs, CIOs and CSOs about the changing role of the CSO (or CISO, depending on your org chart) in the enterprise. In the past, the CSO has nearly always been a technically minded person who has risen through the IT ranks and then made the jump to the executive ranks. That lineage sometimes got in the way when it came time to deal with other upper managers who typically had little or no technical knowledge and weren’t interested in the minutiae of authentication schemes, NAC and unified threat management. They simply wanted things to work and to avoid seeing the company’s name in the papers for a security breach.

But that seems to be changing rather rapidly. Last month I was on a panel in Chicago with Howard Schmidt, Lloyd Hession, the CSO of BT Radianz, and Bill Santille, CIO of Uline, and the conversation quickly turned to the ways in which the increased focus on risk management in enterprises has forced CSOs to adapt and expand their skill sets. A knowledge of IDS, firewalls and PKI is not nearly enough these days, and in some cases is not even required to be a CSO. One member of the audience said that the CSO position in his company is rotated regularly among senior managers, most of whom have no technical background and are supported by a senior IT staff member who serves as CISO. The CSO slot is seen as a necessary stop on the management circuit, in other words. Several other CSOs in the audience said that they no longer report to the CIO and are not even part of the IT organization. Instead, they report to the CFO, the chief legal counsel, or in one case, the ethics officer.

I’ve talked about the fact that "security" should be a business function and not a technical one and quite frankly what Dennis is hearing has been a trend on the uptick for the last 3-4 years as "information security" becomes less relevant and managing risk becomes the focus.  To wit:

The number of organizations making this kind of change surprised me at the time. But, in thinking more about it, it makes a lot of sense, given that the daily technical security tasks are handled by people well below the CSO’s office. And many of the CSOs I know say they spend most of their time these days dealing with policy issues such as regulatory compliance. Patrick Conte, the CEO of software maker Agiliance, which put on the panel, told me that these comments fit with what he was hearing from his customers, as well. Some of this shift is clearly attributable to the changing priorities inside these enterprises. But some of it also is a result of the maturation of the security industry as a whole, which has translated into less of a focus on technology and more attention being paid to policies, procedures and other non-technical matters.

How this plays out in the coming months and years will be quite interesting. My guess is that as security continues to be absorbed into the larger IT and operations functions, the CSO’s job will continue to morph into more of a business role.

I still maintain that "compliance" is nothing more than a gap-filler.  As I said here, we have compliance as an industry [and measurement] today because we manage technology threats and vulnerabilities and don’t manage risk.  Compliance is actually nothing more than a way of forcing transparency and plugging a gap between the two.  For most, it’s the best they’ve got.

Once we’ve got our act together organizationally, compliance will become the floor, not the ceiling, and we’ll really start to see the "…maturation of the security industry as a whole."

/Hoff

And Now Some Useful 2008 Information Survivability Predictions…

December 7th, 2007 1 comment

Noculars
So, after the obligatory dispatch of gloom and doom as described in my 2008 (in)Security Predictions, I’m actually going to highlight some of the more useful things in the realm of Information Security that I think are emerging as we round the corner toward next year.

They’re not really so much predictions as rather some things to watch.

Unlike folks who can only seem to talk about desperation, futility and manifest destiny or (worse yet) "anti-pundit pundits" who try to suggest that predictions and forecasting are useless (usually because they suck at it), I gladly offer a practical roundup of impending development, innovation and some incremental evolution for your enjoyment.

You know, good news.

As Mogull mentioned, I don’t require a Cray X-MP/48, chicken bones & voodoo or a prehensile tail to make my picks.  Rather, I grab a nice cold glass of Vitamin G (Guinness), sit down and think for a minute or two, dwelling on my super l33t powers of common sense and pragmatism with just a pinch of futurist wit.

Many of these items have been underway for some time, but 2008 will be a banner year for these topics as well as the previously-described "opportunities for improvement…"

That said, let’s roll with some of the goodness we can look forward to in the coming year.  This is not an exhaustive list by any means, but some examples I thought were important and interesting:

  1. More robust virtualization security toolsets with more native hypervisor/vmm accessibility
    Though it didn’t start with the notion of security baked in, virtualization for all of its rush-to-production bravado will actually yield some interesting security solutions that help tackle some very serious challenges.  As the hypervisors become thinner, we’re going to see the management and security toolsets gain increased access to the guts of the sausage machine in order to effect security appropriately, and this will be the year we see the virtual switch open up to third parties and more robust APIs for security visibility and disposition appear.
     
  2. The focus on information-centric security survivability graduates from v1.0 to v1.1
    Trying to secure the network and the endpoint is like herding cats, and folks are tired of dumping precious effort on deploying kitty litter around the Enterprise to soak up the stinky spots.  Rather, we’re going to see folks really start to pay attention to information classification, extensible and portable policy definition, cradle-to-grave lifecycle management, and invest in technology to help get them there.

    Interestingly, the current maturity of features/functions such as NAC and DLP has actually helped us get closer to managing our information and information-related risks.  The next generation of these offerings, in combination with many of the other elements I describe herein and their consolidation into the larger landscape of management suites, will actually start to deliver on the promise of focusing on what matters — the information.
     

  3. Robust role-based policy, identity and access management coupled with entitlement, geo-location and federation…oh, and infrastructure, too!
    We’re getting closer to being able to enforce policy not only based upon source/destination IP address, switch and router topology and the odd entry in Active Directory on a per-application basis, but rather holistically, based upon robust lifecycle-focused role-based policy engines that allow us to tie in all of the major enterprise components that sit along the information supply chain.

    Who, what, where, when, how and ultimately why will be the decision points considered by the next generation of solutions in this space.  Combine the advancements here with item #2 above, and someone might actually start smiling.

    If you need any evidence of the convergence/collision of the application-oriented with the network-oriented approach and a healthy overlay of user entitlement provisioning, just look at the about-face Cisco just made regarding TrustSec.  Of course, we all know that it’s not a *real* security concern/market until Cisco announces they’ve created the solution for it 😉
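A who/what/where/when decision engine of the kind described above might look, in crude sketch form, something like the following. The roles, resources and rules here are entirely hypothetical (mine, not any vendor's); the point is simply that the decision inputs extend well beyond source/destination IP:

```python
# Hypothetical sketch of a context-aware policy decision: the rules and
# names below are illustrative only, not any particular product's model.

from dataclasses import dataclass

@dataclass
class Request:
    role: str          # who is asking
    resource: str      # what they want
    location: str      # where from, e.g. "office", "vpn", "public"
    hour: int          # when, 0-23

def decide(req: Request) -> bool:
    # Sensitive finance data: finance roles only, from trusted
    # networks, during business hours.
    if req.resource == "finance-db":
        return (req.role == "finance"
                and req.location in ("office", "vpn")
                and 8 <= req.hour < 18)
    # Everything else: any role, as long as the network isn't public.
    return req.location != "public"

print(decide(Request("finance", "finance-db", "office", 10)))  # True
print(decide(Request("finance", "finance-db", "public", 10)))  # False
```

The same shape extends naturally to geo-location, entitlement lookups and federated attributes: each is just another input to `decide`.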
     

  4. Next Generation Networks gain visibility as they redefine the compute model of today
    Just as there exists a Moore’s curve for computing, there exists an overlapping version for networking; it just moves slower given the footprint.  We’re seeing the slope of this curve starting to trend up this coming year, and it’s much more than bigger pipes, although that doesn’t hurt either…

    These next generation networks will really start to emerge visibly in the next year as the existing networking models start to stretch the capabilities and capacities of existing architecture, and new paradigms drive requirements that dictate a much more modular, scalable, resilient, high-performance, secure and open transport upon which to build distributed service layers.

    How networks and service layers are designed, composed, provisioned, deployed and managed — and how that intersects with virtualization and grid/utility computing — will start to really sink home the message that "in the cloud" computing has arrived.  Expect service providers and very large enterprises to adopt these new computing climates first, with a trickle-down to smaller business via SaaS and hosted service operators to follow.

    BT’s 21CN (21st Century Network) is a fantastic example of what we can expect from NGN as the demand for higher-speed, more secure, more resilient and more extensible interconnectivity really takes off.
     

  5. Grid and distributed utility computing models will start to creep into security
    A really interesting by-product of the "cloud compute" model is that as data, storage, networking, processing, etc. get distributed, so shall security.  In the grid model, one doesn’t care where the actions take place so long as service levels are met and the experiential and business requirements are delivered.  Security should be thought of in exactly the same way.

    The notion that you can point to a physical box and say it performs function ‘X’ is so last Tuesday.  Virtualization already tells us this.  So, imagine if your security processing isn’t performed by a monolithic appliance but instead is contributed to in a self-organizing fashion wherein the entire ecosystem (network, hosts, platforms, etc.) contributes to the identification of threats and vulnerabilities and functions to contain, quarantine and remediate policy exceptions.

    Sort of sounds like that "self-defending network" spiel, but not focused on the network, and with common telemetry and distributed processing of the problem.

    Check out Red Lambda’s cGrid technology for an interesting view of this model.
     

  6. Precision versus accuracy will start to legitimize prevention as the technology starts to allow us the confidence to turn the corner beyond detection
    In a sad commentary on the last few years of the security technology grind, we’ve seen the prognostication that intrusion detection is dead and the deadpan urging of the security vendor cesspool convincing us that we must deploy intrusion prevention in its stead.

    Since there really aren’t many pure-play intrusion detection systems left anyway, the reality is that most folks who have purchased IPSs seldom put them in in-line mode, and when they do, they seldom turn on the "prevention" policies; instead they just have them detect attacks, blink a bit and get on with it.

    Why?  Mostly because while the threats have evolved, the technology implemented to mitigate them hasn’t — we’re either stuck with giant port/protocol colanders or signature-driven IPSs that are nothing more than IDSs with the ability to send RST packets.

    So the "new" generation of technology has arrived and may offer some hope of bridging that gap.  This is due not only to really good COTS hardware but also really good network processors and better software written (or re-written) to take advantage of both.  Performance, efficacy and efficiency have begun to give us greater visibility as we get away from making decisions based on ports/protocols (feel free to debate proxies vs. ACLs vs. stateful inspection…) and move to identifying application usage, getting us close to being able to make "real time" decisions on content in context by examining the payload and data.  See #2 above.

    The precision versus accuracy discussion is focused on being able to really start trusting in the ability of prevention technology to detect, defend and deter against "bad things" with a fidelity and resolution that has very low false-positive rates.

    We’re getting closer with the arrival of technology such as Palo Alto Networks’ solutions — you can call them whatever you like, but enforcing both detection and prevention using easy-to-define policies based on application (and telling the difference between any number of apps all using port 80/443) is a step in the right direction.
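Why those very low false-positive rates matter so much before anyone trusts in-line prevention can be sketched with a little base-rate arithmetic. The numbers below are mine, purely illustrative:

```python
# Toy illustration of the base-rate problem: even a detector with a low
# false-positive *rate* yields mostly-false *alerts* when attacks are
# rare relative to benign traffic.  All figures here are invented.

def alert_precision(detection_rate, false_positive_rate, attack_fraction):
    """Fraction of fired alerts that are true positives."""
    true_positives = detection_rate * attack_fraction
    false_positives = false_positive_rate * (1.0 - attack_fraction)
    return true_positives / (true_positives + false_positives)

# 99% detection, 1% false positives, but only 1 in 10,000 flows hostile:
p = alert_precision(0.99, 0.01, 0.0001)
print(f"alert precision: {p:.2%}")  # under 1% -- most alerts are false
```

Until that precision climbs high enough that blocking on an alert is safer than ignoring one, nobody sane flips an IPS from detect to prevent.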
     

  7. The consumerization of IT will cause security and IT as we know them to radically change
    I know it’s heretical, but 2008 is going to really push the existing IT and security architectures to their breaking points, which is going to mean that instead of saying "no," we’re going to have to focus on how to say "yes, but with this incremental risk" and find solutions for an ever more mobile and consumerist enterprise.

    We’ve talked about this before, and most security folks curl up into a fetal position when you start mentioning the adoption by the enterprise of social networking, powerful smartphones, collaboration tools, etc.  The fact is that the favorable economics, agility, flexibility and efficiencies gained with the consumerization of IT outweigh the downsides in the long run.  Let’s not forget the new generation of workers entering the workforce.

    So, since information is going to be leaking from our Enterprises like a sieve on all manner of devices and by all manner of methods, it’s going to force our hands and cause us to focus on being information-centric: stop worrying about the "perimeter problem," stop focusing on the network and the host, and start dealing with managing the truly important assets while allowing our employees to do their jobs in the most effective, collaborative and efficient manner possible.

    This disruption will be a good thing, I promise.  If you don’t believe me, ask BP — one of the largest enterprises on the planet.  Since 2006 they’ve put some amazing initiatives into play, like this little gem:

    Oil giant BP is pioneering a "digital consumer" initiative that will give some employees an allowance to buy their own IT equipment and take care of their own support needs.

    The project, which is still at the pilot stage, gives select BP staff an annual allowance — believed to be around $1,000 — to buy their own computing equipment and use their own expertise and the manufacturer’s warranty and support instead of using BP’s IT support team.

    Access to the scheme is tightly controlled and those employees taking part must demonstrate a certain level of IT proficiency through a computer driving licence-style certification, as well as signing a diligent use agreement.

    …combined with this:

    Rather than rely on a strong network perimeter to secure its systems, BP has decided that these laptops have to be capable of coping with the worst that malicious hackers can throw at them, without relying on a network firewall.

    Ken Douglas, technology director of BP, told the UK Technology Innovation & Growth Forum in London on Monday that 18,000 of BP’s 85,000 laptops now connect straight to the internet even when they’re in the office.

  8. Desktop Operating Systems become even more resilient
    The first steps taken by Microsoft and Apple in Vista and OS X (Leopard), as examples, have begun to chip away at plugging some of the security holes that have plagued them due to the architectural "feature" that an open execution runtime model delivers.  Honestly, nothing short of a do-over will ultimately mitigate this problem, so instead of suggesting that incremental improvement is worthless, we should recognize that our dark overlords are trying to make things better.

    Elements in Vista such as ASLR, NX, and UAC, combined with integrated firewalling, anti-spyware/anti-phishing, disk encryption, integrated rights management, Protected Mode IE, etc., are all good steps in a "more right" direction than previous offerings.  They’re in response to lessons learned.

    On the Mac, we also see ASLR, sandboxing, input management, better firewalling and better disk encryption, which are also notable improvements.  Yes, we’ve got a long way to go, but this means that OS vendors are paying more attention, which will lead to more stable and secure platforms upon which developers can write more secure code.

    It will be interesting to see how these "more secure" OSs factor into the virtualization security discussed in #1 above.

    Vista SP1 is due to ship in 2008 and will include APIs through which third-party security products can work with kernel patch protection on Vista x64, more secure BitLocker drive encryption and a better Elliptic Curve Cryptography PRNG (pseudo-random number generator).  Follow-on releases to Leopard will likely feature security enhancements beyond those delivered this year.
     

  9. Compliance stops being a dirty word & Risk Management moves beyond buzzword
    Today we typically see the role of information security described as blocking and tackling: focused on managing threats and vulnerabilities balanced against the need to be "compliant" with some arbitrary set of internal and external policies.  In many people’s assessment, then, compliance equals security.  This is an inaccurate and unfortunate misunderstanding.

    In 2008, we’ll see many of the functions of security — administrative, policy and operational — become much more visible and transparent to the business, and we’ll see a renewed effort placed on compliance within the scope of managing risk, because the former is actually a by-product of a well-executed risk management strategy.

    We have compliance as an industry today because we manage technology threats and vulnerabilities and don’t manage risk.  Compliance is actually nothing more than a way of forcing transparency and plugging a gap between the two.  For most, it’s the best they’ve got.

    What’s traditionally prevented the transition from threat/vulnerability management to risk management is the principal focus on technology, combined with the lack of a good risk assessment framework and thus a lack of understanding of business impact.

    The availability of mature risk assessment frameworks (OCTAVE, FAIR, etc.), combined with the maturity of IT and governance frameworks (COBIT, ITIL) and the readiness of the business and IT/security cultures to accept risk management as a language and action set with which they need to be conversant, will yield huge benefits this year.

    Couple that with solutions like Skybox and you’ve got the makings of a strategic risk management program that can bring security into closer alignment with the business.
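The arithmetic underneath most quantitative risk assessments (frameworks like FAIR formalize far more than this) can be sketched in a few lines. Every figure below is invented for illustration:

```python
# Sketch of classic annualized-loss-expectancy (ALE) arithmetic, the
# starting point for most quantitative risk assessments.  All numbers
# are hypothetical, chosen only to show the mechanics.

def single_loss_expectancy(asset_value, exposure_factor):
    # SLE: expected loss from a single occurrence of the threat event
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle, annual_rate_of_occurrence):
    # ALE: expected yearly loss = SLE x ARO
    return sle * annual_rate_of_occurrence

# Hypothetical: a $2M customer database, a breach that exposes 25% of
# its value, expected roughly once every four years (ARO = 0.25).
sle = single_loss_expectancy(2_000_000, 0.25)   # $500,000 per event
ale = annualized_loss_expectancy(sle, 0.25)     # $125,000 per year
print(sle, ale)
```

That $125,000/year figure is exactly the kind of number the business can weigh against the cost of a control, which is the whole point of moving from threat/vulnerability management to risk management.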
     

  10. Rich Mogull will, indeed, move in with his mom and start speaking Klingon
    ’nuff said.

So, there we have it.  A little bit of sunshine in your otherwise gloomy day.

/Hoff

Running With Scissors…Security, Survivability, Management, Resilience…Whatever!

October 26th, 2007 4 comments

Runningscissors_3
Pointy Things Can Hurt

Mom always told me not to run with scissors because she knew that ugly things might happen if I did.  I seem to have blocked this advice out of my psyche.  Running with scissors can be exhilarating.

My latest set of posts represents the equivalent of blogging with scissors, it seems.

Sadly, sometimes one of the only ways to get people to intelligently engage in contentious discourse on a critical element of our profession is to bait them into a game of definitional semantics; basically pushing buttons and debating nuance to finally arrive at the destination of an "AHA! moment."

Either that, or I just suck at making a point and have to go through all the machinations to arrive at consensus.  I’m the first to admit that I oftentimes find myself misunderstood, but I’ve come to take this with a grain of salt and try to learn from my mistakes.

I don’t conspire to be tricky or otherwise employ cunning or guile to goad people with the goal of somehow making them look foolish, but rather have discussions that need to be had.  You’ll just have to take my word on that.  Or not.

You Say Potato, I say Po-ta-toe…

There are a number of you smart cookies who have been reading my posts on Information Survivability and have asked a set of very similar questions that are really insightful and provoke exactly the sort of discussion I had hoped for.

Interestingly, folks continue to argue definitional semantics without realizing that we’re mostly saying the same thing.  Most of you bristling really aren’t pushing back on the functional aspects of Information Security vs. Information Survivability.  Rather, it seems that you’ve become mired in the selection of words rather than the meme.

What do I mean?  Folks are spending far too much time debating which verb/noun to use to describe what we do and we’re talking past each other.  Granted, a lot of this is my fault for the way I choose to stage the debate and given this medium, it’s hard to sometimes re-focus the conversation because it becomes so fragmented.

Rich Mogull posted a great set of commentary on this titled "Information Security vs. Information Survivability: Retaking Our Vocabulary," wherein he eloquently rounds this out:

The problem is that we’ve lost control of our own vocabulary. “Information security” as a term has come to define merely a fraction of its intended scope.

Thus we have to use terms like security risk management and information survivability to re-define ourselves, despite having a completely suitable term available to us. It’s like the battle between the words “hacker” and “cracker”. We’ve lost that fight with “information security”, and thus need to use new language to advance the discussion of our field.

When Chris, myself, and others talk about “information survivability” or whatever other terms we’ll come up with, it’s not because we’re trying to redefine our practice or industry, it’s because we’re trying to bring security back to its core principles. Since we’ve lost control of the vocabulary we should be using, we need to introduce a new vocabulary just to get people thinking differently.

As usual, Rich follows up and tries to smooth this all out.  I’m really glad he did because the commentary that followed showed exactly the behavior I am referring to in two parts.  This was from a comment left on Rich’s post.  It’s not meant to single out the author but is awkwardly profound in its relevance:

[1] This is the crux of the biscuit. Thanks for saying this. I don’t like the word “survivability” for the pessimistic connotations it has, as you pointed out. I also think it’s a subset of information security, not the other way around.

I can’t possibly fathom how one would suggest that Survivability, which encompasses risk management, resilience and classical CIA assurance with an overarching shift in business-integrated perspective, can be thought of as a subset of a narrow, technically-focused practice like that which Information Security has become.  There’s not much I can say more than I already have on this topic.

[2] Now, if you wanted to go up a level to *information management*, where you were concerned not only with getting the data to where it needs to be at the right time, but also with getting *enough* data, and the *right* data, then I would buy that as a superset of information security. Information management also includes the practices of retaining the right information for as long as it’s needed and no longer, and reducing duplication of information. It includes deciding which information to release and which to keep private. It includes a whole lot more than just security.

Um, that’s what Information Survivability *is.*  That’s not what Information Security has become, however, as the author clearly points out.  This is like some sort of weird passive-aggressive recursion.

So what this really means is that people are not really disagreeing that the functional definition of Information Security is outmoded; they just don’t like the term survivability.  Fine!  Call it what you will: Information Resilience, Information Management, Information Assurance.  But here’s why you can’t call it Information Security (from Lance’s comment here):

It seems like the focus here is less on technology, and more on process and risk management. How is this approach different from ISO 27000, or any other ISMS? You use the word survivability instead of business process, however other than that it seems more similar than different.

That’s right.  It’s not a technology-only focus.  Survivability (or whatever you’d like to call it) focuses on integrating risk assessment and risk management concepts with business blueprinting/business process modeling and applying just the right amount of Information Security where, when and how needed to align to the risk tolerance (I dare not say "appetite") of the business.

In a "scientific" taste test, 7/10 information security programs are focused on compliance and managing threats and vulnerabilities.  They don’t holistically integrate and manage risk.  They deploy a bunch of boxes using a cost-model that absolutely sucks donkey…  See Gunnar’s posts on the matter.

Lipstickpig
There are more similarities than differences in many cases, but the reality is that most people today in our profession completely abuse the term "risk."  Not intentionally, mind you, but for the same reason Information Security has been bastardized and spread liberally like some great mitigation marmalade across the toasty canvas of our profession.

The short of this is that you can playfully toy with putting lipstick on a pig (which I did for argument’s sake) and call what you do anything you like.

However, if what you do, regardless of what you call it and no matter how much "difference" you seem to think you make, isn’t in alignment with the strategic initiatives of the company, your function becomes irrelevant over time.  Or at least a giant speedbump.

Time for Jiu Jitsu practice!  With Scissors!

/Hoff

Why Security Should Embrace Disruptive Innovation — and Become Innovative In the Process

October 24th, 2007 No comments

Innovationrotated
One of the more interesting things I get to do in my job is steer discussions with customers and within industry on the topic of innovation.  After all, the ‘I’ word is in my official title: Chief Architect, Security Innovation.  You don’t often see those two words used in unison.

Specifically, I get my jollies discussing with folks up and down the stack how "Information Security" can and should embrace disruptive technology/innovation and actually become innovative in the process.

It’s all a matter of perspective — and clever management of how, what and why you do what you do…and as we’ve discovered, how you communicate that.

Innovation can simply be defined as people implementing new ideas to creatively solve problems and add value.  How you choose to define "value" really depends upon your goal and how you choose to measure the impact (or difference as some like to describe it) on the business you serve.  We don’t need to get into that debate for the moment, however.

Disruptive technology/innovation is a technology, product or service that ultimately overturns the dominant market leader, technology or product.  This sort of event can happen quickly or gradually and can be evolutionary or revolutionary in execution.  In many cases, the technology itself is not the disruptive catalyst, but rather the strategy, business model or marketing/messaging creates the disruptive impact.

It’s really an interesting topic and an important one at this point in time; we’ve got a tough row to hoe in the "Information Security" world.  The perception of what we do and what value we add is again being called into question.  This is happening because while the business innovates to gain competitive advantage, we present bigger bills that suckle profit away from the bottom line without being viewed as contributing to the innovative process, but rather strictly as a cost of doing business.

I’m delivering my keynote at the Information Security Decisions conference on this very topic.  The presentation will demonstrate how, even with emerging disruptive innovations that have profound impact upon what we do (SaaS, the consumerization of IT and virtualization), "Information Security" practitioners and managers can not only embrace these technologies in a prescribed and rational manner, but do so in a way that provides alignment to the business and turns disruptive technology into an opportunity rather than a curse.

If you’re in Chicago on November 5th at the ISD conference, come throw stuff at me…they’ve got a great cast of speakers queued up: Bruce Schneier, Howard Schmidt, Eugene Spafford, David Litchfield, Dave Dittrich, David Mortman, Stephen Bonner, Pete Lindstrom, and many more.  It’ll be a good conference.

/Hoff

On Castles: Moats, Machicolations, Burning Oil and Berms Vs. The Trebuchet (or DMZ’s teh Sux0r!)

October 16th, 2007 1 comment

Trebuchet
Check out the comments in the last post regarding my review of the recently released film titled "Me and My DMZ – ‘Til Death Do Us Part"

Carrying forward the mental exercise of debating the application of the classical DMZ deployment and its traceable heritage to the concentric levels of defense-in-depth from ye olde "castle/moat" security analogy, I’d like to admit into evidence one interesting example of disruptive technology that changed the course of medieval castle siege warfare, battlefield mechanics and history forever: the Trebuchet.

The folks that advocated concentric circles of architectural defense-in-depth as their strategy would love to tell you about the Trebuchet and its impact.  The problem is, they’re all dead.  Yet I digress.

The Trebuchet represented a quantum leap in the application of battlefield weaponry and strategy that all but ended the utility of defense-in-depth for castle dwellers.  Trebuchets were amazingly powerful weapons and could launch projectiles up to several hundred pounds with a range of up to about 300 yards!

The Trebuchet allowed for the application of technology that put the advantages of time, superior firepower and targeted precision squarely in the hands of the attacker and left the victim to simply wait until the end came. 

To review the basics, a castle is a defensive structure built around a keep or center structure.  The castle is a fortress: a base from which to mount a defense against a siege, or a center of operations from which to conduct an attack.  The goal of these defenses is to repel, delay, deny, disrupt and incapacitate the enemy.  However, the castle on its own will not provide a defense against a determined siege force.

One interesting point is that the assumption holds true that all the insiders are "friendlies…"

Here we have an illustration of a well-fortified castle with the Keep in the center surrounded by multiple cascading levels of defense spiraling outward.  Presumably as an attacker breached one defensive boundary, they would encounter yet another with the added annoyance of defensive/offensive tactics such as archers, spiked pits, burning oil, etc.

Breaching one of these things took a long time, cost a lot of lives and meant a significant investment in time, materials, and effort.  Imagine what it was like for the defenders!

Enter the Trebuchet.  You wheel one of these bad boys within effective strike range, but out of range of the castle defenders’ archers, and launch massive projectiles toward and over the walls, obliterating them and most anything nearby.  You have a BBQ while a rotated crew of bombardiers merrily flings hunks of pain toward the hapless and helpless defenders.  They can either stay and die or run out the gate and die.

This goes on and on.  Stuff inside starts to burn.  Walls crumble.  People start to starve.  The attackers then start flinging over corpses — animals, humans, whatever.  Days and perhaps weeks pass.  Disease sets in.  The bombardment continues until there are no defenses left, most of the defenders have either died or plan to, and the enemy marches in and dispatches the rest.

What’s the defense against a Trebuchet?  You mean besides rebuilding your castle in the middle of a lake, out of range, making it exceedingly difficult to live as an inhabitant?  Not a lot.  In short, artillery meant the end of the castle as a defensive measure.  It simply stopped working.

Let’s be intellectually honest here within the context of this argument.  We’re facing our own version of the Trebuchet with the tactics, motivation, skill, tools and directed force of how attackers engage us today.  Most modern day castle technology apologists are content to simply sit in their keeps, playing Parcheesi and extolling the virtues of their fortifications, while the determined leviathan force smashes down the surrounding substructure.

There came a point in the illustration above wherein the art of warfare and the technology involved completely changed the playing field.  We’ve reached that point now in information warfare, yet people still want to build castles.

What I think people really want to say privately in their stoic defense of the DMZ and defense-in-depth is that they can’t think of anything else that’s better at the moment and they’re simply trying to wait out the bombardment.  Too bad the attackers aren’t governed by such motivating encouragement.

Look, I’m not trying to be abrasively critical of what people have done — I’ve done it, too.  I’m also not suggesting that we immediately forklift what’s in place now; that’s not feasible or practical.  However, I am being critical of people who continue to hang onto and defend outmoded concepts and suggest it’s an acceptable idea to fight a conventional war against a force using guerrilla tactics and superior tools, with nothing but time and resources on their hands.

There has to be a better way than just waiting to die.

If you don’t think differently about how you’re going to focus your efforts and with what, here’s what you have to look forward to:

Castleruins

/Hoff

The DMZ Isn’t Dead…It’s Merely Catatonic

October 16th, 2007 5 comments

Joel Espenschied over at Computerworld wrote a topical piece today titled "The DMZ’s not dead…whatever the vendors are telling you."  Joel basically suggests that, due to poorly written software, complex technology such as Web Services and SOA, and poor operational models, the DMZ still provides the requisite layers of defense in depth to deliver the security we need.

I’m not so sure I’d suggest that DMZ’s provide "defense in depth."  I’d suggest they provide segmentation and isolation, but if you look at most DMZ deployments they represent the typical Octopus approach to security: a bunch of single segments isolated by one firewall (or a cluster of them).  It’s the crap surrounding these segments that is appropriately tagged with the DiD moniker.

A DMZ is an abstracted representation of a security architecture, while I argue that defense in depth is a control implementation strategy…and one I think needs to be dealt with as honestly by security/network teams as it is by Enterprise Architects.  My simple truth is that there are now hundreds if not thousands of "micro-perimeterized single host" DMZ’s in most enterprise networks today, and we lean on defense in depth as a crutch and a bad habit because we’re treating the symptom and not the problem — and it’s the only thing that most people know.

By the way, defense in depth doesn’t mean 15 network security boxes piled on top of one another.  Defense in depth really spoke to a layered model entailing a holistic view of the "stack" — but in a coordinated manner.  You must focus on data, applications, host and networking as equal and objective recipients of investment in a protection strategy, not just one.

Too idealistic?  Waiting for me to run out of air holding my breath for secure applications, operating systems and protocols?  Good.  We’ll see who plays chicken first. 

You keep designing for obsolescence and the way things were 10 years ago while I look at what the business needs and where its priorities are and how best to balance risk with sharing information.  We’ll see who’s better prepared in the next three year refresh cycle to tackle the problems that arise as the business continues to embrace disruptive technology while you become the former by focusing on the latter.

There’s a real difference between managing threats and vulnerabilities versus managing risk.  Back to the article.

Two quotes stand out in the bunch, and I’ll focus on them:

The philosophy of Defense in Depth is based on the idea that stuff
invariably fails or is cracked, and it ought to take more than one
breach event before control is lost over data or processes. But with
this "dead DMZ" talk, the industry seems to be inching away from that
idea — and toward potential trouble.

Right.  I see how effective that’s been with all the breaches thus far.  Please demonstrate how defense in depth has protected us against XSS, CSRF, SQL Injection and fuzzing so far?  How about basic wireless security issues?  How about data leakage?  Your precious design anachronism isn’t looking so good at this point.  You spend hundreds of thousands of dollars and are still completely vulnerable.

That’s because your defense in depth is really defense in breadth and it’s being applied to the wrong sets of problems.  Where’s the security value in that?

The talking heads may say the DMZ is dead, but those actually managing
enterprise IT installations shouldn’t give it up so easily. Until no
mistakes are made in application coding, placement, operations and
other processes — and you know better than to hold your breath —
layered network security controls still provide a significant barrier
to loss of data or other breach. The advice regarding application
configuration and optimization is useful and developers’ efforts to
make that work are encouraging, but when it comes to the real-world
network, organizations can’t just ignore the reality of undiscovered
vulnerabilities and older systems still lurking in the corners.

Look, the reality is that "THE DMZ" is dead, but it doesn’t mean "the DMZ" is…it simply means you have to reassess and redefine both your description and expectation of what a DMZ and defense in depth really mean to your security posture given today’s attack surfaces.

Keep your firewalled DMZ Octopi for now, but realize that with the convergence of technologies such as virtualization, mobility, Mashups, SaaS, etc., the reality is that a process or data could show up running somewhere other than where you thought it was — VMotion is a classic example.

If security policies don’t/can’t travel with affinity to the resources they protect, your DMZ doesn’t mean squat if I just VMotioned a VM to a segment that doesn’t have a firewall, IDS, IPS, WAF and Proxy in front of it.
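The segment-bound versus workload-bound distinction can be sketched in a few lines. This is a deliberately toy model of my own (the dictionaries and function are hypothetical, not any hypervisor’s or vendor’s API): when policy is keyed to the network segment, a migrated VM silently loses its protections; when it’s keyed to the workload itself, the controls travel with it.

```python
# Toy sketch: policy keyed to the wire vs. policy keyed to the workload.
# All names here are illustrative, not a real virtualization API.

segment_policies = {"dmz-1": {"firewall", "ids", "waf"}}       # bound to the segment
workload_policies = {"web-vm-01": {"firewall", "ids", "waf"}}  # bound to the VM

def effective_controls(vm, segment, model):
    """Return the set of controls actually in front of the VM."""
    if model == "segment":
        return segment_policies.get(segment, set())
    return workload_policies.get(vm, set())

# Before VMotion: the VM sits behind the DMZ stack under either model.
assert effective_controls("web-vm-01", "dmz-1", "segment") == {"firewall", "ids", "waf"}

# After VMotion to an unprotected segment:
print(effective_controls("web-vm-01", "lab-segment", "segment"))   # set() -- naked
print(effective_controls("web-vm-01", "lab-segment", "workload"))  # controls travel
```

Same VM, same data, same instant; the only thing that changed was which key the policy hangs off of.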

THAT’S what these talking heads are talking about while you’re intent on sticking yours in the sand.  If you don’t start thinking about how these disruptive technologies will impact you in the next 12 months, you’ll be reading about yourself in the blogosphere breach headlines soon enough.

Think different.

/Hoff

Captain Stupendous — Making the Obvious…Obvious! Jericho Redux…

September 19th, 2007 8 comments

Sometimes you have to hurt the ones you love. 

I’m sorry, Rich.  This hurts me more than it hurts you…honest.

The Mogull decides that rather than contributing meaningful dialog on the meat of the topic at hand, he would add to the FUD regarding the messaging of the Jericho Forum that I was actually trying to wade through.

…and he tried to be funny.  Sober.  Painful combination.

In a deliciously ironic underscore to his BlogSlog, Rich caps off his post with a brilliant gem of obviousness of his own whilst chiding everyone else to politely "stay on message" even when he leaves the reservation himself:

"I formally
submit “buy secure stuff” as a really good one to keep us busy for a
while."

<phhhhhht> Kettle, come in over, this is Pot. <phhhhhhttt> Kettle, do you read, over? <phhhhhhht>  It’s really dark in here <phhhhhhttt>

So if we hit the rewind button for a second, let’s revisit Captain Stupendous’ illuminating commentary.  Yessir.  Captain Stupendous it is, Rich, since the franchise on Captain Obvious is plainly over-subscribed.

I spent my time in my last post suggesting that the Jericho Forum’s message is NOT that one should toss away their firewall.  I spent my time suggesting that rather reacting to the oft-quoted and emotionally flammable marketing and messaging, folks should actually read their 10 Commandments as a framework. 

I wish Rich would have read them because his post indicates to me that the sensational hyperbole he despises so much is hypocritically emanating from his own VoxHole. <sigh>

Here’s a very high-level generalization that I made which was to take the focus off of "throwing away your firewall":

Your perimeter *is* full of holes so what we need to do is fix the problems, not the symptoms.  That is the message.

And Senor Stupendous suggested:

Of course the perimeter is full of holes; I haven’t met a security
professional who thinks otherwise. Of course our software generally
sucks and we need secure platforms and protocols. But come on guys,
making up new terms and freaking out over firewalls isn’t doing you any
good. Anyone still think the network boundary is all you need? What? No
hands? Just the “special” kid in back? Okay, good, we can move on now.

You’re missing the point — both theirs and mine.  I was restating the argument as a setup to the retort.  But who can resist teasing the mentally challenged for a quick guffaw, eh, Short Bus?

Here is the actual meat of the Jericho Commandments.  I’m thrilled that Rich has this all handled and doesn’t need any guidance.  However, given how I just spent my last two days, I know that these issues are not only relevant, but require an investment of time, energy, and strategic planning to make actionable and remind folks that they need to think as well as do.

I defy you to show me where this says "throw away your firewalls."

Repeat after me: THIS IS A FRAMEWORK and provides guidance and a rational, strategic approach to Enterprise Architecture and how security should be baked in.  Please read this without the FUDtastic taint:

Jericho_comm1Jericho_comm2

Rich sums up his opus with this piece of reasonable wisdom, which I wholeheartedly agree with:

You have some big companies on board and could use some serious
pressure to kick those market forces into gear.

…and to warm the cockles of your heart, I submit they do and they are.  Spend a little time with Dr. John Meakin, Andrew Yeomans, Stephen Bonner, Nick Bleech, etc. and stop being so bloody American 😉  These guys practice what they preach and as I found out, have been for some time.

They’ve refined the messaging some time ago.  Unload the baggage and give it a chance.

Look at the real message above and then see how your security program measures up against these topics and how your portfolio and roadmap provides for these capabilities.

Go forth and do stupendous things. <wink>

/Hoff

The British Are Coming! In Defense (Again) of the Jericho Forum…

September 17th, 2007 10 comments

The English are coming…and you need to give them a break.  I have.

Back in 2006, after numerous frustrating discussions dating back almost three years without a convincing conclusion, I was quoted in an SC Magazine article titled "World Without Frontiers" which debated quite harshly the Jericho Forum’s evangelism of a security mindset and architecture dubbed as "de-perimeterization."

Here’s part of what I said:

Some people dismiss Jericho as trying to re-invent the wheel. "While
the group does an admirable job raising awareness, there is nothing
particularly new either in what it suggests or even how it suggests we
get there," says Chris Hoff, chief security strategist at Crossbeam
Systems.

"There is a need for some additional technology and
process re-tooling, some of which is here already – in fact, we now
have an incredibly robust palette of resources to use. But why do we
need such a long word for something we already know? You can dress
something up as pretty as you like, but in my world that’s not called
‘deperimeterisation’, it’s called a common sense application of
rational risk management aligned to the needs of the business."   

Hoff
insists the Forum’s vision is outmoded. "Its definition speaks to what
amounts to a very technically focused set of IT security practices,
rather than data survivability. What we should come to terms with is
that confidentiality, integrity and availability will be compromised.
It’s not a case of if, it’s a case of when.

The focus should
be less on IT security and more on information survivability; a
pervasive enterprise-wide risk management strategy and not a
narrowly-focused excuse for more complex end-point products," he says.

But is Jericho just offering insight into the obvious? "Of course,"
says Hoff. "Its suggestion that "deperimeterisation" is somehow a new
answer to a set of really diverse, complex and long-standing IT
security issues… simply ignores the present and blames the past," he
says.

"We don’t need to radically deconstruct the solutions
universe to arrive at a more secure future. We just need to learn how
to appropriately measure risk and quantify how and why we deploy
technology to manage it. I admire Jericho’s effort, and identify with
the need. But the problem needs to be solved, not renamed."

I have stated previously that this was an unfortunate reaction to the marketing of the message and not the message itself, and I’ve come to understand what the Jericho Forum’s mission and its messaging actually represents.  It’s a shame that it took me that long and that others continue to miss the point.

Today Mike Rothman commented about NetworkWorld’s coverage of the latest Jericho Forum in New York last week.  The byline of the article suggested that "U.S. network execs clinging to firewalls" and it seems we’re right back on the Hamster Wheel of Pain, perpetuating a cruel myth.

After all this time, it appears that the Jericho Forum is apparently still suffering from a failure to communicate — there exists a language gap — probably due to that allergic issue we had once to an English King and his wacky ideas relating to the governance of our "little island."  Shame, that.

This is one problem that this transplanted Kiwi-American (same Queen after-all) is motivated to fix.

Unfortunately, the Jericho Forum’s message has become polluted and marginalized thanks to a perpetuated imprecise suggestion that the Forum recommends that folks simply turn off their firewalls and IPS’s and plug their systems directly into the Internet, as-is.

That’s simply not the case, and in fact the Forum has recognized some of this messaging mess, and both softened and clarified the definition by way of the issuance of their "10 Commandments." 

You can call it what you like: de-perimeterization, re-perimeterization or radical externalization, but here’s what the Jericho Forum actually advocates, which you can read about here:

De-perimeterization explained
    The huge explosion in business use of the Web protocols means that:
   

  • today the traditional "firewalled" approach to securing a network boundary is at best flawed, and at worst ineffective. Examples include:
           

    • business demands that tunnel through perimeters or bypass them altogether
    • IT products that cross the boundary, encapsulating their protocols within Web protocols
    • security exploits that use e-mail and Web to get through the perimeter.

          

  • to respond to future business needs, the break-down of the traditional
    distinctions between “your” network and “ours” is inevitable
  • increasingly, information will flow between business organizations over
    shared and third-party networks, so that ultimately the only reliable
    security strategy is to protect the information itself, rather than the
    network and the rest of the IT infrastructure   

This
trend is what we call “de-perimeterization”. It has been developing for
several years now. We believe it must be central to all IT security
strategies today.

The de-perimeterization solution
   
While
traditional security solutions like network boundary technology will
continue to have their roles, we must respond to their limitations. In
a fully de-perimeterized network, every component will be independently
secure, requiring systems and data protection on multiple levels, using
a mixture of

  • encryption
  • inherently-secure computer protocols
  • inherently-secure computer systems
  • data-level authentication

The design principles that guide the development of such technology solutions are what we call our “Commandments”, which capture the essential requirements for IT security in a de-perimeterized world.
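The "protect the information itself" idea above is concrete enough to sketch. Here’s a minimal, hedged illustration using only Python’s standard library (the key constant and helper names are my own, and real-world key management — the hard part — is waved away): the data carries its own authentication tag, so a recipient can verify it regardless of whether it arrived over "your" network, "ours", or a third party’s.

```python
import hashlib
import hmac

SHARED_KEY = b"negotiated-out-of-band"  # illustrative; key distribution is the hard part

def seal(data: bytes, key: bytes = SHARED_KEY) -> bytes:
    # Data-level authentication: the payload carries its own MAC,
    # so trust no longer depends on which network segment delivered it.
    tag = hmac.new(key, data, hashlib.sha256).hexdigest().encode()
    return tag + b"." + data

def verify(blob: bytes, key: bytes = SHARED_KEY) -> bytes:
    tag, _, data = blob.partition(b".")
    expected = hmac.new(key, data, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("data failed authentication")
    return data

sealed = seal(b"quarterly-forecast.xls")
assert verify(sealed) == b"quarterly-forecast.xls"  # survives any transport intact
```

Swap the HMAC for a signature or add encryption for confidentiality; the principle is the same — the security property rides with the data, not the perimeter.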

I was discussing these exact points today in a session at an Institute for Applied Network Security conference (and as I have before here), wherein I summarized this as the capability to:

Take a host with a secured OS, connect it into any network using whatever means you find appropriate,
without regard for having to think about whether you’re on the "inside"
or "outside." Communicate securely, access and exchange data in
policy-defined "zones of trust" using open, secure, authenticated and
encrypted protocols.

Did you know that one of the largest eCommerce sites on the planet doesn’t even bother with firewalls in front of its webservers!?  Why?  Because with 10+ Gb/s of incoming HTTP and HTTP/S connections using port 80 and 443 specifically, what would a firewall add that a set of ACLs that only allows port 80/443 through to the webservers cannot?

Nothing.  Could a WAF add value?  Perhaps.  But until then, this is a clear example of a U.S. company that gets it — there’s no utility in adding security in the form of a firewall just because that’s the way it’s always been done.
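The router-ACL point is easy to make concrete. Below is a toy model of my own devising (not any router’s actual syntax) of a stateless permit list equivalent to "permit tcp any host web eq 80/443, implicit deny": for a server that only ever speaks HTTP/S, the ACL and the firewall reach the same accept/drop decision on inbound traffic, with no state table to maintain.

```python
# Toy stateless packet filter, analogous to a router ACL:
#   permit tcp any host WEB eq 80
#   permit tcp any host WEB eq 443
#   (implicit deny everything else)
ALLOWED_PORTS = {80, 443}

def filter_packet(src_ip: str, dst_port: int, proto: str = "tcp") -> str:
    """Return 'permit' or 'deny' exactly as the ACL would -- no state kept."""
    return "permit" if proto == "tcp" and dst_port in ALLOWED_PORTS else "deny"

print(filter_packet("203.0.113.9", 443))              # permit -- customer HTTPS
print(filter_packet("203.0.113.9", 22))               # deny   -- ssh probe
print(filter_packet("203.0.113.9", 80, proto="udp"))  # deny   -- wrong protocol
```

Everything that gets through is port 80/443 traffic either way; anything smarter than that (inspecting what rides *inside* those connections) is the WAF’s job, not the firewall’s.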

From the NetworkWorld article, this is a clear example of the following:

The forum’s view of firewalls is that they no longer meet the needs of businesses that increasingly need to let in traffic
                        to do business. Its deperimeterization thrust calls for using secure applications and firewall protections closer to user devices and servers.

It’s not about tossing away prior investment or abandoning one’s core beliefs; it’s about being honest as to the status of information security/protection/assurance, and adapting appropriately.

Your perimeter *is* full of holes so what we need to do is fix the problems, not the symptoms.
That is the message.

So consider me the self-appointed U.S. Ambassador to our friends across the pond.  The Jericho Forum’s message is worth considering and deserves your attention.

/Hoff

Google Makes Its Move To The Corporate Enterprise Desktop – Can It Do It Securely?

September 10th, 2007 4 comments

Coming (securely?) soon to a managed enterprise desktop near you: GoogleApps.  As discussed previously in my GooglePOP post demonstrating how Google will become the ASP of choice, outsourcing and IT consultancy Capgemini announced it is going to offer Google’s Apps as a managed SaaS desktop option to its corporate enterprise customers, the Guardian says today:

Google has linked up with IT consultancy and outsourcing specialist
CapGemini to target corporate customers with its range of desktop
applications, in the search engine’s most direct move against the
dominance of Microsoft.

CapGemini, which already runs the
desktops of more than a million corporate workers, will provide its
customers with "Google Apps" such as email, calendar, spreadsheets and
word processing.

"Microsoft
is an important partner to us as is IBM," said the head of partnerships
at CapGemini’s outsourcing business, Richard Payling. "In our client
base we have a mix of Microsoft users and Lotus Notes users and we now
have our first Google Apps user. But CapGemini is all about freedom,
giving clients choice of the most appropriate technology that is going
to fit their business environment."

Google’s applications such as
its Google Docs word processing and spreadsheet service allow several
people to work on one document and see changes in real time.

"If
you look at the traditional desktop it is very focused on personal
productivity," said Robert Whiteside, Google enterprise manager, UK and
Ireland. "What Google Apps brings is team productivity."

…If you’re wondering how they’re going to make money from all this:

CapGemini will collect the £25 ($50) licence fee charged by Google for its applications, which launched in February.

It
will make further revenues from helping clients use the new
applications, providing helpdesk services and maintenance. It will also
provide help with corporate security, especially for applications such
as email, as well as storage and back-up services.

CapGemini
expects customers to mix and match products, providing some users with
expensive Microsoft tools and others with cheaper and lower-spec Google
Apps.

You can check out the differences between the free and for-pay versions here.

Besides being a very good idea from an SaaS "managed services" perspective, it shows that Google (and global outsourcers) see a target market waiting to unfold in the corporate enterprise space based upon the collaboration sale.

What’s really interesting from a risk management perspective, continuing to ride the theme of Google’s Global Domination, is that Google’s SaaS play will draw focus on the application of security as regulatory compliance issues continue to bite at the heels of productivity gains offered by the utility of centrally hosted collaboration-focused toolsets such as GoogleApps.

Interestingly, Nick Carr points out that GoogleApps’ "outsourced" application hosting capability hasn’t caught on with the large corporate enterprise set largely due to "enterprise readiness," security and compliance concerns, a suggestion that Steve Jones, a Capgemini outsourcing executive who oversees the firm’s work with software-as-a-service applications, maintains is not an issue:

"[Carr] asked Jones about the commonly heard claim that Google Apps, while
fine for little organizations, isn’t "enterprise-ready." He scoffed at
the notion, saying that the objection is just a smokescreen that some
CIOs are "hiding behind." Google Apps, he says, is "already being used
covertly" in big companies, behind the backs of IT staffers. The time
has come, he argues, to bring Apps into the mainstream of IT management
in order to ensure that important data is safeguarded and compliance
requirements are met. Jones foresees "a lot of big companies"
announcing the formal adoption of Apps.

Remember, these applications and their data are hosted on Google’s infrastructure.  Think about the audit, privacy, security and compliance implications of that; folks that utilize ASP services are perhaps used to this, but the question is: what can Google do to suggest its hosting model is secure enough?  After all, Hoff’s 9th law represents:

Secconven

Since Google’s app suite isn’t quite complete yet, Microsoft’s not entirely in danger of seeing its $12 billion Office empire crumble, but it’s got to start somewhere…

/Hoff

Generalizing About Security/Privacy as a Competitive Advantage is a Waste of Perfectly Good Electrons

September 4th, 2007 6 comments

Curphey gets right to the point in this blog post by decrying that security and privacy do not constitute a competitive advantage for the companies who invest in them, because consumers have shown time and time again that despite breaches of security, privacy and trust, they continue to do business with them.  I think.

He tends to blur the lines between corporate and consumer "advantage" without really defining either, but does manage to go so far as to hammer the point home with allegory that unites the arguments of security ROI, global warming and the futility of IT overall.  Time for coffee and some happy pills, Mark? 😉

Just for reference, let’s see how those goofy Oxfordians define "advantage":

advantage |ədˈvantij| noun a condition or circumstance that puts one in a favorable or superior position : companies with a computerized database are at an advantage | she had an advantage over her mother’s generation. • the opportunity to gain something; benefit or profit : you could learn something to your advantage | he saw some advantage in the proposal. • a favorable or desirable circumstance or feature; a benefit : the village’s proximity to the town is an advantage. • Tennis a player’s score in a game when they have won the first point after deuce (and will win the game if they win the next point). verb [ trans. ] put in a favorable or more favorable position.

Keep that in your back pocket for a minute.

OK, Mark, I’ll bite:

Many security vendors army of quota
carrying foot soldiers brandish their excel sheets that prove security
is important and why you should care. They usually go on to show
irrefutable numbers demonstrating security ROI models and TCO. I think
its all “bull shitake”!

…and those armies of security drones are fueled by things like compliance mandates put forth by legislation as a direct result of things like breaches, so it’s obviously important to someone.  Shitake or not, those "someones" are also buying.

You’ve already doomed this argument by polarizing it with the intractable death ray of ROI.  We’ve already gone ’round and ’round on the definition of "value" as it relates to ROI and security, so a good majority of folks have already signed off and aren’t reading past this point…yet I digress.

Wired has the scoop:

Privacy
is fast becoming the trendy concept in online marketing. An increasing
number of companies are flaunting the steps they’ve taken to protect
the privacy of their customers. But studies suggest consumers won’t pay
even 25 cents to protect their data.

Why should consumers pay anything to protect their data!? Security and privacy are table stakes expectations (see below) on the consumer front.  Companies invest millions in security and compliance initiatives driven by legislation brought on by representatives in local, state and federal government to help make it so.  Furthermore, if someone utilizes my credit card to commit fraud, I’m not responsible; it’s written off!  If you change the accountability model, you can bet consumers would be a little more concerned with protecting their data.  I wager they’d pay a hell of a lot more than $0.25 for it, too.

They aren’t, because despite being inconvenienced, they don’t care.  They don’t have to.  But before you assume I’m just agreeing with your point, read on.

After the TJX debacle I remember seeing predictions that people will vote with their feet. Of course they didn’t, sales actually went up 9%. The same argument was made for Ruby Tuesdays who lost some credit cards. It just doesn’t happen. Lake Chad and disasters on a global scale continue to plague us due to climate change yet still people refuse to stop buying SUV’s.

See previous paragraph above.   When bad things happen, consumers expect that someone will put the hammer down and things will get better.  New legislation.  More safeguards.  Extended protection. They often do. 

Furthermore, with your argument one could suggest that security/privacy have now become a competitive advantage for TJX, since given their uptake and revenues, the following definition seems to apply:

Competitive advantage (CA) is a position that a firm
occupies in its competitive landscape. Michael Porter posits that a
competitive advantage, sustainable or not, exists when a company makes economic rents,
that is, their earnings exceed their costs (including cost of capital).
That means that normal competitive pressures are not able to drive down
the firm’s earnings to the point where they cover all costs and just
provide minimum sufficient additional return to keep capital invested.
Most forms of competitive advantage cannot be sustained for any length
of time because the promise of economic rents drives competitors to
duplicate the competitive advantage held by any one firm.

It looks to me that, based upon your argument, TJX benefited not only from their renewed investment in security/privacy but from the breach itself!  I think the last statement resonates with Carr’s commentary (below), but you aren’t talking about "sustainable" competitive advantage.  Or are you?

Right, wrong or indifferent, this is how it works.  Corporate incrementalism is an acceptable go to market strategy to overall bolster one’s strategy over a competitor; it’s the entire long tail approach to marketing.  You can’t be surprised by this?

This is why we have hybrid SUV’s now…

Nicholas Carr discusses this in IT Doesn’t Matter. To start with, technologies can become competitive differentiators, like the railroads or the telephone. But once everyone has one, the playing field levels and it becomes table stakes. It’s a competitive disadvantage if you aren’t in the game (i.e. insecure), but the economic cost of developing a service or technology so compelling as to become an advantage ain’t on the radar (for the most part).

So, getting back to what I thought was your original premise, and escaping the low-earth orbit of the affliction of the human condition, global warming and ROI… 🙁

For the sake of argument, let’s assume that I agree with your lofty generalizations that security and privacy do not represent a competitive advantage.  Please turn off your firewall now.  Deactivate your anti-virus and anti-spam.  Turn off that IDS/IPS.  Remove those WebApp firewall-enabled load balancers…

Yes, IT (and security/privacy) are table stakes (as I established above) but NOT having them would be a competitive disadvantage. THAT is the point.  It’s a referential argument and a silly one at that.

…almost as silly as suggesting that you shouldn’t try to measure the effectiveness of security; it seems that people want to hang language on these topics and debate that instead of the core issue itself.

The threat models dictate how investments are made and how they are perceived to be advantageous or not.  They’re also cyclical and temporal, so over time, their value depreciates until the next wave requires more investment.  Basic economics.

Generalizing about security and privacy as not being competitive advantages is a waste of time.  I’d love to see an ad from a company that says they’re NOT investing in security and privacy and that their Corporate credo is "screw it, you don’t care, anyway…"

I’m going to get on my bike and ride down to the store to buy a cup of coffee with my credit card now…

/Hoff