Archive for the ‘Information Centricity’ Category

RSA Interview (c/o Tripwire) On the State Of Information Security In Virtualized/Cloud Environments.

March 7th, 2010 1 comment

David Sparks (c/o Tripwire) interviewed me on the state of Information Security in virtualized/cloud environments.  It’s another reminder about Information Centricity.

Direct Link here.

Embedded below:


Slides from My Cloud Security Alliance Keynote: The Cloud Magic 8 Ball (Future Of Cloud)

March 7th, 2010 No comments

Here are the slides from my Cloud Security Alliance (CSA) keynote from the Cloud Security Summit at the 2010 RSA Security Conference.

The punchline is as follows:

All this iteration and debate on the future of the “back-end” of Cloud Computing — the provider side of the equation — is ultimately less interesting than how the applications and content served up will be consumed.

Cloud Computing provides for the mass re-centralization of applications and data in mega-datacenters while simultaneously incredibly powerful mobile computing platforms provide for the mass re-distribution of (in many cases the same) applications and data.  We’re fixated on the security of the former but ignoring that of the latter — at our peril.

People worry about how Cloud Computing puts their applications and data in other people’s hands. The reality is that mobile computing — and the clouds that are here already and will form because of them — already put, quite literally, those applications and data in other people’s hands.

If we want to “secure” the things that matter most, we must focus BACK on information centricity and building survivable systems if we are to be successful in our approach.  I’ve written about the topics above many times, but this post from 2009 is quite apropos: The Quandary Of the Cloud: Centralized Compute But Distributed Data. You can find other posts on Information Centricity here.

Slideshare direct link here (embedded below.)


Comments on the PwC/TSB Debate: The cloud/thin computing will fundamentally change the nature of cyber security…

February 16th, 2010 2 comments

I saw a very interesting post on LinkedIn with the title PwC/TSB Debate: The cloud/thin computing will fundamentally change the nature of cyber security…

PricewaterhouseCoopers are working with the Technology Strategy Board (part of BIS) on a high profile research project which aims to identify future technology and cyber security trends. These statements are forward looking and are intended to purely start a discussion around emerging/possible future trends. This is a great chance to be involved in an agenda setting piece of research. The findings will be released in the Spring at Infosec. We invite you to offer your thoughts…

The cloud/thin computing will fundamentally change the nature of cyber security…

The nature of cyber security threats will fundamentally change as the trend towards thin computing grows. Security updates can be managed instantly by the solution provider so every user has the latest security solution, the data leakage threat is reduced as data is stored centrally, systems can be scanned more efficiently and if Botnets capture end-point computers, the processing power captured is minimal. Furthermore, access to critical data can be centrally managed and as more email is centralised, malware can be identified and removed more easily. The key challenge will become identity management and ensuring users can only access their relevant files. The threat moves from the end-point to the centre.

What are your thoughts?

My response is simple.

Cloud Computing, or “Thin Computing” as described above, doesn’t change the “nature” of (gag) “cyber security”; it simply changes its efficiency, investment focus, capital model and modality. As to the statement regarding threats moving “…from the end-point to the centre,” the surface area really becomes amorphous and, given the potential monoculture introduced by the virtualization layers underpinning these operations, perhaps expands.

Certainly the benefits described in the introduction above do mean changes to who, where and when risk mitigation might be applied, but those activities are, in most cases, still the same as in non-Cloud and “thick” computing.  That’s not a “fundamental change” but rather an adjustment to a platform shift, just like when we went from mainframe to client/server.  We are still dealing with the remnant security issues (identity management, AAA, PKI, encryption, etc.) from prior  computing inflection points that we’ve yet to fix.  Cloud is a great forcing function to help nibble away at them.

But, if you substitute “client/server” in relation to its evolution from the “mainframe era” for “cloud/thin computing” above, it all sounds quite familiar.

As I alluded to, there are some downsides to this re-centralization, but it is important to note that I do believe that if we look at what PaaS/SaaS offerings and VDI/Thin/Cloud computing offer, they make us focus on protecting our information and building more survivable systems.

However, there’s a notable bifurcation occurring. Whilst the example above paints a picture of mass re-centralization, incredibly powerful mobile platforms are evolving.  These platforms (such as the iPhone) employ a hybrid approach featuring both native/local on-device applications and storage of data combined with the potential of thin client capability and interaction with distributed Cloud computing services.*

These hyper-mobile and incredibly powerful platforms — and the requirements to secure them in this mixed-access environment — mean that the efficiency gains on one hand are compromised by the need to once again secure diametrically-opposed computing experiences.  It’s a “squeezing the balloon” problem.

The same exact thing is occurring in the Private versus Public Cloud Computing models.


* P.S. Bernard Golden also commented via Twitter regarding the emergence of Sensor nets which also have a very interesting set of implications on security as it relates to both the examples of Cloud and mobile computing elements above.


Incomplete Thought: Storage In the Cloud: Winds From the ATMOS(fear)

May 18th, 2009 1 comment

I never metadata I didn’t like…

I first heard about EMC’s ATMOS Cloud-optimized storage “product” months ago:

EMC Atmos is a multi-petabyte offering for information storage and distribution. If you are looking to build cloud storage, Atmos is the ideal offering, combining massive scalability with automated data placement to help you efficiently deliver content and information services anywhere in the world.

I had lunch with Dave Graham (@davegraham) from EMC a ways back and while he was tight-lipped, we discussed ATMOS in lofty, architectural terms.  I came away from our discussion with the notion that ATMOS was more of a platform and less of a product with a focus on managing not only stores of data, but also the context, metadata and policies surrounding it.  ATMOS tasted like a service provider play with a nod to very large enterprises who were looking to seriously trod down the path of consolidated and intelligent storage services.

I was really intrigued with the concept of ATMOS, especially when I learned that at least one of the people who works on the team developing it also contributed to the UC Berkeley project called OceanStore from 2005:

OceanStore is a global persistent data store designed to scale to billions of users. It provides a consistent, highly-available, and durable storage utility atop an infrastructure comprised of untrusted servers.

Any computer can join the infrastructure, contributing storage or providing local user access in exchange for economic compensation. Users need only subscribe to a single OceanStore service provider, although they may consume storage and bandwidth from many different providers. The providers automatically buy and sell capacity and coverage among themselves, transparently to the users. The utility model thus combines the resources from federated systems to provide a quality of service higher than that achievable by any single company.

OceanStore caches data promiscuously; any server may create a local replica of any data object. These local replicas provide faster access and robustness to network partitions. They also reduce network congestion by localizing access traffic.
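OceanStore's promiscuous caching (any server may create a local replica of any object) is easy to sketch as a content-addressed store in which a read through any node leaves a replica behind. A toy illustration in Python; every name here is invented, and this is not OceanStore's actual API:

```python
import hashlib

class Node:
    """A toy storage node that caches any object it serves."""
    def __init__(self, name):
        self.name = name
        self.store = {}  # object id -> bytes

class ToyOceanStore:
    """Content-addressed store: reads through a node leave a local replica behind."""
    def __init__(self, nodes):
        self.nodes = {n.name: n for n in nodes}

    def put(self, node_name, data):
        oid = hashlib.sha256(data).hexdigest()  # objects are named by their hash
        self.nodes[node_name].store[oid] = data
        return oid

    def get(self, node_name, oid):
        node = self.nodes[node_name]
        if oid not in node.store:                      # miss: pull from any peer...
            for peer in self.nodes.values():
                if oid in peer.store:
                    node.store[oid] = peer.store[oid]  # ...and replicate promiscuously
                    break
        data = node.store[oid]                         # KeyError if no replica exists anywhere
        assert hashlib.sha256(data).hexdigest() == oid # verify what untrusted peers hand back
        return data

nodes = [Node("a"), Node("b"), Node("c")]
grid = ToyOceanStore(nodes)
oid = grid.put("a", b"patient record 42")
grid.get("b", oid)                # reading through "b" creates a replica there
print(oid in nodes[1].store)      # True: "b" now serves the object locally
```

Because objects are named by the hash of their contents, a client can verify whatever an untrusted server hands back, which is part of how a design like OceanStore tolerates untrusted infrastructure.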

Pretty cool stuff, right?  This just goes to show that plenty of smart people have been working on “Cloud Computing” for quite some time.

Ah, the ‘Storage Cloud.’

Now, while we’ve heard of and seen storage-as-a-service in many forms, including the Cloud, today I saw a really interesting article titled “EMC, AT&T open up Atmos-based cloud storage service:”

EMC Corp.’s Atmos object-based storage system is the basis for two cloud computing services launched today at EMC World 2009 — EMC Atmos onLine and AT&T’s Synaptic Storage as a Service.
EMC’s service coincides with a new feature within the Atmos Web services API that lets organizations with Atmos systems already on-premise “federate” data – move it across data storage clouds. In this case, they’ll be able to move data from their on-premise Atmos to an external Atmos computing cloud.

Boston’s Beth Israel Deaconess Medical Center is evaluating Atmos for its next-generation storage infrastructure, and storage architect Michael Passe said he plans to test the new federation capability.

Organizations without an internal Atmos system can also send data to Atmos onLine by writing applications to its APIs. This is different than commercial graphical user interface services such as EMC’s Mozy cloud computing backup service. “There is an API requirement, but we’re already seeing people doing integration” of new Web offerings for end users such as cloud computing backup and iSCSI connectivity, according to Mike Feinberg, senior vice president of the EMC Cloud Infrastructure Group. Data-loss prevention products from RSA, the security division of EMC, can also be used with Atmos to proactively identify confidential data such as social security numbers and keep them from being sent outside the user’s firewall.

AT&T is adding Synaptic Storage as a Service to its hosted networking and security offerings, claiming to overcome the data security worries many conservative storage customers have about storing data at a third-party data center.
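The DLP integration mentioned above (identifying confidential data such as social security numbers before it can leave the user's firewall) reduces to content discovery plus an enforcement decision. A deliberately minimal sketch; real DLP engines use validation, context analysis and document fingerprinting rather than a lone regex:

```python
import re

# U.S. SSN shape AAA-GG-SSSS; real engines add validation, context and fingerprints.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def classify(blob):
    """Tag content 'confidential' if it contains SSN-like strings."""
    return "confidential" if SSN.search(blob) else "public"

def gate_outbound(blob):
    """Enforcement point: only non-confidential content may leave the firewall."""
    return classify(blob) != "confidential"

print(classify("invoice total: $112.40"))          # public
print(gate_outbound("SSN: 078-05-1120 on file"))   # False: blocked at the edge
```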

The federation of data across storage clouds using APIs? Information cross-pollination and collaboration? Heavy, man.

Take plays like Cisco’s UCS with VMware’s virtualization, stir in VN-Tag with DLP/ERM solutions, and sit it all on top of ATMOS…from an architecture perspective, you’ve got an amazing platform for service delivery that allows for some slick application of information-centric policy.  Sure, getting this all to stick will take time, but these are the issues we’re grappling with in our discussions related to portability of applications and information.
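What makes the federation capability information-centric is that metadata and policy travel with the object as it moves between clouds. A toy sketch of that idea; the classes and methods are invented for illustration and bear no relation to the actual Atmos API:

```python
class ObjectCloud:
    """A toy object store standing in for one storage cloud."""
    def __init__(self, name):
        self.name = name
        self.objects = {}  # key -> (data, metadata)

    def put(self, key, data, meta):
        self.objects[key] = (data, meta)

    def get(self, key):
        return self.objects[key]

def federate(src, dst, key):
    """Move an object between clouds; its metadata and policy ride along."""
    data, meta = src.get(key)
    if meta.get("export") != "allowed":   # the object carries its own policy
        raise PermissionError(f"{key}: policy forbids federation")
    dst.put(key, data, meta)
    del src.objects[key]

onprem = ObjectCloud("on-premise")
external = ObjectCloud("external")
onprem.put("scan-001", b"imaging study", {"export": "allowed", "class": "internal"})
federate(onprem, external, "scan-001")
print("scan-001" in external.objects)   # True: moved, metadata intact
```

The design choice worth noticing: the enforcement decision reads the object's own metadata, not the location it happens to live in, which is the essence of information centricity.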

Settling Back Down to Earth

This brings up a really important set of discussions that I keep harping on as the cold winds of reality start to blow.

From a security perspective, storage is the moose on the table that nobody talks about.  In virtualized environments we’re interconnecting all our hosts to islands of centralized SANs and NAS.  We’re converging our data and storage networks via CNAs and unified fabrics.

In multi-tenant Cloud environments all our data ends up being stored similarly, with the trust that segregation and security are appropriately applied.  Ever wonder how storage architectures never designed to do these sorts of things at scale can actually do so securely? Whose responsibility is it to manage the security of these critical centerpieces of our evolving “centers of data”?

So besides my advice that security folks need to run out and get their CCIE certs, perhaps you ought to sign up for a storage security class, too.  You can also start by reading this excellent book by Himanshu Dwivedi titled “Securing Storage.”

What are YOU doing about securing storage in your enterprise or Cloud engagements?  If your answer is LUN masking, here are four Excedrin; call me after the breach.
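To make the LUN-masking jab concrete: masking only hides block devices from the wrong initiators. A hedged defense-in-depth sketch for multi-tenant storage pairs a namespace check with per-tenant keys, so a segregation failure yields ciphertext rather than data. This is purely illustrative; the keystream below is a toy, not real cryptography:

```python
import hashlib, os

class TenantStore:
    """Toy multi-tenant store: namespace isolation plus per-tenant keys.
    Even if the namespace check is bypassed, stored bytes are ciphertext
    under the owning tenant's key (toy XOR keystream, NOT real crypto)."""
    def __init__(self):
        self.keys = {}    # tenant -> key
        self.blobs = {}   # (tenant, name) -> ciphertext

    def _keystream(self, key, n):
        out, counter = b"", 0
        while len(out) < n:
            out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
            counter += 1
        return out[:n]

    def _crypt(self, tenant, data):   # XOR is its own inverse
        ks = self._keystream(self.keys[tenant], len(data))
        return bytes(a ^ b for a, b in zip(data, ks))

    def register(self, tenant):
        self.keys[tenant] = os.urandom(32)

    def put(self, tenant, name, data):
        self.blobs[(tenant, name)] = self._crypt(tenant, data)

    def get(self, tenant, name):
        if (tenant, name) not in self.blobs:  # tenants address only their own namespace
            raise KeyError("no such object in this tenant's namespace")
        return self._crypt(tenant, self.blobs[(tenant, name)])

s = TenantStore()
s.register("acme"); s.register("globex")
s.put("acme", "db-backup", b"top secret")
print(s.get("acme", "db-backup"))   # b'top secret'
```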


The Quandary Of the Cloud: Centralized Compute But Distributed Data

January 7th, 2009 3 comments

Here's a theme I've been banging around for quite some time as it relates to virtualization, cloud computing and security.  I've never really sat down and written about it, however.

As we trend towards consolidating and (re)centralizing our computing platforms — both endpoints and servers — using virtualization and cloud computing as enablers to do so, we're also simultaneously dealing with the decentralization and distributed data sets that come with technologies such as Web2.0, mobility and exposure of APIs from cloud platforms.*

So here we are, all frothed up, as virtualization and cloud computing have, in a sense, led us back to the resource-based consolidation of the mainframe model with all its centralized splendor, and client virtualization/thin clients/compartmentalized remote access is doing the same thing for endpoints.

But the interesting thing is that with Moore's Law, the endpoints are also getting more and more powerful even though we're dumbing them down and trying to make their exposure more limited despite the fact that they can still efficiently process and store data locally.

These models, one could argue, are diametrically opposed when describing how to secure the platforms versus the information that resides on or is utilized by them.  As the cyclic waffling between centralized and distributed continues, the timing of how and where we adapt to securing them always lags behind.  Which do we focus on securing, and where?  The host, the centralized server, or the network?

The unfortunate answer is always "yes."

Remember this (simplified) model of how/where we secure things?

If you juxtapose the image above mentally with how I represent the centralized <–> distributed trends in IT below, it's no wonder we're always behind the curve.  The computing model technology changes much more quickly than the security technology and processes do, thus the disconnect:

I need to update the diagram above to split out the "computing" layer into client and server as well as extend the data layer to reference storage modalities also, but it gets the job done.

At any rate, it's probably obvious and common sense, but when explaining to people why I spend my time pointing out gaps with security in virtualization and cloud models, I found this useful.


* It's important to note that while I refer to/group cloud computing models as centralized, I understand they have a distributed element to them, also.  I would ask you to think about the multiple cloud overlays as centralized resources, regardless of how intrinsically "distributed" in processing/load balancing they may be.

P.S. I just saw an awesome post titled "The Rise of the Stupid Endpoint" on the vinternals blog that shares many of the same points, although much more eloquently.  Check it out here.  Awesome!

Jaquith: Data-Centric Security Requires Devolution, Not a Revolution

January 6th, 2009 1 comment

If I may be as bold to call Andy Jaquith a friend, I'll do so as I welcomed both his first research report and blog as an analyst for Forrester.

Andy's first topic — Data-Centric Security Requires Devolution, Not a Revolution — is a doozy, and an important one given the recent re-focus on information protection.  The notion of data-centric security has caused quite the stir over the last year with the maturation, consolidation and (some might say) commoditization of certain marketspaces (DLP) into larger mainstream security product suites.

I will admit that I did not spend the $350 to read Andy's research.  As much as I like to support the ever-turning wheels of the analyst sausage machine, I'm going to upgrade to Apple's newly-announced iLife/iWork '09 bundle instead.  Sorry, Andy.  I'll buy you that beer instead.

However, Andy wrote a great blog entry summarizing the research here:

All of the enterprise's data must be secured… that is obvious. Enterprises have been trying to do this for years with e-mail filtering, hard disk encryption, data leak prevention (DLP) and other technologies. Every few years, another hot technology emerges. But what's less obvious is that the accepted way of tackling the problem — making IT Security the primary responsible party — isn't necessarily the most effective way to do it.

In the report, I take the position that devolution of responsibilities from IT Security to business units is the most important success factor. I'd urge you to read the report for yourself. But in short: as long as data security is just "an IT thing," it's virtually certain that the most accountable parties (BUs) will be able to wash their hands of any responsibility. Depending on the organization, the centralized approach tends to lead to two scenarios:

(1) IT throws up its hands, saying "it's too hard!" — guaranteeing that data security problems breed like rabbits
(2) IT dials up the data controls so tight that end-users and business units rebel against or subvert the controls — leading to even worse problems

What's worse? No controls, or too many? The truth lies somewhere in between, and results vary widely depending on who's accountable: the boss you already know and have a relationship with, or an amorphous cost center whose workers don't know what you do all day. Your boss knows what work products are appropriate to protect, and what aren't. IT Security's role should be to supply the tools to enforce the businesses' wishes, not to operate them itself.

Want to secure enterprise data? Stop trying so hard, and devolve!

My only comments are that, much like the X-Files, the truth is "out there."  It is most certainly somewhere in between, as users and the business will always take the convenient path of least resistance and security will impose the iron fist.

Securing information must be a cooperative effort that involves the broader adoption of pervasive discovery and classification capabilities across the entire information lifecycle.  The technology has to become as transparent as possible so that workflow isn't interrupted.  That's no easy task.

Rich Mogull and I have been writing and presenting about this for quite some time, and we're making evolutionary progress, but not revolutionary progress.

To that point, I might have chosen a different by-line.  Instead of "devolution, not a revolution," I would suggest that perhaps "governed delegation, not regulation" might be appropriate, too.

Can't wait for that iLife/iWork bundle!


Security Will Not End Up In the Network…

June 3rd, 2008 9 comments

It’s not the destination, it’s the journey, stupid.

You can’t go a day without reading from the peanut gallery that it is "…inevitable that network security will eventually be subsumed into the network fabric."  I’m not picking on Rothman specifically, but he’s been banging this drum loudly of late.

For such a far-reaching, profound and prophetic statement, claims like these are strangely myopic and inaccurate… and then they’re exactly right.


Firstly, it’s sort of silly and obvious to trumpet that "network security" will end up in the "network."  Duh.  What’s really meant is that "information security" will end up in the network, but that’s sort of goofy, too. You’ll even hear that "host-based security" will end up in the network…so let’s just say that what’s being angled at here is that security will end up in the network.

These statements are often framed within a temporal bracket that simply ignores the bigger picture and reads like a eulogy.  The reality is that historically we have come to accept that security and technology are cyclic, yet we continue to witness these terminal predictions defining an end state for security that has never arrived and never will.

Let me make plain my point: there is no final resting place for where and how security will "end up."

I’m visual, so let’s reference a very basic representation of my point.  This graph represents the cyclic transition over time of where and how we invest in security.

We ultimately transition between host-based security, information-centric security and network security over time.

We do this little shuffle based upon the effectiveness and maturity of technology; economics; cultural, societal and regulatory issues; and the effects of disruptive innovation.  In reality, this isn’t a smooth sine wave at all; it’s actually more a classic dampened oscillation, à la the punctuated equilibrium theory I’ve spoken about, but it’s easier to visualize this way.
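For the mathematically inclined, the dampened oscillation can be written down directly as focus(t) = e^(-λt)·cos(ωt), swinging between network-centric (+1) and host/information-centric (-1) with shrinking amplitude as the approaches converge. A quick sketch with arbitrary parameter values:

```python
import math

def focus(t, lam=0.15, omega=1.0):
    """Damped oscillation of security investment focus:
    +1 ~ network-centric, -1 ~ host/information-centric."""
    return math.exp(-lam * t) * math.cos(omega * t)

# The pendulum keeps swinging, but with less amplitude each period:
for t in range(0, 13, 2):
    side = "network" if focus(t) >= 0 else "host/info"
    print(f"t={t:2d}  {side:9s} {'#' * int(20 * abs(focus(t)))}")
```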


Our investment strategy, and where security is seen as being "positioned," reverses direction over time and continues ad infinitum.  This has proven itself time and time again, yet we continue to be wowed by the prophetic utterances of people who on the one hand talk about these never-ending cycles and yet on the other pretend they don’t exist by claiming the "death" of one approach over another.


So let’s take a look at how the cyclic pendulum effect of our security focus swings from the host to the information to the network and back again by analyzing the graph above.

  1. If we take a look at the arbitrary "starting" point indicated by the "You Are Here" dot on the sine wave above, I suggest that over the last 2-3 years or so we’ve actually headed away from the network as the source of all things security.

     There are lots of reasons for this: economic, ideological, technological, regulatory and cultural.  If you want to learn more, check out my posts on how disruptive innovation fuels strategic transience.

     In short, the network has not been able to (and never will) deliver the efficacy, capabilities or cost-effectiveness desired to secure us from evil, so instead we look at actually securing the information itself.  The security industry messaging of late certainly bears testimony to that fact.  Check out this year’s RSA conference…

  2. As we focus on information centricity, we see the resurgence of ERM, governance and compliance.  As policies proliferate, we realize that this is really hard and that we don’t have effective and ubiquitous data classification, policy affinity and heterogeneous enforcement capabilities.  We shake our heads at the ineffectiveness of the technology we have and hear the cries of pundits everywhere that we need to focus on the things that really matter…

     In order to classify data effectively at the point of creation, we recognize that we can’t do this automagically and we don’t have standardized schemas or metadata across structured and unstructured data, so we look at each other, scratch our heads and conclude that the applications and operating systems need modification to force-fit policy, classification and enforcement.

     Rot roh.

  3. Now that we have the concepts of policy and classification, we need the teeth to enforce them, so we overlay emerging technology solutions on the host, in applications and via the OS.  These are unfortunately non-transparent and affect users’ ability to get their work done.  Security gets labeled a speed bump, and we grapple with how to make it less impactful on the business, since security has now slowed things down and we still have breaches because users have found creative ways of bypassing technology constraints in the name of agility and efficiency…

  4. At this point, the network catches up in its ability to process closer to "line speed," and some of the data classification functionality from the host commoditizes into the "network" — which by then is as much in the form of appliances as it is routers and switches — and always will be.  So as we round this upturn, focusing again on being "information centric," we seek to use our network investment to offset the impact on our users.

  5. Ultimately, we get the latest round of "next generation" network solutions which promise to deliver us from our woes, but as we "pass go and collect $200" we realize we’re really back where we started at step #1.

‘Round and ’round we go.
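The classification-and-enforcement loop in steps 2 and 3 can be made concrete: classification is captured at the point of creation, and every egress channel asks the same policy question. A minimal sketch with an invented metadata schema (standardizing such a schema across structured and unstructured data is exactly the hard part the cycle keeps hitting):

```python
from dataclasses import dataclass, field

# A hypothetical minimal metadata schema, invented for illustration.
@dataclass
class Document:
    body: str
    classification: str                          # required at the point of creation
    handling: set = field(default_factory=set)   # e.g. {"no-usb"}

def enforce(doc, channel):
    """Heterogeneous enforcement: every egress channel asks the same question."""
    if doc.classification == "restricted" and channel == "external-email":
        return False
    return ("no-" + channel) not in doc.handling

memo = Document("Q3 numbers", "restricted", {"no-usb"})
print(enforce(memo, "internal-wiki"))    # True
print(enforce(memo, "external-email"))   # False
print(enforce(memo, "usb"))              # False: handling tag blocks it
```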

So, there’s no end state.  It’s a continuum.  The budget and operational elements of who "owns" security and where it’s implemented simply follow the same curve.  Throw in disruptive innovation such as virtualization, and the entire concept of the "host" and the "network" morphs and we simply realize that it’s a shift in period on the same graph.

So all this pontification that it is "…inevitable that network security will eventually be subsumed into the network fabric" is only as accurate as what phase of the graph you reckon you’re on.  Depending upon how many periods you’ve experienced, it’s easy to see how some who have not seen these changes come and go could be fooled into not being able to see the forest for the trees.

Here’s the reality we actually already know and should not come to you as a surprise if you’ve been reading my blog: we will always need a blended investment in technology, people and process in order to manage our risk effectively.  From a technology perspective, some of this will take the form of controls embedded in the information itself, some will come from the OS and applications and some will come from the network.

Anyone who tells you differently has something to sell you or simply needs a towel for the back of his or her ears…


GooglePOPs – Cloud Computing and Clean Pipes: Told Ya So…

May 8th, 2008 9 comments

In July of last year, I prognosticated that Google, with its various acquisitions, was entering the security space with the intent to not just include security as a browser feature for search and the odd GoogleApp, but as a revenue-generating service-delivery differentiator using SaaS via applications and clean-pipes delivery transit in the cloud for Enterprises.

My position even got picked up elsewhere.  By now it probably sounds like old news, but…

Specifically, in my post titled "Tell Me Again How Google Isn’t Entering the Security Market? GooglePOPs will Bring Clean Pipes…" I argued (and was ultimately argued with) that Google’s $625M purchase of Postini was just the beginning:

This morning’s news that Google is acquiring Postini for $625 Million dollars doesn’t surprise me at all and I believe it proves the point.

In fact, I reckon that in the long term we’ll see the evolution of the Google Toolbar morph into a much more intelligent and rich client-side security application proxy service whereby Google actually utilizes client-side security of the Toolbar paired with the GreenBorder browsing environment and tunnel/proxy all outgoing requests to GooglePOPs.

What’s a GooglePOP?

These GooglePOPs (Google Points of Presence) will house large search and caching repositories that will — in conjunction with services such as those from Postini — provide a "clean pipes" service to the consumer.  Don’t forget the utility services that recent acquisitions such as GrandCentral and FeedBurner provide…it’s too bad that eBay snatched up Skype…

Google will, in fact, become a monster ASP.  Note that I said ASP and not ISP.  ISP is a commoditized function.  Serving applications and content as close to the user as possible is fantastic.  So pair all the client side goodness with security functions AND add GoogleApps and you’ve got what amounts to a thin client version of the Internet.
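The clean-pipes idea reduces to a policy check in the request path: the client tunnels every request to the POP, which filters before content ever reaches the browser. A toy sketch under my own assumptions about how a GooglePOP would behave; all the names here are invented:

```python
from urllib.parse import urlparse

# Toy reputation feed; a real clean-pipes POP would consume live URL/malware intel.
BLOCKLIST = {"malware.example.net", "phish.example.org"}

def clean_pipe_fetch(url, fetch=lambda u: f"<contents of {u}>"):
    """Every client request transits the POP's filter before being fetched."""
    host = urlparse(url).hostname
    if host in BLOCKLIST:
        return "403 blocked by clean-pipes policy"
    return fetch(url)

print(clean_pipe_fetch("http://example.com/page"))
print(clean_pipe_fetch("http://malware.example.net/payload"))  # blocked
```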

Here’s where we are almost a year later.  From the Ars Technica post titled "Google turns Postini into Google Web Security for Enterprise:"

The company’s latest endeavor, Google Web Security for Enterprise, is now available, and promises to provide a consistent level of system security whether an end-user is surfing from the office or working at home halfway across town.

The new service is branded under Google’s "Powered by Postini" product line and, according to the company, "provides real-time malware protection and URL filtering with policy enforcement and reporting. An additional feature extends the same protections to users working remotely on laptops in hotels, cafes, and even guest networks." The service is presumably activated by signing in directly to a Google service, as Google explicitly states that workers do not need access to a corporate network.

The race for cloud and secure utility computing continues, with encapsulated browsing and application-delivery environments, regardless of transport/ISP, starting to take shape.

Just think about the traditional model of the enterprise, and how we access our resources today, turned inside out as a natural progression of re-perimeterization.  It starts to play out on the other end of the information-centricity spectrum.

What with the many new companies entering this space and the likes of Google, Microsoft and IBM banging the drum, it’s going to be one interesting ride.


Asset Focused, Not Auditor Focused

May 3rd, 2008 5 comments

Gunnar Peterson wrote a great piece the other day on the latest productization craze in InfoSec – GRC (Governance, Risk Management and Compliance) wherein he asks "GRC – To Be or To Do?"

I don’t really recall when or from whence GRC sprung up as an allegedly legitimate offering, but to me it seems like a fashionably over-sized rug under which the existing failures of companies to effectively execute on the individual G, R, and C initiatives are conveniently being swept.

I suppose the logic goes something like this: "If you can’t effectively govern, manage risk or measure compliance, it must be because what you’re doing is fragmented and siloed.  What you need is a product/framework/methodology that takes potentially digestible deliverables and perspectives and ‘evolves’ them into a behemoth suite instead."

I do not dispute that throughout most enterprises, the definitions, approaches and processes in managing each function are largely siloed and fragmented and I see the attractiveness of integrating and standardizing them, but  I am unconvinced that re-badging a control and policy framework collection constitutes a radical new approach. 

GRC appears to be a way to sell more products and services under a fancy new name to address problems rather than evaluate and potentially change the way in which we solve them.  Look at who’s pushing this: large software companies and consultants as well as analysts looking to pin their research to something more meaningful.

From a first blush, GRC isn’t really about governance or managing risk.  It’s audit-driven compliance all tarted up.

It’s a more fashionable way of getting all your various framework and control definitions in one place and appealing to an auditor’s desire for centralized "stuff" in order to document the effectiveness of controls and track findings against some benchmark.  I’m not really sure where the business-driven focus comes into play.

It’s also sold as a more efficient way of reducing the scope and costs of manual process controls.  Fine.  Can’t argue with that.  I might even say it’s helpful, but at what cost?

Gunnar said:

GRC (or Governance, Risk Management, and Compliance for the uninitiated) is all the rage, but I have to say I think that again Infosec has the wrong focus.

Instead of Risk Management helping to deliver transparent Governance and as a natural by-product demonstrate compliance as a function of the former, the model’s up-ended with compliance driving the inputs and being mislabeled.

As I think about it, I’m not sure GRC would be something a typical InfoSec function would purchase or use unless forced, which is part of the problem.  I see internal audit driving the adoption, which, given today’s pressures (especially in public companies), would first start in establishing gaps against regulatory compliance.

If the InfoSec function is considering an approach that focuses on protecting the things that matter most and managing risk to an acceptable level, one built upon a business- and asset-driven view rather than compliance, then rather than making that left turn, Gunnar suggested:

Personally, I am happy sticking to classic infosec knitting – delivering confidentiality, integrity, and availability through authentication, authorization, and auditing. But if you are looking for a next generation conceptual horse to bet on, I don’t think GRC is it, I would look at information survivability. Hoff’s information survivability primer is a great starting point for learning about survivability.

Why survivability is more valuable over the long haul than GRC is that survivability is focused on assets: not on giving an auditor what they need, but on giving the business what it needs.

The seminal paper on survivability by Lipson, et al. puts it this way: "survivability solutions are best understood as risk management strategies that first depend on an intimate knowledge of the mission being protected."  That's the difference: asset focus, not auditor focus.

For obvious reasons, I am compelled to say "me, too."

I would really like to talk to someone in a large enterprise who is using one of these GRC suites; I don't really care which department you're from.  I just want to examine my assertions and compare them against your efforts and understanding.


Welcome To the Information Survivability/Sustainability/Centricity Circus…

May 3rd, 2008 No comments

Forget "Security Theater."  The "Security Circus" is in town…

I wrote this some time ago and decided that I didn't like the tone, as it just came out as another whiny complaint against the "man."  I'm in a funny mood; I hit a threshold yesterday with all the so-called experts coming out of the woodwork lately, so I figured I'd post it because it made me chortle.

They Shoot Horses, Don’t They?

To answer a question that seems to be increasing in frequency thanks to the surge in my blog's readership lately (and to its cycling through the gossip mill): no, I did not change the name of my blog from "Rational Security" to "Rational Survivability" because of IBM's Val Rahmani and her charming advertisement keynote at RSA.  😉

One might suggest that Val's mythological reference to Sisyphus wasn't as entertaining as Noonan's "security as the width of two horses' asses" keynote from a couple of years ago, but her punchline served to illustrate the sad state of Information Security, even if it also made me want to shoot myself.

Val shockingly admitted that IBM was "…exiting the security business," that "…information security was dead," and that we should all celebrate by chanting "…long live [information] sustainability!"

This caused those of us here at Rational Survivability HQ to bow our heads in a moment of silence for the passing of yet another topical meme and catchphrase that has now been "legitimized" by industry and thus must be put out of its misery and never used again.

You say "tomato," I say "tomato…"

Yeah, you might argue that "sustainability" is more business-focused and less military-sounding than "survivability," but it's really about the same concepts.

I’m not going to dissect her speech because that’s been done.  I have said most of what I have to say on this concept in my posts on Information Survivability and honestly, I think they are as relevant as ever. 

You can read the first one here and follow on with some more here.

For those of you who weren't around when it happened, I changed the name of my blog over six months ago to illustrate the security industry's equivalent of an introduction at an AA meeting, a ritual so perfectly illustrated by Val's fireside chat.

You know the scene.  It's where an alcoholic stands up and admits his or her weakness for a vice amongst an audience of current and "former" addicts.  Hoping for a collective understanding of one's failure, and declaring the observed days of committed sobriety to date, the speaker aims to convince himself and those around him that the counter's been reset and he's really changed.  Despite the possibility of relapse at any moment, the declaration of intent, the will to live sober, is all one needs.

That and a damned good sponsor.

And now for something completely different!

That was a bloody depressing analogy, wasn’t it?  Since this was supposed to be a happy occasion, I found myself challenged to divine an even worse analogy for your viewing pleasure.   Here goes.

That's right.  I'm going to violate the Prime Directive and go right with the patented Analogy of Barnum & Bailey's Circus:

What Information Security has become is the equivalent of a carnie’s dancing poodle in the circus tent of industry. 

Secretly we want to see the tigers eat the dude with the whip, but we cheer when he makes them do the Macarena anyway.

We all know that one day, that little Romanian kid on the trapeze is going to miss the triple-lindy and crash to the floor sans net, but we're not willing to do anything about it, and it's the tension that makes the act work, despite the exploitative child labor practices and horrible costumes.

We pump $180 in tokens into the ring toss to win an $11 stuffed animal, because it’s the effort that counts, not the price.

We’re all buying tickets, suffering through the stupid antics of the clowns piling out of the tiny little car in the spotlight hoping that the elephant act at the end of the show is going to be worth the price of admission. 

At the end of the night, we leave exhausted, disappointed, broke and smelling like sweaty caramel apples and stale pretzels…wondering when they’ll be back next year so we can take the kids.

See, I told you it was awful.  But you know what’s much worse than my shitty little clown analogy? 


Come one, come all.  Let Me Guess Your Weight!

So in today's time of crappy economics, when money is hard to come by, it's as consumers that we start to pay attention to these practices, to this circus.  It's now that we start to demand that these alleged predatory vendors actually solve our business problems and attend to our issues rather than simply recycle the packaging.

So when life hands vendors a lemon, they make marketingade, charge us $4.50 a pop and we still drink it.

Along those lines, many mainstream players have now begun to work their marketing sideshows by pitching the supposedly novel themes of sustainability, survivability, or information centricity.  It's a surreptitiously repentant admission that, while we ooh and ahh at the product equivalents of the bearded lady, the werewolf children and the world's tallest man, all the peanuts and popcorn they've been selling us still end in the realization that it's all just an act.

At the end of the night, they count their money, tear down the tents and move on.  When the bearded lady gets a better gig, she bails and they bring in the dude with the longest mustache.  Hey, hair is hair; it's just packaged differently, and we go to ogle the newest attraction.

There's no real punchline here folks, just the jaded, bitter and annoyed comments of someone who's becoming more and more like the grumpy folks I always made fun of at bingo night, and a stark realization of just how much I hate the circus.