Archive for the ‘Information Survivability’ Category

Endpoint Security vs. DLP? That’s Part Of the Problem…

March 31st, 2008 6 comments

Larry Walsh wrote something (Defining the Difference Between Endpoint Security and Data Loss Prevention) that sparked an interesting debate based upon a vendor presentation given to him on "endpoint security" by SanDisk.

SanDisk is bringing to market a set of high-capacity USB flash drives that feature built-in filesystem encryption as well as strong authentication and access control.  If the device gets lost with the data on it, it’s "safe and secure" because it’s encrypted.  They are positioning this as an "endpoint security" solution.

I’m not going to debate the merits/downsides of that approach because I haven’t seen their pitch, but suffice it to say, I think it’s missing a "couple" of pieces to solve anything other than a very specific set of business problems.

Larry’s dilemma stems from the fact that he maintains that this capability and functionality is really about data loss protection and doesn’t have much to do with "endpoint security" at all:

We debated that in my office for a few minutes. From my perspective, this solution seems more like a data loss prevention solution than endpoint security. Admittedly, there are many flavors of endpoint security. When I think of endpoint security, I think of network access control (NAC), configuration management, vulnerability management and security policy enforcement. While this solution is designed for the endpoint client, it doesn’t do any of the above tasks. Rather, it forces users to use one type of portable media and transparently applies security protection to the data. To me, that’s DLP.

In today’s market taxonomy, I would agree with Larry.  However, what Larry is struggling with is not really the current state of DLP versus "endpoint security," but rather the future state of converged information-centric governance.  He’s describing the problem that will drive the solution as well as the inevitable market consolidation to follow.

This is actually the whole reason Mogull and I are talking about the evolution of DLP as it exists today to a converged solution we call CMMP — Content Management, Monitoring and Protection. {Yes, I just added another M for Management in there…}

What CMMP represents is the evolved and converged end-state technology integration of solutions that today provide a point solution but "tomorrow" will be combined/converged into a larger suite of services.

Off the cuff, I’d expect that we will see at a minimum the following technologies being integrated to deliver CMMP as a pervasive function across the information lifecycle and across platforms in flight/motion and at rest:

  • Data leakage/loss protection (DLP)
  • Identity and access management (IAM)
  • Network Admission/Access Control (NAC)
  • Digital rights/Enterprise rights management (DRM/ERM)
  • Seamless encryption based upon "communities of interest"
  • Information classification and profiling
  • Metadata
  • Deep Packet Inspection (DPI)
  • Vulnerability Management
  • Configuration Management
  • Database Activity Monitoring (DAM)
  • Application and Database Monitoring and Protection (ADMP)
  • etc…

That’s not to say they’ll all end up as a single software install or network appliance, but rather a consolidated family of solutions from a few top-tier vendors who have coverage across the application, host and network space. 

If you were to look at any enterprise today struggling with this problem, they likely have or are planning to have most of the point solutions above anyway.  The difficulty is that they’re all from different vendors.  In the future, we’ll see larger suites from fewer vendors providing a more cohesive solution.

This really gives us the "cross domain information protection" that Rich talks about.

We may never achieve the end-state described above in its entirety, but it’s safe to say that the more we focus on the "endpoint" rather than the "information on the endpoint," the bigger the problem we will have.

/Hoff

The Walls Are Collapsing Around Information Centricity

March 10th, 2008 2 comments

Since Mogull and I collaborate quite a bit on projects and share many thoughts and beliefs, I wanted to make a couple of comments on his last post on Information Centricity and remind the audience at home of a couple of really important points.

Rich’s post was short and sweet regarding the need for Information-Centric solutions with some profound yet subtle guideposts:

For information-centric security to become a reality, in the long term it needs to follow the following principles:

  1. Information (data) must be self describing and defending.
  2. Policies and controls must account for business context.
  3. Information must be protected as it moves from structured to unstructured, in and out of applications, and changing business context.
  4. Policies must work consistently through the different defensive layers and technologies we implement.

I’m not convinced this is a complete list, but I’m trying to keep to my new philosophy of shorter and simpler. A key point that might not be obvious is that while we have self-defending data solutions, like DRM and label security, for success they must grow to account for business context. That’s when static data becomes usable information.

Mike Rothman gave an interesting review of Rich’s post:


The Mogull just laid out your work for the next 10 years. You just probably don’t know it yet. Yes, it’s all about ensuring that the fundamental elements of your data are protected, however and wherever they are used. Rich has broken it up into 4 thoughts. The first one made my head explode: "Information (data) must be self-describing and defending."

Now I have to clean up the mess. Sure things like DRM are a bad start, and have tarnished how we think about information-centric security, but you do have to start somewhere. The reality is this is a really long term vision of a problem where I’m not sure how you get from Point A to Point B. We all talk about the lack of innovation in security. And how the market just isn’t exciting anymore. What Rich lays out here is exciting. It’s also a really really really big problem. If you want a view of what the next big security company does, it’s those 4 things. And believe me, if I knew how to do it, I’d be doing it – not talking about the need to do it.

The comments I want to make are three-fold:

  1. Rich is re-stating and Mike’s head is exploding around the exact concepts that Information Survivability represents and the Jericho Forum trumpets in their Ten Commandments.  In fact, you can read all about that in prior posts I made on the subjects of the Jericho Forum, re-perimeterization, information survivability and information centricity.  I like this post on a process I call ADAPT (Applied Data and Application Policy Tagging) a lot; a rough sketch of the idea follows this list.

    For reference, here are the Jericho Forum’s Ten Commandments. Please see #9:

    [Images: the Jericho Forum Commandments]

  2. As Mike alluded, DRM/ERM has received a bad rap because of how it’s been implemented — which has really left a sour taste in consumers’ mouths.  As a business tool, it is the precursor of information-centric policy and will become the linchpin in how we ultimately gain a foothold on solving the information resiliency/assurance/survivability problem.
  3. As to the innovation and dialog that Mike suggests is lacking in this space, I’d suggest he’s suffering from a bit of Shiitake-ism (à la mushroom-itis).  The next generation of DLP solutions that are becoming CMP (Content Monitoring and Protection — a term I coined) is evolving to deal with just this very thing.  It’s happening.  Now.

    Further to that, I have been briefed by some very, very interesting companies that are in stealth mode who are looking to shake this space up as we speak.
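Since people always ask what ADAPT-style tagging might actually look like, here’s a deliberately tiny Python sketch of the idea of self-describing data.  The field names and policy rules are made up for illustration and are not the actual ADAPT specification:

```python
from dataclasses import dataclass, field

@dataclass
class TaggedDocument:
    """A self-describing information object: the data carries its own
    classification and handling policy instead of relying on where it lives."""
    content: bytes
    classification: str                     # e.g. "public", "internal", "restricted"
    owner: str
    allowed_contexts: set = field(default_factory=set)  # business contexts that may use it

    def permits(self, action: str, context: str) -> bool:
        # Enforcement points (DLP, NAC, DRM, the app itself) just ask the data.
        if self.classification == "public":
            return True
        if self.classification == "restricted" and action == "export":
            return False
        return context in self.allowed_contexts

# A restricted document is readable in an approved business context but
# refuses export no matter which host, island or USB stick it lands on.
doc = TaggedDocument(b"...", "restricted", "cfo@example.com", {"finance-reporting"})
print(doc.permits("read", "finance-reporting"))   # True
print(doc.permits("export", "usb-drive"))         # False
```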

So, prepare for Information Survivability, increased Information Resilience and assurance.  Coming to a solution near you…

/Hoff

A Worm By Any Other Name Is…An Information Epidemic?

February 18th, 2008 2 comments

Martin McKeay took exception to some interesting Microsoft research suggesting that the methodologies and tactics used by malicious software such as worms/viruses could also be used as an effective distributed defense against them:

Microsoft researchers are hoping to use "information epidemics" to distribute software patches more efficiently.

Milan Vojnović and colleagues from Microsoft Research in Cambridge, UK, want to make useful pieces of information such as software updates behave more like computer worms: spreading between computers instead of being downloaded from central servers.

The research may also help defend against malicious types of worm, the researchers say.

Software worms spread by self-replicating. After infecting one computer they probe others to find new hosts. Most existing worms randomly probe computers when looking for new hosts to infect, but that is inefficient, says Vojnović, because they waste time exploring groups or "subnets" of computers that contain few uninfected hosts.

Despite the really cool moniker (information epidemic), this isn’t a particularly novel distribution approach and, in fact, we’ve seen malware do this.  However, it is interesting to see that an OS vendor (Microsoft) is continuing to actively engage in research to explore this approach despite the opinions of others who simply claim it’s a bad idea.  I’m not convinced either way, however.

I, for one, am all for resilient computing environments that are aware of their vulnerabilities and can actively defend against them.  I will be interested to see how this new paper builds off of work previously produced on the subject and its corresponding criticism.

Vojnović’s team have designed smarter strategies that can exploit the way some subnets provide richer pickings than others.

The ideal approach uses prior knowledge of the way uninfected computers are spread across different subnets. A worm with that information can focus its attention on the most fruitful subnets – infecting a given proportion of a network using the smallest possible number of probes.

But although prior knowledge could be available in some cases – a company distributing a patch after a previous worm attack, for example – usually such perfect information will not be available. So the researchers have also developed strategies that mean the worms can learn from experience.

In the best of these, a worm starts by randomly contacting potential new hosts. After finding one, it uses a more targeted approach, contacting only other computers in the same subnet. If the worm finds plenty of uninfected hosts there, it keeps spreading in that subnet, but if not, it changes tack.

That being the case, here’s some of Martin’s heartburn:

But the problem is, if both beneficial and malign software show the same basic behavior patterns, how do you differentiate between the two? And what’s to stop the worm from being mutated once it’s started, since bad guys will be able to capture the worms and possibly subverting their programs.

The article isn’t clear on how the worms will secure their network, but I don’t believe this is the best way to solve the problem that’s being expressed. The problem being solved here appears to be one of network traffic spikes caused by the download of patches. We already have a widely used protocols that solve this problem, bittorrents and P2P programs. So why create a potentially hazardous situation using worms when a better solution already exists. Yes, torrents can be subverted too, but these are problems that we’re a lot closer to solving than what’s being suggested.

I don’t want something that’s viral infecting my computer, whether it’s for my benefit or not. The behavior isn’t something to be encouraged. Maybe there’s a whole lot more to the paper, which hasn’t been released yet, but I’m not comfortable with the basic idea being suggested. Worm wars are not the way to secure the network.

I think that some of the points that Martin raises are valid, but I also think that he’s reacting mostly out of fear to the word ‘worm.’  What if we called it "distributed autonomic shielding?" 😉

Some features/functions of our defensive portfolio are going to need to become more self-organizing, autonomic and intelligent and that goes for the distribution of intelligence and disposition, also.  If we’re not going to advocate being offensive, then we should at least be offensively defensive.  This is one way of potentially doing this.
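To make the quoted strategy a little more concrete, here’s a minimal Python simulation of the "learn from experience" probing described above — random probing until a hit, then staying in that subnet until it stops paying off.  The subnet sizes and yields are made-up assumptions for illustration, not numbers from the paper:

```python
import random

def build_network(subnet_specs):
    """Hypothetical network: each subnet is (total_hosts, set_of_unpatched_host_ids)."""
    return [(total, set(random.sample(range(total), unpatched)))
            for total, unpatched in subnet_specs]

def adaptive_spread(network, budget, patience=3):
    """Sketch of the strategy described above: probe random subnets until a
    probe hits an unpatched host, then keep working that subnet until several
    consecutive probes miss, at which point the worm 'changes tack'."""
    patched = probes = misses = 0
    current = None
    while probes < budget:
        sid = current if current is not None else random.randrange(len(network))
        total, unpatched = network[sid]
        target = random.randrange(total)          # probe a random host in that subnet
        probes += 1
        if target in unpatched:                   # hit: deliver the update
            unpatched.remove(target)
            patched += 1
            current, misses = sid, 0
        else:                                     # miss: lose patience, go random again
            misses += 1
            if current is not None and misses >= patience:
                current, misses = None, 0
    return patched, probes

random.seed(1)
# One dense subnet and a few sparse ones -- purely illustrative numbers.
net = build_network([(256, 200), (256, 10), (256, 5), (256, 5)])
print(adaptive_spread(net, budget=500))
```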

Interestingly, this dovetails into some discussions we’ve had recently with Andy Jaquith and Amrit Williams; the notions of herds and biotic propagation and response are really quite fascinating.  See my post titled "Thinning the Herd & Chlorinating the Gene Pool."

I’ve left out most of the juicy bits of the story so you should go read it and churn on some of the very interesting points raised as part of the discussion.

/Hoff

Update: Schneier thinks this is a lousy idea. That doesn’t move me one direction or the other, but I think this is cementing my opinion that had the author not used the word ‘worm’ in his analogy, the idea might not be dismissed so quickly…

Also, Wismer via a comment on Martin’s blog pointed to an interesting read from Vesselin Bontchev titled "Are "Good" Computer Viruses Still a Bad Idea?"

Update #2: See the comments section about how I think the use case argued by Schneier et al. is, um, slightly missing the point.  Strangely enough, check out the Network World article that just popped up, which says "'This was not the primary scenario targeted for this research,' according to a statement."

Duh.

Security Today == Shooting Arrows Through Sunroofs of Cars?

February 7th, 2008 14 comments

In this Dark Reading post, Peter Tippett, described as the inventor of what is now Norton Anti-virus, suggests that the bulk of InfoSec practices are "…outmoded or outdated concepts that don’t apply to today’s computing environments."

As I read through this piece, I found myself flip-flopping between violent agreement and incredulous eye-rolling from one paragraph to the next, caused somewhat by the overuse of hyperbole in some of his analogies.  This was disappointing, but overall, I enjoyed the piece.

Let’s take a look at Peter’s comments:

For example, today’s security industry focuses way too much time on vulnerability research, testing, and patching, Tippett suggested. "Only 3 percent of the vulnerabilities that are discovered are ever exploited," he said. "Yet there is huge amount of attention given to vulnerability disclosure, patch management, and so forth."

I’d agree that the "industry" certainly focuses their efforts on these activities, but that’s exactly the mission of the "industry" that he helped create.  We, as consumers of security kit, have perpetuated a supply-driven demand security economy.

There’s a huge amount of attention paid to vulnerabilities, patching and prevention that doesn’t prevent because, at this point, that’s all we’ve got.  Until we start focusing on the root cause rather than the symptoms, this is a cycle we won’t break.  See my post titled "Sacred Cows, Meatloaf, and Solving the Wrong Problems" for an example of what I mean.


Tippett compared vulnerability research with automobile safety research. "If I sat up in a window of a building, I might find that I could shoot an arrow through the sunroof of a Ford and kill the driver," he said. "It isn’t very likely, but it’s possible.

"If I disclose that vulnerability, shouldn’t the automaker put in some sort of arrow deflection device to patch the problem? And then other researchers may find similar vulnerabilities in other makes and models," Tippett continued. "And because it’s potentially fatal to the driver, I rate it as ‘critical.’ There’s a lot of attention and effort there, but it isn’t really helping auto safety very much."

What this really means, and Peter doesn’t ever really state, is that mitigating vulnerabilities in the absence of threat, impact or probability is a bad thing.  This is why I make such a fuss about managing risk instead of mitigating vulnerabilities.  If there were millions of malicious archers firing arrows through the sunroofs of unsuspecting Ford Escort drivers, then the ‘critical’ rating would be relevant given the probability and impact of all those slings and arrows of thine enemies…

Tippett also suggested that many security pros waste time trying to buy or invent defenses that are 100 percent secure. "If a product can be cracked, it’s sometimes thrown out and considered useless," he observed. "But automobile seatbelts only prevent fatalities about 50 percent of the time. Are they worthless? Security products don’t have to be perfect to be helpful in your defense."

I like his analogy and the point he’s trying to underscore.  What I find in many cases is that the binary evaluation of security efficacy — in products and programs — still exists.  In the absence of measuring the actual effect something has on one’s risk posture, people revert to a non-gradient scale of 0% or 100%, insecure or secure.  Is being "secure" really important, or is managing to an acceptable level of risk — with or without losses — the truly relevant measure of success?

This concept also applies to security processes, Tippett said. "There’s a notion out there that if I do certain processes flawlessly, such as vulnerability patching or updating my antivirus software, that my organization will be more secure. But studies have shown that there isn’t necessarily a direct correlation between doing these processes well and the frequency or infrequency of security incidents.

"You can’t always improve the security of something by doing it better," Tippett said. "If we made seatbelts out of titanium instead of nylon, they’d be a lot stronger. But there’s no evidence to suggest that they’d really help improve passenger safety."

I would like to see these studies.  I think that companies that have rigorous, mature and transparent processes which they execute "flawlessly" may not be more "secure" (a measurement I’d love to see quantified), but they are in a much better position to respond and recover when (not if) an event occurs.  Given the established corollary that we can’t be 100% "secure" in the first place, we know we’re going to have incidents.

Being able to recover from them or continue to operate while under duress is more realistic and important in my view.  That’s the point of information survivability.


Security teams need to rethink the way they spend their time, focusing on efforts that could potentially pay higher security dividends, Tippett suggested. "For example, only 8 percent of companies have enabled their routers to do ‘default deny’ on inbound traffic," he said. "Even fewer do it on outbound traffic. That’s an example of a simple effort that could pay high dividends if more companies took the time to do it."

I agree.  Focusing on efforts that eliminate entire classes of problems based upon reducing risk is a more appropriate use of time, money and resources.
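For those who want the flavor of what "default deny" means in practice, here’s a toy Python illustration — the permit list is hypothetical, and a real deployment would obviously express this in the router’s or firewall’s own ACL syntax:

```python
# Toy illustration of default-deny filtering: anything not explicitly
# permitted is dropped. The permit list below is hypothetical.
PERMIT_OUTBOUND = {
    ("tcp", 443),   # HTTPS
    ("tcp", 25),    # SMTP (from the mail relay only, in a real policy)
    ("udp", 53),    # DNS (to the corporate resolvers only, in a real policy)
}

def allow_outbound(protocol: str, dst_port: int) -> bool:
    """Default deny: return True only for explicitly permitted traffic."""
    return (protocol, dst_port) in PERMIT_OUTBOUND

print(allow_outbound("tcp", 443))   # True
print(allow_outbound("tcp", 6667))  # False -- e.g. IRC-style bot traffic dies by default
```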

Security awareness programs also offer a high rate of return, Tippett said. "Employee training sometimes gets a bad rap because it doesn’t alter the behavior of every employee who takes it," he said. "But if I can reduce the number of security incidents by 30 percent through a $10,000 security awareness program, doesn’t that make more sense than spending $1 million on an antivirus upgrade that only reduces incidents by 2 percent?"
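Running the numbers in that example makes the point rather starkly.  The incident baseline and per-incident cost below are hypothetical assumptions, just to make the comparison concrete:

```python
# Hypothetical baseline purely for illustration: 200 incidents/year at $5,000 each.
baseline_incidents = 200
cost_per_incident = 5_000

def annual_benefit(spend, reduction):
    """Incidents avoided times cost per incident, minus what the control costs."""
    avoided = baseline_incidents * reduction
    return avoided * cost_per_incident - spend

print(annual_benefit(10_000, 0.30))      # awareness program:  +290000
print(annual_benefit(1_000_000, 0.02))   # antivirus upgrade:  -980000
```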

Nod.  That was the point of the portfolio evaluation process I gave in my disruptive innovation presentation:

Slide 24: Provide Transparency in portfolio effectiveness

I didn’t invent this graph, but it’s one of my favorite ways of visualizing my investment portfolio by measuring in three dimensions: business impact, security impact and monetized investment.  All of these definitions are subjective within your organization (as is how you might measure them).

The Y-axis represents the "security impact" that the solution provides.  The X-axis represents the "business impact" that the solution provides, while the size of the dot represents the capex/opex investment made in the solution.

Each of the dots represents a specific solution in the portfolio.

If you have a solution that is a large dot toward the bottom-left of the graph, one has to question the reason for continued investment, since it provides little in the way of perceived security and business value at high cost.  On the flip side, if a solution is represented by a small dot in the upper-right, the bang for the buck is high, as is the impact it has on the organization.

The goal would be to get as many of your investments in your portfolio from the bottom-left to the top-right with the smallest dots possible.

This transparency, and the process by which the portfolio is assessed, is delivered as an output of the strategic innovation framework, which is really part art and part science.
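If you want to produce this view of your own portfolio, a few lines of matplotlib will do it.  The solutions and scores below are made-up placeholders; substitute your own (subjective) measurements:

```python
import matplotlib.pyplot as plt

# Made-up portfolio entries: (name, business impact 0-10, security impact 0-10, annual spend $K)
portfolio = [
    ("Legacy IDS",        2, 3, 400),
    ("DLP/CMP",           7, 8, 250),
    ("Awareness program", 6, 6,  10),
    ("Full-disk crypto",  5, 7,  90),
]

names, biz, sec, spend = zip(*portfolio)
plt.scatter(biz, sec, s=[k * 3 for k in spend], alpha=0.5)   # dot size ~ investment
for name, x, y in zip(names, biz, sec):
    plt.annotate(name, (x, y))
plt.xlabel("Business impact")
plt.ylabel("Security impact")
plt.title("Portfolio effectiveness (dot size = capex/opex)")
plt.show()
```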

All in all, a good read from someone who helped create the monster and is now calling it ugly…

/Hoff

Thin Clients: Does This Laptop Make My Ass(ets) Look Fat?

January 10th, 2008 11 comments

Juicy Fat Assets, Ripe For the Picking…

So here’s an interesting spin on de/re-perimeterization…if people think we cannot achieve, and cannot afford to wait for, secure operating systems, secure protocols and self-defending information-centric environments, but need to "secure" their environments today, I have a simple question supported by a simple equation for illustration:

For the majority of mobile and internal users in a typical corporation who use the basic set of applications:

  1. Assume a company that:
    …fits within the 90% of those who still have data centers, isn’t completely outsourced/off-shored for IT and supports a remote workforce that uses Microsoft OS and the usual suspect applications and doesn’t plan on utilizing distributed grid computing and widespread third-party SaaS
  2. Take the following:
    Data Breaches.  Lost Laptops.  Non-sanitized corporate hard drives on eBay.  Malware.  Non-compliant asset configurations.  Patching woes.  Hardware failures.  Device Failure.  Remote Backup issues.  Endpoint Security Software Sprawl.  Skyrocketing security/compliance costs.  Lost Customer Confidence.  Fines.  Lost Revenue.  Reduced budget.
  3. Combine With:
    Cheap Bandwidth.  Lots of types of bandwidth/access modalities.  Centralized Applications and Data. Any Web-enabled Computing Platform.  SSL VPN.  Virtualization.  Centralized Encryption at Rest.  IAM.  DLP/CMP.  Lots of choices to provide thin-client/streaming desktop capability.  Offline-capable Web Apps.
  4. Shake Well, Re-allocate Funding, Streamline Operations and "Security"…
  5. You Get:
    Less Risk.  Less Cost.  Better Control Over Data.  More "Secure" Operations.  Better Resilience.  Assurance of Information.  Simplified Operations.  Easier Backup.  One Version of the Truth (data).

I really just don’t get why we continue to deploy and are forced to support remote platforms we can’t protect, allow our data to inhabit islands we can’t control and at the same time admit the inevitability of disaster while continuing to spend our money on solutions that can’t possibly solve the problems.

If we’re going to be information centric, we should take the first rational and reasonable steps toward doing so.  Until operating systems are more secure and the data can self-describe and cause the compute and network stacks to "self-defend," why do we continue to waste time focusing on the endpoint?

If we can isolate and reduce the number of avenues of access to data and leverage dumb presentation platforms to do it, why aren’t we?

…I mean besides the fact that an entire industry has been leeching off this mess for decades…


I’ll Gladly Pay You Tuesday For A Secure Solution Today…

The technology exists TODAY to centralize the bulk of our most important assets and allow our workforce to accomplish their goals and the business to function just as well (perhaps better) without the need for data to actually "leave" the data centers in whose security we have already invested so much money.

Many people are doing that with their servers already with the adoption of virtualization.  Now they need to do it with their clients.

The only reason we’re now going absolutely stupid and spending money on securing endpoints in their current state is because we’re CAUSING (not just allowing) data to leave our enclaves.  In fact with all this blabla2.0 hype, we’ve convinced ourselves we must.

Hogwash.  I’ve posted on the consumerization of IT where companies are allowing their employees to use their own compute platforms.  How do you think many of them do this?

Relax, Dude…Keep Your Firewalls…

In the case of centralized computing and streamed desktops to dumb/thin clients, the "perimeter" still includes our data centers and security castles/moats, but also encapsulates a streamed, virtualized, encrypted, and authenticated thin-client session bubble.  Instead of worrying about the endpoint, it’s nothing more than a flickering display with a keyboard/mouse.

Let your kid use Limewire.  Let Uncle Bob surf pr0n.  Let wifey download spyware.  If my data and applications don’t live on the machine and all the clicks/mouseys are just screen updates, what do I care?

Yup, you can still use a screen scraper or a camera phone to use data inappropriately, but this is where balancing risk comes into play.  Let’s keep the discussion within the 80% of reasonably factored arguments.  We’ll never eliminate 100% of the risk, and we don’t have to in order to be successful.

Sure, there are exceptions and corner cases where data *does* need to leave our embrace, but we can eliminate an entire class of problem if we take advantage of what we have today and stop this endpoint madness.

This goes for internal corporate users who are chained to their desks and not just mobile users.

What’s preventing you from doing this today?

/Hoff

Thinning the Herd & Chlorinating the Malware Gene Pool…

December 28th, 2007 3 comments

[Image: a school of fish swarming in response to a predator, via Science News]
Alan Shimel pointed us to an interesting article written by Matt Hines in his post here regarding the "herd intelligence" approach toward security.  He followed it up here. 

All in all, I think both the original article that Andy Jaquith was quoted in and Alan’s interpretations shed an interesting light on a problem-solving perspective.

I’ve got a couple of comments on Matt and Alan’s scribbles.

I like the notion of swarms/herds.  The picture to the right from Science News describes the notion of "rapid response," wherein "mathematical modeling is explaining how a school of fish can quickly change shape in reaction to a predator."  If you’ve ever seen this in the wild or even in film, it’s an incredible thing to see in action.

It should then come as no surprise that I think that trying to solve the "security problem" is more efficiently performed (assuming one preserves the current construct of detection and prevention mechanisms) by distributing both functions and coordinating activity as part of an intelligent "groupthink" even when executed locally.  This is exactly what I was getting at in my "useful predictions" post for 2008:

Grid and distributed utility computing models will start to creep into security
A really interesting by-product of the "cloud compute" model is that as data, storage, networking, processing, etc. get distributed, so shall security.  In the grid model, one doesn’t care where the actions take place so long as service levels are met and the experiential and business requirements are delivered.  Security should be thought of in exactly the same way.

The notion that you can point to a physical box and say it performs function ‘X’ is so last Tuesday. Virtualization already tells us this.  So, imagine if your security processing isn’t performed by a monolithic appliance but instead is contributed to in a self-organizing fashion wherein the entire ecosystem (network, hosts, platforms, etc.) all contribute in the identification of threats and vulnerabilities as well as function to contain, quarantine and remediate policy exceptions.

Sort of sounds like that "self-defending network" schpiel, but not focused on the network and with common telemetry and distributed processing of the problem.

Check out Red Lambda’s cGrid technology for an interesting view of this model.

This basically means that we should distribute the sampling, detection and prevention functions across the entire networked ecosystem, not just to dedicated security appliances; each of the end nodes should communicate using a standard signaling and telemetry protocol so that common threat, vulnerability and effective disposition can be communicated up and downstream to one another and one or more management facilities.
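As a thumbnail of what that common signaling and telemetry might look like on the wire, here’s a hypothetical message structure in Python — the field names and format are mine for illustration, not any existing standard:

```python
import json, time, uuid

def threat_observation(node_id, indicator, confidence, disposition):
    """Hypothetical telemetry record a node would share up/downstream so peers
    and management facilities can act on a commonly understood observation."""
    return {
        "msg_id": str(uuid.uuid4()),
        "node": node_id,                 # which host/platform/network element saw it
        "observed_at": time.time(),
        "indicator": indicator,          # e.g. a hash, URL, or behavioral signature
        "confidence": confidence,        # 0.0 - 1.0
        "disposition": disposition,      # e.g. "contain", "quarantine", "monitor"
    }

msg = threat_observation("host-042", {"sha256": "..."}, 0.85, "quarantine")
print(json.dumps(msg, indent=2))
```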

This kind of cross-node sharing is what Andy was referring to when he said:

As part of the effort, security vendors may also need to begin sharing more of that information with their rivals to create a larger network effect for thwarting malware on a global basis, according to the expert.

It may be hard to convince rival vendors to work together because of the perception that it could lessen differentiation between their respective products and services, but if the process clearly aids on the process of quelling the rising tide of new malware strains, the software makers may have little choice other than to partner, he said.

Secondly, Andy suggested that basically every end-node would effectively become its own honeypot:

"By
turning every endpoint into a malware collector, the herd network
effectively turns into a giant honeypot that can see more than existing
monitoring networks," said Jaquith. "Scale enables the herd to counter
malware authors’ strategy of spraying huge volumes of unique malware
samples with, in essence, an Internet-sized sensor network."

I couldn’t agree more!  This is the sort of thing that I was getting at back in August when I was chatting with Lance Spitzner regarding using VM’s for honeypots on distributed end nodes:

I clarified that what I meant was actually integrating a HoneyPot running in a VM on a production host as part of a standardized deployment model for virtualized environments.  I suggested that this would integrate into the data collection and analysis models the same way as a "regular" physical HoneyPot machine, but could utilize some of the capabilities built into the VMM/HV’s vSwitch to actually virtualize a single HoneyPot across an entire collection of VM’s on a single physical host.

Thirdly, the notion of information sharing across customers has been implemented cross-sectionally in industry verticals with the advent of the ISACs, such as the Financial Services Information Sharing and Analysis Center, which seeks to inform and ultimately leverage distributed information gathering and sharing to protect its subscribing members.  Generally available services like Symantec’s DeepSight have also tried to accomplish similar goals.

Unfortunately, these offerings generally lack the capacity to garner ubiquitous data gathering and real-time enforcement capabilities.

As Matt pointed out in his article, gaining actionable intelligence on the monstrous amount of telemetric data from participating end nodes means that there is a need to really prune for false positives.  This is the trade-off between simply collecting data and actually applying intelligence at the end-node and effecting disposition. 

This requires technology that we’re starting to see emerge with a small enough footprint when paired with the compute power we have in endpoints today. 

Finally, as the "network" (which means the infrastructure as well as the "extrastructure" delivered by services in the cloud) gains more intelligence and information-centric granularity, it will pick up some of the slack — at least from the perspective of sloughing off the low-hanging fruit by using similar concepts.

I am hopeful that as we gain more information-centric footholds, we won’t have to worry about responding to every threat but rather only to those that might impact the most important assets we seek to protect.

Ultimately the end-node is really irrelevant from a protection perspective as it should really be little more than a presentation facility; the information is what matters.  As we continue to make progress toward more resilient operating systems leveraging encryption and mutual authentication within communities of interest/trust, we’ll start to become more resilient and information assured.

The sharing of telemetry to allow these detective and preventative/protective capabilities to self-organize and perform intelligent offensive/evasive actions will evolve naturally as part of this process.

Mooooooo.

/Hoff

The Seesaw CISO…Changing Places But Similar Faces…

December 8th, 2007 1 comment

…from geek to business speak…

Dennis Fisher has a nice writeup over at the SearchSecurity Security Bytes Blog about the changing role and reporting structure of the CISO.

Specifically, Dennis notes that he was surprised by the number of CISOs who recently told him that they no longer report to the CIO and aren’t a part of IT at all.  Moreover, these same CISOs noted that the skillset and focus are also changing from a technical to a business orientation:

In the last few months I’ve been hearing more and more from CEOs, CIOs and CSOs about the changing role of the CSO (or CISO, depending on your org chart) in the enterprise. In the past, the CSO has nearly always been a technically minded person who has risen through the IT ranks and then made the jump to the executive ranks. That lineage sometimes got in the way when it came time to deal with other upper managers who typically had little or no technical knowledge and weren’t interested in the minutiae of authentication schemes, NAC and unified threat management. They simply wanted things to work and to avoid seeing the company’s name in the papers for a security breach.

But that seems to be changing rather rapidly. Last month I was on a panel in Chicago with Howard Schmidt, Lloyd Hession, the CSO of BT Radianz, and Bill Santille, CIO of Uline, and the conversation quickly turned to the ways in which the increased focus on risk management in enterprises has forced CSOs to adapt and expand their skill sets. A knowledge of IDS, firewalls and PKI is not nearly enough these days, and in some cases is not even required to be a CSO. One member of the audience said that the CSO position in his company is rotated regularly among senior managers, most of whom have no technical background and are supported by a senior IT staff member who serves as CISO. The CSO slot is seen as a necessary stop on the management circuit, in other words. Several other CSOs in the audience said that they no longer report to the CIO and are not even part of the IT organization. Instead, they report to the CFO, the chief legal counsel, or in one case, the ethics officer.

I’ve talked about the fact that "security" should be a business function and not a technical one, and quite frankly what Dennis is hearing has been a trend on the uptick for the last 3-4 years as "information security" becomes less relevant and managing risk becomes the focus.  To wit:

The number of organizations making this kind of change surprised me at the time. But, in thinking more about it, it makes a lot of sense, given that the daily technical security tasks are handled by people well below the CSO’s office. And many of the CSOs I know say they spend most of their time these days dealing with policy issues such as regulatory compliance. Patrick Conte, the CEO of software maker Agiliance, which put on the panel, told me that these comments fit with what he was hearing from his customers, as well. Some of this shift is clearly attributable to the changing priorities inside these enterprises. But some of it also is a result of the maturation of the security industry as a whole, which has translated into less of a focus on technology and more attention being paid to policies, procedures and other non-technical matters.

How this plays out in the coming months and years will be quite interesting. My guess is that as security continues to be absorbed into the larger IT and operations functions, the CSO’s job will continue to morph into more of a business role.

I still maintain that "compliance" is nothing more than a gap-filler.  As I said here, we have compliance as an industry [and measurement] today because we manage technology threats and vulnerabilities and don’t manage risk.  Compliance is actually nothing more than a way of forcing transparency and plugging a gap between the two.  For most, it’s the best they’ve got.

Once organizationally we’ve got our act together, compliance will become the floor, not the ceiling and we’ll really start to see the "…maturation of the security industry as a whole."

/Hoff

And Now Some Useful 2008 Information Survivability Predictions…

December 7th, 2007 1 comment

So, after the obligatory dispatch of gloom and doom as described in my 2008 (in)Security Predictions, I’m actually going to highlight some of the more useful things in the realm of Information Security that I think are emerging as we round the corner toward next year.

They’re not really so much predictions as rather some things to watch.

Unlike folks who can only seem to talk about desperation, futility and manifest destiny or (worse yet) "anti-pundit pundits" who try to suggest that predictions and forecasting are useless (usually because they suck at it), I gladly offer a practical roundup of impending development, innovation and some incremental evolution for your enjoyment.

You know, good news.

As Mogull mentioned, I don’t require a Cray XMP48, chicken bones & voodoo or a prehensile tail to make my picks.  Rather, I grab a nice cold glass of Vitamin G (Guinness), sit down and think for a minute or two, dwelling on my super l33t powers of common sense and pragmatism with just a pinch of futurist wit.

Many of these items have been underway for some time, but 2008 will be a banner year for these topics as well as the previously-described "opportunities for improvement…"

That said, let’s roll with some of the goodness we can look forward to in the coming year.  This is not an exhaustive list by any means, but some examples I thought were important and interesting:

  1. More robust virtualization security toolsets with more native hypervisor/vmm accessibility
    Though it didn’t start with the notion of security baked in, virtualization for all of its rush-to-production bravado will actually yield some interesting security solutions that help tackle some very serious challenges.  As the hypervisors become thinner, we’re going to see the management and security toolsets gain increased access to the guts of the sausage machine in order to effect security appropriately, and this will be the year we see the virtual switch open up to third parties and more robust APIs for security visibility and disposition appear.
     
  2. The focus on information centric security survivability graduates from v1.0 to v1.1
    Trying to secure the network and the endpoint is like herding cats, and folks are tired of dumping precious effort on deploying kitty litter around the Enterprise to soak up the stinky spots.  Rather, we’re going to see folks really start to pay attention to information classification, extensible and portable policy definition, cradle-to-grave lifecycle management, and invest in technology to help get them there.

    Interestingly, the current maturity of features/functions such as NAC and DLP has actually helped us get closer to managing our information and information-related risks.  The next generation of these offerings, in combination with many of the other elements I describe herein and their consolidation into the larger landscape of management suites, will actually start to deliver on the promise of focusing on what matters — the information.
     

  3. Robust Role-based policy, Identity and access management coupled with entitlement, geo-location and federation…oh and infrastructure, too!
    We’re getting closer to being able to affect policy based not only upon source/destination IP address, switch and router topology and the odd entry in Active Directory on a per-application basis, but rather holistically, based upon robust lifecycle-focused role-based policy engines that allow us to tie in all of the major enterprise components that sit along the information supply-chain.

    Who, what, where, when, how and ultimately why will be the decision points considered with the next generation of solutions in this space.  Combine the advancements here with item #2 above, and someone might actually start smiling.

    If you need any evidence of the convergence/collision of the application-oriented with the network-oriented approach and a healthy overlay of user entitlement provisioning, just look at the about-face Cisco just made regarding TrustSec.  Of course, we all know that it’s not a *real* security concern/market until Cisco announces they’ve created the solution for it 😉
     

  4. Next Generation Networks gain visibility as they redefine the compute model of today
    Just as there exists a Moore’s curve for computing, there exists an overlapping version for networking; it just moves slower given the footprint.  We’re seeing the slope of this curve starting to trend up this coming year, and it’s much more than bigger pipes, although that doesn’t hurt either…

    These next generation networks will really start to emerge visibly in the next year as the existing networking models start to stretch the capabilities and capacities of existing architecture and new paradigms drive requirements that dictate a much more modular, scalable, resilient, high-performance, secure and open transport upon which to build distributed service layers.

    How networks and service layers are designed, composed, provisioned, deployed and managed — and how that intersects with virtualization and grid/utility computing — will start to really sink home the message that "in the cloud" computing has arrived.  Expect service providers and very large enterprises to adopt these new computing climates first, with a trickle-down to smaller business via SaaS and hosted service operators to follow.

    BT’s 21CN (21st Century Network) is a fantastic example of what we can expect from NGN as the demand for higher speed, more secure, more resilient and more extensible interconnectivity really takes off.
     

  5. Grid and distributed utility computing models will start to creep into security
    A really interesting by-product of the "cloud compute" model is that as data, storage, networking, processing, etc. get distributed, so shall security.  In the grid model, one doesn’t care where the actions take place so long as service levels are met and the experiential and business requirements are delivered.  Security should be thought of in exactly the same way.

    The notion that you can point to a physical box and say it performs function ‘X’ is so last Tuesday. Virtualization already tells us this.  So, imagine if your security processing isn’t performed by a monolithic appliance but instead is contributed to in a self-organizing fashion wherein the entire ecosystem (network, hosts, platforms, etc.) all contribute in the identification of threats and vulnerabilities as well as function to contain, quarantine and remediate policy exceptions.

    Sort of sounds like that "self-defending network" schpiel, but not focused on the network and with common telemetry and distributed processing of the problem.

    Check out Red Lambda’s cGrid technology for an interesting view of this model.
     

  6. Precision versus accuracy will start to legitimize prevention as the technology starts to allow us the confidence to turn the corner beyond detection

    In a sad commentary on the last few years of the security technology grind, we’ve seen the prognostication that intrusion detection is dead and the deadpan urging of the security vendor cesspool convincing us that we must deploy intrusion prevention in its stead.

    Since there really aren’t many pure-play intrusion detection systems left anyway, the reality is that most folks who have purchased IPSs seldom put them in in-line mode, and when they do, they seldom turn on the "prevention" policies and instead just have them detect attacks, blink a bit and get on with it.

    Why?  Mostly because while the threats have evolved, the technology implemented to mitigate them hasn’t — we’re either stuck with giant port/protocol colanders or signature-driven IPSs that are nothing more than IDSs with the ability to send RST packets.

    So the "new" generation of technology has arrived and may offer some hope of bridging that gap.  This is due to not only really good COTS hardware but also really good network processors and better software written (or re-written) to take advantage of both.  Performance, efficacy and efficiency have begun to give us greater visibility as we get away from making decisions based on ports/protocols (feel free to debate proxies vs. ACLs vs. stateful inspection…) and move to identifying application usage, getting us close to being able to make "real time" decisions on content in context by examining the payload and data.  See #2 above.

    The precision versus accuracy discussion is focused around being able to really start trusting in the ability of prevention technology to detect, defend and deter against "bad things" with a fidelity and resolution that has very low false positive rates (a quick worked example of why that matters follows this list).

    We’re getting closer with the arrival of technology such as Palo Alto Network’s solutions — you can call them whatever you like, but enforcing both detection and prevention using easy-to-define policies based on application (and telling the difference between any number of apps all using port 80/443) is a step in the right direction.
     

  7. The consumerization of IT will cause security and IT as we know it to radically change
    I know it’s heretical, but 2008 is going to really push the limits of the existing IT and security architectures to their breaking points, which is going to mean that instead of saying "no," we’re going to have to focus on how to say "yes, but with this incremental risk" and find solutions for an ever more mobile and consumerist enterprise.

    We’ve talked about this before, and most security folks curl up into a fetal position when you start mentioning the adoption by the enterprise of social networking, powerful smartphones, collaboration tools, etc.  The fact is that the favorable economics, agility, flexibility and efficiencies gained with the adoption of the consumerization of IT outweigh the downsides in the long run.  Let’s not forget the new generation of workers entering the workforce.

    So, since information is going to be leaking from our Enterprises like a sieve on all manners of devices and by all manner of methods, it’s going to force our hands and cause us to focus on being information centric and stop worrying about the "perimeter problem," stop focusing on the network and the host, and start dealing with managing the truly important assets while allowing our employees to do their jobs in the most effective, collaborative and efficient methods possible.

    This disruption will be a good thing, I promise.  If you don’t believe me, ask BP — one of the largest enterprises on the planet.  Since 2006 they’ve put some amazing initiatives into play, like this little gem:

    Oil giant BP is pioneering a "digital consumer" initiative that will give some employees an allowance to buy their own IT equipment and take care of their own support needs.

    The project, which is still at the pilot stage, gives select BP staff an annual allowance — believed to be around $1,000 — to buy their own computing equipment and use their own expertise and the manufacturer’s warranty and support instead of using BP’s IT support team.

    Access to the scheme is tightly controlled and those employees taking part must demonstrate a certain level of IT proficiency through a computer driving licence-style certification, as well as signing a diligent use agreement.

    …combined with this:

    Rather than rely on a strong network perimeter to secure its systems, BP has decided that these laptops have to be capable of coping with the worst that malicious hackers can throw at it, without relying on a network firewall.

    Ken Douglas, technology director of BP, told the UK Technology Innovation & Growth Forum in London on Monday that 18,000 of BP’s 85,000 laptops now connect straight to the internet even when they’re in the office.

  8. Desktop Operating Systems become even more resilient
    The first steps taken by Microsoft and Apple in Vista and OS X (Leopard), as examples, have begun to plug some of the security holes that have plagued them due to the architectural "feature" of an open execution runtime model.  Honestly, nothing short of a do-over will ultimately mitigate this problem, so instead of suggesting that incremental improvement is worthless, we should recognize that our dark overlords are trying to make things better.

    Elements in Vista such as ASLR, NX, and UAC combined with integrated firewalling, anti-spyware/anti-phishing, disk encryption, integrated rights management, protected mode IE, etc. are all good steps in a "more right" direction than previous offerings.  They’re in response to lessons learned.

    On the Mac, we also see ASLR, sandboxing, input management, better firewalling, better disk encryption, which are also notable improvements.  Yes, we’ve got a long way to go, but this means that OS vendors are paying more attention which will lead to more stable and secure platforms upon which developers can write more secure code.

    It will be interesting to see how the intersection of these "more secure" OS’s factor with virtualization security discussed in #1 above.

    Vista SP1 is due to ship in 2008 and will include APIs through which third-party security products can work with kernel patch protection on Vista x64, more secure BitLocker drive encryption and a better Elliptic Curve Cryptography PRNG (pseudo-random number generator).  Follow-on releases to Leopard will likely feature security enhancements to those delivered this year.
     

  9. Compliance stops being a dirty word & Risk Management moves beyond a buzzword
    Today we typically see the role of information security described as blocking and tackling; focused on managing threats and vulnerabilities balanced against the need to be "compliant" to some arbitrary set of internal and external policies.  In many people’s assessment then, compliance equals security.  This is an inaccurate and unfortunate misunderstanding.

    In 2008, we’ll see many of the functions of security — administrative, policy and operational — become much more visible and transparent to the business and we’ll see a renewed effort placed on compliance within the scope of managing risk because the former is actually a by-product of a well-executed risk management strategy.

    We have compliance as an industry today because we manage technology threats and vulnerabilities and don’t manage risk.  Compliance is actually nothing more than a way of forcing transparency and plugging a gap between the two.  For most, it’s the best they’ve got.

    What’s traditionally preventing the transition from threat/vulnerability management to risk management is the principal focus on technology with a lack of a good risk assessment framework and thus a lack of understanding of business impact.

    The availability of mature risk assessment frameworks (OCTAVE, FAIR, etc.) combined with the maturity of IT and governance frameworks (CoBIT, ITIL) and the readiness of the business and IT/Security cultures to accept risk management as a language and actionset with which they need to be conversant will yield huge benefits this year.

    Couple that with solutions like Skybox and you’ve got the makings of a strategic risk management capability that can bring security into closer alignment with the business.
     

  10. Rich Mogull will, indeed, move in with his mom and start speaking Klingon
    ’nuff said.
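As promised back in item #6, here’s the precision-versus-accuracy point in numbers.  Even a detector that catches 99% of attacks produces mostly false alarms when attacks are rare unless its false positive rate is extremely low — which is exactly why folks hesitate to turn on in-line prevention.  The event volumes and rates below are hypothetical:

```python
def alerts(events, attack_rate, detection_rate, false_positive_rate):
    """Precision of an in-line 'prevention' decision given a low base rate of attacks."""
    attacks = events * attack_rate
    benign = events - attacks
    true_alerts = attacks * detection_rate
    false_alerts = benign * false_positive_rate
    precision = true_alerts / (true_alerts + false_alerts)
    return true_alerts, false_alerts, precision

# Hypothetical numbers: 10M events/day, 1 in 100,000 is an attack,
# and the engine catches 99% of attacks.
for fpr in (0.01, 0.0001):
    t, f, p = alerts(10_000_000, 1e-5, 0.99, fpr)
    print(f"FP rate {fpr:.4%}: {t:.0f} true vs {f:.0f} false alerts -> precision {p:.1%}")
```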

So, there we have it.  A little bit of sunshine in your otherwise gloomy day.

/Hoff

Understanding & Selecting a DLP Solution…Fantastic Advice But Wholesale Misery in 10,000 Words or More…

November 6th, 2007 9 comments

If you haven’t been following Rich Mogull’s amazing writeup on how to "Understand and Select a DLP (Data Leakage Prevention) Solution," you’re missing one of the best combinations of market study, product dissection and consumer advice available on the topic, from The Man who covered the space at Gartner.

Here’s a link to the latest episode (part 7!) that you can use to work backwards from.

This is not a knock on the enormous amount of work Rich has done to educate us all; in fact, it’s probably one of the reasons he chose to write this magnum opus.  This stuff is complicated, which explains why we’re still having trouble solving this problem…

If it takes 7 large blog posts and over 10,000 words to enable someone to make a reasonably educated decision on how to consider approaching the purchase of one of these solutions, there are two possible reasons for this:

  1. Rich is just a detail-oriented, anal-retentive ex-analyst who does a fantastic job of laying out everything you could ever want to know about this topic given his innate knowledge of the space, or
  2. It’s a pie that ain’t quite baked.

I think the answer is "C – All of the above," and it’s absolutely no wonder why this market feature has a cast of vendors who are shopping themselves to the highest bidder faster than you can say "TablusPortAuthorityOakelyOnigmaProvillaVontu."

Yesterday we saw the leader in this space (Vontu) finally submit to the giant Yellow Sausage Machine.

The sales cycle and adoption attach rate for this sort of product must be excruciating if one must be subjected to the equivalent of the Old Testament just to understand the definition and scope of the solution…as a consumer, I know I have a pain that needs amelioration in this category, but which one of these ointments is going to stop the itching?

I dig one of the first paragraphs in Part I, which is probably the first clue we’re going to hit a slippery slope:

The first problem in understanding DLP is figuring out what we’re actually talking about. The following names are all being used to describe the same market:

  • Data Loss Prevention/Protection
  • Data Leak Prevention/Protection
  • Information Loss Prevention/Protection
  • Information Leak Prevention/Protection
  • Extrusion Prevention
  • Content Monitoring and Filtering
  • Content Monitoring and Protection

And I’m sure I’m missing a few. DLP seems the most common term, and while I consider its life limited, I’ll generally use it for these posts for simplicity. You can read more about how I think of this progression of solutions here.

So you’ve got that goin’ for ya… 😉

In the overall evolution of the solution landscape, I think that this iteration of the DLP/ILP/EP/CMF/CMP (!) solution sets raises the visibility of the need to make decisions on content in context and to focus on information centricity (data-centric "security" for the technologists) instead of the continued deployment of the packet-filtering 5-tuple network colanders and host-based agent bloatscapes being foisted upon us.

More on the topic of Information Centricity and its relevance to Information Survivability soon.  I spent a fair amount of time talking about this as a source of disruptive innovation/technology during my keynote at the Information Security Decisions conference yesterday.

Great conversations were had afterwards with some *way* smart people on the topic, and I’m really excited to share them once I can digest the data and write it down.

/Hoff

(Image Credit: Stephen Montgomery)

Running With Scissors…Security, Survivability, Management, Resilience…Whatever!

October 26th, 2007 4 comments

Runningscissors_3
Pointy Things Can Hurt

Mom always told me not to run with scissors because she knew that ugly things might happen if I did.  I seem to have blocked this advice out of my psyche.  Running with scissors can be exhilarating.

My latest set of posts represents the equivalent of blogging with scissors, it seems.

Sadly, sometimes one of the only ways to get people to
intelligently engage in contentious discourse on a critical element of our
profession is to bait them into a game of definitional semantics;
basically pushing buttons and debating nuance to finally arrive at the
destination of an "AHA! moment."

Either that, or I just suck at making a point and have to go through all the machinations to arrive at consensus.  I’m the first to admit that I oftentimes find myself misunderstood, but I’ve come to take this with a grain of salt and try to learn from my mistakes.

I don’t conspire to be tricky or otherwise employ cunning or guile to goad people with the goal of somehow making them look foolish, but rather to have discussions that need to be had.  You’ll just have to take my word on that.  Or not.

You Say Potato, I Say Po-ta-toe…

There are a number of you smart cookies who have been reading my posts on Information Survivability and have asked a set of very similar questions that are really insightful and provoke exactly the sort of discussion I had hoped for.

Interestingly, folks continue to argue definitional semantics without realizing that we’re mostly saying the same thing.  Most of you who are bristling really aren’t pushing back on the functional aspects of Information Security vs. Information Survivability.  Rather, it seems that you’ve become mired in the selection of words rather than the meme.

What do I mean?  Folks are spending far too much time debating which verb/noun to use to describe what we do, and we’re talking past each other.  Granted, a lot of this is my fault for the way I choose to stage the debate, and given this medium, it’s sometimes hard to re-focus the conversation because it becomes so fragmented.

Rich Mogull posted a great set of commentary on this titled "Information Security vs. Information Survivability: Retaking Our Vocabulary," wherein he eloquently rounds this out:

The problem is that we’ve lost control of our own vocabulary.
“Information security” as a term has come to define merely a fraction
of its intended scope.

Thus we have to use terms like security risk management and
information survivability to re-define ourselves, despite having a
completely suitable term available to us. It’s like the battle between
the words “hacker” and “cracker”. We’ve lost that fight with
“information security”, and thus need to use new language to advance
the discussion of our field.

When Chris, myself, and others talk about “information
survivability” or whatever other terms we’ll come up with, it’s not
because we’re trying to redefine our practice or industry, it’s because
we’re trying to bring security back to its core principles. Since we’ve
lost control of the vocabulary we should be using, we need to introduce
a new vocabulary just to get people thinking differently.

As usual, Rich follows up and tries to smooth this all out.  I’m really glad he did, because the commentary that followed showed, in two parts, exactly the behavior I am referring to.  The following was from a comment left on Rich’s post.  It’s not meant to single out the author, but it is awkwardly profound in its relevance:

[1] This is the crux of the biscuit. Thanks for saying this. I don’t
like the word “survivability” for the pessimistic connotations it has,
as you pointed out. I also think it’s a subset of information security,
not the other way around.

I can’t possibly fathom how one would suggest that Survivability, which encompasses risk management, resilience and classical CIA assurance with an overarching shift toward a business-integrated perspective, can be thought of as a subset of the narrow, technically-focused practice that Information Security has become.  There’s not much more I can say on this topic than I already have.

[2] Now, if you wanted to go up a level to *information management*,
where you were concerned not only with getting the data to where it
needs to be at the right time, but also with getting *enough* data, and
the *right* data, then I would buy that as a superset of information
security. Information management also includes the practices of
retaining the right information for as long as it’s needed and no
longer, and reducing duplication of information. It includes deciding
which information to release and which to keep private. It includes a
whole lot more than just security.

Um, that’s what Information Survivability *is.*  That’s not what Information Security has become, however, as the author clearly points out.  This is like some sort of weird passive-aggressive recursion.

So what this really means is that people are not actually disagreeing that the functional definition of Information Security is outmoded; they just don’t like the term survivability.  Fine! Call it what you will: Information Resilience, Information Management, Information Assurance.  But here’s why you can’t call it Information Security (from Lance’s comment here):

It seems like the focus here is less on technology, and more on process
and risk management. How is this approach from ISO 27000, or any other
ISMS? You use the word survivability instead of business process,
however other then that it seems more similar then different.

That’s right.  It’s not a technology-only focus.  Survivability (or whatever you’d like to call it) focuses on integrating risk assessment and risk management concepts with business blueprinting/business process modeling and applying just the right amount of Information Security where, when and how needed to align to the risk tolerance (I dare not say "appetite") of the business.
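For the sake of illustration only, here’s a tiny sketch of that idea. Everything in it is assumed: the control catalog, the risk-reduction factors and the tolerance threshold are all made-up numbers, not a real methodology. It just shows controls being applied per business process until residual risk falls within what the business says it will accept, rather than the same stack of boxes being deployed everywhere:

    # Assumed inputs: a made-up control catalog with made-up risk-reduction
    # factors, and a made-up risk tolerance the business is willing to accept.
    RISK_TOLERANCE = 0.3  # acceptable residual risk, on an arbitrary 0-1 scale

    CONTROL_CATALOG = [
        # (control, fraction of remaining risk it removes, relative cost)
        ("baseline hardening",       0.20, 1),
        ("encryption at rest",       0.30, 2),
        ("DLP monitoring",           0.25, 3),
        ("full DRM/ERM enforcement", 0.40, 5),
    ]

    def select_controls(inherent_risk: float) -> list[str]:
        """Apply the cheapest controls first, stopping once residual risk
        falls within the business's stated tolerance."""
        residual, chosen = inherent_risk, []
        for name, reduction, _cost in sorted(CONTROL_CATALOG, key=lambda c: c[2]):
            if residual <= RISK_TOLERANCE:
                break
            residual *= (1 - reduction)
            chosen.append(name)
        return chosen

    # A high-risk, revenue-critical process gets several controls; a low-risk
    # internal process gets only the baseline -- same catalog, different "amount."
    print(select_controls(0.90))  # all four controls
    print(select_controls(0.35))  # ['baseline hardening']

The arithmetic is deliberately naive; the design point is simply that the business’s tolerance, not the vendor’s feature list, decides how much security gets applied and where.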

In a "scientific" taste test, 7/10 information security programs are focused on compliance and managing threats and vulnerabilities.  They don’t holistically integrate and manage risk.  They deploy a bunch of boxes using a cost-model that absolutely sucks donkey…  See Gunnar’s posts on the matter.

Lipstickpig
There are more similarities than differences in many cases, but the reality is that most people today in our profession completely abuse the term "risk."  Not intentionally, mind you, but for the same reason Information Security has been bastardized and spread liberally like some great mitigation marmalade across the toasty canvas of our profession.

The short of this is that you can playfully toy with putting lipstick on a pig (which I did for argument’s sake) and call what you do anything you like.

However, unless what you do (regardless of what you call it and no matter how much "difference" you seem to think you make) is aligned with the strategic initiatives of the company, your function becomes irrelevant over time.  Or, at best, a giant speedbump.

Time for Jiu Jitsu practice!  With Scissors!

/Hoff