Archive

Archive for the ‘De-Perimeterization’ Category

GooglePOPs – Cloud Computing and Clean Pipes: Told Ya So…

May 8th, 2008 9 comments

In July of last year, I prognosticated that Google, with its various acquisitions, was entering the security space with the intent to make security not just a browser feature for search and the odd GoogleApp, but a revenue-generating service delivery differentiator using SaaS via applications and clean-pipes delivery transit in the cloud for enterprises.

My position even got picked up by thestreet.com.  By now it probably sounds like old news, but…

Specifically, in my post titled "Tell Me Again How Google Isn’t Entering the Security Market? GooglePOPs will Bring Clean Pipes…" I argued (and was ultimately argued with) that Google’s $625M purchase of Postini was just the beginning:

This morning’s news that Google is acquiring Postini for $625 Million dollars doesn’t surprise me at all and I believe it proves the point.

In fact, I reckon that in the long term we’ll see the Google Toolbar morph into a much more intelligent and rich client-side security application proxy service, whereby Google actually utilizes the client-side security of the Toolbar paired with the GreenBorder browsing environment and tunnels/proxies all outgoing requests to GooglePOPs.

What’s a GooglePOP?

These GooglePOPs (Google Points of Presence) will house large search and caching repositories that will — in conjunction with services such as those from Postini — provide a "clean pipes" service to the consumer.  Don’t forget the utility services that recent acquisitions such as GrandCentral and FeedBurner provide…it’s too bad that eBay snatched up Skype…

Google will, in fact, become a monster ASP.  Note that I said ASP and not ISP.  ISP is a commoditized function.  Serving applications and content as close to the user as possible is fantastic.  So pair all the client side goodness with security functions AND add GoogleApps and you’ve got what amounts to a thin client version of the Internet.

Here’s where we are almost a year later.  From the Ars Technica post titled "Google turns Postini into Google Web Security for Enterprise:"

The company’s latest endeavor, Google Web Security for Enterprise, is now available, and promises to provide a consistent level of system security whether an end-user is surfing from the office or working at home halfway across town.

The new service is branded under Google’s "Powered by Postini" product line and, according to the company, "provides real-time malware protection and URL filtering with policy enforcement and reporting. An additional feature extends the same protections to users working remotely on laptops in hotels, cafes, and even guest networks." The service is presumably activated by signing in directly to a Google service, as Google explicitly states that workers do not need access to a corporate network.

The race for cloud and secure utility computing continues, and a focus on encapsulated browsing and application delivery environments, regardless of transport/ISP, is starting to take shape.

Just think about the traditional model of our enterprise, and how we access our resources today, turned inside out as a natural progression of re-perimeterization.  It starts to play out at the other end of the information centricity spectrum.

What with the many new companies entering this space and the likes of Google, Microsoft and IBM banging the drum, it’s going to be one interesting ride.

/Hoff

The Walls Are Collapsing Around Information Centricity

March 10th, 2008 2 comments

Since Mogull and I collaborate quite a bit on projects and share many thoughts and beliefs, I wanted to make a couple of comments on his last post on Information Centricity and remind the audience at home of a couple of really important points.

Rich’s post was short and sweet regarding the need for Information-Centric solutions with some profound yet subtle guideposts:

For information-centric security to become a reality, in the long term it needs to follow the following principles:

  1. Information (data) must be self describing and defending.
  2. Policies and controls must account for business context.
  3. Information must be protected as it moves from structured to unstructured, in and out of applications, and changing business context.
  4. Policies must work consistently through the different defensive layers and technologies we implement.

I’m not convinced this is a complete list, but I’m trying to keep to my new philosophy of shorter and simpler. A key point that might not be obvious is that while we have self-defending data solutions, like DRM and label security, for success they must grow to account for business context. That’s when static data becomes usable information.

Mike Rothman gave an interesting review of Rich’s post:


The Mogull just laid out your work for the next 10 years. You just probably don’t know it yet. Yes, it’s all about ensuring that the fundamental elements of your data are protected, however and wherever they are used. Rich has broken it up into 4 thoughts. The first one made my head explode: "Information (data) must be self-describing and defending."

Now I have to clean up the mess. Sure, things like DRM are a bad start, and have tarnished how we think about information-centric security, but you do have to start somewhere. The reality is this is a really long term vision of a problem where I’m not sure how you get from Point A to Point B. We all talk about the lack of innovation in security. And how the market just isn’t exciting anymore. What Rich lays out here is exciting. It’s also a really really really big problem. If you want a view of what the next big security company does, it’s those 4 things. And believe me, if I knew how to do it, I’d be doing it – not talking about the need to do it.

The comments I want to make are three-fold:

  1. Rich is re-stating, and Mike’s head is exploding around, the exact concepts that Information Survivability represents and that the Jericho Forum trumpets in their Ten Commandments.  In fact, you can read all about that in prior posts I made on the subjects of the Jericho Forum, re-perimeterization, information survivability and information centricity.  I like this post on a process I call ADAPT (Applied Data and Application Policy Tagging) a lot; a rough sketch of the idea follows this list.

    For reference, here are the Jericho Forum’s Ten Commandments. Please see #9:

    [Images: the Jericho Forum’s Ten Commandments, parts 1 and 2]

  2. As Mike alluded, DRM/ERM has received a bad rap because of how it’s implemented, which has left a sour taste in the consumer consciousness.  As a business tool, it is the precursor of information-centric policy and will become the linchpin in how we ultimately gain a foothold on solving the information resiliency/assurance/survivability problem.
  3. As to the innovation and dialog that Mike suggests is lacking in this space, I’d suggest he’s suffering from a bit of Shiitake-ism (a la mushroom-itis).  The next generation of DLP solutions that are becoming CMP (Content Monitoring and Protection, a term I coined) are evolving to deal with just this very thing.  It’s happening.  Now.

    Further to that, I have been briefed by some very, very interesting companies in stealth mode who are looking to shake up this space as we speak.
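
ADAPT is a concept rather than shipping code, but to make "self-describing and defending" data less abstract, here’s a minimal sketch of what a policy-tagged data object might look like.  Everything in it is illustrative: the field names, the key handling and the contexts are my inventions, not part of any actual ADAPT or Jericho specification.

    import hashlib
    import hmac
    import json

    # Illustrative only: a data object that carries its own classification,
    # handling policy and integrity tag, so any enforcement point that meets
    # it can make a policy decision without asking the network for help.
    SECRET_KEY = b"replace-with-a-managed-key"  # key management is the hard part

    def wrap(payload: bytes, classification: str, allowed_contexts: list) -> str:
        envelope = {
            "classification": classification,      # e.g. "confidential"
            "allowed_contexts": allowed_contexts,  # business contexts that may open it
            "sha256": hashlib.sha256(payload).hexdigest(),
            "payload": payload.decode(),
        }
        body = json.dumps(envelope, sort_keys=True)
        tag = hmac.new(SECRET_KEY, body.encode(), hashlib.sha256).hexdigest()
        return json.dumps({"body": envelope, "tag": tag})

    def may_open(wrapped: str, context: str) -> bool:
        doc = json.loads(wrapped)
        body = json.dumps(doc["body"], sort_keys=True)
        expected = hmac.new(SECRET_KEY, body.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, doc["tag"]):
            return False  # tampered with: the data "defends" itself by refusing to open
        return context in doc["body"]["allowed_contexts"]

The point isn’t the crypto; it’s that the classification and handling policy travel with the data, so the enforcement decision can be made wherever the data lands.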

So, prepare for Information Survivability, increased Information Resilience and assurance.  Coming to a solution near you…

/Hoff

Security Today == Shooting Arrows Through Sunroofs of Cars?

February 7th, 2008 14 comments

In this Dark Reading post, Peter Tippett, described as the inventor of what is now Norton Anti-virus, suggests that the bulk of InfoSec practices are "…outmoded or outdated concepts that don’t apply to today’s computing environments."

As I read through this piece, I found myself flip-flopping between violent agreement and incredulous eye-rolling from one paragraph to the next, caused somewhat by the overuse of hyperbole in some of his analogies.  This was disappointing, but overall, I enjoyed the piece.

Let’s take a look at Peter’s comments:

For example, today’s security industry focuses way too much time on vulnerability research, testing, and patching, Tippett suggested. "Only 3 percent of the vulnerabilities that are discovered are ever exploited," he said. "Yet there is a huge amount of attention given to vulnerability disclosure, patch management, and so forth."

I’d agree that the "industry" certainly focuses their efforts on these activities, but that’s exactly the mission of the "industry" that he helped create.  We, as consumers of security kit, have perpetuated a supply-driven demand security economy.

There’s a huge amount of attention paid to vulnerabilities, patching and prevention that doesn’t prevent, because at this point that’s all we’ve got.  Until we start focusing on the root cause rather than the symptoms, this is a cycle we won’t break.  See my post titled "Sacred Cows, Meatloaf, and Solving the Wrong Problems" for an example of what I mean.


Tippett compared vulnerability research with automobile safety research. "If I sat up in a window of a building, I might find that I could shoot an arrow through the sunroof of a Ford and kill the driver," he said. "It isn’t very likely, but it’s possible.

"If I disclose that vulnerability, shouldn’t the automaker put in some sort of arrow deflection device to patch the problem? And then other researchers may find similar vulnerabilities in other makes and models," Tippett continued. "And because it’s potentially fatal to the driver, I rate it as ‘critical.’ There’s a lot of attention and effort there, but it isn’t really helping auto safety very much."

What this really means, and what Peter never quite states, is that mitigating vulnerabilities in the absence of threat, impact or probability is a bad thing.  This is why I make such a fuss about managing risk instead of mitigating vulnerabilities.  If there were millions of malicious archers firing arrows through the sunroofs of unsuspecting Ford Escort drivers, then the ‘critical’ rating would be relevant given the probability and impact of all those slings and arrows of thine enemies…
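
To put a number on that distinction, here’s a toy annualized-loss-expectancy calculation; the probabilities and dollar figures are made up for illustration.  Severity alone rates the arrow attack as "critical," while a probability-weighted view ranks it where it belongs:

    # Toy illustration: risk as probability x impact (annualized loss
    # expectancy), versus rating on severity alone.
    def annualized_risk(probability_per_year: float, impact_dollars: float) -> float:
        return probability_per_year * impact_dollars

    # An arrow through the sunroof: catastrophic impact, vanishingly rare.
    arrow = annualized_risk(probability_per_year=1e-9, impact_dollars=5_000_000)

    # A lost, unencrypted laptop: modest impact, happens constantly.
    laptop = annualized_risk(probability_per_year=0.5, impact_dollars=100_000)

    print(f"arrow:  ${arrow:,.4f}/yr")   # effectively zero; don't patch the sunroof
    print(f"laptop: ${laptop:,.0f}/yr")  # $50,000/yr; spend here first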

Tippett also suggested that many security pros waste time trying to buy or invent defenses that are 100 percent secure. "If a product can be cracked, it’s sometimes thrown out and considered useless," he observed. "But automobile seatbelts only prevent fatalities about 50 percent of the time. Are they worthless? Security products don’t have to be perfect to be helpful in your defense."

I like his analogy and the point he’s trying to underscore.  What I find in many cases is that the binary evaluation of security efficacy, in products and programs, still exists.  In the absence of measuring the effect something actually has on one’s risk posture, people revert to a non-gradient scale: 0% or 100%, insecure or secure.  Is being "secure" really important, or is managing to a level of risk that is acceptable, with or without losses, the truly relevant measure of success?

This concept also applies to security processes, Tippett said. "There’s a notion out there that if I do certain processes flawlessly, such as vulnerability patching or updating my antivirus software, that my organization will be more secure. But studies have shown that there isn’t necessarily a direct correlation between doing these processes well and the frequency or infrequency of security incidents.

"You can’t always improve the security of something by doing it better," Tippett said. "If we made seatbelts out of titanium instead of nylon, they’d be a lot stronger. But there’s no evidence to suggest that they’d really help improve passenger safety."

I would like to see these studies.  I think that companies who have rigorous, mature and transparent processes that they execute "flawlessly" may not be more "secure" (a measurement I’d love to see quantified), but they are in a much better position to respond and recover when (not if) an event occurs.  Based upon the established corollary that we can’t be 100% "secure" in the first place, we know we’re going to have incidents.

Being able to recover from them or continue to operate while under duress is more realistic and important in my view.  That’s the point of information survivability.


Security teams need to rethink the way they spend their time, focusing on efforts that could potentially pay higher security dividends, Tippett suggested. "For example, only 8 percent of companies have enabled their routers to do ‘default deny’ on inbound traffic," he said. "Even fewer do it on outbound traffic. That’s an example of a simple effort that could pay high dividends if more companies took the time to do it."

I agree.  Focusing on efforts that eliminate entire classes of problems based upon reducing risk is a more appropriate use of time, money and resources.
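
"Default deny" is about as cheap as high-leverage advice gets.  Here’s the logic in a minimal sketch, expressed in Python rather than any particular router’s syntax since the principle is vendor-neutral; the allow-list entries are placeholders, and a real list falls out of your business requirements:

    # Default deny: enumerate what is allowed; everything else is dropped.
    ALLOWED_INBOUND = {
        ("tcp", 80),   # web
        ("tcp", 443),  # web over SSL/TLS
        ("tcp", 25),   # mail, to the MTA only
    }

    def permit_inbound(protocol: str, dst_port: int) -> bool:
        # Nothing matched means deny. Contrast with "default permit," where
        # every newly discovered bad port needs its own deny rule, forever.
        return (protocol, dst_port) in ALLOWED_INBOUND

    assert permit_inbound("tcp", 443) is True
    assert permit_inbound("tcp", 31337) is False  # never explicitly denied; denied anyway

The same shape applies outbound, which is where, as Tippett notes, even fewer shops bother.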

Security awareness programs also offer a high rate of return, Tippett said. "Employee training sometimes gets a bad rap because it doesn’t alter the behavior of every employee who takes it," he said. "But if I can reduce the number of security incidents by 30 percent through a $10,000 security awareness program, doesn’t that make more sense than spending $1 million on an antivirus upgrade that only reduces incidents by 2 percent?"
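
Tippett’s arithmetic deserves to be made explicit, because it’s the entire argument for portfolio thinking in two lines (his hypothetical figures, not real data):

    # Cost per percentage point of incident reduction, per Tippett's
    # hypothetical numbers above. Not real data.
    awareness = 10_000 / 30      # ~ $333 per point of reduction
    antivirus = 1_000_000 / 2    # $500,000 per point of reduction
    print(awareness, antivirus)  # training is ~1,500x more cost-effective here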

Nod.  That was the point of the portfolio evaluation process I gave in my disruptive innovation presentation:

24. Provide Transparency in portfolio effectiveness
[Slide: a bubble chart of the security investment portfolio]

I didn’t invent this graph, but it’s one of my favorite ways of visualizing my investment portfolio by measuring in three dimensions: business impact, security impact and monetized investment.  All of these definitions are subjective within your organization (as well as how you might measure them).

The Y-axis represents the "security impact" that the solution provides.  The X-axis represents the "business impact" that the solution provides, while the size of the dot represents the capex/opex investment made in the solution.

Each of the dots represents a specific solution in the portfolio.

If you have a solution that is a large dot toward the bottom-left of the graph, one has to question the reason for continued investment, since it provides little in the way of perceived security and business value at high cost.  On the flipside, if a solution is represented by a small dot in the upper-right, the bang for the buck is high, as is the impact it has on the organization.

The goal would be to get as many of your investments in your portfolio from the bottom-left to the top-right with the smallest dots possible.
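
If you want to draw this kind of chart for your own portfolio, a bubble chart takes only a few lines; this sketch uses matplotlib, and the solutions and scores in it are made-up placeholders, so substitute your own assessments:

    import matplotlib.pyplot as plt

    # Placeholder portfolio: (name, business impact 0-10, security impact 0-10,
    # annual capex/opex in dollars). The scores are subjective and yours to assign.
    portfolio = [
        ("Legacy NIDS",       2, 3, 900_000),
        ("SSL VPN",           8, 7, 250_000),
        ("DLP/CMP",           7, 8, 400_000),
        ("Awareness program", 6, 6,  10_000),
    ]

    for name, biz, sec, spend in portfolio:
        # Dot size scales with spend: big dots in the lower-left are divestment
        # candidates, small dots in the upper-right are the keepers.
        plt.scatter(biz, sec, s=spend / 1_000, alpha=0.5)
        plt.annotate(name, (biz, sec))

    plt.xlabel("Business impact")
    plt.ylabel("Security impact")
    plt.title("Security investment portfolio")
    plt.show()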

This transparency, and the process by which the portfolio is assessed, is delivered as an output of the strategic innovation framework, which is really part art and part science.

All in all, a good read from someone who helped create the monster and is now calling it ugly…

/Hoff

Thin Clients: Does This Laptop Make My Ass(ets) Look Fat?

January 10th, 2008 11 comments

Juicy Fat Assets, Ripe For the Picking…

So here’s an interesting spin on de/re-perimeterization…if people think we cannot achieve, and cannot afford to wait for, secure operating systems, secure protocols and self-defending information-centric environments, but need to "secure" their environments today, I have a simple question supported by a simple equation for illustration:

For the majority of mobile and internal users in a typical corporation who use the basic set of applications:

  1. Assume a company that:
    …fits within the 90% of those who still have data centers, isn’t completely outsourced/off-shored for IT, supports a remote workforce that uses a Microsoft OS and the usual suspect applications, and doesn’t plan on utilizing distributed grid computing or widespread third-party SaaS
  2. Take the following:
    Data Breaches.  Lost Laptops.  Non-sanitized corporate hard drives on eBay.  Malware.  Non-compliant asset configurations.  Patching woes.  Hardware failures.  Device Failure.  Remote Backup issues.  Endpoint Security Software Sprawl.  Skyrocketing security/compliance costs.  Lost Customer Confidence.  Fines.  Lost Revenue.  Reduced budget.
  3. Combine With:
    Cheap Bandwidth.  Lots of types of bandwidth/access modalities.  Centralized Applications and Data. Any Web-enabled Computing Platform.  SSL VPN.  Virtualization.  Centralized Encryption at Rest.  IAM.  DLP/CMP.  Lots of choices to provide thin-client/streaming desktop capability.  Offline-capable Web Apps.
  4. Shake Well, Re-allocate Funding, Streamline Operations and "Security"…
  5. You Get:
    Less Risk.  Less Cost.  Better Control Over Data.  More "Secure" Operations.  Better Resilience.  Assurance of Information.  Simplified Operations.  Easier Backup.  One Version of the Truth (data).

I really just don’t get why we continue to deploy and are forced to support remote platforms we can’t protect, and allow our data to inhabit islands we can’t control, all while admitting the inevitability of disaster and continuing to spend our money on solutions that can’t possibly solve the problems.

If we’re going to be information centric, we should take the first rational and reasonable steps toward doing so.  Until the operating systems are more secure and the data can self-describe and cause the compute and network stacks to "self-defend," why do we continue to focus on the endpoint?  It’s a waste of time.

If we can isolate and reduce the number of avenues of access to data and leverage dumb presentation platforms to do it, why aren’t we?

…I mean besides the fact that an entire industry has been leeching off this mess for decades…


I’ll Gladly Pay You Tuesday For A Secure Solution Today…

The technology exists TODAY to centralize the bulk of our most important assets and allow our workforce to accomplish their goals and the business to function just as well (perhaps better) without the need for data to actually "leave" the data centers in whose security we have already invested so much money.

Many people are doing that with their servers already with the adoption of virtualization.  Now they need to do it with their clients.

The only reason we’re now going absolutely stupid and spending money on securing endpoints in their current state is because we’re CAUSING (not just allowing) data to leave our enclaves.  In fact, with all this blabla2.0 hype, we’ve convinced ourselves we must.

Hogwash.  I’ve posted on the consumerization of IT where companies are allowing their employees to use their own compute platforms.  How do you think many of them do this?

Relax, Dude…Keep Your Firewalls…

In the case of centralized computing and streamed desktops to dumb/thin clients, the "perimeter" still includes our data centers and security castles/moats, but it also encapsulates a streamed, virtualized, encrypted, and authenticated thin-client session bubble.  There’s no longer any reason to worry about the endpoint; it’s nothing more than a flickering display with a keyboard/mouse.

Let your kid use Limewire.  Let Uncle Bob surf pr0n.  Let wifey download spyware.  If my data and applications don’t live on the machine and all the clicks/mouseys are just screen updates, what do I care?

Yup, you can still use a screen scraper or a camera phone to use data inappropriately, but this is where balancing risk comes into play.  Let’s keep the discussion within the 80% of reasonable factored arguments.  We’ll never eliminate 100% and we don’t have to in order to be successful.

Sure, there are exceptions and corner cases where data *does* need to leave our embrace, but we can eliminate an entire class of problem if we take advantage of what we have today and stop this endpoint madness.

This goes for internal corporate users who are chained to their desks and not just mobile users.

What’s preventing you from doing this today?

/Hoff

Thinning the Herd & Chlorinating the Malware Gene Pool…

December 28th, 2007 3 comments

[Image: a swarming school of anchovies, from Science News]
Alan Shimel, in his post here, pointed us to an interesting article written by Matt Hines regarding the "herd intelligence" approach toward security.  He followed it up here.

All in all, I think both the original article in which Andy Jaquith was quoted and Alan’s interpretations shed an interesting light on a problem-solving perspective.

I’ve got a couple of comments on Matt and Alan’s scribbles.

I like the notion of swarms/herds.  The picture to the right from Science News describes the notion of "rapid response," wherein "mathematical modeling is explaining how a school of fish can quickly change shape in reaction to a predator."  If you’ve ever seen this in the wild or even in film, it’s an incredible thing to see in action.

It should then come as no surprise that I think that trying to solve the "security problem" is more efficiently performed (assuming one preserves the current construct of detection and prevention mechanisms) by distributing both functions and coordinating activity as part of an intelligent "groupthink" even when executed locally.  This is exactly what I was getting at in my "useful predictions" post for 2008:

Grid and distributed utility computing models will start to creep into security

A really interesting by-product of the "cloud compute" model is that as data, storage, networking, processing, etc. get distributed, so shall security.  In the grid model, one doesn’t care where the actions take place so long as service levels are met and the experiential and business requirements are delivered.  Security should be thought of in exactly the same way.

The notion that you can point to a physical box and say it performs function ‘X’ is so last Tuesday.  Virtualization already tells us this.  So, imagine if your security processing isn’t performed by a monolithic appliance but instead is contributed to in a self-organizing fashion wherein the entire ecosystem (network, hosts, platforms, etc.) all contribute in the identification of threats and vulnerabilities as well as function to contain, quarantine and remediate policy exceptions.

Sort of sounds like that "self-defending network" schpiel, but not focused on the network and with common telemetry and distributed processing of the problem.  Check out Red Lambda’s cGrid technology for an interesting view of this model.

This basically means that we should distribute the sampling, detection and prevention functions across the entire networked ecosystem, not just to dedicated security appliances.  Each of the end nodes should communicate using a standard signaling and telemetry protocol so that threats, vulnerabilities and effective dispositions can be communicated up- and downstream to one another and to one or more management facilities.
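
No such standard protocol exists yet, but the shape of a telemetry record is easy to imagine.  Here’s a hypothetical sketch; every field name in it is invented for illustration:

    from dataclasses import dataclass, asdict
    import json
    import time

    @dataclass
    class ThreatSighting:
        """Hypothetical record an end node would share with the herd."""
        reporter_id: str     # which node saw it
        observed_at: float   # epoch seconds
        indicator: str       # e.g. hash of a suspect binary, or a source IP
        indicator_type: str  # "file-sha256", "src-ip", "url", ...
        confidence: float    # 0.0-1.0, the local sensor's own certainty
        disposition: str     # what the node did: "quarantined", "blocked", "observed"

    sighting = ThreatSighting(
        reporter_id="host-0042",
        observed_at=time.time(),
        indicator="9f86d081884c7d65...",
        indicator_type="file-sha256",
        confidence=0.92,
        disposition="quarantined",
    )

    # Serialize for transport up/downstream to peers and management facilities.
    wire_format = json.dumps(asdict(sighting))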

This is what Andy was referring to when he said:

As part of the effort, security vendors may also need to begin sharing more of that information with their rivals to create a larger network effect for thwarting malware on a global basis, according to the expert.

It may be hard to convince rival vendors to work together because of the perception that it could lessen differentiation between their respective products and services, but if the process clearly aids in quelling the rising tide of new malware strains, the software makers may have little choice other than to partner, he said.

Secondly, Andy suggested that basically every end-node would effectively become its own honeypot:

"By turning every endpoint into a malware collector, the herd network effectively turns into a giant honeypot that can see more than existing monitoring networks," said Jaquith. "Scale enables the herd to counter malware authors’ strategy of spraying huge volumes of unique malware samples with, in essence, an Internet-sized sensor network."

I couldn’t agree more!  This is the sort of thing that I was getting at back in August when I was chatting with Lance Spitzner regarding using VM’s for honeypots on distributed end nodes:

I clarified that what I meant was actually integrating a HoneyPot running in a VM on a production host as part of a standardized deployment model for virtualized environments.  I suggested that this would integrate into the data collection and analysis models the same way as a "regular" physical HoneyPot machine, but could utilize some of the capabilities built into the VMM/HV’s vSwitch to actually spread the virtualization of a single HoneyPot across an entire collection of VM’s on a single physical host.

Thirdly, the notion of information sharing across customers has been implemented cross-sectionally in industry verticals with the advent of the ISACs, such as the Financial Services Information Sharing and Analysis Center, which seeks to inform and ultimately leverage distributed information gathering and sharing to protect its subscribing members.  Generally-available services like Symantec’s DeepSight have also tried to accomplish similar goals.

Unfortunately, these offerings generally lack the capacity to garner ubiquitous data gathering and real-time enforcement capabilities.

As Matt pointed out in his article, gaining actionable intelligence on the monstrous amount of telemetric data from participating end nodes means that there is a need to really prune for false positives.  This is the trade-off between simply collecting data and actually applying intelligence at the end-node and effecting disposition. 
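
One plausible way to strike that balance (my sketch, not something Matt proposed) is to aggregate sightings at the edge and escalate an indicator only once enough independent nodes have corroborated it:

    from collections import defaultdict

    # Sketch: suppress noise locally; escalate an indicator only once enough
    # independent nodes report it with enough aggregate confidence.
    MIN_REPORTERS = 3
    MIN_TOTAL_CONFIDENCE = 2.0

    reports = defaultdict(dict)  # indicator -> {reporter_id: confidence}

    def ingest(indicator: str, reporter_id: str, confidence: float) -> bool:
        """Returns True when the herd should act on this indicator."""
        reports[indicator][reporter_id] = confidence  # count each node once
        seen = reports[indicator]
        return (len(seen) >= MIN_REPORTERS
                and sum(seen.values()) >= MIN_TOTAL_CONFIDENCE)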

This requires technology with a small enough footprint, which we’re starting to see emerge, paired with the compute power we have in endpoints today.

Finally, as the "network" (which means the infrastructure as well as the "extrastructure" delivered by services in the cloud) gains more intelligence and information-centric granularity, it will pick up some of the slack — at least from the perspective of sloughing off the low-hanging fruit by using similar concepts.

I am hopeful that as we gain more information-centric footholds, we won’t have to worry about responding to every threat, but only to those that might impact the most important assets we seek to protect.

Ultimately the end-node is really irrelevant from a protection perspective as it should really be little more than a presentation facility; the information is what matters.  As we continue to make progress toward more resilient operating systems leveraging encryption and mutual authentication within communities of interest/trust, we’ll start to become more resilient and information assured.

The sharing of telemetry to allow these detective and preventative/protective capabilities to self-organize and perform intelligent offensive/evasive actions will evolve naturally as part of this process.

Mooooooo.

/Hoff

Sacred Cows, Meatloaf, and Solving the Wrong Problems…

October 16th, 2007 29 comments

Just as I finished up a couple of posts decrying the investments being made in lumping device after device on DMZ boundaries for the sake of telling party guests that one subscribes to the security equivalent of the "Jam of the Month Club" (AKA Defense-In-Depth), I found a fantastic post on the CERIAS blog where Prof. Eugene Spafford wrote a piece titled "Solving Some of the Wrong Problems."

In the last two posts (here and here,) I used the example of the typical DMZ and its deployment as a giant network colander which, despite costing hundreds of thousands of dollars, doesn’t generally deliver us from the attacks it’s supposedly designed to defend against — or at least those that really matter.

This is mostly because these "solutions" treat the symptoms and not the problem, but we cling to the technology artifacts because it’s the easier row to hoe.

I’ve spent a lot of time over the last few months suggesting that people ought to think differently about who, what, why and how they are focusing their efforts.  This has come about due to some enlightenment I received as part of exercising my noodle using my blog.  I’m hooked and convinced it’s time to make a difference, not a buck.

My rants on the topic (such as those regarding the Jericho Forum) have induced the curious wrath of technology apologists who have no answers beyond those found in a box off the shelf.

I found such resonance in Spaf’s piece that I must share it with you. 

Yes, you.  You who have chided me privately and publicly for my recent proselytizing that our efforts are focused on solving the wrong sets of problems.  The same you who continues to claw desperately at your sacred firewalls whilst we have many of the tools needed to solve a majority of the problems we face, yet choose to do otherwise.  This isn’t an "I told you so."  It’s a "You should pay attention to someone who is wiser than you and I."

Feel free to tell me I’m full of crap (and dismiss my ramblings as just that,) but I don’t think that many can claim to have earned the right to dismiss Spaf’s thoughts offhandedly, given his time served and expertise in matters of information assurance, survivability and security:

As I write this, I’m sitting in a review of some university research in cybersecurity. I’m hearing about some wonderful work (and no, I’m not going to identify it further). I also recently received a solicitation for an upcoming workshop to develop “game changing” cyber security research ideas. What strikes me about these efforts — representative of efforts by hundreds of people over decades, and the expenditure of perhaps hundreds of millions of dollars — is that the vast majority of these efforts have been applied to problems we already know how to solve.

We know how to prevent many of our security problems — least privilege, separation of privilege, minimization, type-safe languages, and the like. We have over 40 years of experience and research about good practice in building trustworthy software, but we aren’t using much of it.

Instead of building trustworthy systems (note — I’m not referring to making existing systems trustworthy, which I don’t think can succeed) we are spending our effort on intrusion detection to discover when our systems have been compromised.

We spend huge amounts on detecting botnets and worms, and deploying firewalls to stop them, rather than constructing network-based systems with architectures that don’t support such malware.

Instead of switching to languages with intrinsic features that promote safe programming and execution, we spend our efforts on tools to look for buffer overflows and type mismatches in existing code, and merrily continue to produce more questionable quality software.

And we develop almost mindless loyalty to artifacts (operating systems, browsers, languages, tools) without really understanding where they are best used — and not used. Then we pound on our selections as the “one, true solution” and justify them based on cost or training or “open vs. closed” arguments that really don’t speak to fitness for purpose. As a result, we develop fragile monocultures that have a particular set of vulnerabilities, and then we need to spend a huge amount to protect them. If you are thinking about how to secure Linux or Windows or Apache or C++ (et al), then you aren’t thinking in terms of fundamental solutions.

Please read his entire post.  It’s wonderful. Dr. Spafford, I apologize for re-posting so much of what you wrote, but it’s so fantastically spot-on that I couldn’t help myself.

Timing is everything.

/Hoff

{Ed: I changed the sentence regarding Spaf above after considering Wismer’s comments below.  I didn’t mean to insinuate that one should preclude challenging Spaf’s assertions, but rather that given his experience, one might choose to listen to him over me any day — and I’d agree!  Also, I will get out my Annie Oakley decoder ring and address that Cohen challenge he brought up after at least 2-3 hours of sleep… 😉 }

Opinions Are Like De-Perimeterized Virtualized Servers — Everyone’s Got One, Even Larry Seltzer

October 2nd, 2007 3 comments

[Image: still from the movie Weird Science]
Dude, maybe if we put bras on our heads and chant incoherently we can connect directly to the Internet…

Somebody just pushed my grumpy button!  I’m all about making friends and influencing people, but the following article titled "You Wouldn’t Actually Turn Off Your Firewall, Would You?" is simply a steaming heap of unqualified sensationalism, plain and simple. 

It doesn’t really deserve my attention, but the FUD it attempts to promulgate is nothing short of Guinness material, and I’m wound up because my second Jiu Jitsu class of the week isn’t until tomorrow night and I’ve got a hankering for an arm-bar.

Larry Seltzer from eWeek decided to pen an opinion piece which attempts for no good reason to collapse two of my favorite topics into a single discussion: de-perimeterization (don’t moan!) and virtualization. 

What one really has to do directly with the other within the context of this discussion, I don’t rightly understand, but it makes for good drama I suppose.

Larry starts off with a question we answered in this very blog (here, here, here and here) weeks ago:

Opinion: I’m unclear on what deperimeterization means. But if it means putting company systems directly on the Internet then it’s a big mistake.

OK, that’s a sort of a strange way to state an opinion and hinge an article, Larry. Why don’t you go to the source provided by those who coined the term, here.  Once you’re done there, you can read the various clarifications and debates above. 

But before we start, allow me to just point out that almost every single remote salesperson who has a laptop that sits in a Starbucks or stays in a hotel is often connected "…directly on the Internet."  Oh, but wait, they’re sitting behind some sort of NAT gateway broadband-connected super firewall, ya?  Certainly the defenses at Joe’s Java shack must be as stringent as a corporate firewall, right?  <snore>

For weeks now I’ve been thinking on and off about "deperimeterization," a term that has been used in a variety of ways for years. Some analyst talk got it in the news recently.

So you’ve been thinking about this for weeks and don’t mention if you’ve spoken to anyone from the Jericho Forum (it’s quite obvious you haven’t read their 10 commandments) or anyone mentioned in the article save for a couple of analysts who decided to use a buzzword to get some press?  Slow news day, huh?

At least the goal of deperimeterization is to enhance security. That I can respect. The abstract point seems to be to identify the resources worth protecting and to protect them. "Resources" is defined very, very broadly.

The overreacting approach to this goal is to say that the network firewall doesn’t fit into it. Why not just put systems on the Internet directly and protect the resources on them that are worthy of protection with appropriate measures?

Certainly the network firewall fits into it.  Stateful inspection firewalls are, for the most part today, nothing more than sieves that filter out the big chunks.  They serve that purpose very nicely.  They allow port 80 and port 443 traffic through unimpeded.  Sweet.  That’s value.

Even the inventors of stateful inspection will tell you so (enter one Shlomo Kramer and one Nir Zuk).  Most "firewalls" (in the purest definition) don’t do much more than stateful ACL’s do today, and are supplemented with IDS’s, IPS’s, Web Application Firewalls, Proxies, URL Filters, Anti-Virus, Anti-Spam, Anti-Malware and DDoS controls for that very reason.

Yup, the firewall is just swell, Larry.  Sigh.

I hope I’m not misreading the approach, but that’s what I got out of our news article: "BP has taken some 18,000 of its 85,000 laptops off its LAN and allowed them to connect directly to the Internet, [Forrester Research analysts Robert Whiteley and Natalie Lambert] said." This is incredible, if true.

Not for nothing, but rather than depend on a "couple of analysts," did you think to contact someone from BP and ask them what they meant instead of speculating and deriding the effort before you condemned it?  Obviously not:

What does it mean? Perhaps it just means that they can connect to the VPN through a regular ISP connection? That wouldn’t be news. On the other hand, what else can it mean? Whitely and Lambert seem to view deperimeterization as a means to improve performance and lower cost. If you need to protect the data on a notebook computer they say you should do it with encryption and "data access controls." This is the philosophy in the 2001 article in which the term was coined.

Honestly, who in Sam Hill cares what "Whitely and Lambert" seem to view deperimeterization as?  They didn’t coin the term, they butchered its true intent, and you still don’t apparently know how to answer your own question.

Further, you also reference a conceptual document floated back in 2001, ignoring the author’s caution that "The actual concept behind the entire paper never really flew, but you may find that too thought provoking."  Onward.

But of course you can’t just put a system on Comcast and have it access corporate resources. VPNs aren’t just about security, they connect a remote client into the corporate network. So unless everyone in the corporation has subnet mask of 0.0.0.0 there needs to be some network management going on.

Firstly, nobody said that network management should be avoided.  Where the heck did you get that!?

Secondly, if you don’t have firewalls in the way, sure you can — but that would be cheating the point of the debate.  So we won’t go there.  Yet.  OK, I lied, here we go.

Thirdly, if you look at what you will get with, say, Vista and Longhorn, that’s exactly what you’ll be able to do.  You can simply connect to the Internet and using encryption and mutual authentication, gain access to internal corporate resources without the need for a VPN client at all.  If you need a practical example, you can read about it here, where I saw it with my own eyes.
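
Whatever the exact Microsoft plumbing turns out to be, the general pattern (mutually authenticated, encrypted sessions straight over the Internet, with no VPN client) is easy to demonstrate with plain TLS.  A minimal client-side sketch in Python follows; the certificate paths and hostname are placeholders:

    import socket
    import ssl

    # Mutual authentication over TLS: the client verifies the server against a
    # trusted corporate CA, and presents its own certificate so the server can
    # verify the client. All paths and hostnames below are placeholders.
    context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH,
                                         cafile="corp-ca.pem")
    context.load_cert_chain(certfile="client-cert.pem", keyfile="client-key.pem")

    with socket.create_connection(("intranet.example.com", 443)) as raw_sock:
        with context.wrap_socket(raw_sock,
                                 server_hostname="intranet.example.com") as tls:
            # Both ends are now authenticated and the channel is encrypted,
            # with no VPN and no assumption about being "inside" the network.
            tls.sendall(b"GET / HTTP/1.1\r\nHost: intranet.example.com\r\n\r\n")
            print(tls.recv(4096))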

Or maybe I’m wrong. Maybe that’s what they actually want to do. This certainly sounds like the idea behind the Jericho Forum, the minds behind deperimeterization. This New York Times blog echoes the thoughts.

Maybe…but we’re just dreamers.  I dare say, Larry, that Bill Cheswick has forgotten more about security than you and I know.  It’s obvious you’ve not read much about information assurance or information survivability but are instead content to myopically center on what "is" rather than that which "should be."

Not everyone has this cavalier attitude towards deperimeterization. This article from the British Computer Society seems a lot more conservative in approach. It refers to protecting resources "as if [they were] directly exposed to the Internet." It speaks of using "network segmentation, strict access controls, secure protocols and systems, authentication and encryption at multiple levels."

Cavalier!?  What’s so cavalier about suggesting that systems ought to be stand-alone defensible in a hostile environment as much as they are behind one of those big, bad $50,000 firewalls!? I bet you money I can put a hardened host on the Internet today without a network firewall in front of it and it will be just as resistant to attack. 

But here’s the rub: nobody said that to get from point A to point B one would not pragmatically apply host-based hardening and layered security such as (wait for it) a host-based firewall or HIPS.  Gasp!

What’s the difference between filtering TCP handshakes or blocking based on the 4/5-tuple at a network level versus doing it at the host, when you’re only interested in scaling to the performance and commensurate security levels of a single host?  Besides tens of thousands of dollars?  How about nada?  (That’s Spanish for "Damn, this discussion is boring…")

And whilst my point above is in response to your assertions regarding "clients," the same can be said for "servers."  If I use encryption and mutual authentication, short of DoS/DDoS, what’s the difference?

That sounds like a shift in emphasis, moving resources more towards internal protection, but not ditching the perimeter. I might have some gripes with this—it sounds like the Full Employment Act for Security Consultants, for example—but it sounds plausible as a useful strategy.

I can’t see how you’d possibly have anything bad to say about this approach especially when you consider that the folks that make up the Jericho Forum are CISO’s of major corporations, not scrappy consultants looking for freelance pen-testing.

When considering the protection of specific resources, Whitely and Lambert go beyond encryption and data access controls. They talk extensively about "virtualization" as a security mechanism. But their use of the term virtualization sounds like they’re really just talking about terminal access. Clearly they’re just abusing a hot buzzword. It’s true that virtualization can be involved in such setups, but it’s hardly necessary for it and arguably adds little value. I wrote a book on Windows Terminal Server back in 2000 and dumb Windows clients with no local state were perfectly possible back then.

So take a crappy point and dip it in chocolate, eh?   Now you’re again tainting the vision of de-perimeterization and convoluting it with the continued ramblings of a "couple of analysts."  Nice.

Whitely and Lambert also talk in this context about how updating in a virtualized environment can be done "natively" and is therefore better. But they must really mean "locally," and this too adds no value, since a non-virtualized Terminal Server has the same advantage.

What is the security value in this? I’m not completely clear on it, since you’re only really protecting the terminal, which is a low-cost item. The user still has a profile with settings and data. You could use virtual machines to prevent the user from making permanent changes to their profile, but Windows provides for mandatory (static, unchangeable) profiles already, and has for ages. Someone explain the value of this to me, because I don’t get it.

Well, that makes two of us…

And besides, what’s it got to do with deperimeterization? The answer is that it’s a smokescreen to cover the fact that there are no real answers for protecting corporate resources on a client system exposed directly to the Internet.

Well, I’m glad we cleared that up.  Absolutely nothing.  As to the smokescreen comment, see above.  I triple-dog-dare you.  My Linux workstation and Mac are sitting on "the Internet" right now.  Since I’ve accomplished the impossible, perhaps I can bend light for you next?

The reasonable approach is to treat local and perimeter security as a "belt and suspenders" sort of thing, not a zero sum game. Those who tell you that perimeter protections are a failure because there have been breaches are probably just trying to sell you protection at some other layer.

…or they are pointing out to you that you’re treating the symptom and not the problem.  Again, the Jericho Forum is made up of CISO’s of major multinational corporations, not VP’s of Marketing from security vendors or analyst firms looking to manufacture soundbites.

Now I have to set a reminder for myself in Outlook for about two years from now to write a column on the emerging trend towards "reperimeterization."

Actually, Larry, set that appointment back a couple of months…it’s already been said.  De-perimeterization has been called many things already, such as re-perimeterization or radical externalization.

I don’t much care what you choose to call it; I call it a good idea that should be discussed further and driven forward by consensus, such that it can be used as leverage against the software and OS vendors to design and build more secure systems that don’t rely on band-aids.

…but hey, I’m just a dreamer.

/Hoff

Amrit: I Love You, Man…But You’re Still Not Getting My Bud Lite

September 26th, 2007 1 comment

I’ve created a monster!

Well, a humble, well-spoken and intelligent monster who — like me — isn’t afraid to admit that sometimes it’s better to let go than grip the bat too tight.  That doesn’t happen often, but when it does, it’s a wonderful thing.

I reckon that despite having opinions, perhaps sometimes it’s better to listen with two holes and talk with one, shrugging off the almost autonomic hardline knee-jerks of defensiveness that come from having spent years of single-minded dedication to cramming good ideas down people’s throats.

It appears Amrit’s been speaking to my wife, or at least they read the same books.

So it is with the utmost humility that I take full credit for nudging along Amrit’s renaissance and spiritual awakening, as evidenced in this, his magnum opus of personal growth, titled "Embracing Humility – Enlightened Information Security," wherein a dramatic battle of the Ego and Id is played out in daring fashion before the world:


Too often in IT ego drives one to be rigid and stubborn. This results in a myopic and distorted perspective of technology that can limit ones ability to gain an enlightened view of dynamic and highly volatile environments. This defect is especially true of information security professionals that tend towards ego driven dispositions that create obstacles to agility. Agility is one of the key foundational tenets to achieving an enlightened perspective on information security; humility enables one to become agile. Humility, which is far different from humiliation, is the wisdom to realize one’s own ignorance, insignificance, and limitations of intellect, without which one cannot see the truth.

19th century philosopher Herbert Spencer captured this sentiment in an oft-cited quote “There is a principle which is a bar against all information, which is proof against all arguments and which cannot fail to keep a man in everlasting ignorance – that principle is contempt prior to investigation.”

The security blogging community is one manifestation of the information security profession, based upon which one could argue that security professionals lack humility and generally propose contempt for an idea prior to investigation. I will relate my own experience to highlight this concept.

Humility and the Jericho Forum

I was one of the traditionalists that was vehemently opposed to the ideas, at least my understanding of the ideas, put forth by the Jericho forum. In essence all I heard was “de-perimeterization”, “Firewalls are dead and you do not need them”, and “Perfect security is achieved through the end-point” – I lacked the humility required to properly investigate their position and debated against their ideas blinded by ego and contempt. Reviewing the recent spate of blog postings related to the Jericho forum I take solace in knowing that I was not alone in my lack of humility. The reality is that there is a tremendous amount of wisdom in realizing that the traditional methods of network security need to be adjusted to account for a growing mobile workforce, coupled with a dramatic increase in contractors, service providers and non pay rolled actors, all of which demand access to organizational assets, be it individuals, information or infrastructure. In the case of the Jericho forum’s ideas I lacked humility and it limited my ability to truly understand their position, which limits my ability to broaden my perspective’s on information security.


Good stuff.

It takes a lot of chutzpah to privately consider changing one’s stance on matters; letting go of preconceived notions and embracing a sense of openness and innovation.  It’s quite another thing to do it publicly.   I think that’s very cool.  It’s always been a refreshing study in personal growth when I’ve done it. 

I know it’s still very hard for me to do in certain areas, but my kids — especially my 3 year old — remind me everyday just how fun it can be to be wrong and right within minutes of one another without any sense of shame.

I’m absolutely thrilled if any of my posts on Jericho and the ensuing debate have made Amrit or anyone else consider for a moment that perhaps there are other alternatives worth exploring in the way in which we think, act and take responsibility for what we do in our line of work.

I could stop blogging right now and…

Yeah, right.  Stiennon, batter up!

/Hoff

(P.S. Just to be clear, I said "batter" not "butter"…I’m not that open minded…)

Captain Stupendous — Making the Obvious…Obvious! Jericho Redux…

September 19th, 2007 8 comments

Sometimes you have to hurt the ones you love. 

I’m sorry, Rich.  This hurts me more than it hurts you…honest.

The Mogull decided that rather than contribute meaningful dialog on the meat of the topic at hand, he would rather add to the FUD regarding the messaging of the Jericho Forum, the very FUD I was actually trying to wade through.

…and he tried to be funny.  Sober.  Painful combination.

In a deliciously ironic underscore to his BlogSlog, Rich caps off his post with a brilliant gem of obviousness of his own whilst chiding everyone else to politely "stay on message" even when he leaves the reservation himself:

"I formally submit “buy secure stuff” as a really good one to keep us busy for a while."

<phhhhhht> Kettle, come in over, this is Pot. <phhhhhhttt> Kettle, do you read, over? <phhhhhhht>  It’s really dark in here <phhhhhhttt>

So if we hit the rewind button for a second, let’s revisit Captain Stupendous’ illuminating commentary.  Yessir.  Captain Stupendous it is, Rich, since the franchise on Captain Obvious is plainly over-subscribed.

I spent my time in my last post suggesting that the Jericho Forum’s message is NOT that one should toss away their firewall.  I spent my time suggesting that rather reacting to the oft-quoted and emotionally flammable marketing and messaging, folks should actually read their 10 Commandments as a framework. 

I wish Rich would have read them because his post indicates to me that the sensational hyperbole he despises so much is hypocritically emanating from his own VoxHole. <sigh>

Here’s a very high-level generalization that I made which was to take the focus off of "throwing away your firewall":

Your perimeter *is* full of holes, so what we need to do is fix the problems, not the symptoms.  That is the message.

And Senor Stupendous suggested:

Of course the perimeter is full of holes; I haven’t met a security professional who thinks otherwise. Of course our software generally sucks and we need secure platforms and protocols. But come on guys, making up new terms and freaking out over firewalls isn’t doing you any good. Anyone still think the network boundary is all you need? What? No hands? Just the “special” kid in back? Okay, good, we can move on now.

You’re missing the point — both theirs and mine.  I was restating the argument as a setup to the retort.  But who can resist teasing the mentally challenged for a quick guffaw, eh, Short Bus?

Here is the actual meat of the Jericho Commandments.  I’m thrilled that Rich has this all handled and doesn’t need any guidance.  However, given how I just spent my last two days, I know that these issues are not only relevant, but require an investment of time, energy, and strategic planning to make actionable, and they remind folks that they need to think as well as do.

I defy you to show me where this says "throw away your firewalls."

Repeat after me: THIS IS A FRAMEWORK; it provides guidance and a rational, strategic approach to Enterprise Architecture and how security should be baked in.  Please read this without the FUDtastic taint:

[Images: the Jericho Forum’s Ten Commandments, parts 1 and 2]

Rich sums up his opus with this piece of reasonable wisdom, which I wholeheartedly agree with:

You have some big companies on board and could use some serious pressure to kick those market forces into gear.

…and to warm the cockles of your heart, I submit they do and they are.  Spend a little time with Dr. John Meakin, Andrew Yeomans, Stephen Bonner, Nick Bleech, etc. and stop being so bloody American 😉  These guys practice what they preach and as I found out, have been for some time.

They’ve refined the messaging some time ago.  Unload the baggage and give it a chance.

Look at the real message above and then see how your security program measures up against these topics and how your portfolio and roadmap provides for these capabilities.

Go forth and do stupendous things. <wink>

/Hoff

The British Are Coming! In Defense (Again) of the Jericho Forum…

September 17th, 2007 10 comments

The English are coming…and you need to give them a break.  I have.

Back in 2006, after numerous frustrating discussions dating back almost three years without a convincing conclusion, I was quoted in an SC Magazine article titled "World Without Frontiers," which debated quite harshly the Jericho Forum’s evangelism of a security mindset and architecture dubbed "de-perimeterization."

Here’s part of what I said:

Some people dismiss Jericho as trying to re-invent the wheel. "While the group does an admirable job raising awareness, there is nothing particularly new either in what it suggests or even how it suggests we get there," says Chris Hoff, chief security strategist at Crossbeam Systems.

"There is a need for some additional technology and process re-tooling, some of which is here already – in fact, we now have an incredibly robust palette of resources to use. But why do we need such a long word for something we already know? You can dress something up as pretty as you like, but in my world that’s not called ‘deperimeterisation’, it’s called a common sense application of rational risk management aligned to the needs of the business."

Hoff insists the Forum’s vision is outmoded. "Its definition speaks to what amounts to a very technically focused set of IT security practices, rather than data survivability. What we should come to terms with is that confidentiality, integrity and availability will be compromised. It’s not a case of if, it’s a case of when.

"The focus should be less on IT security and more on information survivability; a pervasive enterprise-wide risk management strategy and not a narrowly-focused excuse for more complex end-point products," he says.

But is Jericho just offering insight into the obvious? "Of course," says Hoff. "Its suggestion that "deperimeterisation" is somehow a new answer to a set of really diverse, complex and long-standing IT security issues… simply ignores the present and blames the past," he says.

"We don’t need to radically deconstruct the solutions universe to arrive at a more secure future. We just need to learn how to appropriately measure risk and quantify how and why we deploy technology to manage it. I admire Jericho’s effort, and identify with the need. But the problem needs to be solved, not renamed."

I have stated previously that this was an unfortunate reaction to the marketing of the message and not the message itself, and I’ve since come to understand what the Jericho Forum’s mission and messaging actually represent.  It’s a shame that it took me that long, and that others continue to miss the point.

Today Mike Rothman commented on NetworkWorld’s coverage of the latest Jericho Forum event in New York last week.  The article’s headline read "U.S. network execs clinging to firewalls," and it seems we’re right back on the Hamster Wheel of Pain, perpetuating a cruel myth.

After all this time, it appears that the Jericho Forum is still suffering from a failure to communicate — there exists a language gap — probably due to that allergic issue we had once to an English King and his wacky ideas relating to the governance of our "little island."  Shame, that.

This is one problem that this transplanted Kiwi-American (same Queen, after all) is motivated to fix.

Unfortunately, the Jericho Forum’s message has become polluted and marginalized thanks to the oft-perpetuated and imprecise suggestion that the Forum recommends folks simply turn off their firewalls and IPS’s and plug their systems directly into the Internet, as-is.

That’s simply not the case, and in fact the Forum has recognized some of this messaging mess, and both softened and clarified the definition by way of the issuance of their "10 Commandments." 

You can call it what you like: de-perimeterization, re-perimeterization or radical externalization, but here’s what the Jericho Forum actually advocates, which you can read about here:

De-perimeterization explained

The huge explosion in business use of the Web protocols means that:

  • today the traditional "firewalled" approach to securing a network boundary is at best flawed, and at worst ineffective. Examples include:
    • business demands that tunnel through perimeters or bypass them altogether
    • IT products that cross the boundary, encapsulating their protocols within Web protocols
    • security exploits that use e-mail and Web to get through the perimeter.
  • to respond to future business needs, the break-down of the traditional distinctions between “your” network and “ours” is inevitable
  • increasingly, information will flow between business organizations over shared and third-party networks, so that ultimately the only reliable security strategy is to protect the information itself, rather than the network and the rest of the IT infrastructure

This trend is what we call “de-perimeterization”. It has been developing for several years now. We believe it must be central to all IT security strategies today.

The de-perimeterization solution

While traditional security solutions like network boundary technology will continue to have their roles, we must respond to their limitations. In a fully de-perimeterized network, every component will be independently secure, requiring systems and data protection on multiple levels, using a mixture of:

  • encryption
  • inherently-secure computer protocols
  • inherently-secure computer systems
  • data-level authentication

The design principles that guide the development of such technology solutions are what we call our “Commandments”, which capture the essential requirements for IT security in a de-perimeterized world.

I was discussing these exact points in a session at an Institute for Applied Network Security conference today (and as I have before, here), wherein I summarized this as the capability to:

Take a host with a secured OS, connect it into any network using whatever means you find appropriate, without regard for having to think about whether you’re on the "inside" or "outside."  Communicate securely, access and exchange data in policy-defined "zones of trust" using open, secure, authenticated and encrypted protocols.

Did you know that one of the largest eCommerce sites on the planet doesn’t even bother with firewalls in front of its webservers!?  Why?  Because with 10+ Gb/s of incoming HTTP and HTTP/S connections using port 80 and 443 specifically, what would a firewall add that a set of ACLs that only allows port 80/443 through to the webservers cannot?

Nothing.  Could a WAF add value?  Perhaps.  But until then, this is a clear example of a U.S. company that understands there’s no utility in adding "security" in the form of a firewall just because that’s the way it’s always been done.

From the NetworkWorld article, this is a clear example of the following:

The forum’s view of firewalls is that they no longer meet the needs of businesses that increasingly need to let in traffic to do business. Its deperimeterization thrust calls for using secure applications and firewall protections closer to user devices and servers.

It’s not about tossing away prior investment or abandoning one’s core beliefs; it’s about being honest as to the status of information security/protection/assurance, and adapting appropriately.

Your perimeter *is* full of holes, so what we need to do is fix the problems, not the symptoms.  That is the message.

So consider me the self-appointed U.S. Ambassador to our friends across the pond.  The Jericho Forum’s message is worth considering and deserves your attention.

/Hoff