Archive for October, 2007

Version 1.0 of the CIS Benchmark for VMware ESX Server Available

October 19th, 2007

Version 1.0 of the Center for Internet Security (CIS) benchmark for securing VMware ESX server is available.  This is specific to version 3.x of ESX and covers the basic best practices of preparing an ESX server for deployment.

We’ve still got a ton of stuff that didn’t make the deadline cut-off for the first version of the document in follow-on iterations, but it’s a good start.  Please sign up if you can contribute to making this document even better.

You can find it here.

/Hoff

Categories: Virtualization, VMware

Information Security: Deader Than a Door Nail. Information Survivability’s My Game.

October 17th, 2007

This isn’t going to be a fancy post with pictures.   It’s not going to be long.  It’s not particularly well thought out, but I need to get it out of my head and written down as tomorrow I plan on beginning a new career. 

I am retiring from the Information Security rat race and moving on to something fulfilling and achievable that will actually make a difference.

Why?

Mogull just posted Information Security’s official eulogy titled "An Optimistically Fatalistic View of The Futility of Security."

He doesn’t know just how right he is.

Sad, though strangely inspiring, it represents the high point of a lovely interment ceremony replete with stories of yore, reflections on past digressions, oddly paradoxical and quixotic paramedic analogies, the wafting fragility of the human spirit and our unstoppable yearning to make a difference.  It made me all weepy inside.  You’ll laugh, you’ll cry.  Before I continue, a public service announcement:

I’ve been instructed to ask that you please send donations in lieu of flowers to Mike Rothman so he can hire someone other than his four-year-old to produce caricatures of "Security Mike."  Thank you.

However amusing parts of it may have been, Rich has managed to catalyze the single most important thought I’ve had in a long time regarding this topic and I thank him dearly for it.

Along the lines of how Spaf suggested we are solving the wrong problems comes my epiphany: the blame is to be firmly levied on the wide shoulders of the ill-termed industrial complex and the practices we have defined to describe the terminus of some sort of unachievable end-state goal.  Information Security represents a battle we will never win.

Everyone’s admitted to that, yet we’re to just carry on "doing the best we can" as we "make a difference" and hope for the best?  What a load of pessimistic, nihilist, excuse-making donkey crap.  Again, we know that what we’re doing isn’t solving the problem, but rather than admitting the problems we’re solving aren’t the right ones, we’ll just keep on keeping on?

Describing our efforts, mission, mantra and end-state as "Information Security," or more specifically "Security," has bred this unfaithful housepet we now call an industry, one that we’re unable to potty train.  It’s going to continue to shit on the carpet no matter how many times we rub its nose in it.

This is why I am now boycotting the term "Information Security" or for that matter "Security" period.  I am going to find a way to change the title of my blog and my title at work.

Years ago I dredged up some research that came out of DARPA focused on Information Assurance and Information Survivability.  It was fantastic stuff and profoundly affected what and how I added value to the organizations I belonged to.  It’s not particularly new, but it represents a new way of thinking, even though it’s based on theory and practice from many years ago.

I’ve been preaching about the function without the form.  Thanks to Rich for reminding me of that.

I will henceforth only refer to what I do — and my achievable end-state — using the term Information Survivability.

Information Survivability is defined as “the capability of a system to fulfill its mission, in a timely manner, in the presence of attacks, failures, or accidents to ensure that the right people get the right information at the right time.

A survivability approach combines risk management and contingency planning with computer security to protect highly distributed information services and assets in order to sustain mission-critical functions. Survivability expands the view of security from a narrow, technical specialty understood only by security experts to a risk management perspective with participation by the entire organization and stakeholders.”

This is what I am referring to.  This is what Spaf is referring to.  This is what the Jericho Forum is referring to.

This is my new mantra. 

Information Security is dead.  Long live Information Survivability.  I’ll be posting all my I.S. references in the coming days.

Rich, those paramedic skills are going to come in handy.

/Hoff

Apathy and Alchemy: When Good Enough Security is Good Enough

October 17th, 2007

Despite the consistent heel-nipping assertions that all I want to do is have people throw away their firewalls (I don’t), I think Shrdlu nailed it with a comment posted on Lindstrom’s blog.  I’ll get to that in a second.  Here’s the setup.

Specifically, Pete maintains that Spaf’s comments (see here) are an indicator that security isn’t failing, rather we are — and by design.  We’re simply choosing not to fix the things we ought to fix:

This is a simple one, from Dr. Eugene Spafford’s blog:

We know how to prevent many of our security problems — least privilege, separation of privilege, minimization, type-safe languages, and the like. We have over 40 years of experience and research about good practice in building trustworthy software, but we aren’t using much of it.

So, we have resources that are unallocated – we have time, money, and bodies we could throw at the security problem. We have the know-how and the tools to reduce the risk. And yet, we aren’t doing it.

If security were "failing" there would be evidence of people either giving up entirely and reducing their IT investments and resources, or spending more money on success.

An interesting perspective and one I’m bound to agree with.

Here’s Shrdlu’s comment which I think really nails the reason I am going to continue to press the issue regardless; I think the general apathetic state of the security industry (as Pete suggests also) is the first obstacle to overcome:

Cherchez l’argent, mes amis. Mix in Spaf’s argument with Pete’s and add Marcus and Bruce, and you’ve got the answer: people don’t think security is failing enough to spend money doing something about it. The externalities aren’t intolerable. The public isn’t up in arms; if anything, security breaches have reached the same level of public semi-awareness as bombing in Iraq — it happens every day, everyone agrees how awful it is, and then they go back to their lattes.

We’re not going to fire or retrain a generation of cheap programming labor to Do the Right Thing and redesign systems. Not until it hurts enough, and let’s face it, it doesn’t. All the FUD and hand-wringing is within the security industry. We’re doing our jobs just well enough to keep things from melting down, so why should anyone pay more attention and money to something that’s mediocre but not a disaster?

There’s not a whole lot more that needs to be said to embellish or underscore that argument.

I’ll be over here waiting for the next "big thing" to hit, and instead of fixing it, we’ll see SOX, Part Deux.

See, Shrdlu’s not the only one who can toss in a little French to sound sophisticated ;)

/Hoff

 

Sacred Cows, Meatloaf, and Solving the Wrong Problems…

October 16th, 2007

Just as I finished up a couple of posts decrying the investments being made in lumping device after device on DMZ boundaries for the sake of telling party guests that one subscribes to the security equivalent of the "Jam of the Month Club" (AKA Defense-In-Depth), I found a post on the CERIAS blog where Prof. Eugene Spafford wrote a fantastic piece titled "Solving Some of the Wrong Problems."

In the last two posts (here and here), I used the example of the typical DMZ and its deployment as a giant network colander which, despite costing hundreds of thousands of dollars, doesn’t generally deliver us from the attacks it’s supposedly designed to defend against — or at least those that really matter.

This is mostly because these "solutions" treat the symptoms and not the problem, but we cling to the technology artifacts because it’s the easier row to hoe.

I’ve spent a lot of time over the last few months suggesting that people ought to think differently about who, what, why and how they are focusing their efforts.  This has come about due to some enlightenment I received as part of exercising my noodle using my blog.  I’m hooked and convinced it’s time to make a difference, not a buck.

My rants on the topic (such as those regarding the Jericho Forum) have induced the curious wrath of technology apologists who have no answers beyond those found in a box off the shelf.

I found such resonance in Spaf’s piece that I must share it with you. 

Yes, you.  You who have chided me privately and publicly for my recent proselytizing that our efforts are focused on solving the wrong sets of problems.  The same you who continues to claw desperately at your sacred firewalls whilst we have many of the tools to solve a majority of the problems we face, and choose to do otherwise.  This isn’t an "I told you so."  It’s a "You should pay attention to someone who is wiser than you and I."

Feel free to tell me I’m full of crap (and dismiss my ramblings as just that), but I don’t think many can claim to have earned the right to dismiss Spaf’s thoughts offhandedly, given his time served and expertise in matters of information assurance, survivability and security:

As I write this, I’m sitting in a review of some university research in cybersecurity. I’m hearing about some wonderful work (and no, I’m not going to identify it further). I also recently received a solicitation for an upcoming workshop to develop “game changing” cyber security research ideas. What strikes me about these efforts — representative of efforts by hundreds of people over decades, and the expenditure of perhaps hundreds of millions of dollars — is that the vast majority of these efforts have been applied to problems we already know how to solve.

We know how to prevent many of our security problems — least privilege, separation of privilege, minimization, type-safe languages, and the like. We have over 40 years of experience and research about good practice in building trustworthy software, but we aren’t using much of it.

Instead of building trustworthy systems (note — I’m not referring to making existing systems trustworthy, which I don’t think can succeed) we are spending our effort on intrusion detection to discover when our systems have been compromised.

We spend huge amounts on detecting botnets and worms, and deploying firewalls to stop them, rather than constructing network-based systems with architectures that don’t support such malware.

Instead of switching to languages with intrinsic features that promote safe programming and execution, we spend our efforts on tools to look for buffer overflows and type mismatches in existing code, and merrily continue to produce more questionable quality software.

And we develop almost mindless loyalty to artifacts (operating systems, browsers, languages, tools) without really understanding where they are best used — and not used. Then we pound on our selections as the “one, true solution” and justify them based on cost or training or “open vs. closed” arguments that really don’t speak to fitness for purpose. As a result, we develop fragile monocultures that have a particular set of vulnerabilities, and then we need to spend a huge amount to protect them. If you are thinking about how to secure Linux or Windows or Apache or C++ (et al), then you aren’t thinking in terms of fundamental solutions.

Please read his entire post.  It’s wonderful. Dr. Spafford, I apologize for re-posting so much of what you wrote, but it’s so fantastically spot-on that I couldn’t help myself.

Timing is everything.

/Hoff

{Ed: I changed the sentence regarding Spaf above after considering Wismer’s comments below.  I didn’t mean to insinuate that one should preclude challenging Spaf’s assertions, but rather that given his experience, one might choose to listen to him over me any day — and I’d agree!  Also, I will get out my Annie Oakley decoder ring and address that Cohen challenge he brought up after at least 2-3 hours of sleep… ;) }

On Castles: Moats, Machicolations, Burning Oil and Berms Vs. The Trebuchet (or DMZ’s teh Sux0r!)

October 16th, 2007

Check out the comments in the last post regarding my review of the recently released film titled "Me and My DMZ – ‘Til Death Do Us Part."

Carrying forward the mental exercise of debating the application of the classical DMZ deployment and its traceable heritage to the concentric levels of defense-in-depth from ye olde "castle/moat" security analogy, I’d like to admit into evidence one interesting example of disruptive technology that changed the course of medieval castle siege warfare, battlefield mechanics and history forever: the Trebuchet.

The folks that advocated concentric circles of architectural defense-in-depth as their strategy would love to tell you about the Trebuchet and its impact.  The problem is, they’re all dead.  Yet I digress.

The Trebuchet represented a quantum leap in the application of battlefield weaponry and strategy that all but ended the utility of defense-in-depth for castle dwellers.  Trebuchets were amazingly powerful weapons that could launch projectiles weighing up to several hundred pounds with a range of up to about 300 yards!

The Trebuchet allowed for the application of technology that put the advantages of time, superior firepower and targeted precision squarely in the hands of the attacker and left the victim to simply wait until the end came. 

To review the basics, a castle is a defensive structure built around a keep or center structure.  The castle is a fortress: a base from which to mount a defense against a siege, or a center of operations from which to conduct an attack.  The goal of these defenses is to repel, delay, deny, disrupt, and incapacitate the enemy.  However, the castle on its own will not provide a defense against a determined siege force.

One interesting point is that the assumption holds true that all the insiders are "friendlies…"

[Image: castle defenses]
Here we have an illustration of a well-fortified castle with the Keep in the center surrounded by multiple cascading levels of defense spiraling outward.  Presumably as an attacker breached one defensive boundary, they would encounter yet another with the added annoyance of defensive/offensive tactics such as archers, spiked pits, burning oil, etc.

Breaching one of these things took a long time, cost a lot of lives and meant a significant investment in time, materials, and effort.  Imagine what it was like for the defenders!

Enter the Trebuchet.  You wheel one of these bad boys within effective strike range, but out of range of the castle defenders’ archers, and launch masses of projectiles toward and over the walls, obliterating them and most anything nearby.  You have a BBQ while a rotated crew of bombardiers merrily flings hunks of pain toward the hapless and helpless defenders.  They can either stay and die or run out the gate and die.

This goes on and on.  Stuff inside starts to burn.  Walls crumble.  People start to starve.  The attackers then start flinging corpses over the walls — animals, humans, whatever.  Days and perhaps weeks pass.  Disease sets in.  The bombardment continues until there are no defenses left and most of the defenders have either died or plan to, and the enemy marches in and dispatches the rest.

What’s the defense against a Trebuchet?  You mean besides rebuilding your castle in the middle of a lake, out of range, making it exceedingly difficult to live as an inhabitant?  Not a lot.  In short, artillery meant the end of the castle as a defensive measure.  It simply stopped working.

Let’s be intellectually honest here within the context of this argument.  We’re facing our own version of the Trebuchet with the tactics, motivation, skill, tools and directed force of how attackers engage us today.  Most modern day castle technology apologists are content to simply sit in their keeps, playing Parcheesi and extolling the virtues of their fortifications, while the determined leviathan force smashes down the surrounding substructure.

There came a point in the illustration above wherein the art of warfare and the technology involved completely changed the playing field.  We’ve reached that point now in information warfare, yet people still want to build castles.

What I think people really want to say privately in their stoic defense of the DMZ and defense-in-depth is that they can’t think of anything else that’s better at the moment and they’re simply trying to wait out the bombardment.  Too bad the attackers aren’t governed by such motivating encouragement.

Look, I’m not trying to be abrasively critical of what people have done — I’ve done it, too.  I’m also not suggesting that we immediately forklift what’s in place now; that’s not feasible or practical.  However, I am being critical of people who continue to hang onto and defend outmoded concepts and suggest it’s an acceptable idea to fight a conventional war against a force using guerilla tactics and superior tools with nothing but time and resources on their hands.

There has to be a better way than just waiting to die.

If you don’t think differently about how you’re going to focus your efforts and with what, here’s what you have to look forward to:

[Image: castle ruins]

/Hoff

The DMZ Isn’t Dead…It’s Merely Catatonic

October 16th, 2007

Joel Espenschied over at Computerworld wrote a topical piece today titled "The DMZ’s not dead…whatever the vendors are telling you."  Joel basically suggests that, due to poorly written software, complex technology such as Web Services and SOA, and poor operational models, the DMZ provides the requisite layers of defense in depth to deliver the security we need.

I’m not so sure I’d suggest that DMZs provide "defense in depth."  I’d suggest they provide segmentation and isolation, but if you look at most DMZ deployments they represent the typical Octopus approach to security: a bunch of single segments isolated by one firewall (or a cluster of them).  It’s the crap surrounding these segments that is appropriately tagged with the DiD moniker.

A DMZ is an abstracted representation of a security architecture, while I argue that defense in depth is a control implementation strategy…and one I think needs to be dealt with as honestly by security/network teams as it is by Enterprise Architects.  My simple truth is that there are now hundreds if not thousands of "micro-perimeterized single host" DMZs in most enterprise networks today, and we lean on defense in depth as a crutch and a bad habit because we’re treating the symptom and not the problem — and it’s the only thing that most people know.

[Image: layered defense-in-depth model]
By the way, defense in depth doesn’t mean 15 network security boxes piled on top of one another.  Defense in depth really spoke to this model, which entailed a holistic view of the "stack" — but in a coordinated manner.  You must focus on data, applications, hosts and networks as equal and objective recipients of investment in a protection strategy, not just one.
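To make that concrete, here’s a toy sketch that treats defense in depth as a coverage question across the stack rather than a count of network boxes. The layer names and control labels are my own illustrative assumptions, not any formal framework:

```python
# Toy model of the "coordinated stack" view of defense in depth.
# Layer and control names here are illustrative assumptions only.
LAYERS = ("data", "application", "host", "network")

def coverage_gaps(controls: dict) -> list:
    """Return the layers that received no protective investment at all."""
    return [layer for layer in LAYERS if not controls.get(layer)]

# A typical "DMZ pile-up": heavy network spend, little elsewhere.
deployed = {
    "network": ["firewall", "ids", "ips", "waf", "proxy"],
    "host": ["av"],
    "application": [],
    "data": [],
}

print(coverage_gaps(deployed))  # the layers the investment skipped
```

Running this flags the data and application layers as uncovered, which is exactly the "defense in breadth" failure mode described above: fifteen boxes at one layer, nothing at the others.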

Too idealistic?  Waiting for me to run out of air holding my breath for secure applications, operating systems and protocols?  Good.  We’ll see who plays chicken first. 

You keep designing for obsolescence and the way things were 10 years ago while I look at what the business needs and where its priorities are and how best to balance risk with sharing information.  We’ll see who’s better prepared in the next three year refresh cycle to tackle the problems that arise as the business continues to embrace disruptive technology while you become the former by focusing on the latter.

There’s a real difference between managing threats and vulnerabilities versus managing risk.  Back to the article.

Two quotes stand out in the bunch, and I’ll focus on them:

The philosophy of Defense in Depth is based on the idea that stuff invariably fails or is cracked, and it ought to take more than one breach event before control is lost over data or processes. But with this "dead DMZ" talk, the industry seems to be inching away from that idea — and toward potential trouble.

Right.  I see how effective that’s been with all the breaches thus far.  Please demonstrate how defense in depth has protected us against XSS, CSRF, SQL injection and fuzzing so far.  How about basic wireless security issues?  How about data leakage?  Your precious design anachronism isn’t looking so good at this point.  You spend hundreds of thousands of dollars and are still completely vulnerable.

That’s because your defense in depth is really defense in breadth and it’s being applied to the wrong sets of problems.  Where’s the security value in that?

The talking heads may say the DMZ is dead, but those actually managing enterprise IT installations shouldn’t give it up so easily. Until no mistakes are made in application coding, placement, operations and other processes — and you know better than to hold your breath — layered network security controls still provide a significant barrier to loss of data or other breach. The advice regarding application configuration and optimization is useful and developers’ efforts to make that work are encouraging, but when it comes to the real-world network, organizations can’t just ignore the reality of undiscovered vulnerabilities and older systems still lurking in the corners.

Look, the reality is that "THE DMZ" is dead, but it doesn’t mean "the DMZ" is…it simply means you have to reassess and redefine both your description and expectation of what a DMZ and defense in depth really mean to your security posture given today’s attack surfaces.

Keep your firewalled DMZ Octopi for now, but realize that with the convergence of technologies such as virtualization, mobility, Mashups, SaaS, etc., the reality is that a process or data could show up running somewhere other than where you thought it was — VMotion is a classic example.

If security policies don’t/can’t travel with affinity to the resources they protect, your DMZ doesn’t mean squat if I just VMotioned a VM to a segment that doesn’t have a firewall, IDS, IPS, WAF and Proxy in front of it.
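As a rough illustration of what policy affinity might look like, here’s a hypothetical pre-migration check. The VM names, segment names and control sets are invented for the example, and no real VMotion or hypervisor API is involved; the point is simply that a move should be gated on whether the destination still satisfies the workload’s policy:

```python
# Hypothetical sketch: refuse a live migration when the destination
# segment lacks controls the VM's security policy requires.
# All names below are illustrative assumptions, not a real API.
REQUIRED = {"web-vm": {"firewall", "ids", "waf"}}

SEGMENT_CONTROLS = {
    "dmz-a": {"firewall", "ids", "ips", "waf", "proxy"},
    "backend-b": {"firewall"},  # no IDS or WAF on this segment
}

def can_migrate(vm: str, dest_segment: str) -> bool:
    """Policy travels with the VM: allow the move only if the
    destination segment covers every required control."""
    needed = REQUIRED.get(vm, set())
    return needed <= SEGMENT_CONTROLS.get(dest_segment, set())

print(can_migrate("web-vm", "dmz-a"))      # True: fully covered
print(can_migrate("web-vm", "backend-b"))  # False: affinity broken
```

The subset test (`<=`) is the whole idea in one operator: if any required control is missing at the destination, the DMZ guarantee silently evaporates, which is the scenario described above.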

THAT’S what these talking heads are talking about while you’re intent on sticking yours in the sand.  If you don’t start thinking about how these disruptive technologies will impact you in the next 12 months, you’ll be reading about yourself in the blogosphere breach headlines soon enough.

Think different.

/Hoff

BeanSec! Wednesday, October 17th – 6PM to ?

October 16th, 2007

Yo!  BeanSec! is once again upon us.  Wednesday, October 17th, 2007.

BeanSec! is an informal meetup of information security professionals, researchers and academics in the Greater Boston area that meets the third Wednesday of each month. 

I say again, BeanSec! is hosted the third Wednesday of every month.  Add it to your calendar.

Come get your grub on.  Lots of good people show up.  Really.

Unlike other meetings, you will not be expected to pay dues, “join up”, present a zero-day exploit, or defend your dissertation to attend. Map to the Enormous Room in Cambridge.

Enormous Room: 567 Mass Ave, Cambridge 02139.  Look for the Elephant on the left door next to the Central Kitchen entrance.  Come upstairs. We sit on the left hand side…

Don’t worry about being "late" because most people just show up when they can.  6:30 is a good time to aim for.  We’ll try and save you a seat.  There is a parking garage across the street and one block down, or you can try the streets (or take the T).

In case you’re wondering, we’re getting about 30-40 people on average per BeanSec!  Weld, 0Day and I have been at this for just over a year and without actually *doing* anything, it’s turned out swell.

We’ve had some really interesting people of note attend lately (I’m not going to tell you who…you’ll just have to come and find out.)  At around 9:00pm or so, the DJ shows up…as do the rather nice looking people from the Cambridge area, so if that’s your scene, you can geek out first and then get your thang on.

The food selection is basically high-end finger-food appetizers and the drinks are really good; an attentive staff and eclectic clientèle make the joint fun for people watching.  I’ll generally annoy you into participating somehow, even if it’s just fetching napkins. ;)

See you there.

/Hoff

Categories: BeanSec!

Security is NOT the Primary Limiting Factor Inhibiting SOA’s Growth

October 12th, 2007

Peter Schoof over at eBizQ’s Twenty-Four Seven Security makes a couple of very interesting assertions regarding the lack of growth of Service Oriented Architecture (SOA).

I haven’t seen much discussion in the blogosphere about the security challenges that arise from loosely coupled service orientated systems, but that will soon change. As more and more companies move towards open applications ala SOA, data is also opened up to a whole new series of exploits and vulnerabilities.

I will agree that SOA presents some very interesting security challenges that, as with many emerging technologies, we attempt to solve by bolting security on instead of baking it in.  I’d also agree that SOA will manifest new attack surfaces and potential vulnerabilities; it already has.

Interestingly, the market for SOA security solutions came out of the gate strong, looked hot in the midst of consolidation and M&A madness, but then stumbled as the adoption of SOA (or specifically SOA security) did not support this nascent market kindly.  It has, in fact, become a feature, not a market. 

As to there not being much discussion in the blogosphere surrounding SOA, perhaps Peter missed Gunnar Peterson, Lori MacVittie, Arnon Rotem-Gal-Oz, or even Me.  Obviously Joe McKendrick has been blogging about SOA and security for some time also since he’s the person moderating the webinar that Peter is referring to in his full post.

At this point, security is the primary limiting factor inhibiting SOA’s growth. In order to counteract that, "Enterprises need to apply non-invasive, externalized security policy enforcement mechanisms consistently throughout their SOA ecosystems, while also centrally managing security policy."

<Cough!> Um, no.  Firstly, please shoot the marketing drone that wrote that.

Secondly, and most importantly, the primary limiting factor inhibiting SOA’s growth is the gross sum of: the definition of SOA, the state (mess) of Enterprise Architecture, operationalizing SOA and message buses, the business case, business value, complexity, and the cost center.  Security’s in there somewhere, but it’s far from being THE primary limiting factor, Peter.

I’m all for trying to raise the flag regarding SOA and the need for security, but please don’t play pin the tail on the donkey with security as the Ass…you’re only going to look like one.

/Hoff

Categories: Uncategorized

Everybody Wing Chun Tonight & “ISPs Providing Defense By Engaging In Offensive Computing” For $100, Alex.

October 11th, 2007

You say "defense," I say "offense."  I know the argument’s coming, but it’s just a matter of perspective.  What am I talking about?  ISPs ultimately going on the "offense" to provide a defense that protects their transport networks and customers from the ravages of bots, worms and viruses.

Let’s look at the latest spin on how services which are represented as protecting customers are really meant to transfer accountability and can potentially punish subscribers by addressing the symptoms instead of fixing the problem. 

Saving money operationally across a huge network makes for a better P&L.  It goes back to my posting on some of the economics of Clean Pipes here.

I’m guessing I won’t be getting a Qwest service discount any time soon after this…

Qwest’s announcement regarding their "Customer Internet Protection Program," in which they will "help a customer remediate an infected machine connected to its network," can be perceived in one of two ways.  I’m a cynic, but to be fair let me first present Qwest’s view:

The Qwest(R) Customer Internet Protection Program (CIPP) notifies Qwest Broadband customers about viruses and malware that may be on their computers, informs them of safe Internet security practices and helps them clean viruses and malware from their computers. The CIPP is part of Qwest’s ongoing commitment to make the Internet safer for customers and is available to residential and small-business Qwest Broadband ADSL* customers at no additional charge.

That’s a nice concept and is meant to give us the warm and fuzzies that Qwest "cares."  I would agree that on the surface, this sounds terrific and Qwest is doing the "right thing."  Now we just need to explore what the "right reason" might be for this generous outreach.

Given the example above, the client machines are only actually "protected" and "more secure" once they have been discovered to be infected.  Now, this means that either they became infected whilst connected to Qwest’s "secure" network (thus bypassing all that heady protection levied by their network defenses) or during some out of band event.  More on that in a minute.

Here’s some additional color from Qwest:

The proliferation of cyber crime continues to require individuals, businesses and even government agencies to take action against ever-changing methods of attack. Because viruses and malware can cause problems not only for individual Qwest Broadband customers, but also for the online community, Qwest proactively monitors its network to detect viruses or malware. When one of these is discovered, the Qwest Customer Internet Protection Program notifies the specific customer of the infection; gives the customer information on how to remove the infection; educates the customer on good Internet security practices; and provides the customer with additional resources, including downloadable or online anti-virus software.

The Qwest CIPP only acts on malicious network traffic on the public Internet; the program does not scan or otherwise monitor content on customers’ computers.

Again, that sounds nice, but let’s back up a second because there’s something missing here.  What happens when they can’t remediate an infection and a zombie continues to spew crap across the network?  What happens if I’m running a BSD, Linux or Mac and not Windows?  What then?  Geek Squad in black helicopters?

Larry Seltzer gives us an idea in his write-up:

What Qwest is doing is something like NAC for ISP clients, however there are a lot of differences, so I don’t want to take that analogy too far. The system actively monitors clients for behaviors characteristic of malware; spamming, for example. When it determines that the system meets its profile, it takes action.

The monitoring is entirely at the network level. No software is installed on any PC, nor are there any active probes of them. SMTP and HTTP are blocked; other services like POP3 and VOIP are unaffected. Attempts to send e-mail, legitimately or not, will fail. This is something like the "walled garden" idea of NAC implementations where the user is isolated from the rest of the network and expected to spend their time cleaning up the system.

The next time the user attempts to connect to the Web they are presented with a special page that warns of a possible "virus" on the computer. (Their use of the word virus on this page is technically off, but they’re trying to be colloquial and accessible, not strict-geek.) The page says that malicious traffic has been monitored coming from this computer or another on the same account; they can’t know which computer behind your router is the dirty one.

The page gives you three options: remove the virus now, remove it later, or assert that you have already removed it. In the first case, they enter a removal process, the details of which I don’t have, but it could be something like Trend Micro’s HouseCall.

In the second case you are allowed to connect even though your system is infected, but you will be given the same warning again soon, and after a few times you won’t have the "later" option anymore. In the third case, I presume they let you back on the Internet and monitor you once again.

In the second case, where they actually block out users who refuse to clean up their systems, we’ve got big news. Will they really shut off customers? Anecdotal evidence will come out of course, but we won’t know how many times they really had to do this unless Qwest volunteers the numbers.

Wowie!  That last paragraph presents a doozy of a case.  You mean you’re going to prevent me from accessing the network I’m paying to use when I’m not knowingly engaging in malicious activity?  (I’m stupid and got infected, remember?)  I don’t really care about the mechanism for doing so; this is offensive in multiple meanings of the word.
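The escalating walled-garden flow Larry describes can be sketched as a tiny state machine.  To be clear, everything here — the option names, the three-strike threshold, even the idea that it’s a fixed threshold — is my assumption for illustration, not Qwest’s actual implementation:

```python
# Hypothetical sketch of the escalating "walled garden" flow described
# above.  Option names and the deferral limit are assumptions, not
# anything Qwest has published.

MAX_DEFERRALS = 3  # assumed: after this many "later" clicks, that option vanishes

class QuarantinedAccount:
    """Tracks one flagged subscriber inside the walled garden."""

    def __init__(self):
        self.deferrals = 0
        self.quarantined = True  # SMTP/HTTP blocked; warning page served

    def options(self):
        """Choices offered on the warning page right now."""
        opts = ["remove_now", "already_removed"]
        if self.deferrals < MAX_DEFERRALS:
            opts.insert(1, "later")
        return opts

    def choose(self, option):
        if option not in self.options():
            raise ValueError(f"option no longer available: {option}")
        if option == "later":
            self.deferrals += 1       # still infected; warned again soon
        else:
            self.quarantined = False  # back online, monitored once again
        return self.quarantined
```

The interesting policy question lives in that `MAX_DEFERRALS` constant: however it’s actually tuned, at some point the "later" button disappears and a paying customer is off the network.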

Oh, but wait.  The security remediation "service" Qwest is generously donating to its subscribers is free (as in beer), and there are no guarantees, right?  Actually, there are.  They guarantee, based upon their terms of service, to remove you from service whenever and however they see fit.

If you look at the vagaries of Qwest’s Broadband Subscriber Agreement, you might have a hard time distinguishing the rainbows and unicorns from the realities of what they "could" do should your machine, say, start transmitting SPAM on their network because you’re infected.

It doesn’t matter why you’re doing it, because if you are, you’ve already agreed to them charging you $5 per spam message in the TOS!  It’s in there.  Don’t believe me?  Read it yourself in the AUP section.

What this really means to me is that because ISPs can’t stop the infection across their networks, can’t stop the true source of the infection in the first place, and are bearing the brunt of the transport and security costs from the operational impacts on their networks, they’re going to penalize the users.  Why?  Because they can.

"But Hoff," you say, "you’re overreacting to a gracious and no-cost way to make Internet denizens more secure!  You’ve got this all wrong!"

I’m sure that for every one user they can’t remediate, there’ll be five more who think this is terrific.  Until they get nuked off the network a-la option #2 in Larry’s write-up above, that is.  Qwest maintains all is well in Mayberry:

Qwest Broadband customers have responded positively to the CIPP. In
fact, since the program began, more than three-quarters of infected
customers who were surveyed said they appreciated the CIPP and Qwest’s
efforts to help them get rid of viruses and malware on their computers.

I wonder what happened to the other 25%.  This is why Enterprise NAC deployments often have the potential to suck donkey balls (that is a technical term relating to the spherical multidimensional paradoxes faced by the burros who bear the brunt of operationalizing security technology).  All’s well and good until someone important, like the CEO, can’t get on the network.  Flexible quarantine like that above, you say?  Sort of defeats the purpose, doesn’t it?

Here is where the fluff falls away and we have to come to grips with how ISPs are "combating" the waves of attacks which are overwhelming their "defenses."  This is when we have to start talking about what it means to truly defend networks.

I reckon it means that we’re going to see the very subtle uptake of "offensive" measures to provide "defensive" capabilities in many different forms.  I don’t know many wars that were "won" on defense alone.  I’m not a military historian, so can someone help me out here?

I’m sure Bejtlich can give you some cool martial arts fu analogy, so I’ll beat him to the punch (ha!) and offer up what Wing Chun teaches as a tenet of the art:

Wing Chun Kung Fu assumes that an opponent will be
bigger and stronger than you. Therefore, WC emphasizes fast and strong
structure over physical strength and speed, and simultaneous attack and defense.

Wing Chun focuses on combining a defensive movement with an offensive
movement, or using offensive techniques that provide defense.  In
this way, WC is structurally faster than those styles that teach one to
defend first, then attack.

So it’s clear to me that we need offense paired with defense, described transparently and with expectations set as to what pushing the "launch" button might mean.  Again, what was the customer satisfaction of the remaining 25% who had this feature applied to them when Qwest prevented them from accessing the Internet?

People don’t like talking about this within the context of networks because the notions of "ethics" and "collateral damage" bubble up.  Look, it’s happening anyway.  We’re pretty much screwed at the moment.  And you, dear broadband Internet user, are paying for the privilege of being bent over.

You go ahead and define DoS any way you see fit, but when an ISP turns on the customer because it can’t combat the true attacker, I smell a rat, because for what appear to be economic reasons, they can’t really be honest about what they’re doing and what they’d really like to do in order to defend "their" network.

Get ready for more offense(s).

That’s my $0.02 ($5 if I sent this from Qwest’s network) and I’m stickin’ to it.

/Hoff

Categories: Offensive Computing Tags:

Loose Lips Sink Ships But They Also Float Boats…

October 10th, 2007 2 comments

I’m going to play devil’s advocate again as I ponder a point.  Roll with me here.  I’m slightly conflicted.

Jeff Hayes blogged about an interesting encounter he had in a sports bar with the head of physical security for an international accounting firm.  It turns out that as part of a casual conversation, this person disclosed some very interesting facts about his company’s security:

It turns out this guy handles physical security for a major
international accounting firm. He travels around North America doing
premises and access control assessments and deployments. He described
to me, without me asking specific questions, the technology they use,
the problems they deal with including the push-back they get from each
office complaining about burdensome security, their budgets, his
working environment, how he moved up the company ladder and his
qualifications or lack thereof, and a number of other tidbits that
would prove valuable to anyone doing surveillance.

It would appear that this guy had one too many, and the level of detail disclosed seems excessive.  Jeff’s points about confidence and accelerated reconnaissance for targeted profiling seem quite relevant in this scenario.  This person was being reckless and was potentially endangering his company.

However, let’s look at this a little differently to illustrate a counterpoint.

This encounter sounds like what many of us read and talk about under the guise of non-attribution at many of the security forums and "professional" security gatherings we attend and participate in with our "peers."  You know the ones, where we all sit around hoping that each badge actually represents the fact that the organizers have appropriately vetted and authenticated that the person wearing it is who they say they are…

Moreover, it sounds a lot like the conversations at the bar after said forum roundtables.  We share our collective experiences in order to gain insight and intelligence so we can improve our security posture, accelerate our intelligence on short-listing vendors and not make mistakes by learning from others.

How about those Visio diagrams you show on the whiteboard to VARs when they send their SE’s in for work and pitches?

It gets even more interesting when you have CISOs/CSOs (like me) talk to the press and do case studies describing technologies and solutions deployed.  Some CISOs don’t mind doing so after making a tactical, risk-based decision that what they reveal does not expose the company adversely.  Others simply don’t talk at all about what they do.

I understand there exists the potential that by disclosing that you use vendor ABC or technology XYZ, someone could exploit that knowledge for malignant purposes.  I suppose this is where the fuzzy area (I’m sorry Mr. Hutton!) of thin-slicing and quickly assessing risk comes into play.  What is the likelihood that this information, when combined with a vulnerability (in policy, architecture or deployment) in the presence of a threat, might become a risk to my company?
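That thin-slice question can be written down as a toy multiplicative model: risk as the product of the likelihood a vulnerability exists, the likelihood a threat acts on it, and the impact if it does.  The model and every number below are my inventions for illustration, not any formal risk methodology:

```python
# A toy "thin-slice" disclosure-risk model.  The multiplicative form and
# all values are invented for illustration only.

def disclosure_risk(vuln_likelihood, threat_likelihood, impact):
    """Expected cost of a disclosure:
    P(vulnerability exists) * P(threat acts on it) * impact.
    Likelihoods are in [0, 1]; impact is in arbitrary cost units."""
    for p in (vuln_likelihood, threat_likelihood):
        if not 0.0 <= p <= 1.0:
            raise ValueError("likelihoods must be in [0, 1]")
    return vuln_likelihood * threat_likelihood * impact

# Naming your firewall vendor over beers: the info is cheap to obtain
# anyway, so the incremental likelihoods stay low even if impact is large.
print(disclosure_risk(0.5, 0.25, 1000))  # → 125.0
```

The point of the exercise isn’t the arithmetic; it’s that the disclosure only moves the needle if it meaningfully raises one of those likelihoods over what an attacker could establish anyway.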

I use Check Point NGX R65.  I run it on a Crossbeam X-Series.  It filters a bunch of packets.  I use Cisco routers.  Is that information you couldn’t have found out with a network scan, fingerprinting and enumeration?  Have I made your job of attacking me orders of magnitude easier?
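The "you could have found it anyway" point is easy to demonstrate: many network services volunteer vendor and version information in their opening banner, so a product choice rarely stays secret against basic enumeration.  A minimal sketch (the commented-out hostname is a placeholder; only point this at machines you own):

```python
import socket

def grab_banner(host, port, timeout=3.0):
    """Connect to a TCP service and return whatever it announces first.
    Many devices identify their vendor/version right here, which is why
    'secret' product choices rarely survive a basic scan."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        try:
            return s.recv(1024).decode(errors="replace").strip()
        except socket.timeout:
            return ""  # service waits for the client to speak first

# Example (placeholder host -- substitute a box you own):
# print(grab_banner("gateway.example.com", 22))  # e.g. "SSH-2.0-..."
```

Dedicated tools go much further (probing, TCP/IP stack fingerprinting), but even this one-function version makes the point that the information has already left the building.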

Ah, the slippery slope is claiming me as a victim…

Have you seen the Military Channel?  I watched several fantastic Navy/Marine-sponsored documentaries on carriers, next-generation APCs, and new weapons systems…all of which are deployed.  Is Al Qaeda now in a more advantageous position because they know how the desalinization plant on a fast frigate functions?

Everyone in a company is both a sales and marketing rep as well as a
potential security breach waiting to happen. Most businesses like
people to present their company in a good light. We want people to know
that we work for a good employer. What we don’t want people to do is to
tell others how crappy our employer is. Likewise, we probably don’t
want our security personnel describing the details of our security
systems, policies and procedures.

So Jeff’s right, but I guess that depends upon the level of "details" he’s referring to?  Is Jeff’s point still valid when we’re talking about a breakfast conversation at an Infragard meeting?  How about the forums over at SecurityCatalyst.com?  There’s that level of trust and judgment factor again.  How about an ISAC gathering?  Aren’t we all supposed to share knowledge so we can help one another? 

Where do we draw the line as to who gets to say what and to whom?  Those policies either have to get really fuzzy or very, very black and white…which goes to Jeff’s point:

Loose lips have been known to sink ships; they can also hurt organizations.

Yes they have.  They’ve also been known, when appropriately loosened with a modicum of restraint, to float the boat of someone whose time, energy and budget you’ve been able to save by sharing relevant experience.  Let’s be careful not to throw the baby out with the bilge water.

So, how do you establish "trust" and assess risk before you talk about your experience with technology you’ve deployed or are thinking about deploying?  What about policies and procedures?  How about lessons learned?

Obviously anybody who answers is not a true "security guy" ;)

/Hoff

Categories: General Rants & Raves Tags: