Archive for the ‘Offensive Computing’ Category

The Curious Case Of Continuous and Consistently Contiguous Crypto…

August 8th, 2013 9 comments

Here’s an interesting resurgence of a security architecture and an operational deployment model that is making a comeback:

Requiring VPN tunneled and MITM’d access to any resource, internal or external, from any source internal or external.

While mobile devices (laptops, phones and tablets) are often deployed with client or client-less VPN endpoint solutions that enable them to move outside the corporate boundary and still access internal resources, there’s a marked uptake in requiring that all traffic from all sources traverse VPNs (SSL/TLS, IPsec or both) and that ALL sessions terminate on security infrastructure, regardless of ownership or location of either the endpoint or the resource being accessed.

Put more simply: require VPN for (id)entity authentication, access control, and confidentiality and then MITM all the things to transparently or forcibly fork to security infrastructure.


The reasons are pretty easy to understand.  Here are just a few of them:

  1. The user experience shouldn’t change regardless of the access modality or location of the endpoint consumer; the notions of who, what, where, when, how, and why matter, but the user shouldn’t have to care
  2. Whether inside or outside, the notion of split tunneling on a per-service/per-application basis means that we need visibility to understand and correlate traffic patterns and usage
  3. Because the majority of traffic is encrypted (usually via SSL), security infrastructure needs the capability to inspect traffic (selectively) using a coverage model that is practical and can give a first-step view of activity
  4. Information exfiltration (legitimate and otherwise) is a problem.

…so how are folks approaching this?

Easy.  They simply require that all sessions terminate on a set of [read: clustered & scalable] VPN gateways, selectively decrypt based on policy, forward (in serial or parallel) to any number of security apparatus, and in some/many cases, re-encrypt sessions and send them on their way.
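As a concrete (and entirely hypothetical) illustration of that flow, here’s a minimal Python sketch of the per-session decision such a gateway might make: terminate, selectively decrypt based on destination category, fan out to inspection, and pass or block. All names, categories and the policy itself are invented for illustration and do not reflect any vendor’s API.

```python
from dataclasses import dataclass

@dataclass
class Session:
    src_identity: str   # authenticated (id)entity behind the tunnel
    dst_host: str       # destination hostname (e.g., from SNI)
    dst_category: str   # e.g., "finance", "healthcare", "uncategorized"

# Categories exempted from decryption (privacy/regulatory carve-outs)
BYPASS_CATEGORIES = {"finance", "healthcare"}

def disposition(session: Session) -> str:
    """Decide 'bypass' (re-encrypt untouched) or 'decrypt' (MITM + inspect)."""
    if session.dst_category in BYPASS_CATEGORIES:
        return "bypass"
    return "decrypt"

def handle(session: Session, inspectors) -> str:
    """Terminate the session, apply policy, and fan out to inspectors in
    serial; any inspector veto blocks the session before re-encryption."""
    if disposition(session) == "bypass":
        return "bypass"
    for inspect in inspectors:
        if not inspect(session):
            return "block"
    return "decrypt"
```

The interesting property is that the same function applies whether the session is outside-in, inside-out, or inside-inside; the policy doesn’t care where either end sits.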

We’ve been doing this “forever” with the “outside-in” model (remote access to internal resources), but the notion that folks are starting to do this ubiquitously on internal networks is the nuance.  AVC (application visibility and control) is the inside-out component (usually using transparent forward proxies with trusted PAC files on endpoints), with remote access and/or reverse proxies like WAFs and/or ADCs as the outside-in use case.

These two ops models were generally viewed and managed as separate problems.  Now thanks to Cloud, Mobility, virtualization and BYOE (bring your own everything) as well as the more skilled and determined set of adversaries, we’re seeing a convergence of the two.  To make the “inside-out” and “outside-in” more interesting, what we’re really talking about here is extending the use case to include “inside-inside” if you catch my drift.

Merging the use case approach at a fundamental architecture level can be useful; this methodology works regardless of source or destination.  It does require all sorts of incidental changes to things like IdM, AAA, certificate management, etc. but it’s one way that folks are trying to centralize the distributed — if you get what I mean.

I may draw a picture to illustrate what I mean, but do let me know either if you’re doing this (many of the largest customers I know are) or if it at least makes sense.


P.S. Remember back in the 80’s/90’s when 3Com bundled NIC cards with integrated IPSec VPN capability?  Yeah, that.

Enhanced by Zemanta

Incomplete Thought: The Psychology Of Red Teaming Failure – Do Not Pass Go…

August 6th, 2013 14 comments
team fortress red team (Photo credit: gtrwndr87)

I could probably just ask this of some of my friends — many of whom are the best in the business when it comes to Red Teaming/Pen Testing, but I thought it would be an interesting little dialog here, in the open:

When a Red Team is engaged by an entity to perform a legally-authorized pentest (physical or electronic) with an explicit “get out of jail free card,” does having that parachute change the tactics, strategy and risk appetite of the team compared to if they didn’t have it?

Specifically, does the team dial-up or dial-down the aggressiveness of the approach and execution KNOWING that they won’t be prosecuted, go to jail, etc.?

Blackhats and criminals operating outside this envelope don’t have the luxury of counting on a gilded escape should failure occur and thus the risk/reward mapping *might* be quite different.

To that point, I wonder what the gap is between an authorized Red Team action and the actions of those who have everything to lose?  What say ye?



Six Degrees Of Desperation: When Defense Becomes Offense…

July 15th, 2012 No comments
Defensive and offensive lines in American football (Photo credit: Wikipedia)

One cannot swing a dead cat without bumping into at least one expose in the mainstream media regarding how various nation states are engaged in what is described as “Cyberwar.”

The obligatory shots of darkened rooms filled with pimply-faced spooky characters basking in the green glow of command line sessions furiously typing are dosed with trademark interstitial fade-ins featuring the masks of Anonymous set amongst a backdrop of shots of smoky Syrian streets during the uprising,  power grids and nuclear power plants in lockdown replete with alarms and flashing lights accompanied by plunging stock-ticker animations laid over the trademark icons of financial trading floors.

Terms like Stuxnet, Zeus, and Flame have emerged from the obscure .DAT files of AV research labs and now occupy a prominent spot in the lexicon of popular culture…right alongside the word “Hacker,” which now almost certainly brings with it only the negative connotation it has been (re)designed to impart.

In all of this “Cyberwar” we hear that the U.S. defense complex is woefully unprepared to deal with the sophistication, volume and severity of the attacks we are under on a daily basis.  Further, statistics from the Private Sector suggest that adversaries are becoming more aggressive, motivated, innovative, advanced, and successful in their ability to attack what are described as basically undefended — nay, undefendable — assets.

In all of this talk of “Cyberwar,” we were led to believe that the U.S. Government — despite hostile acts of “cyberaggression” from “enemies” foreign and domestic — never engaged in pre-emptive acts of Cyberwar.  We were led to believe that despite escalating cases of documented incursions across our critical infrastructure (Aurora, Titan Rain, etc.), our response was reactionary, limited in scope and reach, and almost purely detective/forensic in nature.

It’s pretty clear that was a farce.

However, what’s interesting — besides the amazing geopolitical, cultural, socio-economic, sovereign, financial and diplomatic issues that war of any sort brings, including “cyberwar” — is that even in the Private Sector, we’re still led to believe that we’re unable, unwilling or forbidden to do anything but passively respond to attack.

There are some very good reasons for that argument, and some which need further debate.

Advanced adversaries are often innovative and unconstrained in their attack methodologies yet defenders remain firmly rooted in the classical OODA-fueled loops of the past where the A, “act,” generally includes some convoluted mixture of detection, incident response and cleanup…which is often followed up with a second dose when the next attack occurs.

As such, “Defenders” need better definitions of what “defense” means and how a silent discard from a firewall, a TCP RST from an IPS or a blip from Bro is simply not enough.  What I’m talking about here is what defensive linemen look to do when squared up across from their offensive linemen opponents — not to just hold the line to prevent further down-field penetration, but to sack the quarterback or better yet, cause a fumble or error and intercept a pass to culminate in running one in for points to their advantage.

That’s a big difference between holding till fourth down and hoping the offense can manage to not suffer the same fate from the opposition.

That implies there’s a difference between “winning” and “not losing,” with arbitrary values of the latter.

Put simply, it means we should employ methods that make it more and more difficult, costly, timely and non-automated for the attacker to carry out his/her mission…[more] active defense.

I’ve written about this before in 2009 in “Incomplete Thought: Offensive Computing – The Empire Strikes Back,” wherein I asked people’s opinion on both their response to and definition of “offensive security.”  This was a poor term…so I was delighted when I found my buddy Rich Mogull had taken the time to clarify the vocabulary around this issue in his blog post titled “Thoughts on Active Defense, Intrusion Deception, and Counterstrikes.”

Rich wrote:

…Here are some possible definitions we can work with:

  • Active defense: Altering your environment and system responses dynamically based on the activity of potential attackers, to both frustrate attacks and more definitively identify actual attacks. Try to tie up the attacker and gain more information on them without engaging in offensive attacks yourself. A rudimentary example is throwing up an extra verification page when someone tries to leave potential blog spam, all the way up to tools like Mykonos that deliberately screw with attackers to waste their time and reduce potential false positives.
  • Intrusion deception: Pollute your environment with false information designed to frustrate attackers. You can also instrument these systems/datum to identify attacks. DataSoft Nova is an example of this. Active defense engages with attackers, while intrusion deception can also be more passive.
  • Honeypots & tripwires: Purely passive (and static) tools with false information designed to entice and identify an attacker.
  • Counterstrike: Attack the attacker by engaging in offensive activity that extends beyond your perimeter.

These aren’t exclusive – Mykonos also uses intrusion deception, while Nova can also use active defense. The core idea is to leave things for attackers to touch, and instrument them so you can identify the intruders. Except for counterattacks, which move outside your perimeter and are legally risky.
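To make the “intrusion deception” and “honeypots & tripwires” categories above concrete, here’s a toy Python sketch of a honeytoken tripwire: fabricated records are planted alongside real data, and any lookup that touches one raises a high-confidence alarm. The token names and the logging are invented for illustration.

```python
import logging

logging.basicConfig(level=logging.WARNING)

# Honeytokens: fabricated account names no legitimate user or process
# should ever request; planted alongside real data purely as tripwires.
HONEYTOKENS = {"svc_backup_2291", "jdoe_admin_bak"}

def audited_lookup(username: str, user_db: dict):
    """Look up a user, but treat any honeytoken access as an attack signal."""
    if username in HONEYTOKENS:
        # Nobody legitimate asks for these, so false positives are rare.
        logging.warning("tripwire: honeytoken %r accessed", username)
        return None
    return user_db.get(username)
```

Because the tokens are purely passive until touched, this sits at the “honeypots & tripwires” end of the spectrum; wiring the alarm into dynamic responses would push it toward “active defense.”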

I think we’re seeing the re-emergence of technology that wasn’t ready for primetime now becoming more prominent as folks refresh their toolchests looking for answers beyond what “passive response” offers.  It’s important to understand that tools like these — in isolation — won’t solve many complex attacks, nor are they a silver bullet, but understanding that we’re not limited to cleanup is important.

The language of “active defense,” like Rich’s above, is being spoken more and more.

Traditional networking and security companies such as Juniper* are acquiring upstarts like Mykonos Software in this space.  Mykonos’ mission is to “…change the economics of hacking…by making the attack surface variable and inserting deceptive detection points into the web application…mak[ing] hacking a website more time consuming, tedious and costly to an attacker. Because the web application is no longer passive, it also makes attacks more difficult.”

VC’s like Kleiner Perkins are funding companies whose operating premise is a more active “response” such as the in-stealth company “Shape Security” that expects to “…change the web security paradigm by shifting costs from defenders to hackers.”

Or, as Rich defined above, the notion of “counterstrike” outside one’s “perimeter” is beginning to garner open discussion now that we’ve seen what’s possible in the wild.

In fact, check out the abstract at Defcon 20 from Shawn Henry of newly-unstealthed company “Crowdstrike,” titled “Changing the Security Paradigm: Taking Back Your Network and Bringing Pain to the Adversary”:

The threat to our networks is increasing at an unprecedented rate. The hostile environment we operate in has rendered traditional security strategies obsolete. Adversary advances require changes in the way we operate, and “offense” changes the game.

Prior to joining CrowdStrike, Henry was with the FBI for 24 years, most recently as Executive Assistant Director, where he was responsible for all FBI criminal investigations, cyber investigations, and international operations worldwide.

If you look at Mr. Henry’s credentials, it’s clear where the motivation and customer base are likely to flow.

Without turning this little highlight into a major opus — because when discussing this topic it’s quite easy to do so, given the definition and implications of “active defense” — I hope this has scratched an itch and that you’ll spend more time investigating this fascinating topic.

I’m convinced we will see more and more as the cybersword rattling continues.

Have you investigated technology solutions that offer more “active defense?”


* Full disclosure: I work for Juniper Networks who recently acquired Mykonos Software mentioned above.  I hold a position in, and enjoy a salary from, Juniper Networks, Inc. 😉


On Security Conference Themes: Offense *Versus* Defense – Or, Can You Code?

November 22nd, 2010 7 comments

This morning’s dialog on Twitter from @wmremes and @singe reminded me of something that’s been bouncing around in my head for some time.

Wim blogged about a tweet Jeff Moss made regarding Black Hat DC in which he suggested CFP submissions should focus on offense (versus defense.)

Black Hat (and Defcon) have long focused on presentations which highlight novel emerging attacks.  There are generally not a lot of high-profile “defensive” presentations/talks because for the most part, they’re just not sexy, generally they involve hard work/cultural realignment and the reality that as hard as we try, attackers will always out-innovate and out-pace defenders.

More realistically, offense is sexy and offense sells — and it often sells defense.  That’s why vendors sponsor those shows in the first place.

Along these lines, one will notice that within our industry, the defining criterion for the attack-versus-defend talks and those that give them is one’s ability to write code and produce tools that demonstrate the vulnerability via exploit.  Conceptual vulnerabilities paired with non-existent exploits are generally thought of as fodder for academia.  Only when a tool that weaponizes an attack shows up do people pay attention.

Zero days rule by definition. There’s no analog on the defensive side unless you buy into marketing like “…ahead of the threat.” *cough* Defense for offense that doesn’t exist generally doesn’t get the majority of the funding 😉

So it’s no wonder that security “rockstars” in our industry are generally those who produce attack/offensive code which illustrate how a vector can be exploited.  It’s tangible.  It’s demonstrable.  It’s sexy.

On the other hand, most defenders are reconciled to using tools that others wrote — or become specialists in the integration of them — in order to parlay some advantage over the ever-increasing wares of the former.

Think of those folks who represent the security industry in terms of mindshare and get the most amount of press.  Overwhelmingly it’s those “hax0rs” who write cool tools — tools that are more offensive in nature, even if they produce results oriented toward allowing practitioners to defend better (or at least that’s how they’re sold.)  That said, there are also some folks who *do* code and *do* create things that are defensive in nature.

I believe the answer lies in balance; we need flashy exploits (no matter how impractical/irrelevant they may be to a large amount of the population) to drive awareness.  We also need  more practitioner/governance talks to give people platforms upon which they can start to architect solutions.  We need more defenders to be able to write code.

Perhaps that’s what Richard Bejtlich meant when he tweeted: “Real security is built, not bought.”  That’s an interesting statement on lots of fronts. I’m selfishly taking Richard’s statement out of context to support my point, so hopefully he’ll forgive me.

That said, I don’t write code.  More specifically, I don’t write code well.  I have hundreds of ideas of things I’d like to do but can’t bridge the gap between ideation and proof-of-concept because I can’t write code.

This is why I often “invent” scenarios I find plausible, talk about them, and then get people thinking about how we would defend against them — usually in the vacuum of either offensive or defensive tools being available, or at least realized.

Sometimes there aren’t good answers.

I hope we focus on this balance more at shows like Black Hat — I’m lucky enough to get to present my “research” there despite it being defensive in nature — but we need more defensive tools and talks to make this a reality.



Incomplete Thought: Offensive Computing – The Empire Strikes Back

March 5th, 2009 11 comments

Yesterday at IANS, Greg Shipley gave a great keynote that focused on a lot of things we do today in InfoSec that aren't necessarily as effective as they should be. Greg called for a change in our behavior as a community to address the gaps we have.

In the Q&A section, it occurred to me that for the sake of argument, I would ask Greg about his thoughts on changing our behavior and position in dealing with security and our adversaries by positing that instead of always playing defense, we should play some offense.

I didn't constrain what I meant by "offense" other than to suggest that it could include "active countermeasures," but what is obvious is that people immediately throw up walls around being "offensive" without spending much time defining what it actually means.

I've written and spoken about this before, but it's a rather contentious issue. It gets shelved pretty quickly by most but it really shouldn't in my opinion.

In a follow-on discussion after the keynote, Marcus Ranum, Richard Bejtlich, Rocky DeStefano and I were standing around shooting the, uh, stuff, when I brought this up again.

We had a really interesting dialog wherein we explored what "offensive computing" meant to each of us and it was clear that simply playing defense alone would never allow us to do anything more than spend money and hope.

There's not been a war yet that has been won with defense alone, so why do we expect we can win this one by simply piling on more barbed wire when the enemy is dropping smart bombs? This is the definition of insanity and a behavior that we don't talk about changing.

"Don't spend money on AV because it's not effective" is an interesting behavioral change from the perspective of how you invest. Don't lay down and take it up the assets by only playing defense is another.

I'm being intentionally vague, obtuse and non-specific when it comes to defining what I mean by "offensive," but we're at a point in time where at a minimum we have the technology and capability to add a little "offense" to our defense.  

You want a change in behavior?  How about not playing the victim?

What are your thoughts on "offensive computing?"  


Thinning the Herd & Chlorinating the Malware Gene Pool…

December 28th, 2007 3 comments

Alan Shimel pointed us to an interesting article written by Matt Hines in his post here regarding the "herd intelligence" approach toward security.  He followed it up here. 

All in all, I think both the original article that Andy Jaquith was quoted in as well as Alan’s interpretations shed an interesting light on a problem solving perspective.

I’ve got a couple of comments on Matt and Alan’s scribbles.

I like the notion of swarms/herds.  The picture to the right from Science News describes the notion of "rapid response," wherein "mathematical modeling is explaining how a school of fish can quickly change shape in reaction to a predator."  If you’ve ever seen this in the wild or even in film, it’s an incredible thing to see in action.

It should then come as no surprise that I think that trying to solve the "security problem" is more efficiently performed (assuming one preserves the current construct of detection and prevention mechanisms) by distributing both functions and coordinating activity as part of an intelligent "groupthink" even when executed locally.  This is exactly what I was getting at in my "useful predictions" post for 2008:

Grid and distributed utility computing models will start to creep into security

A really interesting by-product of the "cloud compute" model is that as data, storage, networking, processing, etc. get distributed, so shall security.  In the grid model, one doesn’t care where the actions take place so long as service levels are met and the experiential and business requirements are delivered.  Security should be thought of in exactly the same way.

The notion that you can point to a physical box and say it performs function ‘X’ is so last Tuesday.  Virtualization already tells us this.  So, imagine if your security processing isn’t performed by a monolithic appliance but instead is contributed to in a self-organizing fashion wherein the entire ecosystem (network, hosts, platforms, etc.) all contribute to the identification of threats and vulnerabilities as well as function to contain, quarantine and remediate policy exceptions.

Sort of sounds like that "self-defending network" schpiel, but not focused on the network and with common telemetry and distributed processing of the problem.

Check out Red Lambda’s cGrid technology for an interesting view of this model.

This basically means that we should distribute the sampling, detection and prevention functions across the entire networked ecosystem, not just to dedicated security appliances; each of the end nodes should communicate using a standard signaling and telemetry protocol so that common threat, vulnerability and effective disposition can be communicated up and downstream to one another and one or more management facilities.
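As a sketch of what such a "standard signaling and telemetry protocol" might carry, here is a minimal, invented record format in Python: one threat sighting with an indicator and a disposition that peers and management facilities could correlate. The schema is illustrative only, not any real standard.

```python
import json
import time

def sighting(node_id: str, indicator: str, kind: str, disposition: str) -> str:
    """Serialize one threat sighting in a shared, vendor-neutral shape."""
    record = {
        "node": node_id,            # reporting end node
        "observed": int(time.time()),
        "indicator": indicator,     # e.g., a file hash, URL or IP
        "kind": kind,               # "hash" | "url" | "ip"
        "disposition": disposition, # "detected" | "quarantined" | "remediated"
    }
    return json.dumps(record, sort_keys=True)
```

Anything that can parse the record can act on it, which is the whole point: the same sighting informs an upstream management facility and a peer end node alike.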

This is what Andy was referring to when he said:

As part of the effort, security vendors may also need to begin sharing more of that information with their rivals to create a larger network effect for thwarting malware on a global basis, according to the expert.

It may be hard to convince rival vendors to work together because of the perception that it could lessen differentiation between their respective products and services, but if the process clearly aids in quelling the rising tide of new malware strains, the software makers may have little choice other than to partner, he said.

Secondly, Andy suggested that basically every end-node would effectively become its own honeypot:

"By turning every endpoint into a malware collector, the herd network effectively turns into a giant honeypot that can see more than existing monitoring networks," said Jaquith. "Scale enables the herd to counter malware authors’ strategy of spraying huge volumes of unique malware samples with, in essence, an Internet-sized sensor network."

I couldn’t agree more!  This is the sort of thing that I was getting at back in August when I was chatting with Lance Spitzner regarding using VM’s for honeypots on distributed end nodes:

I clarified that what I meant was actually integrating a HoneyPot running in a VM on a production host as part of a standardized deployment model for virtualized environments.  I suggested that this would integrate into the data collection and analysis models the same way as a "regular" physical HoneyPot machine, but could utilize some of the capabilities built into the VMM/HV’s vSwitch to actually enable the virtualization of a single HoneyPot across an entire collection of VM’s on a single physical host.

Thirdly, the notion of information sharing across customers has been implemented cross-sectionally in industry verticals with the advent of the ISACs, such as the Financial Services Information Sharing and Analysis Center, which seeks to inform and ultimately leverage distributed information gathering and sharing to protect its subscribing members.  Generally-available services like Symantec’s DeepSight have also tried to accomplish similar goals.

Unfortunately, these offerings generally lack the capacity to garner ubiquitous data gathering and real-time enforcement capabilities.

As Matt pointed out in his article, gaining actionable intelligence on the monstrous amount of telemetric data from participating end nodes means that there is a need to really prune for false positives.  This is the trade-off between simply collecting data and actually applying intelligence at the end-node and effecting disposition. 

This requires technology that we’re starting to see emerge with a small enough footprint when paired with the compute power we have in endpoints today. 

Finally, as the "network" (which means the infrastructure as well as the "extrastructure" delivered by services in the cloud) gains more intelligence and information-centric granularity, it will pick up some of the slack — at least from the perspective of sloughing off the low-hanging fruit by using similar concepts.

I am hopeful that as we gain more information-centric footholds, we shouldn’t actually be worried about responding to every threat but rather only those that might impact the most important assets we seek to protect. 

Ultimately the end-node is really irrelevant from a protection perspective as it should really be little more than a presentation facility; the information is what matters.  As we continue to make progress toward more resilient operating systems leveraging encryption and mutual authentication within communities of interest/trust, we’ll start to become more resilient and information assured.

The sharing of telemetry to allow these detective and preventative/protective capabilities to self-organize and perform intelligent offensive/evasive actions will evolve naturally as part of this process.



Everybody Wing Chun Tonight & “ISPs Providing Defense By Engaging In Offensive Computing” For $100, Alex.

October 11th, 2007 4 comments

You say "defense," I say "offense."  I know the argument’s coming, but it’s just a matter of perspective.  What am I talking about?  ISPs ultimately going on the "offense" to provide a defense that protects their transport networks and customers from the ravages of bots, worms and viruses.

Let’s look at the latest spin on how services which are represented as protecting customers are really meant to transfer accountability and can potentially punish subscribers by addressing the symptoms instead of fixing the problem. 

Saving money operationally across a huge network makes for a better P&L.  It goes back to my posting on some of the economics of Clean Pipes here.

I’m guessing I won’t be getting a Qwest service discount any time soon after this…

Qwest’s announcement regarding their "Customer Internet Protection Program," in which they will "help a customer remediate an infected machine connected to its network," can be perceived in one of two ways.  I’m a cynic, but to be fair let me first present Qwest’s view:

The Qwest(R) Customer Internet Protection Program (CIPP) notifies Qwest Broadband customers about viruses and malware that may be on their computers, informs them of safe Internet security practices and helps them clean viruses and malware from their computers. The CIPP is part of Qwest’s ongoing commitment to make the Internet safer for customers and is available to residential and small-business Qwest Broadband ADSL* customers at no additional charge.

That’s a nice concept and is meant to give us the warm and fuzzies that Qwest "cares."  I would agree that on the surface, this sounds terrific and Qwest is doing the "right thing."  Now we just need to explore what the "right reason" might be for this generous outreach.

Given the example above, the client machines are only actually "protected" and "more secure" once they have been discovered to be infected.  Now, this means that either they became infected whilst connected to Qwest’s "secure" network (thus bypassing all that heady protection levied by their network defenses) or during some out of band event.  More on that in a minute.

Here’s some additional color from Qwest:

The proliferation of cyber crime continues to require individuals, businesses and even government agencies to take action against ever-changing methods of attack. Because viruses and malware can cause problems not only for individual Qwest Broadband customers, but also for the online community, Qwest proactively monitors its network to detect viruses or malware. When one of these is discovered, the Qwest Customer Internet Protection Program notifies the specific customer of the infection; gives the customer information on how to remove the infection; educates the customer on good Internet security practices; and provides the customer with additional resources, including downloadable or online anti-virus software.

The Qwest CIPP only acts on malicious network traffic on the public Internet; the program does not scan or otherwise monitor content on customers’ computers.

Again, that sounds nice, but let’s back up a second because there’s something missing here.  What happens when they can’t remediate an infection and a zombie continues to spew crap across the network?  What happens if I’m running a BSD, Linux or Mac and not Windows?  What then?  Geek Squad in black helicopters?

Larry Seltzer gives us an idea in his write-up:

What Qwest is doing is something like NAC for ISP clients, however there are a lot of differences, so I don’t want to take that analogy too far. The system actively monitors clients for behaviors characteristic of malware; spamming, for example. When it determines that the system meets its profile, it takes action.

The monitoring is entirely at the network level. No software is installed on any PC, nor are there any active probes of them. SMTP and HTTP are blocked; other services like POP3 and VOIP are unaffected. Attempts to send e-mail, legitimately or not, will fail. This is something like the "walled garden" idea of NAC implementations where the user is isolated from the rest of the network and expected to spend their time cleaning up the system.

The next time the user attempts to connect to the Web they are presented with a special page that warns of a possible "virus" on the computer. (Their use of the word virus on this page is technically off, but they’re trying to be colloquial and accessible, not strict-geek.) The page says that malicious traffic has been monitored coming from this computer or another on the same account; they can’t know which computer behind your router is the dirty one.

The page gives you three options: remove the virus now, remove it later, or assert that you have already removed it. In the first case, they enter a removal process, the details of which I don’t have, but it could be something like Trend Micro’s HouseCall.

In the second case you are allowed to connect even though your system is infected, but you will be given the same warning again soon, and after a few times you won’t have the "later" option anymore. In the third case, I presume they let you back on the Internet and monitor you once again.

In the second case, where they actually block out users who refuse to clean up their systems, we’ve got big news. Will they really shut off customers? Anecdotal evidence will come out of course, but we won’t know how many times they really had to do this unless Qwest volunteers the numbers.
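The three-option flow Larry describes — with the "later" choice eventually expiring — can be modeled as a tiny state machine. This Python sketch is a toy model with an invented deferral limit, not Qwest’s actual implementation.

```python
MAX_DEFERRALS = 3  # invented threshold; Qwest's real limit isn't public

class WalledGarden:
    """Toy model of the quarantine page: remove now, remove later (a
    limited number of times), or assert the infection is already removed."""

    def __init__(self):
        self.deferrals = 0
        self.quarantined = True

    def options(self):
        opts = ["remove_now", "already_removed"]
        if self.deferrals < MAX_DEFERRALS:
            opts.append("later")  # "later" disappears after repeated use
        return opts

    def choose(self, option: str) -> bool:
        """Return True if the user is allowed back on the network."""
        if option not in self.options():
            raise ValueError("option no longer available")
        if option == "later":
            self.deferrals += 1
            return True  # reconnected, but the warning will recur
        self.quarantined = False  # remediation claimed or performed
        return True
```

Note the sharp edge: once the deferral budget is spent, the only way back onto the network is to claim (or perform) remediation — which is exactly the scenario the next paragraph objects to.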

Wowie!  That last paragraph presents a doozy of a case.  You mean you’re going to prevent me from accessing the network I am paying to use when I’m not knowingly engaging in malicious activity (I’m stupid and got infected, remember?)  I don’t really care about the mechanism for doing so, but this is offensive in multiple meanings of the word.

Oh, but wait.  The security remediation "service" Qwest is generously donating to their subscribers is free (as in beer) and there are no guarantees, right?  Actually, there are.  They guarantee, based upon their terms of service, to remove you from service whenever and however they see fit.

If you look at the vagaries of Qwest's Broadband Subscriber Agreement, you might have a hard time distinguishing the rainbows and unicorns from the realities of what they "could" do should your machine, say, start transmitting SPAM on their network because you're infected.

It doesn’t matter why you’re doing it, because if you are, you’ve already agreed to them charging you $5 per spam message in the TOS!  It’s in there.  Don’t believe me?  Read it yourself in the AUP section.

What this really means to me is that because ISPs can't stop the infection across their network, can't stop the true source of the infection in the first place, and are bearing the brunt of the transport and security costs from the operational impact on their networks, they're going to penalize the users.  Why?  Because they can.

"But Hoff," you say, "you’re overreacting to a gracious and no-cost way to make Internet denizens more secure!  You’ve got this all wrong!"

I'm sure that for every 1 user they can't remediate there'll be 5 more who think this is terrific.  Until they get nuked off the network a-la option #2 in Larry's write-up above, that is.  Qwest maintains all is well in Mayberry:

Qwest Broadband customers have responded positively to the CIPP. In
fact, since the program began, more than three-quarters of infected
customers who were surveyed said they appreciated the CIPP and Qwest’s
efforts to help them get rid of viruses and malware on their computers.

I wonder what happened to the other 25%.  This is why Enterprise NAC deployments often have the potential to suck donkey balls (that is a technical term relating to the spherical multidimensional paradoxes faced by the burros who bear the brunt of operationalizing security technology).  All's well and good until someone important, like the CEO, can't get on the network.  Flexible quarantine like that above, you say?  Sort of defeats the purpose, doesn't it?

Here is where the fluff falls away and we have to come to grips with how ISPs are "combating" the waves of attacks which are overwhelming their "defenses."  This is when we have to start talking about what it means to truly defend networks.

I reckon it means that we’re going to see the very subtle uptake of "offensive" measures to provide "defensive" capabilities in  many different forms.  I don’t know many wars that were "won" on defense alone.  I’m not a military historian, so can someone help me out here?

I'm sure Bejtlich can give you some cool martial-arts fu analogy, so I'll beat him to the punch (ha!) and offer up the fact that you need what Wing Chun Kung Fu offers as a tenet of the art:

Wing Chun Kung Fu assumes that an opponent will be
bigger and stronger than you. Therefore, WC emphasizes fast and strong
structure over physical strength and speed and simultaneous attack and defense.

Wing Chun focuses on combining a defensive movement with an offensive
movement, or using offensive techniques that provide defense.  In
this way, WC is structurally faster than those styles that teach one to
defend first, then attack.

So it's clear to me that we need offense paired with defense, described transparently and with expectations set as to what pushing the "launch" button might mean.  Again, what was the customer satisfaction of the remaining 25% who had this feature applied to them when Qwest prevented them from accessing the Internet?

People don't like talking about this within the context of networks because the notions of "ethics" and "collateral damage" bubble up.  Look, it's happening anyway.  We're pretty much screwed at the moment.  And you, dear broadband Internet user, are paying for the privilege of being bent over.

You go ahead and define DoS any way you see fit, but when an ISP turns on the customer because they can't combat the true attacker, I smell a rat, because for what appear to be economic reasons they can't really be honest about what they're doing and what they'd really like to do in order to defend "their" network.

Get ready for more offense(s.)

That’s my $0.02 ($5 if I sent this from Qwest’s network) and I’m stickin’ to it.
