Archive

Archive for the ‘Intrusion Detection’ Category

The Curious Case Of Continuous and Consistently Contiguous Crypto…

August 8th, 2013 9 comments

Here’s an interesting security architecture and operational deployment model that is making a comeback:

Requiring VPN tunneled and MITM’d access to any resource, internal or external, from any source internal or external.

While mobile devices (laptops, phones and tablets) are often deployed with client or client-less VPN endpoint solutions that enable them to move outside the corporate boundary and still access internal resources, there’s a marked uptick in the requirement that all traffic from all sources traverse VPNs (SSL/TLS, IPsec or both) and that ALL sessions be terminated, regardless of the ownership or location of either the endpoint or the resource being accessed.

Put more simply: require VPN for (id)entity authentication, access control, and confidentiality and then MITM all the things to transparently or forcibly fork to security infrastructure.

Why?

The reasons are pretty easy to understand.  Here are just a few of them:

  1. The user experience shouldn’t change regardless of the access modality or location of the endpoint consumer; the notion of who, what, where, when, how, and why matters, but the user shouldn’t have to care.
  2. Whether inside or outside, the notion of split tunneling on a per-service/per-application basis means that we need visibility to understand and correlate traffic patterns and usage.
  3. Because the majority of traffic is encrypted (usually via SSL), security infrastructure needs the capability to inspect traffic (selectively) using a coverage model that is practical and can give a first-step view of activity.
  4. Information exfiltration (legitimate and otherwise) is a problem.

…so how are folks approaching this?

Easy.  They simply require that all sessions terminate on a set of [read: clustered & scalable] VPN gateways, selectively decrypt based on policy, forward (in serial or parallel) to any number of security apparatus, and in some/many cases, re-encrypt sessions and send them on their way.
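To make the decrypt step concrete, here’s a minimal sketch of the kind of per-session disposition logic such a gateway applies before forking traffic to inspection infrastructure. The category names and rules are invented for illustration — this is not any vendor’s policy engine:

```python
# Hypothetical per-session policy: terminate the tunnel, then decide whether
# to decrypt-and-inspect, bypass decryption (e.g. for privacy-sensitive
# categories), or block outright. Categories and rules are illustrative only.

from dataclasses import dataclass

@dataclass
class Session:
    src: str        # endpoint identity -- inside or outside shouldn't matter
    dst_host: str   # destination hostname (e.g. from SNI)
    category: str   # e.g. from a URL-classification feed

BLOCKLIST = {"known-malware"}               # drop before wasting decrypt cycles
PRIVACY_BYPASS = {"banking", "healthcare"}  # never decrypt; forward untouched

def disposition(s: Session) -> str:
    if s.category in BLOCKLIST:
        return "block"
    if s.category in PRIVACY_BYPASS:
        return "bypass"
    return "decrypt-and-inspect"  # fork plaintext to IPS/DLP, then re-encrypt

print(disposition(Session("laptop-42", "intranet.example.com", "internal")))
```

The point isn’t the three-line policy; it’s that the decision is made identically whether the session is inside-out, outside-in, or inside-inside.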

We’ve been doing this “forever” with the “outside-in” model (remote access to internal resources), but the notion that folks are starting to do this ubiquitously on internal networks is the nuance.  AVC (application visibility and control) is the inside-out component (usually using transparent forward proxies with trusted PAC files on endpoints), with remote access and/or reverse proxies like WAFs and/or ADCs as the outside-in use case.

These two ops models were generally viewed and managed as separate problems.  Now thanks to Cloud, Mobility, virtualization and BYOE (bring your own everything) as well as the more skilled and determined set of adversaries, we’re seeing a convergence of the two.  To make the “inside-out” and “outside-in” more interesting, what we’re really talking about here is extending the use case to include “inside-inside” if you catch my drift.

Merging the use case approach at a fundamental architecture level can be useful; this methodology works regardless of source or destination.  It does require all sorts of incidental changes to things like IdM, AAA, certificate management, etc. but it’s one way that folks are trying to centralize the distributed — if you get what I mean.

I may draw a picture to illustrate what I mean, but do let me know if you’re doing this (many of the largest customers I know are) and whether it makes sense.

/Hoff

P.S. Remember back in the ’80s/’90s when 3Com bundled NICs with integrated IPsec VPN capability?  Yeah, that.


Incomplete Thought: In-Line Security Devices & the Fallacies Of Block Mode

June 28th, 2013 16 comments

The results of a long-running series of extremely scientific studies have produced a Metric Crapload™ of anecdata.

Namely, hundreds of detailed discussions (read: lots of booze and whining) over the last 5 years have resulted in the following:

Most in-line security appliances (excluding firewalls) with the ability to actively dispose of traffic — services such as IPS, WAF, and anti-malware — are deployed in “monitor” or “learning” mode and are rarely, if ever, enabled with automated blocking.  In essence, they are deployed as detective rather than preventative security services.
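For perspective on how thin the line is: in many signature-driven products the difference between detective and preventative is literally one keyword. In Snort’s inline mode, for example (an illustrative rule with made-up SIDs, not from the VRT set):

```
# Detective: generate an event, let the traffic pass
alert tcp $EXTERNAL_NET any -> $HOME_NET 80 (msg:"Example directory traversal"; content:"/etc/passwd"; sid:1000001; rev:1;)

# Preventative: same signature, but actively dispose of the traffic
drop tcp $EXTERNAL_NET any -> $HOME_NET 80 (msg:"Example directory traversal"; content:"/etc/passwd"; sid:1000002; rev:1;)
```

The gap, of course, isn’t syntax — it’s the operational confidence (false positives, liability, change control) required to type “drop” and mean it.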

I have many reasons compiled for this.

I am interested in hearing whether you agree/disagree and your reasons for such.

/Hoff


Six Degrees Of Desperation: When Defense Becomes Offense…

July 15th, 2012 No comments

English: Defensive and offensive lines in American football (Photo credit: Wikipedia)

One cannot swing a dead cat without bumping into at least one expose in the mainstream media regarding how various nation states are engaged in what is described as “Cyberwar.”

The obligatory shots of darkened rooms filled with pimply-faced spooky characters basking in the green glow of command line sessions furiously typing are dosed with trademark interstitial fade-ins featuring the masks of Anonymous set amongst a backdrop of shots of smoky Syrian streets during the uprising,  power grids and nuclear power plants in lockdown replete with alarms and flashing lights accompanied by plunging stock-ticker animations laid over the trademark icons of financial trading floors.

Terms like Stuxnet, Zeus, and Flame have emerged from the obscure .DAT files of AV research labs and now occupy a prominent spot in the lexicon of popular culture…right alongside the word “Hacker,” which now almost certainly brings with it only the negative connotation it has been (re)designed to impart.

In all of this “Cyberwar” we hear that the U.S. defense complex is woefully unprepared to deal with the sophistication, volume and severity of the attacks we are under on a daily basis.  Further, statistics from the Private Sector suggest that adversaries are becoming more aggressive, motivated, innovative, advanced, and successful in their ability to attack what is described as basically undefended — née undefendable — assets.

In all of this talk of “Cyberwar,” we were led to believe that the U.S. Government — despite hostile acts of “cyberaggression” from “enemies” foreign and domestic — never engaged in pre-emptive acts of Cyberwar.  We were led to believe that despite escalating cases of documented incursions across our critical infrastructure (Aurora, Titan Rain, etc.), our response was reactionary, limited in scope and reach, and almost purely detective/forensic in nature.

It’s pretty clear that was a farce.

However, what’s interesting — besides the amazing geopolitical, cultural, socio-economic, sovereign, financial and diplomatic issues that war of any sort brings, including “cyberwar” — is that even in the Private Sector, we’re still led to believe that we’re unable, unwilling or forbidden to do anything but passively respond to attack.

There are some very good reasons for that argument, and some which need further debate.

Advanced adversaries are often innovative and unconstrained in their attack methodologies yet defenders remain firmly rooted in the classical OODA-fueled loops of the past where the A, “act,” generally includes some convoluted mixture of detection, incident response and cleanup…which is often followed up with a second dose when the next attack occurs.

As such, “Defenders” need better definitions of what “defense” means and how a silent discard from a firewall, a TCP RST from an IPS or a blip from Bro is simply not enough.  What I’m talking about here is what defensive linemen look to do when squared up across from their offensive linemen opponents — not to just hold the line to prevent further down-field penetration, but to sack the quarterback or better yet, cause a fumble or error and intercept a pass to culminate in running one in for points to their advantage.

That’s a big difference between holding till fourth down and hoping the offense can manage to not suffer the same fate from the opposition.

That implies there’s a difference between “winning” and “not losing,” with arbitrary values of the latter.

Put simply, it means we should employ methods that make it more and more difficult, costly, timely and non-automated for the attacker to carry out his/her mission…[more] active defense.

I’ve written about this before in 2009 in “Incomplete Thought: Offensive Computing – The Empire Strikes Back,” wherein I asked people’s opinion on both their response to and definition of “offensive security.”  This was a poor term…so I was delighted when I found my buddy Rich Mogull had taken the time to clarify vocabulary around this issue in his blog titled “Thoughts on Active Defense, Intrusion Deception, and Counterstrikes.”

Rich wrote:

…Here are some possible definitions we can work with:

  • Active defense: Altering your environment and system responses dynamically based on the activity of potential attackers, to both frustrate attacks and more definitively identify actual attacks. Try to tie up the attacker and gain more information on them without engaging in offensive attacks yourself. A rudimentary example is throwing up an extra verification page when someone tries to leave potential blog spam, all the way up to tools like Mykonos that deliberately screw with attackers to waste their time and reduce potential false positives.
  • Intrusion deception: Pollute your environment with false information designed to frustrate attackers. You can also instrument these systems/datum to identify attacks. DataSoft Nova is an example of this. Active defense engages with attackers, while intrusion deception can also be more passive.
  • Honeypots & tripwires: Purely passive (and static) tools with false information designed to entice and identify an attacker.
  • Counterstrike: Attack the attacker by engaging in offensive activity that extends beyond your perimeter.

These aren’t exclusive – Mykonos also uses intrusion deception, while Nova can also use active defense. The core idea is to leave things for attackers to touch, and instrument them so you can identify the intruders. Except for counterattacks, which move outside your perimeter and are legally risky.

I think we’re seeing technology that wasn’t ready for primetime re-emerge and become more prominent in consideration as folks refresh their toolchests looking for answers that “passive response” doesn’t offer.  It’s important to understand that tools like these — in isolation — won’t solve many complex attacks, nor are they a silver bullet, but understanding that we’re not limited to cleanup is important.
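Rich’s “rudimentary example” above — throwing up an extra verification page — can be reduced to a toy sketch. The scoring and thresholds here are invented for illustration; this is a flavor of the idea, not a product:

```python
# Toy "active defense" responder: instead of silently dropping a suspicious
# request, tarpit it and demand extra verification -- raising the attacker's
# cost in time, while a false-positive human just sees one challenge page.

def respond(suspicion: float, threshold: float = 0.7):
    """Return (action, delay_seconds) for a request scored 0.0-1.0."""
    if suspicion < threshold:
        return ("serve", 0)
    # Scale the tarpit delay with suspicion, capped so we don't DoS ourselves.
    delay = min(30, int(suspicion * 10))
    return ("challenge", delay)

print(respond(0.2))   # benign traffic is served immediately
print(respond(0.95))  # suspicious traffic is delayed and challenged
```

Note the asymmetry this buys you: the defender spends a dictionary lookup and a timer; the attacker spends wall-clock time and tips their hand.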

The language of “active defense,” like Rich’s above, is being spoken more and more.

Traditional networking and security companies such as Juniper* are acquiring upstarts like Mykonos Software in this space.  Mykonos’ mission is to “…change the economics of hacking…by making the attack surface variable and inserting deceptive detection points into the web application…mak[ing] hacking a website more time consuming, tedious and costly to an attacker. Because the web application is no longer passive, it also makes attacks more difficult.”

VC’s like Kleiner Perkins are funding companies whose operating premise is a more active “response” such as the in-stealth company “Shape Security” that expects to “…change the web security paradigm by shifting costs from defenders to hackers.”

Or, as Rich defined above, the notion of “counterstrike” outside one’s “perimeter” is beginning to garner open discussion now that we’ve seen what’s possible in the wild.

In fact, check out the abstract at Defcon 20 from Shawn Henry of newly-unstealthed company “Crowdstrike,” titled “Changing the Security Paradigm: Taking Back Your Network and Bringing Pain to the Adversary”:

The threat to our networks is increasing at an unprecedented rate. The hostile environment we operate in has rendered traditional security strategies obsolete. Adversary advances require changes in the way we operate, and “offense” changes the game.

Prior to joining CrowdStrike, Henry was with the FBI for 24 years, most recently as Executive Assistant Director, where he was responsible for all FBI criminal investigations, cyber investigations, and international operations worldwide.

If you look at Mr. Henry’s credentials, it’s clear where the motivation and customer base are likely to flow.

Without turning this little highlight into a major opus — because when discussing this topic it’s quite easy to do so, given the definition and implications of “active defense” — I hope this has scratched an itch and that you’ll spend more time investigating this fascinating topic.

I’m convinced we will see more and more as the cybersword rattling continues.

Have you investigated technology solutions that offer more “active defense”?

/Hoff

* Full disclosure: I work for Juniper Networks who recently acquired Mykonos Software mentioned above.  I hold a position in, and enjoy a salary from, Juniper Networks, Inc. ;)


Elemental: Leveraging Virtualization Technology For More Resilient & Survivable Systems

June 21st, 2012 Comments off

Yesterday saw the successful launch of Bromium at GigaOM’s Structure conference in San Francisco.

I was privileged to spend some stage time with Stacey Higginbotham and Simon Crosby (co-founder, CTO, mentor and good friend) after Simon’s big reveal of Bromium‘s operating model and technology approach.

While product specifics weren’t disclosed, we spent some time chatting about Bromium’s approach to solving a particularly tough set of security challenges with a focus on realistic outcomes given the advanced adversaries and attack methodologies in use today.

At the heart of our discussion* was the notion that in many cases one cannot detect, let alone prevent, specific types of attacks, and this requires a new way of containing the impact of exploiting vulnerabilities (known or otherwise) that are as much targeting the human factor as they are weaknesses in underlying operating systems and application technologies.

I think Kurt Marko did a good job summarizing Bromium in his article here, so if you’re interested in learning more check it out. I can tell you that as a technology advisor to Bromium and someone who is using the technology preview, it lives up to the hype and gives me hope that we’ll see even more novel approaches of usable security leveraging technology like this.  More will be revealed as time goes on.

That said, with productization details purposely left vague, Bromium’s leveraged implementation of Intel’s VT technology and its “microvisor” approach brought comments yesterday from many folks who were reminded of what they called “similar approaches” (however right/wrong they may be) to using virtualization technology and/or “sandboxing” to provide more “secure” systems.  I recall the following in passing conversation yesterday:

  • Determina (VMware acquired)
  • Green Borders (Google acquired)
  • Trusteer
  • Invincea
  • DeepSafe (Intel/McAfee)
  • Intel TXT w/MLE & hypervisors
  • Self Cleansing Intrusion Tolerance (SCIT)
  • PrivateCore (Newly launched by Oded Horovitz)
  • etc…

I don’t think Simon would argue that the underlying approach of utilizing virtualization for security (even for an “endpoint” application) is new, but the approach toward making it invisible and transparent from a user experience perspective certainly is.  Operational simplicity and not making security the user’s problem is a beautiful thing.

Here is a video of Simon’s and my session, “Secure Everything.”

What’s truly of interest to me — based on what Simon said yesterday — is that the application of this approach could be just as at home in a “server,” cloud or mobile application as it is in a classical desktop environment.  There are certainly dependencies (such as VT) today, but the notion that we can leverage virtualization for better resilience, survivability and assurance for more “trustworthy” systems is exciting.

I for one am very excited to see how we’re progressing from “bolt on” to more integrated approaches in our security models. This will bear fruit as we become more platform and application-centric in our approach to security, allowing us to leverage fundamentally “elemental” security components to allow for more meaningfully trustworthy computing.

/Hoff

* The range of topics was rather hysterical; from the Byzantine Generals’ problem to K/T Boundary extinction-class events to the Mexican/U.S. border fence, it was chock full of analogs ;)

 


Why Steeling Your Security Is Less Stainless and More Irony…

March 5th, 2012 3 comments

(I originally pre-pended to this post a lengthy update based on my findings and incident response, but per a suggestion from @jeremiahg, I’ve created a separate post here for clarity)

Earlier today I wrote about the trending meme in the blogosphere/security bellybutton squad wherein the notion that security — or the perceived lacking thereof — is losing the “war.”

My response was that the expectations and methodology by which we measure success or failure is arbitrary and grossly inaccurate.  Furthermore, I suggest that the solutions we have at our disposal are geared toward solving short-term problems designed to generate revenue for vendors and solve point-specific problems based on prevailing threats and the appetite to combat them.

As a corollary, if you reduce this down to the basics, the tools we have at our disposal that we decry as useless often times work just fine…if you actually use them.

For most of us, we do what we can to provide appropriate layers of defense where possible but our adversaries are crafty and in many cases more skilled.  For some, this means our efforts are a lost cause but the reality is that often times good enough is good enough…until it isn’t.

Like it wasn’t today.

Let me paint you a picture.

A few days ago a Wired story titled “Is antivirus a waste of money?” hit the wires that quoted many (of my friends) as saying that security professionals don’t run antivirus.  There were discussions about efficacy, performance and usefulness. Many of the folks quoted in that article also run Macs.  There was some interesting banter on Twitter also.

If we rewind a few weeks, I was contacted by two people a few days apart, one running a FireEye network-based anti-malware solution and another running a mainstream host-based anti-virus solution.

Both of these people let me know that their solutions detected and blocked a Javascript-based redirection attempt from my blog which runs a self-hosted WordPress installation.

I pawed through my blog’s PHP code, turned off almost every plug-in, ran the exploit scanner…all the while unable to reproduce the behavior on my Mac or within a fresh Windows 7 VM.

The FireEye report ultimately came back as a false positive, while the host-based AV detection couldn’t be reproduced, either.

Fast forward to today and after I wrote the blog “You know what’s dead? Security…” I had a huge number of click-throughs from my tweet.

The point of my blog was that security isn’t dead and that we aren’t grossly failing so much as suffering a death from a thousand cuts.  However, while we’ve got a ton of band-aids, that doesn’t make it any less painful.

Speaking of pain, almost immediately upon posting the tweet, I received reports from 5-6 people indicating their AV solutions detected an attempted malicious code execution, specifically a Javascript redirector.

This behavior was commensurate with the prior “sightings” and so with the help of @innismir and @chort0, I set about trying to reproduce the event.

@chort0 found that a hidden iFrame was redirecting to a site hosted in Belize (screen caps later) that ultimately linked to other sites in Russia and produced a delightful greeting which said “Gotcha!” after attempting to drop an executable.

Again, I was unable to duplicate it, and it seemed that once loaded, the iFrame and file dropper did not reappear.  @innismir didn’t get the iFrame but grabbed the dropped file.

This led to further investigation that it was likely this was an embedded compromise within the theme I was using.  @innismir found that the Sakura theme included “…woo-tumblog [which] uses a old version of TimThumb, which has a hole in it.”

I switched back to a basic built-in theme and turned off the remainder of the non-critical plug-ins.

Since I have no way of replicating the initial drop attempt, I can only hope that this exercise — which involved some basic AV tools, some browser debug tools, some PCAP network traces, and good ole investigation from three security wonks — has paid off…

ONLY YOU CAN PREVENT MALWARE FIRES (so please let me know if you see an indication of an attempted malware infection).

Now, back to the point at hand…I would never have noticed this (or, more specifically, others wouldn’t have) had they not been running AV.

So while many look at these imperfect tools as a failure because they don’t detect/prevent all attacks, imagine how many more people I may have unwittingly infected accidentally.

Irony?  Perhaps, but what happened following the notification gives me more hope (in the combination of people, community and technology) than contempt for our gaps as an industry.

I plan to augment this post with more details and a conclusion about what I might have done differently once I have a moment to digest what we’ve done and try and confirm if it’s indeed repaired.  I hope it’s gone for good.

Thanks again to those of you who notified me of the anomalous behavior.

What’s scary is how many of you didn’t.

Is security “losing?”

Ask me in the morning…I’ll likely answer that from my perspective, no, but it’s one little battle at a time that matters.

/Hoff


Incomplete Thought: Why Security Doesn’t Scale…Yet.

January 11th, 2011 1 comment

There are lots of reasons one might use to illustrate why operationalizing security — both from the human and technology perspectives — doesn’t scale.

I’ve painted numerous pictures highlighting the cyclical nature of technology transitions, the supply/demand curve related to threats, vulnerabilities, technology and compensating controls and even relevant anecdotes involving the intersection of Moore’s and Metcalfe’s laws.  This really was a central theme in my Cloudinomicon presentation; “idempotent infrastructure, building survivable systems and bringing sexy back to information centricity.”

Here are some other examples of things I’ve written about in this realm.

Batting around how public “commodity” cloud solutions force us to re-evaluate how, where, why and who “does” security was an interesting journey.  Ultimately, it comes down to architecture and poking at the sanctity of models hinged on an operational premise that may or may not be as relevant as it used to be.

However, I think the most poignant and yet potentially obvious answer to the “why doesn’t security scale?” question is the fact that security products, by design, don’t scale because they have not been created to allow for automation across almost every aspect of their architecture.

Automation and the interfaces (read: APIs) by which security products ought to be provisioned, orchestrated, and deployed are simply lacking in most security products.

Yes, there exist security products that are distributed, but they are still managed, provisioned and deployed manually — generally using a management hub-and-spoke model that doesn’t lend itself to automating anything that doesn’t otherwise rely upon bubble-gum and baling wire scripting…

Sure, we’ve had things like SNMP as a “standard interface” for “management” for a long while ;)  We’ve had common ways of describing threats and vulnerabilities.  Recently we’ve seen XML-based APIs emerge as a function of the latest generation of (mostly virtualized) firewall technologies, but most products still rely upon stand-alone GUIs, CLIs, element managers and a meat cloud of operators to push the go button (or reconfigure).

Really annoying.

Alongside the lack of standard API-based management planes, control planes are largely proprietary, and the output for correlated event-driven telemetry at all layers of the stack is equally lacking.  Of course, the applications and security layers that run atop infrastructure are still largely discrete, thus making the problem more difficult.
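To make the gap concrete: provisioning ought to look more like policy-as-data pushed over an API than a GUI click-path. A hypothetical sketch — the schema, field names and endpoint are all invented, not any real product’s API:

```python
# Declarative firewall policy as plain data, serialized deterministically so
# an orchestrator can diff it and push it idempotently to a management API.

import json

def make_rule(name, src, dst, port, action="deny"):
    """Build one declarative rule; match fields and actions are illustrative."""
    return {"name": name, "match": {"src": src, "dst": dst, "port": port},
            "action": action}

def render_policy(rules):
    # sort_keys makes repeated renders byte-identical -> diff-able, automatable
    return json.dumps({"version": 1, "rules": rules}, sort_keys=True)

policy = render_policy([
    make_rule("deny-telnet", "any", "10.0.0.0/8", 23),
    make_rule("allow-web", "any", "10.1.2.3/32", 443, action="allow"),
])
# An orchestrator would PUT `policy` to something like
# https://fw.example.com/api/v1/policy -- the point being: no human, no GUI.
```

Whether the transport is XML or JSON matters far less than the property that the same artifact can be versioned, diffed, and pushed by a machine.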

The good news is that virtualization in the enterprise and the emergence of the cultural and operational models predicated upon automation are starting to influence product roadmaps in ways that will positively affect the problem space described above but we’ve got a long haul as we make this transition.

Security vendors are starting to realize that they must retool many of their technology roadmaps to deal with the impact of dynamism and automation.  Some, not all, are painfully discovering that simply creating a virtualized version of a physical appliance doesn’t make it a virtual security solution (or cloud security solution), in the same way that moving an application directly to cloud doesn’t necessarily make it a “cloud application.”

In the same way that one must often re-write or specifically design applications for cloud, we have to do the same for security.  Arguably there are things that can and should be preserved — basic underpinnings such as firewalls, whose cores don’t need to change even though their “packaging” does.

I’m privy to lots of the underlying mechanics of these activities — from open source to highly-proprietary — and I’m heartened by the fact that we’re beginning to make progress.  We shouldn’t have to make a distinction between crafting and deploying security policies in physical or virtual environments.  We shouldn’t be held hostage by the separation of application logic from the underlying platforms.

In the long term, I’m optimistic we won’t have to.

/Hoff



The Security Hamster Sine Wave Of Pain: Public Cloud & The Return To Host-Based Protection…

July 7th, 2010 7 comments

This is a revisitation of a blog I wrote last year: Incomplete Thought: Cloud Security IS Host-Based…At The Moment

I use my “Security Hamster Sine Wave of Pain” to illustrate the cyclical nature of security investment and deployment models over time, and how disruptive innovation and technology impact the flip-flop across the horizon of choice.

To wit: most mass-market Public Cloud providers such as Amazon Web Services rely on highly-abstracted and limited exposure of networking capabilities.  This means that most traditional network-based security solutions are impractical or non-deployable in these environments.

Network-based virtual appliances which expect generally to be deployed in-line with the assets they protect are at a disadvantage given their topological dependency.

So what we see are security solution providers simply re-marketing their network-based solutions as host-based solutions instead…or confusing things with Barney announcements.

Take a press release today from SourceFire:

Snort and Sourcefire Vulnerability Research Team(TM) (VRT) rules are now available through the Amazon Elastic Compute Cloud (Amazon EC2) in the form of an Amazon Machine Image (AMI), enabling customers to proactively monitor network activity for malicious behavior and provide automated responses.

Leveraging Snort installed on the AMI, customers of Amazon Web Services can further secure their most critical cloud-based applications with Sourcefire’s leading protection. Snort and Sourcefire(R) VRT rules are also listed in the Amazon Web Services Solution Partner Directory, so that users can easily ensure that their AMI includes the latest updates.

As far as I can tell, this means you can install a ‘virtual appliance’ of Snort/Sourcefire as a standalone AMI, but there’s no real description of how one might actually implement it in an environment that isn’t topologically-friendly to this sort of network-based implementation constraint.*

Since you can’t easily “steer traffic” through an IPS in the model of AWS, and can’t leverage promiscuous mode or taps, what does this packaging implementation actually mean?  Also, if one has a few hundred AMIs which contain applications spread out across multiple availability zones/regions, how does a solution like this scale (from either a performance or a management perspective)?

I’ve spoken/written about this many times:

Where Are the Network Virtual Appliances? Hobbled By the Virtual Network, That’s Where… and

Dear Public Cloud Providers: Please Make Your Networking Capabilities Suck Less. Kthxbye

Ultimately, expect that Public Cloud will force the return to HIDS/HIPS deployments — the return to agent-based security models.  This poses just as many operational challenges as those I allude to above.  We *must* have better ways of tying together network and host-based security solutions in these Public Cloud environments that make sense from an operational, cost, and security perspective.

/Hoff


* I “spoke” with Marty Roesch on the Twitter and he filled in the gaps associated with how this version of Snort works – there’s a host-based packet capture element with a “network” redirect to a stand-alone AMI:

@Beaker AWS->Snort implementation is IDS-only at the moment, uses software packet tap off customer app instance, not topology-dependent

and…

they install our soft-tap on their AMI and send the traffic to our AMI for inspection/detection/reporting.

It will be interesting to see how performance nets out using this redirect model.
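Mechanically, the encapsulation half of such a soft tap is simple — something like the sketch below. The header layout is invented for illustration; Sourcefire’s actual framing wasn’t disclosed:

```python
# A captured frame is wrapped with a small metadata header and shipped
# (e.g. over UDP) to the stand-alone inspection AMI, since EC2 offers no
# span ports, taps or in-line placement.

import struct

HDR = struct.Struct("!QHH")  # timestamp (usec), source instance id, frame len

def encapsulate(ts_us: int, instance_id: int, frame: bytes) -> bytes:
    """Prefix a captured frame with capture metadata for transport."""
    return HDR.pack(ts_us, instance_id, len(frame)) + frame

def decapsulate(packet: bytes):
    """Recover (timestamp, instance id, original frame) on the inspection side."""
    ts_us, instance_id, length = HDR.unpack_from(packet)
    return ts_us, instance_id, packet[HDR.size:HDR.size + length]
```

The interesting costs are exactly the ones flagged above: every inspected byte is carried twice across the provider’s network, and the tap competes with the application for the instance’s CPU.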


To Achieve True Cloud (X/Z)en, One Must Leverage Introspection

January 6th, 2010 No comments

Back in October 2008, I wrote a post detailing efforts around the Xen community to create a standard security introspection API (Xen.Org Launches Community Project To Bring VM Introspection to Xen).

The Xen Introspection Project is a community effort within Xen.org to leverage the existing research presented above with other work not yet public to create a standard API specification and methodology for virtual machine introspection.

That blog was focused on introspection for virtualization proper, but since many of the larger cloud providers utilize Xen virtualization as an underpinning of their service architecture — and as an industry we’re suffering from a lack of visibility and deployable security capabilities — VM and VMM introspection is quite relevant to cloud computing.

I thought I’d circle back and see where we are.

It looks as though there’s been quite a bit of recent activity from the folks at Georgia Tech (XenAccess Project) and the University of Alaska Fairbanks (Virtual Introspection for Xen) referenced in my previous blog.  The vCloud API proffered via the DMTF also seems to leverage (at least some of) the VMsafe API capabilities present in VMware‘s vSphere virtualization platform.

While details are, for obvious reasons, sketchy, I am encouraged after speaking to representatives from a few cloud providers who are keenly interested in including these capabilities in their offerings.  Wouldn’t that be cool?

Adoption and inclusion of introspection capabilities will overcome some of the inherent security and visibility limitations we face in highly-virtualized multi-tenant environments due to networking constraints for integrating security functionality that I wrote about here.
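For flavor, the core trick of introspection can be shown with a toy: from entirely outside the guest, read raw guest memory and reconstruct state (here, a linked process list) using knowledge of the guest’s structure layout. The record layout below is invented for illustration; real projects like XenAccess do this against live VMs via the hypervisor:

```python
# Walk a linked "process list" laid out in a blob of raw guest memory.
# An in-guest rootkit can lie to in-guest tools; an external observer
# reading memory directly is much harder to fool.

import struct

REC = struct.Struct("<I16sI")  # pid, fixed-width name, offset of next record

def walk_process_list(guest_mem: bytes, head: int):
    """Yield (pid, name) by following 'next' offsets; 0 marks the end."""
    off = head
    while off:
        pid, raw_name, nxt = REC.unpack_from(guest_mem, off)
        yield pid, raw_name.rstrip(b"\x00").decode()
        off = nxt

# A tiny fake guest memory image with two linked records (list starts at
# offset 8, since offset 0 doubles as the end-of-list sentinel here).
mem = bytearray(64)
REC.pack_into(mem, 8, 1, b"init", 32)   # record at 8 -> next record at 32
REC.pack_into(mem, 32, 42, b"sshd", 0)  # record at 32 -> end of list
print(list(walk_process_list(bytes(mem), 8)))
```

The hard parts a real API must solve — pausing/consistency, address translation, and tracking guest kernel layouts across versions — are precisely what a standard introspection interface would hide from security vendors.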

I plan a follow-on blog in more detail once I finish some interviews.

/Hoff


Cloud Providers and Security “Edge” Services – Where’s The Beef?

September 30th, 2009 16 comments

Previously I wrote a post titled “Oh Great Security Spirit In the Cloud: Have You Seen My WAF, IPS, IDS, Firewall…” in which I described the challenges for enterprises moving applications and services to the Cloud while trying to ensure parity in compensating controls, some of which are either not available or suffer from the “virtual appliance” conundrum (see the Four Horsemen presentation on issues surrounding virtual appliances.)

Yesterday I had a lively discussion with Lori MacVittie about the notion of what she described as “edge” service placement of network-based WebApp firewalls in Cloud deployments.  I was curious about the notion of where the “edge” is in Cloud, but assuming it’s at the provider’s connection to the Internet, as was suggested by Lori, this brought up the arguments in the post above: how does one roll out compensating controls in Cloud?

The level of difficulty and need to integrate controls (or any “infrastructure” enhancement) definitely depends upon the Cloud delivery model (SaaS, PaaS, and IaaS) chosen and the business problem trying to be solved; SaaS offers the least amount of extensibility from the perspective of deploying controls (you don’t generally have any access to do so) whilst IaaS allows a lot of freedom at the guest level.  PaaS is somewhere in the middle.  None of the models are especially friendly to integrating network-based controls not otherwise supplied by the provider due to what should be pretty obvious reasons — the network is abstracted.

So here’s the rub: if MSSP’s/ISP’s/ASP’s-cum-Cloud operators want to woo mature enterprise customers to use their services, they are leaving money on the table and not fulfilling customer needs by failing to roll out complementary security capabilities which lessen the compliance and security burdens of their prospective customers.

While many provide commoditized solutions such as anti-spam and anti-virus capabilities, more complex (but profoundly important) security services such as DLP (data loss/leakage prevention,) WAF, Intrusion Detection and Prevention (IDP,) XML Security, Application Delivery Controllers, VPN’s, etc. should also be considered for roadmaps by these suppliers.

Think about it, if the chief concern in Cloud environments is security around multi-tenancy and isolation, giving customers more comfort besides “trust us” has to be a good thing.  If I knew where and by whom my data is being accessed or used, I would feel more comfortable.

Yes, it’s difficult to do properly and in many cases means the Cloud provider has to make a substantial investment in delivery platforms and management/support integration to get there.  This is why niche players who target specific verticals (especially those heavily regulated) will ultimately have the upper hand in some of these scenarios – it’s not socialist security where “good enough” is spread around evenly.  Services like these need to be configurable (SELF-SERVICE!) by the consumer.

An example? How about Google: where’s DLP integrated into the messaging/apps platforms?  Amazon AWS: where’s IDP integrated into the VMM for introspection?

I wrote a couple of interesting posts about this (that may show up in the automated related posts lists below):

My customers in the Fortune 500 complain constantly that the biggest providers they are being pressured to consider for Cloud services aren’t listening to these requests — or aren’t in a position to respond.

That’s bad for everyone.

So how about it? Are services like DLP, IDP, WAF integrated into your Cloud providers’ offerings something you’d like to see rather than having to add additional providers as brokers and add complexity and cost back into Cloud?

/Hoff

IDS: Vitamins Or Prophylactic?

September 25th, 2008 2 comments

Ravi Char commented on Alan Shimel's blog titled "IDS – The Beast That Just Won't Die."

Ravi makes a number of interesting comments in his blog titled "IDS/IPS – is it Vitamins?"  I'd like to address them because they offer what I maintain is a disturbing perspective on the state and implementation of IDS today, referencing deployment models and technology baselines from about 2001 that don't reflect reality based on my personal experience.

Ravi doesn't allow comments for this blog, so I thought I'd respond here.

Firstly, I'm not what I would consider an IDS apologist, but I do see the value in being able to monitor and detect things of note as they traverse my network.  In order to "prevent," you first must be able to "detect," so the notion that one can have one without the other doesn't sound very realistic to me.

Honestly, I'd like to understand what commercial stand-alone IDS-only solutions Ravi is referring to.  Most IDS functions are built into larger suites of IDP products and include technology such as correlation engines, vulnerability assessment, behavioral anomaly detection, etc., so calling out IDS as a failure when in reality it represents the basis of many products today is nonsensical.

If a customer were to deploy an IPS in-line or out-of-band in "IDS" mode and not turn on automated blocking or all of the detection or prevention capabilities, is it fair to simply generalize that an entire suite of solutions is useless because of how someone chooses to deploy it?

I've personally deployed Snort, Sourcefire (with RNA,) ISS, Dragon, TippingPoint, TopLayer and numerous other UTM and IDP toolsets and the detection, visibility, reporting, correlation, alerts, forensic and (gasp!) preventative capabilities these systems offered were in-line with my expectations for deploying them in the first place.

IDS can capture tons of intrusion events, there is so much of don't care events it is difficult to single out event such as zero day event in the midst of such noise.

Yes, IDS can capture tons of events.  You'll notice I didn't say "intrusion events," because in order to quantify an event as an "intrusion," one would obviously have already defined it as such in the IDS, thus proving the efficacy of the product in the first place.  This statement is contradictory on its face.

The operator's decision to not appropriately "tune" the things he or she is interested in viewing or worse yet not investing in a product that allows one to do so is a "trouble between the headsets" issue and not a generic solution-space one.  Trust me when I tell you there are plenty of competent IDS/IPS systems available that make this sort of decision making palatable and easy.

To take a note from a friend of mine, the notion of "false positives" in IDS systems seems a little silly.  If you're notified that traffic has generated an alert based upon a rule/signature/trigger you defined as noteworthy, how is that a false positive?  The product has done just what you asked it to do!  Tuning IDS/IPS systems requires context and an understanding of what you're trying to protect, from where, from whom, and why.
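A minimal sketch of what "tuning with context" means in practice (all rules, hosts, and events below are invented): the same signature matches fire in both cases, but declaring which assets actually matter keeps the irrelevant matches from ever surfacing as alerts.

```python
# Invented example events: signature matches seen by a sensor.
events = [
    {"sig": "sql-injection", "dst": "10.0.1.5", "dst_port": 443},   # prod web
    {"sig": "sql-injection", "dst": "10.0.9.9", "dst_port": 8080},  # lab box
    {"sig": "smb-probe",     "dst": "10.0.1.5", "dst_port": 445},
    {"sig": "smb-probe",     "dst": "10.0.2.2", "dst_port": 445},
]

def alerts(events, protected=None):
    """Untuned (protected=None): every signature match becomes an alert.
    Tuned: only matches against assets we declared worth protecting do."""
    return [e for e in events if protected is None or e["dst"] in protected]

print(len(alerts(events)))                          # untuned: 4 alerts
print(len(alerts(events, protected={"10.0.1.5"})))  # tuned: 2 alerts
```

Nothing here is a "false positive" in either case; the difference is simply whether the operator supplied the context the product was asking for.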

People looking for self-regulating solutions are going to get exactly what they deserve.

Secondarily, if an attacker is actually using an exploit against a zero-day vulnerability, how would *any* system be able to detect it based on the very description of a zero-day?

It requires tremendous effort to sift through the log and derive meaningful actions out of the log entries.

I again remind the reader that manually sifting through "logs" and "log entries" is a rather outdated concept and sounds more like forensics and log analysis than it does network IDS, especially given today's solutions replete with dashboards, graphical visualization tools, etc.
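To illustrate the gap between "sifting log entries" and what even basic tooling does, here's a toy aggregation (data invented): a few lines of counting collapse a thousand raw alerts into three rows a human can actually act on.

```python
from collections import Counter

# Invented raw alert stream: (signature, source IP) pairs.
raw_alerts = (
    [("portscan", "192.0.2.7")] * 950
    + [("brute-force-ssh", "192.0.2.7")] * 40
    + [("sql-injection", "198.51.100.3")] * 3
)

def top_talkers(alerts, n=3):
    """Collapse raw alerts into a top-N summary by (signature, source)."""
    return Counter(alerts).most_common(n)

for (sig, src), count in top_talkers(raw_alerts):
    print(f"{count:5d}  {sig:20s} {src}")
```

Real consoles correlate far more (vulnerability state, asset value, sequence), but even this trivial roll-up makes the "dedicated administrator staring at packets" objection look dated.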

IDS needs a dedicated administrator to manage. An administrator who won't get bored of looking at all the packets and patterns, a truly boring job for a security engineer. Probably this job would interest a geekier person and geeks tend to their own interesting research!

I suppose that this sort of activity is not for everyone, but again, this isn't Intrusion Detection, Shadow Style (which I took at SANS 1999 with Northcutt) where we're reading through TCPDUMP traces line by line…

There are companies that do without IDS, and they do just fine. I agree with Alan's assessment that IDS is like a Checkbox in most cases.  Business can run without IDS just fine, why invest in such a technology?

I am convinced that there are many companies that "do without IDS."  By that token, there are many companies that can do without firewalls, IPS, UTM, AV, DLP, NAC, etc., until the day they realize they can't or realize that to be even minimally compliant with any sort of regulation and/or "best practice" (a concept I don't personally care for,) they cannot.

I'm really, really interested in understanding how it is that these companies "…do just fine."  Against what baseline, and using what metrics, does Ravi establish this claim?  If they don't have IDS and don't detect potential "intrusions," then how exactly can it be determined that they are better off (or just as good) without IDS?

Firewalls and other devices have built in features of IDS, so why invest in a separate product.

So did I misunderstand the point here?  It's not that IDS is useless, it's just that standalone IDS deployments from 2001 are useless, but if "good enough" IDS functions are bundled with firewalls or "other devices," they have some worth because they cost less?

I can't figure out whether the chief complaint here is an efficacy or ill-fated ROI exercise.

IDS is like Vitamins, nice to have, not having won't kill you in most cases. Customers are willing to pay for Pain Killers because they have to address their pain right away. For Vitamins, they can wait. Stop and think for moment, without Anti-virus product, businesses can't run for few days. But, without IDS, most businesses can run just fine and I base it out of my own experience.

…and it's interesting to note the difference between chronic and acute pain.  In many cases, customers don't notice chronic pain — it's always there and they adjust their tolerance thresholds accordingly.  It may not kill you suddenly, but over time the same might not be true.  If you don't ingest vitamins in some form, you will die.

When an acute pain strikes, people obviously notice and seek immediate amelioration, but the pain killer option is really just a stop-gap; it doesn't "cure," it simply masks the pain.

By this definition it's already too late for "prevention" because the illness has already set in and detection is self-evident — you're ill.  Take your vitamins, however, and perhaps you won't get sick in the first place…

The investment strategy in IDS is different than that of AV.  You're addressing different problems, so comparing them as equals is both unfair and confusing.  In the long term, if you don't know what "normal" looks like — a metric or trending you can certainly gain from IDS/IPS/IDP systems — how will you determine what "abnormal" looks like?
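A minimal sketch of that "know normal to spot abnormal" point (event counts invented): once detection runs long enough to give you a baseline, "abnormal" stops being a gut feeling and becomes a measurable deviation.

```python
import statistics

# Invented baseline: hourly event counts learned while things were "normal."
hourly_event_counts = [100, 95, 110, 105, 98, 102, 97, 103]

def is_abnormal(observed, baseline, sigmas=3.0):
    """Flag an observation that strays more than `sigmas` standard
    deviations from the learned mean of normal activity."""
    mu = statistics.mean(baseline)
    sd = statistics.stdev(baseline)
    return abs(observed - mu) > sigmas * sd

print(is_abnormal(104, hourly_event_counts))   # within normal variation
print(is_abnormal(400, hourly_event_counts))   # worth a human's attention
```

You can't run this computation if you never collected the baseline, which is precisely the long-haul value the "vitamins" framing discounts.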

So, sure, just like the vitamin example — not having IDS may not "kill" you in the short term, but it can certainly grind you down over the long haul.

Probably, I would have offended folks from the IDS camp. I have a good friend who is a founder of an IDS company, I am sure he will react differently if he reads my narratives about IDS.  Once businesses start realizing that IDS is a Checkbox, they will scale down their investments in this area. In the current economic climate, financial institutions are not doing well. Financial institutions are big customers in terms of security products, with the current scenario of financial meltdown, they would scale down heavily on their spending on Vitamins.

Again, if you're suggesting that stand-alone IDS systems are not a smart investment and that the functionality will commoditize into larger suites over time, but that the FUNCTION is useful and necessary, then we agree.

If you're suggesting that IDS is simply not worthwhile, then we're in violent disagreement.

Running IDS software on VMware sounds fancy.  Technology does not matter unless you can address real world pain and prove the utilitarian value of such a technology. I am really surprised that IDS continues to exist. Proof of existence does not forebode great future. Running IDS on VMware does not make it any more utilitarian. I see a bleak future for IDS.

Running IDS in a virtual machine (whether it's on top of Xen, VMware or Hyper-V) isn't all that "fancy," but the need for visibility into virtual environments is great when the virtual networking obfuscates what can be seen from traditional network- and host-based IDS/IPS systems.
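A toy model of that visibility problem (all class and VM names invented): inter-VM traffic crossing a virtual switch never touches the physical wire a traditional sensor watches, so the tap has to live on the vswitch itself, the virtual analogue of a SPAN port.

```python
class VirtualSwitch:
    """Toy virtual switch: delivers frames between VM ports and mirrors
    every frame to any attached monitor (a virtual SPAN port)."""
    def __init__(self):
        self.ports = {}          # vm name -> receive callback
        self.monitors = []       # sensors tapping the vswitch

    def attach(self, vm, callback):
        self.ports[vm] = callback

    def add_monitor(self, sensor):
        self.monitors.append(sensor)

    def send(self, src, dst, frame):
        for sensor in self.monitors:     # mirror before delivery
            sensor(src, dst, frame)
        self.ports[dst](frame)

seen = []
vswitch = VirtualSwitch()
vswitch.attach("web-vm", lambda f: None)
vswitch.attach("db-vm", lambda f: None)
vswitch.add_monitor(lambda s, d, f: seen.append((s, d, f)))

# This frame goes VM-to-VM on the same host and never hits a physical NIC;
# only the vswitch-level monitor sees it.
vswitch.send("web-vm", "db-vm", b"SELECT * FROM users")
print(seen)
```

Without that hook (or hypervisor-level introspection), a sensor plugged into the physical network simply has nothing to inspect.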

You see a bleak future for IDS?  I see just as many opportunities for the benefits it offers — it will just be packaged differently as it evolves — just like it has over the last 5+ years.

/Hoff