Archive for the ‘Network Access Control’ Category

The Curious Case Of Continuous and Consistently Contiguous Crypto…

August 8th, 2013 9 comments

Here’s an interesting resurgence of a security architecture and an operational deployment model that is making a comeback:

Requiring VPN tunneled and MITM’d access to any resource, internal or external, from any source internal or external.

While mobile devices (laptops, phones and tablets) are often deployed with client or client-less VPN endpoint solutions that enable them to move outside the corporate boundary and still access internal resources, there’s a marked uptake in the requirement that all traffic from all sources utilize VPNs (SSL/TLS, IPsec or both) and that ALL sessions terminate on security infrastructure, regardless of ownership or location of either the endpoint or the resource being accessed.

Put more simply: require VPN for (id)entity authentication, access control, and confidentiality and then MITM all the things to transparently or forcibly fork to security infrastructure.

Why?

The reasons are pretty easy to understand.  Here are just a few of them:

  1. The user experience shouldn’t change regardless of the access modality or location of the endpoint consumer; the notions of who, what, where, when, how, and why matter, but the user shouldn’t have to care.
  2. Whether inside or outside, the notion of split tunneling on a per-service/per-application basis means that we need visibility to understand and correlate traffic patterns and usage.
  3. Because the majority of traffic is encrypted (usually via SSL), security infrastructure needs the capability to inspect traffic (selectively) using a coverage model that is practical and can give a first-step view of activity.
  4. Information exfiltration (legitimate and otherwise) is a problem.

…so how are folks approaching this?

Easy.  They simply require that all sessions terminate on a set of [read: clustered & scalable] VPN gateways, selectively decrypt based on policy, forward (in serial or parallel) to any number of security apparatus, and in some/many cases, re-encrypt sessions and send them on their way.
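To make that decrypt-inspect-re-encrypt hop concrete, here’s a minimal single-connection sketch in Python.  The addresses, the interception CA files (mitm-cert.pem/mitm-key.pem) and the inspect() hook are hypothetical stand-ins, not any vendor’s implementation; a real gateway would obviously multiplex thousands of flows and decrypt selectively per policy:

    # Minimal sketch of the decrypt -> inspect -> re-encrypt hop.
    # Assumes endpoints already trust our interception CA; filenames,
    # addresses and inspect() are illustrative placeholders only.
    import socket
    import ssl

    LISTEN_ADDR = ("0.0.0.0", 8443)               # sessions land here
    UPSTREAM = ("internal-app.example.com", 443)  # hypothetical destination

    def inspect(direction, data):
        """Fork a copy to security infrastructure; pass traffic through unchanged."""
        print("[%s] %d bytes" % (direction, len(data)))  # stand-in for IPS/DLP/logging
        return data

    def main():
        # Terminate the client's TLS session with a cert minted by the interception CA.
        server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        server_ctx.load_cert_chain("mitm-cert.pem", "mitm-key.pem")

        # Re-encrypt toward the real resource, validating its certificate normally.
        client_ctx = ssl.create_default_context()

        with socket.create_server(LISTEN_ADDR) as listener:
            raw_client, _ = listener.accept()
            with server_ctx.wrap_socket(raw_client, server_side=True) as client:
                with client_ctx.wrap_socket(socket.create_connection(UPSTREAM),
                                            server_hostname=UPSTREAM[0]) as upstream:
                    upstream.sendall(inspect("client->server", client.recv(65535)))
                    client.sendall(inspect("server->client", upstream.recv(65535)))

    if __name__ == "__main__":
        main()

The point of the sketch is simply that the gateway holds two independent TLS sessions and sits between them, which is exactly what gives the security apparatus its purview.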

We’ve been doing this “forever” with the “outside-in” model (remote access to internal resources), but the notion that folks are starting to do this ubiquitously on internal networks is the nuance.  AVC (application visibility and control) is the inside-out component (usually using transparent forward proxies with trusted PAC files on endpoints) with remote access and/or reverse proxies like WAFs and/or ADCs as the outside-in use case.

These two ops models were generally viewed and managed as separate problems.  Now thanks to Cloud, Mobility, virtualization and BYOE (bring your own everything) as well as the more skilled and determined set of adversaries, we’re seeing a convergence of the two.  To make the “inside-out” and “outside-in” more interesting, what we’re really talking about here is extending the use case to include “inside-inside” if you catch my drift.

Merging the use case approach at a fundamental architecture level can be useful; this methodology works regardless of source or destination.  It does require all sorts of incidental changes to things like IdM, AAA, certificate management, etc. but it’s one way that folks are trying to centralize the distributed — if you get what I mean.

I may draw a picture to illustrate what I mean, but do let me know if you’re doing this (many of the largest customers I know are) and whether it makes sense.

/Hoff

P.S. Remember back in the 80’s/90’s when 3Com bundled NIC cards with integrated IPSec VPN capability?  Yeah, that.


Shimel’s in Der Himmel & Stiennon’s A Mean-Un…NAC Dust-Up Part Deux.

May 3rd, 2008 2 comments

Nothing to see here folks.  Move along…

This is like a bad episode of "Groundhog Day" meets "Back To the Future." 

You know, when you wake every day to the same daymare where one person’s touting that features like NAC are the next flux capacitor while another compares its utility to that of sandpaper in the toilet roll dispensers in a truck stop restroom? 

I know Internet blog debates like this get me more excited than having my nipples connected to jumper cables and being waterboarded whilst simultaneously shocked with 1.21 Jigawatts…

Alan Shimel’s post ("Stiennon says NAC is dead – I must be in heaven!") in response to Stiennon’s entry ("Don’t even bother investing in Network Admission Control") is hysterical.

Why?

Because they’re the exact arguments (here and here) they had back in August 2007 when I refereed (see below) the squabble the first time around and demonstrated convincingly how they were both right and both wrong.  The silly little squabble — like most things — is all a matter of perspective.

I’d suggest that if you want a quick summary of the arguments without having to play blog pong, you can just read my summary from last year, as none of their arguments have changed.

/Hoff

P.S. The German word "himmel" translates to "heaven" (and sky) in English…funny given Shimmy’s post title, methinks…


Client Virtualization and NAC: The Fratto Strikes Back…

January 20th, 2008 5 comments

Attention NAC vendors who continue to barrage me via email/blog postings claiming I don’t understand NAC:  You’re missing the point of this post, which basically confirms my point; you’re not paying attention and are being myopic.
I included NAC with IPS in (the original) post here to illustrate two things:

(1) Current NAC solutions aren’t particularly relevant when you have centralized and virtualized client infrastructure and

(2) If you understand the issues with offline VMs in the server world and what it means for compliance and admission control on spin-up or when VMotioned, you could add a lot of value by adapting your products (if you’re software based) to do offline VM conformance/remediation and help prevent VM sprawl and inadvertent non-compliant VM spin-up…

But you go ahead and continue with your strategy…you’re doing swell so far convincing the market of your relevance.

Now back to our regular programming…

— ORIGINAL POST —


I sense a disturbance in the force…

Mike Fratto’s blog over at the NWC NAC Immersion Center doesn’t provide a method of commenting, so I thought I’d respond to his post here regarding my latest rant on how virtualization will ultimately and profoundly impact the IPS and NAC appliance markets titled "How the hypervisor is death by a thousand cuts to the network IPS/NAC appliance vendors."

I think Mike took a bit of a left turn when analyzing my comments because he missed my point.  Assuming I’m wrong, I’ll respond the best I can.

A couple of things really stood out in Mike’s comments and I’m going to address them in reverse order.  I think most of Mike’s comments strike an odd chord with me because my post was about what is going to happen to the IPS/NAC markets given virtualization’s impact and not necessarily what these products look like today.

Even though the focus of my post was not client virtualization, let’s take this one first:

Maybe I am missing something, but client virtualization just doesn’t seem to be in the cards today. Even if I am wrong, and I very well could be, I don’t think mixing client VM’s with server VM in the same hypervisor would be a good idea if for no other reason than the fact that a client VM could take down the hypervisor or suck up resources.

I don’t say this to be disrespectful, but it doesn’t appear like Mike understands how virtualization technology works.  I can’t understand what he means when he speaks of "…mixing client VM’s with server VM in the same hypervisor."  VMs sit atop the hypervisor, not *in* it.  Perhaps he’s suggesting that despite isolation and the entire operating premise of virtualization, it’s a bad idea to have a virtualized client instance co-located on the same physical host as a VM running a server instance?  Why?

Further, beyond theoretical hand wringing, I’d very much like to see a demo today of how a "…client VM could take down the hypervisor."

I won’t argue that client virtualization is still not as popular as server virtualization today, but according to folks like Gartner, it’s on the uptake, especially when it goes toward dealing with endpoint management and the consumerization of IT.  With entire product lines from folks like Citrix (Desktop Server, Presentation Server, XenDesktop) and VMware (VDI), it’s sort of a hard bite to swallow.

This is exactly the topic of my post here (Thin Clients: Does this laptop make my assets look fat?), underscored with a quick example by Thomas Bittman from Gartner:

Virtualization on the PC has even more potential than server virtualization to improve the management of IT infrastructure, according to Mr Bittman.  “Virtualization on the client is perhaps two years behind, but it is going to be much bigger. On the PC, it is about isolation and creating a managed environment that the user can’t touch. This will help change the paradigm of desktop computer management in organizations. It will make the trend towards employee-owned notebooks more manageable, flexible and secure.”

Today, I totally get that NAC is about edge deployment (access layer), keeping the inadvertent client polluter from bringing something nasty onto the network, making sure endpoints are compliant to network policy, and in some cases, controlling access to network resources:

NAC is, by definition, targeting hosts at the edge. The idea is to control access to the network by untrusted or untrustworthy hosts based on some number of conditions like authentication, host configuration, software, patch level, activity, etc. NAC is client facing regardless of whether you’re controlling access at the client edge or the data center edge.
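As a toy illustration of that kind of conditional admission decision, here’s a short Python sketch; the posture attributes, the patch-level baseline and the quarantine action are invented for the example and don’t come from any particular NAC product:

    # Toy admission-control decision based on reported endpoint posture.
    # Attribute names and the patch-level baseline are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class Posture:
        authenticated: bool
        av_running: bool
        patch_level: int

    MIN_PATCH_LEVEL = 42  # hypothetical policy baseline

    def admit(p):
        if not p.authenticated:
            return "deny"
        if not p.av_running or p.patch_level < MIN_PATCH_LEVEL:
            return "quarantine"  # e.g., shunt to a remediation VLAN
        return "allow"

    # An authenticated but under-patched host gets quarantined, not admitted.
    print(admit(Posture(authenticated=True, av_running=True, patch_level=40)))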

I understand that today’s version of NAC isn’t about servers, but the distinction between clients and servers blurs heavily due to virtualization, and NAC — much like IPS — is going to have to change to address this.  In fact, some might argue it already has.  Further, some of the functionality being discussed when using the TPM is very much NAC-like.  Remember, given the dynamic nature of VMs (and technology like VMotion) the reality is that a VM could turn up anywhere on a network.  In fact, I can run (I do today, actually) a Windows "server" in a VM on my laptop:

You could deploy NAC to control access by servers to the network, but I don’t think that is a particularly useful or effective strategy, mainly because I would hope that your servers are better maintained and better managed than desktops. Certainly, you aren’t going to have arbitrary users accessing the server desktop and installing software, launching applications, etc. The main threat to servers is if they come under the control of an attacker, so you really need to make sure your apps and app servers are hardened.

Within a virtualized environment (client and server) you won’t need a bunch of physical appliances or "NAC switches," as this functionality will be provided by a virtual appliance within a host or as a function of the trusted security subsystem embedded within the virtualization provider’s platform.

I think it’s a natural by-product of the productization of what we see as NAC platforms today, anyhow.  Most of the NAC solutions today used to be IPS products yesterday.  That’s why I grouped them together in this example.

This next paragraph almost makes my point entirely:

Client virtualization is better served with products like Citrix MetaFrame or Microsoft’s Terminal Services where the desktop configuration is dictated and controlled by IT and thus doesn’t suffer from the same problems that physical desktops do. Namely, in a centrally managed remote client situation, the administrator can more easily and effectively control the actions of a user and their interactions on the remote desktop. Drivers that are being pushed by NAC vendors and analysts, as well as responses to our own reader polls, relating the host condition like patch level, running applications, configuration, etc are more easily managed and should lead to a more controlled environment.

Exactly!  Despite perhaps his choice of products, if the client environment is centralized and virtualized, why would I need NAC (as it exists today) in this environment!?  I wouldn’t.  That was the point of the post!

Perhaps I did a crappy job of explaining my point, or maybe if I hadn’t included NAC alongside IPS, Mike wouldn’t have made that left turn, but I maintain that IPS and NAC both face major changes in having to deal with the impact virtualization will bring.

/Hoff


UPDATED: How the Hypervisor is Death By a Thousand Cuts to the Network IPS/NAC Appliance Vendors

January 18th, 2008 4 comments

Attention NAC vendors who continue to barrage me via email/blog postings claiming I don’t understand NAC:  You’re missing the point of this post, which basically confirms my point; you’re not paying attention and are being myopic.

I included NAC with IPS in this post to illustrate two things:

(1) Current NAC solutions aren’t particularly relevant when you have centralized and virtualized client infrastructure and

(2) If you understand the issues with offline VMs in the server world and what it means for compliance and admission control on spin-up or when VMotioned, you could add a lot of value by adapting your products (if you’re software based) to do offline VM conformance/remediation and help prevent VM sprawl and inadvertent non-compliant VM spin-up…

But you go ahead and continue with your strategy…you’re doing swell so far convincing the market of your relevance.

Now back to our regular programming…

— ORIGINAL POST —


From the "Out Of the Loop" Department…

Virtualization is causing IPS and NAC appliance vendors some real pain in the strategic planning department.  I’ve spoken to several product managers of IPS and NAC companies that are having to make some really tough bets regarding just what to do about the impact virtualization is having on their business.

They hem and haw initially about how it’s not really an issue, but 2 beers later, we’re speaking the same language…

Trying to align architecture, technology and roadmaps to the emerging tidal wave of consolidation that virtualization brings can be really hard.  It’s hard to differentiate where the host starts and the network ends…

In reality, firewall vendors are in exactly the same spot.  Check out this Network World article titled "Options seen lacking in firewall virtual server protection."  In today’s world, it’s almost impossible to distinguish a "firewall" from an "IPS" from a "NAC" device from a new-fangled "highly adaptive access control" solution (thanks, Vernier Autonomic Networks…)

It’s especially hard for vendors whose IPS/NAC software is tied to specialty hardware, unless of course all you care about is enforcing at the "edge" — wherever that is, and that’s the point.  The demarcation of those security domain diameters has now shrunk.  Significantly, and not just for servers, either.  With the resurgence of thin clients and new VDI initiatives, where exactly is the client/server boundary?

Prior to virtualization, network-based IPS/NAC vendors would pick arterial network junctions and either use a tap/SPAN port in an out-of-band deployment or slap a box inline between the "trusted" and "untrusted" sides of the links and that was that.  You’d be able to protect assets based on port, VLAN or IP address.

You obviously only see what traverses those pipes.  If you look at the problem I described here back in August of last year, where much of the communication takes place as intra-VM sessions on the same physical host that never actually touch the external "physical" network, you’ve lost precious visibility for detection, let alone prevention.

I think by now everyone recognizes how server virtualization impacts network and security architecture and basically provides four methods (potentially in combination) today for deploying security solutions:

  1. Keep all your host based protections intact and continue to circle the wagons around the now virtualized endpoint by installing software in the actual VMs
  2. When available, take a security solution provider’s virtual appliance version of their product (if they have one), install it on a host as a VM and configure the virtual networking within the vSwitch to provide the appropriate connectivity.
  3. Continue to deploy physical appliances between the hosts and the network
  4. Utilize a combination of host-based software and physical IPS/NAC hardware to provide off-load "switched" or "cut-through" policy enforcement between the two.

Each of these options has its pros and cons for both the vendor and the customer; trade-offs in manageability, cost, performance, coverage, scalability and resilience can be ugly.  Those that have both endpoint and network-based solutions are in a far more flexible place than those that do not.

Many vendors who have only physical appliance offerings are basically stuck adding 10Gb/s Ethernet connections to their boxes as they wait impatiently for options 5, 6 and 7 so they can "plug back in":

5.  Virtualization vendors will natively embed more security functionality within the hypervisor and continue integrating with trusted platform models

6.  Virtualization vendors will allow third parties to substitute their own vSwitches as a function of the hypervisor

7. Virtualization vendors will allow security vendors to utilize a "plug-in" methodology and interact directly with the VMM via API

These options would allow both endpoint software installed in the virtual machines as well as external devices to interact directly with the hypervisor with full purview of inter and intra-VM flows and not merely exist as a "bolted-on" function that lacks visibility and best-of-breed functionality.
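For flavor, here’s what option 7 might look like from a security vendor’s seat.  Everything in this sketch (the FakeVMM class, its flow-callback interface and the Flow fields) is invented for illustration, since no hypervisor exposed such an API at the time; it only shows the shape of the idea, namely a plug-in that gets to see flows that never touch the physical wire:

    # Hypothetical sketch of option 7: a security "plug-in" getting flow-level
    # purview directly from the VMM.  FakeVMM and its callback interface are
    # invented stand-ins; no real hypervisor API is being shown here.
    from dataclasses import dataclass

    @dataclass
    class Flow:
        src_vm: str
        dst_vm: str
        same_host: bool     # intra-host traffic never touches the physical wire
        dst_port: int

    class FakeVMM:
        """Stand-in for a hypervisor that lets security plug-ins see every flow."""
        def __init__(self):
            self.callbacks = []

        def register_flow_callback(self, cb):
            self.callbacks.append(cb)

        def deliver(self, flow):
            # The flow passes only if every registered plug-in allows it.
            return all(cb(flow) == "allow" for cb in self.callbacks)

    def ips_plugin(flow):
        # Stand-in for real IPS/NAC policy; note it sees intra-host flows too.
        return "drop" if flow.dst_port in (135, 445) else "allow"

    hypervisor = FakeVMM()
    hypervisor.register_flow_callback(ips_plugin)
    print(hypervisor.deliver(Flow("web-vm", "db-vm", same_host=True, dst_port=445)))  # False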

While we’re on the topic of adding 10Gb/s connectivity, it’s important to note that having 10Gb/s appliances isn’t always about how many Gb/s of IPS traffic you can handle, but also about consolidating what would otherwise be potentially dozens of trunked LACP 1Gb/s Ethernet and FC connections pouring out of each host, to manage both the aggregate bandwidth and the issues driven by a segmented network.

So to get the coverage across a segmented network today, vendors are shipping their appliances with tons of ports — not necessarily because they want to replace access switches, but rather to enable coverage and penetration.

On the other hand, most of the pure-play software vendors today who say they are "virtualization enabled" really mean that their product installs as a virtual appliance on a VM on a host.  The exposure these solutions have to traffic is entirely dependent upon how the vSwitches are configured.

…and it’s going to get even more hairy as the battle for the architecture of the DatacenterOS also rages.  The uptake of 10Gb/s Ethernet is also contributing to the mix as we see customers:

  • Upgrading from servers to blades
  • Moving from hosts and switches to clusters and fabrics
  • Evolving from hardware/software affinity to grid/utility computing
  • Transitioning from infrastructure to service layers in “the cloud”

Have you asked your IPS and NAC vendors who are hardware-bound how they plan to deal with this tsunami on their roadmaps within the next 12 months?  If not, grab a lifejacket.

/Hoff

UPDATE:  It appears nobody uses trackbacks anymore, so I’m resorting to activity logs, Google alerts and stubbornness to tell when someone’s referencing my posts.  Here are some interesting references to this post:

…also, this is right on the money:

I think I’ll respond to them on my blog with a comment on theirs pointing back over…

BrokeNAC Mountain – “I wish I knew how to quit you.”

June 25th, 2007 1 comment

An entire day and forum dedicated to NAC in NYC?  Huh.  I thought we did that at Interop and RSA already!?  I suppose it’s necessary to wade through all the, uh, information surrounding the second coming of network security.

If someone builds one for UTM, I will kill myself.   

Oh NAC…I wish I knew how to quit you!

(I was going to photoshop the poster to the left including Alan Shimel and changing the title to BrokeNAC Mountain, but I can’t find my Photoshop CD and I’ve got a plane to catch to Milan…)

I’ve made it clear that I think NAC (Network Admission Control and Network Access Control) is valuable and worth investing in as part of a layered defense.  It ain’t the silver bullet of security, however.  Maybe Stiennon can come up with a new name for it and it will be?

I’ve also made it clear that despite the biggest amount of hype since the Furby, NAC will become a feature as part of a conglomeration of solutions in the short term (24 months); it’s already a replacement blanket marketing term for companies that used to be SSL VPNs, then became IPSs, and are now NAC.  Look at the companies that now claim they’re NAC-focused.  That’s usually because the "market" they were in previously collapsed — just like NAC will.

It seems that NAC’s relationship with the world plays out just like a scene from Brokeback Mountain where the two main characters discuss whether the public sees through the thin facade of the uneasy relationship they project to the world — just like the front NAC puts on:

Ennis Del Mar: You ever get the feelin’… I don’t know, er… when you’re in town and someone looks at you all suspicious, like he knows? And then you go out on the pavement and everyone looks like they know too?

Jack Twist: [Casually] Well… maybe you oughta get out of there, you know? Find yourself someplace different. Maybe Texas.

Ennis Del Mar: [Sarcastically] Texas? Sure, maybe you can convince Alma to let you and Lureen adopt the girls. And we can just live together herding sheep. And it’ll rain money from LD Newsome and whiskey’ll flow in the streams – Jack, that’s real smart.

Jack Twist: Go to hell, Ennis. If you wanna live your miserable fuckin’ life, then go right ahead.

Ennis Del Mar: Fine.

Jack Twist: I was just thinkin’ out loud.

Ennis Del Mar: Yep, you’re a real thinker there. Goddamn. Jack fuckin’ Twist; got it all figured out, ain’t ya?

If the next NAC Forum is held in Texas, you’ll know the end of the world is near…’course there ain’t nuthin’ wrong with the heavens rainin’ money and streams full-a whiskey…

At any rate, I was catching up on my back-dated blog entries and just read Dom Wilde’s (Nevis Networks Illuminations Blog) summary of the Network Computing NAC 2007 Forum and couldn’t help but chuckle.  Shimel’s review seemed a little more upbeat compared to Dom’s, but since Alan got stalked by a blogger paparazzi in a three-wheeled, pedal-powered rickshaw, I can see why.

Snippet Summary from Dom’s Post:

It’s little wonder that people are confused about NAC.  Too many times during the day I found myself with a furrowed brow trying to delineate between reality and fiction…Disappointing moment of the day – 7 panelists on the OOB panel frying the audience’s collective brain by taking 10 minutes each to say "me too".  Result: half the audience didn’t return after lunch for more lively and concise discussions on in-line and framework based solutions, and more critically, to hear narratives and lessons learned from people who have deployed NAC.

Snippet Summary from Alan’s Post:

Anyway, it was a great way for people looking at deploying NAC to come up and touch and feed a real live NAC vendor. Ultimately, you still have to install the product and play with it yourself to see if it works.  There were lots of claims and NAC crap flying today.  I also would like to see more of a panel answering questions than just giving our elevator pitch powerpoints to the crowd.  Still a worthwhile day and a good job by Network Computing. I think all of the elevator pitches will be posted on the NC site soon.

Sounds great.

Both Dom and Alan’s companies provide NAC solutions.  Both were at the show.  Both seem to convey the sense that this was more circus than scholarly.  I’m not sure whether that’s because it was focused on NAC or because in general most conferences/forums are completely useless, but I’m interested in the opinion of anyone else who was there.

/Hoff

Really, There’s More to Security than Admission/Access Control…

June 16th, 2007 2 comments

Dr. Joseph Tardo over at the Nevis Networks Illuminations blog composed a reasonably well-balanced commentary regarding one or more of my posts in which I was waxing philosophical about my beliefs regarding keeping the network plumbing dumb and overlaying security as a flexible, agile, open and extensible services layer.

It’s clear he doesn’t think this way, but I welcome the discourse.  So let me make something clear:

Realistically, and especially in non-segmented flat networks, I think there are certain low-level security functions that will do well by being served up by switching infrastructure as security functionality commoditizes, but I’m not quite sure yet, for the most part, how or where I draw the line between utility and intelligence.  I do, however, think that NAC is one of those utility services.

I’m also unconvinced that access-grade, wiring closet switches are architected to scale in functionality, efficacy or performance, or to provide any more value or differentiation (other than port density) than the normal bolt-on appliances which continue to cause massive operational and capital expenditure due to continued forklifts over time.  Companies like Nevis and ConSentry quietly admit this too, which is why they both have "secure switches" AND appliances that sit on top of the network…

Joseph suggested he was entering into a religious battle in which he summarized many of the approaches to security that I have blogged about previously and I pointed out to him on his blog that this is exactly why I practice polytheism 😉 :

In case you aren’t following the religious wars going on in the security blogs and elsewhere, let me bring you up to date.

It goes like this. If you are in the client software business, then security has to be done in the endpoints and the network is just dumb “plumbing,” or rather, it might as well be because you can’t assume anything about it. If you sell appliances that sit here and there in the network, the network sprouts two layers, with the “plumbing” part separated from the “intelligence.” Makes sense, I guess. But if you sell switches and routers then the intelligence must be integrated in with the infrastructure. Now I get it. Or maybe I’m missing the point, what if you sell both appliances and infrastructure?

I believe that we’re currently forced to deploy defense in depth due to the shortcomings of solutions today.  I believe the "network" will not and cannot deliver all the security required.  I believe we’re going to have to invest more in secure operating systems and protocols.  I further believe that we need to be data-centric in our application of security.  I do not believe in single-point product "appliances" that are fundamentally functionally handicapped.  As a delivery mechanism to deliver security that matters across the network, I believe in this.

Again, the most important difference between what I believe and what Joseph points out above is that the normal class of "appliances" he’s trying to suggest I advocate simply aren’t what I advocate at all.  In fact, one might surprisingly confuse the solutions I do support as "infrastructure" — they look like high-powered switches with a virtualized blade architecture integrated into the solution.

It’s not an access switch, it’s not a single function appliance and it’s not a blade server and it doesn’t suffer from the closed proprietary single vendor’s version of the truth.  To answer the question, if you sell and expect to produce both secure appliances and infrastructure, one of them will come up short.   There are alternatives, however.

So why leave your endpoints, the ones that have all those vulnerabilities that created the security industry in the first place, to be hit on by bots, “guests,” and anyone else that wants to? I don’t know about you, but I would want both something on the endpoint, knowing it won’t be 100% but better than nothing, and also something in the network to stop the nasty stuff, preferably before it even got in.

I have nothing to disagree with in the paragraph above — short of the example of mixing active network defense with admission/access control in the same sentence; I think that’s confusing two points.   Back to the religious debate as Joseph drops back to the "Nevis is going to replace all switches in the wiring closet" approach to security via network admission/access control:

Now, let’s talk about getting on the network. If the switches are just dumb plumbing they will blindly let anyone on, friend or foe, so you at least need to beef up the dumb plumbing with admission enforcement points. And you want to put malware sensors where they can be effective, ideally close to entry points, to minimize the risk of having the network infrastructure taken down. So, where do you want to put the intelligence, close to the entry enforcement points or someplace further in the bowels of the network where the dumb plumbing might have plugged-and-played a path around your expensive intelligent appliance?

That really depends upon what you’re trying to protect; the end point, the network or the resources connected to it.  Also, I won’t/can’t argue about wanting to apply access/filtering (sounds like IPS in the above example) controls closest to the client at the network layer.  Good design philosophy.   However, depending upon how segmented your network is, the types, value and criticality of the hosts in these virtual/physical domains, one may choose to isolate by zone or VLAN and not invest in yet another switch replacement at the access layer.

If the appliance is to be effective, it has to sit at a choke point and really be an enforcement point. And it has to have some smarts of its own. Like the secure switch that we make.

Again, that depends upon your definition of enforcement and applicability.  I’d agree that in flat networks, you’d like to do it at the port/host level, though replacing access switches to do so is usually not feasible in large networks given investments in switching architectures.  Typical fixed configuration appliances overlaid don’t scale, either.

Furthermore, depending upon your definition of what an enforcement zone and its corresponding diameter is (port, VLAN, IP subnet), you may not care.  So putting that "appliance" in place may not be as foreboding as you wager, especially if it overlays across these boundaries satisfactorily.

We will see how long it takes before these new-fangled switch vendors that used to be SSL VPNs, then became IPS appliances, and have now "evolved" into NAC solutions become whatever the next buzzword/technology of tomorrow represents…especially now with Cisco’s revitalized technology refresh for "secure" access switches in the wiring closets.  Caymas, Array, and Vernier (amongst many) are perfect examples.

When it comes down to it, in the markets Crossbeam serves — and especially the largest enterprises — they are happy with their switches, they just want the best security choice on top of it provided in a consolidated, agile and scalable architecture to support it.

Amen.

/Hoff

Network Security is Dead…It’s All About the Host.

May 28th, 2007 6 comments

No, not entirely as it’s really about the data, but I had an epiphany last week.

I didn’t get any on me, but I was really excited about the — brace yourself — future of security in a meeting I had with Microsoft.  It reaffirmed my belief that while some low-hanging security fruit will be picked off by the network, the majority of the security value won’t be delivered by it.

I didn’t think I’d recognize just how much of it — in such a short time — will ultimately make its way back into the host (OS,) and perhaps you didn’t either.

We started with centralized host-based computing, moved to client-server.  We’ve had Web1.0, are in the beginnings of WebX.0, and I ultimately believe that we’re headed back to a centralized host-based paradigm now that the network transport is fast, reliable and cheap.

That means that a bunch of the stuff we use today to secure the "network" will gravitate back towards the host.  I’ve used Scott McNealy’s mantra as he intended it in order to provide some color to conversations before, but I’m going to butcher it here.

While I agree that in the abstract the "Network is the Computer," in order to secure it, you’re going to have to treat the "network" like an OS…hard to do.  That’s why I think more and more security will make its way back to the actual "computer" instead.

So much of the strategy linked to large security vendors sees an increase in footprint back on the host.  It’s showing back up there today in the guise of AV, HIPS, configuration management, NAC and Extrusion Prevention, but it’s going to play a much, much loftier role as time goes on as the level of interaction and interoperability must increase.  Rather than put 10+ agents on a box, imagine if that stuff was already built in?

Heresy, I suppose.

I wager that the "you can’t trust the endpoint" and "all security will make its way into the switch" crowds will start yapping on this point, but before that happens, let me explain…

The Microsoft Factor

I was fortunate enough to sit down with some of the key players in Microsoft’s security team last week and engage in a lively bit of banter regarding both practical and esoteric elements of where security has been, is now and will be in the near future.

On the tail of Mr. Chambers’ Interop keynote, the discussion was all abuzz regarding collaboration and WebX.0 and the wonders that will come of the technology levers in the near future as well as the, ahem, security challenges that this new world order will bring.  I’ll cover that little gem in another blog entry.

Some of us wanted to curl up into a fetal position.  Others saw a chance to correct material defects in the way in which the intersection of networking and security has been approached.  I think the combination of the two is natural and healthy and ultimately quite predictable in these conversations.

I did a bit of both, honestly.

As you can guess, given who I was talking to, much of what was discussed found its way back to a host-centric view of security with a heavy anchoring in the continued evolution of producing more secure operating systems, more secure code, more secure protocols and strong authentication paired with encryption.

I expected to roll my eyes a lot and figured that our conversation would gravitate towards UAC and that a bulk-helping of vapor functionality would be dispensed with the normal disclaimers urging "…when it’s available one day," ladled generously into the dog-food bowls the Microsofties were eating from.

I am really glad I was wrong, and it just goes to show you that it’s important to consider a balanced scorecard in all this; listen with two holes, talk with one…preferably the correct one 😉

I may be shot for saying this in the court of popular opinion, but I think Microsoft is really doing a fantastic job in their renewed efforts toward improving security.  It’s not perfect, but the security industry is such a fickle and bipolar mistress — if you’re not 100% you’re a zero.

After spending all this time urging people that the future of security will not be delivered in the network proper, I have not focused enough attention on the advancements that are indeed creeping their way into the OS’s toward a more secure future, as this inertia orthogonally reinforces my point.

Yes, I work for a company that provides network-centric security offerings.  Does this contradict the statement I just made?  I don’t think so, and neither did the folks from Microsoft.  There will always be a need to consolidate certain security functionality that does not fit within the context of the host — at least within an acceptable timeframe as the nature of security continues to evolve.  Read on.

The network will become transparent.  Why?

In this brave new world, mutually-authenticated and encrypted network communications won’t be visible to the majority of the plumbing that’s transporting it, so short of the specific shunts to the residual overlay solutions that will still be present to the network in forms of controls that will not make their way to the host, the network isn’t going to add much security value at all.

The Jericho Effect

What I found interesting is that I’ve enjoyed similar discussions with the distinguished fellows of the Jericho Forum wherein, after we’ve debated the merits of WHAT you might call it, the notion of HOW "deperimeterization," "reperimeterization," or (my favorite) "radical externalization" happens weighs heavily on the evolution of security as we know it.

I have to admit that I’ve been a bit harsh on the Jericho boys before, but Paul Simmonds and I (or at least I did) came to the realization that my allergic reaction wasn’t to the concepts at hand, but rather the abrasive marketing of the message.  Live and learn.

Both sets of conversations basically see the pendulum effect of security in action in this oversimplification of what Jericho posits is the future of security and what Microsoft can deliver — today:

Take a host with a secured OS, connect it into any network using whatever means you find appropriate, without regard for having to think about whether you’re on the "inside" or "outside."  Communicate securely, access and exchange data in policy-defined "zones of trust" using open, secure, authenticated and encrypted protocols.

If you’re interested in the non-butchered more specific elements of the Jericho Forum’s "10 Commandments," see here.

What I wasn’t expecting in marrying these two classes of conversation is that this future of security is much closer and notably much more possible than I readily expected…with a Microsoft OS, no less.  In fact, I got a demonstration of it.  It may seem like no big deal to some of you, but the underlying architectural enhancements to Microsoft’s Vista and Longhorn OS’s are a fantastic improvement on what we have had to put up with thus far.

One of the Microsoft guys fired up his laptop with a standard-issue off-the-shelf edition of Vista,  authenticated with his smartcard, transparently attached to the hotel’s open wireless network and then took me on a tour of some non-privileged internal Microsoft network resources.

Then he showed me some of the ad-hoc collaborative "People Near Me" peer2peer tools built into Vista — same sorts of functionality…transparent, collaborative and apparently quite secure (gasp!) all at the same time.

It was all mutually authenticated and encrypted and done so transparently to him.

He didn’t "do" anything; no VPN clients, no split-brain tunneling, no additional Active-X agents, no SSL or IPSec shims…it’s the integrated functionality provided by both IPv6 and IPSec in the NextGen IP stack present in Vista.

And in his words "it just works."   Yes it does.

He basically established connectivity and his machine reached out to a reachable read-only DC (after auth. and with encryption) which allowed him to transparently resolve "internal" vs. "external" resources.  Yes, the requirements of today expect that the OS must still evolve to prevent exploit of the OS, but this too shall become better over time.

No, it obviously doesn’t address what happens if you’re using a Mac or Linux, but the pressure will be on to provide the same sort of transparent, collaborative and secure functionality across those OS’s, too.

Allow me my generalizations — I know that security isn’t fixed and that we still have problems, but think of this as a half-glass full, willya!?

One of the other benefits I got from this conversation is the reminder that as Vista and Longhorn default to IPv6 natively (they can do both v4 & v6 dynamically), as enterprises upgrade, the network hardware and software (and hence the existing security architecture) must also be able to support IPv6 natively.  It’s not just the government pushing v6; large enterprises are now standardizing on it, too.

Here are some excellent links describing the NextGen IP stack in Vista, the native support for IPSec (goodbye, VPN market) and IPv6 support.

Funny how people keep talking about Google being a threat to Microsoft.  I think that the network giants like Cisco might have their hands full with Microsoft…look at how each of them are maneuvering.

/Hoff
{ Typing this on my Mac…staring @ a Vista Box I’m waiting to open to install within Parallels 😉 }

Security: “Built-in, Overlay or Something More Radical?”

May 10th, 2007 No comments

I was reading Joseph Tardo’s (Nevis Networks) new Illuminations blog and found the topic of his latest post, "Built-in, Overlay or Something More Radical?", regarding the possible future of network security quite interesting.

Joseph (may I call you Joseph?) recaps the topic of a research draft from Stanford funded by the "Stanford Clean Slate Design for the Internet" project that discusses an approach to network security called SANE.   The notion of SANE (AKA Ethane) is a policy-driven security services layer that utilizes intelligent centrally-located services to replace many of the underlying functions provided by routers, switches and security products today:

Ethane is a new architecture for enterprise networks which provides a powerful yet simple management model and strong security guarantees.  Ethane allows network managers to define a single, network-wide, fine-grain policy, and then enforces it at every switch.  Ethane policy is defined over human-friendly names (such as "bob", "payroll-server", or "http-proxy") and dictates who can talk to whom and in which manner.  For example, a policy rule may specify that all guest users who have not authenticated can only use HTTP and that all of their traffic must traverse a local web proxy.

Ethane has a number of salient properties difficult to achieve with network technologies today.  First, the global security policy is enforced at each switch in a manner that is resistant to spoofing.  Second, all packets on an Ethane network can be attributed back to the sending host and the physical location in which the packet entered the network.  In fact, packets collected in the past can also be attributed to the sending host at the time the packets were sent — a feature that can be used to aid in auditing and forensics.  Finally, all the functionality within Ethane is provided by very simple hardware switches.

The trick behind the Ethane design is that all complex functionality, including routing, naming, policy declaration and security checks, is performed by a central controller (rather than in the switches as is done today).  Each flow on the network must first get permission from the controller, which verifies that the communication is permissible by the network policy.  If the controller allows a flow, it computes a route for the flow to take, and adds an entry for that flow in each of the switches along the path.

With all complex function subsumed by the controller, switches in Ethane are reduced to managed flow tables whose entries can only be populated by the controller (which it does after each successful permission check).  This allows a very simple design for Ethane switches using only SRAM (no power-hungry TCAMs) and a little bit of logic.
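To make the controller/flow-table split concrete, here’s a toy Python sketch of an Ethane-style permission check; the policy format, the names and the switch identifiers are invented for illustration and shouldn’t be read as the actual Ethane implementation:

    # Toy Ethane-style central controller: policy lives in one place, switches
    # are dumb flow tables the controller populates after a permission check.
    # Policy format, names and switch IDs are invented for this sketch.
    POLICY = {
        ("bob", "payroll-server"): {"http"},
        ("guest", "http-proxy"): {"http"},   # unauthenticated guests: proxy only
    }

    FLOW_TABLES = {"sw1": [], "sw2": []}     # per-switch flow tables

    def permit_flow(src, dst, service, path):
        """Check the network-wide policy; if allowed, program switches on the path."""
        if service not in POLICY.get((src, dst), set()):
            return False                     # flow never gets a table entry
        for switch in path:
            FLOW_TABLES[switch].append((src, dst, service))
        return True

    print(permit_flow("bob", "payroll-server", "http", ["sw1", "sw2"]))  # True
    print(permit_flow("guest", "payroll-server", "http", ["sw1"]))       # False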
I like many of the concepts here, but I’m really wrestling with the scaling concerns that arise when I forecast the literal bottlenecking of admission/access control proposed therein.

Furthermore, and more importantly, while SANE speaks to being able to define who "Bob" is and what infrastructure makes up the "payroll server," this solution seems to provide no way of enforcing policy based on content in the context of the data flowing across it.  Integrating access control with the pseudonymity offered by integrating identity management into policy enforcement is only half the battle.

The security solutions of the future must evolve to divine and control not only vectors of transport but also the content and relative access that the content itself defines dynamically.

I’m going to suggest that by bastardizing one of the Jericho Forum’s commandments for my own selfish use, the network/security layer of the future must ultimately respect and effect disposition of content based upon the following rule (independent of the network/host):

Access to data should be controlled by security attributes of the data itself.

  • Attributes can be held within the data (DRM/Metadata) or could be a separate system.
  • Access / security could be implemented by encryption.
  • Some data may have “public, non-confidential” attributes.
  • Access and access rights have a temporal component. 


Deviating somewhat from Jericho’s actual meaning, I am intimating that somehow, somewhere, data must be classified and must self-describe the policies that govern how it is published and consumed; ultimately this security metadata can then be used by the central policy enforcement mechanisms to describe who is allowed to access the data, from where, and where it is allowed to go.
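As a toy illustration of data that self-describes its access policy (including the temporal component from the bullets above), here’s a short Python sketch; the metadata schema is invented for the example:

    # Toy "access controlled by attributes of the data itself" check.
    # The metadata schema here is invented for this sketch.
    import time

    doc = {
        "body": b"...payroll data...",
        "meta": {
            "classification": "confidential",
            "allowed_roles": {"payroll", "auditor"},  # could also live out-of-band
            "expires": time.time() + 86400,           # temporal component
        },
    }

    def may_read(role, meta):
        if meta["classification"] == "public":        # public, non-confidential data
            return True
        return role in meta["allowed_roles"] and time.time() < meta["expires"]

    print(may_read("auditor", doc["meta"]))  # True, until the grant expires
    print(may_read("guest", doc["meta"]))    # False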

…Back to the topic at hand, SANE:

As Joseph alluded, SANE would require replacing (or not using much of the functionality of) currently-deployed routers, switches and security kit.  I’ll let your imagination address the obvious challenges with this design.

Without delving deeply, I’ll use Joseph’s categorization of “interesting-but-impractical.”

/Hoff

It’s a sNACdown! Cage Match between Captain Obvious and Me, El Rational.

April 4th, 2007 3 comments

CAUTION:  I use the words "Nostradramatic prescience" in this blog posting.  Anyone easily offended by such poetic buggery should stop reading now.  You have been forewarned.

That’s it.  I’ve had it.  I’ve taken some semi-humorous jabs at Mr. Stiennon before, but my contempt for what is just self-serving PFD (Pure F’ing Drivel) has hit an all-time high.  This is an out-and-out smackdown.  I make no bones about it.

Richard is at it again.  It seems that stating the obvious and taking credit for it has become an art form. 

Richard expects to be congratulated for his prophetic statements that are basically a told-you-so to any monkey dumb enough to rely only on Network Admission Control (see below) as his/her only security defense.  Furthermore, he has the gall to suggest that by obfuscating the bulk of the arguments made in contradiction of his point, he wins by default and he’s owed some sort of ass-kissing:

And for my fellow bloggers who I rarely call out using my own blog: are you ready to retract your "founded on quicksand" statements and admit that you were wrong and Stiennon was right once again?  🙂

Firstly, there’s a REASON you "rarely call out" other people on your blog, Richard.  It has something to do with a lack of frequency of actually being right, or more importantly, of others being wrong.

I mean, the rest of us poor ig’nant blogger folk just cower in the shadows of your earth-shattering predictions for 2007: cybercrime is on the rise, identity theft is a terrible problem, attacks against financial services companies will increase and folks will upload illegal videos to YouTube.

I’m sure the throngs of those who rise up against Captain Obvious are already sending their apology Hallmarks.  I’ll make sure to pre-send those congratulatory balloons now so I can save on shipping, eh?

Secondly, suggesting that others are wrong when you only present 1/10th of the debate is like watching two monkeys screw a football.  It’s messy, usually ends up with one chimp having all the fun and nobody will end up wanting to play ball again with the "winner."  Congratulations, champ.

What the heck am I talking about?  Way back when, a bunch of us had a debate concerning the utility of NAC.  More specifically, we had a debate about the utility, efficacy and value of NAC as part of an overall security strategy.  The debate actually started between Richard and Alan Shimel.

I waded in because I found them both to be right and both to be wrong.  What I suggested is that NAC by ITSELF is not effective and must be deployed as part of a well-structured layered defense.  I went so far as to suggest that Richard’s ideas that the network ‘fabric’ could also do this by itself were also flawed.  Interestingly, we all agreed that trusting the end-point ALONE to report on its state and gain admission to the network was a flawed idea.

Basically, I suggested that securing one’s assets came down to common sense, the appropriate use of layered defense in both the infrastructure and on top of it and utilizing NAC when and how appropriate.  You know, rational security.

The interesting thing to come out of that debate is that to Richard, it became clear that the acronym "NAC" appeared to only mean Network ADMISSION Control.  Even more specifically, it meant Cisco’s version of Network ADMISSION Control.  Listen to the Podcast.  Read the blogs.  It’s completely one-dimensional and unrealistic to group every single NAC product and compare it to Cisco.  He did this intentionally so as to prove an equally one-dimensional point.  Everyone already knows that pre-admission control is nothing you solely rely on for assured secure connectivity.

To the rest of us who participated in that debate, NAC meant not only Network ADMISSION Control, but also Network ACCESS Control…and not just Cisco’s which we all concluded, pretty much sucked monkey butt.  The problem is that Richard’s assessment of (C)NAC is so myopic that he renders any argument concerning NAC (both) down to a single basal point that nobody actually made.

It goes something like this and was recorded thusly by his lordship himself from up on high on a tablet somewhere.  Richard’s "First Law of Network Security":

Thou shalt not trust an end point to report its own state

Well, no shit.  Really!?  Isn’t it more important to not necessarily trust that the reported state is accurate, but to take the status with a grain of salt and use it as a component of assessing the fitness of a host to participate as a citizen of the network?  Trust but verify?

Are there any other famous new laws of yours I should know about?  Maybe like:

Thou shalt not use default passwords
Thou shalt not click on hyperlinks in emails
Thou shalt not use eBanking apps on shared computers in Chinese Internet Cafes
Thou shalt not deploy IDS’ and not monitor them
Thou shalt not use "any any any allow" firewall/ACL rules
Thou shalt not allow SMTP relaying
Thou shalt not use the handle hornyhussy in the #FirewallAdminSingles IRC channel

{By the way, I think using the phrase ‘…shalt not’ is actually a double-negative?} [Ed: No, it’s not]

Today Richard blew his own horn to try and reinforce his Nostradramatic prescience when he commented on how presenters at Black Hat further demonstrated that you can spoof an endpoint’s compliance reporting to the interrogator with Cisco’s NAC product, using a toolkit created to do just that.

Oh, the horror!  You mean malware might actually fake an endpoint into thinking it’s not compromised or spoof the compliance in the first place!?  What a novel idea.  Not.  Welcome to the world of amorphous polymorphic malware.  Been there, done that, bought the T-shirt.  AV has been dealing with this for quite a while.  It ain’t new.  Bound to happen again.

Does it make NAC useless?  Nope.  Does it mean that we need greater levels of integrity checking and further in-depth validation of state?  Yep.  ‘Nuff said.

Let me give you Hoff’s "First Law of Network Security" Blogging:

Thou shalt not post drivel bait, Troll.

It’s not as sexy sounding as yours, but it’s immutable, non-negotiable and 100% free of trans-fatty acids.

/Hoff

(Written from the lobby of the Westford Regency Hotel.  Drinking…nothing, unfortunately.)

If it walks like a duck, and quacks like a duck, it must be…?

April 2nd, 2007 5 comments

Blackhatvswhitehat
Seriously, this really wasn’t a thread about NAC.  It’s a great soundbite to get people chatting (arguing) but there’s a bit more to it than that.  I didn’t really mean to offend those NAC-Addicts out there.

My last post was an exploration of security functions and their status (or even migration/transformation) as either a market or a feature included in a larger set of features.  Alan Shimel responded to my comments, specifically regarding my opinion that NAC is now rapidly becoming a feature and won’t be a competitive market for much longer.

Always the quick wit, Alan suggested that UTM was a "technology" that is going to become a feature, much like my description of NAC’s fate.  Besides the fact that UTM isn’t a technology but rather a consolidation of lots of other technologies that won’t stand alone, I found a completely orthogonal statement that Alan made that caused my head to spin as a security practitioner.

My reaction stems from the repeated belief that there should be separation of delivery between the network plumbing, the security service layers and ultimately the application(s) that run across them.  Note well that I’m not suggesting that common instrumentation, telemetry and disposition shouldn’t be collaboratively shared, but their delivery and execution ought to be discrete.  Best tool for the job.

Of course, this very contention is the source of much of the disagreement between me and many others who believe that security will just become absorbed into the "network."  It seems now that Alan is suggesting that the model of combining all three is going to be something in high demand (at least in the SME/SMB) — much in the same way Cisco does:

The day is rapidly coming when people will ask why they would buy a box that all it does is a bunch of security stuff.  If it is going to live on the network, why would the network stuff not be on there too, or the security stuff on the network box?

Firstly, multi-function devices that blend security and other features on the "network" aren’t exactly new.

That’s what the Cisco ISR platform is becoming now, what with the whole Branch Office battle raging, and back in ’99 (the first thing that pops into my mind) a bunch of my customers bought and deployed WhistleJet multi-function servers which had DHCP, print server, email server, web server, file server, and security functions such as a firewall/NAT baked in.

But that’s neither here nor there, because the thing I’m really, really interested in is Alan’s decidedly non-security-focused approach to prioritizing utility over security, given that he works for a security company.

I’m all for bang for the buck, but I’m really surprised that he would make a statement like this within the context of a security discussion.

That is what Mitchell has been talking about in terms of what we are doing, and we are going to go public Monday.  Check back then to see the first small step in the leap of UTM’s becoming a feature of Unified Network Platforms.

Virtualization is a wonderful thing.  It’s also got some major shortcomings.  The notion that just because you *can* run everything under the sun on a platform doesn’t always mean that you *should* and often it means you very much get what you pay for.  This is what I meant when I quoted Lee Iacocca when he said "People want economy and they will pay any price to get it."

How many times have you tried to consolidate all those multi-function devices (PDA, phone, portable media player, camera, etc.) down into one device?  Never works out, does it?  Ultimately you get fed up with inconsistent quality levels, so you buy the next megapixel camera that comes out with image stabilization.  Then you get the new video iPod, then…

Alan’s basically agreed with me on my original point discussing features vs. markets, and the UTM vs. UNP thing is merely a hand-waving marketing exercise.  Move on folks, nothing to see here.

’nuff said.

/Hoff

(Written sitting in front of my TV watching Bill Maher drinking a Latte)