
Archive for the ‘De-Perimeterization’ Category

Take5 (Episode #5) – Five Questions for Allwyn Sequeira, SVP of Product Operations, Blue Lane

August 21st, 2007 18 comments

This fifth episode of Take5 interviews Allwyn Sequeira, SVP of Product Operations for Blue Lane.  

First a little background on the victim:

Allwyn Sequeira is Senior Vice President of Product Operations at Blue Lane Technologies, responsible for managing the overall product life cycle, from concept through research, development and test, to delivery and support. He was previously the Senior Vice President of Technology and Operations at netVmg, an intelligent route control company acquired by InterNap in 2003, where he was responsible for the architecture, development and deployment of the industry-leading flow control platform. Prior to netVmg, he was founder, Chief Technology Officer and Executive Vice President of Products and Operations at First Virtual Corporation (FVC), a multi-service networking company that had a successful IPO in 1998. Prior to FVC, he was Director of the Network Management Business Unit at Ungermann-Bass, the first independent local area network company. Mr. Sequeira has previously served as a Director on the boards of FVC and netVmg.


Mr. Sequeira started his career as a software developer at HP in the Information Networks Division, working on the development of TCP/IP protocols. During the early 1980’s, he worked on the CSNET project, an early realization of the Internet concept. Mr. Sequeira is a recognized expert in data networking, with twenty-five years of experience in the industry, and has been a featured speaker at industry-leading forums like Networld+Interop, Next Generation Networks, ISP Con and RSA Conference.

Mr. Sequeira holds a Bachelor of Technology degree in Computer Science from the Indian Institute of Technology, Bombay, and a Master of Science in Computer Science from the University of Wisconsin, Madison.

Allwyn, despite all this good schoolin’, forgot to send me a picture, so he gets what he deserves 😉
(Ed: Yes, those of you quick enough were smart enough to detect that the previous picture was of Brad Pitt and not Allwyn.  I apologize for the unnecessary froth-factor.)

 Questions:

1) Blue Lane has two distinct product lines, VirtualShield and PatchPoint.  The former is a software-based solution which provides protection for VMware Infrastructure 3 virtual servers as an ESX VM plug-in whilst the latter offers a network appliance-based solution for physical servers.  How are these products different from virtual switch IPSs like Virtual Iron’s or from in-line network-based IPSs?

IPS technologies have been charged with the incredible mission of trying to protect everything from anything.  Overall they’ve done well, considering how much the perimeter of the network has changed and how sophisticated hackers have become. Much of their core technology, however, was relevant and useful when hackers could be easily identified by their signatures. As many have proclaimed, those days are coming to an end.

A defense department official recently quipped, "If you offer the same protection for your toothbrushes and your diamonds you are bound to lose fewer toothbrushes and more diamonds."  We think that data center security similarly demands specialized solutions.  The concept of an enterprise network has become so ambiguous when it comes to endpoints, devices, supply chain partners, etc., that we think it’s time to think more realistically in terms of trusted, yet highly available zones within the data center.

It seems clear at this point that different parts of the network need very different security capabilities.  Servers, for example, need highly accurate solutions that do not block or impede good traffic and can correct bad traffic, especially when it comes to closing network-facing vulnerability windows.  They need to maintain availability with minimal latency for starters, and that has been a sort of Achilles heel for signature-based approaches.  Of course, signatures also bring considerable management burdens over and beyond their security capabilities.

No one is advocating turning off the IPS, but rather approaching servers with more specialized capabilities.  We started focusing on servers years ago and established very sophisticated application and protocol intelligence, which has allowed us to correct traffic inline without the noise, suspense and delay that general purpose network security appliance users have come to expect.

IPS solutions depend on deep packet inspection, typically at the perimeter, based on regexp pattern matching for exploits.  Emerging challenges with this approach have made alert and block modes absolutely necessary, as most IPS solutions aren’t accurate enough to be trusted in full-library block mode.

Blue Lane uses a vastly different approach.  We call it deep flow inspection/correction for known server vulnerabilities based on stateful decoding up to layer 7.  We can alert, block and correct, but most of our deployments are in correct mode, with our full capabilities enabled.  From an operational standpoint we have substantially different impacts.

A typical IPS may have 10K signatures while experts recommend turning on just a few hundred.  That kind of marketing shell game (find out what really works) means that there will be plenty of false alarms, false positives and negatives and plenty of tuning.  With polymorphic attacks signature libraries can increase exponentially while not delivering meaningful improvements in protection. 

Blue Lane supports about 1000 inline security patches across dozens of very specific server vulnerabilities, applications and operating systems.  We generate very few false alarms and minimal latency.  We don’t require ANY tuning.  Our customers run our solution in automated, correct mode.

The traditional static signature IPS category has evolved into an ASIC war between some very capable players for the reasons we just discussed.  Exploding variations of exploits and vectors mean that exploit-centric approaches will require more processing power.

Virtualization is pulling the data center into an entirely different direction, driven by commodity processors.  So of course our VirtualShield solution was a much cleaner setup with a hypervisor; we can plug into the hypervisor layer and run on top of existing hardware, again with minimal latency and footprint.

You don’t have to be a Metasploit genius to evade IPS signatures.  Our higher layer 7 stateful decoding is much more resilient. 
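(Ed: to make the signature-versus-vulnerability distinction concrete, here’s a minimal, purely hypothetical Python sketch.  It is not Blue Lane code; the protocol, the 64-byte field limit and the patterns are all made up for illustration.)

import re

# Purely illustrative: an exploit-centric signature looks for one known attack string,
# so a trivially mutated exploit slips right past it.
EXPLOIT_SIGNATURE = re.compile(rb"USER \x41{200,}\x90\x90")   # one specific payload pattern

def signature_ips(packet: bytes) -> str:
    return "block" if EXPLOIT_SIGNATURE.search(packet) else "allow"

# A vulnerability-centric check instead decodes the protocol field and enforces the limit
# a patch would enforce (an assumed 64-byte cap on the USER argument), so any variant of
# the exploit is caught regardless of its byte pattern.
MAX_USER_LEN = 64   # assumed, for illustration only

def vulnerability_shield(packet: bytes) -> str:
    if packet.startswith(b"USER "):
        argument = packet[5:].rstrip(b"\r\n")
        if len(argument) > MAX_USER_LEN:
            return "correct"   # e.g. trim/reject the oversized field but keep the session alive
    return "allow"

evil = b"USER " + b"B" * 300 + b"\r\n"                   # same overflow, different filler byte
print(signature_ips(evil), vulnerability_shield(evil))   # allow correct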

2) With zero-days on the rise, pay-for-play vulnerability research and now Zero-Bay (WabiSabiLabi) vulnerability auctions and the like, do you see an uptick in customer demand for vulnerability shielding solutions?

Exploit-signature technologies are meaningless in the face of evanescent, polymorphic threats and the 0-day exploits that result.  Slight modifications to exploits can bypass IPS signatures, even against known vulnerabilities.  Blue Lane technology provides 0-day protection for any variant of an exploit against known vulnerabilities.  No technology can provide ultimate protection against 0-day exploits based on 0-day vulnerabilities; however, that requires a different class of hacker.

3) As large companies start to put their virtualization strategies in play, how do you see customers addressing the security of their virtualized infrastructure?  Do they try to adapt existing layered security methodologies, and where do these fall down in a virtualized world?

I explored this topic in depth at the Next Generation Data Center conference last week.  Also, your readers might be interested in listening to a recent podcast: The Myths and Realities of Virtualization Security: An Interview.

To summarize, there are a few things that change with virtualization, that folks need to be aware of.  It represents a new architecture.  The hypervisor layer represents the un-tethering and clustering of VMs, and centralized control.  It introduces a new virtual network layer.  There are entirely new states of servers, not anticipated by traditional static security approaches (like instant create, destroy, clone, suspend, snapshot and revert to snapshot). 

Then you’ll see unprecedented levels of mobility and new virtual appliances and black-boxing of complex stacks, including embedded databases.  Organizations will have to work out who is responsible for securing this very fluid environment.  We’ll also see unprecedented scalability, with Infiniband cores attaching LAN/SAN out to hundreds of ESX hypervisors and thousands of VMs.

Organizations will need the capability to shield these complex, fluid environments, because trying to keep track of individual VMs, states, patch levels and locations will make tuning an IPS for polymorphic attacks look like child’s play in comparison.  Effective solutions will need to be highly accurate, low-latency solutions deployed in correct mode.  Gone will be the days of man-to-man blocking and tuning.  Here to stay are the days of zone defense.

4) VMware just purchased Determina and intends to integrate their memory firewall IPS product as an ESX VM plug-in.  Given your early partnership with VMware, are you surprised by this move?  Doesn’t this directly compete with the VirtualShield offering?

I wouldn’t read too much into this.  Determina hit the wall on sales, primarily because its original memory firewall technology was too intrusive and fell short of handling new vulnerabilities/exploits.

This necessitated the LiveShield product, which required ongoing updates, destroying the value proposition of not having to touch servers, once installed. So, this is a technology/people acquisition, not a product line/customer-base acquisition.

VMware was smart to get a very bright set of folks with deep memory/paging/OS expertise, and a core technology that would do well to be integrated into the hypervisor for the purpose of hypervisor hardening and inter-VM isolation.  I don’t see VMware entering the security content business soon (A/V, vulnerabilities, etc.).  I see Blue Lane’s VirtualShield technology integrated into the virtual networking layer (vSwitch) as a perfect complement to anything that will come out of the Determina acquisition.

5) Citrix just acquired XenSource.  Do you have plans to offer VirtualShield for Xen? 

A smart move on Citrix’s part to get back into the game. Temporary market caps don’t matter. Virtualization matters. If Citrix can make this a two or three horse race, it will keep the VMware, Citrix, Microsoft triumvirate on their toes, delivering better products, and net good for the customer.

Regarding Blue Lane and Citrix/XenSource, we will continue to pay attention to what customers are buying as they virtualize their data centers.  For now, this is a one-horse show 🙂

Quick Post of a Virtualization Security Presentation: “Virtualization and the End of Network Security As We Know It…”

August 20th, 2007 7 comments

"Virtualization and the End of Network Security As We Know It…
The feel good hit of the summer!"

Ye olde blog gets pinged quite a lot with searches and search engine redirects for folks looking for basic virtualization and virtualized security information. 

I had to drum up a basic high-level virtualization security presentation for the ISSA Charlotte Metro gathering back in April and I thought I may as well post it.

It’s in .PDF format.  If you want it in .PPT or Keynote, let me know, I’ll be glad to send it to you.  If it’s useful or you need some explanation regarding the visual slides, please get back to me and I’ll be more than glad to address anything you want.  I had 45 minutes to highlight how folks were and might deal with "securing virtualization by virtualizing security."

Yes, some of it is an ad for the company I used to work for, which specializes in virtualized security service layers (Crossbeam), but I’m sure you can see how it is relevant in the preso.  You’ll laugh, you’ll cry, you’ll copy/paste the text and declare your own brilliance.  Here’s the summary slide so those of you who haven’t downloaded this yet will know the sheer genius you will be missing if you don’t:

[Summary slide from the presentation]

At any rate, it’s not earth shattering but does a decent job at the high level of indicating some of the elements regarding virtualized security.  I apologize for the individual animation slide page build-ups.  I’ll re-upload without them when I can get around to it. (Ed: Done.  I also uploaded the correct version 😉)

Here’s the PDF.

/Hoff

(As of 11pm EST, 5.5 hours after posting, you lot had downloaded this over 150 times; as of 1:45pm EST the next day, over 380.  Since there are no comments, it’s either the biggest piece of crap I’ve ever produced or you are all just so awe stricken you are unable to type.  Newby, you are not allowed to respond to this rhetorical question…)

Oh SNAP! VMware acquires Determina! Native Security Integration with the Hypervisor?

August 19th, 2007 12 comments

Hot on the trail of becoming gigagillionaires, the folks at VMware make my day with this.  Congrats to the folks @ Determina.

Methinks that for the virtualization world, it’s a very, very good thing.  A step in the right direction.

I’m going to prognosticate that this means that Citrix will buy Blue Lane or Virtual Iron next (see bottom of the post) since their acquisition of XenSource leaves them with the exact same problem that this acquisition for VMware tries to solve:

VMware Inc., the market leader in virtualization software, has acquired Determina Inc., a Silicon Valley maker of host intrusion prevention products.

…the security of virtualized environments has been something of an unknown quantity due to the complexity of the technology and the ways in which hypervisors interact with the host OS.  Determina’s technology is designed specifically to protect the OS from malicious code, regardless of the origin of the attack, so it would seem to be a sensible fit for VMware, analysts say.

In his analysis of the deal, Gartner’s MacDonald sounded many of the same notes. "By potentially integrating Memory Firewall into the ESX hypervisor, the hypervisor itself can provide an additional level of protection against intrusions. We also believe the memory protection will be extended to guest OSs as well: VMware’s extensive use of binary emulation for virtualization puts the ESX hypervisor in an advantageous position to exploit this style of protection," he wrote.

I’ve spoken a lot recently about how much I’ve been dreading the notion that security was doomed to repeat itself with the accelerated takeoff of server virtualization, since we haven’t solved many of the most basic security problem classes.  Malicious code is getting more targeted and more intelligent, and when you combine an emerging market using hot technology without an appropriate level of security…

Basically, my concerns have stemmed from the observation that if we can’t do a decent job protecting physically-separate yet interconnected network elements with all the security fu we have, what’s going to happen when the "…network is the computer" (or vice versa)?  Just search for "virtualization" via the Lijit Widget above for more posts on this…

Some options for securing virtualized guest OSs in a VM are pretty straightforward:

  1. Continue to deploy layered virtualized security services across VLAN segments of which each VM is a member (via IPSs, routers, switches, UTM devices…)
  2. Deploy software like Virtual Iron’s, which looks like a third-party vSwitch IPS on each VM
  3. Integrate something like Blue Lane’s ESX plug-in, which interacts with and at the VMM level
  4. As chipset-level security improves, enable it
  5. Deploy HIPS as part of every guest OS.

Each of these approaches has its own sets of pros and cons, and quite honestly, we’ll probably see people doing all five at the same time…layered defense-in-depth.  Ugh.

What was really annoying to me, however, is that it really seemed that in many cases, the VM solution providers were again expecting that we’d just be forced to bolt security ON TO our VM environments instead of BAKING IT IN.  This was looking like a sad reality.

I’ll get into details in another post about Determina’s solution, but I am encouraged by VMware’s acquisition of a security company which will be integrated into their underlying solution set.  I don’t think it’s a panacea, but quite honestly, the roadmap for solving these sorts of problems was blowing in the wind for VMware up until this point.

"Further, by using the LiveShield capabilities, the ESX hypervisor could be used ‘introspectively’ to shield the hypervisor and guest OSs from attacks on known vulnerabilities in situations where these have not yet been patched. Both Determina technologies are fairly OS- and application-neutral, providing VMware with an easy way to protect ESX as well as Linux- and Windows-based guest OSs."

Quite honestly, I hoped they would have bought Blue Lane since the ESX Hypervisor is now going to be a crowded space for them…

We’ll see how well this gets integrated, but I smiled when I read this.

Oh, and before anyone gets excited, I’m sure it’s going to be 100% undetectable! 😉

/Hoff

On-Demand SaaS Vendors Able to Secure Assets Better than Customers?

August 16th, 2007 4 comments

I’m a big advocate of software as a service (SaaS) — have been for years.  This evangelism started for me almost 5 years ago when I became a Qualys MSSP customer listening to Philippe Courtot espouse the benefits of SaaS for vulnerability management.  This was an opportunity to allow me to more efficiently, effectively and cheaply manage my VA problem.  They demonstrated how they were good custodians of the data (my data) that they housed and how I could expect they would protect it.

I did not, however, feel *more* secure because they housed my VA data.  I felt secure enough in how they housed it that it should not fall into the wrong hands.  It’s called an assessment of risk and exposure.  I performed it and was satisfied it matched my company’s appetite and business requirements.

Not one to appear unclear on where I stand, I maintain that SaaS can bring utility, efficiency, cost effectiveness, enhanced capabilities and improved service levels to a corporation depending upon who, what, why, how, where and when the service is deployed.  Sometimes it can bring a higher level of security to an organization, but so can a squadron of pissed-off, armed Oompa Loompas — it’s all a matter of perspective.

I suggest that attempting to qualify the benefits of SaaS by generalizing in any sense is, well, generally a risky thing to do.  It often turns what could be a valid point of interest into a point of contention.

Such is the case with a story I read in a UK edition of IT Week by Phil Muncaster titled "On Demand Security Issues Raised."  In this story, the author describes the methods by which the security posture of SaaS vendors may be measured, comparing the value, capabilities and capacity of the various options and the venues for evaluating a SaaS MSSP: hire an external contractor or rely on the MSSP to furnish you the results of an internally generated assessment.

I think this is actually a very useful and valid discussion to have — whom to trust and why?  In many cases, these vendors house sensitive and sometimes confidential data regarding an enterprise, so security is paramount.  One would suggest that anyone looking to engage an MSSP of any sort, especially one offering a critical SaaS, would perform due diligence in one form or another before signing on the dotted line.

That’s not really what I wanted to discuss, however.

What I *did* want to address was the comment in the article coming from Andy Kellett, an analyst for Burton, that read thusly:

"Security is probably less a problem than in the end-user organisations because [on-demand app providers] are measured by the service they provide," Kellett argued.

I *think* I probably understand what he’s saying here…that security is "less of a problem" for an MSSP because the pressures of the implied penalties associated with violating an SLA are so much more motivating to get security "right" that they can do it far more effectively, efficiently and better than a customer.

This is a selling point, I suppose?  Do you, dear reader, agree?  Does the implication of outsourcing security actually mean that you "feel" or can prove that you’re more secure or better secured than you could do yourself by using a SaaS MSSP?

"I don’t agree the end-user organisation’s pen tester of choice should be doing the testing. The service provider should do it and make that information available."

Um, why?  I can understand not wanting hundreds of scans against my service in an unscheduled way, but what do you have to hide?  You want me to *trust* you that you’re more secure or holding up your end of the bargain?  Um, no thanks.  It’s clear that this person has never seen the results of an internally generated PenTest and how real threats can be rationalized away into nothingness…

Clarence So of Salesforce.com agreed, adding that most chief information officers today understand that software-as-a-service (SaaS) vendors are able to secure data more effectively than they can themselves.

Really!?  It’s not just that they gave into budget pressures, agreed to transfer the risk and reduce OpEx and CapEx?  Care to generalize more thoroughly, Clarence?  Can you reference proof points for me here?  My last company used Salesforce.com, but as the person who inherited the relationship, I can tell you that I didn’t feel at all more "secure" because SF was hosting my data.  In fact, I felt more exposed.

"I’m sure training companies have their own motives for advocating the need for in-house skills such as penetration testing," he argued. "But any suggestions the SaaS model is less secure than client-server software are well wide of the mark."

…and any suggestion that they are *more* secure is pure horsecock marketing at its finest.  Prove it.  And please don’t send me your SAS-70 report as your example of security fu.

So just to be clear, I believe in SaaS.  I encourage its use if it makes good business sense.  I don’t, however, agree that you will automagically be *more* secure.  You may be just *as* secure, but it should be more cost-effective to deploy and manage.  There may very well be cases (I can even think of some) where one could be more or less secure, but I’m not into generalizations.

Whaddya think?

/Hoff

Security Application Instrumentation: Reinventing the Wheel?

June 19th, 2007 No comments

Two of my favorite bloggers engaged in a trackback love-fest lately on the topic of building security into applications; specifically, enabling applications as a service delivery function to be able to innately detect, respond to and report attacks.

Richard Bejtlich wrote a piece called Security Application Instrumentation and Gunnar Peterson chimed in with Building Coordinated Response In – Learning from the Anasazis.  As usual, these are two extremely well-written pieces that arrive at a well-constructed conclusion: we need a standard methodology and protocol for this reporting.  I think that this exquisitely important point will be missed by most of the security industry — specifically vendors.

While security vendors’ hearts are in the right place (stop laughing,) the "security is the center of the universe" approach to telemetry and instrumentation will continue to fall on deaf ears because there are no widely-adopted standard ways of reporting across platforms, operating systems and applications that truly integrate into a balanced scorecard/dashboard that demonstrates security’s contribution to service availability across the enterprise.   I know what you’re thinking…"Oh God, he’s going to talk about metrics!  Ack!"  No.  That’s Andy’s job and he does it much better than I.

This mess is exactly why the SEIM market emerged: to clean up the cesspool of log dumps that spew forth from devices that are, by all approximation, utterly unaware of the rest of the ecosystem in which they participate.  Take all these crappy log dumps via Syslog and SNMP (which can still be proprietary,) normalize if possible, correlate "stuff" and communicate that something "bad" or "abnormal" has occurred.

How does that communicate what this really means to the business, its ability to function, deliver service and ultimately the impact on risk posture?  It doesn’t, because security reporting is the little kid wearing a dunce hat standing in the corner because it doesn’t play well with others.
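To make that concrete, here’s a minimal, hypothetical sketch of the normalize-and-correlate step a SEIM performs; the device formats, regexes and hosts are invented for illustration, and note how the output still says nothing about business impact:

import re
from typing import Optional

# A hypothetical sketch of what a SEIM does with the syslog spew: parse each vendor's
# format into one normalized record, then correlate across devices.
PARSERS = {
    "firewall": re.compile(r"DENY .* dst=(?P<target>\S+)"),
    "ids":      re.compile(r"ALERT sig=\d+ victim=(?P<target>\S+)"),
}

def normalize(device_type: str, raw_line: str) -> Optional[dict]:
    """Map one raw log line into a common schema (the regexes are made up)."""
    match = PARSERS[device_type].search(raw_line)
    return {"device": device_type, "target": match.group("target")} if match else None

def correlate(events):
    """Flag hosts reported by more than one device type -- 'something bad happened'."""
    seen = {}
    for event in events:
        seen.setdefault(event["target"], set()).add(event["device"])
    return [host for host, devices in seen.items() if len(devices) > 1]

raw = [
    ("firewall", "Aug 16 10:01:02 fw01 DENY proto=tcp src=1.2.3.4 dst=10.1.1.20"),
    ("ids",      "Aug 16 10:01:05 ids01 ALERT sig=2002 victim=10.1.1.20"),
]
normalized = [e for e in (normalize(d, line) for d, line in raw) if e]
print(correlate(normalized))   # ['10.1.1.20'] -- but still nothing about business impact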

Gunnar stated this well:

Coordinated detection and response is the logical conclusion to defense in depth security architecture. I think the reason that we have standards for authentication, authorization, and encryption is because these are the things that people typically focus on at design time. Monitoring and auditing are seen as runtime operational activities, but if there were standards based ways to communicate security information and events, then there would be an opportunity for the tooling and processes to improve, which is ultimately what we need.

So, is the call for "security application instrumentation" doomed to fail because we in the security industry will try to reinvent the wheel with proprietary solutions and suggest that the current toolsets and frameworks, which are available as part of a much larger enterprise management and reporting strategy, are not enough?

Bejtlich remarked that mechanisms reporting application state must be built into the application and must report more than just performance:

Today we need to talk about applications defending themselves. When they are under attack they need to tell us, and when they are abused, subverted, or breached they would ideally also tell us.

I would like to see the next innovation be security application instrumentation, where you devise your application to report not only performance and fault logging, but also security and compliance logging. Ideally the application will be self-defending as well, perhaps offering less vulnerability exposure as attacks increase (being aware of DoS conditions of course).

I would agree, but I get the feeling that without integrating this telemetry and the output metrics and folding it into response systems whose primary role is to talk about delivery and service levels — of which "security" is a huge factor — the relevance of this data within the visible single pane of glass of enterprise management is lost.
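For illustration only, here’s a hypothetical sketch (not ARM and not any vendor’s API) of what application-emitted telemetry might look like if security and compliance events rode alongside the performance data in one structured record; every field name and value below is made up:

import json, time

def emit(record: dict) -> None:
    """Ship a structured event to whatever management bus the enterprise already runs (stdout here)."""
    print(json.dumps(record))

def report_transaction(name: str, elapsed_ms: float, security: dict) -> None:
    """One record carries performance AND security/compliance telemetry for the same transaction."""
    emit({
        "ts": time.time(),
        "transaction": name,
        "elapsed_ms": elapsed_ms,   # the ARM-style availability/response-time view
        "security": security,       # the piece that's usually missing from that view
    })

# Example: an order-entry app reporting an abuse attempt in business terms, not packet terms.
report_transaction("submit_order", elapsed_ms=42.0, security={
    "event": "input_validation_failure",
    "classification": "attempted_sql_injection",   # taxonomy is assumed/made up
    "business_service": "order_entry",
    "action_taken": "request_rejected",
})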

So, rather than reinvent the wheel and incrementally "innovate," why don’t we take something like the Open Group’s Application Response Measurement (ARM) standard, make sure we subscribe to a telemetry/instrumentation format that speaks to the real issues and enable these systems to massage our output in terms of the language of business (risk?) and work to extend what is already a well-defined and accepted enterprise response management toolset to include security?

To wit:

The Application Response Measurement (ARM) standard describes a common method for integrating enterprise applications as manageable entities. The ARM standard allows users to extend their enterprise management tools directly to applications creating a comprehensive end-to-end management capability that includes measuring application availability, application performance, application usage, and end-to-end transaction response time.

Or how about something like EMC’s Smarts:

Maximize availability and performance of mission-critical IT resources—and the business services they support. EMC Smarts software provides powerful solutions for managing complex infrastructures end-to-end, across technologies, and from the network to the business level. With EMC Smarts innovative technology you can:

  • Model components and their relationships across networks, applications, and storage to understand effect on services.
  • Analyze data from multiple sources to pinpoint root cause problems—automatically, and in real time.
  • Automate discovery, modeling, analysis, workflow, and updates for dramatically lower cost of ownership.

…add security into these and you’ve got a winner.   

There are already industry standards (or at least huge market momentum) around intelligent automated IT infrastructure, resource management and service level reporting.  We should get behind a standard that elevates the perspective of how security contributes to service delivery (and dare I say risk management) instead of trying to reinvent the wheel…unless you happen to like the Hamster Wheel of Pain…

/Hoff

Really, There’s More to Security than Admission/Access Control…

June 16th, 2007 2 comments

Dr. Joseph Tardo over at the Nevis Networks Illuminations blog composed a reasonably well-balanced commentary regarding one or more of my posts in which I was waxing philosophical about my beliefs regarding keeping the network plumbing dumb and overlaying security as a flexible, agile, open and extensible services layer.

It’s clear he doesn’t think this way, but I welcome the discourse.  So let me make something clear:

Realistically, and especially in non-segmented flat networks, I think there are certain low-level security functions that will do well by being served up by switching infrastructure as security functionality commoditizes, but I’m not quite sure yet, for the most part, how or where I draw the line between utility and intelligence.  I do, however, think that NAC is one of those utility services.

I’m also unconvinced that access-grade, wiring closet switches are architected to scale in functionality, efficacy or performance to provide any more value or differentiation, other than port density, than the normal bolt-on appliances which continue to cause massive operational and capital expenditure due to continued forklift upgrades over time.  Companies like Nevis and Consentry quietly admit this too, which is why they have both "secure switches" AND appliances that sit on top of the network…

Joseph suggested he was entering into a religious battle in which he summarized many of the approaches to security that I have blogged about previously and I pointed out to him on his blog that this is exactly why I practice polytheism 😉 :

In case you aren’t following the religious wars going on in the security blogs and elsewhere, let me bring you up to date.

It goes like this. If you are in the client software business, then security has to be done in the endpoints and the network is just dumb “plumbing,” or rather, it might as well be because you can’t assume anything about it. If you sell appliances that sit here and there in the network, the network sprouts two layers, with the “plumbing” part separated from the “intelligence.” Makes sense, I guess. But if you sell switches and routers then the intelligence must be integrated in with the infrastructure. Now I get it. Or maybe I’m missing the point, what if you sell both appliances and infrastructure?

I believe that we’re currently forced to deploy in defense in depth due to the shortcomings of solutions today.  I believe the "network" will not and cannot deliver all the security required.  I believe we’re going to have to invest more in secure operating systems and protocols.  I further believe that we need to be data-centric in our application of security.  I do not believe in single-point product "appliances" that are fundamentally functionally handicapped.  As a delivery mechanism to deliver security that matters across the network, I believe in this.

Again, the most important difference between what I believe and what Joseph points out above is that the normal class of "appliances" he’s trying to suggest I advocate simply aren’t what I advocate at all.  In fact, one might surprisingly confuse the solutions I do support as "infrastructure" — they look like high-powered switches with a virtualized blade architecture integrated into the solution.

It’s not an access switch, it’s not a single function appliance and it’s not a blade server and it doesn’t suffer from the closed proprietary single vendor’s version of the truth.  To answer the question, if you sell and expect to produce both secure appliances and infrastructure, one of them will come up short.   There are alternatives, however.

So why leave your endpoints, the ones that have all those vulnerabilities that created the security industry in the first place, to be hit on by bots, “guests,” and anyone else that wants to? I don’t know about you, but I would want both something on the endpoint, knowing it won’t be 100% but better than nothing, and also something in the network to stop the nasty stuff, preferably before it even got in.

I have nothing to disagree with in the paragraph above — short of the example of mixing active network defense with admission/access control in the same sentence; I think that’s confusing two points.   Back to the religious debate as Joseph drops back to the "Nevis is going to replace all switches in the wiring closet" approach to security via network admission/access control:

Now, let’s talk about getting on the network. If the switches are just dumb plumbing they will blindly let anyone on, friend or foe, so you at least need to beef up the dumb plumbing with admission enforcement points. And you want to put malware sensors where they can be effective, ideally close to entry points, to minimize the risk of having the network infrastructure taken down. So, where do you want to put the intelligence, close to the entry enforcement points or someplace further in the bowels of the network where the dumb plumbing might have plugged-and-played a path around your expensive intelligent appliance?

That really depends upon what you’re trying to protect; the end point, the network or the resources connected to it.  Also, I won’t/can’t argue about wanting to apply access/filtering (sounds like IPS in the above example) controls closest to the client at the network layer.  Good design philosophy.   However, depending upon how segmented your network is, the types, value and criticality of the hosts in these virtual/physical domains, one may choose to isolate by zone or VLAN and not invest in yet another switch replacement at the access layer.

If the appliance is to be effective, it has to sit at a choke point and really be an enforcement point. And it has to have some smarts of its own. Like the secure switch that we make.

Again, that depends upon your definition of enforcement and applicability.  I’d agree that in flat networks, you’d like to do it at the port/host level, though replacing access switches to do so is usually not feasible in large networks given investments in switching architectures.  Typical fixed configuration appliances overlaid don’t scale, either.

Furthermore, depending upon your definition of what an enforcement zone and its corresponding diameter is (port, VLAN, IP Subnet) you may not care.  So putting that "appliance" in place may not be as foreboding as you wager, especially if it overlays across these boundaries satisfactorily.

We will see how long before these new-fangled switch vendors that used to be SSL VPN’s, that then became IPS appliances that have now "evolved" into NAC solutions, will become whatever the next buzzword/technology of tomorrow represents…especially now with Cisco’s revitalized technology refresh for "secure" access switches in the wiring closets.  Caymas, Array, and Vernier (amongst many) are perfect examples.

When it comes down to it, in the markets Crossbeam serves — and especially the largest enterprises — they are happy with their switches, they just want the best security choice on top of it provided in a consolidated, agile and scalable architecture to support it.

Amen.

/Hoff

Evan Kaplan and Co. (Aventail) Take the Next Step

June 12th, 2007 2 comments

So Aventail’s being acquired by SonicWall?  I wish Evan Kaplan and his team well and trust that SonicWall will do their best to integrate the best of Aventail’s technology into their own.  It’s interesting that this news pops up today because I was just thinking about Aventail’s CEO as part of a retrospective of security over the last 10+ years.

I’ve always admired Evan Kaplan’s messaging from afar and a couple of months ago I got to speak with him for an hour or so.  For someone who has put his stake in the ground for the last 11 years as a leader in the SSL VPN market, you might be surprised to know that Evan’s perspective on the world of networking and security isn’t limited to "tunnel vision" as one might expect.

One of my favorite examples of Evan’s beliefs is this article in Network World back in 2005 that resonated so very much with me then and still does today.  The title of the piece is "Smart Networks are Not a Good Investment" and was a "face off" feature between Evan and Cisco’s Rob Redford.

Evan’s point here — and what resonates at the core of what I believe should happen to security — is that the "network" ought to be separated into two strata, the "plumbing" (routers, switches, etc.) and intelligent "service layers" (security being one of them.)

Evan calls these layers "connectivity" and "intelligence."

The plumbing should be fast, resilient, reliable and robust, providing the connectivity, while the service layers should be agile, open, interoperable, flexible and focused on delivering service as a core competency.

Networking vendors who want to leverage the footprint they already have in port density and extend their stranglehold with a single vendor’s version of the truth obviously disagree with this approach.  So do those who ultimately suggest that "good enough" is good enough.

Evan bangs the drum:

Network intelligence as promoted by the large network vendors is the Star Wars defense system of our time – monolithic, vulnerable and inherently unreliable. Proponents of smart networks want to extend their hegemony by incorporating application performance and security into a unified, super-intelligent infrastructure. They want to integrate everything into the network and embed security into every node. In theory, you would then have centralized control and strong perimeter defense.

Yup.  As I blogged recently, "Network Intelligence is an Oxymoron."  The port puppets will have you believe that you can put all this intelligence in the routers and switches and solve all the problems these platforms were never designed to solve whilst simultaneously scaling performance and features against skyrocketing throughput requirements, extreme latency thresholds, emerging technologies and an avalanche of compounding threats and vulnerabilities…all from one vendor, of course.

While on the surface this sounds reasonable, a deeper look reveals that this kind of approach presents significant risk for users and service providers. It runs counter to the clear trends in network communication, such as today’s radical growth in broadband and wireless networks, and increased virtualization of corporate networks through use of public infrastructure. As a result of these trends, much network traffic is accessing corporate data centers from public networks rather than the private LAN, and the boundaries of the enterprise are expanding. Companies must grow by embracing these trends and fully leveraging public infrastructure and the power of the Internet.

Exactly.  Look at BT’s 21CN network architecture as a clear and unequivocal demonstration of this strategy; a fantastic high-performance, resilient and reliable foundational transport coupled with an open, agile, flexible and equally high-performance and scalable security service layer.  If BT is putting 18 billion pounds of their money investing in a strategy like this and don’t reckon they can rely on "embedded" security, why would you?

Network vendors are right in recognizing and trying to address the two fundamental challenges of network communications: application performance and security. However, they are wrong in believing the best way to address these concerns is to integrate application performance and security into the underlying network.

The alternative is to avoid building increasing intelligence into the physical network, which I call the connectivity plane, and building it instead into a higher-level plane I call the intelligence plane.

The connectivity plane covers end-to-end network connectivity in its broadest sense, leveraging IPv4 and eventually IPv6. This plane’s characteristics are packet-level performance and high availability. It is inherently insecure but incredibly resilient. The connectivity plane should be kept highly controlled and standardized, because it is heavy to manage and expensive to build and update. It should also be kept dumb, with change happening slowly.

He’s on the money here again.  Let the network evolve at its pace using standards-based technology and allow innovation to deliver service at the higher levels.  The network evolves much more slowly and at a pace that demands stability.  The experientially-focused intelligence layer needs to be much more nimble and agile, taking advantage of the opportunism and the requirements to respond to rapidly emerging technologies and threats/vulnerabilities.

Look at how quickly solutions like DLP and NAC have stormed onto the market.  If we had to wait for Cisco to get their butt in gear and deliver solutions that actually work as an embedded function within the "network," we’d be out of business by now.

I don’t have the time to write it again, but the security implications of having the fox guarding the henhouse by embedding security into the "fabric" is scary.  Just look at the number of security vulnerabilities Cisco has had in their routing, switching, and security products in the last 6 months.  Guess what happens when they’re all one?   I sum it up here and here as examples.

Conversely, the intelligence plane is application centric and policy driven, and is an overlay to the connectivity plane. The intelligence plane is where you build relationships, security and policy, because it is flexible and cost effective. This plane is network independent, multi-vendor and adaptive, delivering applications and performance across a variety of environments, systems, users and devices. The intelligence plane allows you to extend the enterprise boundary using readily available public infrastructure. Many service and product vendors offer products that address the core issues of security and performance on the intelligence plane.

Connectivity vendors should focus their efforts on building faster, easier to manage and more reliable networks. Smart networks are good for vendors, not customers.

Wiser words have not been spoken…except by me agreeing with them, of course 😉  Not too shabby for an SSL VPN vendor way back in 2005.

Evan, I do hope you won’t disappear and will continue to be an outspoken advocate of flushing the plumbing…best of luck to you and your team as you integrate into SonicWall.

/Hoff

Profiling Data At the Network-Layer and Controlling Its Movement Is a Bad Thing?

June 3rd, 2007 2 comments

I’ve been watching what looks like a car crash in slow motion, and for some strange reason I share some affinity and/or responsibility for what is unfolding in the debate between Rory and Rob.

What motivated me to comment on this on-going exploration of data-centric security was Rory’s last post, in which he appears to refer to some points I raised in my original post but remains bent on the idea that the crux of my concept was tied to DRM:

So .. am I anti-security? Nope I’m extremely pro-security. My feeling is however that the best way to implement security is in ways which it’s invisible to users. Every time you make ordinary business people think about security (eg, usernames/passwords) they try their darndest to bypass those requirements.

That’s fine and I agree.  The concept of ADAPT is completely transparent to "users."  This doesn’t obviate the fact that someone will have to be responsible for characterizing what is important and relevant to the business in terms of "assets/data," attaching weight/value to them, and setting some policies regarding how to mitigate impact and ultimately risk.

Personally I’m a great fan of network segregation and defence in depth at the network layer. I think that devices like the ones crossbeam produce are very useful in coming up with risk profiles, on a network by network basis rather than a data basis and managing traffic in that way. The reason for this is that then the segregation and protections can be applied without the intervention of end-users and without them (hopefully) having to know about what security is in place.

So I think you’re still missing my point.  The example I gave of the X-Series using ADAPT takes a combination of best-of-breed security software components such as FW, IDP, WAF, XML, AV, etc. and provides you with segregation as you describe.  HOWEVER, the (r)evolutionary delta here is that the ADAPT profiling of content set by policies which are invisible to the user at the network layer allows one to make security decisions on content in context and control how data moves.

So to use the phrase that I’ve seen in other blogs on this subject, I think that the "zones of trust" are a great idea, but the zones shouldn’t be based on the data that flows over them, but the user/machine that are used. It’s the idea of tagging all that data with the right tags and controlling its flow that bugs me.

…and thus it’s obvious that I completely and utterly disagree with this statement.  Without tying some sort of identity (pseudonymity) to the user/machine AND combining it with identifying the channels (applications) and the content (payload) you simply cannot make an informed decision as to the legitimacy of the movement/delivery of this data.

I used the case of being able to utilize client-side tagging as an extension to ADAPT, NOT as a dependency.  Go back and re-read the post; it’s a network-based transparent tagging process that attaches the tag to the traffic as it moves around the network.  I don’t understand why that would bug you?

So that’s where my points in the previous post came from, and I still reckon they’re correct. Data tagging and parsing relies on the existence of standards and their uptake in the first instance and then users *actually using them* and personally I think that’s not going to happen in general companies and therefore is not the best place to be focusing security effort…

Please explain this to me?  What standards need to exist in order to tag data — unless of course you’re talking about the heterogeneous exchange and integration of tagging data at the client side across platforms?  Not so if you do it at the network layer WITHIN the context of the architecture I outlined; the clients, switches, routers, etc. don’t need to know a thing about the process as it’s done transparently.

I wasn’t arguing that this is the end-all-be-all of data-centric security, but it’s worth exploring without deadweighting it to the negative baggage of DRM and the existing DLP/Extrusion Prevention technologies and methodologies that currently exist.

ADAPT is doable and real; stay tuned.

/Hoff

For Data to Survive, It Must ADAPT…

June 1st, 2007 2 comments


Now that I’ve annoyed you by suggesting that network security will over time become irrelevant given lost visibility due to advances in OS protocol transport and operation, allow me to give you another nudge towards the edge and further reinforce my theories with some additionally practical data-centric security perspectives.

If any form of network-centric security solution is to succeed in adding value over time, the mechanics of applying policy and effecting disposition on flows as they traverse the network must be made on content in context.  That means we must get to a point where we can make “security” decisions based upon information and its “value” and classification as it moves about.

It’s not good enough to only make decisions on how flows/data should be characterized and acted on with the criteria being focused on the 5-tuple (header), signature-driven profiling or even behavioral analysis that doesn’t characterize the content in context of where it’s coming from, where it’s going and who (machine, virtual machine and “user”) or what (application, service) intends to access and consume it.

In the best of worlds, we’d like to be able to classify data before it makes its way through the IP stack and enters the network, and use this metadata as an attached descriptor of the ‘type’ of content that this data represents.  We could do this as the data is created by applications (thick or thin, rich or basic) either using the application itself or by using an agent (client-side) that profiles the data prior to storage or transmission.

Since I’m on my Jericho Forum kick lately, here’s how they describe how data ought to be controlled:

Access to data should be controlled by security attributes of the data itself.

  • Attributes can be held within the data (DRM/Metadata) or could be a separate system.
  • Access / security could be implemented by encryption.
  • Some data may have “public, non-confidential” attributes.
  • Access and access rights have a temporal component.

You would probably need client-side software to provide this functionality.  As an example, we do this today with email compliance solutions that have primitive versions of this sort of capability that force users to declare the classification of an email before you can hit the send button, or even the document info that can be created when one authors a Word document.

There are a bunch of ERM/DRM solutions in play today that are bandied about and sold as “compliance” solutions, but their value goes much deeper than that.  IP Leakage/Extrusion prevention systems (with or without client-side tie-ins) try to do similar things also.

Ideally, this metadata would be used as a fixed descriptor of the content that permanently attaches itself and follows that content around so it can be used to decide what content should be “routed” based upon policy.

If we’re not able to use this file-oriented static metadata, we’d like then for the “network” (or something in/on it) to be able to dynamically profile content at wirespeed and characterize the data as it moves around the network from origin to destination in the same way.

So, this is where Applied Data & Application Policy Tagging (ADAPT) comes in.  ADAPT is an approach that can make use of existing and new technology to profile and characterize content (by using content matching, signatures, regular expressions and behavioral analysis in hardware or software) and then apply policy-driven information “routing” functionality as flows traverse the network, by using 802.1 q-in-q VLAN tags (open approach) or applying a proprietary ADAPT tag-header as a descriptor to each flow as it moves around the network.

Think of it like a VLAN tag that describes the data within the packet/flow, defined however you see fit:

The ADAPT tag/VLAN is user-defined and can use any taxonomy that best suits the types of content that are interesting; one might use asset classifications such as “confidential” or taxonomies such as “HIPAA” or “PCI” to describe what is contained in the flows.  One could combine and/or stack the tags, too.  The tag maps to one of these arbitrary categories, which could be fed by interpreting metadata attached to the data itself (if in file form) or dynamically by on-the-fly profiling at the network level.

As data moves across the network and across what we call boundaries (zones) of trust, the policy tags are parsed and disposition effected based upon the language governing the rules.  If you use the “open” version using the q-in-q VLANs, you have something on the order of 4096 VLAN IDs to choose from…more than enough to accommodate most asset classifications and still leave room for VLAN usage.  Enforcing the ACLs can be done by pretty much ANY modern switch that supports q-in-q, very quickly.

Just like an ACL for IP addresses or VLAN policies, ADAPT does the same thing for content routing, but uses VLAN IDs (or the proprietary ADAPT header) to enforce it.
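A trivial, purely illustrative sketch of the tagging step; the tag values, taxonomy and precedence ordering below are all made up:

# Hypothetical mapping of ADAPT classifications onto the outer (service) tag of a
# q-in-q frame; the inner customer VLAN tag is left alone.
ADAPT_TAG_MAP = {"public": 100, "confidential": 200, "HIPAA": 300, "PCI": 400}
PRECEDENCE = ("HIPAA", "PCI", "confidential", "public")   # assumed ordering

def adapt_tag_for(classifications: set) -> int:
    """Pick the outer VLAN ID for a flow from the highest-precedence classification found."""
    for label in PRECEDENCE:
        if label in classifications:
            return ADAPT_TAG_MAP[label]
    return ADAPT_TAG_MAP["public"]

# A flow whose payload matched both a PCI pattern and a "confidential" asset rule:
print(adapt_tag_for({"PCI", "confidential"}))   # 400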

To enable this sort of functionality, either every switch/router in the network would need to be q-in-q capable (which is most switches these days) or ADAPT enabled (which would be difficult since you’d need every network vendor to support the protocols,) or you could use an overlay UTM security services switch sitting on top of the network plumbing through which all traffic moving from one zone to another would be subject to the ADAPT policy, since each flow has to go through said device.

Since the only device that needs to be ADAPT aware is this UTM security services switch (see the example below), you can let the network do what it does best and utilize this solution to enforce the policy for you across these boundary transitions.  Said UTM security services switch needs to have an extremely high-speed content security engine that is able to characterize the data at wirespeed and add a tag to the frame as it moves through the switching fabric and is processed prior to popping out onto the network.

Clearly this switch would have to have coverage across every network segment.  It wouldn’t work well in virtualized server environments or any topology where zoned traffic is not subject to transit through the UTM switch.

I’m going to be self-serving here and demonstrate this “theoretical” solution using a Crossbeam X80 UTM security services switch plumbed into a very fast, reliable, and resilient L2/L3 Cisco infrastructure.  It just so happens to have a wire-speed content security engine installed in it.  The reason the X-Series can do this is because once the flow enters its switching fabric, I own the ultimate packet/frame/cell format and can prepend any header functionality I like onto the structure to determine how it gets “routed.”

Take the example below where the X80 is connected to the layer-3 switches using 802.1q VLAN trunked interfaces.  I’ve made this an intentionally simple network using VLANs and L3 routing; you could envision a much more complex segmentation and routing environment, obviously.

This network is chopped up into 4 VLAN segments:

  1. General Clients (VLAN A)
  2. Finance & Accounting Clients (VLAN B)
  3. Financial Servers (VLAN C)
  4. HR Servers (VLAN D)

Each of the clients/servers in the respective VLANs default routes out to an IP address belonging to the firewall cluster, which is proffered by the firewall application modules providing service in the X80.

Thus, to get from one VLAN to another VLAN, one must pass through the X80 and be profiled by this content security engine and whatever additional UTM services are installed in the chassis (such as firewall, IDP, AV, etc.)

Let’s say then that a user in VLAN A (General Clients) attempts to access one or more resources in VLAN D (HR Servers).

Using solely IP addresses and/or L2 VLANs, let’s say the firewall and IPS policies allow this behavior as the clients in that VLAN have a legitimate need to access the HR Intranet server.  However, let’s say that this user tries to access data that exists on the HR Intranet server but contains personally identifiable information that falls under the governance/compliance mandates of HIPAA.

Let us further suggest that the ADAPT policy states the following:

Rule   Source              Destination         ADAPT Descriptor       Action
=============================================================================
1      VLAN A (IP.1.1)     VLAN D (IP.3.1)     HIPAA, Confidential    Deny
2      VLAN B (IP.2.1)     VLAN C (IP.4.1)     PCI                    Allow

Using rule 1 above, as the client makes the request, he transits from VLAN A to VLAN D.  The reply containing the requested information is profiled by the content security engine which is able to  characterize the data as containing information that matches our definition of either “HIPAA or Confidential” (purely arbitrary for the sake of this example.)

This could be done by reading the metadata if it exists as an attachment to the content’s file structure, in cooperation with an extrusion prevention application running in the chassis, or in the case of ad-hoc web-based applications/services, done dynamically.

According to the ADAPT policy above, and depending upon what “deny” means, this data would either be silently dropped or the user redirected to a webpage informing them of a policy violation.

Rule 2 above would allow authorized IPs in VLAN B to access PCI-classified data on servers in VLAN C.
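For concreteness, here’s a minimal Python sketch of how a first-match ADAPT-style table like the one above might be evaluated once the content engine has classified a flow.  The rule structure and matching logic are assumptions for illustration, not how the X-Series expresses policy:

    # Minimal first-match evaluation of an ADAPT-style policy table. Zone and
    # rule names mirror the example above; everything else is invented.
    from dataclasses import dataclass

    @dataclass
    class AdaptRule:
        source_zone: str
        dest_zone: str
        descriptors: set[str]   # classifications the rule matches on
        action: str             # "allow" or "deny"

    POLICY = [
        AdaptRule("VLAN A", "VLAN D", {"HIPAA", "Confidential"}, "deny"),   # Rule 1
        AdaptRule("VLAN B", "VLAN C", {"PCI"}, "allow"),                    # Rule 2
    ]

    def evaluate(source_zone: str, dest_zone: str, observed: set[str]) -> str:
        """Return the action of the first rule whose zones match and whose
        descriptors intersect what the content engine observed."""
        for rule in POLICY:
            if (rule.source_zone == source_zone
                    and rule.dest_zone == dest_zone
                    and rule.descriptors & observed):
                return rule.action
        return "allow"   # the default posture is itself a policy decision

    # The VLAN A -> VLAN D flow whose reply is classified HIPAA gets denied:
    print(evaluate("VLAN A", "VLAN D", {"HIPAA"}))   # -> "deny"
    print(evaluate("VLAN B", "VLAN C", {"PCI"}))     # -> "allow"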

You can imagine how one could integrate IAM and extend the policies to include pseudonymity/identity as a function of access, also.  Or, one could profile the requesting application (browser, for example) to define whether or not this is an authorized application.  You could extend the actions to lots of stuff, too.

In fact, I alluded to it in the first paragraph: if we back up a step and look at where the consolidation of functions/services is being driven by virtualization, one could also use the principles of ADAPT to extend the ACL functionality that exists in switching environments to control/segment/zone access to/from virtual machines (VMs) in different asset/data/classification/security zones.

What this translates to is a workflow/policy instantiation that would use the same logic to prevent VM1 from communicating with VM2 if there were a “zone” mismatch; as we add data classification in context, you could have various levels of granularity that define access based not only on the VM but on the VM and the data trafficked by it.
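As a rough illustration only, here’s how that zone/classification logic might look applied to inter-VM flows; the zone names, VM labels, and clearance ordering are all invented for this sketch:

    # Deny an inter-VM flow when the VMs' zones mismatch, unless the
    # destination zone is trusted to receive data of that sensitivity.
    # All names and the trust ordering below are hypothetical.
    ZONE_ASSIGNMENT = {"vm1": "pci-zone", "vm2": "general-zone"}
    ZONE_CLEARANCE = {"general-zone": 0, "hr-zone": 1, "pci-zone": 2}

    def vm_flow_allowed(src_vm: str, dst_vm: str, data_sensitivity: int) -> bool:
        """Allow the flow only if both VMs share a zone, or if the destination
        zone's clearance covers the sensitivity of the data in flight."""
        src_zone = ZONE_ASSIGNMENT[src_vm]
        dst_zone = ZONE_ASSIGNMENT[dst_vm]
        if src_zone == dst_zone:
            return True
        return ZONE_CLEARANCE[dst_zone] >= data_sensitivity

    print(vm_flow_allowed("vm1", "vm2", data_sensitivity=2))  # False: zone mismatch
    print(vm_flow_allowed("vm1", "vm2", data_sensitivity=0))  # True: public data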

Furthermore, assuming this service was deployed internally and you could establish a trusted CA with certs that would support transparent MITM SSL decrypts, you could do this (with appropriate scale) with encrypted traffic also.

This is data-centric security that uses the network when needed, the host when it can and the notion of both static and dynamic network-borne data classification to enforce policy in real-time.

/Hoff

[Comments/Blogs on this entry you might be interested in but have no trackbacks set:

MCWResearch Blog

Rob Newby’s Blog

Alex Hutton’s Blog

Security Retentive Blog]

Network Security is Dead…It’s All About the Host.

May 28th, 2007 6 comments

No, not entirely, as it’s really about the data, but I had an epiphany last week.

I didn’t get any on me, but I was really excited about the — brace yourself — future of security in a meeting I had with Microsoft.  It reaffirmed my belief that while some low-hanging security fruit will be picked off by the network, the majority of the security value won’t be delivered by it.

I didn’t think I’d recognize just how much of it — in such a short time — will ultimately make its way back into the host (OS,) and perhaps you didn’t either.

We started with centralized host-based computing and moved to client-server.  We’ve had Web1.0, we’re in the beginnings of WebX.0, and I ultimately believe we’re headed back to a centralized host-based paradigm now that network transport is fast, reliable and cheap.

That means that a bunch of the stuff we use today to secure the "network" will gravitate back towards the host.  I’ve used Scott McNealy’s mantra as he intended it in order to provide some color to conversations before, but I’m going to butcher it here.

While I agree that, in the abstract, the "Network is the Computer," in order to secure it you’re going to have to treat the "network" like an OS…hard to do.  That’s why I think more and more security will make its way back to the actual "computer" instead.

So much of the strategy of the large security vendors calls for an increased footprint back on the host.  It’s showing back up there today in the guise of AV, HIPS, configuration management, NAC and Extrusion Prevention, but it’s going to play a much, much loftier role as time goes on, as the level of interaction and interoperability must increase.  Rather than put 10+ agents on a box, imagine if that stuff were already built in?

Heresy, I suppose.

I wager that the "you can’t trust the endpoint" and "all security will make its way into the switch" crowds will start yapping on this point, but before that happens, let me explain…

The Microsoft Factor

I was fortunate enough to sit down with some of the key players in Microsoft’s security team last week and engage in a lively bit of banter regarding both practical and esoteric elements of where security has been, is now and will be in the near future. 

On the tail of Mr. Chambers’ Interop keynote, the discussion was all abuzz regarding collaboration and WebX.0 and the wonders that will come of the technology levers in the near future as well as the, ahem, security challenges that this new world order will bring.  I’ll cover that little gem in another blog entry.

Some of us wanted to curl up into a fetal position.  Others saw a chance to correct material defects in the way in which the intersection of networking and security has been approached.  I think the combination of the two is natural and healthy and ultimately quite predictable in these conversations.

I did a bit of both, honestly.

As you can guess, given who I was talking to, much of what was discussed found its way back to a host-centric view of security with a heavy anchoring in the continued evolution of producing more secure operating systems, more secure code, more secure protocols and strong authentication paired with encryption.

I expected to roll my eyes a lot and figured that our conversation would gravitate towards UAC, with a bulk helping of vapor functionality dispensed under the usual "…when it’s available one day" disclaimers and ladled generously into the dog-food bowls the Microsofties were eating from.

I am really glad I was wrong, and it just goes to show you that it’s important to consider a balanced scorecard in all this; listen with two holes, talk with one…preferably the correct one 😉

I may be shot for saying this in the court of popular opinion, but I think Microsoft is really doing a fantastic job in their renewed efforts toward improving security.  It’s not perfect, but the security industry is such a fickle and bipolar mistress — if you’re not 100% you’re a zero.

After spending all this time urging people that the future of security will not be delivered in the network proper, I haven’t focused enough attention on the advancements that are indeed creeping their way into the OSs toward a more secure future, even though that momentum reinforces my point.

Yes, I work for a company that provides network-centric security offerings.  Does this contradict the statement I just made?  I don’t think so, and neither did the folks from Microsoft.  There will always be a need to consolidate certain security functionality that does not fit within the context of the host — at least within an acceptable timeframe as the nature of security continues to evolve.  Read on.

The network will become transparent.  Why?

In this brave new world, mutually-authenticated and encrypted network communications won’t be visible to the majority of the plumbing that’s transporting them.  So, short of specific shunts to the residual overlay solutions that remain in the network (controls that will not make their way to the host), the network isn’t going to add much security value at all.

The Jericho Effect

What I found interesting is that I’ve enjoyed similar discussions with the distinguished fellows of the Jericho Forum wherein, after we’ve debated the merits of WHAT you might call it ("deperimeterization," "reperimeterization," or my favorite, "radical externalization"), the notion of HOW it happens weighs heavily on the evolution of security as we know it.

I have to admit that I’ve been a bit harsh on the Jericho boys before, but Paul Simmonds and I (or at least I did) came to the realization that my allergic reaction wasn’t to the concepts at hand, but rather the abrasive marketing of the message.  Live and learn.

Both sets of conversations basically see the pendulum effect of security in action in this oversimplification of what Jericho posits is the future of security and what Microsoft can deliver — today:

Take a host with a secured OS, connect it into any network using whatever means you find appropriate, without regard for having to think about whether you’re on the "inside" or "outside."  Communicate securely, access and exchange data in policy-defined "zones of trust" using open, secure, authenticated and encrypted protocols.

If you’re interested in the non-butchered more specific elements of the Jericho Forum’s "10 Commandments," see here.

What I wasn’t expecting in marrying these two classes of conversation is that this future of security is much closer, and notably much more possible, than I expected…with a Microsoft OS, no less.  In fact, I got a demonstration of it.  It may seem like no big deal to some of you, but the underlying architectural enhancements to Microsoft’s Vista and Longhorn OSs are a fantastic improvement on what we have had to put up with thus far.

One of the Microsoft guys fired up his laptop with a standard-issue off-the-shelf edition of Vista,  authenticated with his smartcard, transparently attached to the hotel’s open wireless network and then took me on a tour of some non-privileged internal Microsoft network resources.

Then he showed me some of the ad-hoc collaborative "People Near Me" peer2peer tools built into Vista — same sorts of functionality…transparent, collaborative and apparently quite secure (gasp!) all at the same time.

It was all mutually authenticated and encrypted and done so transparently to him.

He didn’t "do" anything; no VPN clients, no split-brain tunneling, no additional Active-X agents, no SSL or IPSec shims…it’s the integrated functionality provided by both IPv6 and IPSec in the NextGen IP stack present in Vista.

And in his words "it just works."   Yes it does.
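To underline why it "just works" from the application’s point of view, here’s a trivial Python sketch of the application’s side of such a connection; the hostname is fictional, and whether the flow is protected by IPsec is decided by host policy in the OS stack, not by anything in this code:

    # The application's view: it opens an ordinary IPv6 socket and is none the
    # wiser about whether the OS has negotiated IPsec underneath it. The
    # hostname below is made up; IPsec policy lives in the host OS.
    import socket

    def fetch_banner(host: str, port: int = 80) -> bytes:
        # Resolve an IPv6 address; the OS decides transparently whether the
        # resulting flow is protected by IPsec based on its own policy.
        family, socktype, proto, _, sockaddr = socket.getaddrinfo(
            host, port, socket.AF_INET6, socket.SOCK_STREAM)[0]
        with socket.socket(family, socktype, proto) as s:
            s.connect(sockaddr)
            s.sendall(b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
            return s.recv(256)

    if __name__ == "__main__":
        print(fetch_banner("intranet.example.com"))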

He basically established connectivity and his machine reached out to a reachable read-only DC (after authentication and with encryption), which allowed him to transparently resolve "internal" vs. "external" resources.  Yes, the requirements of today still expect the OS to evolve to prevent exploits of the OS itself, but this too shall become better over time.

No, it obviously doesn’t address what happens if you’re using a Mac or Linux, but the pressure will be on to provide the same sort of transparent, collaborative and secure functionality across those OS’s, too.

Allow me my generalizations — I know that security isn’t fixed and that we still have problems, but think of this as a glass half-full, willya!?

One of the other benefits I got from this conversation is the reminder that as Vista and Longhorn default to IPv6 natively (they can do both v4 and v6 dynamically), as enterprises upgrade, the network hardware and software (and hence the existing security architecture) must also be able to support IPv6 natively.  It’s not just the government pushing v6; large enterprises are now standardizing on it, too.

Here are some excellent links describing the NextGen IP stack in Vista, the native support for IPSec (goodbye, VPN market) and IPv6 support.

Funny how people keep talking about Google being a threat to Microsoft.  I think that the network giants like Cisco might have their hands full with Microsoft…look at how each of them are maneuvering.

/Hoff
{ Typing this on my Mac…staring @ a Vista Box I’m waiting to open to install within Parallels 😉 }