Archive

Archive for May, 2007

Off to Orlando for Check Point Experience 2007 This Week…

May 29th, 2007 No comments

Off to sunny (?) Orlando this week for Check Point Experience 2007.  I’ll be there from Tuesday through Friday.  We’ll be playing party favorites like "Taunt the Nokia Rep." and seeing who dies first in Fear Factor Live.

You know the drill.  If you’re going to be there, ping me.  I’m delivering one of the keynotes (can you call it that?) on Wednesday afternoon.  I’m staying at the Ritz, so you can send the assassination squads there, please.

For those of you who aren’t aware, the venue for Check Point Experience in Paris was surrendered, so if we were going to connect there, it will probably have to wait until Bangkok in August or Munich in September, assuming all goes well with the planning for those venues.

/Hoff

Categories: Travel Tags:

Heisenbugs: The Case of the Visibly Invisible Rogue Virtual Machine

May 28th, 2007 No comments

A Heisenbug is defined by frustrated programmers trying to mitigate a defect as:

     A bug that disappears or alters its behavior when one attempts to probe or isolate it.

In the case of a hardened rogue virtual machine (VM) sitting somewhere on your network, trying to probe or isolate it yields a frustration index for the IDS analyst similar to that of the pissed-off code jockey unable to locate a bug in a trace.

In many cases, simply nuking it off the network is not good enough.  You want to know where it is (logically and physically,) how it got there, and whose it is.

Here’s the scenario I was reminded of last week when discussing a nifty experience I had in this regard.  It’s dumbed down a bit and wouldn’t pass muster as a formal incident report, but you’ll get the picture.

Here’s what transpired for about an hour or so one Monday morning:

1) IDP console starts barfing about an address from unallocated RFC space being used by a host on an internal network segment.  Traffic isn’t malicious, but the host seems to be talking to the Internet, including DNS lookups against the (actual name) "attacker.com" domain…

2) We see the same address popping up on the external firewall rulesets in the drop rule.

3) We start to work backwards from the sensor on the beaconing segment as well as the perimeter firewall.

4) Ping it.  No response.

5) Traceroute it.  Stops at the local default with nowhere to go since the address space is not routed.

6) Look in the CAM tables for interface usage in the switch(es).  The MAC is coming through the trunk uplink ports.

7) Trace it to a switch.  Isolate the MAC address and quarantine it based on…something unique?  Ummm…

8) On a segment with a collection of 75+ hosts with workgroup hubs…can’t find the damned thing.  IDP console still barfing.

9) One of the security team members comes back from lunch.  Sits down near us and logs in.  Reboots a PC.

10) IDP alerts go dead.  All eyes on Cubicle #3.

…turns out that he had been working @ home the previous night on his laptop, on which he runs VMware (on his home LAN) for security research, testing how our production systems would react under attack.  He was using the NAT function and was spoofing the MAC as part of one of his tests.  The machine was talking to Windows Update and his local DNS hosts on the (now) imaginary network.

He bought lunch for the rest of us that day and was reminded that while he was authorized to use such tools based upon policy and job function, he shouldn’t use them on machines plugged into the internal network…or should at least turn VMware off ;(
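
For what it’s worth, a quick first step the next time this happens is to ARP for the offending address directly on the suspect segment and check the responding MAC’s OUI against the well-known virtual NIC vendor prefixes.  A spoofed MAC (as in this case) will obviously slip right past that check, so treat this as a toy sketch, not a silver bullet.  It assumes you have Scapy installed and sufficient privileges, and the address, interface and OUI list are placeholders rather than anything from the actual incident:

    # Toy sketch: given the rogue IP from the IDS console, ARP for it on the
    # local segment to recover its MAC, then check the OUI against a few
    # well-known virtual NIC vendor prefixes.  Placeholders throughout.
    from scapy.all import ARP, Ether, srp

    ROGUE_IP = "192.0.2.66"     # address the IDP console is barfing about (placeholder)
    VM_OUIS = {
        "00:0c:29": "VMware", "00:50:56": "VMware", "00:05:69": "VMware",
        "00:03:ff": "Microsoft Virtual PC",
    }

    def hunt(ip=ROGUE_IP, iface=None):
        answered, _ = srp(Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=ip),
                          timeout=2, iface=iface, verbose=False)
        for _, reply in answered:
            mac = reply.hwsrc.lower()
            vendor = VM_OUIS.get(mac[:8], "unknown vendor")
            print("%s is at %s (%s) -- now go walk the CAM tables" % (reply.psrc, mac, vendor))
            return mac
        print("No ARP reply for %s; it may be down, filtered or off-segment" % ip)

    if __name__ == "__main__":
        hunt()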

/Hoff

Categories: Virtualization, VMware Tags:

Network Security is Dead…It’s All About the Host.

May 28th, 2007 6 comments

No, not entirely as it’s really about the data, but I had an epiphany last week.

I didn’t get any on me, but I was really excited about the — brace yourself — future of security in a meeting I had with Microsoft.  It reaffirmed my belief that while some low-hanging security fruit will be picked off by the network, the majority of the security value won’t be delivered by it.

I didn’t think I’d recognize just how much of it — in such a short time — will ultimately make its way back into the host (OS,) and perhaps you didn’t either.

We started with centralized host-based computing, moved to client-server.  We’ve had Web1.0, are in the  beginnings of WebX.0 and I ultimately believe that we’re headed back to a centralized host-based paradigm now that the network transport is fast, reliable and cheap.

That means that a bunch of the stuff we use today to secure the "network" will gravitate back towards the host.  I’ve used Scott McNealy’s mantra as he intended it before, in order to provide some color to conversations, but I’m going to butcher it here.

While I agree that in the abstract the "Network is the Computer," in order to secure it you’re going to have to treat the "network" like an OS…hard to do.  That’s why I think more and more security will make its way back to the actual "computer" instead.

Much of the strategy of the large security vendors points to an increased footprint back on the host.  It’s showing back up there today in the guise of AV, HIPS, configuration management, NAC and Extrusion Prevention, but it’s going to play a much, much loftier role as time goes on as the level of interaction and interoperability must increase.  Rather than put 10+ agents on a box, imagine if that stuff were already built in?

Heresy, I suppose.

I wager that the "you can’t trust the endpoint" and "all security will make its way into the switch" crowds will start yapping on this point, but before that happens, let me explain…

The Microsoft Factor

I was fortunate enough to sit down with some of the key players in Microsoft’s security team last week and engage in a lively bit of banter regarding both practical and esoteric elements of where security has been, is now and will be in the near future.

On the tail of Mr. Chambers’ Interop keynote, the discussion was all abuzz regarding collaboration and WebX.0 and the wonders that will come of the technology levers in the near future as well as the, ahem, security challenges that this new world order will bring.  I’ll cover that little gem in another blog entry.

Some of us wanted to curl up into a fetal position.  Others saw a chance to correct material defects in the way in which the intersection of networking and security has been approached.  I think the combination of the two is natural and healthy and ultimately quite predictable in these conversations.

I did a bit of both, honestly.

As you can guess, given who I was talking to, much of what was discussed found its way back to a host-centric view of security with a heavy anchoring in the continued evolution of producing more secure operating systems, more secure code, more secure protocols and strong authentication paired with encryption.

I expected to roll my eyes a lot and figured that our conversation would gravitate towards UAC, and that a bulk helping of vapor functionality would be ladled generously into the dog-food bowls the Microsofties were eating from, dispensed with the usual disclaimers of "…when it’s available one day."

I am really glad I was wrong, and it just goes to show you that it’s important to consider a balanced scorecard in all this; listen with two holes, talk with one…preferably the correct one 😉

I may be shot for saying this in the court of popular opinion, but I think Microsoft is really doing a fantastic job in their renewed efforts toward improving security.  It’s not perfect, but the security industry is such a fickle and bipolar mistress — if you’re not 100% you’re a zero.

After spending all this time urging people that the future of security will not be delivered in the network proper, I haven’t focused enough attention on the advancements that are indeed creeping their way into the OS’s toward a more secure future, momentum that actually reinforces my point.

Yes, I work for a company that provides network-centric security offerings.  Does this contradict the statement I just made?  I don’t think so, and neither did the folks from Microsoft.  There will always be a need to consolidate certain security functionality that does not fit within the context of the host — at least within an acceptable timeframe as the nature of security continues to evolve.  Read on.

The network will become transparent.  Why?

In this brave new world, mutually-authenticated and encrypted network communications won’t be visible to the majority of the plumbing that’s transporting them.  Short of specific shunts to the residual overlay solutions that will remain in the network to provide controls that won’t make their way to the host, the network isn’t going to add much security value at all.

The Jericho Effect

What I found interesting is that I’ve enjoyed similar discussions with the distinguished fellows of the Jericho Forum wherein, after we’ve debated the merits of WHAT you might call it ("deperimeterization," "reperimeterization," or my favorite, "radical externalization"), the notion of HOW it actually happens weighs heavily on the evolution of security as we know it.

I have to admit that I’ve been a bit harsh on the Jericho boys before, but Paul Simmonds and I (or at least I did) came to the realization that my allergic reaction wasn’t to the concepts at hand, but rather the abrasive marketing of the message.  Live and learn.

Both sets of conversations basically see the pendulum effect of security in action in this oversimplification of what Jericho posits is the future of security and what Microsoft can deliver — today:

Take a host with a secured OS, connect it into any network using whatever means you find appropriate, without regard for having to think about whether you’re on the "inside" or "outside."  Communicate securely, access and exchange data in policy-defined "zones of trust" using open, secure, authenticated and encrypted protocols.

If you’re interested in the non-butchered more specific elements of the Jericho Forum’s "10 Commandments," see here.

What I wasn’t expecting in marrying these two classes of conversation is that this future of security is much closer, and notably much more possible, than I readily expected…with a Microsoft OS, no less.   In fact, I got a demonstration of it.  It may seem like no big deal to some of you, but the underlying architectural enhancements to Microsoft’s Vista and Longhorn OS’s are a fantastic improvement on what we have had to put up with thus far.

One of the Microsoft guys fired up his laptop with a standard-issue off-the-shelf edition of Vista,  authenticated with his smartcard, transparently attached to the hotel’s open wireless network and then took me on a tour of some non-privileged internal Microsoft network resources.

Then he showed me some of the ad-hoc collaborative "People Near Me" peer2peer tools built into Vista — same sorts of functionality…transparent, collaborative and apparently quite secure (gasp!) all at the same time.

It was all mutually authenticated and encrypted and done so transparently to him.

He didn’t "do" anything; no VPN clients, no split-brain tunneling, no additional Active-X agents, no SSL or IPSec shims…it’s the integrated functionality provided by both IPv6 and IPSec in the NextGen IP stack present in Vista.

And in his words "it just works."   Yes it does.

He basically established connectivity and his machine reached out to a reachable read-only DC (after authentication and with encryption), which allowed him to transparently resolve "internal" vs. "external" resources.  Yes, the OS must still evolve to prevent exploitation of the OS itself, but this too shall become better over time.

No, it obviously doesn’t address what happens if you’re using a Mac or Linux, but the pressure will be on to provide the same sort of transparent, collaborative and secure functionality across those OS’s, too.

Allow me my generalizations — I know that security isn’t fixed and that we still have problems, but think of this as a glass half-full, willya!?

One of the other benefits I got from this conversation is the reminder that as Vista and Longhorn default to IPv6 natively (they can do both v4 & v6 dynamically,) as enterprises upgrade, the network hardware and software (and hence the existing security architecture) must also be able to support IPv6 natively.  It’s not just the government pushing v6; large enterprises are now standardizing on it, too.
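
If you want a quick sanity check on whether your own code is ready for that dual-stack world, the litmus test is whether it connects using whatever address family the OS hands back rather than assuming v4.  Here’s a minimal sketch in Python; the hostname and port are just examples, not a recommendation:

    # Minimal dual-stack sketch: getaddrinfo() returns both AAAA (IPv6) and
    # A (IPv4) results; try them in the order the OS prefers instead of
    # hard-coding an address family.  Hostname and port are examples only.
    import socket

    def connect_dual_stack(host="www.example.com", port=80, timeout=5):
        last_err = None
        for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
                host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
            try:
                s = socket.socket(family, socktype, proto)
                s.settimeout(timeout)
                s.connect(sockaddr)
                print("Connected to %s over %s" %
                      (sockaddr[0], "IPv6" if family == socket.AF_INET6 else "IPv4"))
                return s
            except OSError as err:
                last_err = err
        raise last_err or OSError("no usable addresses for %s" % host)

    if __name__ == "__main__":
        connect_dual_stack().close()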

Here are some excellent links describing the Nextgen IP stack in Vista, the native support for IPSec (goodbye VPN market,) and IPv6 support.

Funny how people keep talking about Google being a threat to Microsoft.  I think that the network giants like Cisco might have their hands full with Microsoft…look at how each of them is maneuvering.

/Hoff
{ Typing this on my Mac…staring @ a Vista Box I’m waiting to open to install within Parallels 😉 }

Yeah, I don’t get Symantec, either…HuaMantec?

May 27th, 2007 1 comment

Alan beat me to blogging about something I discussed @ our Interop Blogger’s dinner last week, namely the absolutely bewildering announcement made by Symantec:

Symantec Corp. and Huawei Technologies Co., Ltd. are forming a joint venture company to develop and distribute security and storage appliances to global telecommunications carriers and enterprises.

The joint venture will help operators and enterprises address challenges arising from maintaining IP networks and IT systems that support a growing number of connections. This requires balancing increasing performance and availability requirements with system security and data integrity.

Initially the offering will include security and storage appliances addressing those issues. The new company will be headquartered in Chengdu, China, with Huawei owning 51 percent of the joint venture and Symantec owning 49 percent.

Huawei will contribute its telecommunications storage and security businesses including its integrated supply chain and integrated product development management practices. Additionally, the new company will have access to Huawei’s intellectual property (IP) licenses, research and development capabilities.

Symantec will contribute some of its enterprise storage and security software licenses, working capital, and its management expertise into the new company. Symantec will also contribute US$150 million toward the joint venture’s growth and expansion.

The joint venture is expected to close late in the calendar year, pending required regulatory and governmental approvals.

What the hell, over!?  Perhaps they forgot about this announcement from around the same time last year, wherein ’twas quoted:

The announcement is evidence that Symantec is shifting its strategy away from being a "one stop shop" for security wares, and will focus on lucrative security management and services, said John Pescatore, a vice president at Gartner.

Symantec announced the changes internally yesterday, saying it was a "change in its investment strategy in the network and gateway security business." The news was accompanied by lay-offs affecting approximately 80 employees in the company’s SGS unit, a company spokeswoman said.

…after the 3Com buyout of the last venture between 3Com and Huawei, perhaps they’re going to pick up the pieces?  Are we going to see a yellow version of the M.I.A. 3Com M160 since they’re not doing anything with it?

Wow.

Perhaps the first thing they can do for the Chinese market is to fix the Symantec Autoupdate feature:

According to reports from the Chinese state media last night, an automatic update to the Chinese version of the Norton anti-virus software sent out last Friday identified two critical Windows XP files as malware and deleted them.

As a result, millions of Chinese PC users have had to re-install their operating systems or, if they have planned ahead (and are lucky), used the RESTORE function from the XP emergency recovery menu.

China Daily says that many companies are threatening to sue Symantec for large sums of money for lost working time. Symantec has reportedly made formal apology on Wednesday.

/Hoff


Categories: Uncategorized Tags:

My IPS (and FW, WAF, XML, DBF, URL, AV, AS) *IS* Bigger Than Yours Is…

May 23rd, 2007 No comments

Interop has been great thus far.  One of the most visible themes of this year’s show is (not surprisingly) the hyped emergence of 10Gb/s Ethernet.  10G isn’t new, but the market is now ripe with products supporting it: routers, switches, servers and, of course, security kit.

With this uptick in connectivity as well as the corresponding float in compute power thanks to Mr. Moore AND some nifty evolution of very fast, low latency, reasonably accurate deep packet inspection (including behavioral technology,) the marketing wars have begun on who has the biggest, baddest toys on the block.

Whenever this discussion arises, without question the notion of "carrier class" gets bandied about in order to essentially qualify a product as being able to withstand enormous amounts of traffic load without imposing latency. 

One of the most compelling reasons for these big pieces of iron (which are ultimately a means to an end to run software, after all) is the service provider/carrier/mobile operator market which certainly has its fair share of challenges in terms of not only scale and performance but also security.

I blogged a couple of weeks ago regarding the resurgence of what can be described as "clean pipes" wherein a service provider applies some technology that gets rid of the big lumps upstream of the customer premises in order to deliver more sanitary network transport.

What’s interesting about clean pipes is that much of what security providers talk about today is actually only a small part of what is needed.  Security providers, most notably IPS vendors, anchor the entire strategy of clean pipes around "threat protection" that appears somewhat one-dimensional.

This normally means getting rid of what is generically referred to today as "malware," arresting worm propagation and quashing DoS/DDoS attacks.  It doesn’t speak at all to the need for things that aren’t purely "security" in nature such as parental controls (URL filtering,) anti-spam, P2P, etc.  It appears that in the strictest definition, these aren’t threats?

So, this week we’ve seen the following announcements:

  • ISS announces their new appliance that offers 6Gb/s of IPS
  • McAfee announces their new appliance that offers 10Gb/s of IPS

The trumpets sounded and the heavens parted as these products were announced touting threat protection via IPS at levels supposedly never approached before.  More appliances.  Lots of interfaces.  Big numbers.  Yet to be seen in action.  Also, to be clear, a 2U rackmount appliance that is not DC-powered and not NEBS-certified isn’t normally called "carrier-class."

I find these announcements interesting because even with our existing products (which run ISS and Sourcefire’s IDS/IPS software, by the way) we can deliver 8Gb/s of firewall and IPS today and have been able to for some time.

Lisa Vaas over @ eWeek just covered the ISS and McAfee announcements and she was nice enough to talk about our products and positioning.  One super-critical difference is that along with high throughput and low latency you get to actually CHOOSE which IPS you want to run — ISS, Sourcefire and shortly Check Point’s IPS-1.

You can then combine that with firewall, AV, AS, URL filtering, web app. and database firewalls and XML security gateways in the same chassis to name a few other functions — all best of breed from top-tier players — and this is what we call Enterprise and Provider-Class UTM folks.

Holistically approaching threat management across the entire spectrum is really important along with the speeds and feeds, and we’ve all seen what happens when more and more functionality is added to the feature stack — you turn a feature on and you pay for it performance-wise somewhere else.  It’s robbing Peter to pay Paul.  The processing requirements necessary to do IPS at 10G line rates are different when you add AV to the mix.

The next steps will be interesting and we’ll have to see how the switch and overlay vendors rev up to make their move to have the biggest toys on the block.  Hey, whatever did happen to that 3Com M160?

Then there’s that little company called Cisco…

{Ed: Oops.  I made a boo-boo and talked about some stuff I shouldn’t have.  You didn’t notice, did you?  Ah, the perils of the intersection of Corporate Blvd. and Personal Way!  Lesson learned. 😉 }

 

Network Intelligence is an Oxymoron & The Myth of Security Packet Cracking

May 21st, 2007 No comments

[Live from Interop’s Data Center Summit]

Jon Oltsik crafted an interesting post today regarding the bifurcation of opinion on where the “intelligence” ought to sit in a networked world: baked into the routers and switches or overlaid using general-purpose compute engines that ride Moore’s curve.

I think that I’ve made it pretty clear where I stand.   I submit that you should keep the network dumb, fast, reliable and resilient and add intelligence (such as security) via flexible and extensible service layers that scale both in terms of speed but also choice.

You should get to define and pick what best of breed means to you and add/remove services at the speed of your business, not the speed of an ASIC spin or an acquisition of technology that is in line with neither the pace and evolution of classes of threats and vulnerabilities nor the speed of an agile business.

The focal point of his post, however, was to suggest that the real issue is the fact that all of this intelligence requires exposure to the data streams which means that each component that comprises it needs to crack the packet before processing.   Jon suggests that you ought to crack the packet once and then do interesting things to the flows.  He calls this COPM (crack once, process many) and suggests that it yields efficiencies — of what, he did not say, but I will assume he means latency and efficacy.

So, here’s my contentious point that I explain below:

Cracking the packet really doesn’t contribute much to the overall latency equation anymore thanks to high-speed hardware, but the processing sure as heck does!  So whether you crack once or many times, it doesn’t really matter, what you do with the packet does.

Now, on to the explanation…

I think that it’s fair to say that many of the underlying mechanics of security are commoditizing so things like anti-virus, IDS, firewalling, etc. can be done without a lot of specialization – leveraging prior art is quick and easy and thus companies can broaden their product portfolios by just adding a feature to an existing product.

Companies can do this because of the agility that software provides, not hardware.  Hardware can give you scales of economy as it relates to overall speed (for certain things) but generally not flexibility. 

However, software has its own Moore’s curve of sorts, and I maintain that unfortunately its lifecycle, much like what we’re hearing @ Interop regarding CPU’s, does actually have a shelf life and point of diminishing return for reasons that you’re probably not thinking about…more on this from Interop later.

Jon describes the stew of security componentry and what he expects to see @ Interop this week:

I expect network intelligence to be the dominant theme at this week’s Interop show in Las Vegas. It may be subtle but its definitely there. Security companies will talk about cracking packets to identify threats, encrypt bits, or block data leakage. The WAN optimization crowd will discuss manipulating protocols and caching files, Application layer guys crow about XML parsing, XSLT transformation, and business logic. It’s all about stuffing networking gear with fat microprocessors to perform one task or another.

That’s a lot of stuff tied to a lot of competing religious beliefs about how to do it all as Jon rightly demonstrates and ultimately highlights a nasty issue:

The problem now is that we are cracking packets all over the place. You can’t send an e-mail, IM, or ping a router without some type of intelligent manipulation along the way.

<nod>  Whether it’s in the network, bolted on via an appliance or done on the hosts, this is and will always be true.  Here’s the really interesting next step:

I predict that the next big wave in this evolution will be known as COPM for "Crack once, process many." In this model, IP packets are stopped and inspected and then all kinds of security, acceleration, and application logic actions occur. Seems like a more efficient model to me.

To do this, it basically means that this sort of solution requires Proxy (transparent or terminating) functionality.  Now, the challenge is that whilst “cracking the packets” is relatively easy and cheap even at 10G line rates due to hardware, the processing is really, really hard to do well across the spectrum of processing requirements if you care about things such as quality, efficacy, and latency and is “expensive” in all of those categories.

The intelligence of deciding what to process and how once you’ve cracked the packets is critical. 

This is where embedding this stuff into the network is a lousy idea. 

How can a single vendor possibly provide anything more than “good enough” security in a platform never designed to solve this sort of problem whilst simultaneously trying to balance delivery and security at line rate? 

This will require a paradigm shift for the networking folks.  It will mean either starting from scratch and integrating high-speed networking with general-purpose compute blades, re-purposing a chassis (like, say, a Cat65K) and stuffing it with nothing but security cards grafted onto the switches, or stacking appliances (big or small – single form factor or in blades) and grafting them onto the switches once again.   And by the way, simply adding networking cards to a blade server isn’t an effective solution, either.  "Regular" applications (and esp. SOA/Web 2.0 apps) aren’t particularly topology-sensitive.  Security "applications," on the other hand, are wholly dependent upon and integrated with the topologies into which they are plumbed.

It’s the hamster wheel of pain.

Or, you can get one of these which offers all the competency, agility, performance, resilience and availability of a specialized networking component combined with an open, agile and flexible operating and virtualized compute architecture that scales with parity based on Intel chipsets and Moore’s law.

What this gives you is an ecosystem of loosely-coupled BoB security services that can be intelligently combined in any order and through which a flow, once cracked, is ruthlessly manipulated as governed by policy – ultimately making decisions on how and what to do to a packet/flow based upon content in context.
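
For the sake of illustration, here’s a toy sketch of that "crack once, process many" idea: the flow gets parsed a single time into a context object and is then handed, in policy order, to whichever inspection services apply.  The service names, checks and policy table below are all invented examples, not a description of any real product:

    # Toy sketch of "crack once, process many": parse the flow a single time,
    # then run the resulting context through an ordered, policy-selected chain
    # of inspection services.  Services and policies here are invented examples.
    from dataclasses import dataclass, field

    @dataclass
    class FlowContext:
        src: str
        dst: str
        app_proto: str
        payload: bytes
        verdicts: list = field(default_factory=list)   # accumulated (service, ok) results

    def ids_service(ctx):
        ctx.verdicts.append(("ids", b"attack" not in ctx.payload))

    def av_service(ctx):
        ctx.verdicts.append(("av", b"EICAR" not in ctx.payload))

    def dlp_service(ctx):
        ctx.verdicts.append(("dlp", b"ssn=" not in ctx.payload))

    # Policy: which services apply to which application protocol, and in what order.
    POLICY = {
        "http": [ids_service, av_service, dlp_service],
        "smtp": [av_service, dlp_service],
        "dns":  [ids_service],
    }

    def process(ctx):
        for service in POLICY.get(ctx.app_proto, []):
            service(ctx)                # every service works on the already-parsed flow
        return all(ok for _, ok in ctx.verdicts)

    if __name__ == "__main__":
        flow = FlowContext("10.1.1.5", "192.0.2.80", "http", b"GET / HTTP/1.1")
        print("forward" if process(flow) else "drop", flow.verdicts)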

The consolidation of best of breed security functionality delivered in a converged architecture yields efficiencies that are spread across the domains of scale, performance, availability and security, but also across the traditional economic scopes of CapEx and OpEx.

Cracking packets, bah!  That’s so last Tuesday.

/Hoff

Off to Interop Las Vegas and Palo Alto Next Week…

May 16th, 2007 No comments

Off to Interop next week.  I’ll be there from Sunday (Data Center Summit) through Wednesday mid-morning.  If you’re going to be there, let’s grab a beer and chat.

I’ll be in Palo Alto on the 23rd, flying back to Boston on the 24th.

/Hoff

(…and in an advanced planning compendium, during May-June, I’ll be in Orlando, D.C., Dallas, New York, Atlanta, and some chunk of Europe for Crossbeam’s Next Generation Product  Launch activities)

Categories: Travel Tags:

Should Vendors Mitigate All Vulnerabilities Immediately?

May 15th, 2007 1 comment

I read an interesting piece by Roger Grimes @ InfoWorld wherein he described the situation of a vendor who was not willing to patch an unsupported version of software even though it was vulnerable and shown to be (remotely) exploitable.

Rather, the vendor suggested that using some other means (such as blocking the offending access port) was the most appropriate course of action to mitigate the threat.

What’s interesting about the article is not that the vendor is refusing to patch older unsupported code, but that ultimately Roger suggests that irrespective of severity, vendors should immediately patch ANY exploitable vulnerability — with or without public disclosure.

A reader who obviously works for a software vendor commented back with a reply that got Roger thinking, and it did the same for me.   The reader suggests that they don’t patch lower-severity vulnerabilities immediately (they actually "sit on them" until a customer raises a concern) but instead focus on the higher-severity discoveries:

The reader wrote to say that his company often sits on security bugs until they are publicly announced or until at least one customer complaint is made. Before you start disagreeing with this policy, hear out the rest of his argument.

“Our company spends significantly to root out security issues," says the reader. "We train all our programmers in secure coding, and we follow the basic tenets of secure programming design and management. When bugs are reported, we fix them. Any significant security bug that is likely to be high risk or widely used is also immediately fixed. But if we internally find a low- or medium-risk security bug, we often sit on the bug until it is reported publicly. We still research the bug and come up with tentative solutions, but we don’t patch the problem.”

In the best of worlds, I’d agree with Roger — vendors should patch all vulnerabilities as quickly as possible once discovered, irrespective of whether or not the vulnerability or exploit is made public.  The world would be much better — assuming of course that the end-user could actually mitigate the vulnerability by applying the patch in the first place.

Let’s play devil’s advocate for a minute…

Back here on planet Earth, vendors approach the prioritization of, and resource allocation for, mitigating vulnerabilities not unlike the way consumers choose to apply the resulting patches: most look at the severity of a vulnerability, start from the highest severity and work their way down.  That’s just the reality of my observation.

So, for the bulk of these consumers, is the vendor’s response out of line?  It seems in total alignment.

As a counterpoint to my own discussion here, I’d suggest that using prudent risk management best practice, one would protect those assets that matter most.  Sometimes this means that one would mitigate a Sev3 (medium) vulnerability over a Sev5 (highest) based upon risk exposure…this is where solutions like Skybox come into play.  Vendors can’t attach a weight to an asset; all they can do is assess the impact that an exploitable vulnerability might have on their product…
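
To put some toy numbers behind that, here’s the kind of back-of-the-napkin arithmetic a risk-driven shop does, where raw severity is only one term in the equation.  The weights, scale and example assets are all made up for illustration:

    # Back-of-the-napkin risk scoring: severity alone doesn't set the patch
    # order.  The weights, scale and example assets are invented.
    def risk_score(severity, asset_criticality, exposure):
        """severity: 1-5, asset_criticality: 1-5, exposure: 0.0-1.0 (reachability)."""
        return severity * asset_criticality * exposure

    vulns = [
        ("Sev5 hole on an isolated lab box",             risk_score(5, 1, 0.1)),
        ("Sev3 hole on the Internet-facing payroll app", risk_score(3, 5, 0.9)),
    ]

    for name, score in sorted(vulns, key=lambda v: v[1], reverse=True):
        print("%5.1f  %s" % (score, name))
    # The Sev3 on the exposed, critical asset wins: 13.5 vs. 0.5.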

The reader’s last comment caps it off neatly with a challenge:

“Industry pundits such as yourself often say that it benefits customers more when a company closes all known security holes, but in my 25 years in the industry, I haven’t seen that to be true. In fact I’ve seen the exact opposite. And before you reply, I haven’t seen an official study that says otherwise. Until you can provide me with a research paper, everything you say in reply is just your opinion. With all this said, once the hole is publicly announced, or becomes high-risk, we close it. And we close it fast because we already knew about it, coded a solution, and tested it.”

I’m not sure I need an official study to respond to this point, but I’d be interested to know if there were such a thing.  Gerhard Eschelbeck has been studying vulnerabilities and their half-lives for some time.  I’d be interested to see how this plays out.

So, read the gentleman’s posts; in some cases his comments are understandable and in others they’re hard to swallow…it definitely depends upon which side (if not both sides) of the fence you stand on.  All vendors are ultimately consumers in one form or another…

Thoughts?

/Hoff

BeanSec! 9 – May 16th – 6PM to ?

May 14th, 2007 5 comments

Yo!  BeanSec! 9 is upon us.

BeanSec! is an informal meetup of information security professionals, researchers and academics in the Greater Boston area that meets the third Wednesday of each month.  When I’m able to attend (and that’s most of the time) I buy the booze and appetizers.  It’s how we roll.

Come get your grub on.  Lots of good people show up.  Really.

Unlike other meetings, you will not be expected to pay dues, “join up”, present a zero-day exploit, or defend your dissertation to attend.

Map to the Enormous Room in Cambridge.

Enormous Room: 567 Mass Ave, Cambridge 02139

Categories: BeanSec! Tags:

Security: “Built-in, Overlay or Something More Radical?”

May 10th, 2007 No comments

I was reading Joseph Tardo’s (Nevis Networks) new Illuminations blog and found the topic of his latest post, "Built-in, Overlay or Something More Radical?", regarding the possible future of network security quite interesting.

Joseph (may I call you Joseph?) recaps the topic of a research draft from Stanford funded by the "Stanford Clean Slate Design for the Internet" project that discusses an approach to network security called SANE.   The notion of SANE (AKA Ethane) is a policy-driven security services layer that utilizes intelligent centrally-located services to replace many of the underlying functions provided by routers, switches and security products today:

Ethane is a new architecture for enterprise networks which provides a powerful yet simple management model and strong security guarantees.  Ethane allows network managers to define a single, network-wide, fine-grain policy, and then enforces it at every switch.  Ethane policy is defined over human-friendly names (such as "bob", "payroll-server", or "http-proxy") and dictates who can talk to whom and in which manner.  For example, a policy rule may specify that all guest users who have not authenticated can only use HTTP and that all of their traffic must traverse a local web proxy.

Ethane has a number of salient properties difficult to achieve with network technologies today.  First, the global security policy is enforced at each switch in a manner that is resistant to spoofing.  Second, all packets on an Ethane network can be attributed back to the sending host and the physical location in which the packet entered the network.  In fact, packets collected in the past can also be attributed to the sending host at the time the packets were sent — a feature that can be used to aid in auditing and forensics.  Finally, all the functionality within Ethane is provided by very simple hardware switches.

The trick behind the Ethane design is that all complex functionality, including routing, naming, policy declaration and security checks, is performed by a central controller (rather than in the switches as is done today).  Each flow on the network must first get permission from the controller, which verifies that the communication is permissible by the network policy.  If the controller allows a flow, it computes a route for the flow to take, and adds an entry for that flow in each of the switches along the path.

With all complex function subsumed by the controller, switches in Ethane are reduced to managed flow tables whose entries can only be populated by the controller (which it does after each successful permission check).  This allows a very simple design for Ethane switches using only SRAM (no power-hungry TCAMs) and a little bit of logic.

I like many of the concepts here, but I’m really wrestling with the scaling concerns that arise when I forecast the literal bottlenecking of admission/access control proposed therein.
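
To make both the model and the potential chokepoint concrete, here’s a toy sketch of the split the paper describes: dumb switches hold nothing but flow tables, and the first packet of every new flow gets punted to a central controller that checks the (human-friendly) policy and, if permitted, installs an entry back in the switch.  All of the names, bindings and the single policy rule below are invented for illustration; this is a thought experiment, not Ethane’s actual code:

    # Toy sketch of the Ethane/SANE model quoted above: switches keep only a
    # flow table populated by a central controller; the first packet of any
    # flow is sent to the controller for a policy check.  All names invented.
    class Switch:
        def __init__(self, name):
            self.name = name
            self.flow_table = {}                 # (src, dst, proto) -> action

        def forward(self, flow, controller):
            if flow not in self.flow_table:      # table miss: punt to the controller
                controller.handle_miss(flow, self)
            return self.flow_table.get(flow, "drop")

    class Controller:
        # Policy over human-friendly names, roughly in the spirit of the draft.
        POLICY = {("bob", "payroll-server", "http"): "allow"}
        BINDINGS = {"10.0.0.5": "bob", "10.0.1.20": "payroll-server"}

        def handle_miss(self, flow, switch):
            src, dst, proto = flow
            rule = (self.BINDINGS.get(src, "?"), self.BINDINGS.get(dst, "?"), proto)
            action = self.POLICY.get(rule, "drop")
            switch.flow_table[flow] = action     # controller installs the flow entry

    if __name__ == "__main__":
        sw, ctl = Switch("edge-1"), Controller()
        print(sw.forward(("10.0.0.5", "10.0.1.20", "http"), ctl))   # allow
        print(sw.forward(("10.0.0.5", "10.0.1.20", "ssh"), ctl))    # drop

Note that every new flow transits the controller before a single packet moves, which is exactly the admission/access-control bottleneck I’m wrestling with.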

Furthermore, and more importantly, while SANE speaks to being able to define who "Bob"  is and what infrastructure makes up the "payroll server,"  this solution seems to provide no way of enforcing policy based on content in context of the data flowing across it.  Integrating access control with the pseudonymity offered by integrating identity management into policy enforcement is only half the battle.

The security solutions of the future must evolve to divine and control not only vectors of transport but also the content and relative access that the content itself defines dynamically.

I’m going to suggest that by bastardizing one of the Jericho Forum’s commandments for my own selfish use, the network/security layer of the future must ultimately respect and effect disposition of content based upon the following rule (independent of the network/host):

Access to data should be controlled by security attributes of the data itself.

  • Attributes can be held within the data (DRM/Metadata) or could be a separate system.
  • Access / security could be implemented by encryption.
  • Some data may have “public, non-confidential” attributes.
  • Access and access rights have a temporal component. 

 

Deviating somewhat from Jericho’s actual meaning, I am intimating that somehow, somewhere, data must be classified and self-describe the policies that govern how it is published and consumed and ultimately this security metadata can then be used by the central policy enforcement mechanisms to describe who is allowed to access the data, from where, and where it is allowed to go.
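
As a strawman for what "the data describes its own handling" might look like, imagine every object carrying a small blob of security metadata that the enforcement point consults before releasing it.  Everything in the sketch below (the labels, zones and fields) is invented to illustrate the idea, not a proposal for an actual format:

    # Strawman: the object carries its own security attributes and the policy
    # enforcement point decides disposition from that metadata alone, regardless
    # of which network or host it happens to be sitting on.  All fields invented.
    from datetime import datetime, timezone

    document = {
        "payload": b"Q2 payroll run",
        "security": {
            "classification": "confidential",
            "allowed_zones": {"finance", "hr"},        # policy-defined zones of trust
            "expires": "2007-12-31T23:59:59+00:00",    # temporal component of access
            "encrypt_in_transit": True,
        },
    }

    def may_release(obj, requester_zone, now=None):
        meta = obj["security"]
        now = now or datetime.now(timezone.utc)
        if now > datetime.fromisoformat(meta["expires"]):
            return False                               # access rights have lapsed
        if meta["classification"] != "public" and requester_zone not in meta["allowed_zones"]:
            return False                               # requester's zone isn't trusted for this data
        return True

    when = datetime(2007, 6, 1, tzinfo=timezone.utc)
    print(may_release(document, "finance", now=when))      # True
    print(may_release(document, "guest-wifi", now=when))   # False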

…Back to the topic at hand, SANE:

As Joseph alluded, SANE would require replacing (or not using much of the functionality of) currently-deployed routers, switches and security kit.  I’ll let your imagination address the obvious challenges with this design.

Without delving deeply, I’ll use Joseph’s categorization of "interesting-but-impractical."

/Hoff