BigFix Comes Out Swinging — With a Gun-Toting Vulcan Marine Hottie…

June 5th, 2007 12 comments

No, this is not an ad for BigFix. It is about an ad for BigFix, however.  If you’re at Gartner, methinks Pescatore might describe what I’m highlighting as "Security Marketing 3.0" 😉

Anywho, I was reading the USA Today this morning, dutifully delivered to my hotel room by the fine folks at Marriott, and as I pawed through the business section, I hit page 8B.

Page 8B features a full-page black-and-white ad from BigFix.  No big deal, you say; there are lots of ads from IT companies in newspapers.

Sure, but generally they’re not from IT Security companies, they’re usually not from companies this size, they’re usually not a full page, they usually don’t feature big-breasted, gun-toting, Vulcan, Marine recon soldiers, and they usually don’t say things like this:

[Overlaid on top of picture of said big-breasted, gun-toting, Vulcan, Marine recon person…]

Contrary to the impotent baloney from McAfee/Symantec/et al, it doesn’t take weeks and an army of servers to secure all your computers.  You just need one can of BIGFIX whup-ass.

What can you do from one console with a single, policy-driven BIGFIX agent? How about continuous discovering, assessing, remediating, optimizing and enforcing the health/security of hundreds of thousands of computers in minutes?  Yup.  Minutes.

Windows, Vista, Linux/Unix and mac systems.  Nobody else can do this.  And we’re making sure everyone else is more than a little embarrassed about it.  Ooh-rah!

Interesting…picture, text, messaging…  It got my attention.  I wonder if it will get the attention of anyone else — or more importantly the right set of people?  Was it just in this edition or countrywide?  Any other papers?

Amrit, you have anything to do with this?

/Hoff


Categories: Marketing

Profiling Data At the Network-Layer and Controlling Its Movement Is a Bad Thing?

June 3rd, 2007 2 comments

I’ve been watching what looks like a car crash in slow motion, and for some strange reason I share some affinity and/or responsibility for what is unfolding in the debate between Rory and Rob.

What motivated me to comment on this ongoing exploration of data-centric security was Rory’s last post, in which he appears to refer to some points I raised in my original post but still seems bent on the idea that the crux of my concept was tied to DRM:

So… am I anti-security? Nope, I’m extremely pro-security. My feeling, however, is that the best way to implement security is in ways that are invisible to users. Every time you make ordinary business people think about security (e.g., usernames/passwords) they try their darndest to bypass those requirements.

That’s fine and I agree.  The concept of ADAPT is completely transparent to "users."  This doesn’t obviate the fact that someone will have to be responsible for characterizing what is important and relevant to the business in terms of "assets/data," attaching weight/value to them, and setting some policies regarding how to mitigate impact and ultimately risk.

Personally I’m a great fan of network segregation and defence in depth at the network layer. I think that devices like the ones Crossbeam produce are very useful in coming up with risk profiles on a network-by-network basis, rather than a data basis, and managing traffic in that way. The reason for this is that then the segregation and protections can be applied without the intervention of end-users and without them (hopefully) having to know about what security is in place.

So I think you’re still missing my point.  The example I gave of the X-Series using ADAPT takes a combination of best-of-breed security software components such as FW, IDP, WAF, XML, AV, etc. and provides you with segregation as you describe.  HOWEVER, the (r)evolutionary delta here is that the ADAPT profiling of content set by policies which are invisible to the user at the network layer allows one to make security decisions on content in context and control how data moves.

So, to use the phrase that I’ve seen in other blogs on this subject, I think that the "zones of trust" are a great idea, but the zones shouldn’t be based on the data that flows over them, but on the user/machine involved. It’s the idea of tagging all that data with the right tags and controlling its flow that bugs me.

…and thus it’s obvious that I completely and utterly disagree with this statement.  Without tying some sort of identity (pseudonymity) to the user/machine AND combining it with identifying the channels (applications) and the content (payload) you simply cannot make an informed decision as to the legitimacy of the movement/delivery of this data.

I used the case of being able to utilize client-side tagging as an extension to ADAPT, NOT as a dependency.  Go back and re-read the post; it’s a network-based transparent tagging process that attaches the tag to the traffic as it moves around the network.  I don’t understand why that would bug you.

So that’s where my points in the previous post came from, and I still reckon they’re correct. Data tagging and parsing relies on the existence of standards and their uptake in the first instance, and then on users *actually using them*, and personally I think that’s not going to happen in general companies and is therefore not the best place to be focusing security effort…

Please explain this to me: what standards need to exist in order to tag data — unless, of course, you’re talking about the heterogeneous exchange and integration of tagging data at the client side across platforms?  Not so if you do it at the network layer WITHIN the context of the architecture I outlined; the clients, switches, routers, etc. don’t need to know a thing about the process as it’s done transparently.

I wasn’t arguing that this is the end-all-be-all of data-centric security, but it’s worth exploring without deadweighting it with the negative baggage of DRM and the existing DLP/extrusion-prevention technologies and methodologies.

ADAPT is doable and real; stay tuned.

/Hoff

None of you Bastadges Use Trackbacks Anymore!?

June 1st, 2007 11 comments

A personal plea…

I spend a decent amount of time trying to engage folks in discussion.  I blog and expect that there will be those who agree and those who disagree with my comments.  I am truly interested in seeing both perspectives in your responses.

In order to do that, however, I have to know that you’ve written something in response.  Unless you leave a comment I can’t tell that, especially if you’ve authored a response on your blog and don’t leave a trackback.

Really, how hard is that, exactly?

I do that with every post I reference.  Can I ask you a favor and do the same for me?

Your opinion counts.  Make it so, please.

/Hoff

Categories: General Rants & Raves

For Data to Survive, It Must ADAPT…

June 1st, 2007 2 comments


Now that I’ve annoyed you by suggesting that network security will become irrelevant over time as advances in OS and protocol transport and operation erode its visibility, allow me to give you another nudge toward the edge and further reinforce my theories with some additional, practical data-centric security perspectives.

If any form of network-centric security solution is to succeed in adding value over time, decisions about applying policy and effecting disposition on flows as they traverse the network must be made on content in context.  That means we must get to a point where we can make “security” decisions based upon information and its “value” and classification as it moves about.

It’s not good enough to make decisions on how flows/data should be characterized and acted upon using criteria focused solely on the 5-tuple (header), signature-driven profiling, or even behavioral analysis that doesn’t characterize the content in context of where it’s coming from, where it’s going and who (machine, virtual machine and “user”) or what (application, service) intends to access and consume it.

In the best of worlds, we’d like to be able to classify data before it makes its way through the IP stack and enters the network, and use this metadata as an attached descriptor of the ‘type’ of content the data represents.  We could do this as the data is created by applications (thick or thin, rich or basic), either using the application itself or by using a client-side agent that profiles the data prior to storage or transmission.

Since I’m on my Jericho Forum kick lately, here’s how they describe how data ought to be controlled:

Access to data should be controlled by security attributes of the data itself.

  • Attributes can be held within the data (DRM/Metadata) or could be a separate system.
  • Access / security could be implemented by encryption.
  • Some data may have “public, non-confidential” attributes.
  • Access and access rights have a temporal component.
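
To make those bullets a bit more concrete, here’s a strawman sketch in Python (my own illustration, not anything the Jericho Forum specifies) of what such a self-describing attribute set might look like:

    from dataclasses import dataclass
    from datetime import datetime
    from typing import Optional

    @dataclass
    class DataSecurityAttributes:
        """Strawman security attributes carried with (or alongside) the data."""
        classification: str                      # e.g. "confidential" or "public, non-confidential"
        embedded: bool = True                    # held within the data (DRM/metadata) vs. a separate system
        enforced_by_encryption: bool = False     # access/security implemented by encryption
        not_before: Optional[datetime] = None    # temporal component of access rights
        not_after: Optional[datetime] = None

        def access_permitted(self, now: datetime) -> bool:
            # "Public, non-confidential" data is always readable.
            if self.classification.startswith("public"):
                return True
            if self.not_before and now < self.not_before:
                return False
            if self.not_after and now > self.not_after:
                return False
            return True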

You would probably need client-side software to provide this functionality.  As an example, we do this today with email compliance solutions that include primitive versions of this sort of capability, forcing users to declare the classification of an email before they can hit the send button, or with the document info that can be created when one authors a Word document.

There are a bunch of ERM/DRM solutions in play today that are bandied about and sold as “compliance” solutions, but their value goes much deeper than that.  IP leakage/extrusion prevention systems (with or without client-side tie-ins) try to do similar things, also.

Ideally, this metadata would be used as a fixed descriptor of the content that permanently attaches itself and follows that content around so it can be used to decide what content should be “routed” based upon policy.

If we’re not able to use this file-oriented static metadata, we’d like then for the “network” (or something in/on it) to be able to dynamically profile content at wirespeed and characterize the data as it moves around the network from origin to destination in the same way.

So, this is where Applied Data & Application Policy Tagging (ADAPT) comes in.  ADAPT is an approach that can make use of existing and new technology to profile and characterize content (by using content matching, signatures, regular expressions and behavioral analysis in hardware or software) and then apply policy-driven information “routing” functionality as flows traverse the network, either by using 802.1Q q-in-q VLAN tags (the open approach) or by applying a proprietary ADAPT tag-header as a descriptor to each flow as it moves around the network.

Think of it like a VLAN tag that describes the data within the packet/flow, defined however you see fit:

The ADAPT tag/VLAN is user-defined and can use any taxonomy that best suits the types of content that are interesting; one might use asset classifications such as “confidential” or taxonomies such as “HIPAA” or “PCI” to describe what is contained in the flows.  One could combine and/or stack the tags, too.  The tag maps to one of these arbitrary categories, which could be fed by interpreting metadata attached to the data itself (if in file form) or dynamically by on-the-fly profiling at the network level.
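
As a purely illustrative sketch (mine, not a product spec), here’s roughly what that profiling-and-mapping step might look like; the taxonomy names, VLAN IDs and regexes are all hypothetical:

    import re

    # Hypothetical taxonomy -> q-in-q VLAN tag ID mapping; the IDs are arbitrary.
    ADAPT_TAGS = {
        "HIPAA": 100,
        "PCI": 200,
        "Confidential": 300,
    }

    # Toy classifiers; a real engine would use hardware-assisted signature,
    # regex and behavioral analysis, not two naive regexes.
    CLASSIFIERS = {
        "PCI": re.compile(r"\b(?:\d[ -]?){13,16}\b"),    # card-number-ish strings
        "HIPAA": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # SSN-ish strings
    }

    def profile(payload):
        """Return the (stackable) list of ADAPT tags matching this payload."""
        return [ADAPT_TAGS[name] for name, rx in CLASSIFIERS.items() if rx.search(payload)]

    print(profile("claim for patient SSN 123-45-6789"))   # -> [100]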

As data moves across the network and across what we call boundaries (zones) of trust, the policy tags are parsed and disposition is effected based upon the language governing the rules.  If you use the “open” version based on q-in-q VLANs, you have something on the order of 4096 VLAN IDs to choose from…more than enough to accommodate most asset classifications and still leave room for normal VLAN usage.  Enforcing the ACLs can be done, very quickly, by pretty much ANY modern switch that supports q-in-q.

Just like an ACL for IP addresses or VLAN policies, ADAPT does the same thing for content routing, but uses VLAN IDs (or the proprietary ADAPT header) to enforce it.
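
To show the q-in-q mechanics themselves, here’s a hedged Scapy sketch (again, my illustration of stacked 802.1Q headers, not how any product actually builds its frames): the outer tag carries the ADAPT classification while the inner tag preserves the original segment VLAN.

    from scapy.all import Ether, Dot1Q, IP, TCP

    ADAPT_TAG_HIPAA = 100   # hypothetical outer tag carrying the classification
    ORIGINAL_VLAN = 30      # the VLAN the frame was already on

    # Q-in-q: stack the ADAPT tag outside the original 802.1Q tag. Any
    # q-in-q-capable switch can then enforce an ACL keyed on the outer VLAN ID.
    frame = (Ether(src="00:11:22:33:44:55", dst="66:77:88:99:aa:bb")
             / Dot1Q(vlan=ADAPT_TAG_HIPAA)
             / Dot1Q(vlan=ORIGINAL_VLAN)
             / IP(src="10.1.1.10", dst="10.4.4.20")
             / TCP(dport=80))

    frame.show()   # inspect the two stacked 802.1Q headers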

To enable this sort of functionality, every switch/router in the network would need to either be q-in-q capable (which most switches are these days) or ADAPT-enabled (which would be difficult, since you’d need every network vendor to support the protocol).  Alternatively, you could use an overlay UTM security services switch sitting on top of the network plumbing, through which all traffic moving from one zone to another would be subject to the ADAPT policy, since each flow has to go through said device.

Since the only device that needs to be ADAPT-aware is this UTM security services switch (see the example below), you can let the network do what it does best and utilize this solution to enforce the policy for you across these boundary transitions.  Said UTM security services switch needs an extremely high-speed content security engine that is able to characterize the data at wirespeed and add a tag to the frame as it moves through the switching fabric, prior to the frame popping back out onto the network.

Clearly this switch would have to have coverage across every network segment.  It wouldn’t work well in virtualized server environments or any topology where zoned traffic is not subject to transit through the UTM switch.

I’m going to be self-serving here and demonstrate this “theoretical” solution using a Crossbeam X80 UTM security services switch plumbed into a very fast, reliable, and resilient L2/L3 Cisco infrastructure.  It just so happens to have a wire-speed content security engine installed in it.  The reason the X-Series can do this is because once the flow enters its switching fabric, I own the ultimate packet/frame/cell format and can prepend any header functionality I like onto the structure to determine how it gets “routed.”

Take the example below where the X80 is connected to the layer-3 switches using 802.1q VLAN trunked interfaces.  I’ve made this an intentionally simple network using VLANs and L3 routing; you could envision a much more complex segmentation and routing environment, obviously.

This network is chopped up into four VLAN segments:

  1. General Clients (VLAN A)
  2. Finance & Accounting Clients (VLAN B)
  3. Financial Servers (VLAN C)
  4. HR Servers (VLAN D)

Each of the clients/servers in the respective VLANs default-routes to a firewall cluster IP address proffered by the firewall application modules providing service in the X80.

Thus, to get from one VLAN to another, one must pass through the X80 and be profiled by this content security engine and whatever additional UTM services are installed in the chassis (such as firewall, IDP, AV, etc.).

Let’s say then that a user in VLAN A (General Clients) attempts to access one or more resources in VLAN D (HR Servers).

Using solely IP addresses and/or L2 VLANs, let’s say the firewall and IPS policies allow this behavior, as the clients in that VLAN have a legitimate need to access the HR intranet server.  However, let’s say that this user tries to access data on that server which contains personally identifiable information falling under the governance/compliance mandates of HIPAA.

Let us further suggest that the ADAPT policy states the following:

Rule   Source             Destination        ADAPT Descriptor       Action
===========================================================================
1      VLAN A (IP.1.1)    VLAN D (IP.3.1)    HIPAA, Confidential    Deny
2      VLAN B (IP.2.1)    VLAN C (IP.4.1)    PCI                    Allow

Using rule 1 above, as the client makes the request, it transits from VLAN A to VLAN D.  The reply containing the requested information is profiled by the content security engine, which is able to characterize the data as matching our definition of either “HIPAA” or “Confidential” (purely arbitrary for the sake of this example).

This could be done by reading the metadata if it exists as an attachment to the content’s file structure, in cooperation with an extrusion prevention application running in the chassis, or in the case of ad-hoc web-based applications/services, done dynamically.

According to the ADAPT policy above, this data would then be silently dropped or, depending upon what “deny” means, the user would perhaps be redirected to a webpage informing them of a policy violation.

Rule 2 above would allow authorized IPs in VLAN B to access PCI-classified data in VLAN C.
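
If it helps, here’s a toy rendering of how the table above might be evaluated; the rule layout and function names are mine, purely for illustration:

    # Toy first-match evaluation of the ADAPT policy table above.
    POLICY = [
        # (src zone, dst zone, descriptors that trigger the rule, action)
        ("VLAN A", "VLAN D", {"HIPAA", "Confidential"}, "deny"),
        ("VLAN B", "VLAN C", {"PCI"}, "allow"),
    ]

    def disposition(src, dst, content_tags):
        """Match the (zone pair, profiled content tags) against the policy."""
        for rule_src, rule_dst, descriptors, action in POLICY:
            if (src, dst) == (rule_src, rule_dst) and content_tags & descriptors:
                return action
        return "allow"   # default-permit, purely for illustration

    # The HR reply profiled as HIPAA trips rule 1 for the VLAN A -> VLAN D session:
    print(disposition("VLAN A", "VLAN D", {"HIPAA"}))   # -> "deny"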

You can imagine how one could integrate IAM and extend the policies to include pseudonymity/identity as a function of access, also.  Or, one could profile the requesting application (browser, for example) to define whether or not this is an authorized application.  You could extend the actions to lots of stuff, too.

In fact, I alluded to it in the first paragraph: if we back up a step and look at where virtualization is driving the consolidation of functions/services, one could also use the principles of ADAPT to extend the ACL functionality that exists in switching environments to control/segment/zone access to/from virtual machines (VMs) of different asset/data classification and security zones.

What this translates to is a workflow/policy instantiation that would use the same logic to prevent VM1 from communicating with VM2 if there were a “zone” mismatch; as we add data classification in context, you could have various levels of granularity that define access based not only on the VM but on the VM and the data trafficked by it.
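
A quick sketch of that same logic applied to inter-VM flows (the zone labels and VM inventory are hypothetical):

    # Same first-match idea, keyed on VM zone labels instead of VLANs.
    VM_ZONES = {"vm1": "finance", "vm2": "general"}   # hypothetical inventory

    def vm_flow_allowed(src_vm, dst_vm, content_tags):
        # A zone mismatch alone can block; content tags add finer granularity.
        if VM_ZONES[src_vm] != VM_ZONES[dst_vm] and "Confidential" in content_tags:
            return False
        return True

    print(vm_flow_allowed("vm1", "vm2", {"Confidential"}))   # -> False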

Furthermore, assuming this service was deployed internally and you could establish a trusted CA with certs that would support transparent MITM SSL decrypts, you could do this (with appropriate scale) with encrypted traffic also.

This is data-centric security that uses the network when needed, the host when it can and the notion of both static and dynamic network-borne data classification to enforce policy in real-time.

/Hoff

[Comments/Blogs on this entry you might be interested in but that have no trackbacks set:

MCWResearch Blog

Rob Newby’s Blog

Alex Hutton’s Blog

Security Retentive Blog]

Off to Gartner IT Security Summit in D.C. next week.

June 1st, 2007 No comments

Off to DC for this coming week’s Gartner IT Security Summit at the Marriott Wardman Park Hotel in D.C.

I will be there from Sunday through Wednesday.

Speaking on Tuesday from 4:00-4:30 regarding the Generation Crossbeam Security Ecosystem.

You know the drill…ping me if you’re going to be there.  Evil shall ensue.  Please bring bail money.

/Hoff

Categories: Travel

Off to Orlando for Check Point Experience 2007 This Week…

May 29th, 2007 No comments

Off to sunny (?) Orlando this week for Check Point Experience 2007.  I’ll be there from Tuesday through Friday.  We’ll be playing party favorites like "Taunt the Nokia Rep." and seeing who dies first in Fear Factor Live.

You know the drill.  If you’re going to be there, ping me.  I’m delivering one of the keynotes (can you call it that?) on Wednesday afternoon.  I’m staying at the Ritz, so you can send the assassination squads there, please.

For those of you who aren’t aware, the venue for Check Point Experience in Paris was surrendered, so if we were going to connect there, it will probably have to wait until Bangkok in August or Munich in September, assuming all goes well with the planning for those venues.

/Hoff

Categories: Travel

Heisenbugs: The Case of the Visibly Invisible Rogue Virtual Machine

May 28th, 2007 No comments

A Heisenbug is defined by frustrated programmers trying to mitigate a defect as:

     A bug that disappears or alters its behavior when one attempts to probe or isolate it.

In the case of a hardened rogue virtual machine (VM) sitting somewhere on your network, trying to probe or isolate it yields a similar frustration index for the IDS analyst as for the pissed-off code jockey unable to locate a bug in a trace.

In many cases, simply nuking it off the network is not good enough.  You want to know where it is (logically and physically,) how it got there, and whose it is.

Here’s the scenario I was reminded of last week when discussing a nifty experience I had in this regard.  It’s dumbed down a bit.

Here’s what transpired for about an hour or so one Monday morning:

1) IDP console starts barfing about unallocated RFC address space suddenly in use by a host on an internal network segment.  The traffic isn’t malicious, but the host seems to be talking to the Internet, doing some DNS against the (actual name) "attacker.com" domain…

2) We see the same address popping up on the external firewall rulesets in the drop rule.

3) We start to work backwards from the sensor on the beaconing segment as well as the perimeter firewall.

4) Ping it.  No response.

5) Traceroute it.  Stops at the local default with nowhere to go since the address space is not routed.

6) Look in the CAM tables for interface usage in the switch(es).  The MAC is coming through the trunk uplink ports.

7) Trace it to a switch.  Isolate the MAC address and correlate it to something unique?  Ummm…

8) On a segment with a collection of 75+ hosts with workgroup hubs…can’t find the damned thing.  IDP console still barfing.

9) One of the security team comes back from lunch.  Sits down near us and logs in.  Reboots a PC.

10) IDP alerts go dead.  All eyes on Cubicle #3.

…turns out that he had been working @ home the previous night on his laptop, on which (on his home LAN) he uses VMware for security research to test how our production systems will react under attack.  He was using the NAT function and was spoofing the MAC as part of one of his tests.  The machine was talking to Windows Update and his local DNS hosts on the (now) imaginary network.

He bought lunch for the rest of us that day and was reminded that while he was authorized to use such tools based upon policy and job function, he shouldn’t use them on machines plugged into the internal network…or at least he should turn VMware off ;(

/Hoff

Categories: Virtualization, VMware Tags:

Network Security is Dead…It’s All About the Host.

May 28th, 2007 6 comments

No, not entirely, as it’s really about the data, but I had an epiphany last week.

I didn’t get any on me, but I was really excited about the — brace yourself — future of security in a meeting I had with Microsoft.  It reaffirmed my belief that while some low-hanging security fruit will be picked off by the network, the majority of the security value won’t be delivered by it.

I didn’t think I’d recognize just how much of it — in such a short time — will ultimately make its way back into the host (OS,) and perhaps you didn’t either.

We started with centralized host-based computing and moved to client-server.  We’ve had Web1.0, are in the beginnings of WebX.0, and I ultimately believe that we’re headed back to a centralized host-based paradigm now that the network transport is fast, reliable and cheap.

That means that a bunch of the stuff we use today to secure the "network" will gravitate back towards the host.  I’ve used Scott McNealy’s mantra as he intended it before, in order to provide some color to conversations, but I’m going to butcher it here.

While I agree that, in the abstract, the "Network is the Computer," in order to secure it you’re going to have to treat the "network" like an OS…hard to do.  That’s why I think more and more security will make its way back to the actual "computer" instead.

Much of the strategy of the large security vendors sees an increasing footprint back on the host.  It’s showing up there today in the guise of AV, HIPS, configuration management, NAC and extrusion prevention, but it’s going to play a much, much loftier role as time goes on, as the level of interaction and interoperability must increase.  Rather than put 10+ agents on a box, imagine if that stuff were already built in?

Heresy, I suppose.

I wager that the "you can’t trust the endpoint" and "all security will make its way into the switch" crowds will start yapping on this point, but before that happens, let me explain…

The Microsoft Factor

I was fortunate enough to sit down with some of the key players in Microsoft’s security team last week and engage in a lively bit of banter regarding some both practical and esoteric elements of where security has been, is now and will be in the near future. 

On the tail of Mr. Chambers’ Interop keynote, the discussion was all abuzz regarding collaboration and WebX.0 and the wonders that will come of the technology levers in the near future as well as the, ahem, security challenges that this new world order will bring.  I’ll cover that little gem in another blog entry.

Some of us wanted to curl up into a fetal position.  Others saw a chance to correct material defects in the way in which the intersection of networking and security has been approached.  I think the combination of the two is natural and healthy and ultimately quite predictable in these conversations.

I did a bit of both, honestly.

As you can guess, given who I was talking to, much of what was discussed found its way back to a host-centric view of security with a heavy anchoring in the continued evolution of producing more secure operating systems, more secure code, more secure protocols and strong authentication paired with encryption.

I expected to roll my eyes a lot and figured that our conversation would gravitate towards UAC, and that a bulk helping of vapor functionality would be dispensed with the normal disclaimers urging "…when it’s available one day" as it was ladled generously into the dog-food bowls the Microsofties were eating from.

I am really glad I was wrong, and it just goes to show you that it’s important to consider a balanced scorecard in all this; listen with two holes, talk with one…preferably the correct one 😉

I may be shot for saying this in the court of popular opinion, but I think Microsoft is really doing a fantastic job in their renewed efforts toward improving security.  It’s not perfect, but the security industry is such a fickle and bipolar mistress — if you’re not 100% you’re a zero.

After spending all this time urging people that the future of security will not be delivered in the network proper, I have not focused enough attention on the advancements that are indeed creeping their way into the OSs toward a more secure future, as this inertia orthogonally reinforces my point.

Yes, I work for a company that provides network-centric security offerings.  Does this contradict the statement I just made?  I don’t think so, and neither did the folks from Microsoft.  There will always be a need to consolidate certain security functionality that does not fit within the context of the host — at least within an acceptable timeframe as the nature of security continues to evolve.  Read on.

The network will become transparent.  Why?

In this brave new world, mutually authenticated and encrypted network communications won’t be visible to the majority of the plumbing transporting them.  So, short of the specific shunts to the residual overlay solutions that will still be present on the network (in the form of controls that will not make their way to the host), the network isn’t going to add much security value at all.

The Jericho Effect

What I found interesting is that I’ve enjoyed similar discussions with the distinguished fellows of the Jericho Forum wherein, after we’ve debated the merits of WHAT you might call it, the notion of HOW "deperimeterization," "reperimeterization," (or my favorite) "radical externalization" happens weighs heavily on the evolution of security as we know it.

I have to admit that I’ve been a bit harsh on the Jericho boys before, but Paul Simmonds and I (or at least I did) came to the realization that my allergic reaction wasn’t to the concepts at hand, but rather the abrasive marketing of the message.  Live and learn.

Both sets of conversations basically see the pendulum effect of security in action in this oversimplification of what Jericho posits is the future of security and what Microsoft can deliver — today:

Take a host with a secured OS, connect it into any network using whatever means you find appropriate, without regard for having to think about whether you’re on the "inside" or "outside."  Communicate securely, access and exchange data in policy-defined "zones of trust" using open, secure, authenticated and encrypted protocols.

If you’re interested in the non-butchered more specific elements of the Jericho Forum’s "10 Commandments," see here.

What I wasn’t expecting in marrying these two classes of conversation is that this future of security is much closer, and notably much more possible, than I expected…with a Microsoft OS, no less.  In fact, I got a demonstration of it.  It may seem like no big deal to some of you, but the underlying architectural enhancements to Microsoft’s Vista and Longhorn OSs are a fantastic improvement on what we have had to put up with thus far.

One of the Microsoft guys fired up his laptop with a standard-issue off-the-shelf edition of Vista,  authenticated with his smartcard, transparently attached to the hotel’s open wireless network and then took me on a tour of some non-privileged internal Microsoft network resources.

Then he showed me some of the ad-hoc collaborative "People Near Me" peer2peer tools built into Vista — same sorts of functionality…transparent, collaborative and apparently quite secure (gasp!) all at the same time.

It was all mutually authenticated and encrypted and done so transparently to him.

He didn’t "do" anything; no VPN clients, no split-brain tunneling, no additional Active-X agents, no SSL or IPSec shims…it’s the integrated functionality provided by both IPv6 and IPSec in the NextGen IP stack present in Vista.

And in his words "it just works."   Yes it does.

He basically established connectivity, and his machine reached out to a reachable read-only DC (after authentication and with encryption), which allowed him to transparently resolve "internal" vs. "external" resources.  Yes, today the OS must still evolve to prevent exploitation of the OS itself, but this too shall get better over time.

No, it obviously doesn’t address what happens if you’re using a Mac or Linux, but the pressure will be on to provide the same sort of transparent, collaborative and secure functionality across those OS’s, too.

Allow me my generalizations — I know that security isn’t fixed and that we still have problems, but think of this as a half-glass full, willya!?

One of the other benefits I got from this conversation is the reminder that as Vista and Longhorn default to IPv6 natively (they can do both v4 and v6 dynamically), as enterprises upgrade, the network hardware and software (and hence the existing security architecture) must also be able to support IPv6 natively.  It’s not just the government pushing v6; large enterprises are now standardizing on it, too.

Here are some excellent links describing the NextGen IP stack in Vista, the native support for IPSec (goodbye, VPN market) and IPv6 support.

Funny how people keep talking about Google being a threat to Microsoft.  I think that the network giants like Cisco might have their hands full with Microsoft…look at how each of them are maneuvering.

/Hoff
{ Typing this on my Mac…staring @ a Vista Box I’m waiting to open to install within Parallels 😉 }

Yeah, I don’t get Symantec, either…HuaMantec?

May 27th, 2007 1 comment

Alan beat me to blogging about something I discussed @ our Interop bloggers’ dinner last week, namely the absolutely bewildering announcement made by Symantec:

Symantec Corp. and Huawei Technologies Co., Ltd. are forming a joint venture company to develop and distribute security and storage appliances to global telecommunications carriers and enterprises.

The joint venture will help operators and enterprises address challenges arising from maintaining IP networks and IT systems that support a growing number of connections. This requires balancing increasing performance and availability requirements with system security and data integrity.

Initially the offering will include security and storage appliances addressing those issues. The new company will be headquartered in Chengdu, China, with Huawei owning 51 percent of the joint venture and Symantec owning 49 percent.

Huawei will contribute its telecommunications storage and security businesses including its integrated supply chain and integrated product development management practices. Additionally, the new company will have access to Huawei’s intellectual property (IP) licenses, research and development capabilities.

Symantec will contribute some of its enterprise storage and security software licenses, working capital, and its management expertise into the new company. Symantec will also contribute US$150 million toward the joint venture’s growth and expansion.

The joint venture is expected to close late in the calendar year, pending required regulatory and governmental approvals.

What the hell, over!?  Perhaps they forgot about this announcement from almost exactly the same time last year, wherein ’twas quoted:

The announcement is evidence that Symantec is shifting its strategy away from being a "one stop shop" for security wares, and will focus on lucrative security management and services, said John Pescatore, a vice president at Gartner.

Symantec announced the changes internally yesterday, saying it was a "change in its investment strategy in the network and gateway security business." The news was accompanied by lay-offs affecting approximately 80 employees in the company’s SGS unit, a company spokeswoman said.

…after the 3Com buyout of the last venture between 3Com and Huawei, perhaps they’re going to pick up the pieces?  Are we going to see a yellow version of the M.I.A. 3Com M160 since they’re not doing anything with it?

Wow.

Perhaps the first thing they can do for the Chinese market is to fix the Symantec Autoupdate feature:

According to reports from the Chinese state media last night, an automatic update to the Chinese version of the Norton anti-virus software sent out last Friday identified two critical Windows XP files as malware and deleted them.

As a result, millions of Chinese PC users have had to re-install their operating systems or, if they have planned ahead (and are lucky), used the RESTORE function from the XP emergency recovery menu.

China Daily says that many companies are threatening to sue Symantec for large sums of money for lost working time. Symantec has reportedly made formal apology on Wednesday.

/Hoff


Categories: Uncategorized

My IPS (and FW, WAF, XML, DBF, URL, AV, AS) *IS* Bigger Than Yours Is…

May 23rd, 2007 No comments

Interop has been great thus far.  One of the most visible themes of this year’s show is (not surprisingly) the hyped emergence of 10Gb/s Ethernet.  10G isn’t new, but the market is now ripe with products supporting it: routers, switches, servers and, of course, security kit.

With this uptick in connectivity, as well as the corresponding float in compute power thanks to Mr. Moore AND some nifty evolution of very fast, low-latency, reasonably accurate deep packet inspection (including behavioral technology), the marketing wars have begun over who has the biggest, baddest toys on the block.

Whenever this discussion arises, without question the notion of "carrier class" gets bandied about in order to essentially qualify a product as being able to withstand enormous amounts of traffic load without imposing latency. 

One of the most compelling reasons for these big pieces of iron (which are ultimately a means to an end to run software, after all) is the service provider/carrier/mobile operator market, which certainly has its fair share of challenges in terms of not only scale and performance but also security.

I blogged a couple of weeks ago regarding the resurgence of what can be described as "clean pipes" wherein a service provider applies some technology that gets rid of the big lumps upstream of the customer premises in order to deliver more sanitary network transport.

What’s interesting about clean pipes is that much of what security providers talk about today is only a small amount of what is actually needed.  Security providers, most notably IPS vendors, anchor the entire strategy of clean pipes around "threat protection" that appears somewhat one-dimensional.

This normally means getting rid of what is generically referred to today as "malware," arresting worm propagation and quashing DoS/DDoS attacks.  It doesn’t speak at all to the need for things that aren’t purely "security" in nature, such as parental controls (URL filtering), anti-spam, P2P, etc.  It appears that, in the strictest definition, these aren’t threats?

So, this week we’ve seen the following announcements:

  • ISS announces their new appliance that offers 6Gb/s of IPS
  • McAfee announces their new appliance that offers 10Gb/s of IPS

The trumpets sounded and the heavens parted as these products were announced, touting threat protection via IPS at levels supposedly never approached before.  More appliances.  Lots of interfaces.  Big numbers.  Yet to be seen in action.  Also, to be clear, a 2U rackmount appliance that is not DC-powered and not NEBS-certified isn’t normally called "carrier-class."

I find these announcements interesting because even with our existing products (which run ISS and Sourcefire’s IDS/IPS software, by the way) we can deliver 8Gb/s of firewall and IPS today and have been able to for some time.

Lisa Vaas over @ eWeek just covered the ISS and McAfee announcements, and she was nice enough to talk about our products and positioning.  One super-critical difference is that along with high throughput and low latency you get to actually CHOOSE which IPS you want to run — ISS, Sourcefire and, shortly, Check Point’s IPS-1.

You can then combine that with firewall, AV, AS, URL filtering, web application and database firewalls and XML security gateways in the same chassis, to name a few other functions — all best-of-breed from top-tier players — and this is what we call Enterprise- and Provider-Class UTM, folks.

Holistically approaching threat management across the entire spectrum is really important, along with the speeds and feeds, and we’ve all seen what happens when more and more functionality is added to the feature stack — you turn a feature on and you pay for it performance-wise somewhere else.  It’s robbing Peter to pay Paul.  The processing requirements necessary to do IPS at 10G line rates are different when you add AV to the mix.

The next steps will be interesting and we’ll have to see how the switch and overlay vendors rev up to make their move to have the biggest on the block.  Hey, what ever did happen to that 3Com M160?

Then there’s that little company called Cisco…

{Ed: Oops.  I made a boo-boo and talked about some stuff I shouldn’t have.  You didn’t notice, did you?  Ah, the perils of the intersection of Corporate Blvd. and Personal Way!  Lesson learned. 😉 }