
Archive for June, 2007

A Poke in the eEye – First a DoS with Ross and now…?

June 11th, 2007 No comments

The drama continues?

Mitchell blogged last week about the release of the new eEye Preview Service, a "three-tiered security intelligence program" from eEye, and what he describes as an apparent change in focus for the company. 

With Ross Brown’s departure (and subsequent blog p0wnership) and Mitchell’s interesting observation of what appears to be a company migrating away from a product orientation toward a service orientation with continued focus on research, my eyebrows (sorry) were raised today when I perused one of my syndicated intelligence-gathering sites (OK, it’s a rumor mill, but it’s surprisingly reliable) and found this entry:

Is the end in sight for eEye?
Rumor has it more layoffs went down at eEye this week. Rumor has it the company fired most of their senior developers, most of the QA staff and demoted their VP of Engineering.
When: 6/8/2007
Company: eEye Digital Security
Severity: 70
Points: 170

Look, this is from f’d Company, so before we start singing hymnals, let’s take a breath and digest a few notes.

This was posted on the 8th. I can’t reasonably tell whether or not this round of RIF is the same one to which the InfoSecSellout(s) [they appear to have admitted — accidentally or not — in a post to be plural] referred on their blog.  I’d hate not to reference them as even a potential source, lest I be skewered in the court of Blogdom…

This appears to be the second round of layoffs in the last few months or so for eEye, and it indicates interesting changes in the VA landscape as well as the overall move to SaaS (Security as a Service) for companies looking for differentiation in a crowded room.

Of course, this could also be a realignment of the organization to focus on the new service offerings, so please take the prescribed dose of NaCl with all of this.  We all know how accurate the Internet is…It would be interesting to reconcile this against any substantiated confirmations of this activity.

I hate to see any company thrash.  If the rumors are true, I wish anyone that might have been let go a soft landing and a quick recovery.

This story leads into a forthcoming post on SaaS (Security as a Service.)

/Hoff

Categories: Uncategorized Tags:

Redux: Liability of Security Vulnerability Research…The End is Nigh!

June 10th, 2007 3 comments

I posited the potential risks of vulnerability research in this blog entry here.   Specifically I asked about reverse engineering and implications related to IP law/trademark/copyright, but the focus was ultimately on the liabilities of the researchers engaging in such activities.

Admittedly I’m not a lawyer and my understanding of some of the legal and ethical dynamics is amateur at best, but what was very interesting to me was the breadth of the replies, both online and offline, to my request for opinions on the matter. 

I was contacted by white, gray and blackhats regarding this meme and the results were divergent across legal, political and ideological lines.

KJH (Kelly Jackson Higgins — hey, Kel!) from Dark Reading recently posted an interesting collateral piece titled "Laws Threaten Security Researchers" in which she outlines the results of a CSI working group chartered to investigate and explore the implications that existing and pending legislation would have on vulnerability research and those who conduct it.  Folks like Jeremiah Grossman (who comments on this very story, here) and Billy Hoffman participate on this panel.

What is interesting is the contrast in commentary between how folks responded to my post versus these comments based upon the CSI working group’s findings:

In the report, some Web researchers say that even if they find a bug accidentally on a site, they are hesitant to disclose it to the Website’s owner for fear of prosecution. "This opinion grew stronger the more they learned during dialogue with working group members from the Department of Justice," the report says.

I believe we’ve all seen the results of some overly litigious responses from companies whose products or services were the subject of disclosures, for good or bad.

Ask someone like Dave Maynor if the pain is ultimately worth it.  Depending upon your disposition, your mileage may vary. 

That revelation is unnerving to Jeremiah Grossman, CTO and founder of WhiteHat Security and a member of the working group. "That means only people that are on the side of the consumer are being silenced for fear of prosecution," and not the bad guys.

"[Web] researchers are terrified about what they can and
can’t do, and whether they’ll face jail or fines," says Sara Peters,
CSI editor and author of the report. "Having the perspective of legal
people and law enforcement has been incredibly valuable. [And] this is
more complicated than we thought."

That sort of sentiment didn’t come across at all from the folks who responded to my blog, whether privately or publicly; most responses were just the opposite, stated with something of a sense of entitlement and immunity.   I expect to query those same folks again on the topic. 

Check this out:

The report discusses several methods of Web research, such as gathering information off-site about a Website or via social engineering; testing for cross-site scripting by sending HTML mail from the site to the researcher’s own Webmail account; purposely causing errors on the site; and conducting port scans and vulnerability scans.

Interestingly, DOJ representatives say that using just one of these methods might not be enough for a solid case against a [good or bad] hacker. It would take several of these activities, as well as evidence that the researcher tried to "cover his tracks," they say. And other factors — such as whether the researcher discloses a vulnerability, writes an exploit, or tries to sell the bug — may factor in as well, according to the report.

Full disclosure and to whom you disclose it and when could mean the difference between time in the spotlight or time in the pokey!

/Hoff

Gartner Solutions Expo a Good Gauge of the Security Industry?

June 9th, 2007 No comments

Mark Wood from nCircle blogged about his recent experience at the Gartner IT Security Summit in D.C.  Alan Shimel commented on Mark’s summary and both of them make an interesting argument about how Gartner operates as the overall gauge of the security industry.  Given that I was  also there, I thought I’d add some color to Mark’s commentary:

In 2006, there were two types of solutions that seemed to dominate the floor: network admission control and data leakage (with the old reliable identity and access management coming in a strong third). This year, the NAC vendors were almost all gone and there were many fewer data leakage vendors than I had expected. Nor was there any one type of solution that really seemed to dominate.

…that’s probably because both of those "markets" are becoming "features" (see here and here). Given how Gartner proselytizes to their clients, features and those who sell them need to spend their hype budgets wisely, and depending upon where one is on the hype cycle (and on what I say below), you’ll see fewer vendors participating when the $ per lead isn’t stellar.  Lots and lots of vendors in a single quadrant makes it difficult to differentiate.

 

The question is: What does this mean? On the one hand, I continue to be staggered by the number of new vendors in the security space. They seem to be like ants in the kitchen — acquire one and two more crawl out of the cracks in the window sill. It’s madness, I tell you! There were a good half a dozen names I had never seen before and I wonder if the number of companies that continue to pop up is good or bad for our industry. It’s certainly good that technological innovation continues, but I wonder about the financial status of these companies as funding for security startups continues to be more difficult to get. There sure is a lot of money that’s been poured into security and I’m not sure how investors are going to get it back.

Without waxing philosophical about the subconscious of the security market, let me offer a far more simple and unfortunate explanation:

Booth space at the Gartner show is among the most expensive on the planet, especially when you consider how absolutely miserable the scheduling of the expo hours is for the vendors.  They open the vendor expo at lunchtime and during track sessions, when everyone is usually eating, checking email, or attending the conference sessions!  It’s a purely economic issue, not some great temperature-taking of the industry.

I suppose one could argue that if the industry were flush with cash, everyone showing up here would indicate overall "health," but I really do think it’s not such a complex interdependency.  Gartner is a great place for a booth if you’re one of those giant, hamster wheel confab "We Do Everything" vendors like Verisign, IBM or BT.

I spoke to about 5 vendors who had people at the show but no booth.  Why?  Because they would get sucked dry on booth costs and given the exposure (unless you’re a major sponsor with speaking opportunities or a party sponsor) it’s just not worth it.  I spoke with Ted Julian prior to his guest Matasano blog summary, and we looked at each other shaking our heads.

While the folks visiting are usually decision makers, the foot traffic is limited by the highly compressed windows of availability.  The thing you really want to do is get some face time with the analysts and key customers, and stick and move. 

The best bang for the exposure buck @ Gartner is the party at the end of the second day.  Crossbeam was a platinum sponsor this year; we had a booth (facing a wall in the back,) had two speaking sessions and sponsored a party.  The booth position and visibility sucked for us (and others) while the party had folks lined out the door for food, booze and (believe it or not) temporary tattoos with grown men and women stripping off clothing to get inked.  Even Stiennon showed up to our party! 😉

On the other hand, it seemed that there was much less hysteria than in years past. No "we-can-make-every-one-of-your-compliance-problems-vanish-overnight" or "confidential-data-is-seeping-through-the-cracks-in-your-network-while-you-sleep-Run!-Run!" pitches this year. There seems to be more maturity in how the industry is addressing its buying audience and I find this fairly encouraging. Despite the number of companies, maybe the industry is slowing growing up after all. It’ll be interesting to see how this plays out.

Well, given the "Security 3.0" theme, which apparently trends overall toward mitigating and managing "risk," a bunch of technology box-sprinkling hype doesn’t work well in that arena.  I would also ask whether this really represents maturity or the "natural" byproduct of survival of the fittest — or of those with the biggest marketing budgets?  Maybe it’s the same thing?

/Hoff

Alright Kids…It’s a Security Throughput Math Test! Step Right Up!

June 9th, 2007 6 comments

I’ve got a little quiz for you.  I’ve asked this question 30 times over the last week and received an interesting set of answers.   One set of numbers represents "real world" numbers; the other is a set of "marketing" numbers.

Here’s the deal:

Take an appliance of your choice (let’s say a security appliance like an IPS) that has 10 x 1Gb/s Ethernet interfaces.

Connect five of those interfaces to the test rig that generates traffic and connect the remaining five interfaces to the receiver.

Let’s say that you send 5 Gb/s from the sender (Avalanche in the example above) across interfaces 1-5.

The traffic passes from the MACs up the stack and through the appliance under test, then out through interfaces 6-10, where the traffic is received by the receiver (Reflector in the example above).

So you’ve got 5 Gb/s of traffic into the DUT and 5 Gb/s of traffic out of the DUT with zero loss.

Your question is as follows:

Using whatever math you desire (Cisco or otherwise,) what is the throughput of the traffic going through the DUT?

I ask this question because of the recent sets of claims by certain vendors over the last few weeks.   Let’s not get into stacking/manipulating the test traffic patterns — I don’t want to cloud the issue.

{Ed: Let me give you some guidance on the two most widely applicable answers to this question that I have received thus far: 85% of those surveyed said that the answer was 5 Gb/s, while a smaller minority asserts that it’s 10 Gb/s. It comes down to how one measures "aggregate" throughput.  Please read the comments below regarding measurement specifics.}
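If it helps to see the two camps’ arithmetic side by side, here’s a minimal sketch — purely my own illustration of the two measurement conventions described above, not an endorsement of either answer and not anyone’s official test math:

```python
# Illustrative only: the two ways respondents computed "throughput" for the
# scenario above (5 Gb/s offered on interfaces 1-5, 5 Gb/s received on
# interfaces 6-10, zero loss through the DUT).

ingress_gbps = 5.0  # traffic sent into the DUT
egress_gbps = 5.0   # traffic received out of the DUT

# Majority view: throughput is the rate of traffic forwarded through the
# device, i.e. what makes it out the other side without loss.
forwarded = min(ingress_gbps, egress_gbps)   # 5 Gb/s

# Minority view: sum the traffic crossing all of the DUT's interfaces,
# counting both the inbound and the outbound sides.
aggregate = ingress_gbps + egress_gbps       # 10 Gb/s

print(f"forwarded: {forwarded} Gb/s, aggregate: {aggregate} Gb/s")
```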

So, what’s your answer?  Please feel free to ‘splain your logic.  I will comment with my response once comments show up so as not to color the results.

/Hoff

Categories: General Rants & Raves Tags:

BeanSec! 10 – June 20th – 6PM to ?

June 7th, 2007 No comments

Yo!  BeanSec! 10 is upon us.

BeanSec! is an informal meetup of information security professionals, researchers and academics in the Greater Boston area that meets the third Wednesday of each month.

Come get your grub on.  Lots of good people show up.  Really.

Unlike other meetings, you will not be expected to pay dues, “join up”, present a zero-day exploit, or defend your dissertation to attend. Map to the Enormous Room in Cambridge.

Enormous Room: 567 Mass Ave, Cambridge 02139


ALERT!:  I apologize, as once again I will not be able to attend.  I am traveling tomorrow and won’t be able to make it.  If you’re looking for free food/booze like when I’m there…um, well…next time? ;(

Categories: BeanSec! Tags:

BigFix Comes Out Swinging — With a Gun-Toting Vulcan Marine Hottie…

June 5th, 2007 12 comments

No, this is not an ad for BigFix. It is about an ad for BigFix, however.  If you’re at Gartner, methinks Pescatore might describe what I’m highlighting as "Security Marketing 3.0" 😉

Anywho, I was reading the USA Today this morning, which was dutifully delivered to my hotel room by the fine folks at Marriott, and as I pawed through the business section, I hit page 8B.

Page 8B features a full-page black-and-white ad from BigFix.  No big deal, you say; there are lots of ads from IT companies in newspapers.

Sure, but generally they’re not from IT Security companies, they’re usually not from companies this size, they’re usually not a full page, they usually don’t feature big-breasted, gun-toting, Vulcan, Marine recon soldiers, and they usually don’t say things like this:

[Overlaid on top of picture of said big-breasted, gun-toting, Vulcan, Marine recon person…]

Contrary to the impotent baloney from McAfee/Symantec/et al, it doesn’t take weeks and an army of servers to secure all your computers.  You just need one can of BIGFIX whup-ass.

What can you do from one console with a single, policy-driven BIGFIX agent? How about continuous discovering, assessing, remediating, optimizing and enforcing the health/security of hundreds of thousands of computers in minutes?  Yup.  Minutes.

Windows, Vista, Linux/Unix and mac systems.  Nobody else can do this.  And we’re making sure everyone else is more than a little embarrassed about it.  Ooh-rah!

Interesting…picture, text, messaging…  It got my attention.  I wonder if it will get the attention of anyone else — or more importantly the right set of people?  Was it just in this edition or countrywide?  Any other papers?

Amrit, you have anything to do with this?

/Hoff


Categories: Marketing Tags:

Profiling Data At the Network Layer and Controlling Its Movement Is a Bad Thing?

June 3rd, 2007 2 comments

I’ve been watching what looks like a car crash in slow motion, and for some strange reason I share some affinity and/or responsibility for what is unfolding in the debate between Rory and Rob.

What motivated me to comment on this ongoing exploration of data-centric security was Rory’s last post, in which he appears to refer to some points I raised in my original post but remains bent on the idea that the crux of my concept was tied to DRM:

So .. am I anti-security? Nope I’m extremely pro-security. My feeling is however that the best way to implement security is in ways which it’s invisable to users. Every time you make ordinary business people think about security (eg, usernames/passwords) they try their darndest to bypass those requirements.

That’s fine and I agree.  The concept of ADAPT is completely transparent to "users."  This doesn’t obviate the fact that someone will have to be responsible for characterizing what is important and relevant to the business in terms of "assets/data," attaching weight/value to them, and setting some policies regarding how to mitigate impact and ultimately risk.

Personally I’m a great fan of network segregation and defence in depth at the network layer. I think that devices like the ones crossbeam produce are very useful in coming up with risk profiles, on a network by network basis rather than a data basis and managing traffic in that way. The reason for this is that then the segregation and protections can be applied without the intervention of end-users and without them (hopefully) having to know about what security is in place.

So I think you’re still missing my point.  The example I gave of the X-Series using ADAPT takes a combination of best-of-breed security software components such as FW, IDP, WAF, XML, AV, etc. and provides you with segregation as you describe.  HOWEVER, the (r)evolutionary delta here is that ADAPT’s profiling of content, driven by policies that are invisible to the user at the network layer, allows one to make security decisions on content in context and to control how data moves.

So to use the phrase that I’ve seen in other blogs on this subject, I think that the "zones of trust" are a great idea, but the zone’s shouldn’t be based on the data that flows over them, but the user/machine that are used. It’s the idea of tagging all that data with the right tags and controlling it’s flow that bugs me.

…and thus it’s obvious that I completely and utterly disagree with this statement.  Without tying some sort of identity (pseudonymity) to the user/machine AND combining it with identification of the channels (applications) and the content (payload), you simply cannot make an informed decision as to the legitimacy of the movement/delivery of this data.
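To make that concrete, here’s a minimal sketch of the sort of decision I’m describing; the function, the policy tuple and the names are hypothetical illustrations of the idea, not any product’s API:

```python
# Hypothetical: an informed disposition needs identity (user/machine),
# channel (application/service) and content (payload classification) together.
def allow_transfer(identity: str, application: str, classification: str,
                   policy: set[tuple[str, str, str]]) -> bool:
    """Return True only when this (who, how, what) combination is permitted."""
    return (identity, application, classification) in policy

# Example policy: HR staff may move HIPAA-classified data over the HR web app.
policy = {("hr_user", "hr_webapp", "HIPAA")}

print(allow_transfer("hr_user", "hr_webapp", "HIPAA", policy))  # True
print(allow_transfer("guest", "webmail", "HIPAA", policy))      # False: wrong identity/channel
```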

I used the case of being able to utilize client-side tagging as an extension to ADAPT, NOT as a dependency.  Go back and re-read the post; it’s a network-based transparent tagging process that attaches the tag to the traffic as it moves around the network.  I don’t understand why that would bug you?

So that’s where my points in the previous post came from, and I still reckon their correct. Data tagging and parsing relies on the existance of standards and their uptake in the first instance and then users *actually using them* and personally I think that’s not going to happen in general companies and therefore is not the best place to be focusing security effort…

Please explain this to me?  What standards need to exist in order to tag data — unless of course you’re talking about the heterogeneous exchange and integration of tagging data at the client side across platforms?  Not so if you do it at the network layer WITHIN the context of the architecture I outlined; the clients, switches, routers, etc. don’t need to know a thing about the process as it’s done transparently.

I wasn’t arguing that this is the end-all-be-all of data-centric security, but it’s worth exploring without deadweighting it to the negative baggage of DRM and the existing DLP/Extrusion Prevention technologies and methodologies that currently exist.

ADAPT is doable and real; stay tuned.

/Hoff

None of you Bastadges Use Trackbacks Anymore!?

June 1st, 2007 11 comments

A personal plea…

I spend a decent amount of time trying to engage folks in discussion.  I blog and expect that there will be those who agree and those who disagree with my comments.  I am truly interested in seeing both perspectives in your responses.

In order to do that, however, I have to know that you’ve written something in response.  Unless you leave a comment I can’t tell that, especially if you’ve authored a response on your blog and don’t leave a trackback.

Really, how hard is that, exactly?

I do that with every post I reference.  Can I ask you a favor and do the same for me?

Your opinion counts.  Make it so, please.

/Hoff

Categories: General Rants & Raves Tags:

For Data to Survive, It Must ADAPT…

June 1st, 2007 2 comments


Now that I’ve annoyed you by suggesting that network security will over time become irrelevant given lost visibility due to advances in OS protocol transport and operation, allow me to give you another nudge towards the edge and further reinforce my theories with some additional, practical data-centric security perspectives.

If any form of network-centric security solution is to succeed in adding value over time, the mechanics of applying policy and effecting disposition on flows as they traverse the network must be made on content in context.  That means we must get to a point where we can make “security” decisions based upon information and its “value” and classification as it moves about.

It’s not good enough to make decisions on how flows/data should be characterized and acted on using only criteria focused on the 5-tuple (header), signature-driven profiling, or even behavioral analysis that doesn’t characterize the content in the context of where it’s coming from, where it’s going and who (machine, virtual machine and “user”) or what (application, service) intends to access and consume it.

In the best of worlds, we’d like to be able to classify data before it makes its way through the IP stack and enters the network, and use this metadata as an attached descriptor of the ‘type’ of content the data represents.  We could do this as the data is created by applications (thick or thin, rich or basic), either using the application itself or by using an agent (client-side) that profiles the data prior to storage or transmission.

Since I’m on my Jericho Forum kick lately, here’s how they describe how data ought to be controlled:

Access to data should be controlled by security attributes of the data itself.

  • Attributes can be held within the data (DRM/Metadata) or could be a separate system.
  • Access / security could be implemented by encryption.
  • Some data may have “public, non-confidential” attributes.
  • Access and access rights have a temporal component.

You would probably need client-side software to provide this functionality.  As an example, we do this today with email compliance solutions that have primitive versions of this sort of capability, forcing users to declare the classification of an email before they can hit the send button, or with the document info that can be created when one authors a Word document.
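As a thought experiment, here’s roughly what self-describing data carrying those Jericho-style attributes might look like; the field names and taxonomy are my own illustration, not a standard or a product format:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class ClassifiedData:
    """A piece of data carrying its own security attributes (metadata)."""
    payload: bytes
    classification: str                                   # e.g. "confidential" or "public"
    taxonomies: list[str] = field(default_factory=list)   # e.g. ["HIPAA"]
    expires: datetime | None = None                       # temporal component of access rights

    def is_accessible(self, now: datetime) -> bool:
        # "Public, non-confidential" data is always readable; otherwise the
        # temporal attribute must still be valid.
        if self.classification == "public":
            return True
        return self.expires is None or now < self.expires

doc = ClassifiedData(b"...", "confidential", ["HIPAA"],
                     expires=datetime.now() + timedelta(days=30))
print(doc.is_accessible(datetime.now()))  # True until the attribute expires
```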

There are a bunch of ERM/DRM solutions in play today that are bandied about and sold as “compliance” solutions, but their value goes much deeper than that.  IP leakage/extrusion prevention systems (with or without client-side tie-ins) try to do similar things, too.

Ideally, this metadata would be used as a fixed descriptor of the content that permanently attaches itself and follows that content around so it can be used to decide what content should be “routed” based upon policy.

If we’re not able to use this file-oriented static metadata, we’d like then for the “network” (or something in/on it) to be able to dynamically profile content at wirespeed and characterize the data as it moves around the network from origin to destination in the same way.

So, this is where Applied Data & Application Policy Tagging (ADAPT) comes in.  ADAPT is an approach that can make use of existing and new technology to profile and characterize content (using content matching, signatures, regular expressions and behavioral analysis in hardware or software) and then apply policy-driven information “routing” as flows traverse the network, either by using 802.1Q q-in-q VLAN tags (an open approach) or by applying a proprietary ADAPT tag header as a descriptor to each flow as it moves around the network.

Think of it like a VLAN tag that describes the data within the packet/flow, defined however you see fit:

The ADAPT tag/VLAN is user-defined and can use any taxonomy that best suits the types of content that are interesting; one might use an asset classification such as “confidential” or taxonomies such as “HIPAA” or “PCI” to describe what is contained in the flows.  One could combine and/or stack the tags, too.  The tag maps to one of these arbitrary categories, which could be fed by interpreting metadata attached to the data itself (if in file form) or dynamically by on-the-fly profiling at the network level.

As data moves across the network and across what we call boundaries (zones) of trust, the policy tags are parsed and disposition is effected based upon the language governing the rules.  If you use the “open” version with q-in-q VLANs, you have something on the order of 4096 VLAN IDs to choose from…more than enough to accommodate most asset classifications and still leave room for normal VLAN usage.  Enforcing the ACLs can be done very quickly by pretty much ANY modern switch that supports q-in-q.
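Here’s a rough sketch of what the “open” q-in-q flavor might look like in practice; the taxonomy, the VLAN ID assignments and the helper function are hypothetical illustrations, not Crossbeam’s implementation:

```python
# Hypothetical mapping of an ADAPT content taxonomy onto outer (service) VLAN
# IDs for the q-in-q approach; any taxonomy that suits the business would do.
ADAPT_TAGS = {
    "public":       100,
    "confidential": 200,
    "HIPAA":        300,
    "PCI":          400,
}

def adapt_tag_for(classifications: set[str]) -> int:
    """Pick the outer VLAN ID for a flow based on its detected classifications.
    By convention here, the highest-numbered (most restrictive) tag wins."""
    ids = [ADAPT_TAGS[c] for c in classifications if c in ADAPT_TAGS]
    return max(ids) if ids else ADAPT_TAGS["public"]

# A flow profiled as containing both confidential and HIPAA content gets the
# HIPAA outer tag; switches then enforce ACLs on that VLAN ID as usual.
print(adapt_tag_for({"confidential", "HIPAA"}))  # 300
```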

Just like an ACL for IP addresses or VLAN policies, ADAPT does the same thing for content routing, but uses VLAN IDs (or the proprietary ADAPT header) to enforce it.

To enable this sort of functionality, either every switch/router in the network would need to be q-in-q capable (which most switches are these days) or ADAPT-enabled (which would be difficult, since you’d need every network vendor to support the protocol).  Alternatively, you could use an overlay UTM security services switch sitting on top of the network plumbing, through which all traffic moving from one zone to another would be subject to the ADAPT policy, since each flow has to go through said device.

Since the only device that needs to be ADAPT-aware is this UTM security services switch (see the example below), you can let the network do what it does best and use this solution to enforce the policy for you across these boundary transitions.  Said UTM security services switch needs an extremely high-speed content security engine that is able to characterize the data at wirespeed and add a tag to the frame as it moves through the switching fabric, before the frame pops back out onto the network.

Clearly this switch would have to have coverage across every network segment.  It wouldn’t work well in virtualized server environments or any topology where zoned traffic is not subject to transit through the UTM switch.

I’m going to be self-serving here and demonstrate this “theoretical” solution using a Crossbeam X80 UTM security services switch plumbed into a very fast, reliable, and resilient L2/L3 Cisco infrastructure.  It just so happens to have a wire-speed content security engine installed in it.  The reason the X-Series can do this is because once the flow enters its switching fabric, I own the ultimate packet/frame/cell format and can prepend any header functionality I like onto the structure to determine how it gets “routed.”

Take the example below where the X80 is connected to the layer-3 switches using 802.1q VLAN trunked interfaces.  I’ve made this an intentionally simple network using VLANs and L3 routing; you could envision a much more complex segmentation and routing environment, obviously.

This network is chopped up into four VLAN segments:

  1. General Clients (VLAN A)
  2. Finance & Accounting Clients (VLAN B)
  3. Financial Servers (VLAN C)
  4. HR Servers (VLAN D)

Each of the clients/servers in the respective VLANs default-routes to a firewall cluster IP address proffered by the firewall application modules providing service in the X80.

Thus, to get from one VLAN to another, traffic must pass through the X80 and be profiled by the content security engine and whatever additional UTM services are installed in the chassis (such as firewall, IDP, AV, etc.).

Let’s say then that a user in VLAN A (General Clients) attempts to access one or more resources in VLAN D (HR Servers).

Using solely IP addresses and/or L2 VLANs, let’s say the firewall and IPS policies allow this behavior as the clients in that VLAN have a legitimate need to access the HR Intranet server.  However, let’s say that this user tries to access data that exists on the HR Intranet server but contains personally identifiable information that falls under the governance/compliance mandates of HIPAA.

Let us further suggest that the ADAPT policy states the following:

Rule   Source              Destination         ADAPT Descriptor       Action
=============================================================================
1      VLAN A (IP.1.1)     VLAN D (IP.3.1)     HIPAA, Confidential    Deny
2      VLAN B (IP.2.1)     VLAN C (IP.4.1)     PCI                    Allow

Using rule 1 above, as the client makes the request, he transits from VLAN A to VLAN D.  The reply containing the requested information is profiled by the content security engine, which is able to characterize the data as containing information that matches our definition of either “HIPAA” or “Confidential” (purely arbitrary for the sake of this example).

This could be done by reading the metadata if it exists as an attachment to the content’s file structure, in cooperation with an extrusion prevention application running in the chassis, or, in the case of ad-hoc web-based applications/services, dynamically.

According to the ADAPT policy above, this data would then either be silently dropped or, depending upon what “deny” means, the user would perhaps be redirected to a webpage that informs them of a policy violation.

Rule 2 above would allow authorized IPs in VLAN B to access PCI-classified data in VLAN C.
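For illustration, here’s a minimal sketch of how a policy engine might evaluate the rule table above; the data structures are my own shorthand, not the X-Series policy language:

```python
# Hypothetical evaluation of the ADAPT policy table above: first matching
# rule wins; anything unmatched fails closed.
RULES = [
    # (source VLAN, destination VLAN, descriptors that trigger the rule, action)
    ("VLAN A", "VLAN D", {"HIPAA", "Confidential"}, "deny"),
    ("VLAN B", "VLAN C", {"PCI"}, "allow"),
]

def disposition(src_vlan: str, dst_vlan: str, detected: set[str]) -> str:
    for rule_src, rule_dst, descriptors, action in RULES:
        # "HIPAA, Confidential" is treated as an OR: any overlap triggers the rule.
        if src_vlan == rule_src and dst_vlan == rule_dst and detected & descriptors:
            return action
    return "deny"

# The HR example: the VLAN A -> VLAN D session whose reply is profiled as HIPAA.
print(disposition("VLAN A", "VLAN D", {"HIPAA"}))  # deny
print(disposition("VLAN B", "VLAN C", {"PCI"}))    # allow
```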

You can imagine how one could integrate IAM and extend the policies to include pseudonymity/identity as a function of access, also.  Or, one could profile the requesting application (browser, for example) to define whether or not this is an authorized application.  You could extend the actions to lots of stuff, too.

In fact, I alluded to it in the first paragraph, but if we back up a step and look at where the consolidation of functions/services is being driven by virtualization, one could also use the principles of ADAPT to extend the ACL functionality that exists in switching environments to control/segment/zone access to/from virtual machines (VMs) of different asset/data/classification/security zones.

What this translates to is a workflow/policy instantiation that would use the same logic to prevent VM1 from communicating with VM2 if there were a “zone” mismatch; as we add data classification in context, you could have various levels of granularity that define access based not only on the VM but on the VM and the data trafficked by it.
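Extending the same logic to inter-VM traffic might look something like this rough sketch; the VM names, zones and helper are hypothetical, not a shipping feature:

```python
# Hypothetical: the same ADAPT-style check applied to inter-VM traffic.
VM_ZONES = {"vm1": "general", "vm2": "hr"}        # zone assigned to each virtual machine
ZONE_POLICY = {("general", "hr"): {"public"}}     # classifications allowed to cross zones

def vm_traffic_allowed(src_vm: str, dst_vm: str, classification: str) -> bool:
    src_zone, dst_zone = VM_ZONES[src_vm], VM_ZONES[dst_vm]
    if src_zone == dst_zone:
        return True                                # same zone: no mismatch to enforce
    allowed = ZONE_POLICY.get((src_zone, dst_zone), set())
    return classification in allowed               # cross-zone: data class must be permitted

print(vm_traffic_allowed("vm1", "vm2", "public"))  # True
print(vm_traffic_allowed("vm1", "vm2", "HIPAA"))   # False: zone mismatch for this data
```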

Furthermore, assuming this service was deployed internally and you could establish a trusted CA with certs that would support transparent MITM SSL decrypts, you could do this (with appropriate scale) with encrypted traffic also.

This is data-centric security that uses the network when needed, the host when it can and the notion of both static and dynamic network-borne data classification to enforce policy in real-time.

/Hoff

[Comments/blogs on this entry you might be interested in but that have no trackbacks set:

MCWResearch Blog

Rob Newby’s Blog

Alex Hutton’s Blog

Security Retentive Blog]

Off to Gartner IT Security Summit in D.C. next week.

June 1st, 2007 No comments

Off to D.C. for this coming week’s Gartner IT Security Summit at the Marriott Wardman Park Hotel.

I will be there from Sunday – Wednesday. 

Speaking on Tuesday from 4:00-4:30 regarding the Generation Crossbeam Security Ecosystem.

You know the drill…ping me if you’re going to be there.  Evil shall ensue.  Please bring bail money.

/Hoff

Categories: Travel Tags: