Archive for June, 2008

The Final Frontier(?): Virtualizing the DMZ…

June 30th, 2008 5 comments

Alessandro and I were chatting today regarding VMware’s latest best-practices document, titled "DMZ Virtualization with VMware Infrastructure."

This is a nine-page overview that does a reasonably good job of laying out many of the architectural/topological options available when thinking about taking the steps toward virtualizing what some consider the "final frontier" in the proving grounds of production-level virtualization — the (Internet-facing) DMZ.

The whitepaper was timely because I was just finishing up my presentation for Blackhat and was busy creating a similar set of high-level architectural examples to use in it.  I decided to reference the ones in the document because they quite elegantly represent the stepping-off points that many folks would use in their virtual DMZ adventures.

…and I think it will be an adventure punctuated perhaps by evolutionary steps as documented in the options presented in the whitepaper.

As I read through the document, I had to remind myself of the fact that this was intended to be a high-level document and not designed to cover the hairy edges of network and security design. 

The whitepaper highlighted some of the reasonable trade-offs in complexity, resiliency, management, functionality, operational expertise, and cost, but given where my head and focus are today, I have to admit that the document's treatment of security still gnawed at me; it's too thin for my liking.

I’ve hinted at why in my original Four Horsemen slide, and I’m going to be speaking for 75 minutes on the topic at Blackhat, so come get your VirtSec boogie on there for a full explanation…

Alessandro got dinged in a comment on his blog for a statement in which he suggested that partially-collapsed as well as fully-collapsed DMZ’s with virtual separation of trust zones "…should be avoided at all costs because they imply the inviolability of the hypervisor (at any level: from the virtual networking to the kernel) something that nor VMware neither any other virtualization vendor can grant."

This appears contradictory to his initial assessment of DMZ virtualization wherein he stated that "…there [is] nothing bad in virtualizing the DMZ as long as we are fully aware of the risks."  In a way, I think I understand exactly where Alessandro is coming from, even if I don’t completely agree with him (or at least I partially do…)

This really paints an altogether unfortunate and yet accurate picture of the circular arguments folks engage in when they combine the following topics in a single argument:

  • Securing virtualization
  • Virtualizing security
  • Security via virtualization

In the same way that we trust our operating system vendors who provide us with the operational underpinnings of our datacenters with the hope that they will approach a reasonable level of "security" in their products, we are basically at the same point with our virtualization (OS) platform providers.

Hope is not a strategy, but it seems we’ve at least accepted it for the time being… ;(

Sure, there are new attack vectors and operational risks, but writing the whole concept off based solely on the one-dimensional data point of hypervisor infallibility, without really being able to quantify whether you are more or less at risk, seems a little odd to me.

If you’re truly assessing risk in the potential virtualization of your DMZ, you’ll take the operational/architectural guidelines as well as the subjective business impacts into consideration.  Simply stating that one should or should not virtualize a DMZ without a holistic approach is myopic.

To circle back on the topic, the choice of whether — and how — to virtualize your DMZ is really starting to gain traction.  I think the whitepaper took a decent first-pass stab at exploring how one might approach it, but the devil’s in the details — or at least the devil’s 4 horsemen are 😉


Blackhat 2008: Four Horsemen Of the Virtualization Apocalypse – Done!

June 30th, 2008 5 comments

Today was the deadline for submission for all selected Blackhat presentations. 

I’m giving a 75 minute talk titled "The Four Horsemen of the Virtualization Apocalypse" which is based upon my original blog posting here.

I dutifully uploaded my presentation to Ping and the gang at Blackhat HQ today (on time, that’s a first!) with a sigh of relief and accomplishment.  I’ve done hundreds of presentations over the years, but this one is special.

I have to say that I poured my heart and soul into this presentation.  I went all "Zen and the Art of Presentation" for most of it and I think that combined with the dozens of hours I put into the content, the diagrams and animations turned out purdy. 😉

Once BH is done, I’ll be posting it online with my narrative as I have my other presentations.

This cathartic little post is just the final little exhale of this project prior to numerous advance rehearsals, the first of which I will be inflicting upon my unwitting guests (75+ of them thus far) at my July 5th Pig Roast & Mojito festival in honor of another notch in the annual belt I’ve managed to stay alive on this hunk o’ rock.

Speaking of which, if you’re in the MA area and want an amazing Cuban or southern-style pulled pork feast with all the trimmings, drop me a line; everyone’s welcome…many of the BeanSec’rs are coming, you should too!

Happy 4th/5th!


VirtSec Not A Market!? Fugghetaboutit!

June 23rd, 2008 11 comments

Thanks to Alan Shimel and his pre-Blackhat Security Bloggers Network commentary, a bunch of interesting folks are commenting on the topic of virtualization security (VirtSec) which is the focus of my preso at Blackhat this year.

Mike Rothman did his part this morning by writing up a thought-provoking piece opining on the lack of a near-term market for VirtSec solutions:

So I’m not going to talk about technical stuff. Yet, I do feel compelled to draw the conclusion that despite the dangers, it doesn’t matter. All the folks that are trying to make VirtSec into a market are basically just pushing on a rope.

That’s right. No matter how hard you push (or how many blog postings you write), you are not going to make VirtSec into a market for at least 2 years. And that is being pretty optimistic. So for all those VCs that are thinking they’ve jumped onto the next big security opportunity, I hope your partnership will allow you to be patient.

Again, it’s not because the risks of virtualization aren’t real. If guys like Hoff and Thomas say they are, then I tend to believe them. But Mr. Market doesn’t care what smart guys say. Mr. Market cares about budget cycles and priorities and political affiliations, and none of these lead me to believe that VirtSec revenues are going to accelerate anytime soon.

Firstly, almost all markets take a couple of years to fully develop and mature, and VirtSec is no different.  Nobody said that VirtSec will violate the laws of physics, but it’s also a very hot topic, and consumers/adopters are recognizing that security is a piece of the puzzle that is missing.

In many cases this is because virtualization platform providers have simply marketed virtualization as being "as secure" or "more secure" than their physical counterparts.  This, combined with the rapid adoption of virtualization, has caused a knee-jerk reaction.

By the way, this is completely par for the course in our industry.  If you act surprised, you deserve an Emmy 😉

Secondly, and most importantly to me, Mike did me a bit of a disservice by intimating that my pushing of the issues regarding VirtSec is focused solely on the technical.  Sadly, that’s far off base from my "fair and balanced" perspective on the matter, because along with the technical issues, I constantly drum home the following:

"Nobody Puts Baby In the Corner"

Painting only one of the legs of the stool as my sole argument isn’t accurate and doesn’t portray what I have been talking about for some time — and agree with Mike about — that these challenges are more than one-dimensional.

The reality is that Mike is right — the budget, priority and politics will bracket VirtSec’s adoption, but only if you think of VirtSec as a technical problem.

Is VirtSec a market?  My opinion: it’s an instantiation of technology, practice and operational adjustment brought forth as a derivative of a disruptive technology and prevailing market conditions. 

Does that mean it’s a feature as opposed to a market?  No.  In my opinion, it’s an evolution of an existing market, rife with existing solutions and punctuated by emerging ones.

The next stop is how "security" will evolve from VirtSec to CloudSec…


Categories: Virtualization

New Fortinet Patents May Spell Nasty Trouble For UTM Vendors, Virtualization Vendors, App. Delivery Vendors, Routing/Switching Vendors…

June 23rd, 2008 11 comments

Check out the update below…

Were I in the UTM business, I’d be engaging the reality distortion field and speed-dialing my patent attorneys at this point.

Fortinet has recently been granted some very interesting patents by the PTO.

Integrated network and application security, together with virtualization technologies, offer a powerful and synergistic approach for defending against an increasingly dangerous cyber-criminal environment. In combination with its extensive patent-pending applications and patents already granted, Fortinet’s newest patents address critical technologies that enable comprehensive network protection:

  • U.S. Patent #7,333,430 – Systems and Methods for Passing Network Traffic Data – directed to efficiently processing network traffic data to facilitate policy enforcement, including content scanning, source/destination verification, virus scanning, content detection and intrusion detection;

  • U.S. Patent #7,340,535 – System and Method for Controlling Routing in a Virtual Router System – directed to controlling the routing of network data, and providing efficient configuration of routing functionality and optimized use of available resources by applying functions to data packets in a virtual environment;

  • U.S. Patent #7,376,125 – Service Processing Switch – directed to providing IP services and IP packet processing in a virtual router-based system using IP flow caches, virtual routing engines, virtual services engines and advanced security engines;

  • U.S. Patent #7,389,358 – Distributed Virtual System to Support Managed, Network-based Services – directed to a virtual routing system, which includes processing elements to manage and optimize IP traffic, useful for service provider switching functions at Internet point-of-presence (POP) locations.

These patents could have a potentially profound impact on vendors who offer "integrated security" by allowing for virtualized application of network security policy.  These patents could easily be enforced outside of typically-defined UTM offerings as well.

I’m quite certain Cisco and Juniper are taking note, as should anyone in the business of offering virtualized routing/switching combined with security — that’s certainly a broad swath, eh?

On a wider note, I’ve actually been quite impressed with the IP portfolio that Fortinet has been assembling over the last couple of years.  If you’ve been paying attention, you will notice (for example) that they have scooped up much of the remaining CoSine IP as well as recently acquired IPLocks’ database security portfolio.

If I were in their shoes, the next thing I’d look for (and would have a while ago) is a Web Application Firewall/Proxy vendor to scoop up…

I trust you can figure out why…why not hazard a guess in the comments?


Updated:  It occurred to me that this may be much more far-reaching than just UTM vendors; basically, this could affect folks like Crossbeam, Check Point, StillSecure, Cisco, Juniper, Secure Computing, F5…anyone who sells a product that mixes the application of security policy with virtualized routing/switching capabilities…

How about those ASA’s or FWSMs?  How about those load balancers with VIPs?

Come to think of it, what of VMware?  How about the fact that by combining virtual networking with VMsafe, you’ve basically got what amounts to coverage by the first two patents:

U.S. Patent #7,333,430 – Systems and Methods for Passing Network Traffic Data – directed to efficiently processing network traffic data to facilitate policy enforcement, including content scanning, source/destination verification, virus scanning, content detection and intrusion detection;

U.S. Patent #7,340,535 – System and Method for Controlling Routing in a Virtual Router System – directed to controlling the routing of network data, and providing efficient configuration of routing functionality and optimized use of available resources by applying functions to data packets in a virtual environment;


Now, I’m not a lawyer, I just play one on teh Interwebs.

Visualization Through Virtualization…

June 23rd, 2008 2 comments

I’ve spent quite a bit of time investigating emerging technology solutions for virtualization security (VirtSec) lately.  I’ve made mention of an idea that conceptually didn’t gel until this last week.

I was speaking at TechTarget’s Financial Information Security Decisions show in New York and was paired up in the network workshop with Joel Snyder of Opus One.

Joel was presenting his 5 myths of Information Security, and one of the myths was (paraphrasing) that Intrusion Detection solutions don’t detect intrusions.

What Joel went on to suggest is that what IDS solutions actually do is provide visibility across the network; determining what represents an actual "intrusion" is a contextual argument that goes to the efficacy and correlation capabilities of the platform(s).

This got me thinking about some of the IDP (intrusion detection and prevention) solutions from emerging vendors in the virtualization space.

Something rather profound but obvious dawned on me.

Given the integration of these "security" solutions with the management platforms of the virtualization platform providers, AND the operational shift in who manages them (see here), these aren’t really virtualization security solutions at all; they are actually virtualization visualization solutions.

Virtualization management platforms provide configuration and operational telemetry regarding the virtual environment to these solutions, which lets them do what most HostSec or NetSec solutions have been unable to do in the past: gain context regarding how the infrastructure they are protecting is actually configured.

HostSec and NetSec solutions have no context outside of the host they are protecting or the network segment/IP address to which they are connected, respectively.  Not so with VirtSec solutions.
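To make the "context gain" concrete, here’s a toy sketch (all names, fields, and IPs are hypothetical, not drawn from any vendor’s product): take a plain IP-based NetSec alert and enrich it with inventory telemetry of the kind a virtualization management platform can supply.

```python
# Hypothetical sketch: enriching a NetSec alert with virtualization-layer context.
# A traditional network IDS only knows an IP address; the virtualization
# management platform knows which VM that is, which host and vSwitch it sits
# on, and what trust zone it belongs to.

VM_INVENTORY = {
    "10.0.3.17": {"vm": "web-dmz-01", "host": "esx-04",
                  "vswitch": "vSwitch-DMZ", "zone": "DMZ"},
    "10.0.9.22": {"vm": "db-core-02", "host": "esx-07",
                  "vswitch": "vSwitch-Trusted", "zone": "internal"},
}

def enrich_alert(alert, inventory):
    """Attach VM/host/vSwitch context to a plain IP-based alert."""
    ctx = inventory.get(alert["dst_ip"], {})
    return {**alert, "virt_context": ctx}

alert = {"sig": "SQL injection attempt",
         "src_ip": "198.51.100.9", "dst_ip": "10.0.3.17"}
enriched = enrich_alert(alert, VM_INVENTORY)
print(enriched["virt_context"]["vm"])       # which VM was hit
print(enriched["virt_context"]["vswitch"])  # which virtual switch it sits on
```

The join itself is trivial; the point is that the virtualization layer is the first place this inventory has been available to security tooling as authoritative, continuously-updated telemetry rather than a stale spreadsheet.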

That’s pretty neat when you think of it.  Even though we’re substantially handicapped as to what these solutions can *do* with this capability today (see here), integrating it can dramatically and positively affect the way in which "security" administration and analytics manifest themselves over time.

"Yeah, but these are basically the same views someone might get looking at a firewall, IDS or IPS tool today," you might argue.  That’s right, except we already know that server and virtualization administrators (as well as most network folk) don’t have access to those tools…

So in many cases the administrators who will be looking at this information are not "security" folks by trade; the (and you’ll excuse the wording) dumbing down of this information actually provides a very good perch from which to troubleshoot and extend the forced simplicity of "checkbox" security in the virtualization platforms to this new class of security administrator.

This may be the first time some of these teams have had access to "security" telemetry of this kind.

In the long term, the challenge will be how, when you have multiple such solutions, you gain a consolidated view, but the reality is that the NetSec and HostSec admins can use this same view and then click through into the specific toolset management stacks for finer-grained configuration/analysis.

This is actually an interesting way to think about how the re-integration of the server admins, network and security teams might become more cohesive operationally in the future…through the same lens of visualizing the environment.

Here are some ideas of what I’m talking about; these are some snapshots of management interfaces of upcoming VirtSec solution providers.  These are random shots of some of the different views of managing virtual appliances…


Blue Lane: [management console screenshots]

Thanks to Amir Ben-Efraim (Altor), Greg Ness (Blue Lane), Michael Berman (Catbird), and Dave Devalk (Reflex) for getting these images to me.  Also, hat-tip to Joel Snyder for the noodle nudge…


Categories: Virtualization

Self Healing Intrusion Tolerance…

June 22nd, 2008 1 comment

Tim Greene from Computerworld wrote a story last week titled "Security software makes virtual servers a moving target."

This story draws attention to one on the same topic that popped up a while ago (see Dark Reading) about research led by George Mason University professor Arun Sood that is being productized and marketed as "Self Cleansing Intrusion Tolerance (SCIT)."

SCIT is based upon the premise that taking machines (within a virtualized environment) in and out of service rapidly, and additionally substituting the underlying operating system/application combinations, reduces the window of exposure to attack and hastens the remediation/mitigation process by introducing what Sood calls "security by diversity."

Examples are given in the article suggesting the applicability of application types for SCIT:

SCIT is best suited to servers with short transaction times and has been tested with DNS, Web and single-sign-on servers, he says, which can perform effectively even if each virtual server is in use for just seconds.
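For the curious, the core rotation loop SCIT describes might be simulated like this.  This is purely a thought-experiment sketch (all image names are made up, and a real implementation would be driving hypervisor snapshot/revert and provisioning APIs, not Python lists):

```python
import itertools

# Toy simulation of SCIT-style rotation: a pool of pristine, deliberately
# diverse OS/app builds is cycled into service for a short exposure window,
# then retired, so any compromise is discarded along with the instance.

class RotationPool:
    def __init__(self, images):
        self._cycle = itertools.cycle(images)  # diverse builds, round-robin
        self.active = None
        self.retired = []

    def rotate(self):
        """Swap the in-service instance for a fresh one from the pool."""
        if self.active is not None:
            self.retired.append(self.active)   # revert-to-snapshot in real life
        self.active = next(self._cycle)
        return self.active

pool = RotationPool(["dns-linux-a", "dns-bsd-b", "dns-linux-c"])
served = [pool.rotate() for _ in range(5)]     # one rotation per exposure window
print(served)
# ['dns-linux-a', 'dns-bsd-b', 'dns-linux-c', 'dns-linux-a', 'dns-bsd-b']
```

Even in this trivial form you can see where state gets awkward: anything the "dns-linux-a" instance learned or cached is gone on the next rotation, which is exactly the preservation-of-state problem discussed below.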

In today’s multi-tier, SOA, web2.0, cloud-compute, mashup world, with or without the issue of preservation of state across even short-transactional applications, I’m not sure I see the practical utility in this approach.  The high-level concept, yes, the underlying operational reality…not so much.

Some of you might notice the, um, slightly different comparative version of Sood’s acronym reflecting my opinion of this approach in this blog entry’s title… 😉

I think that SCIT’s underlying principles lend themselves well to the notions I champion of resilient and survivable systems, but I think that the mechanical practicality of the proposed solutions — even within the highly dynamic and agile framework of virtualization — simply aren’t realistic today.

Real-time infrastructure, with its dynamic orchestration, provisioning, governance, and security, is certainly evolving, and we might get to the point where heterogeneous systems are autonomously secured based upon global policy definitions up and down the stack, but we are quite some time away from being able to realize this vision.

You will no doubt notice that the focal element of SCIT is a security-centric perspective on lifecycle management of VMs.  It’s quite obvious that VM lifecycle management is a hotly-contested space over which many of the large infrastructure players are battling.

Security will simply be a piece of this puzzle, not the focus of it.

This is not to say that this solution is not worthy of consideration as we look out across the horizon (from a timing perspective it will likely surface again given its "ahead of its deployable time" status), but I’m forced to consider which box I’d check in describing SCIT today:

  • Feature
  • Solution
  • Future

Neat stuff, but if you’re going to take investment and productize something, it’s got to be realistically deployable.  I’d suggest that baking this sort of functionality into the virtualization platforms themselves and allowing for universal telemetry (sort of like this) to allow for either "self cleansing intrusion tolerance" or even "self healing intrusion tolerance" is probably a more reasonable concept. 


Categories: Virtualization

Security Pros Say VirtSec Is An Operations Problem?

June 19th, 2008 14 comments

Mark Gaydos from Tripwire’s blog wrote an interesting article titled "Ops or Security: Who’s Responsible for Securing Virtualization?"  The outcome is pretty much in line with my prior points that the biggest challenges we have in virtualization are operational and organizational rather than technical.

To wit, I quoteth from Mark’s post:

Tripwire recently performed a 25 question survey on virtualization security.  Respondents broke down 78%/22% between management and administrator/staff respectively.  We will be publishing a report around this survey in the next two weeks. 

However, one of the interesting points that came out of the survey was that respondents feel that the operations team is responsible for securing a virtualized environment (almost two thirds of the respondents felt this way).  This includes over half of the actual  “security” personnel who took the survey who feel operations has this responsibility. 

That’s right!  Over half of the people covering security who responded to the survey said operations needs to secure virtual systems and not them.

My question is why?  Does security not want to deal with virtualization?  Do personnel feel that operations is closer to virtualization and they understand the issues?  Does security just want to wash their hands of the issue?  Or is management just leaning towards having operations handle everything around virtualization?

However, I wonder how much Mark read into the security personnel’s answers inasmuch as he suggests that they do "…not want to deal with virtualization" versus perhaps the fact that they don’t actually have the visibility or access to the tools to do so!*

Responsibility and desire are two very different things!

Managing the "security" of virtualized environments today really centers around the deployment of virtual appliances and the configuration of the vSwitches.  That means in a VMware environment, you have to have access and rights via VirtualCenter.  The same is true of Xen derivatives: if you don’t have access to configure and provision the networking and VMs, you’re done.

Security in virtualized environments today is literally often thought of as a checkbox or two in a GUI somewhere.  (All things considered, it would be great to be able to realize that one day…)

Just like security folks have locked server and network admins out of *their* firewalls and IPS’s, and as network folks have done the same in *their* routers and switches, virtual SysAdmins have done the same in *their* virtual server environments.  If you don’t have access to the VM command and control, you can’t manage the security bits and pieces bolted onto it.

I don’t think it’s that the security folks *want* to surrender the responsibility, I think it’s that they never had it in the first place the moment the V-word entered the picture.

It ain’t rocket science.  It ain’t voodoo.  It ain’t a tectonic buck-passing conspiracy.  It’s access, separation of duties (by force), visibility and capability, plain and simple.


*Update: Per Amrit’s excellent comments, I look forward to Tripwire releasing the report to gain clarity on the question(s) asked, as it raises the question of whether the answers Mark refers to were in regard to the mechanical operationalization of security (the "doing" part) or the policy, strategy, audit and monitoring tasks.  Are we talking about "security management" in general or "security operations?"

In either circumstance the "security" team is — based upon my observation from feedback — being left out of both.

Categories: Virtualization

Verizon Business 2008 Data Breach Investigations Report

June 12th, 2008 14 comments

This is an excellent report culled from over four years and 500 forensic investigations performed by the Verizon Business RISK team.

There are some very interesting statistics presented in this report that may be very eye-opening to many (italicized comments added by me):

Who is behind data breaches?
73% resulted from external sources  <– So much for "insider risk trumps all"
18% were caused by insiders
39% implicated business partners
30% involved multiple parties

How do breaches occur?
62% were attributed to a significant error  <– Change control is as important as
59% resulted from hacking and intrusions   <– compensating controls
31% incorporated malicious code
22% exploited a vulnerability
15% were due to physical threats

What commonalities exist?
66%  involved data the victim did not know was on the system <– Know thy data/where it is!
75%  of breaches were not discovered by the victim  <– Manage and monitor!
83%  of attacks were not highly difficult
85%  of breaches were the result of opportunistic attacks
87%  were considered avoidable through reasonable controls <– So why aren’t they used?
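One caveat when reading figures like these: the categories within each question are not mutually exclusive (a breach can involve both an external party and a business partner, or both an error and an intrusion), which is why each group sums to well over 100%.  A quick sanity check on the numbers above:

```python
# The Verizon figures above describe overlapping categories, so the
# percentages within each question legitimately sum to more than 100%.

who = {"external": 73, "insiders": 18, "partners": 39, "multiple": 30}
how = {"error": 62, "hacking": 59, "malware": 31,
       "vuln_exploit": 22, "physical": 15}

print(sum(who.values()))  # 160 — breaches often implicate more than one party
print(sum(how.values()))  # 189 — and more than one contributing cause
```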

Very, very interesting…

You can get the report free of charge here.


*Update: I’ve read quite a few bristling reviews of this document.  Some claim it doesn’t go far enough to describe how VzB collected and sampled the data and from whom.  Others suggest it’s FUD and obviously just meant to generate business for VzB.

It’s true we don’t know who the customers were.  We don’t necessarily know which segments of industry they came from or how big/small they were.  It’s not authored by a disinterested party.  Got it.

I guarantee that some of the people who are critical of the report will bitch about it and then use this data just like they have the FBI/CERT data over the years…

Take the report at face value and map it against others to see how it lines up.

This is not the definitive work on breaches, for sure, but it’s an interesting and useful data point to consider when exploring trending as well as for use in strategic planning in assessing your security program and preparing for an inevitable breach. 

Categories: Uncategorized

Notes from the IBM Global Innovation Outlook: Security and Society

June 12th, 2008 No comments

This week I had the privilege to attend IBM’s Global Innovation Outlook in Chicago which focused this go-round on the topic of security and society.   This was the last in the security and society series with prior sessions held in Moscow, Berlin, and Tokyo.

The mission of the GIO is as follows:

The GIO is rooted in the belief that if we are to surface the truly revolutionary innovations of our time, the ones that will change the world for the better, we are going to need everyone’s help. So for the past three years IBM has gathered together the brightest minds on the planet — from the worlds of business, politics, academia, and non-profits – and challenged them to work collaboratively on tackling some of the most vexing challenges on earth. Healthcare, the environment, transportation.

We do this through a global series of open and candid conversations called “deep dives.” These deep dives are typically done on location. Already, 25 GIO deep dives have brought together more than 375 influencers from three dozen countries on four continents. But this year we’re taking the conversation digital, and I’m going to help make that happen.

The focus on security and society seeks to address the following:

The 21st Century has brought with it a near total redefining of the notion of security. Be it identity theft, border security, or corporate espionage, the security of every nation, business, organization and individual is in constant flux thanks to sophisticated technologies and a growing global interdependence. All aspects of security are being challenged by both large and small groups — even individuals — that have a disruptive capability disproportionate to their size or resources.

At the same time, technology is providing unprecedented ways to sense and deter theft and other security breaches.  Businesses are looking for innovative ways to better protect their physical and digital assets, as well as the best interests of their customers. Policy makers are faced with the dilemma of enabling socioeconomic growth while mitigating security threats. And each of us is charged with protecting ourselves and our assets in this rapidly evolving, increasingly confusing, global security landscape.

The mixture of skill sets, backgrounds, passions and agendas of those in attendance was intriguing and impressive.  Some of the folks we had in attendance were:

  • Michael Barrett, the CISO of PayPal
  • Chris Kelly, the CPO of Facebook
  • Ann Cavoukian, the Information & Privacy Commissioner of Ontario
  • Dave Trulio, special assistant to the president/homeland security council
  • Carol Rizzo, CTO of Kaiser Permanente
  • Mustaque Ahamad, Director, Georgia Tech Information Security Center
  • Julie Ferguson, VP of Emerging Technology, Debix
  • Linda Foley, Founder of the Identity Theft Resource Center
  • Andrew Mack, Director, Human Security Report Project, Simon Fraser University

The 24 of us, with the help of a moderator, spent the day discussing, ideating, and debating various elements of security and society as we clawed our way through pressing issues and events, both current and future-focused.

What was interesting to me — but not necessarily surprising — was that the discussions almost invariably found their way back to the issue of privacy, almost to the exclusion of anything else.

I don’t mean to suggest that privacy is not important — far from it — but I found that it became a black hole into which much of the potential for innovation was gravitationally lured.   Security is, and likely always will be, locked in a delicate (or not so delicate) struggle with the need for privacy, and privacy should certainly not take a back seat.

However, given what we experienced, where privacy became the "yeah, but" that almost kept discussions of innovation from starting at all, one might play devil’s advocate (and I did) and ask how we balance the issues at hand.  It was interesting to poke and prod to hear people’s reactions.

Given the makeup of many of the attendees, it’s not hard to see why things trended in this direction, but I don’t think we ever really got into the mode of discussing solutions instead of staying focused on the problems.

I certainly was responsible for some of that as Dan Briody, the event’s official blogger, highlighted a phrase I used to apologize in advance for some of the more dour aspects of what I wanted to ground us all with when I said “I know this conversation is supposed to be about rainbows and unicorns, but the Internet is horribly, horribly broken."

My goal was to ensure we talked about the future whilst also being mindful of the past and present — I didn’t expect we’d get stuck there, however.  I was hopeful that we could get past the way things were/are in the morning and move to the way things could be in the afternoon, but it didn’t really materialize.

There was a shining moment, as Dan wrote in the blog, that I found as the most interesting portion of the discussion, and it came from Andrew Mack.  Rather than paraphrase, I’m going to quote from Dan who summed it up perfectly:

Andrew Mack, the Director of the Human Security Report Project at the Simon Fraser University School for International Studies in Vancouver has a long list of data that supports the notion that, historically speaking, the planet is considerably more secure today than at any time. For example, the end of colonialism has created a more stable political environment. Likewise, the end of the Cold War has removed one of the largest sources of ideological tension and aggression from the global landscape. And globalization itself is building wealth in developing countries, increasing income per capita, and mitigating social unrest.

All in all, Mack reasons, we are in a good place. There have been sharp declines in political violence, global terrorism, and authoritarian states. Human nature is to worry. And as such, we often believe that the most dangerous times are the ones in which we live. Not true. Despite the many current and gathering threats to our near- and long-term security, we are in fact a safer, more secure global society.

I really wished we had been able to spend more time exploring these social issues more deeply, in balance with the privacy and technology elements that dominated the discussion, and actually unload the baggage to start thinking about novel ways of dealing with things 5 or 10 years out.

My feedback would be to split the sessions into two-day events.  Day one could be spent framing the problem sets and exploring the past and present.  This allows everyone to clearly define the problem space.  Day two would then focus on clearing the slate and mindmapping the opportunities for innovation and change to solve the challenges defined in day one.

In all, it was a great venue and I met some fantastic people and had great conversation.  I plan to continue to stay connected and work towards proposing and crafting solutions to some of the problems we discussed.

I hope I made a difference in a good way.


Categories: Innovation

Is There a Difference Between Data LOSS and Data LEAKAGE Prevention?

June 7th, 2008 21 comments

I was reading Stuart King’s blog entry titled "Is Data Loss Prevention Really Possible?"

Besides a very interesting and reasonable question to ask, I was also intrigued by a difference I spotted between the title of his article and the first sentence in the body.

Specifically, in the title Stuart asked if "Data Loss Prevention [is] Really Possible?" but in the body he asked if it "…is really possible to prevent data leakage?"

In my opinion, data loss and data leakage are two different issues, albeit with some degree of subtlety. I’m interested in your position.

I will explain my opinion via an update here once folks comment, so as not to color the outcome.

What’s your opinion?  Loss versus leakage?  Talk amongst yourselves.


Categories: DLP