Archive

Archive for the ‘Virtualization Security’ Category

My Information Security Magazine Cover Story: “Virtualization security dynamics get old, changes ahead”

November 4th, 2013 2 comments

This month’s Search Security (née Information Security Magazine) cover story was penned by none other than yours truly and is titled “Virtualization security dynamics get old, changes ahead.”

I hope you enjoy the story; it’s a retrospective on the beginnings of security in the virtual space, where we are now, and where we’re headed.

I tried very hard to make this a vendor-neutral look at the state of the union of virtual security.

I hope that it’s useful.

You can find the story here.

/Hoff


Intel TPM: The Root Of Trust…Is Made In China

February 22nd, 2013 8 comments

This is deliciously ironic.

Intel’s implementation of the TCG-driven TPM — the Trusted Platform Module — often described as a hardware root of trust, is essentially a cryptographic processor that allows for the storage (and retrieval) and attestation of keys.  There are all sorts of uses for this technology, including things I’ve written of and spoken about many times prior.  Here are a couple of good links:
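And if the mechanics behind that “root of trust” are fuzzy, here’s a minimal conceptual sketch of what TPM-backed attestation boils down to. This is emphatically NOT a real TPM API — just the shape of the math, done in software, with an HMAC key standing in for the attestation key that never leaves a real TPM:

```python
# Conceptual sketch only: simulates TPM-style PCR extension and quote
# verification in software. A real TPM does all of this in hardware.
import hashlib, hmac, os

def extend_pcr(pcr: bytes, measurement: bytes) -> bytes:
    """PCRs can only be extended, never set: new = H(old || measurement)."""
    return hashlib.sha256(pcr + measurement).digest()

# Measured boot: each component in the chain is hashed into the PCR.
boot_chain = (b"firmware", b"bootloader", b"kernel")
pcr0 = bytes(32)
for component in boot_chain:
    pcr0 = extend_pcr(pcr0, component)

# A "quote" is the TPM signing the PCR state plus a verifier-supplied nonce.
attestation_key = os.urandom(32)  # stand-in; never leaves a real TPM
nonce = os.urandom(16)            # freshness: prevents replayed quotes
quote = hmac.new(attestation_key, pcr0 + nonce, hashlib.sha256).digest()

# The verifier recomputes the expected PCR from known-good measurements;
# any change anywhere in the boot chain yields a different quote.
expected = bytes(32)
for component in boot_chain:
    expected = extend_pcr(expected, component)
assert hmac.compare_digest(
    quote, hmac.new(attestation_key, expected + nonce, hashlib.sha256).digest())
print("attested: boot chain matches known-good measurements")
```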

But here’s something that ought to make you chuckle, especially in light of current news and a renewed focus on supply chain management relative to security and trust.

The Intel TPM implementation that is used by many PC manufacturers, the same one that plays a large role in Intel’s TXT and Mt. Wilson Attestation platform, is apparently…wait for it…manufactured in…wait for it…China.

<thud>

I wonder how NIST feels about that?  ASSurance.

ROFLcopter.  Hey, at least it’s lead-free. o_O

Talk amongst yourselves.

/Hoff

Wanna Be A Security Player? Deliver It In Software As A Service Layer…

January 9th, 2013 1 comment

As I continue to think about the opportunities that Software Defined Networking (SDN) and Network Function Virtualization (NFV) bring into focus, the capability to deliver security as a service layer is indeed exciting.

I wrote about how SDN and OpenFlow (as a functional example) and the security use cases provided by each would be a differentiating capability back in 2011: “The Killer App For OpenFlow and SDN? Security,” “OpenFlow & SDN – Looking Forward to SDNS: Software Defined Network Security,” and “Back To The Future: Network Segmentation & More Moaning About Zoning.”

Recent activity in the space has done nothing but reinforce this opinion.  My day job isn’t exactly lacking in excitement, either 🙂

As many networking vendors begin to bring their SDN solutions to market — whether in the form of networking equipment or controllers designed to interact with them — one of the missing strategic components is security.  This isn’t a new phenomenon, unfortunately, and as such, predictably there are also now startups entering this space and/or retooling from the virtualization space and stealthily advertising themselves as “SDN Security” companies 🙂

Like we’ve seen many times before, security is often described (confused?) as a “simple” or “atomic” service and so SDN networking solutions are designed with the thought that security will simply be “bolted on” after the fact and deployed not unlike a network service such as “load balancing.”  The old “we’ll just fire up some VMs and TAMO (Then a Miracle Occurs) we’ve got security!” scenario.  Or worse yet, we’ll develop some proprietary protocol or insertion architecture that will magically get traffic to and from physical security controls (witness the “U-TURN” or “horseshoe” L2/L3 solutions of yesteryear.)

The challenge is that much of security today is still very topologically sensitive: it depends upon classical networking constructs to be either physically or logically plumbed between the “outside” and the asset under protection, or it’s very platform dependent and lacks the ability to define a policy that truly travels with the workload regardless of the virtualization, underlay OR overlay solutions.

Security is often operationalized across multiple layers using wildly different constructs, APIs and context in terms of policy and disposition, depending upon the type of control and its desired effect.

Virtualization has certainly evolved our thinking about how we should think differently about security mostly due to the dynamism and mobility that virtualization has introduced, but it’s still incredibly nascent in terms of exposed security capabilities in the platforms themselves.  It’s been almost 5 years since I started raging about how we need(ed) platform providers to give us capabilities that function across stacks so we’d have a fighting chance.  To date, not only do we have perhaps ONE vendor doing some of this, but we’ve seen the emergence of others who are maniacally focused on providing as little of it as possible.

If you think about what virtualization offers us today from a security perspective, we have the following general solution options:

  1. Hypervisor-based security solutions which may apply policy as a function of the virtual NIC of the workloads they protect.
  2. Extensions of virtual-networking (i.e. switching) solutions that enable traffic steering and some policy enforcement that often depend upon…
  3. Virtual Appliance-based security solutions that require manual or automated provisioning, orchestration and policy application in user space that may or may not utilize APIs exposed by the virtual networking layer or hypervisor

There are tradeoffs across each of these solutions; scale, performance, manageability, statefulness, platform dependencies, etc.  There simply aren’t many platforms that natively offer security capabilities as a function of service delivery that allows arbitrary service definition with consistent and uniform ways of describing the outcome of the policies at these various layers.  I covered this back in 2008 (it’s a shame nothing has really changed) in my Four Horsemen Of the Virtual Security Apocalypse presentation.

As I’ve complained for years, we still have 20 different ways of defining how to instantiate a five-tuple ACL as a basic firewall function.
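To make that concrete, here’s the atomic unit in question stripped of vendor syntax — a minimal, hypothetical sketch (the names are mine, not any product’s) of the one rule we somehow manage to express 20 different ways:

```python
# A neutral representation of a five-tuple ACL entry. Every platform
# renders some variation of this in its own syntax today.
from dataclasses import dataclass
from ipaddress import ip_network

@dataclass(frozen=True)
class FiveTupleRule:
    src: str       # source prefix, e.g. "10.0.0.0/24"
    dst: str       # destination prefix
    proto: str     # "tcp" | "udp" | ...
    src_port: str  # port, range or "any"
    dst_port: str
    action: str    # "permit" | "deny"

rule = FiveTupleRule(src="10.0.0.0/24", dst="192.168.1.10/32", proto="tcp",
                     src_port="any", dst_port="443", action="permit")

# Validate that the prefixes parse; a real policy compiler would then
# emit platform-specific syntax (iptables, ACL lines, security groups...).
ip_network(rule.src), ip_network(rule.dst)
print(rule)
```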

Out of the Darkness…

The promise of SDN truly realized — the ability to separate the control, forwarding, management and services planes and deploy security as a function of available service components across overlays and underlays — means we will be able to take advantage of any of these models, so long as we have a way to programmatically interface with the various strata, regardless of whether we provision at the physical, virtual or overlay virtual layer.
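What that programmatic interface might look like is worth sketching. Below is a hypothetical, OpenFlow-flavored match/action rule that steers traffic through a security service before forwarding — purely illustrative; the field and function names are not any controller’s actual API:

```python
# Sketch of "security as a function of service components": a controller
# installs match/action rules that insert an inspection service into the
# path, on whatever stratum (physical, virtual, overlay) it manages.
from dataclasses import dataclass, field

@dataclass
class FlowRule:
    match: dict                       # header fields to match on
    actions: list = field(default_factory=list)
    priority: int = 100               # higher priority wins

# Steer traffic bound for the web tier through a virtual IPS first --
# service insertion as policy, not as physical cabling.
steer_through_ips = FlowRule(
    match={"ip_dst": "10.1.2.0/24", "tcp_dst": 443},
    actions=["output:ips-service-port"],
    priority=200)

default_forward = FlowRule(match={}, actions=["normal-l2-forwarding"])

# A controller would push these to every element it manages, giving the
# same policy regardless of where the workload is provisioned.
for rule_entry in (steer_through_ips, default_forward):
    print(rule_entry)
```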

It’s truly exciting.  We’re seeing some real effort to enable true security service delivery.

When I think about how to categorize the intersection of “SDN” and “Security,” I think about it the same way I have with virtualization and Cloud:

  • Securing SDN (Securing the SDN components)
  • SDN Security Services (How do I take security and use SDN to deliver security as a service)
  • Security via SDN (What NEW security capabilities can be derived from SDN)

There are numerous opportunities with each of these categories to really make a difference to security in the coming years.

The notion that many of our network and security capabilities are becoming programmatic means we *really* need to focus on securing SDN solutions themselves, especially given the potential for abuse that comes with the separation of the various channels. (See: Software Defined Networking (In)Security: All Your Control Plane Are Belong To Us…)

Delivering security as a service via SDN holds enormous promise for reasons I’ve already articulated and gives us an amazing foundation upon which to start building solutions we can’t imagine today given the lack of dynamism in our security architecture and design patterns.

Finally, the first two elements give rise to the third, allowing us to do things we can’t even imagine with today’s traditional physical and even virtual solutions.

I’ll be starting to highlight really interesting solutions I find (and am able to talk about) over the next few months.

Security enabled by SDN is going to be huge.

More soon.

/Hoff


NIST’s Trusted Geolocation in the Cloud: PoC Implementation

December 22nd, 2012 3 comments

I was very interested and excited to learn what NIST researchers and staff had come up with when I saw the notification of the “Draft Interagency Report 7904, Trusted Geolocation in the Cloud: Proof of Concept Implementation.”

It turns out that this report is an iteration on the PoC previously created by VMware, Intel and RSA back in 2010, which utilized Intel’s TXT, VMware’s virtualization platform and the RSA/Archer GRC platform, as this one does.

I haven’t spent much time looking at the differences, but I’m hoping as I read through it that we’ve made progress…

You can read about the original PoC here, and watch a video about it from 2010 here.  Then you can read about it again in its current iteration here (PDF).

I wrote about this topic back in 2009 and still don’t have a firm answer to the question I asked then in a blog titled “Quick Question: Any Public Cloud Providers Using Intel TXT?” and the follow-on “More On High Assurance (via TPM) Cloud Environments.”

At CloudConnect 2011 I also filmed a session with the Intel/RSA/VMware folks titled “More On Cloud and Hardware Root Of Trust: Trusting Cloud Services with Intel® TXT.”

I think this is really interesting stuff and a valuable security and compliance capability, but it is apparently still hampered by practical deployment challenges.

I’m also confused as to why the RSA employees involved were not appropriately attributed under the NIST banner, and by the fact that this is very much a product-specific/vendor-specific set of solutions…I’m not sure I’ve ever seen a NIST-branded report like this.

At any rate, I am interested to see if we will get to the point where these solutions will have more heterogeneous uptake across platforms.

/Hoff


Should/Can/Will Virtual Firewalls Replace Physical Firewalls?

October 15th, 2012 6 comments
(Diagram of a firewall placed between a LAN and a WAN. Photo credit: Wikipedia)

“Should/Can/Will Virtual Firewalls Replace Physical Firewalls?”

The answer is, as always, “Of course, but not really, unless maybe, you need them to…” 🙂

This discussion crops up from time to time, usually fueled by a series of factors that often lack the context needed to address the question appropriately.

The reality is there exists the ever-useful answer of “it depends,” and frankly it’s a reasonable answer.

Back in 2008 when I created “The Four Horsemen of the Virtualization Security Apocalypse” presentation, I highlighted the very real things we needed to be aware of as we saw the rapid adoption of server virtualization…and the recommendations from virtualization providers as to the approach we should take in terms of securing the platforms and workloads atop them.  Not much has changed in almost five years.

However, each time I’m asked this question, I inevitably sound evasive as I probe for more detail about what the person doing the asking actually means by “physical” firewalls.  Normally the words “air gap” get added to the mix.

The very interesting thing about how people answer this question is that, in reality, the great majority of firewalls deployed today have the following features in common:

  1. Heavy use of network “LAG” (link aggregation group) interface bundling/VLAN trunking and tagging
  2. Heavy network virtualization used, leveraging VLANs as security boundaries, trunked across said interfaces
  3. Increased use of virtualized contexts and isolated resource “virtual systems” and separate policies
  4. Heavy use of ASIC/FPGA and x86 architectures which make use of shared state tables, memory and physical hardware synced across fabrics and cluster members
  5. Predominant use of “stateful inspection” at layers 2-4, with the addition of protocol decoders at L5-7 for more “application-centric” enforcement (see the sketch after this list)
  6. Increasing use of “transparent proxies” at L2 but less (if any) full circuit or application proxies in the classic sense
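To illustrate item 5, here’s a minimal, hypothetical sketch of what “stateful inspection” amounts to: a flow table keyed on the five-tuple, which the ASIC/x86 implementations in item 4 share and sync across fabrics and cluster members. The names and the one-line policy are illustrative only:

```python
# Toy stateful inspection: policy is consulted only for flow-initiating
# packets; everything else is permitted or denied based on flow state.
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowKey:
    src_ip: str
    dst_ip: str
    proto: str
    src_port: int
    dst_port: int

state_table: dict = {}  # in real gear, synced across cluster members

def policy_allows(key: FlowKey) -> bool:
    # Stand-in for the rulebase; allow outbound HTTPS as the example.
    return key.proto == "tcp" and key.dst_port == 443

def inspect(key: FlowKey, is_syn: bool) -> str:
    if key in state_table:
        return "permit (established)"
    if is_syn and policy_allows(key):
        state_table[key] = "ESTABLISHED"
        return "permit (new flow)"
    return "deny"

flow = FlowKey("10.0.0.5", "93.184.216.34", "tcp", 51515, 443)
print(inspect(flow, is_syn=True))   # permit (new flow)
print(inspect(flow, is_syn=False))  # permit (established)
```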

So before I even START to address the use cases of the “virtual firewalls” that people reference as the comparison, nine times out of ten that supposed “air gap” provided by dedicated physical firewalls doesn’t compute.

Most of the firewall implementations people actually run meet most of the criteria mentioned in items 1-6 above.

Further, most firewall architectures today aren’t running full L7 proxies across dedicated physical interfaces like in the good old days (Raptor, etc.) for some valid reasons…(read the postscript for an interesting prediction.)

Failure domains — and the threat modeling that illustrates cascading impact due to complexity, outright failure or compromised controls — are usually what people are really interested in when asking this question, but this gets almost completely obscured by the “physical vs. virtual” concern and we often never dig deeper.

There are some amazing things that can be done in virtual constructs that we can’t do in the physical and there are some pretty important things that physical firewalls can provide that virtual versions have trouble with.  It’s all a matter of balance, perspective, need, risk and reward…oh, and operational simplicity.

I think it’s important to understand what we’re comparing when asking that question before we conflate use cases, compare and mismatch expectations, and make summary generalizations (like I just did 🙂 about that which we are contrasting.

I’ll actually paint these use cases in a follow-on post shortly.

/Hoff

POSTSCRIPT:

I foresee that we will see a return of the TRUE application-level proxy firewall — especially with application identification, cheap hardware, more security and networking virtualized in hardware.  I see this being deployed both on-premise and as part of a security as a service offering (they are already, today — see CloudFlare, for example.)

If you look at the need to terminate SSL/TLS and provide not only L4-L7 sanity but also protection for applications/sessions at L5-7 (web and otherwise), AND the renewed dependence upon XML, SOAP, REST, JSON, etc., it will drive even more interesting discussions in this space.  Watch as the hybrid merge of the WAF+XML security services gateway returns to vogue… (see also Cisco EOLing ACE while I simultaneously receive an email from Intel informing me I can upgrade to their Intel Expressway Service Gateway…which I believe (?) came from the Sarvega acquisition.)


The Cuban Cloud Missile Crisis…Weapons Of Mass Abstraction.

September 7th, 2012 2 comments
(Coat of arms of Cuba. Photo credit: Wikipedia)

In the midst of the Cold War, in October of 1962, the United States and the Soviet Union stood perilously on the brink of nuclear war as a small island some 90 miles off the coast of Florida became the focal point of intense foreign policy scrutiny, challenges to sovereignty, and political arm wrestling the likes of which had never been seen before.

Photographic evidence provided by a high-altitude U.S. spy plane exposed the until-then secret construction of medium- and intermediate-range ballistic missile sites, built by the Soviet Union and deliberately placed close enough to reach the continental United States.

The United States, already on uneasy terms with communist Cuba, had the year before attempted — and failed at — a CIA-led forceful invasion and overthrow of the Cuban regime at the Bay of Pigs.

That had not sat well with either the Cubans or the Soviets.  Now a nightmare scenario ensued as the Soviets responded with threats of their own to defend their ally (and strategic missile sites) at any cost, declaring the Americans’ actions unprovoked and unacceptable.

During an incredibly tense standoff, the U.S. mulled over plans to again attack Cuba both by air and sea to ensure the disarmament of the weapons that posed a dire threat to the country.

As posturing and threats from the Soviets continued to escalate, President Kennedy elected to pursue a less direct military action: a naval blockade designed to prevent the shipment of supplies necessary for the completion and activation of launchable missiles.  Using this as a lever, the U.S. continued to demand that the Soviets dismantle and remove all nuclear weapons as it prevented any and all naval traffic to and from Cuba.

Soviet premier Khrushchev protested such acts of “direct aggression” and communicated to President Kennedy that his tactics were plunging the world into the depths of potential nuclear war.

While both countries publicly traded threats of war, the bravado, posturing and defiance were actually a cover for secret backchannel negotiations involving the United Nations. The Soviets promised they would dismantle and remove nuclear weapons, support infrastructure and transports from Cuba, and the United States promised not to invade Cuba while also removing nuclear weapons from Turkey and Italy.

The Soviets made good on their commitment two weeks later.  Eleven months after the agreement, the United States complied and removed from service the weapons abroad.

The Cold War ultimately ended and the Soviet Union fell, but the political, economic and social impact remains even today — 50 years later we have uneasy relations with (now) Russia, and the United States still enforces ridiculous economic and social embargoes on Cuba.

What does this have to do with Cloud?

Well, it’s a cute “movie of the week” analog desperately in need of a casting call for Nikita Khrushchev and JFK.  I hear Gary Busey and Ashton Kutcher are free…

As John Furrier, Dave Vellante and I were discussing on theCUBE recently at VMworld 2012, there exists an uneasy standoff — a cold war — between the so-called “super powers” staking a claim in Cloud.  The posturing and threats currently in process don’t quite have the world-ending outcomes that nuclear war would bring, but it could have devastating technology outcomes nonetheless.

In this case, the characters of the Americans, Soviets, Cubans and the United Nations are played by networking vendors, SDN vendors, virtualization/abstraction vendors, cloud “stack” projects/efforts/products and underlying CPU/chipset vendors (not necessarily in that order…)  The rest of the world stands by as their fate is determined on the world’s stage.

If we squint hard enough at Cloud, we might find our very own version of the “Bay of Pigs” in what’s going on with OpenStack.

The “community” effort behind OpenStack is one largely based on “industry,” and if we think of OpenStack as Cuba, it’s being played as a pawn in a much larger battle for global domination.  The munitions being stockpiled in this tiny little enclave threaten to disrupt relations of epic proportions.  That’s why we now see so much strategic movement around an initiative and technology that many outside the navel gazers haven’t really paid much attention to.

Then there are players like Amazon Web Services who, like the China of today, quietly amass their weapons of mass abstraction as the industry jockeying and distractions play on (but that’s a topic for another post).

Cutting to the chase…if we step back for a minute

Intel is natively bundling more and more networking and virtualization capabilities into their CPUs/chipsets, and a $7B investment in security company McAfee makes them a serious player there.  VMware is de-emphasizing the “hypervisor” and instead positioning itself as focused on end-to-end solutions which include everything from secure mobility to orchestration/provisioning and now, with Nicira, networking.  Networking companies like Cisco and Juniper continue to move up-stack to more deeply integrate networking and security, along with service overlays, in order to remain relevant in light of virtualization and SDN.

…and OpenStack’s threat of disrupting all of those plays makes it important enough to pay attention to.  It’s a little island of technology that is causing huge behemoths to collide.  A molehill that has become a mountain.

If today’s announcements of VMware and Intel joining OpenStack as Gold Members along with the existing membership by other “super powers” doesn’t make it clear that we’re in the middle of an enormous power struggle, I’ve got a small Island to sell you 😉

Me?  I’m going to make some Lechon Asado, enjoy a mojito and a La Gloria Cubana.


SiliconAngle Cube: Hoff On Security – Live At VMworld 2012

August 31st, 2012 3 comments

I was thrilled to be invited back to the SiliconAngle Cube at VMworld 2012, where John Furrier, Dave Vellante and I spoke in depth about security, virtualization and software defined networking (SDN).

I really like the way the chat turned out — high octane, fast paced and full of great questions!

Here is the amazing full list of speakers during the event.  Check it out, ESPECIALLY Martin Casado’s talk.

As I told him, I think he is like my Obi-Wan…my only hope for convincing my friends at VMware that networking and security require more attention and a real embrace of the ecosystem…

I’d love to hear your feedback on the video.

/Hoff

Back To The Future: Network Segmentation & More Moaning About Zoning

July 16th, 2012 5 comments

A Bit Of Context…

(Photo credit: Wikipedia)

The last 3 years have been very interesting when engaging with large enterprises and service providers as they set about designing, selecting and deploying their “next generation” network architectures. These new networks are deployed on timescales that see them collide with disruptive innovations such as fabrics, cloud, big data and DevOps.

In most cases, these network platforms must account for the nuanced impact of virtualized design patterns, refreshes of programmatic architecture and languages, and the operational model differences these things introduce.  What’s often apparent is that no matter how diligent the review, by the time these platforms are chosen, many tradeoffs are made — especially when it comes to security and compliance — and we arrive at the old adage: “You can get fast, cheap or secure…pick two.”

…And In the Beginning, There Was Spanning Tree…

The juxtaposition of flatter and flatter physical networks, née “fabrics” (compute, network and storage), with what seems to be a flip-flop between belief systems and architects who push for either layer 2 or layer 3 (or encapsulated versions thereof) segmentation at the higher layers, is aggravated again by the continued push for security boundary definitions that yield further segmentation based on policy at the application and information layers.

So what we end up with is the benefits of flatter, any-to-any connectivity at the physical networking layer with a “software defined” and virtualized networking context floating both alongside (Nicira, BigSwitch, OpenFlow) as well as atop it (VMware, Citrix, OpenStack Quantum, etc.) with a bunch of protocols ladled on like some protocol gravy blanketing the Chicken Fried Steak that represents the modern data center.

Oh!  You Mean the Cloud…

Now, there are many folks who don’t approach it this way, and instead abstract away much of what I just described.  In Amazon Web Services’ case, as a service provider they dumb down the network sufficiently and control the abstracted infrastructure to the point that “flatness” is the only thing customers get; if you’re going to run your applications atop it, you must keep things simple and programmatic in nature or else risk introducing unnecessary complexity into the “software stack.”

The customers who then depend upon these simplified networking services must absorb the gaps introduced by the missing features, either by architecturally engineering around them — becoming more automated, instrumented and programmatic in nature — or by adding yet another layer of virtualized (and generally encrypted) transport and execution above them.

This works if you’re able to engineer your way around these gaps (or make them less relevant), but generally this is where segmentation becomes an issue, due to security and compliance design patterns which depend on the “complexity” introduced by the very flexible networking constructs available in most enterprise or SP networks.

It’s like a layered cake that keeps self-frosting.

Software Defined Architecture…

You can see the extreme opportunity for Software Defined *anything* then, can’t you?  With SDN, let the physical networks NOT be complex but rather simple and flat, and then unify the orchestration, traffic steering, service insertion and (even) security capabilities of the physical and virtual networks AND the virtualization/cloud orchestration layers (from the networking perspective) into a single intelligent control plane…

That’s a big old self-frosting cake.

Basically, this is what AWS has done…but all of the intelligence that single pane of glass provides is currently left up to the app owners atop it.  That’s the downside.  Those AWS customers who are sufficiently enlightened are generally aware of this and understand the balance of benefits and limitations of this path.

In an enterprise environment, however, it’s a timing game between the controller vendors, the virtualization/cloud stack providers, the networking vendors and the security vendors…each trying to offer up this capability either as an “integrated” capability or as an overlay…all under the watchful eye of the auditor, who is generally unmotivated, uneducated and unnerved by all this new technology — especially since the compliance frameworks and regulatory elements aren’t designed to account for these dramatic shifts in architecture or operation (let alone the threat curve of advanced adversaries.)

Back To The Future…Hey, Look, It’s Token Ring and DMZs!

As I sit with these customers who build these nextgen networks, the moment segmentation comes up, the elegant network and application architectures rapidly crumble into piles of asset-based rubble as what happens next borders on the criminal…

Thanks to compliance initiatives — PCI is a good example — no matter how well scoped, those flat networks become more and more logically hierarchical.  Because SDN is still nascent and we’re lacking that unified virtualized network (and security) control plane, we end up resorting back to platform-specific “less flat” network architectures in both the physical and virtual layers to achieve “enclave” like segmentation.

But with virtualization the problem gets more complex: in an attempt to be agile and cost efficient, and in order to bring data to the workloads (to reduce the heavy lifting of the opposite approach), out-of-scope assets can often (and suddenly) become co-resident with in-scope assets…traversing logical and physical constructs in ways that make it much more difficult to threat model, since the level of virtualized context support differs wildly across these layers.

Architects are then left to think how they can effectively take all the awesome performance, agility, scale and simplicity offered by the underlying fabrics (compute, network and storage) and then layer on — bolt on — security and compliance capabilities.

What they discover is that it’s very, very, very platform specific…which is why we see protocols such as VXLAN and NVGRE pop up to deal with it.
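For the curious, here’s a minimal sketch of the mechanism those encapsulation protocols provide, using VXLAN’s header layout (per RFC 7348) as the example. The payload below is a placeholder, not a real Ethernet frame:

```python
# VXLAN wraps an L2 frame in UDP with a 24-bit segment ID (the VNI),
# letting logical segments ride over any flat IP fabric.
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned destination port for VXLAN
VNI = 5001             # the 24-bit VXLAN Network Identifier ("segment")

def vxlan_header(vni: int) -> bytes:
    """8-byte VXLAN header: flags (I-bit set = VNI valid), 24-bit VNI."""
    flags = 0x08 << 24                       # 0x08000000: I-flag set
    return struct.pack("!II", flags, (vni & 0xFFFFFF) << 8)

inner_frame = b"\x00" * 64                   # placeholder L2 frame
packet = vxlan_header(VNI) + inner_frame     # would ride inside UDP/4789
print(len(packet), packet[:8].hex())         # 72 0800000000138900
```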

Lego Blocks and Pig Farms…

These architects then replicate the design patterns with which they are familiar and start to craft DMZs that are logically segmented in the physical network and then grafted onto the virtual.  So we end up relying on what Gunnar Peterson and I refer to as the “SSL and Firewall” lego block…we front-end collections of “layer 2 connected” assets grouped by criticality or function, many of which stretch across these fabrics, and locate them behind layer 3 “firewalls” which provide basic zone-based isolation and often VPN connectivity between “trusted” groups of other assets.

In short, rather than build applications that securely authenticate and communicate — or worse yet, even when they do — we pigpen our corralled assets and make our estate fatter instead of flatter.  It’s really a shame.

I’ve made the case in my “Commode Computing” presentation that one of the very first things that architects need to embrace is the following:

…by not artificially constraining the way in which we organize, segment and apply policy (i.e. “put it in a DMZ”) we can think about how design “anti-patterns” may actually benefit us…you can call them what you like, but we need to employ better methodology for “zoning.”

These trust zones or enclaves are reasonable in concept so long as we can ultimately abstract their “segmentation” and express the security and compliance policy requirements programmatically, taking the logical business and functional use-case PROCESSES into consideration when defining, expressing and instantiating said policy.

You know…understand what talks to what and why…
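As a thought experiment, here’s what “what talks to what and why” might look like expressed as data rather than as VLANs and firewall rules. Everything here (names, fields) is hypothetical — the point is that intent survives even when workloads move across physical, virtual or overlay layers:

```python
# Policy as a process model: flows are justified by business function,
# not by IP addresses or wiring. Default deny for everything else.
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    src: str   # logical workload/application role, not an IP
    dst: str
    port: int
    why: str   # the business/functional process justifying the flow

POLICY = [
    Flow("web-tier",  "app-tier", 8443, "order processing"),
    Flow("app-tier",  "db-tier",  5432, "order persistence"),
    Flow("batch-job", "db-tier",  5432, "nightly reconciliation"),
]

def allowed(src: str, dst: str, port: int) -> bool:
    return any(f.src == src and f.dst == dst and f.port == port
               for f in POLICY)

# A per-platform compiler would render POLICY into security groups, ACLs
# or hypervisor vNIC filters; segmentation falls out of the process
# model instead of the wiring diagram.
print(allowed("web-tier", "db-tier", 5432))  # False: never justified
print(allowed("app-tier", "db-tier", 5432))  # True
```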

A great way to think about this problem is to apply the notion of application mobility — without VM containers — and how one would instantiate a security “policy” in that context.  In many cases, as we march up the stack to distributed platform application architectures, we’re not able to depend upon the “crutch” that hypervisors or VM packages have begun to give us in legacy architectures that have virtualization grafted onto them.

Since many enterprises are now just starting to better leverage their virtualized infrastructure, there *are* some good solutions (again, platform specific) that unify the physical and virtual networks from a zoning perspective, but the all-up process-driven, asset-centric (app & information) view of “policy” is still woefully lacking, especially in heterogeneous environments.

Wrapping Up…

In enterprise and SP environments where we don’t have the opportunity to start anew, it often feels like we’re so far off from this sort of capability because it requires a shift that makes software defined networking look like child’s play.  Most enterprises don’t do risk-driven, asset-centric, process-mapped modelling [and SPs are disconnected from this], so segmentation falls back to what we know: DMZs with VLANs, NAT, firewalls, SSL and new protocol band-aids invented to cover gaping arterial wounds.

In environments lucky enough to think about and match the application use cases with the highly differentiated operational models that virtualized *everything* brings to bear, it’s here today — but be prepared for, and honest about, the fact that the vendor(s) you choose must be strategic, and the interfaces between those platforms and external entities VERY well defined…else you risk software defined entropy.

I wish I had more than the 5 minutes it took to scratch this out because there’s SO much to talk about here…

…perhaps later.


PrivateCore: Another Virtualization-Enabled Security Solution Launches…

June 21st, 2012 No comments

On the heels of Bromium’s coming-out party yesterday at GigaOM’s Structure conference, PrivateCore — a company founded by VMware vets Oded Horovitz and Carl Waldspurger along with Google’s Steve Weis — announced a round of financing and what I interpret as a more interesting and focused raison d’être.

Previously, in videos released by Oded, he described the company’s focus as protecting servers (cloud or otherwise) against physical incursion — attacks such as extracting contents from memory, where physical access is required.

From what I could glean, the PrivateCore solution utilizes encryption and the CPU cache (need to confirm) to provide memory isolation, rendering these attack vectors moot.

What’s interesting is the way in which PrivateCore is now highlighting the vehicle for their solution: a “hardened hypervisor.”

It will be interesting to see how well they can market this approach/technology (and to whom), what sort of API/management planes their VMM provides, and how long they stand alone before being snapped up — perhaps even by VMware or Citrix.

More good action (and $2.25M in funding) in the virtual security space.

/Hoff

Enhanced by Zemanta

Elemental: Leveraging Virtualization Technology For More Resilient & Survivable Systems

June 21st, 2012 Comments off

Yesterday saw the successful launch of Bromium at GigaOM’s Structure conference in San Francisco.

I was privileged to share some stage time with Stacey Higginbotham and Simon Crosby (co-founder, CTO, mentor and good friend) after Simon’s big reveal of Bromium’s operating model and technology approach.

While product specifics weren’t disclosed, we spent some time chatting about Bromium’s approach to solving a particularly tough set of security challenges with a focus on realistic outcomes given the advanced adversaries and attack methodologies in use today.

At the heart of our discussion* was the notion that in many cases one cannot detect — let alone prevent — specific types of attacks, and that this requires a new way of containing the impact of exploited vulnerabilities (known or otherwise) which target the human factor as much as they do weaknesses in underlying operating systems and application technologies.

I think Kurt Marko did a good job summarizing Bromium in his article here, so if you’re interested in learning more check it out. I can tell you that as a technology advisor to Bromium and someone who is using the technology preview, it lives up to the hype and gives me hope that we’ll see even more novel approaches of usable security leveraging technology like this.  More will be revealed as time goes on.

That said, with productization details purposely left vague, Bromium’s implementation leveraging Intel’s VT technology and its “microvisor” approach brought comments yesterday from many folks who were reminded of what they called “similar approaches” (however right or wrong they may be) that use virtualization technology and/or “sandboxing” to provide more “secure” systems.  I recall the following coming up in passing conversation:

  • Determina (VMware acquired)
  • GreenBorder (Google acquired)
  • Trusteer
  • Invincea
  • DeepSafe (Intel/McAfee)
  • Intel TXT w/MLE & hypervisors
  • Self Cleansing Intrusion Tolerance (SCIT)
  • PrivateCore (Newly launched by Oded Horovitz)
  • etc…

I don’t think Simon would argue that the underlying approach of utilizing virtualization for security (even for an “endpoint” application) is new, but the approach toward making it invisible and transparent from a user experience perspective certainly is.  Operational simplicity and not making security the user’s problem is a beautiful thing.

Here is a video of Simon’s and my session, “Secure Everything.”

What’s truly of interest to me — based on what Simon said yesterday — is that the application of this approach could be just as at home in a “server,” cloud or mobile application as it is in a classical desktop environment.  There are certainly dependencies (such as VT) today, but the notion that we can leverage virtualization for better resilience, survivability and assurance — for more “trustworthy” systems — is exciting.

I for one am very excited to see how we’re progressing from “bolt on” to more integrated approaches in our security models. This will bear fruit as we become more platform and application-centric in our approach to security, allowing us to leverage fundamentally “elemental” security components to allow for more meaningfully trustworthy computing.

/Hoff

* The range of topics was rather hysterical; from the Byzantine General’s problem to K/T Boundary extinction-class events to the Mexican/U.S. border fence, it was chock full of analogs 😉

 
