NIST’s Trusted Geolocation in the Cloud: PoC Implementation

December 22nd, 2012

I was very interested and excited to learn what NIST researchers and staff had come up with when I saw the notification of the “Draft Interagency Report 7904, Trusted Geolocation in the Cloud: Proof of Concept Implementation.”

It turns out that this report is an iteration on the PoC previously created by VMware, Intel and RSA back in 2010, which utilized Intel’s TXT, VMware’s virtualization platform and the RSA/Archer GRC platform, as this one does.

I haven’t spent much time looking at the differences, but I’m hoping as I read through it that we’ve made progress…

You can read about the original PoC here, and watch a video from 2010 about it here.  Then you can read about it again in its current iteration, here (PDF.)

I wrote about this topic back in 2009 and still don’t have a firm answer to the question I asked then in a blog titled “Quick Question: Any Public Cloud Providers Using Intel TXT?” and the follow-on “More On High Assurance (via TPM) Cloud Environments.”

At CloudConnect 2011 I also filmed a session with the Intel/RSA/VMware folks titled “More On Cloud and Hardware Root Of Trust: Trusting Cloud Services with Intel® TXT.”

I think this is really interesting stuff and a valuable security and compliance capability, but it is apparently still hampered by practical deployment challenges.

I’m also confused as to why RSA employees were not appropriately attributed under the NIST banner, and by the fact that this is very much a product-specific/vendor-specific set of solutions…I’m not sure I’ve ever seen a NIST-branded report like this.

At any rate, I am interested to see if we will get to the point where these solutions will have more heterogeneous uptake across platforms.

/Hoff


On Puppy Farm Vendors, Petco and The Remarkable Analog To Security Consultancies/Integrators…

December 5th, 2012
[Image: Funny Attention Dogs And Owners Sign (Photo credit: www.dogpoopsigns.com)]

Imagine you are part of a company in the “Pet Industry.”  Let’s say dogs, specifically.

Imagine further that regardless of whether you work on the end that feeds the dog, provides services focused on grooming the dog, sells accessories for the dog, actually breeds and raises the dog or deals with cleaning up what comes out the other end of the dog, you also simultaneously spend your time offering your opinions on how much you despise the dog industry.

Hmmmm.

Now, either you’re being refreshingly honest, or you’re simply being shrewd about which end of the mutt you’re targeting your services toward — and sometimes it’s both ends and the middle — but you’re still a part of the dog industry.

And we all know it’s a dog-eat-dog world…in the Pet business as it is in the Security business.  Which ironically illustrates the cannibalistic nature of being in the security industry whilst trying to distance oneself from it by juxtaposing one’s position with that of the security “community.”

Claiming to be a Dog Whisperer in an industry of other aimless people shouting and clapping loudly whilst looking to perpetuate bad dog-breeding practices so they can sell across the supply chain is an interesting tactic.  However, yelling “BAD DOG!” and wondering why it continues to eat your slippers doesn’t change behavior.

You can’t easily dismantle an industry, but you can offer better training, solutions or techniques to make a difference.

Either way, there’s a lot of tail wagging and crap to clean up.

Lots to consider in this little analog.  For everyone.

/Hoff

P.S. @bmkatz points us all to this amazing resource you may find useful.


Are Flat Networkers Like Flat Earthers Of Yore?

December 4th, 2012

Lori MacVittie is at the Gartner DC conference today and tweeted something extraordinary from one of the sessions focused on SDN (actually there were numerous juicy tidbits, but this one caught my attention):

Amazing, innit?

To which my response was:

Regardless of how one might “feel” about SDN, the notion of agility in service delivery wherein the network can be exposed and consumed as a service versus a trunk port and some VLANs is…the right thing.  Just because the network is “flat” doesn’t mean its services are, or that the delivery of said services is any less complex.  I just wrote about this here: The Tyranny Of Taming (Network) Traffic: Steering, Service Insertion and Chaining…

“Flat networks” end up being carved right back up into VLANs and thus L3 routing domains to provide for isolation and security boundaries…and then we get new protocols to deal with VLAN exhaustion, mobility and L2 stretch and…
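To put numbers on the “VLAN exhaustion” part: an 802.1Q VLAN tag carries a 12-bit ID (4094 usable segments), while overlay encapsulations like VXLAN (still an IETF draft as I write this) carry a 24-bit VNI.  A quick sketch, mine and nobody’s product:

```python
# Compare the 802.1Q VLAN ID space with the VXLAN VNI space, and build
# the 8-byte VXLAN header described in the draft spec.
import struct

def vxlan_header(vni: int) -> bytes:
    """Flags byte with the I-bit set (0x08), 3 reserved bytes, a 24-bit
    VNI, and 1 trailing reserved byte: 8 bytes total."""
    assert 0 <= vni < 2**24
    return struct.pack("!B3xI", 0x08, vni << 8)

print(2**12 - 2)                # 4094 usable VLAN IDs
print(2**24)                    # 16777216 possible VXLAN VNIs
print(len(vxlan_header(5000)))  # 8
```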

It seems like some of the people at the Gartner DC show (from this and other tweets, as I am not there) are abjectly allergic to abstraction beyond that over which they can physically exercise dominion.

Where have I seen this story before?

/Beaker

CloudPassage: Security & The Cloud 2012…

November 29th, 2012

Like cowbell, I’m a sucker for MOAR INFOGRAPHICS!

CloudPassage has created a cool one based upon respondent data from a survey about security and the Cloud with some interesting data points.

I will ask for the raw demographics/statistics data that generated it.

The Tyranny Of Taming (Network) Traffic: Steering, Service Insertion and Chaining…

November 29th, 2012

You know what’s hard?

Describing to anyone who doesn’t work inside of an actual “networking” company why the notions of traffic steering, service insertion and chaining across multiple physical boxes and/or combinations of physical and virtual service instantiations are so freaking difficult.

12/3/12 [Ed: I realized I didn’t actually define these terms.  Added below.]

What do I mean by these terms?  Simplified definitions here:

  • Traffic Steering: directing and delivering traffic (flows/packets, tagged or otherwise) from one processing point to another
  • Service Insertion: addition of some form of processing (terminated or mirrored,) delivered as a service, that is interposed dynamically between processing points
  • Service Chaining: chaining (serialized or parallelized) and insertion of services with other services.
I didn’t get into the nuances of these capabilities with things like state, flow to service mapping tables, replication across flow/state tables in “clustered” processing points, etc., but I spoke to some of them in the “Four Horsemen of the Virtualization Security Apocalypse” presentation. See Pwnie #1 – War | Episode 7: Revenge Of the UTM Clones.
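To make those simplified definitions a touch more concrete, here’s a toy sketch of a flow-to-service mapping table of the kind alluded to above.  Every name and structure is hypothetical and mine alone, and it hand-waves away the state replication and failover headaches:

```python
# Toy flow -> service-chain mapping. Hypothetical names; real processing
# points also replicate this state across cluster members.
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Service:
    name: str
    mode: str  # "terminated" (inline, may modify/drop) or "mirrored" (copy only)

@dataclass
class ServiceChain:
    services: List[Service]  # serialized: traffic traverses these in order

FlowKey = Tuple[str, str, int, str]  # (src_ip, dst_ip, dst_port, proto)

flow_table: Dict[FlowKey, ServiceChain] = {}

def steer(flow: FlowKey) -> ServiceChain:
    """Traffic steering: look up (or lazily assign) the chain for a flow."""
    if flow not in flow_table:
        # Service insertion: interpose a firewall, an IDS tap and a load
        # balancer between the processing points for this flow.
        flow_table[flow] = ServiceChain([
            Service("firewall", "terminated"),
            Service("ids", "mirrored"),
            Service("load-balancer", "terminated"),
        ])
    return flow_table[flow]

chain = steer(("10.0.0.5", "10.0.1.9", 443, "tcp"))
print(" -> ".join(s.name for s in chain.services))  # firewall -> ids -> load-balancer
```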

Now, with that out of the way and these terms simply defined, I suppose the “networking is simple” people are right.

I mean, all you have to do is agree on a common set of protocols, a consistent tagging format, flow and/or packet metadata, disposition mechanisms, flow redirection mechanisms beyond next hop unicast, tunneling, support for protocols other than unicast, state machine handling across disparate service chains, performance/availability/QoS telemetry across network domains and diameters, disparate control and data planes, session termination versus pass-through deltas, and then incidental stuff like MAC and routing table updates with convergence latencies across distributed entities, etc.

…and support for legacy while we’re at it.

It ain’t nuthin’ but a peanut, right?

Oh, this just must be an issue with underlay (physical) networks, right?

Overlays have this handled, right?

All these new APIs and control planes are secure by default, too, right?

Uh-huh.

Glad we’ve got this covered, apparently:

Things need to change dramatically at networking companies

This is true, by the way.

However, allow me to suggest that networking companies have experience, footprint, capabilities and relationships and are quite motivated to add value, increase feature velocity, reduce complexity in deployment and operation, and add more efficiency to their solutions.

Change is good.

See 18:45 if you want the juicy bits.

/Hoff

 

Why Amazon Web Services (AWS) Is the Best Thing To Happen To Security & Why I Desperately Want It To Succeed

November 29th, 2012

Many people who may only casually read my blog or peer at the timeline of my tweets may come away with the opinion that I suffer from confirmation bias when I speak about security and Cloud.

That is, many conclude that I am pro Private Cloud and against Public Cloud.

I find this deliciously ironic and wildly inaccurate. However, I must also take responsibility for this, as anytime one threads the needle and attempts to present a view from both sides with regard to incendiary topics without planting a polarizing stake in the ground, it gets confusing.

Let me clear some things up.

Digging deeper into what I believe, one would actually find that my blog, tweets, presentations, talks and keynotes highlight deficiencies in current security practices and solutions on the part of providers, practitioners and users in both Public AND Private Cloud, and in my own estimation, deliver an operationally-centric perspective that is reasonably critical and yet sensitive to emergent paths as well as the well-trodden path behind us.

I’m not a developer.  I dabble in little bits of code (interpreted and compiled) for humor and to try and remain relevant.  Nor am I an application security expert, for the same reason.  However, I spend a lot of time around developers of all sorts, including those who write code for machines whose end goal isn’t to deliver applications directly, but rather to help deliver them securely.  Which may seem odd as you read on…

The name of this blog, Rational Survivability, highlights my belief that the last two decades of security architecture and practices — while useful in foundation — require a rather aggressive tune-up of priorities.

Our trust models, architecture, and operational silos have not kept pace with the velocity of the environments they were initially designed to support and unfortunately as defenders, we’ve been outpaced by both developers and attackers.

Since we’ve come to the conclusion that there’s no such thing as perfect security, “survivability” is a better goal.  Survivability leverages “security” and is ultimately a subset of resilience but is defined as the “…capability of a system to fulfill its mission, in a timely manner, in the presence of attacks, failures, or accidents.”  You might be interested in this little ditty from back in 2007 on the topic.

Sharp readers will immediately recognize the parallels between this definition of “survivability,” how security applies within context, and how phrases like “design for failure” align.  In fact, this is one of the calling cards of a company that has become synonymous with (IaaS) Public Cloud: Amazon Web Services (AWS.)  I’ll use them as an example going forward.

So here’s a line in the sand that I think will be polarizing enough:

I really hope that AWS continues to gain traction with the Enterprise.  I hope that AWS continues to disrupt the network and security ecosystem.  I hope that AWS continues to pressure the status quo and I hope that they do it quickly.

Why?

Almost a decade ago, the  Open Group’s Jericho Forum published their Commandments.  Designed to promote a change in thinking and operational constructs with respect to security, what they presciently released upon the world describes a point at which one might imagine taking one’s most important assets and connecting them directly to the Internet and the shifts required to understand what that would mean to “security”:

  1. The scope and level of protection should be specific and appropriate to the asset at risk.
  2. Security mechanisms must be pervasive, simple, scalable, and easy to manage.
  3. Assume context at your peril.
  4. Devices and applications must communicate using open, secure protocols.
  5. All devices must be capable of maintaining their security policy on an un-trusted network.
  6. All people, processes, and technology must have declared and transparent levels of trust for any transaction to take place.
  7. Mutual trust assurance levels must be determinable.
  8. Authentication, authorization, and accountability must interoperate/exchange outside of your locus/area of control.
  9. Access to data should be controlled by security attributes of the data itself.
  10. Data privacy (and security of any asset of sufficiently high value) requires a segregation of duties/privileges.
  11. By default, data must be appropriately secured when stored, in transit, and in use.

These seem harmless enough today, but were quite unsettling when paired with the notion of “de-perimeterization,” which was often misconstrued to mean the immediate disposal of firewalls.  Many security professionals appreciated the commandments for what they expressed, but the design patterns, availability of solutions and belief systems of traditionalists constrained traction.
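To make commandment #9 concrete (access controlled by security attributes of the data itself, not by where on the network the request originates), here’s a toy sketch; the labels and policy are entirely my own invention:

```python
# Toy sketch of Jericho commandment #9: access to data is governed by
# security attributes carried with the data itself, not by network
# location. Labels, ranks and policy are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class DataObject:
    payload: bytes
    classification: str   # e.g. "public", "internal", "restricted"
    owner_group: str

@dataclass(frozen=True)
class Principal:
    clearance: str
    groups: frozenset

RANK = {"public": 0, "internal": 1, "restricted": 2}

def may_read(who: Principal, what: DataObject) -> bool:
    """The decision uses only attributes of the data and the principal;
    nothing here asks *where* on the network the request came from."""
    return (RANK[who.clearance] >= RANK[what.classification]
            and what.owner_group in who.groups)

doc = DataObject(b"...", classification="restricted", owner_group="finance")
alice = Principal(clearance="restricted", groups=frozenset({"finance"}))
print(may_read(alice, doc))  # True
```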

Interestingly enough, now that the technology, platforms, and utility services have evolved to enable these sorts of capabilities, and in fact have stressed our approaches to date, these exact tenets are what Public Cloud forces us to come to terms with.

If one were to look at what public cloud services like AWS mean when aligned to traditional “enterprise” security architecture, operations and solutions, and map that against the Jericho Forum’s Commandments, it enables a perfect rethink.

Instead of being focused on implementing “security” to protect applications and information based at the network layer — which is more often than not blind to both, contextually and semantically — public cloud computing forces us to shift our security models back to protecting the things that matter most: the information and the conduits that traffic in it (applications.)

As networks become more abstracted, existing security models must become so as well.  This means that we must think about security programmatically and embed it as a functional delivery requirement of the application.
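By “programmatically,” think of the application’s security policy being expressed, versioned and deployed as code alongside the application itself.  A hedged sketch using AWS’s boto3 SDK (which, for the record, postdates this post; all IDs are made up), scoping ingress to the application tier rather than a network zone:

```python
# Hedged sketch: application-scoped ingress policy expressed as code
# and shipped with the app. VPC and security-group IDs are made up.
import boto3

ec2 = boto3.client("ec2")

# A security group that belongs to the application, not to a "zone."
sg = ec2.create_security_group(
    GroupName="billing-api",
    Description="Ingress policy for the billing API, managed as code",
    VpcId="vpc-0abc1234",  # hypothetical
)

# Only the load-balancer tier may reach the app tier, and only on 443.
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "UserIdGroupPairs": [{"GroupId": "sg-0def5678"}],  # hypothetical LB SG
    }],
)
```

The specific API matters far less than the fact that the policy ships with the application’s deployment artifacts instead of living in a box at the perimeter.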

“Security” in complex, distributed and networked systems is NOT a tidy simple atomic service.  It is, unfortunately, represented as such because we choose to use a single noun to represent an aggregate of many sub-services, shotgunned across many layers, each with its own context, metadata, protocols and consumption models.

As the use cases for public cloud obscure and abstract these layers — flatten them — we’re left with the core of what we should focus on:

Build secure, reliable, resilient, and survivable systems of applications, comprised of secure services, atop platforms that are themselves engineered to do the same, in such a way that the information which transits them inherits these qualities.

So if Public Cloud forces one to think this way, how does one relate this to practices of today?

Frankly, enterprise (network) security design patterns are a crutch.  The screened-subnet DMZ pattern with perimeters is outmoded. As Gunnar Peterson eloquently described, our best attempts at “security” over time are always some variation of firewalls and SSL.  This is the sux0r.  Importantly, this is not stated to blame anyone or suggest that a bad job is being done, but rather that a better one can be.

It’s not like we don’t know *what* the problems are; we just don’t invest in solving them as long-term projects.  Instead, we deploy compensating controls that defer what is now becoming inevitable: the compromise of applications that are poorly engineered and defended by systems that have no knowledge or context of the things they are defending.

We all know this, and yet looking at most private cloud platforms and implementations, we gravitate toward replicating these traditional design patterns logically after we’ve gone to so much trouble to articulate our way around them.  Public clouds make us approach what, where and how we apply “security” differently because we don’t have these crutches.

Either we learn to walk without them or we simply won’t move forward.

Now, let me be clear.  I’m not suggesting that we don’t need security controls, but I do mean that we need a different and better application of them at a different level, protecting things that aren’t tied to physical topology or addressing schemes…or operating systems (inclusive of things like hypervisors, also.)

I think we’re getting closer.  Beyond infrastructure as a service, platform as a service gets us even closer.

Interestingly, at the same time we see the evolution of computing with Public Cloud, networking is also undergoing a renaissance, and as this occurs, security is coming along for the ride.  Because it has to.

As I was writing this blog (ironically in the parking lot of VMware awaiting the start of a meeting to discuss abstraction, networking and security,) James Staten (Forrester) tweeted something from Werner Vogels’ (@Werner) keynote at AWS re:Invent:

I couldn’t have said it better myself :)

So while I may have been, and will continue to be, a thorn in the side of platform providers to improve the “survivability” capabilities to help us get from here to there, I reiterate the title of this scribbling: Amazon Web Services (AWS) Is the Best Thing To Happen To Security & I Desperately Want It To Succeed.

I trust that’s clear?

/Hoff

P.S. There’s so much more I could/should write, but I’m late for the meeting :)


Quick Quip: Capability, Reliability and Liability…Security Licensing

November 28th, 2012

Earlier today I tweeted the following, which drew a comment from Dan Kaminsky (@dakami):

…which I explained with:

This led to a very interesting comment by Preston Wood, who suggested something from the perspective of both leadership and accountability:

…and that brought forward another insightful comment:

Pretty interesting, right? Engineers, architects, medical practitioners, etc. all have degrees/licenses and absorb liability upon failure. What about security?

What do you think about this concept?

/Hoff

 

Should/Can/Will Virtual Firewalls Replace Physical Firewalls?

October 15th, 2012
[Image: A firewall between a LAN and a WAN (Photo credit: Wikipedia)]

“Should/Can/Will Virtual Firewalls Replace Physical Firewalls?”

The answer is, as always, “Of course, but not really, unless maybe, you need them to…” :)

This discussion crops up from time to time, usually fueled by a series of factors, and often without the context needed to address it appropriately.

The reality is there exists the ever-useful answer of “it depends,” and frankly it’s a reasonable answer.

Back in 2008 when I created “The Four Horsemen of the Virtualization Security Apocalypse” presentation, I highlighted the very real things we needed to be aware of as we saw the rapid adoption of server virtualization…and the recommendations from virtualization providers as to the approach we should take in terms of securing the platforms and workloads atop them.  Not much has changed in almost five years.

However, each time I’m asked this question, I inevitably sound evasive as I probe for more detail on what the person asking actually means by “physical” firewalls.  Normally the words “air-gap” are added to the mix.

The very interesting thing about how people answer this question is that, in reality, the great majority of firewalls deployed today have the following features in common:

  1. Heavy use of network “LAG” (link aggregation group) interface bundling/VLAN trunking and tagging
  2. Heavy network virtualization used, leveraging VLANs as security boundaries, trunked across said interfaces
  3. Increased use of virtualized contexts and isolated resource “virtual systems” and separate policies
  4. Heavy use of ASIC/FPGA and x86 architectures which make use of shared state tables, memory and physical hardware synced across fabrics and cluster members
  5. Predominant use of “stateful inspection” at layers 2-4 with the addition of protocol decoders at L5-7 for more “application-centric” enforcement (a toy sketch of this state handling follows this list)
  6. Increasing use of “transparent proxies” at L2 but less (if any) full circuit or application proxies in the classic sense
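
Since “stateful inspection” in item 5 does so much of the work in that list, here’s a toy sketch of the shared state table at its core.  It is grossly simplified and the names are mine; real gear syncs this table across cluster members and fabrics:

```python
# Toy state table behind "stateful inspection." Real firewalls replicate
# this across cluster members/fabrics; everything here is simplified.
import time
from typing import Dict, Tuple

ConnKey = Tuple[str, int, str, int, str]  # (src_ip, src_port, dst_ip, dst_port, proto)

IDLE_TIMEOUT = 300.0                     # seconds before an idle entry is evicted
state_table: Dict[ConnKey, float] = {}   # connection -> last-seen timestamp

def policy_allows(pkt: ConnKey) -> bool:
    """Toy rule base, consulted only for the first packet of a flow."""
    return pkt[3] == 443  # only inbound HTTPS

def permit(pkt: ConnKey, syn: bool) -> bool:
    """Packets of established flows (either direction) pass on a state-table
    hit; new flows hit the rule base once, then create state."""
    now = time.time()
    reply = (pkt[2], pkt[3], pkt[0], pkt[1], pkt[4])
    key = pkt if pkt in state_table else (reply if reply in state_table else None)
    if key is not None:
        if now - state_table[key] > IDLE_TIMEOUT:
            del state_table[key]    # stale entry: evict, fall through to policy
        else:
            state_table[key] = now  # refresh and permit
            return True
    if syn and policy_allows(pkt):
        state_table[pkt] = now      # new flow admitted by policy: create state
        return True
    return False

print(permit(("1.2.3.4", 51515, "10.0.0.1", 443, "tcp"), syn=True))   # True
print(permit(("10.0.0.1", 443, "1.2.3.4", 51515, "tcp"), syn=False))  # True (reply)
```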

So before I even START to address the use cases of the “virtual firewalls” offered as the comparison, nine times out of ten that supposed “air gap” afforded by dedicated physical firewalls doesn’t compute.

Most of the firewall implementations that people have meet most of the criteria mentioned in items 1-6 above.

Further, most firewall architectures today aren’t running full L7 proxies across dedicated physical interfaces like in the good old days (Raptor, etc.) for some valid reasons…(read the postscript for an interesting prediction.)

Failure domains, and the threat modeling that illustrates cascading impact due to complexity, outright failure or compromised controls, are usually what people are interested in when asking this question, but this gets almost completely obscured by the “physical vs. virtual” concern and we often never dig deeper.

There are some amazing things that can be done in virtual constructs that we can’t do in the physical and there are some pretty important things that physical firewalls can provide that virtual versions have trouble with.  It’s all a matter of balance, perspective, need, risk and reward…oh, and operational simplicity.

I think it’s important to understand what we’re comparing when asking that question before we conflate use cases, compare and mismatch expectations, and make summary generalizations (like I just did :) about that which we are contrasting.

I’ll actually paint these use cases in a follow-on post shortly.

/Hoff

POSTSCRIPT:

I foresee a return of the TRUE application-level proxy firewall — especially with application identification, cheap hardware, and more security and networking virtualized in hardware.  I see this being deployed both on-premise and as part of a security-as-a-service offering (they are already, today — see CloudFlare, for example.)

If you look at the need to terminate SSL/TLS and provide for not only L4-L7 sanity, protect applications/sessions at L5-7 (web and otherwise) AND the renewed dependence upon XML, SOAP, REST, JSON, etc., it will drive even more interesting discussions in this space.  Watch as the hybrid merge of the WAF+XML security services gateway returns to vogue… (see also Cisco EOLing ACE while I simultaneously receive an email from Intel informing me I can upgrade to their Intel Expressway Service Gateway…which I believe (?) came from the Sarvega acquisition?)


Cloud Service Providers and the Dual Stack Dilemma

September 20th, 2012

I wrote this blog and then jumped on Twitter to summarize/crystallize what I thought were the most important bits:

…and thus realized I didn’t really need to finish drafting the blog since I’d managed to say it in three tweets.

Twitter has indeed killed the WordPress star…

More detailed version below.  Not finished.  TL;DR

/Hoff

—– (below unedited for tense, grammar, logical thought or completeness…) —–

Read more…


The Cuban Cloud Missile Crisis…Weapons Of Mass Abstraction.

September 7th, 2012
[Image: Coat of arms of Cuba (Photo credit: Wikipedia)]

In the midst of the Cold War in October of 1962, the United States and the Soviet Union stood perilously on the brink of nuclear war as a small island some 90 miles off the coast of Florida became the focal point of intense foreign policy scrutiny, challenges to sovereignty and political arm wrestling the likes of which had never been seen before.

Photographic evidence provided by a high-altitude U.S. spy plane exposed the until-then secret construction of medium- and intermediate-range ballistic nuclear missile sites, constructed by the Soviet Union and deliberately placed so as to be close enough to reach the continental United States.

The United States was alarmed by this unprecedented move by the Soviets, and relations with communist Cuba were already uneasy following the unsuccessful CIA-led invasion and attempted overthrow of the Cuban regime at the Bay of Pigs the year before.

That invasion did not sit well with either the Cubans or the Soviets.  A nightmare scenario ensued as the Soviets responded with threats of their own to defend their ally (and strategic missile sites) at any cost, declaring the Americans’ actions unprovoked and unacceptable.

During an incredibly tense standoff, the U.S. mulled over plans to again attack Cuba both by air and sea to ensure the disarmament of the weapons that posed a dire threat to the country.

As Soviet posturing and threats continued to escalate, President Kennedy elected to pursue a less direct military action: a naval blockade designed to prevent the shipment of supplies necessary for the completion and activation of launchable missiles.  Using this as a lever, the U.S. continued to demand that Russia dismantle and remove all nuclear weapons as it prevented any and all naval traffic to and from Cuba.

Soviet premier Khrushchev protested such acts of “direct aggression” and communicated to President Kennedy that his tactics were plunging the world into the depths of potential nuclear war.

While both countries publicly traded threats of war, the bravado, posturing and defiance were actually a cover for secret backchannel negotiations involving the United Nations. The Soviets promised they would dismantle and remove nuclear weapons, support infrastructure and transports from Cuba, and the United States promised not to invade Cuba while also removing nuclear weapons from Turkey and Italy.

The Soviets made good on their commitment two weeks later.  Eleven months after the agreement, the United States complied and removed from service the weapons abroad.

The Cold War ultimately ended and the Soviet Union fell, but the political, economic and social impact remains even today — 50 years later we have uneasy relations with (now) Russia and the United States still enforces ridiculous economic and social embargoes on Cuba.

What does this have to do with Cloud?

Well, it’s a cute “movie of the week” analog desperately in need of a casting call for Nikita Khrushchev and JFK.  I hear Gary Busey and Ashton Kutcher are free…

As John Furrier, Dave Vellante and I were discussing on theCUBE recently at VMworld 2012, there exists an uneasy standoff — a cold war — between the so-called “super powers” staking a claim in Cloud.  The posturing and threats currently in process don’t quite have the world-ending outcomes that nuclear war would bring, but it could have devastating technology outcomes nonetheless.

In this case, the characters of the Americans, Soviets, Cubans and the United Nations are played by networking vendors, SDN vendors, virtualization/abstraction vendors, cloud “stack” projects/efforts/products and underlying CPU/chipset vendors (not necessarily in that order…)  The rest of the world stands by as their fate is determined on the world’s stage.

If we squint hard enough at Cloud, we might find our very own version of the “Bay of Pigs,” with what’s going on with OpenStack.

The “community” effort behind OpenStack is one largely based on “industry,” and if we think of OpenStack as Cuba, it’s being played as a pawn in the much larger battle for global domination.  The munitions being stocked in this tiny little enclave threaten to disrupt relations of epic proportions.  That’s why we now see so much strategic movement around an initiative and technology that many outside of the navel gazers haven’t really paid much attention to.

Then there are players like Amazon Web Services who, like the China of today, quietly amass their weapons of mass abstraction as the industry-jockeying and distractions play on (but that’s a topic for another post).

Cutting to the chase…if we step back for a minute:

Intel is natively bundling more and more networking and virtualization capabilities into their CPUs/chipsets, and a $7B investment in security company McAfee makes them a serious player there.  VMware is de-emphasizing the “hypervisor” and is instead positioning itself as focused on end-to-end solutions which include everything from secure mobility, orchestration/provisioning and now, with Nicira, networking.  Networking companies like Cisco and Juniper continue to move up the stack to more deeply integrate networking and security, along with service overlays, in order to remain relevant in light of virtualization and SDN.

…and OpenStack’s threat of disrupting all of those plays makes it important enough to pay attention to.  It’s a little island of technology that is causing huge behemoths to collide.  A molehill that has become a mountain.

If today’s announcements of VMware and Intel joining OpenStack as Gold Members, along with the existing membership of other “super powers,” don’t make it clear that we’re in the middle of an enormous power struggle, I’ve got a small island to sell you 😉

Me?  I’m going to make some Lechon Asado, enjoy a mojito and a La Gloria Cubana.
