Wanna Be A Security Player? Deliver It In Software As A Service Layer…

January 9th, 2013 1 comment

As I continue to think about the opportunities that Software Defined Networking (SDN) and Network Function Virtualization (NFV) bring into focus, the capability to deliver security as a service layer is indeed exciting.

I wrote about how SDN and OpenFlow (as a functional example) and the security use cases provided by each would be a differentiating capability back in 2011: The Killer App For OpenFlow and SDN? Security, OpenFlow & SDN – Looking forward to SDNS: Software Defined Network Security, and Back To The Future: Network Segmentation & More Moaning About Zoning.

Recent activity in the space has done nothing but reinforce this opinion.  My day job isn’t exactly lacking in excitement, either 🙂

As many networking vendors begin to bring their SDN solutions to market — whether in the form of networking equipment or controllers designed to interact with them — one of the missing strategic components is security.  This isn’t a new phenomenon, unfortunately, and as such, predictably there are also now startups entering this space and/or retooling from the virtualization space and stealthily advertising themselves as “SDN Security” companies 🙂

Like we’ve seen many times before, security is often described (confused?) as a “simple” or “atomic” service and so SDN networking solutions are designed with the thought that security will simply be “bolted on” after the fact and deployed not unlike a network service such as “load balancing.”  The old “we’ll just fire up some VMs and TAMO (Then a Miracle Occurs) we’ve got security!” scenario.  Or worse yet, we’ll develop some proprietary protocol or insertion architecture that will magically get traffic to and from physical security controls (witness the “U-TURN” or “horseshoe” L2/L3 solutions of yesteryear.)

The challenge is that much of Security today is still very topologically sensitive and depends upon classical networking constructs to be either physically or logically plumbed between the “outside” and the asset under protection, or it’s very platform dependent and lacks the ability to truly define a policy that travels with the workload regardless of the virtualization, underlay OR overlay solutions.

Depending upon the type of control and its desired effect, security is often operationalized across multiple layers using wildly different constructs, APIs and contexts in terms of policy and disposition.

Virtualization has certainly forced us to think differently about security, mostly due to the dynamism and mobility it has introduced, but the platforms themselves are still incredibly nascent in terms of exposed security capabilities.  It’s been almost 5 years since I started raging about how we need(ed) platform providers to give us capabilities that function across stacks so we’d have a fighting chance.  To date, not only do we have perhaps ONE vendor doing some of this, but we’ve seen the emergence of others who are maniacally focused on providing as little of it as possible.

If you think about what virtualization offers us today from a security perspective, we have the following general solution options:

  1. Hypervisor-based security solutions which may apply policy as a function of the virtual NIC (vNIC) of the workloads they protect.
  2. Extensions of virtual-networking (i.e. switching) solutions that enable traffic steering and some policy enforcement that often depend upon…
  3. Virtual Appliance-based security solutions that require manual or automated provisioning, orchestration and policy application in user space that may or may not utilize APIs exposed by the virtual networking layer or hypervisor

There are tradeoffs across each of these solutions: scale, performance, manageability, statefulness, platform dependencies, etc.  There simply aren’t many platforms that natively offer security capabilities as a function of service delivery, allowing arbitrary service definition with consistent and uniform ways of describing the outcome of the policies at these various layers.  I covered this back in 2008 (it’s a shame nothing has really changed) in my Four Horsemen Of the Virtualization Security Apocalypse presentation.

As I’ve complained for years, we still have 20 different ways of defining how to instantiate a five-tuple ACL as a basic firewall function.
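To make that concrete, here’s a trivial sketch (in Python, with invented addresses and ports) of one and the same five-tuple “permit” rendered into just two of those many dialects. The iptables and Open vSwitch (ovs-ofctl) command grammars are real; everything else is illustrative:

```python
# One five-tuple "permit" policy, rendered into two of the many syntaxes
# in the wild. Addresses and ports are invented; the iptables and
# ovs-ofctl command grammars themselves are real.

policy = {
    "proto": "tcp",
    "src": "10.0.0.0/24",   # source network
    "src_port": "any",      # unconstrained in this example
    "dst": "192.168.1.10",  # the protected asset
    "dst_port": 443,
}

# Linux netfilter flavor
iptables_rule = (
    f"iptables -A FORWARD -p {policy['proto']} "
    f"-s {policy['src']} -d {policy['dst']} "
    f"--dport {policy['dst_port']} -j ACCEPT"
)

# OpenFlow flavor, via Open vSwitch
ovs_rule = (
    f"ovs-ofctl add-flow br0 "
    f"'{policy['proto']},nw_src={policy['src']},nw_dst={policy['dst']},"
    f"tp_dst={policy['dst_port']},actions=normal'"
)

print(iptables_rule)
print(ovs_rule)
```

Same intent, two completely different grammars. Now multiply that by every firewall, hypervisor and controller vendor out there.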

Out of the Darkness…

The promise of SDN truly realized — the ability to separate the control, forwarding, management and services planes, and to deploy security as a function of available service components across overlays and underlays — means we will be able to take advantage of any of these models, so long as we have a way to programmatically interface with the various strata regardless of whether we provision at the physical, virtual or overlay virtual layer.

It’s truly exciting.  We’re seeing some real effort to enable true security service delivery.

When I think about how to categorize the intersection of “SDN” and “Security,” I think about it the same way I have with virtualization and Cloud:

  • Securing SDN (Securing the SDN components)
  • SDN Security Services (How do I use SDN to deliver security as a service)
  • Security via SDN (What NEW security capabilities can be derived from SDN)

There are numerous opportunities with each of these categories to really make a difference to security in the coming years.

The notion that many of our network and security capabilities are becoming programmatic means we *really* need to focus on securing SDN solutions themselves, especially given the potential for abuse introduced by the separation of the various channels. (See: Software Defined Networking (In)Security: All Your Control Plane Are Belong To Us…)

Delivering security as a service via SDN holds enormous promise for reasons I’ve already articulated and gives us an amazing foundation upon which to start building solutions we can’t imagine today given the lack of dynamism in our security architecture and design patterns.

Finally, the first two elements give rise to the third, allowing us to do things we can’t even imagine with today’s traditional physical and even virtual solutions.

I’ll be starting to highlight really interesting solutions I find (and am able to talk about) over the next few months.

Security enabled by SDN is going to be huge.

More soon.

/Hoff


NIST’s Trusted Geolocation in the Cloud: PoC Implementation

December 22nd, 2012 3 comments

I was very interested and excited to learn what NIST researchers and staff had come up with when I saw the notification of the “Draft Interagency Report 7904, Trusted Geolocation in the Cloud: Proof of Concept Implementation.”

It turns out that this report is an iteration on the PoC previously created by VMware, Intel and RSA back in 2010, which utilized Intel’s TXT, VMware’s virtualization platform and the RSA/Archer GRC platform, as this one does.

I haven’t spent much time looking at the differences, but I’m hoping as I read through it that we’ve made progress…

You can read about the original PoC here, and watch a video from 2010 about it here.  Then you can read about it again in its current iteration, here (PDF.)

I wrote about this topic back in 2009 and still don’t have a good firm answer to the question I asked in a blog titled “Quick Question: Any Public Cloud Providers Using Intel TXT?” and the follow-on “More On High Assurance (via TPM) Cloud Environments.”

At CloudConnect 2011 I also filmed a session with the Intel/RSA/VMware folks titled “More On Cloud and Hardware Root Of Trust: Trusting Cloud Services with Intel® TXT.”

I think this is really interesting stuff and a valuable security and compliance capability, but it is apparently still hampered by practical deployment challenges.

I’m also confused as to why the RSA employees were not appropriately attributed under the NIST banner, and by the fact that this is very much a product-specific/vendor-specific set of solutions…I’m not sure I’ve ever seen a NIST-branded report like this.

At any rate, I am interested to see if we will get to the point where these solutions will have more heterogeneous uptake across platforms.

/Hoff


On Puppy Farm Vendors, Petco and The Remarkable Analog To Security Consultancies/Integrators…

December 5th, 2012 No comments

Imagine you are part of a company in the “Pet Industry.”  Let’s say dogs, specifically.

Imagine further that regardless of whether you work on the end that feeds the dog, provides services focused on grooming the dog, sells accessories for the dog, actually breeds and raises the dog or deals with cleaning up what comes out the other end of the dog, that you also simultaneously spend your time offering your opinions on how much you despise the dog industry.

Hmmmm.

Now, either you’re being refreshingly honest, or you’re simply being shrewd about which end of the mutt you’re targeting your services toward — and sometimes it’s both ends and the middle — but you’re still a part of the dog industry.

And we all know it’s a dog-eat-dog world…in the Pet business as it is in the Security business.  Which ironically illustrates the cannibalistic nature of being in the security industry whilst trying to distance oneself from it by juxtaposing one’s position against that of the security community.

Claiming to be a Dog Whisperer in an industry of other aimless people shouting and clapping loudly whilst looking to perpetuate bad dog-breeding practices so they can sell across the supply chain is an interesting tactic.  However, yelling “BAD DOG!” and wondering why it continues to eat your slippers doesn’t change behavior.

You can’t easily dismantle an industry, but you can offer better training, solutions or techniques to make a difference.

Either way, there’s a lot of tail wagging and crap to clean up.

Lots to consider in this little analog.  For everyone.

/Hoff

P.S. @bmkatz points us all to this amazing resource you may find useful.


Are Flat Networkers Like Flat Earthers Of Yore?

December 4th, 2012 11 comments

Lori MacVittie is at the Gartner DC conference today and tweeted something extraordinary from one of the sessions focused on SDN (actually there were numerous juicy tidbits, but this one caught my attention):

Amazing, innit?

To which my response was:

Regardless of how one might “feel” about SDN, the notion of agility in service delivery wherein the network can be exposed and consumed as a service versus a trunk port and some VLANs is…the right thing.  Just because the network is “flat” doesn’t mean its services are, or that the delivery of said services is any less complex.  I just wrote about this here: The Tyranny Of Taming (Network) Traffic: Steering, Service Insertion and Chaining…

“Flat networks” end up being carved right back up into VLANs, and thus L3 routing domains, to provide for isolation and security boundaries…and then, to deal with that, we get new protocols to handle VLAN exhaustion, mobility and L2 stretch and…

It seems like some of the people at the Gartner DC show (judging from this and other tweets, as I am not there) are abjectly allergic to abstraction beyond that over which they can physically exercise dominion.

Where have I seen this story before?

/Beaker

CloudPassage: Security & The Cloud 2012…

November 29th, 2012 No comments

Like cowbell, I’m a sucker for MOAR INFOGRAPHICS!

CloudPassage has created a cool one based upon respondent data from a survey about security and the Cloud with some interesting data points.

I will ask for the raw demographics/statistics data that generated it:

The Tyranny Of Taming (Network) Traffic: Steering, Service Insertion and Chaining…

November 29th, 2012 No comments

You know what’s hard?

Describing to anyone who doesn’t work inside of an actual “networking” company why the notions of traffic steering, service insertion and chaining across multiple physical boxes and/or combinations of physical and virtual service instantiations are freaking difficult.

12/3/12 [Ed: I realized I didn’t actually define these terms.  Added below.]

What do I mean by these terms?  Simplified definitions here:

  • Traffic Steering: directing and delivering traffic (flows/packets, tagged or otherwise) from one processing point to another
  • Service Insertion: addition of some form of processing (terminated or mirrored,) delivered as a service, that is interposed dynamically between processing points
  • Service Chaining: chaining (serialized or parallelized) and insertion of services with other services.

I didn’t get into the nuances of these capabilities with things like state, flow-to-service mapping tables, replication across flow/state tables in “clustered” processing points, etc., but I spoke to some of them in the “Four Horsemen of the Virtualization Security Apocalypse” presentation. See Pwnie #1 – War | Episode 7: Revenge Of the UTM Clones.
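For illustration only, here’s a deliberately naive sketch of what “simple” service chaining looks like when expressed as OpenFlow rules on a single Open vSwitch bridge. The port numbers and topology are assumptions, the ovs-ofctl match/action syntax is real, and every one of the hard parts above (state, flow-to-service mapping, clustering) is conveniently ignored:

```python
# Deliberately naive service chaining on one Open vSwitch bridge: steer
# matching flows through service A, then service B, then resume normal
# forwarding. Port numbers are assumptions about the topology; the
# ovs-ofctl syntax is real. Return traffic, state and service failure
# are all ignored -- which is precisely the point being made above.

INGRESS_PORT = 1          # hypothetical port where client traffic arrives
SERVICE_PORTS = [10, 11]  # hypothetical ports patched to services A and B

def chain_flows(bridge="br0", match="tcp,tp_dst=80"):
    cmds, in_port = [], INGRESS_PORT
    for svc in SERVICE_PORTS:
        # Force traffic arriving from the previous hop into the next service
        cmds.append(f"ovs-ofctl add-flow {bridge} "
                    f"'in_port={in_port},{match},actions=output:{svc}'")
        in_port = svc  # the service hands traffic back on its own port
    # After the last service in the chain, forward normally
    cmds.append(f"ovs-ofctl add-flow {bridge} "
                f"'in_port={in_port},{match},actions=normal'")
    return cmds

for cmd in chain_flows():
    print(cmd)
```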

Now, with that out of the way and these terms simply defined, I suppose the “networking is simple” people are right.

I mean, all you have to do is agree on a common set of protocols, a consistent tagging format, flow and/or packet metadata, disposition mechanisms, flow redirection mechanisms beyond next hop unicast, tunneling, support for protocols other than unicast, state machine handling across disparate service chains, performance/availability/QoS telemetry across network domains and diameters, disparate control and data planes, session termination versus pass-through deltas, and then incidental stuff like MAC and routing table updates with convergence latencies across distributed entities, etc.

…and support for legacy while we’re at it.

It ain’t nuthin’ but a peanut, right?

Oh, this just must be an issue with underlay (physical) networks, right?

Overlays have this handled, right?

All these new APIs and control planes are secure by default, too, right?

Uh-huh.

Glad we’ve got this covered, apparently:

Things need to change dramatically at networking companies

This is true, by the way.

However, allow me to suggest that networking companies have experience, footprint, capabilities and relationships and are quite motivated to add value, increase feature velocity, reduce complexity in deployment and operation, and add more efficiency to their solutions.

Change is good.

See 18:45 if you want the juicy bits.

/Hoff

 

Why Amazon Web Services (AWS) Is the Best Thing To Happen To Security & Why I Desperately Want It To Succeed

November 29th, 2012 15 comments

Many people who may only casually read my blog or peer at the timeline of my tweets may come away with the opinion that I suffer from confirmation bias when I speak about security and Cloud.

That is, many conclude that I am pro Private Cloud and against Public Cloud.

I find this deliciously ironic and wildly inaccurate. However, I must also take responsibility for this, as anytime one threads the needle and attempts to present a view from both sides with regard to incendiary topics without planting a polarizing stake in the ground, it gets confusing.

Let me clear some things up.

Digging deeper into what I believe, one would actually find that my blog, tweets, presentations, talks and keynotes highlight deficiencies in current security practices and solutions on the part of providers, practitioners and users in both Public AND Private Cloud, and in my own estimation, deliver an operationally-centric perspective that is reasonably critical and yet sensitive to emergent paths as well as the well-trodden path behind us.

I’m not a developer.  I dabble in little bits of code (interpreted and compiled) for humor and to try and remain relevant.  Nor am I an application security expert for the same reason.  However, I spend a lot of time around developers of all sorts, those that write code for machines whose end goal isn’t to deliver applications directly, but rather help deliver them securely.  Which may seem odd as you read on…

The name of this blog, Rational Survivability, highlights my belief that the last two decades of security architecture and practices — while useful in foundation — require a rather aggressive tune-up of priorities.

Our trust models, architecture, and operational silos have not kept pace with the velocity of the environments they were initially designed to support and unfortunately as defenders, we’ve been outpaced by both developers and attackers.

Since we’ve come to the conclusion that there’s no such thing as perfect security, “survivability” is a better goal.  Survivability leverages “security” and is ultimately a subset of resilience but is defined as the “…capability of a system to fulfill its mission, in a timely manner, in the presence of attacks, failures, or accidents.”  You might be interested in this little ditty from back in 2007 on the topic.

Sharp readers will immediately recognize the parallels between this definition of “survivability,” how security applies within context, and how phrases like “design for failure” align.  In fact, this is one of the calling cards of a company that has become synonymous with (IaaS) Public Cloud: Amazon Web Services (AWS.)  I’ll use them as an example going forward.

So here’s a line in the sand that I think will be polarizing enough:

I really hope that AWS continues to gain traction with the Enterprise.  I hope that AWS continues to disrupt the network and security ecosystem.  I hope that AWS continues to pressure the status quo and I hope that they do it quickly.

Why?

Almost a decade ago, the Open Group’s Jericho Forum published their Commandments.  Designed to promote a change in thinking and operational constructs with respect to security, what they presciently released upon the world describes a point at which one might imagine taking one’s most important assets and connecting them directly to the Internet, and the shifts required to understand what that would mean to “security”:

  1. The scope and level of protection should be specific and appropriate to the asset at risk.
  2. Security mechanisms must be pervasive, simple, scalable, and easy to manage.
  3. Assume context at your peril.
  4. Devices and applications must communicate using open, secure protocols.
  5. All devices must be capable of maintaining their security policy on an un-trusted network.
  6. All people, processes, and technology must have declared and transparent levels of trust for any transaction to take place.
  7. Mutual trust assurance levels must be determinable.
  8. Authentication, authorization, and accountability must interoperate/exchange outside of your locus/area of control
  9. Access to data should be controlled by security attributes of the data itself
  10. Data privacy (and security of any asset of sufficiently high value) requires a segregation of duties/privileges
  11. By default, data must be appropriately secured when stored, in transit, and in use.

These seem harmless enough today, but were quite unsettling when paired with the notion of “de-perimeterization,” which was often misconstrued to mean the immediate disposal of firewalls.  Many security professionals appreciated the commandments for what they expressed, but the design patterns, availability of solutions and belief systems of traditionalists constrained traction.

Interestingly enough, now that the technology, platforms, and utility services have evolved to enable these sorts of capabilities, and in fact have stressed our approaches to date, these exact tenets are what Public Cloud forces us to come to terms with.

If one were to look at what public cloud services like AWS mean when aligned to traditional “enterprise” security architecture, operations and solutions, and map that against the Jericho Forum’s Commandments, one would find that they enable a perfect rethink.

Instead of being focused on implementing “security” to protect applications and information based at the network layer — which is more often than not blind to both, contextually and semantically — public cloud computing forces us to shift our security models back to protecting the things that matter most: the information and the conduits that traffic in them (applications.)

As networks become more abstracted, so do our existing security models.  This means that we must think about security programmatically, embedded as a functional delivery requirement of the application.
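As a minimal, hedged illustration of what “security as a functional delivery requirement of the application” can look like, here’s policy declared alongside the workload itself using the boto EC2 library, rather than plumbed into a network path somewhere. The region, group name, ports and AMI ID are placeholders:

```python
# Security policy declared with (and deployed alongside) the workload via
# the boto EC2 API, rather than via a box bolted into the network path.
# The region, group name, ports and AMI ID below are placeholders.

import boto.ec2

conn = boto.ec2.connect_to_region("us-east-1")

# The application declares its own exposure...
web = conn.create_security_group("web-tier", "web tier of an example app")
web.authorize(ip_protocol="tcp", from_port=443, to_port=443,
              cidr_ip="0.0.0.0/0")

# ...and the policy travels with the workload wherever it is instantiated,
# independent of physical topology or addressing.
conn.run_instances("ami-00000000",  # placeholder AMI ID
                   instance_type="m1.small",
                   security_groups=["web-tier"])
```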

“Security” in complex, distributed and networked systems is NOT a tidy simple atomic service.  It is, unfortunately, represented as such because we choose to use a single noun to represent an aggregate of many sub-services, shotgunned across many layers, each with its own context, metadata, protocols and consumption models.

As the use cases for public cloud obscure and abstract these layers — flatten them — we’re left with the core of that upon which we should focus:

Build secure, reliable, resilient, and survivable systems of applications, comprised of secure services, atop platforms that are themselves engineered to do the same, in such a way that the information which transits them inherits these qualities.

So if Public Cloud forces one to think this way, how does one relate this to practices of today?

Frankly, enterprise (network) security design patterns are a crutch.  The screened-subnet DMZ pattern with perimeters is outmoded. As Gunnar Peterson eloquently described, our best attempts at “security” over time are always some variation of firewalls and SSL.  This is the sux0r.  Importantly, this is not stated to blame anyone or suggest that a bad job is being done, but rather that a better one can be.

It’s not like we don’t know *what* the problems are; we just don’t invest in solving them as long-term projects.  Instead, we deploy compensating controls that defer what is now becoming inevitable: the compromise of applications that are poorly engineered and defended by systems that have no knowledge or context of the things they are defending.

We all know this, but yet looking at most private cloud platforms and implementations, we gravitate toward replicating these traditional design patterns logically after we’ve gone to so much trouble to articulate our way around them.  Public clouds make us approach what, where and how we apply “security” differently because we don’t have these crutches.

Either we learn to walk without them or simply not move forward.

Now, let me be clear.  I’m not suggesting that we don’t need security controls, but I do mean that we need a different and better application of them at a different level, protecting things that aren’t tied to physical topology or addressing schemes…or operating systems (inclusive of things like hypervisors, also.)

I think we’re getting closer.  Beyond infrastructure as a service, platform as a service gets us even closer.

Interestingly, at the same time we see the evolution of computing with Public Cloud, networking is also undergoing a renaissance, and as this occurs, security is coming along for the ride.  Because it has to.

As I was writing this blog (ironically in the parking lot of VMware awaiting the start of a meeting to discuss abstraction, networking and security,) James Staten (Forrester) tweeted something from Werner Vogels’ (@Werner) keynote at AWS re:invent:

I couldn’t have said it better myself 🙂

So while I may have been, and will continue to be, a thorn in the side of platform providers, pushing them to improve their “survivability” capabilities to help us get from here to there, I reiterate the title of this scribbling: Amazon Web Services (AWS) Is the Best Thing To Happen To Security & I Desperately Want It To Succeed.

I trust that’s clear?

/Hoff

P.S. There’s so much more I could/should write, but I’m late for the meeting 🙂


Quick Quip: Capability, Reliability and Liability…Security Licensing

November 28th, 2012 9 comments

Earlier today, I tweeted the following, which drew a comment from Dan Kaminsky (@dakami):

…which I explained with:

This led to a comment by Preston Wood, who suggested something very interesting from the perspective of both leadership and accountability:

…and that brought forward another insightful comment:

Pretty interesting, right? Engineers, architects, medical practitioners, etc. all have degrees/licenses and absorb liability upon failure. What about security?

What do you think about this concept?

/Hoff

 

Should/Can/Will Virtual Firewalls Replace Physical Firewalls?

October 15th, 2012 6 comments

“Should/Can/Will Virtual Firewalls Replace Physical Firewalls?”

The answer is, as always, “Of course, but not really, unless maybe, you need them to…” 🙂

This discussion crops up from time to time, usually fueled by a series of factors that often lack the context needed to address the question appropriately.

The reality is there exists the ever-useful answer of “it depends,” and frankly it’s a reasonable answer.

Back in 2008 when I created “The Four Horsemen of the Virtualization Security Apocalypse” presentation, I highlighted the very real things we needed to be aware of as we saw the rapid adoption of server virtualization…and the recommendations from virtualization providers as to the approach we should take in terms of securing the platforms and workloads atop them.  Not much has changed in almost five years.

However, each time I’m asked this question, I inevitably sound evasive as I ask for more detail about what the person doing the asking actually means by “physical” firewalls.  Normally the words “air gap” are added to the mix.

The very interesting thing about how people answer this question is that, in reality, the great many firewalls deployed today have the following features in common:

  1. Heavy use of network “LAG” (link aggregation group) interface bundling/VLAN trunking and tagging
  2. Heavy network virtualization used, leveraging VLANs as security boundaries, trunked across said interfaces
  3. Increased use of virtualized contexts and isolated resource “virtual systems” and separate policies
  4. Heavy use of ASIC/FPGA and x86 architectures which make use of shared state tables, memory and physical hardware synced across fabrics and cluster members
  5. Predominant use of “stateful inspection” at layers 2-4 with the addition of protocol decoders at L5-7 for more “application-centric” enforcement
  6. Increasing use of “transparent proxies” at L2 but less (if any) full circuit or application proxies in the classic sense

So before I even START to address the use cases of the “virtual firewalls” that people reference as the comparison, nine times out of ten that supposed “air gap” with dedicated physical firewalls doesn’t compute.

Most of the firewall implementations people actually have in place meet most of the criteria mentioned in items 1-6 above.

Further, most firewall architectures today aren’t running full L7 proxies across dedicated physical interfaces like in the good old days (Raptor, etc.) for some valid reasons…(read the postscript for an interesting prediction.)

Failure domains, and the threat modeling that illustrates cascading impact due to complexity, outright failure or compromised controls, are usually what people are interested in when asking this question, but this gets almost completely obscured by the “physical vs. virtual” concern and we often never dig deeper.

There are some amazing things that can be done in virtual constructs that we can’t do in the physical and there are some pretty important things that physical firewalls can provide that virtual versions have trouble with.  It’s all a matter of balance, perspective, need, risk and reward…oh, and operational simplicity.

I think it’s important to understand what we’re comparing when asking that question before we conflate use cases, compare and mismatch expectations, and make summary generalizations (like I just did 🙂 about that which we are contrasting.

I’ll actually paint these use cases in a follow-on post shortly.

/Hoff

POSTSCRIPT:

I foresee a return of the TRUE application-level proxy firewall — especially with application identification, cheap hardware, and more security and networking functions virtualized in hardware.  I see this being deployed both on-premise and as part of a security-as-a-service offering (some are already, today — see CloudFlare, for example.)

If you look at the need to terminate SSL/TLS and provide for not only L4-L7 sanity, but also to protect applications/sessions at L5-7 (web and otherwise) AND to support the renewed dependence upon XML, SOAP, REST, JSON, etc., you can see how this will drive even more interesting discussions in this space.  Watch as the hybrid merge of the WAF + XML security services gateway returns to vogue… (see also Cisco EOLing ACE while I simultaneously receive an email from Intel informing me I can upgrade to their Intel Expressway Service Gateway…which I believe (?) came from the Sarvega acquisition?)
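For those who never lived through the Raptor days, here’s a toy Python sketch of what makes a TRUE application-level proxy different from stateful inspection: it terminates the client’s session, parses the protocol (here, just the HTTP Host header), applies policy, and only then opens a separate upstream connection. The allowed hostname is a placeholder, and streaming, pipelining, TLS and error handling are all omitted:

```python
# Toy application-level proxy: terminate the client's session, parse L7
# (just the HTTP Host header here), apply policy, and only then open a
# *separate* upstream connection. A stateful L3/L4 filter, by contrast,
# only ever sees the five-tuple. ALLOWED_HOSTS is placeholder policy.

import socket
import threading

ALLOWED_HOSTS = {"internal-app.example.com"}  # hypothetical policy

def handle(client):
    request = client.recv(65535)  # naive single read of the request
    host = ""
    for line in request.split(b"\r\n"):
        if line.lower().startswith(b"host:"):
            host = line.split(b":", 1)[1].strip().decode()
    if host not in ALLOWED_HOSTS:
        client.sendall(b"HTTP/1.1 403 Forbidden\r\n\r\n")
        client.close()
        return
    upstream = socket.create_connection((host, 80))  # a NEW session
    upstream.sendall(request)
    client.sendall(upstream.recv(65535))  # naive single-read relay
    upstream.close()
    client.close()

listener = socket.socket()
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("0.0.0.0", 8080))
listener.listen(5)
while True:
    conn, _ = listener.accept()
    threading.Thread(target=handle, args=(conn,)).start()
```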


Cloud Service Providers and the Dual Stack Dilemma

September 20th, 2012 1 comment

I wrote this blog and then jumped on Twitter to summarize/crystallize what I thought were the most important bits:

…and thus realized I didn’t really need to finish drafting the blog since I’d managed to say it in three tweets.

Twitter has indeed killed the WordPress star…

More detailed version below.  Not finished.  TL;DR

/Hoff

—– (below unedited for tense, grammar, logical thought or completeness…) —–
