Archive

Posts Tagged ‘OpenFlow’

Software Defined Networking (In)Security: All Your Control Plane Are Belong To Us…

August 20th, 2012 No comments

My next series of talks is focused on the emerging technology, solutions and security architectures of so-called “Software Defined Networking” (SDN).

As this space heats up, I see a huge opportunity for new and interesting ways in which security can be delivered — the killer app? — but I am also concerned that, per usual, security is a potential afterthought.

At an absolute minimum, the separation of control and data planes (much as we saw with compute-centric virtualization) means we now have additional (or at least bifurcated) attack surfaces and threat vectors. And, not unlike compute-centric virtualization, the command-and-control channels for network operation represent a juicy target.
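
To make that a bit more concrete (this is just my own napkin sketch, not anything a vendor ships, and the controller address and port are assumptions), it takes only a few lines of Python to ask whether a controller’s southbound channel will talk to anyone who shows up in the clear:

```python
# Hypothetical sketch: probe whether an OpenFlow control channel accepts a
# plaintext (non-TLS) connection and answers a HELLO. If it does, anything
# that can reach this port can start a conversation with the control plane.
import socket


def accepts_plaintext_hello(host, port=6653, timeout=3.0):
    # OpenFlow 1.0 header: version=0x01, type=0 (HELLO), length=8, xid=1
    hello = bytes([0x01, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x01])
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(hello)
            reply = sock.recv(8)
            # A well-formed OpenFlow header in reply means neither TLS nor any
            # authentication stood between us and the control plane.
            return len(reply) == 8 and reply[0] in (0x01, 0x04)
    except OSError:
        return False


if __name__ == "__main__":
    print(accepts_plaintext_hello("198.51.100.10"))  # placeholder controller IP
```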

There are many more interesting elements that deserve more attention paid to them — new protocols, new hardware/software models, new operational ramifications…and I’m going to do just that.

If you’re a vendor who cares to share what you’re doing to secure your SDN offerings — and I promise I’ll be fair and balanced as I always am — please feel free to reach out to me. If you don’t, and I choose to include your solution based on whatever data I do have access to, you run the risk of being painted inaccurately. <hint>

If you have any ideas, comments or suggestions on what you’d like to see featured or excluded, let me know.  This will be along the lines of what I did with the “Four Horsemen Of the Virtualization Security Apocalypse” back in 2008.

Check out a couple of previous ramblings related to SDN (and OpenFlow) with respect to security below.

/Hoff

Back To The Future: Network Segmentation & More Moaning About Zoning

July 16th, 2012 5 comments

A Bit Of Context…

The last 3 years have been very interesting when engaging with large enterprises and service providers as they set about designing, selecting and deploying their “next generation” network architecture. These new networks are deployed in timescales that see them collide with disruptive innovation such as fabrics, cloud, big data and DevOps.

In most cases, these network platforms must account for the nuanced impact of virtualized design patterns, refreshes of programmatic architecture and languages, and the operational model differences these things introduce.  What’s often apparent is that no matter how diligent the review, by the time these platforms are chosen, many tradeoffs are made — especially when it comes to security and compliance — and we arrive at the old adage: “You can get fast, cheap or secure…pick two.”

…And In the Beginning, There Was Spanning Tree…

The juxtaposition of ever-flatter physical networks, née “fabrics” (compute, network and storage), with what seems to be a flip-flop between belief systems and architects who push for either layer 2 or layer 3 (or encapsulated versions thereof) segmentation at the higher layers is aggravated further by the continued push for security boundary definition, which yields yet more segmentation based on policy at the application and information layers.

So what we end up with are the benefits of flatter, any-to-any connectivity at the physical networking layer, with a “software defined” and virtualized networking context floating both alongside it (Nicira, BigSwitch, OpenFlow) as well as atop it (VMware, Citrix, OpenStack Quantum, etc.), with a bunch of protocols ladled on like some protocol gravy blanketing the chicken fried steak that represents the modern data center.

Oh!  You Mean the Cloud…

Now, there are many folks who don’t approach it this way, and instead abstract away much of what I just described. In Amazon Web Services’ case as a service provider, they dumb down the network sufficiently and control the abstracted infrastructure to the point that “flatness” is the only thing customers get; if you’re going to run your applications atop it, you must keep them simple and programmatic in nature or risk introducing unnecessary complexity into the “software stack.”

Customers who depend upon these simplified networking services must then absorb the gaps introduced by the lack of features, either by architecturally engineering around them, becoming more automated, instrumented and programmatic in nature, or by adding yet another layer of virtualized (and generally encrypted) transport and execution above them.

This works if you’re able to engineer your way around these gaps (or make them less relevant), but generally this is where segmentation becomes an issue, due to security and compliance design patterns which depend on the “complexity” introduced by the very flexible networking constructs available in most enterprise or SP networks.

It’s like a layered cake that keeps self-frosting.

Software Defined Architecture…

You can see the extreme opportunity for Software Defined *anything* then, can’t you? With SDN, let the physical networks NOT be complex but rather simpler and flatter, and then unify the orchestration, traffic steering, service insertion and (even) security capabilities of the physical and virtual networks AND the virtualization/cloud orchestration layers (from the networking perspective) into a single intelligent control plane…

That’s a big old self-frosting cake.

Basically, this is what AWS has done…but all that intelligence provided by the single pane of glass is currently left up to the app owner atop it. That’s the downside. Those sufficiently enlightened AWS customers are generally aware of this and understand the balance of benefits and limitations of this path.

In an enterprise environment, however, it’s a timing game between the controller vendors, the virtualization/cloud stack providers, the networking vendors, and the security vendors, each trying to offer up this capability either as an “integrated” capability or as an overlay…all under the watchful eye of the auditor, who is generally unmotivated, uneducated and unnerved by all this new technology — especially since the compliance frameworks and regulatory elements aren’t designed to account for these dramatic shifts in architecture or operation (let alone the threat curve of advanced adversaries).

Back To The Future…Hey, Look, It’s Token Ring and DMZs!

As I sit with these customers who build these nextgen networks, the moment segmentation comes up, the elegant network and application architectures rapidly crumble into piles of asset-based rubble as what happens next borders on the criminal…

Thanks to compliance initiatives — PCI is a good example — no matter how well scoped, those flat networks become more and more logically hierarchical. Because SDN is still nascent and we’re lacking that unified virtualized network (and security) control plane, we end up reverting to platform-specific “less flat” network architectures in both the physical and virtual layers to achieve “enclave”-like segmentation.

But with virtualization the problem gets more complex: in an attempt to be agile and cost efficient, and in order to bring data to the workloads rather than do the heavy lifting of the opposite approach, out-of-scope assets can often (and suddenly) become co-resident with in-scope assets…traversing logical and physical constructs that make threat modeling much more difficult, since the level of virtualized context support differs wildly across these layers.

Architects are then left to think how they can effectively take all the awesome performance, agility, scale and simplicity offered by the underlying fabrics (compute, network and storage) and then layer on — bolt on — security and compliance capabilities.

What they discover is that it’s very, very, very platform specific…which is why we see protocols such as VXLAN and NVGRE pop up to deal with it.
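
For a sense of what those band-aids actually do on the wire, here’s a purely illustrative sketch (Scapy, with made-up VTEP addresses and a made-up 24-bit VNI) of a tenant frame being wrapped in a VXLAN overlay so that a “segment” can be stretched across an otherwise flat fabric:

```python
# Hypothetical sketch: what a VXLAN-encapsulated frame looks like. The VTEP
# (outer) addresses and the VNI are placeholders for illustration only.
from scapy.all import Ether, IP, TCP, UDP
from scapy.layers.vxlan import VXLAN

# The tenant's original ("inner") frame, living in some logical segment.
inner = (Ether(src="00:00:00:aa:aa:aa", dst="00:00:00:bb:bb:bb") /
         IP(src="10.10.1.5", dst="10.10.1.9") /
         TCP(sport=49152, dport=443))

# The overlay: outer IP/UDP between two VTEPs, plus a VXLAN header whose
# 24-bit VNI identifies the segment, carrying the inner frame as payload.
outer = (Ether() /
         IP(src="192.0.2.10", dst="192.0.2.20") /
         UDP(sport=54321, dport=4789) /
         VXLAN(vni=5001) /
         inner)

print(outer.summary())                 # outer transport, VNI, then the tenant frame
print(len(bytes(outer)), "bytes on the wire, encapsulation overhead included")
```

The tooling isn’t the point; the point is that every one of those extra headers is segmentation state the physical fabric no longer sees, and that some security control somewhere now has to understand.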

Lego Blocks and Pig Farms…

These architects then replicate the design patterns with which they are familiar and start to craft DMZs that are logically segmented in the physical network and then grafted onto the virtual. So we end up relying on what Gunnar Peterson and I refer to as the “SSL and Firewall” lego block…we front-end collections of “layer 2 connected” assets based on criticality or function, many of which are stretched across these fabrics, and locate them behind layer 3 “firewalls” which provide basic zone-based isolation and often VPN connectivity between “trusted” groups of other assets.

In short, rather than build applications that securely authenticate and communicate — or, worse yet, even when they do — we pigpen our corralled assets and make our estate fatter instead of flatter. It’s really a shame.

I’ve made the case in my “Commode Computing” presentation that one of the very first things that architects need to embrace is the following:

…by not artificially constraining the way in which we organize, segment and apply policy (i.e. “put it in a DMZ”) we can think about how design “anti-patterns” may actually benefit us…you can call them what you like, but we need to employ better methodology for “zoning.”

These trust zones or enclaves are reasonable in concept so long as we can ultimately further abstract their “segmentation” and abstract the security and compliance policy requirements by expressing policy programmatically and taking the logical business and functional use-case PROCESSES into consideration when defining, expressing and instantiating said policy.

You know…understand what talks to what and why…
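
To make that tangible (a purely hypothetical sketch; the workload groups, services and “reasons” are invented, and the enforcement backend is deliberately left abstract), expressing “what talks to what and why” programmatically might look something like this:

```python
# Hypothetical sketch: policy expressed in terms of application processes
# ("what talks to what and why") rather than in terms of subnets and DMZs.
# The enforcement backend (firewall, SDN controller, hypervisor vSwitch) is
# deliberately left abstract.

POLICY = [
    # (source workload group, destination workload group, service, reason)
    ("web-frontend",  "order-service", "tcp/8443", "order placement API"),
    ("order-service", "payments-db",   "tcp/5432", "PCI in-scope transactions"),
    ("ops-jumphost",  "order-service", "tcp/22",   "break-glass administration"),
]


def compile_policy(policy):
    """Turn process-centric rules into generic allow tuples that a
    platform-specific driver could translate into ACLs, flow rules, etc."""
    allows = []
    for src, dst, service, reason in policy:
        proto, port = service.split("/")
        allows.append({
            "src_group": src,
            "dst_group": dst,
            "proto": proto,
            "port": int(port),
            "justification": reason,   # keeps the "why" attached to the rule
        })
    return allows


if __name__ == "__main__":
    for rule in compile_policy(POLICY):
        print(rule)
```

The interesting bit is the justification riding along with every rule; that’s the process context that VLAN- and DMZ-centric zoning throws away.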

A great way to think about this problem is to apply the notion of application mobility — without VM containers — and how one would instantiate a security “policy” in that context.  In many cases, as we march up the stack to distributed platform application architectures, we’re not able to depend upon the “crutch” that hypervisors or VM packages have begun to give us in legacy architectures that have virtualization grafted onto them.

Since many enterprises are now just starting to better leverage their virtualized infrastructure, there *are* some good solutions (again, platform specific) that unify the physical and virtual networks from a zoning perspective, but the all-up process-driven, asset-centric (app & information) view of “policy” is still woefully lacking, especially in heterogeneous environments.

Wrapping Up…

In enterprise and SP environments where we don’t have the opportunity to start anew, it often feels like we’re so far off from this sort of capability because it requires a shift that makes software defined networking look like child’s play. Most enterprises don’t do risk-driven, asset-centric, process-mapped modelling (and SPs are disconnected from this), so segmentation falls back to what we know: DMZs with VLANs, NAT, firewalls, SSL and new protocol band-aids invented to cover gaping arterial wounds.

In environments lucky enough to think about and match the application use cases with the highly-differentiated operational models that virtualized *everything* brings to bear, it’s here today — but be prepared for, and honest about, the fact that your vendor choice(s) must be strategic and the interfaces between those platforms and external entities VERY well defined…else you risk software defined entropy.

I wish I had more than the 5 minutes it took to scratch this out because there’s SO much to talk about here…

…perhaps later.

The Killer App For OpenFlow and SDN? Security.

October 27th, 2011 8 comments

I spent yesterday at the PacketPushers/TechFieldDay OpenFlow Symposium. The event provided a good overview of what OpenFlow [currently] means, how it fits into the overall context of software-defined networking (SDN) and where it might go from here.

I’d suggest reading Ethan Banks’ (@ecbanks) overview here.

Many of us left the event, however, still wondering about what the “killer app” for OpenFlow might be.

Chatting with Ivan Pepelnjak (@ioshints) and Derrick Winkworth (@CloudToad), I reiterated that, selfishly, I’m still thrilled about the potential that OpenFlow and SDN can bring to security. This was a topic only briefly skirted during the symposium as the ACL-like capabilities of OpenFlow were discussed, but there’s so much more here.

I wrote about this back in May (OpenFlow & SDN – Looking forward to SDNS: Software Defined Network Security):

… “security” needs to be as programmatic/programmable, agile, scalable and flexible as the workloads (and stacks) it is designed to protect. “Security” in this context extends well beyond the network, but the network provides such a convenient way of defining templated containers against which we can construct and enforce policies across a wide variety of deployment and delivery models.

So as I watch OpenFlow (and Software Defined Networking) mature, I’m really, really excited to recognize the potential for a slew of innovative ways we can leverage and extend this approach to networking [monitoring and enforcement] in order to achieve greater visibility, scale, agility, performance, efficacy and reduced costs associated with security.  The more programmatic and instrumented the network becomes, the more capable our security options will become also.

I had to chuckle at a serendipitous tweet from a former Cisco co-worker (Stefan Avgoustakis, @savgoust) because it’s really quite apropos for this topic:

…I think he’s oddly right!

Frankly, if you look at what OpenFlow and SDN (and network programmability in general) give an operator — the visibility and effective graph of connectivity, as well as the multi-tuple flow-match and action capabilities — there are numerous opportunities to leverage the separation of control/data plane across both virtual and physical networks to provide better security capabilities in response to threats, at a pace/scale/radius commensurate with said threat.

To be able to leverage telemetry and flow tables in the controllers “centrally” and then “dispatch” the necessary security response on an as-needed basis to ONLY the network location that needs it really does start to sound a lot like the old “immune system” analogy that SDN (self-defending networks) promised.
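
Here’s a napkin sketch of what that targeted “dispatch” might look like (using the open source POX controller purely for illustration; the flagged-source feed and the timeouts are hypothetical placeholders), the idea being to install a drop rule only on the one switch that actually saw the offending flow:

```python
# Hypothetical sketch: a POX component that quarantines a flagged source by
# pushing a drop rule only to the switch where the flow appeared, rather than
# shotgunning policy to every enforcement point in the network.
from pox.core import core
import pox.openflow.libopenflow_01 as of

log = core.getLogger()

# Placeholder: in practice this would come from your telemetry/IDS pipeline.
FLAGGED_SOURCES = {"10.1.1.66"}


class TargetedQuarantine(object):
    def __init__(self):
        core.openflow.addListeners(self)

    def _handle_PacketIn(self, event):
        ip = event.parsed.find('ipv4')
        if ip is None or str(ip.srcip) not in FLAGGED_SOURCES:
            return
        # Match the offending flow precisely (the multi-tuple part)...
        msg = of.ofp_flow_mod()
        msg.match = of.ofp_match.from_packet(event.parsed, event.port)
        msg.priority = 1000
        msg.idle_timeout = 300
        # ...no actions appended means drop, and the rule goes only to the one
        # switch (event.connection) that actually saw the traffic.
        event.connection.send(msg)
        log.info("Quarantined %s on switch %s", ip.srcip, event.connection.dpid)


def launch():
    core.registerNew(TargetedQuarantine)
```

Because the match is multi-tuple, the quarantine can be as narrow as a single flow, and the idle timeout lets the “immune response” recede on its own.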

The ability to distribute security capabilities more intelligently as a service layer which can be effected when needed — without the heavy shotgunned footprint of physical in-line devices or the sprawl of virtualized appliances — is truly attractive.  Automation for detection and ultimately prevention is FTW.

Bundling the capabilities delivered via programmatic interfaces and coupling that with ways of integrating the “network” and “applications” (of which security is one) produces some really neat opportunities.

Now, this isn’t just a classical “data center core” opportunity, either. How about the WAN/Edge?  Campus, branch…? Anywhere you have the need to deliver security as a service.

For non-security examples, check out Dave Ward’s (my Juniper colleague) presentation “Programmable Networks are SFW,” where he details interesting use cases such as “service engineered paths,” “service appliance pooling,” “service specific topology,” “content request routing,” and “bandwidth calendaring.”

…think of the security ramifications and opportunities linked to those capabilities!

I’ve mocked up a couple of very interesting security prototypes using OpenFlow and some open source security components, from IDP to anti-malware, and the potential is exciting because OpenFlow — in conjunction with other protocols and solutions in the security ecosystem — could provide some of the missing glue necessary to deliver a constant but abstracted security command/control (née API-like) capability across heterogeneous infrastructure.

NOTE: I’m not suggesting that OpenFlow itself provide these security capabilities, but rather that it enable security solutions to take advantage of the control/data plane separation to provide for more agile and effective security.

If the potential exists for OpenFlow to effectively allow choice of packet forwarding engines and network virtualization across the spectrum of supporting vendors’ switches, it occurs to me that we could utilize it for firewalls, intrusion detection/prevention engines, WAFs, NAC, etc.
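
For instance (another illustrative POX-style sketch, not one of the prototypes mentioned above; the IDS port and the HTTP match are assumptions), traffic steering for service insertion can be as simple as copying selected flows out to an IDS port while the switch keeps forwarding them normally:

```python
# Hypothetical sketch: use OpenFlow to steer (copy) selected traffic to an
# out-of-band IDS sensor while normal forwarding continues.
import pox.openflow.libopenflow_01 as of
from pox.core import core

IDS_PORT = 5  # placeholder: switch port where the (virtual or physical) IDS hangs


def _handle_ConnectionUp(event):
    # Copy all HTTP traffic to the IDS port, then let the switch carry on with
    # its normal forwarding pipeline (requires a switch that supports NORMAL).
    msg = of.ofp_flow_mod()
    msg.match = of.ofp_match(dl_type=0x0800, nw_proto=6, tp_dst=80)
    msg.priority = 500
    msg.actions.append(of.ofp_action_output(port=IDS_PORT))
    msg.actions.append(of.ofp_action_output(port=of.OFPP_NORMAL))
    event.connection.send(msg)


def launch():
    core.openflow.addListenerByName("ConnectionUp", _handle_ConnectionUp)
```

Swap the second action for a drop or a redirect and the same handful of lines moves from passive detection toward prevention.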

Thoughts?

A Contentious Question: The Value Proposition & Target Market Of Virtual Networking Solutions?

September 28th, 2011 26 comments

I have, what I think, is a simple question I’d like some feedback on:

Given the recent influx of virtual networking solutions, many of which are OpenFlow-based, what possible inroads and value can they hope to offer in heavily virtualized enterprise environments wherein the virtual networking is owned and controlled by VMware?

Specifically, if the only third-party VMware virtual switch to date is Cisco’s, and access to this platform is limited (if at all available) to startup players, how on Earth do BigSwitch, Nicira, vCider, etc. plan to insert themselves into an already contentious environment, effectively doing mindshare and relevance battle with the likes of mainline infrastructure networking giants and VMware?

If your answer is “OpenFlow and OpenStack will enable this access,” I’ll follow with a question that asks how long a runway these startups have while hanging their shingle on relatively new efforts (mainly open source) that the enterprise is not typically an early adopter of.

I keep hearing notional references to the problems these startups hope to solve for the “Enterprise,” but just how (and who) do they think they’re going to get to consider their products at a level that gives them reasonable penetration?

Service providers, maybe?

Enterprises…?

It occurs to me that most of these startups are being built to be acquired by traditional networking vendors who will (or will not) adopt OpenFlow when significant enterprise dollars materialize in stacks that are not VMware-centric.

Not meaning to piss anyone off, but many of these startups’ business plans are shrouded in the mystical veil of “wait and see.”

So I do.

/Hoff

Ed: To be clear, this post isn’t about “OpenFlow” specifically (that’s only one of many protocols/approaches), but rather about the penetration of a virtual networking solution into a “closed” platform environment dominated by a single vendor.

If you want a relevant analog, look at the wasteland that represents the virtual security startups that tried to enter this space (and even the larger vendors’ solutions) and how long this has taken/fared.

If you read the comments below, you’ll see people start to accidentally tease out the real answer to the question I was asking…about the value of these virtual networking solution providers. The funny part is that, despite the lack of comments from most of the startups I mention, it took Brad Hedlund (from Cisco) to recognize why I wrote the post, which is the following:

“The *real* reason I wrote this piece was to illustrate that really, these virtual networking startups are really trying to invade the physical network in virtual sheep’s clothing…”

…in short, the problem space they’re trying to solve is actually in the physical network, or, more specifically, bridging the gap between the two.

OpenFlow & SDN – Looking forward to SDNS: Software Defined Network Security

April 8th, 2011 3 comments

As facetious as the introductory premise of my Commode Computing presentation is, the main message — the automation of security capabilities up and down the stack — really is something I’m passionate about.

Ultimately, I made the point that “security” needs to be as programmatic/programmable, agile, scalable and flexible as the workloads (and stacks) it is designed to protect. “Security” in this context extends well beyond the network, but the network provides such a convenient way of defining templated containers against which we can construct and enforce policies across a wide variety of deployment and delivery models.

So as I watch OpenFlow (and Software Defined Networking) mature, I’m really, really excited to recognize the potential for a slew of innovative ways we can leverage and extend this approach to networking [monitoring and enforcement] in order to achieve greater visibility, scale, agility, performance, efficacy and reduced costs associated with security.  The more programmatic and instrumented the network becomes, the more capable our security options will become also.

I’m busy reading many of the research activities associated with OpenFlow security and digesting where vendors are in terms of their approach to leveraging this technology for security. It may be just my perspective, but it’s a little sparse today — not disappointingly so — with a huge greenfield opportunity for really innovative stuff when paired with the advancements we’re seeing in virtualization and cloud computing.

I’ll relate more of my thoughts and discoveries as time goes on. If you’ve got some cool ideas/concepts/products in this area (I don’t care who you work for), post ’em here in the comments, please!

In the meantime, check out: www.openflow.org to get your feet wet.

/Hoff

Reminders to self to perform more research on (I think I’m going to do my next presentation series on this):

  • AAA for messages between OpenFlow Switch and Controllers
  • Flood protection for controllers
  • Spoofing/MITM between switch/controllers (specifically SSL/TLS)
  • Flow-through (ha!)/support of OpenFlow in virtual switches (see 1000v and Open vSwitch)
  • (per above) Integration with VN-Tag (like) flow-VM (workload) tagging
  • Integration of Netflow data from OpenFlow flow tables
  • State/flow-table convergence for security decisions with/without cut-through given traffic steering
  • Service insertion overlays for security control planes
  • Integration with 802.1x (and protocol extensions such as TrustSec)
  • Telemetry integration with NAC and vNAC
  • Anti-DDoS implications