Archive

Archive for the ‘Automation’ Category

Incomplete Thought: In-Line Security Devices & the Fallacies Of Block Mode

June 28th, 2013 16 comments

The results of a long-running series of extremely scientific studies have produced a Metric Crapload™ of anecdata.

Namely, hundreds of detailed discussions (read: lots of booze and whining) over the last 5 years have resulted in the following:

Most in-line security appliances (excluding firewalls) with the ability to actively dispose of traffic — services such as IPS, WAF, and anti-malware — are deployed in “monitor” or “learning” mode and are rarely, if ever, enabled with automated blocking.  In essence, they are deployed as detective rather than preventative security services.

I have compiled many reasons for this.

I am interested in hearing whether you agree/disagree and your reasons for such.

/Hoff


Wanna Be A Security Player? Deliver It In Software As A Service Layer…

January 9th, 2013 1 comment

As I continue to think about the opportunities that Software Defined Networking (SDN) and Network Function Virtualization (NFV) bring into focus, the capability to deliver security as a service layer is indeed exciting.

I wrote back in 2011 about how SDN and OpenFlow (as a functional example) and the security use cases each provides will be a differentiating capability: The Killer App For OpenFlow and SDN? Security; OpenFlow & SDN – Looking forward to SDNS: Software Defined Network Security; and Back To The Future: Network Segmentation & More Moaning About Zoning.

Recent activity in the space has done nothing but reinforce this opinion.  My day job isn’t exactly lacking in excitement, either 🙂

As many networking vendors begin to bring their SDN solutions to market — whether in the form of networking equipment or controllers designed to interact with them — one of the missing strategic components is security.  This isn’t a new phenomenon, unfortunately, and as such, predictably there are also now startups entering this space and/or retooling from the virtualization space and stealthily advertising themselves as “SDN Security” companies 🙂

Like we’ve seen many times before, security is often described (confused?) as a “simple” or “atomic” service and so SDN networking solutions are designed with the thought that security will simply be “bolted on” after the fact and deployed not unlike a network service such as “load balancing.”  The old “we’ll just fire up some VMs and TAMO (Then a Miracle Occurs) we’ve got security!” scenario.  Or worse yet, we’ll develop some proprietary protocol or insertion architecture that will magically get traffic to and from physical security controls (witness the “U-TURN” or “horseshoe” L2/L3 solutions of yesteryear.)

The challenge is that much of Security today is still very topologically sensitive and depends upon classical networking constructs to be either physically or logically plumbed between the “outside” and the asset under protection, or it’s very platform dependent and lacks the ability to truly define a policy that travels with the workload regardless of the virtualization, underlay OR overlay solutions.

Depending upon the type of control, security is often operationalized across multiple layers using wildly different constructs, APIs and context in terms of policy and disposition, depending upon its desired effect.

Virtualization has certainly forced us to think differently about security, mostly due to the dynamism and mobility it has introduced, but the platforms themselves are still incredibly nascent in terms of exposed security capabilities.  It’s been almost 5 years since I started raging about how we need(ed) platform providers to give us capabilities that function across stacks so we’d have a fighting chance.  To date, not only do we have perhaps ONE vendor doing some of this, but we’ve seen the emergence of others who are maniacally focused on providing as little of it as possible.

If you think about what virtualization offers us today from a security perspective, we have the following general solution options:

  1. Hypervisor-based security solutions which may apply policy as a function of the virtual NIC of the workloads they protect.
  2. Extensions of virtual-networking (i.e. switching) solutions that enable traffic steering and some policy enforcement that often depend upon…
  3. Virtual Appliance-based security solutions that require manual or automated provisioning, orchestration and policy application in user space that may or may not utilize APIs exposed by the virtual networking layer or hypervisor

There are tradeoffs across each of these solutions: scale, performance, manageability, statefulness, platform dependencies, etc.  There simply aren’t many platforms that natively offer security capabilities as a function of service delivery that allow arbitrary service definition with consistent and uniform ways of describing the outcome of the policies at these various layers.  I covered this back in 2008 (it’s a shame nothing has really changed) in my Four Horsemen Of the Virtual Security Apocalypse presentation.

As I’ve complained for years, we still have 20 different ways of defining how to instantiate a five-tuple ACL as a basic firewall function.
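To make that concrete, here is a minimal sketch (Python, purely illustrative; the vendor syntaxes below are approximations, not authoritative configs) of one and the same five-tuple policy rendered three different ways:

```python
# Illustrative only: one "allow web tier -> db host on 443" five-tuple,
# rendered in three of the N different dialects we juggle today.
RULE = {
    "src": "10.0.1.0/24", "dst": "10.0.2.10", "proto": "tcp",
    "src_port": "any", "dst_port": 443, "action": "allow",
}

def as_iptables(r):
    return (f"iptables -A FORWARD -p {r['proto']} -s {r['src']} "
            f"-d {r['dst']} --dport {r['dst_port']} -j ACCEPT")

def as_cisco_style_acl(r):
    # wildcard-mask form, approximated
    return (f"access-list 101 permit {r['proto']} 10.0.1.0 0.0.0.255 "
            f"host {r['dst']} eq {r['dst_port']}")

def as_aws_sg_rule(r):
    return {"IpProtocol": r["proto"], "FromPort": r["dst_port"],
            "ToPort": r["dst_port"], "CidrIp": r["src"]}

print(as_iptables(RULE))
print(as_cisco_style_acl(RULE))
print(as_aws_sg_rule(RULE))
```

Same intent, three grammars…and that’s before we get to every virtual switch, hypervisor firewall and WAF having its own.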

Out of the Darkness…

The promise of SDN truly realized — the ability to separate the control, forwarding, management and services planes — and deploy security as a function of available service components across overlays and underlays, means we will be able to take advantage of any of these models so long as we have a way to programmatically interface with the various strata regardless of whether we provision at the physical, virtual or overlay virtual layer.

It’s truly exciting.  We’re seeing some real effort to enable true security service delivery.

When I think about how to categorize the intersection of “SDN” and “Security,” I think about it the same way I have with virtualization and Cloud:

  • Securing SDN (Securing the SDN components)
  • SDN Security Services (How do I take security and use SDN to deliver security as a service)
  • Security via SDN (What NEW security capabilities can be derived from SDN)

There are numerous opportunities with each of these categories to really make a difference to security in the coming years.

The notion that many of our network and security capabilities are becoming programmatic means we *really* need to focus on securing SDN solutions, especially given the potential for abuse that the separation of the various channels creates. (See: Software Defined Networking (In)Security: All Your Control Plane Are Belong To Us…)

Delivering security as a service via SDN holds enormous promise for reasons I’ve already articulated and gives us an amazing foundation upon which to start building solutions we can’t imagine today given the lack of dynamism in our security architecture and design patterns.

Finally, the first two elements give rise to the third: the ability to do things we can’t even imagine with today’s traditional physical and even virtual solutions.

I’ll be starting to highlight really interesting solutions I find (and am able to talk about) over the next few months.

Security enabled by SDN is going to be huge.

More soon.

/Hoff


Why Amazon Web Services (AWS) Is the Best Thing To Happen To Security & Why I Desperately Want It To Succeed

November 29th, 2012 15 comments

Many people who may only casually read my blog or peer at the timeline of my tweets may come away with the opinion that I suffer from confirmation bias when I speak about security and Cloud.

That is, many conclude that I am pro Private Cloud and against Public Cloud.

I find this deliciously ironic and wildly inaccurate. However, I must also take responsibility for this, as anytime one threads the needle and attempts to present a view from both sides with regard to incendiary topics without planting a polarizing stake in the ground, it gets confusing.

Let me clear some things up.

Digging deeper into what I believe, one would actually find that my blog, tweets, presentations, talks and keynotes highlight deficiencies in current security practices and solutions on the part of providers, practitioners and users in both Public AND Private Cloud, and in my own estimation, deliver an operationally-centric perspective that is reasonably critical and yet sensitive to emergent paths as well as the well-trodden path behind us.

I’m not a developer.  I dabble in little bits of code (interpreted and compiled) for humor and to try and remain relevant.  Nor am I an application security expert for the same reason.  However, I spend a lot of time around developers of all sorts, those that write code for machines whose end goal isn’t to deliver applications directly, but rather help deliver them securely.  Which may seem odd as you read on…

The name of this blog, Rational Survivability, highlights my belief that the last two decades of security architecture and practices — while useful in foundation — require a rather aggressive tune-up of priorities.

Our trust models, architecture, and operational silos have not kept pace with the velocity of the environments they were initially designed to support and unfortunately as defenders, we’ve been outpaced by both developers and attackers.

Since we’ve come to the conclusion that there’s no such thing as perfect security, “survivability” is a better goal.  Survivability leverages “security” and is ultimately a subset of resilience but is defined as the “…capability of a system to fulfill its mission, in a timely manner, in the presence of attacks, failures, or accidents.”  You might be interested in this little ditty from back in 2007 on the topic.

Sharp readers will immediately recognize the parallels between this definition of “survivability,” how security applies within context, and how phrases like “design for failure” align.  In fact, this is one of the calling cards of a company that has become synonymous with (IaaS) Public Cloud: Amazon Web Services (AWS.)  I’ll use them as an example going forward.

So here’s a line in the sand that I think will be polarizing enough:

I really hope that AWS continues to gain traction with the Enterprise.  I hope that AWS continues to disrupt the network and security ecosystem.  I hope that AWS continues to pressure the status quo and I hope that they do it quickly.

Why?

Almost a decade ago, the  Open Group’s Jericho Forum published their Commandments.  Designed to promote a change in thinking and operational constructs with respect to security, what they presciently released upon the world describes a point at which one might imagine taking one’s most important assets and connecting them directly to the Internet and the shifts required to understand what that would mean to “security”:

  1. The scope and level of protection should be specific and appropriate to the asset at risk.
  2. Security mechanisms must be pervasive, simple, scalable, and easy to manage.
  3. Assume context at your peril.
  4. Devices and applications must communicate using open, secure protocols.
  5. All devices must be capable of maintaining their security policy on an un-trusted network.
  6. All people, processes, and technology must have declared and transparent levels of trust for any transaction to take place.
  7. Mutual trust assurance levels must be determinable.
  8. Authentication, authorization, and accountability must interoperate/exchange outside of your locus/area of control.
  9. Access to data should be controlled by security attributes of the data itself.
  10. Data privacy (and security of any asset of sufficiently high value) requires a segregation of duties/privileges.
  11. By default, data must be appropriately secured when stored, in transit, and in use.

These seem harmless enough today, but were quite unsettling when paired with the notion of “de-perimeterization,” which was often misconstrued to mean the immediate disposal of firewalls.  Many security professionals appreciated the commandments for what they expressed, but the design patterns, availability of solutions and belief systems of traditionalists constrained traction.

Interestingly enough, now that the technology, platforms, and utility services have evolved to enable these sorts of capabilities, and in fact have stressed our approaches to date, these exact tenets are what Public Cloud forces us to come to terms with.

If one were to look at what public cloud services like AWS mean when aligned to traditional “enterprise” security architecture, operations and solutions, and map that against the Jericho Forum’s Commandments, it enables such a perfect rethink.

Instead of being focused on implementing “security” to protect applications and information based at the network layer — which is more often than not blind to both, contextually and semantically — public cloud computing forces us to shift our security models back to protecting the things that matter most: the information and the conduits that traffic in them (applications.)

As networks become more abstracted, it means that existing security models do also.  This means that we must think about security programmatically and embedded as a functional delivery requirement of the application.
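As a hedged sketch of what “security as a functional delivery requirement of the application” could look like (the manifest format and field names here are hypothetical, meant only to show the shape of the idea), imagine the policy shipping with the workload instead of living in a topology-bound device config:

```python
# Hypothetical deployment manifest: security intent declared alongside the
# application, to be translated into whatever the platform enforces.
app_manifest = {
    "name": "billing-api",
    "artifact": "billing-api:1.4.2",
    "security": {
        "identity": "svc-billing",   # workload identity, not an IP address
        "ingress": [{"from": "svc-web", "proto": "tcp", "port": 8443}],
        "egress":  [{"to": "svc-db",   "proto": "tcp", "port": 5432}],
        "data": {"at_rest": "encrypted", "in_transit": "tls"},
    },
}

def to_enforcement(manifest):
    """Translate declared intent into platform-specific rules
    (security groups, host firewalls, hypervisor filters, ...)."""
    sec = manifest["security"]
    return [(rule["from"], sec["identity"], rule["port"])
            for rule in sec["ingress"]]

print(to_enforcement(app_manifest))   # [('svc-web', 'svc-billing', 8443)]
```

The point isn’t the schema; it’s that the policy travels with the application and gets compiled down to the network du jour, rather than the other way around.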

“Security” in complex, distributed and networked systems is NOT a tidy simple atomic service.  It is, unfortunately, represented as such because we choose to use a single noun to represent an aggregate of many sub-services, shotgunned across many layers, each with its own context, metadata, protocols and consumption models.

As the use cases for public cloud obscure and abstract these layers — flatten them — we’re left with the core of what we should focus on:

Build secure, reliable, resilient, and survivable systems of applications, comprised of secure services, atop platforms that are themselves engineered to do the same, in such a way that the information which transits them inherits these qualities.

So if Public Cloud forces one to think this way, how does one relate this to practices of today?

Frankly, enterprise (network) security design patterns are a crutch.  The screened-subnet DMZ pattern with perimeters is outmoded. As Gunnar Peterson eloquently described, our best attempts at “security” over time are always some variation of firewalls and SSL.  This is the sux0r.  Importantly, this is not stated to blame anyone or suggest that a bad job is being done, but rather that a better one can be.

It’s not like we don’t know *what* the problems are, we just don’t invest in solving them as long term projects.  Instead, we deploy compensation that defers what is now becoming more inevitable: the compromise of applications that are poorly engineered and defended by systems that have no knowledge or context of the things they are defending.

We all know this, and yet looking at most private cloud platforms and implementations, we gravitate toward replicating these traditional design patterns logically after we’ve gone to so much trouble to articulate our way around them.  Public clouds make us approach what, where and how we apply “security” differently because we don’t have these crutches.

Either we learn to walk without them or simply not move forward.

Now, let me be clear.  I’m not suggesting that we don’t need security controls, but I do mean that we need a different and better application of them at a different level, protecting things that aren’t tied to physical topology or addressing schemes…or operating systems (inclusive of things like hypervisors, also.)

I think we’re getting closer.  Beyond infrastructure as a service, platform as a service gets us even closer.

Interestingly, at the same time we see the evolution of computing with Public Cloud, networking is also undergoing a renaissance, and as this occurs, security is coming along for the ride.  Because it has to.

As I was writing this blog (ironically in the parking lot of VMware awaiting the start of a meeting to discuss abstraction, networking and security,) James Staten (Forrester) tweeted something from @Werner Vogels’ keynote at AWS re:invent:

I couldn’t have said it better myself 🙂

So while I may have been, and will continue to be, a thorn in the side of platform providers to improve the “survivability” capabilities that help us get from here to there, I reiterate the title of this scribbling: Amazon Web Services (AWS) Is the Best Thing To Happen To Security & I Desperately Want It To Succeed.

I trust that’s clear?

/Hoff

P.S. There’s so much more I could/should write, but I’m late for the meeting 🙂


Brood Parasitism: A Cuckoo Discussion Of Smart Device Insecurity By Way Of Robbing the NEST…

July 18th, 2012 No comments

[Image: Eastern Phoebe (Sayornis phoebe) nest. Photo credit: Wikipedia]

I’m doing some research, driven by recent groundswells of some awesome security activity focused on so-called “smart meters.”  Specifically, I am interested in the emerging interconnectedness, consumerization and prevalence of more generic smart devices and home automation systems and what that means from a security, privacy and safety perspective.

I jokingly referred to something like this way back in 2007…who knew it would be more reality than fiction.

You may think this is interesting.  You may think this is overhyped and boorish.  You may even think this is cuckoo…

Speaking of which, back to the title of the blog…

Brood parasitism is defined as:

A method of reproduction seen in birds that involves the laying of eggs in the nests of other birds. The eggs are left under the parental care of the host parents. Brood parasitism may occur between species (interspecific) or within a species (intraspecific) [About.com]

A great example is that of the female European Cuckoo, which lays an egg that mimics that of a host species.  After hatching, the young Cuckoo may actually dispose of the host egg by shoving it out of the nest with an evolved physical adaptation — a depression in its back.  Once hatched, the forced-adoptive parent birds, tricked into thinking the hatchling is legitimate, care for the imposter, which may actually grow larger than they are as they struggle to keep up with its care and feeding.

What does this have to do with “smart device” security?

I’m a huge fan of my NEST thermostat. 🙂 It’s a fantastic device which, using self-learning concepts, manages the heating and cooling of my house.  It does so by learning how my family and I use the controls over time, in combination with knowing when we’re at home and when we’re away.  It communicates over the Internet and allows me to manage my household temperature remotely.  It also has an API <wink wink>  It uses an ARM Cortex A8 CPU and has both Wifi and Zigbee radios <wink wink>

…so it knows how I use power.  It knows when I’m at home and when I’m not. It allows for remote, out-of-band, Internet connectivity.  It uses my Wifi network to communicate.  It will, I am sure, one day intercommunicate with OTHER devices on my network (which, btw, is *loaded* with other devices already)

So back to my cuckoo analog of brood parasitism and the bounty of “robbing the NEST…”

I am working on researching the potential for subverting the control plane for my NEST (amongst other devices) and using that to gain access to information regarding occupancy, usage, etc.  I have some ideas for how this information might be (mis)used.
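To give a flavor of why this matters (and to be clear, the fields and logic below are entirely hypothetical; this is not the NEST API), even a coarse home/away flag, polled over time, hands an attacker an occupancy schedule:

```python
# Hypothetical illustration: if a subverted control plane lets you read a
# thermostat's home/away state over time, an occupancy pattern falls out.
from collections import Counter
from datetime import datetime

def occupancy_profile(samples):
    """samples: iterable of (datetime, 'home'|'away') control-plane reads."""
    away_hours = Counter()
    for ts, state in samples:
        if state == "away":
            away_hours[ts.hour] += 1     # hours the house tends to be empty
    return sorted(away_hours.items())

# Simulated day of polled state: away during working hours.
observed = [(datetime(2012, 7, 18, h), "away" if 9 <= h <= 17 else "home")
            for h in range(24)]
print(occupancy_profile(observed))       # -> the 9am-5pm "rob me now" window
```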

Essentially, I’m calling the tool “Cuckoo” and its job is that of its nest-robbing namesake — to have it fed illegitimately and outgrow its surrogate trust model to do bad things™.

This will dovetail with work that has been done in the classical “smart meter” space, such as what was presented at CCC in 2011, wherein the researchers were able to do things like identify what TV show someone was watching and explore what capabilities like that mean for privacy and safety.

If anyone would like to join in on the fun, let me know.

/Hoff

 


Back To The Future: Network Segmentation & More Moaning About Zoning

July 16th, 2012 5 comments

A Bit Of Context…


The last 3 years have been very interesting when engaging with large enterprises and service providers as they set about designing, selecting and deploying their “next generation” network architecture. These new networks are deployed in timescales that see them collide with disruptive innovation such as fabrics, cloud, big data and DevOps.

In most cases, these network platforms must account for the nuanced impact of virtualized design patterns, refreshes of programmatic architecture and languages, and the operational model differences these things introduce.  What’s often apparent is that no matter how diligent the review, by the time these platforms are chosen, many tradeoffs are made — especially when it comes to security and compliance — and we arrive at the old adage: “You can get fast, cheap or secure…pick two.”

…And In the Beginning, There Was Spanning Tree…

The juxtaposition of flatter and flatter physical networks, nee “fabrics” (compute, network and storage,) with what seems to be a flip-flop transition between belief systems and architects who push for either layer 2 or layer 3 (or encapsulated versions thereof) segmentation at the higher layers is again aggravated by continued push for security boundary definition that yields further segmentation based on policy at the application and information layer.

So what we end up with is the benefits of flatter, any-to-any connectivity at the physical networking layer with a “software defined” and virtualized networking context floating both alongside (Nicira, BigSwitch, OpenFlow) as well as atop it (VMware, Citrix, OpenStack Quantum, etc.) with a bunch of protocols ladled on like some protocol gravy blanketing the Chicken Fried Steak that represents the modern data center.

Oh!  You Mean the Cloud…

Now, there are many folks who don’t approach it this way, and instead abstract away much of what I just described.  In Amazon Web Services’ case as a service provider, they dumb down the network sufficiently and control the abstracted infrastructure to the point that “flatness” is the only thing customers get and if you’re going to run your applications atop, you must keep it simple and programmatic in nature else risk introducing unnecessary complexity into the “software stack.”

The customers who then depend upon these simplified networking services must then absorb the gaps introduced by a lack of features by architecturally engineering around them, becoming more automated, instrumented and programmatic in nature or add yet another layer of virtualized (and generally encrypted) transport and execution above them.

This works if you’re able to engineer your way around these gaps (or make them less relevant,) but generally this is where segmentation becomes an issue due to security and compliance design patterns which depend on the “complexity” introduced by the very flexible networking constructs available in most enterprise or SP networks.

It’s like a layered cake that keeps self-frosting.

Software Defined Architecture…

You can see the extreme opportunity for Software Defined *anything* then, can’t you? With SDN, let the physical networks NOT be complex but rather more simple and flat and then unify the orchestration, traffic steering, service insertion and (even) security capabilities of the physical and virtual networks AND the virtualization/cloud orchestration layers (from the networking perspective) into a single intelligent control plane…

That’s a big old self-frosting cake.

Basically, this is what AWS has done…but all that intelligence provided by the single pane of glass is currently left up to the app owner atop them.  That’s the downside.  AWS customers who are sufficiently enlightened are generally aware of this and understand the balance of benefits and limitations of this path.

In an enterprise environment, however, it’s a timing game between the controller vendors, the virtualization/cloud stack providers, the networking vendors, and security vendors …each trying to offer up this capability either as an “integrated” capability or as an overlay…all under the watchful eye of the auditor who is generally unmotivated, uneducated and unnerved by all this new technology — especially since the compliance frameworks and regulatory elements aren’t designed to account for these dramatic shifts in architecture or operation (let alone the threat curve of advanced adversaries.)

Back To The Future…Hey, Look, It’s Token Ring and DMZs!

As I sit with these customers who build these nextgen networks, the moment segmentation comes up, the elegant network and application architectures rapidly crumble into piles of asset-based rubble as what happens next borders on the criminal…

Thanks to compliance initiatives — PCI is a good example — no matter how well scoped, those flat networks become more and more logically hierarchical.  Because SDN is still nascent and we’re lacking that unified virtualized network (and security) control plane, we end up resorting back to platform-specific “less flat” network architectures in both the physical and virtual layers to achieve “enclave” like segmentation.

But with virtualization the problem gets more complex: in an attempt to be agile and cost efficient, and in order to bring data to the workloads to reduce the heavy lifting of the opposite approach, out-of-scope assets can often (and suddenly) be co-resident with in-scope assets…traversing logical and physical constructs that make it much more difficult to threat model, since the level of virtualized context support differs wildly across these layers.

Architects are then left to think how they can effectively take all the awesome performance, agility, scale and simplicity offered by the underlying fabrics (compute, network and storage) and then layer on — bolt on — security and compliance capabilities.

What they discover is that it’s very, very, very platform specific…which is why we see protocols such as VXLAN and NVGRE pop up to deal with them.

Lego Blocks and Pig Farms…

These architects then replicate the design patterns with which they are familiar and start to craft DMZs that are logically segmented in the physical network and then grafted on to the virtual.  So we end up relying on what Gunnar Peterson and I refer to as the “SSL and Firewall” lego block…we front-end collections of “layer 2 connected” assets based on criticality or function, many of which are stretched across these fabrics, and locate them behind layer 3 “firewalls” which provide basic zone-based isolation and often VPN connectivity between “trusted” groups of other assets.

In short, rather than build applications that securely authenticate and communicate — or worse yet, even when they do — we pigpen our corralled assets and make our estate fatter instead of flatter.  It’s really a shame.

I’ve made the case in my “Commode Computing” presentation that one of the very first things that architects need to embrace is the following:

…by not artificially constraining the way in which we organize, segment and apply policy (i.e. “put it in a DMZ”) we can think about how design “anti-patterns” may actually benefit us…you can call them what you like, but we need to employ better methodology for “zoning.”

These trust zones or enclaves are reasonable in concept so long as we can ultimately further abstract their “segmentation” and abstract the security and compliance policy requirements by expressing policy programmatically and taking the logical business and functional use-case PROCESSES into consideration when defining, expressing and instantiating said policy.

You know…understand what talks to what and why…
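A minimal sketch of what I mean, with a purely hypothetical schema: declare the flows (who talks to whom, on what port, and why) and *derive* the zoning from them, rather than hand-building VLANs and firewall zones first:

```python
# Hypothetical, process-driven policy: segmentation derived from declared
# business flows instead of from network topology.
flows = [
    # (source, destination, port, why)
    ("web", "app", 8443, "render customer portal"),
    ("app", "db",  5432, "order persistence (PCI in-scope)"),
    ("app", "pay",  443, "tokenized payment call (PCI in-scope)"),
]

def derive_zones(flows):
    """Group assets into enclaves based on whether any of their flows
    are tagged as PCI in-scope."""
    zones = {"in_scope": set(), "out_of_scope": set()}
    for src, dst, _port, why in flows:
        bucket = "in_scope" if "PCI" in why else "out_of_scope"
        zones[bucket].update({src, dst})
    zones["out_of_scope"] -= zones["in_scope"]   # in-scope wins
    return zones

print(derive_zones(flows))
# e.g. {'in_scope': {'app', 'db', 'pay'}, 'out_of_scope': {'web'}}
```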

A great way to think about this problem is to apply the notion of application mobility — without VM containers — and how one would instantiate a security “policy” in that context.  In many cases, as we march up the stack to distributed platform application architectures, we’re not able to depend upon the “crutch” that hypervisors or VM packages have begun to give us in legacy architectures that have virtualization grafted onto them.

Since many enterprises are now just starting to better leverage their virtualized infrastructure, there *are* some good solutions (again, platform specific) that unify the physical and virtual networks from a zoning perspective, but the all-up process-driven, asset-centric (app & information) view of “policy” is still woefully lacking, especially in heterogeneous environments.

Wrapping Up…

In enterprise and SP environments where we don’t have the opportunity to start anew, it often feels like we’re so far off from this sort of capability because it requires a shift that makes software defined networking look like child’s play.  Most enterprises don’t do risk-driven, asset-centric, process-mapped modelling, [and SP’s are disconnected from this,] so segmentation falls back to what we know: DMZs with VLANs, NAT, Firewalls, SSL and new protocol band-aids invented to cover gaping arterial wounds.

In environments lucky enough to think about and match the application use cases with the highly-differentiated operational models that virtualized *everything* brings to bear, it’s here today — but be prepared and honest that the vendor(s) you choose must be strategic and the interfaces between those platforms and external entities VERY well defined…else you risk software defined entropy.

I wish I had more than the 5 minutes it took to scratch this out because there’s SO much to talk about here…

…perhaps later.


Overlays: Wasting Away Again In Abstractionville…

May 5th, 2012 3 comments

I’m about to get in a metal tube and spend 14 hours in the Clouds.  I figured I’d get something off my chest while I sit outside in the sun listening to some Jimmy Buffett.

[Network] overlays.  They bug me.  Let me tell you why.

The Enterprise, when considering “moving to the Cloud” generally takes one of two approaches depending upon culture, leadership, business goals, maturity and sophistication:

  1. Go whole-hog with an all-in Cloud strategy. 
    Put an expiration date on maintaining/investing in legacy apps/infrastructure and instead build an organizational structure, technology approach, culture, and operational model that is designed around building applications that are optimized for “cloud” — and that means SaaS, PaaS, and IaaS across public, private and hybrid models with a focus on how application delivery and information (including protecting it) are very different from legacy deployments, or…
  2. Adopt a hedging strategy to get to Cloud…someday.
    This usually means opportunistically picking low risk, low impact, low-hanging fruit that can be tip-toed toward and scraping together the existing “rogue” projects already underway, sprinkling in some BYOD, pointing to a virtualized datacenter and calling a 3 day provisioning window with change control as “on-demand,” and “Cloud.”  Oh, and then deploying gateways, VPNs, data encryption and network overlays as an attempt to plug holes by paving over them, and calling that “Cloud,” also.

See that last bit?

This is where so-called “software defined networking (SDN),” the myriad of models that utilize “virtualization” and all sorts of new protocols and service delivery mechanisms are being conflated into the “will it blend” menagerie called “Cloud.”  It’s an “eyes wide shut” approach.

Now, before you think I’m being dismissive of “virtualization” or SDN, I’m not.  I believe. Wholesale. But within the context of option #2 above, it’s largely a waste of time, money, and effort.  It’s putting lipstick on a pig.

You either chirp or get off the twig.

Picking door #2 is where the Enterprise looks at shiny new things based on an article in the WSJ, Wired, or a peer-group golf outing and says “I bet if we added yet another layer of abstraction atop the rapidly abstracting piles of shite we already have, we would be more agile, nimble, efficient and secure.”  We would be “cloud” enabled.

[To a legacy-minded Enterprise,] Cloud is the revenge of VPN and PKI…

The problem is that just like the folks in Maine will advise: “You can’t get there from here.”  I mean, you can, but the notion that you’ll actually pull it off by stacking turtles, applying band-aids and squishing the tyranny of VLANs by surrounding them in layer 3 network overlays and calling this the next greatest thing since sliced bread is, well, bollocks.

Look, I think SDN, protocols like Openflow and VXLAN/NVGRE, etc. are swell.  I think the separation of control and data planes and the notion that I can programmatically operate my network is awesome.  I think companies like Nicira and Bigswitch are doing really interesting things.  I think that Cloudstack, Openstack and VMWare present real opportunity to make things “better.”

Hey, look, we’re just like Google and Amazon Web Services Now!

But to an Enterprise without a real plan as to what “Cloud” really means to their business, these are largely overlays within the context of #2.  Within the context of #1, they’re simply mom and apple pie and are, for the most part, invisible.  That’s not where the focus actually is.

That said, for a transitional Enterprise, these things give them pause, but should be looked upon as breadcrumbs that indicate a journey, not the destination.  They’re a crutch and another band-aid to solve legacy problems.  They’re really a means to an end.

These “innovations” *are* a step in the right direction.  They will let us do great things. They will let a whole new generation of operational models and a revitalized ecosystem flourish AND it will encourage folks to think differently.  But about what?  And to solve what problem(s)?

If you simply expect to layer them on your legacy infrastructure, operational models and people and call it “Cloud,” you’re being disingenuous.

Ultimately, to abuse an analogy, network overlays are a layover on the itinerary of our journey to the Cloud, but not where we should ultimately land. I see too many companies focusing on the transition…and by the time they get there, the target will have moved.  Again.  Just like it always does.

They’re hot now because they reflect something we should have done a long time ago, but like hypervisors, one day [soon] network overlays will become just a feature and not a focus.

/Hoff

 


The Killer App For OpenFlow and SDN? Security.

October 27th, 2011 8 comments

I spent yesterday at the PacketPushers/TechFieldDay OpenFlow Symposium. The event provided a good overview of what OpenFlow [currently] means, how it fits into the overall context of software-defined networking (SDN) and where it might go from here.

I’d suggest reading Ethan Banks’ (@ecbanks) overview here.

Many of us left the event, however, still wondering about what the “killer app” for OpenFlow might be.

Chatting with Ivan Pepelnjak (@ioshints) and Derrick Winkworth (@CloudToad,) I reiterated that selfishly, I’m still thrilled about the potential that OpenFlow and SDN can bring to security.  This was a topic only briefly skirted during the symposium as the ACL-like capabilities of OpenFlow were discussed, but there’s so much more here.

I wrote about this back in May (OpenFlow & SDN – Looking forward to SDNS: Software Defined Network Security):

… “security” needs to be as programmatic/programmable, agile, scaleable and flexible as the workloads (and stacks) it is designed to protect. “Security” in this context extends well beyond the network, but the network provides such a convenient way of defining templated containers against which we can construct and enforce policies across a wide variety of deployment and delivery models.

So as I watch OpenFlow (and Software Defined Networking) mature, I’m really, really excited to recognize the potential for a slew of innovative ways we can leverage and extend this approach to networking [monitoring and enforcement] in order to achieve greater visibility, scale, agility, performance, efficacy and reduced costs associated with security.  The more programmatic and instrumented the network becomes, the more capable our security options will become also.

I had to chuckle at a serendipitous tweet from a former Cisco co-worker (Stefan Avgoustakis, @savgoust) because it’s really quite apropos for this topic:

…I think he’s oddly right!

Frankly, if you look at what OpenFlow and SDN (and network programmability in general) give an operator — visibility and an effective graph of connectivity as well as multi-tuple flow-action capabilities — there are numerous opportunities to leverage the separation of control/data plane across both virtual and physical networks to provide better security capabilities in response to threats and at a pace/scale/radius commensurate with said threat.

To be able to leverage telemetry and flow tables in the controllers “centrally” and then “dispatch” the necessary security response on an as-needed basis to the network location ONLY that needs it, really does start to sound a lot like the old “immune system” analogy that SDN (self defending networks) promised.

The ability to distribute security capabilities more intelligently as a service layer which can be effected when needed — without the heavy shotgunned footprint of physical in-line devices or the sprawl of virtualized appliances — is truly attractive.  Automation for detection and ultimately prevention is FTW.
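As a hedged sketch of that “dispatch only where needed” idea (no real controller framework here; the classes are stand-ins for whatever OpenFlow controller you’d actually use), a detection verdict becomes a drop rule installed only on the switch nearest the offender:

```python
# Illustrative only: turn an IDS alert into a flow rule pushed to just the
# one switch that needs it, with a timeout so it ages out on its own.
from dataclasses import dataclass

@dataclass
class FlowMod:
    switch_id: str
    match: dict
    action: str
    idle_timeout: int = 300   # seconds before the rule expires

class PseudoController:
    """Stand-in for a real controller's southbound interface."""
    def __init__(self, host_to_switch):
        self.host_to_switch = host_to_switch   # host -> attachment switch

    def quarantine(self, offending_ip):
        switch = self.host_to_switch[offending_ip]
        return FlowMod(switch_id=switch,
                       match={"nw_src": offending_ip},
                       action="drop")

ctl = PseudoController({"10.0.3.44": "edge-sw-07"})
print(ctl.quarantine("10.0.3.44"))
# FlowMod(switch_id='edge-sw-07', match={'nw_src': '10.0.3.44'}, action='drop', ...)
```

The interesting part isn’t the drop rule; it’s that the blast radius of the response is scoped to exactly where the threat lives.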

Bundling the capabilities delivered via programmatic interfaces and coupling that with ways of integrating the “network” and “applications” (of which security is one) produces some really neat opportunities.

Now, this isn’t just a classical “data center core” opportunity, either. How about the WAN/Edge?  Campus, branch…? Anywhere you have the need to deliver security as a service.

For non-security examples, check out Dave Ward’s (my Juniper colleague) presentation “Programmable Networks are SFW” where he details interesting use cases such as “service engineered paths,” “service appliance pooling,” “service specific topology,” “content request routing,” and “bandwidth calendaring” for example.

…think of the security ramifications and opportunities linked to those capabilities!

I’ve mocked up a couple of very interesting security prototypes using OpenFlow and some open source security components — from IDP to anti-malware — and the potential is exciting because OpenFlow, in conjunction with other protocols and solutions in the security ecosystem, could provide some of the missing glue necessary to deliver a constant, but abstracted, security command/control (nee API-like) capability across heterogeneous infrastructure.

NOTE: I’m not suggesting that OpenFlow itself provide these security capabilities, but rather enable security solutions to take advantage of the control/data plane separation to provide for more agile and effective security.

If the potential exists for OpenFlow to effectively allow choice of packet forwarding engines and network virtualization across the spectrum of supporting vendors’ switches, it occurs to me that we could utilize it for firewalls, intrusion detection/prevention engines, WAFs, NAC, etc.

Thoughts?


Cloud Security Start-Up: Dome9 – Firewall Management SaaS With a Twist

September 12th, 2011 No comments

Dome9 has peeked its head out from under the beta covers and officially launched their product today.  I got an advanced pre-brief last week and thought I’d summarize what I learned.

As it turns out I enjoy a storied past with Zohar Alon, Dome9’s CEO.  Back in the day, I was responsible for architecture and engineering of Infonet’s (now BT) global managed security services which included a four-continent deployment of Check Point Firewall-1 on Sun Sparcs.

Deploying thousands of managed firewall “appliances” (if I can even call them that now) and managing each of them individually with a small team posed quite a challenge for us.  It seems it posed a challenge for many others also.

Zohar was at Check Point and ultimately led the effort to deliver Provider-1 which formed the basis of their distributed firewall (and virtualized firewall) management solution which piggybacked on VSX.

Fast forward 15 years and here we are again — cloud and virtualization have taken the same set of security and device management issues and amplified them.  Zohar and his team looked at the challenge we face in managing the security of large “web-scale” cloud environments and brought Dome9 to life to help solve this problem.

Dome9’s premise is simple – use a centralized SaaS-based offering to help manage your distributed cloud access-control (read: firewall) management challenge using either an agent (in the guest) or agent-less (API) approach across multiple cloud IaaS platforms.

Their first iteration of the agent-based solution focuses on Windows and Linux-based OSes and can pretty much function anywhere.  The API version currently is limited to Amazon Web Services.

Dome9 seeks to fix the “open hole” access problem created when administrators create rules to allow system access and forget to close/remove them after the tasks are complete.  This can lead to security issues as open ports invite unwanted “guests.”  In their words:

  • Keep ALL administrative ports CLOSED on your servers without losing access and control.
  • Dynamically open any port On-Demand, any time, for anyone, and from anywhere.
  • Send time and location-based secure access invitations to third parties.
  • Close ports automatically, so you don’t have to manually reconfigure your firewall.
  • Securely access your cloud servers without fear of getting locked out.

The unique spin/value-proposition with Dome9 in its initial release is the role/VM/user-focused and TIME-LIMITED access policies you put in place to enable either static (always-open) or dynamic (time-limited) access control for authorized users.

Administrators can set up rules in advance for access, or authorized users can request time-based access dynamically to previously-configured ports by clicking a button.  It quickly opens access and closes it once the time limit has been reached.
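To make the model concrete (and this is my own sketch of the general idea, not Dome9’s implementation), here’s roughly what time-limited port opening looks like against an AWS security group using boto3; the group ID and CIDR are placeholders, and a real system would schedule the close out-of-band rather than sleep in-process:

```python
# Sketch: open an admin port for a bounded window, then slam it shut.
import time
import boto3

def open_port_temporarily(group_id, port, cidr, minutes=15):
    ec2 = boto3.client("ec2")
    rule = dict(GroupId=group_id, IpProtocol="tcp",
                FromPort=port, ToPort=port, CidrIp=cidr)
    ec2.authorize_security_group_ingress(**rule)     # open the hole
    try:
        time.sleep(minutes * 60)                     # the access window
    finally:
        ec2.revoke_security_group_ingress(**rule)    # close it no matter what

# open_port_temporarily("sg-0123456789abcdef0", 22, "203.0.113.10/32")
```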

Basically Dome9 allows you to manage and reconcile “network” based ACLs and — where used — AWS security zones (across regions) with guest-based firewall rules.  With the agent installed, it’s clear you’ll be able to do more in both the short and long-term (think vulnerability management, configuration compliance, etc.) although they are quite focused on the access control problem today.

There are some workflow enhancements I suggested during the demo to enable “users” to request access from “administrators” to ports not previously defined — imagine if port 443 is open to allow a user to install a plug-in that then needs a new TCP port to communicate.  If that port is not previously known/defined, there’s no automated way to open it without an out-of-band process, which makes things clumsy.

We also discussed the issue of importing/supporting identity federation in order to define “users” from the Enterprise perspective across multiple clouds.  They could use your input if you have any.

There are other startups with similar models today such as CloudPassage (I’ve written about them before here) who look to leverage SaaS-based centralized security services to solve IaaS-based distributed security challenges.

In the long term, I see Cloud security services being chained together to form an overlay of sorts.  In fact, CloudFlare (another security SaaS offering) announced a partnership with Dome9 for this very thing.

Dome9 has a 14-day free trial and two available pricing models:

  1. “Personal Server” – a FREE single protected server with a single administrator
  2. “Business Cloud” – Per-use pricing with 5 protected servers at $20 per month

If you’re dealing with trying to get a grip on your distributed firewall management problem, especially if you’re a big user of AWS, check out Dome9.

/Hoff


VMware’s vShield – Why It’s Such A Pain In the Security Ecosystem’s *aaS…

September 4th, 2011 15 comments

I’ve become…comfortably numb…

Whilst attending VMworld 2011 last week, I attended a number of VMware presentations, hands-on labs and engaged in quite a few discussions related to VMware’s vShield and overall security strategy.

I spent a ton of time discussing vShield with customers — some who love it, some who don’t — and thought long and hard about writing this blog.  I also spent some time on SiliconAngle’s The Cube discussing such, here.

I have dedicated quite a lot of time discussing the benefits of VMware’s security initiatives, so it’s important that you understand that I’m not trying to be overtly negative, nor am I simply pointing fingers as an uneducated, uninterested or uninvolved security blogger intent on poking the bear.  I live this stuff…every day, and like many, it’s starting to become messy. (Ed: I’ve highlighted this because many seem to have missed this point. See here for example.)

It’s fair to say that I have enjoyed “up-to-the-neck” status with VMware’s various security adventures since the first marketing inception almost 4 years ago with the introduction of the VMsafe APIs.  I’ve implemented products and helped deliver some of the ecosystem’s security offerings.  My previous job at Cisco was to provide the engineering interface between the two companies, specifically around the existing and next generation security offerings, and I now enjoy a role at Juniper which also includes this featured partnership.

I’m also personal friends with many of the folks at VMware on the product and engineering teams, so I like to think I have some perspective.  Maybe it’s skewed, but I don’t think so.

There are lots of things I cannot and will not say out of respect for obvious reasons pertaining to privileged communications and NDAs, but there are general issues that need to be aired.

Geez, enough with the CYA…get on with it then…

As I stated on The Cube interview, I totally understand VMware’s need to stand alone and provide security capabilities atop their platform; they simply cannot expect to move forward and be successful if they are to depend solely on synchronizing the roadmaps of dozens of security companies with theirs.

However, the continued fumbles and mis-management of the security ecosystem and their partnerships, as well as the continued competitive nature of their evolving security suite, make this difficult.  Listening to VMware espouse that they are in the business of “security ecosystem enablement” when there are so few actually successful ecosystem partners involved beyond antimalware is disingenuous…or at best, a hopeful prediction of some future state.

Here’s something I wrote on the topic back in 2009: The Cart Before the Virtual Horse: VMware’s vShield/Zones vs. VMsafe APIs that I think illustrates some of the issues related to the perceived “strategy by bumping around in the dark.”

A big point of confusion is that vShield simultaneously describes both an ecosystem program and a set of products that actually offers more than just the anti-malware capabilities where the bulk of integration today is placed.

Analysts and journalists continue to miss the fact that “vShield” is actually made up of 4 components (not counting the VMsafe APIs):

  • vShield Edge
  • vShield App
  • vShield Endpoint
  • vShield Manager

What most people often mean when they refer to “vShield” are the last two components, completely missing the point that the first two products — which are now monetized and marketed/sold as core products for vSphere and vCloud Director — basically make it very difficult for the ecosystem to partner effectively since it’s becoming more difficult to exchange vShield solutions for someone else’s.

An important reason for this is that VMware’s sales force is incentivized (and compensated) on selling VMware security products, not the ecosystem’s — unless of course it is in the way of a big deal that only a partnership can overcome.  This is the interesting juxtaposition of VMware’s “good enough” versus incumbent security vendors “best-of-breed” product positioning.

VMware is not a security or networking company, and ignoring the fact that big companies with decades of security and networking products are not simply going to fade away is silly.  This is as true of networking as it is of security (see software-defined networking as an example.)

Technically, vShield Edge is becoming more and more a critical piece of the overall architecture for VMware’s products — it acts as the perimeter demarcation and multi-tenant boundary in their Cloud offerings and continues to become the technology integration point for acquisitions as well as networking elements such as VXLAN.

As a third party tries to “integrate” a product which is functionally competitive with vShield Edge, the problems start to become much more visible and the partnerships more and more clumsy, especially in the eyes of the most important party privy to this scenario: the customer.

Jon Oltsik wrote a story recently in which he described the state of VMware’s security efforts: “vShield, Cloud Computing, and the Security Industry”

So why aren’t more security vendors jumping on the bandwagon? Many of them look at vShield as a potentially competitive security product, not just a set of APIs.

In a recent Network World interview, Allwyn Sequeira, VMware’s chief technology officer of security and vice president of security and network solutions, admitted that the vShield program in many respects “does represent a challenge to the status quo” … (and) vShield does provide its own security services (firewall, application layer controls, etc.)

Why aren’t more vendors on-board? It’s because of this positioning of VMware’s own security products, which enjoy privileged and unobstructed access to the platform that ISVs in the ecosystem do not have.  You can’t move beyond the status quo when there’s not a clear plan for doing so and the past and present are littered with the wreckage of prior attempts.

VMware has its own agenda: tightly integrate security services into vSphere and vCloud to continue to advance these platforms. Nevertheless, VMware’s role in virtualization/cloud and its massive market share can’t be ignored. So here’s a compromise I propose:

  1. Security vendors should become active VMware/vShield partners, integrate their security solutions, and work with VMware to continue to bolster cloud security. Since there is plenty of non-VMware business out there, the best heterogeneous platforms will likely win.
  2. VMware must make clear distinctions among APIs, platform planning, and its own security products. For example, if a large VMware shop wants to implement vShield for virtual security services but has already decided on Symantec (Vontu) or McAfee DLP, it should have the option for interoperability with no penalties (i.e., loss of functionality, pricing/support premiums, etc.).

Item #1 sounds easy enough, right? Except it’s not.  If the way in which the architecture is designed effectively locks out the ecosystem from equal access to the platform, except perhaps for a privileged few, “integrating” security solutions in a manner that makes those solutions competitive and not platform-specific is a tall order.  It also limits innovation in the marketplace.

Look how few startups still exist who orbit VMware as a platform.  You can count them on fewer fingers than exist on a single hand.  As an interesting side-note, Catbird — a company who used to produce their own security enforcement capabilities along with their strong management and compliance suite — has OEM’d VMware’s vShield App product instead of bothering to compete with it.

Now, item #2 above is right on the money.  That’s exactly what should happen; the customer should match his/her requirements against the available options, balance the performance, efficacy, functionality and costs and ultimately be free to choose.  However, as they say in Maine…”you can’t get there from here…” at least not unless item #1 gets solved.

In a complementary piece to Jon’s, Ellen Messmer writes in “VMware strives to expand security partner ecosystem“:

Along with technical issues, there are political implications to the vShield approach for security vendors with a large installed base of customers as the vShield program asks for considerable investment in time and money to develop what are new types of security products under VMware’s oversight, plus sharing of threat-detection information with vShield Manager in a middleware approach.

…and…

The pressure to make vShield and its APIs a success is on VMware in some respects because VMware’s earlier security API, the VMsafe APIs, weren’t that successful. Sequiera candidly acknowledges that, saying, “we got the APIs wrong the first time,” adding that “the major security vendors have found it hard to integrate with VMsafe.”

Once bitten, twice shy…

So where’s the confidence that guarantees it will be easier this time? Basically, besides anti-malware functionality provided by integration with vShield endpoint, there’s not really a well-defined ecosystem-wide option for integration beyond that with VMware now.  Even VMware’s own roadmaps for integration are confusing.  In the case of vCloud Director, while vShield Edge is available as a bundled (and critical) component, vShield App is not!

Also, forcing security products to now integrate directly with vShield Manager makes for even more challenges.

There are a handful of security products besides anti-malware in the market based on the VMsafe APIs, which are expected to be phased out eventually. VMware is reluctant to pin down an exact date, though some vendors anticipate end of next year.

That’s rather disturbing news for those companies who have invested in the roadmap and certification that VMware has put forth, isn’t it?  I can name at least one such company for whom this is a concern. 🙁

Because VMware has so far reserved the role of software-based firewalls and data-loss prevention under vShield to its own products, that has also contributed to unease among security vendors. But Sequiera says VMware is in discussions with Cisco on a firewall role in vShield.   And there could be many other changes that could perk vendor interest. VMware insists its vShield APIs are open but in the early days of vShield has taken the approach of working very closely with a few selected vendors.

Firstly, that’s not entirely accurate regarding firewall options. Cisco and Juniper have both had VMware-specific “firewalls” on the market for some time, albeit using different delivery vehicles.  Cisco uses the tightly co-engineered effort with the Nexus 1000v to provide access to their VSG offering and Juniper uses the VMsafe APIs for the vGW (nee Altor) firewall.  The issue is now one of VMware’s architecture for integration moving forward.

Cisco has announced their forthcoming vASA (virtual ASA) product which will work with the existing Cisco VSG atop the Nexus 1000v, but this isn’t something that is “open” to the ecosystem as a whole, either.  To suggest that the existing APIs are “open” is inaccurate and without an API-based capability available to anyone who has the wherewithal to participate, we’ll see more native “integration” in private deals the likes of which we’re already witnessing with the inclusion of RSA’s DLP functionality in vShield/vSphere 5.

Not being able to replace vShield Edge with an ecosystem partner’s “edge” solution is really a problem.

In general, the potential for building a new generation of security products specifically designed for VMware’s virtualization software may be just beginning…

Well, it’s a pretty important step and I’d say that “beginning” still isn’t completely realized!

It’s important to note that these same vendors who have been patiently navigating VMware’s constant changes are also looking to emerging competitive platforms to hedge their bets. Many have already been burned by their experience thus far and see competitive platform offerings from vendors who do not compete with their own security solutions as much more attractive, regardless of how much marketshare they currently enjoy.  This includes community and open source initiatives.

Given their druthers, with a stable, open and well-rounded program, those in the security ecosystem would love to continue to produce top-notch solutions for their customers on what is today the dominant enterprise virtualization and cloud platform, but it’s getting more frustrating and difficult to do so.

It’s even worse at the service provider level where the architectural implications make the enterprise use cases described above look like cake.

It doesn’t have to be this way, however.

Jon finished up his piece by describing how the VMware/ecosystem partnership ought to work in a truly cooperative manner:

This seems like a worthwhile “win-win,” as that old tired business cliche goes. Heck, customers would win too as they already have non-VMware security tools in place. VMware will still sell loads of vShield product and the security industry becomes an active champion instead of a suspicious player in another idiotic industry concept, “coopitition.” The sooner that VMware and the security industry pass the peace pipe around, the better for everyone.

The only thing I disagree with is how this seems to paint the security industry as the obstructionist in this arms race.  It’s more than a peace pipe that’s needed.

Puff, puff, pass…it’s time for more than blowing smoke.

/Hoff


Unsafe At Any Speed: The Darkside Of Automation

July 29th, 2011 5 comments

I’m a huge proponent of automation. Taking rote processes from the hands of humans & leveraging machines of all types to enable higher agility, lower cost and increased efficacy is a wonderful thing.

However, there’s a trade off; as automation matures and feedback loops become more closed with higher and higher clock rates yielding less time between execution, our ability to both detect and recover — let alone prevent — within a cascading failure domain is diminished.

Take three interesting, yet unrelated, examples:

  1. The premise of the W.O.P.R. in War Games — Joshua goes apeshit and almost starts WWIII by invoking a simulated game of global thermonuclear war
  2. The Airbus A380 failure – the luck of having 5 pilots on board, and their skill in overriding hundreds of cascading automation failures after an engine failure, prevented a crash that would have killed hundreds of people.*
  3. The AWS EBS outage — the cloud version of Girls Gone Wild; automated replication caught in a FOR…NEXT loop

These weren’t “maliciously initiated” issues, they were accidents.  But how about “events” like Stuxnet?  What about a former Gartner analyst having his home automation (CASA-SCADA) control system hax0r3d!? There’s another obvious one missing, but we’ll get to that in a minute (hint: Flash Crash)

How do we engineer enough failsafe logic up and down the stack that can function at the same scale as the decision and controller logic does?  How do we integrate/expose enough telemetry that can be produced and consumed fast enough to actually allow actionable results in a timeframe that allows for graceful failure and recovery (nee survivability)?

One last example that is pertinent: high frequency trading (HFT) —  highly automated, computer driven, algorithmic-based stock trading at speeds measured in millionths of a second.

Check out how this works:

[Check out James Urquhart’s great Wisdom Of the Clouds blog post: “What Cloud Computing Can Learn From Flash Crash“]

In the use-case of HFT, ruthlessly squeezing nanoseconds from the processing loops — removing as much latency as possible from every element of the stack — literally has implications in the millions of dollars.

Technology vendors are doing many very interesting and innovative things architecturally to achieve these goals — some of them quite audacious — and anything that gets in the way or adds latency is generally not considered “useful.”  Security is usually one of them.

There are most definitely security technologies that allow for very low latency insertion of things like firewalls with low single-digit microsecond latency figures (small packet,) but interestingly enough we’re also governed by the annoying laws of physics, and things like propagation delay, serialization delay, TCP/IP protocol overhead, etc. all add up.
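To put rough numbers on that physics tax (back-of-the-envelope figures, not measurements of any particular product):

```python
# Back-of-the-envelope latency budget for one in-line security hop.
FRAME_BITS     = 64 * 8      # minimum-size Ethernet frame, in bits
LINK_BPS       = 10e9        # 10 Gbps link
FIBER_NS_PER_M = 5           # ~5 ns per meter of fiber propagation
HOP_METERS     = 100         # cable run to and from the appliance
FIREWALL_US    = 5           # optimistic small-packet firewall latency

serialization_us = FRAME_BITS / LINK_BPS * 1e6                # ~0.05 us
propagation_us   = 2 * HOP_METERS * FIBER_NS_PER_M / 1000     # ~1 us round trip
total_us = serialization_us + propagation_us + FIREWALL_US

print(f"extra latency per in-line hop: ~{total_us:.1f} microseconds")
# In a trading loop fighting for single-digit microseconds, that's enormous.
```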

Thus traditional approaches to “in-line” security — both detective and preventative — are not generally sustainable in these environments and thus require some deep thought so as to provide solutions that will scale as well as these HFT systems do…no short order.

I think this is another good use for “big data” and security data analytics.  Consider very high speed side-band systems that function along with these HFT systems that could potentially leverage the logic in these transactional trading systems to allow us to get closer to being able to solve the challenges of these environments.  Integrate these signaling and telemetry planes with “fabric-enabled” security capabilities and we might get somewhere useful.

This tees up nicely my buddy James Arlen’s talk at Blackhat on the insecurity of high frequency trading systems: “Security when nano seconds count”  You should plan on checking it out…I know I will.

/Hoff

*H/T to @reillyusa who also pointed me to “Questions Raised About Airbus Automated Control System” regarding the doomed Air France 447 flight.  Also, serendipitously, @etherealmind posted a link to a story titled “Volkswagen demonstrates ‘Temporary Auto Pilot'” — what could *possibly* go wrong? 😉
