The Cuban Cloud Missile Crisis…Weapons Of Mass Abstraction.

September 7th, 2012

In the midst of the Cold War, in October of 1962, the United States and the Soviet Union stood perilously on the brink of nuclear war as a small island some 90 miles off the coast of Florida became the focal point of intense foreign policy scrutiny, challenges to sovereignty and political arm wrestling the likes of which had never been seen before.

Photographic evidence provided by a high altitude U.S. spy plane exposed the until-then secret construction of medium- and intermediate-range ballistic nuclear missile launch sites, constructed by the Soviet Union and deliberately placed close enough to reach the continental United States.

The United States, already uneasy about relations with communist Cuba, had the year before unsuccessfully attempted a CIA-led forceful invasion and overthrow of the Cuban regime at the Bay of Pigs — and the Soviets' unprecedented move only heightened the alarm.

This did not sit well with either the Cubans or the Soviets.  A nightmare scenario ensued as the Soviets responded with threats of their own to defend their ally (and strategic missile sites) at any cost, declaring the Americans' actions unprovoked and unacceptable.

During an incredibly tense standoff, the U.S. mulled over plans to again attack Cuba both by air and sea to ensure the disarmament of the weapons that posed a dire threat to the country.

As posturing and threats continued to escalate from the Soviets, President Kennedy elected to pursue a less direct military action: a naval blockade designed to prevent the shipment of supplies necessary for the completion and activation of launchable missiles.  Using this as a lever, the U.S. continued to demand that Russia dismantle and remove all nuclear weapons while preventing any and all naval traffic to and from Cuba.

Soviet premier Khrushchev protested such acts of “direct aggression” and communicated to President Kennedy that his tactics were plunging the world into the depths of potential nuclear war.

While both countries publicly traded threats of war, the bravado, posturing and defiance were actually a cover for secret backchannel negotiations involving the United Nations. The Soviets promised they would dismantle and remove nuclear weapons, support infrastructure and transports from Cuba, and the United States promised not to invade Cuba while also removing nuclear weapons from Turkey and Italy.

The Soviets made good on their commitment two weeks later.  Eleven months after the agreement, the United States complied, removing its weapons abroad from service.

The Cold War ultimately ended and the Soviet Union fell, but the political, economic and social impact remains even today — 50 years later we have uneasy relations with (now) Russia and the United States still enforces ridiculous economic and social embargoes on Cuba.

What does this have to do with Cloud?

Well, it’s a cute “movie of the week” analog desperately in need of a casting call for Nikita Khrushchev and JFK.  I hear Gary Busey and Ashton Kutcher are free…

As John Furrier, Dave Vellante and I were discussing on theCUBE recently at VMworld 2012, there exists an uneasy standoff — a cold war — between the so-called “super powers” staking a claim in Cloud.  The posturing and threats currently in process don’t quite have the world-ending outcomes that nuclear war would bring, but it could have devastating technology outcomes nonetheless.

In this case, the characters of the Americans, Soviets, Cubans and the United Nations are played by networking vendors, SDN vendors, virtualization/abstraction vendors, cloud “stack” projects/efforts/products and underlying CPU/chipset vendors (not necessarily in that order…)  The rest of the world stands by as their fate is determined on the world’s stage.

If we squint hard enough at Cloud, we might find our very own version of the “Bay of Pigs” in what’s going on with OpenStack.

The “community” effort behind OpenStack is one largely based on “industry,” and if we think of OpenStack as Cuba, it’s being played as a pawn in the much larger battle for global domination.  The munitions being stockpiled in this tiny little enclave threaten to disrupt relations of epic proportions.  That’s why we now see so much strategic movement around an initiative and technology that many outside of the navel-gazers haven’t really paid much attention to.

Then there are players like Amazon Web Services who, like the China of today, quietly amass their weapons of mass abstraction as the industry jockeying and distractions play on (but that’s a topic for another post).

Cutting to the chase…if we step back for a minute

Intel is natively bundling more and more networking and virtualization capabilities into its CPUs/chipsets, and a $7B investment in security company McAfee makes it a serious player there.  VMware is de-emphasizing the “hypervisor” and is instead positioning itself as focused on end-to-end solutions which include everything from secure mobility to orchestration/provisioning and now, with Nicira, networking.  Networking companies like Cisco and Juniper continue to move up-stack to more deeply integrate networking and security, along with service overlays, in order to remain relevant in light of virtualization and SDN.

…and OpenStack’s threat of disrupting all of those plays makes it important enough to pay attention to.  It’s a little island of technology that is causing huge behemoths to collide.  A molehill that has become a mountain.

If today’s announcements of VMware and Intel joining OpenStack as Gold Members along with the existing membership by other “super powers” doesn’t make it clear that we’re in the middle of an enormous power struggle, I’ve got a small Island to sell you 😉

Me?  I’m going to make some Lechon Asado, enjoy a mojito and a La Gloria Cubana.


Enhanced by Zemanta

TL;DR But My Virtual Machine Liked Me On Facebook Anyway…

September 2nd, 2012

I usually don’t spend much time when I write a blog, but this was ridiculously difficult to write.

I’m neither a neuroscientist nor a computer scientist. I’ve dabbled in AI and self-organizing maps, but I can barely do fractions, so every sentence of this blog had me doubting whether to write it. It’s probably shit, but I enjoyed thinking about it.

The further I tried to simplify my thoughts, the less cogent they became and what spooled outward onto my screen resembled more porridge than prose.

That said, I often feel stymied while writing. When someone else has already crystallized thoughts to which adding commentary seems pandering, redundant, or potentially intellectually fraudulent, it feels like there’s no possible way that my thoughts spilling out are original, credible, or meaningful.

This is especially the case when brilliant people have written brilliant things on the topic.

“On the shoulders of giants” and all that…

Skynet, The Matrix, The Singularity, The Borg…all of these examples popped into my head as I wrote, destroying my almost-sensical paragraphs with clumsy analogs that had me longing to reduce my commentary to nothing more than basic Twitter and Facebook-like primitives: “< +1” or “Like.” It was all just a big pile of fail.

The funny thing is, that’s actually where this story begins and why its genesis was so intriguing.

Alex Williams wrote an article titled “How Machines Will Use Social Networks To Gain Identity, Develop Relationships And Make Friends.”

He offered up a couple of interesting examples from some conceptual “demos” from last week’s VMworld.  I re-read the article and found that the topic was profound, relevant and timely.

At its core, Alex challenges us to reimagine how “machines” — really, combinations of infrastructure and applications that process information — might (self) identify, communicate, interoperate, organize and function as part of a collective construct, using a codified language that mimics the channels we humans are today using in the social patterns and graphs that define our relationships online.

The article wobbled a bit with the implication that machines might “feel,” but stripping relevant actions or qualitative measures such as “like” or “dislike” down to their core, it’s not hard to imagine how machines might evaluate or re-evaluate relationships, behavior and (re)actions based on established primitives such as “good,” “bad,” “available” or “malfunctioned.”

I know that’s how my wife generally thinks of me.

Frankly, it’s a simple concept. Even for humans. As an intelligently-complex species, humans define even heady things like emotional responses as a function of two fundamental neurotransmitters — chemical messengers — the biogenic amines serotonin and dopamine. The levels of these neurotransmitters are normally quite reasonably regulated but can be heightened or depressed based on the presence of and interaction with other chemical compounds. These neurochemical interactions may yield behavioral or even systemic immune system responses that manifest themselves in a variety of ways; from happiness to disease.

One might imagine that machines might likewise interact and form behavioral responses to, and thus relationships with, other groups of machines in either like-minded or opposing “clusters” using a distilled version of the very “activity streams” that humans feed into and out of using social media, defined by the dynamic, organic and chaotic social graph that ties them.

[I just noticed that my friend and prior colleague Mat Matthews from Plexxi wrote a blog on “affinity” and described this as Socially Defined Networks. Brilliant. ]

I’m sure that in some way, they already do. But again, I’m hung up on the fact that my NEST thermostat may actually be out to kill me…and tweet about it at an ecologically sound point in time when electricity costs are optimal.

The notion that machines will process these activity streams like humans do — and act on them — is really a natural extension of how today’s application architectures and infrastructure designs already utilize message buses and APIs to intercommunicate. It’s a bit of a re-hash of many topics and the autonomic, self-learning, HAL-9000 batshit-crazy compute concepts we’ve all heard of before.
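To make the idea concrete, here’s a toy sketch — every name and the “reaction” scoring are invented for illustration, not any real product — of machines feeding a shared activity stream and distilling it into the relationship primitives described above:

```python
from collections import defaultdict

# A toy in-process "activity stream": machines publish status events
# ("good", "bad", "available", "malfunctioned") and peers react by
# adjusting an affinity score -- a machine-scale "like"/"dislike".

REACTIONS = {"good": +1, "available": +1, "bad": -1, "malfunctioned": -2}

class ActivityStream:
    def __init__(self):
        self.subscribers = []

    def publish(self, source, status):
        # Fan the event out to every machine except the one that emitted it.
        for machine in self.subscribers:
            if machine.name != source:
                machine.react(source, status)

class Machine:
    def __init__(self, name, stream):
        self.name = name
        self.affinity = defaultdict(int)  # peer name -> relationship score
        self.stream = stream
        stream.subscribers.append(self)

    def announce(self, status):
        self.stream.publish(self.name, status)

    def react(self, source, status):
        self.affinity[source] += REACTIONS.get(status, 0)

stream = ActivityStream()
web, db = Machine("web-01", stream), Machine("db-01", stream)
db.announce("available")
db.announce("malfunctioned")
print(web.affinity["db-01"])  # 1 + (-2) = -1
```

A real system would, of course, use a durable message bus rather than an in-process list, but the social-graph mechanics are the same.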

On Twitter, reacting to what he sensed as “sensationalism,” Thomas Lukasik (@sparkenstein) summarized my assessment of this concept (and thus rendering all these words even more useless) thusly:

“…my immediate response was that a “social network” is an ideal model 2 take advantage of N autonomous systems.”

My response: +1 (see what I did there? 😉

But what differentiates the human social graph from the non-kinetic “cyber” graph is the capacity, desire and operational modality that describes how, when and why events are processed (or not). That, and crazy ex-girlfriends, pictures of dinner and political commentary.

I further addressed Thomas’ complaint that we’d seen this before by positing that “how humans are changing the way we interact will ultimately define how the machines we design will, too.”

To wit, machines don’t necessarily have the complexity, variety, velocity and volume of unrelated stimuli and distractions that humans do. We have more senses, and we have fuzzy responses where they have binary ones. They are simpler, more discrete “creatures,” and as their taskmasters we enjoy a highly leveraged, somewhat predictable and reasonably consistent way in which they process and respond to events.

Usually until something kinetic or previously undefined occurs. Then, the dependency on automation and the ability for the discrete and systemic elements to “learn,” adapt, interact and leverage previously unrelated relationships with other nodes becomes important.  I wrote about that here: Unsafe At Any Speed: The Darkside Of Automation

What’s really relevant here, however, is that the “social graph” approach — the relationship between entities and the policies established to govern them — can help close that gap.  Autonomous is cool.  Being part of an “autonomous collective” is cooler. As evidence, I offer up that scene with the peasants in “Monty Python and the Holy Grail.”

In fact, if one were to look at computer networks, we’ve seen the evolution from centralized to distributed and now hybrid models of how the messages and state between entities are communicated and controlled.

Now, take a deep breath because I’m about to add yet another bit of “sensationalism” that Thomas will probably choke on…

The notion of separating the control, data and management planes that exist in the form of protocols and communication architectures is already bubbling to the surface in the highly-hyped area of software defined networking (SDN).

I’m going to leave the bulk of my SDN example for another post, but bear with me for just a minute.  (Actually, this is where the blog descends into really crappily thought out rambling.)

If we give the applications and the infrastructure — both critical components of “the machine” — the capability to communicate in an automated manner, to contextualize the notion that an event or message might indicate a need for a state change, a difference in service delivery, or even something such as locality, and to share this information with those who have a pre-defined relationship and a need-to-know, much goodness may occur.

Think: security.

This starts to bring back into focus the notion that, like a human immune system, such a system could identify, localize and respond to an event, signalling to the collective the disposition of the event and what may be needed to deal with it.

The implications are profound because as the systems of “machines” become increasingly more networked, adaptive and complex, they become more like living organisms and these collective “hives” will behave less like binary constructs, and much more like fuzzy communities of animals such as ants or bees.

If we bring this back into the teeniest bit more relevant focus — let’s say virtualized data centers or even (gasp!) Cloud, I think that collision between “social” and “networking” really can take on a broader meaning, especially within the context of how systems intercommunicate and interact with one another.

As an example, the orchestration, provisioning, automation and policy engines we’re deploying today are primitive. The fact that applications and infrastructure are viewed as discrete and not as a system further complicates the problem space because the paths, events, messages and actions are incomprehensible to each of these discrete layers.  This is why we can’t have nice things, America.

What’s coming, however, are really interesting collisions of relevant technology combined with fantastic applications of defining and leveraging the ways in which these complex systems of machines can become much more useful, interactive, communicative and “social.”

I think that’s what Alex was getting at when he wrote:

…points to an inevitable future. The machines will have a voice. They will communicate in increasingly human-like ways. In the near term, the advancements in the use of social technologies will provide contextual ways to manage data centers. Activity streams serve as the language that people understand. They help translate the interactions between machines so problems can be diagnosed faster.

By treating machines as individuals we can better provide visualizations to orchestrate complex provisioning and management tasks. That is inevitable in a world which requires more simple ways to orchestrate the increasingly dynamic nature for the ways we humans live and work with the machines among us.

Johnny Five is Alive.

Like.


SiliconAngle Cube: Hoff On Security – Live At VMworld 2012

August 31st, 2012

I was thrilled to be invited back to the SiliconAngle Cube at VMworld 2012, where John Furrier, Dave Vellante and I spoke in depth about security, virtualization and software defined networking (SDN).

I really like the way the chat turned out — high octane, fast paced and some great questions!

Here is the amazing full list of speakers during the event.  Check it out, ESPECIALLY Martin Casado’s talk.

As I told him, I think he is like my Obi Wan…my only hope for convincing my friends at VMware that networking and security require more attention and a real embrace of the ecosystem…

I’d love to hear your feedback on the video.

/Hoff

 


Software Defined Networking (In)Security: All Your Control Plane Are Belong To Us…

August 20th, 2012

My next series of talks is focused on the emerging technology, solutions and security architectures of so-called “Software Defined Networking” (SDN).

As this space heats up, I see a huge opportunity for new and interesting ways in which security can be delivered — the killer app? — but I am also concerned that, per usual, security is a potential afterthought.

As a minimal example, the separation of control and data planes (much as we saw with compute-centric virtualization) means we now have additional (or at least bifurcated) attack surfaces and threat vectors.  And, not unlike compute-centric virtualization, the C&C channels for network operation represent a juicy target.
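A minimal sketch of why that channel is so juicy — the classes and message names below are illustrative, not a real OpenFlow implementation: the data plane blindly enforces whatever the control channel installs, so owning the channel means silently owning forwarding.

```python
# Toy model of control/data plane separation: the data plane only matches
# packets against a flow table; all intelligence lives behind a control
# channel that installs rules. Whoever speaks on that channel -- controller
# or attacker -- rewrites forwarding for every switch behind it.

class DataPlane:
    def __init__(self):
        self.flow_table = {}  # dst address -> action string

    def handle(self, dst):
        # Unknown traffic gets punted up to the controller, known traffic
        # is handled entirely by the installed rule.
        return self.flow_table.get(dst, "punt-to-controller")

class ControlChannel:
    """Stands in for the C&C session (think: an OpenFlow-style channel)."""
    def __init__(self, plane):
        self.plane = plane

    def flow_mod(self, dst, action):
        self.plane.flow_table[dst] = action

switch = DataPlane()
channel = ControlChannel(switch)

channel.flow_mod("10.0.0.5", "forward:port2")   # legitimate controller
print(switch.handle("10.0.0.5"))                # forward:port2

channel.flow_mod("10.0.0.5", "forward:port9")   # attacker on the channel
print(switch.handle("10.0.0.5"))                # traffic silently diverted
```

The switch has no way to distinguish the two `flow_mod` calls, which is exactly why authenticating and isolating the control channel matters more here than in a box-by-box world.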

There are many more interesting elements that deserve more attention paid to them — new protocols, new hardware/software models, new operational ramifications…and I’m going to do just that.

If you’re a vendor who cares to share what you’re doing to secure your SDN offerings — and I promise I’ll be fair and balanced as I always am — please feel free to reach out to me.  If you don’t and I choose to include your solution based on access to what data I have, you run the risk of being painted inaccurately <hint>

If you have any ideas, comments or suggestions on what you’d like to see featured or excluded, let me know.  This will be along the lines of what I did with the “Four Horsemen Of the Virtualization Security Apocalypse” back in 2008.

Check out a couple of previous ramblings related to SDN (and OpenFlow) with respect to security below.

/Hoff


Incomplete Thought: Virtual/Cloud Security and The Potemkin Village Syndrome

August 16th, 2012

A “Potemkin village” is a Russian expression derived from folklore of the 1700s.  The story goes something like this: Grigory Potemkin, a military leader and statesman, erected attractive but completely fake settlements constructed only of facades to impress Catherine the Great (Empress of Russia) during a state visit, in order to gain favor and otherwise hype the value of recently subjugated territories.

I’ll get to that (and probably irate comments from actual Russians who will chide me for my hatchet job on their culture…)

Innovation in technology over the last decade has brought fundamental shifts in the way in which we work, live, and play. In the last 4 years, the manner in which technology products and services are enabled by this “digital supply chain” — and the manner in which they are designed, built and brought to market — has also pivoted.

Virtualization and Cloud computing — the technologies and operational models — have contributed greatly to this.

Interestingly enough, the faster technology evolves, the more lethargic, fragile and fractured security seems to be.

This can be explained in a few ways.

First, the trust models, architecture and operational models surrounding how we’ve “done” security simply are not designed to absorb this much disruption so quickly.  The fact that we’ve relied on physical segregation and on static policies that combine locality and service definition — while mobility and (now) highly dynamic application deployment options undermine both — means that we’re simply disconnected.

Secondly, fragmentation and specialization within security mean that we have no cohesive, integrated or consistent approach to how we define or instantiate “security,” and so customers are left to integrate disparate solutions at multiple layers (think physical and/or virtual firewalls, IDP, DLP, WAF, AppSec, etc.)  What services and “hooks” the operating systems, networks and provisioning/orchestration layers offer largely dictates what we can do with the skills and “best practices” we already have.

Lastly, the (un)natural market consolidation behavior — wherein aspiring technology startups are acquired and absorbed into larger behemoth organizations — means that innovation cycles in security quickly become victims of stunted periodicity, reduced focus on solving specific problems, cultural subduction, and artificially constrained scope based on P&L models that are detached from reality and customers, and out of step with the trends that end up driving more disruption.

I’ve talked about this process as part of the “Security Hamster Sine Wave of Pain.”  It’s not a malicious or evil plan on behalf of vendors to conspire to not solve your problems, it’s an artifact of the way in which the market functions — and is allowed to function.

What this yields is that when new threat models, evolving vulnerabilities and advanced adversarial skill sets are paired with massively disruptive approaches and technology “conquests,” the security industry basically erects facades of solutions, obscuring the fact that in many cases not only is the foundation lacking for the house of cards we’ve built, but, interestingly, there’s not much more to it than that.

Again, this isn’t a plan masterminded by a consortium of industry “Dr. Evils.”  Actually, it’s quite simple: It’s inertial…if you keep buying it, they’ll keep making it.

We are suffering then from the security equivalent of the Potemkin Village syndrome; our efforts are largely built to impress people who are mesmerized by pretty facades but don’t take the time to recognize that there’s really nothing there.  Those building it, while complicit, find it quite hard to change.

Until the revolution comes.

To wit, we have hardworking members of the proletariat, toiling away behind the scenes struggling to add substance and drive change in the way in which we do what we do.

Adding to this is the good news that those two aforementioned “movements” — virtualization and cloud computing — are exposing the facades for what they are and we’re now busy shining the light on unstable foundations, knocking over walls and starting to build platforms that are fundamentally better suited to support security capabilities rather than simply “patching holes.”

Most virtualization and IaaS cloud platforms are still woefully lacking the native capabilities or interfaces to build security in, but that’s the beauty of platforms (as a service): you can encourage more “universally” the focus on the things that matter most — building resilient and survivable systems, deploying secure applications, and identifying and protecting information across its lifecycle.

Realistically this is a long view and it is going to take a few more cycles on the Hamster Wheel to drive true results.  It’s frankly less about technology and rather largely a generational concern with the current ruling party who governs operational security awaiting deposition, retirement or beheading.

I’m looking forward to more disruption, innovation and reconstruction.  Let’s fix the foundation and deal with hanging pictures later.  Redecorating security is for the birds…or dead Russian royalty.

/Hoff


The Soylent Green of “Epic Hacks” – It’s Made of PEOPLE!

August 7th, 2012

Allow me to immediately state that I am, in no way, attempting to blame or shame the victim in my editorial below.

However, the recent rash of commentary from security wonks on Twitter and blogs regarding who is to “blame” in Mat Honan’s unfortunate experience leaves me confused and misses an important point.

Firstly, the title of the oft-referenced article documenting the series of events is at the root of my discontent:

How Apple and Amazon Security Flaws Led to My Epic Hacking

As I tweeted, my assessment and suggestion for a title would be:

How my poor behavior led to my epic hacking & flawed trust models & bad luck w/Apple and Amazon assisted

…especially when coupled with what is clearly an admission by Mr. Honan, that he is, fundamentally, responsible for enabling the chained series of events that took place:

In the space of one hour, my entire digital life was destroyed. First my Google account was taken over, then deleted. Next my Twitter account was compromised, and used as a platform to broadcast racist and homophobic messages. And worst of all, my AppleID account was broken into, and my hackers used it to remotely erase all of the data on my iPhone, iPad, and MacBook.

In many ways, this was all my fault. My accounts were daisy-chained together. Getting into Amazon let my hackers get into my Apple ID account, which helped them get into Gmail, which gave them access to Twitter. Had I used two-factor authentication for my Google account, it’s possible that none of this would have happened, because their ultimate goal was always to take over my Twitter account and wreak havoc. Lulz.

Had I been regularly backing up the data on my MacBook, I wouldn’t have had to worry about losing more than a year’s worth of photos, covering the entire lifespan of my daughter, or documents and e-mails that I had stored in no other location.

Those security lapses are my fault, and I deeply, deeply regret them.

The important highlighted snippets above are obscured by the salacious title and the bulk of the article, which focuses on the services — which he enabled and relied upon, however flawed certain components of that trust and process may have been — rather than on what’s *really* at the center of the debate here.  Or what ought to be.

There’s clearly a bit of emotional transference occurring.  It’s easier to associate causality with a faceless big corporate machine rather than swing the light toward the victim, even if he, himself, self-identifies.

Before you think I’m madly defending and/or suggesting that there weren’t breakdowns with any of the vendors — especially Apple — let me assure you I am not.  There are many things that can and should be addressed here, but leaving out the human element, the root of it all here, is dangerous.

I am concerned that, as a community, there is often an air of suggestion that consumers are incapable and inculpable with respect to understanding the risks associated with the clicky-clicky-connect syndrome that all of these interconnected services bring.

People give third party applications and services unfettered access to services like Twitter and Facebook every day — even when messages surrounding the potential incursion of privacy and security are clearly stated.

When something does fail — and it does and always will — we vilify the suppliers (sometimes rightfully so for poor practices) but we never really look at what we need to do to prevent having to see this again: “Those security lapses are my fault, and I deeply, deeply regret them.”

The more interconnected things become, the more dependent we shall be upon flawed trust models and the expectation that users aren’t responsible.

This is the point I made in my presentations: Cloudifornication and Cloudinomicon.

There’s a lot of interesting discussion regarding the effectiveness of security awareness training.  Dave Aitel started a lively one here: “Why you shouldn’t train employees for security awareness.”

It’s unfortunate that the only real way people learn is through misfortune, and any way you look at it, that’s the thing that drives awareness.

There are many lessons we can learn from Mr. Honan’s unfortunate experience…I urge you to focus less on blaming one link in the chain and instead guide the people you can influence to reconsider decisions of convenience against the potential tradeoffs they incur.

/Hoff

P.S. For you youngsters who don’t get the Soylent Green reference, see here.  Better yet, watch it. It’s awesome. Charlton Heston, FTW.



Brood Parasitism: A Cuckoo Discussion Of Smart Device Insecurity By Way Of Robbing the NEST…

July 18th, 2012

 

I’m doing some research, driven by recent groundswells of some awesome security activity focused on so-called “smart meters.”  Specifically, I am interested in the emerging interconnectedness, consumerization and prevalence of more generic smart devices and home automation systems and what that means from a security, privacy and safety perspective.

I jokingly referred to something like this way back in 2007…who knew it would be more reality than fiction.

You may think this is interesting.  You may think this is overhyped and boorish.  You may even think this is cuckoo…

Speaking of which, back to the title of the blog…

Brood parasitism is defined as:

A method of reproduction seen in birds that involves the laying of eggs in the nests of other birds. The eggs are left under the parental care of the host parents. Brood parasitism may occur between species (interspecific) or within a species (intraspecific). [About.com]

A great example is that of the female European Cuckoo, which lays an egg that mimics that of a host species.  After hatching, the young Cuckoo may actually dispose of the host’s eggs by shoving them out of the nest with an evolved physical adaptation — a depression in its back.  Once hatched, the forced-adoptive parent birds, tricked into thinking the hatchling is legitimate, care for an imposter that may actually grow larger than they are, and then struggle to keep up with its care and feeding.

What does this have to do with “smart device” security?

I’m a huge fan of my NEST thermostat. 🙂 It’s a fantastic device which, using self-learning concepts, manages the heating and cooling of my house.  It does so by understanding, over time, how my family and I utilize the controls, in combination with knowing when we’re at home and when we’re away.  It communicates with — and allows control over — my household temperature management over the Internet.  It also has an API <wink wink>.  It uses an ARM Cortex A8 CPU and has both Wifi and Zigbee radios <wink wink>.

…so it knows how I use power.  It knows when I’m at home and when I’m not.  It allows for remote, out-of-band Internet connectivity.  It uses my Wifi network to communicate.  It will, I am sure, one day intercommunicate with OTHER devices on my network (which, btw, is *loaded* with other devices already).

So back to my cuckoo analog of brood parasitism and the bounty of “robbing the NEST…”

I am working on researching the potential for subverting the control plane for my NEST (amongst other devices) and using that to gain access to information regarding occupancy, usage, etc.  I have some ideas for how this information might be (mis)used.

Essentially, I’m calling the tool “Cuckoo,” and its job is that of its nest-robbing namesake — to be fed illegitimately and outgrow its surrogate trust model in order to do bad things™.
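To illustrate the kind of inference I mean — and to be clear, every field and sample below is hypothetical, not the actual NEST protocol or API — leaked thermostat state collapses into an occupancy schedule with almost no work:

```python
from datetime import datetime

# Hedged sketch: if a subverted control plane leaked periodic state samples
# (timestamps and a home/away mode flag are invented stand-ins here),
# occupancy patterns fall out almost for free.

samples = [  # (timestamp, reported mode) as leaked state might look
    ("2012-07-18T08:00", "home"),
    ("2012-07-18T09:00", "away"),
    ("2012-07-18T17:30", "home"),
    ("2012-07-18T22:00", "home"),
]

def occupancy_windows(samples):
    """Collapse leaked mode samples into 'nobody home' windows."""
    windows, away_since = [], None
    for ts, mode in samples:
        t = datetime.fromisoformat(ts)
        if mode == "away" and away_since is None:
            away_since = t                      # house just emptied
        elif mode == "home" and away_since is not None:
            windows.append((away_since, t))     # someone came back
            away_since = None
    return windows

for start, end in occupancy_windows(samples):
    print(f"house empty {start:%H:%M} -> {end:%H:%M}")
```

Four innocuous-looking samples and an attacker has a daily window in which the house is reliably empty — which is precisely why the control plane for these devices deserves scrutiny.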

This will dovetail on work that has been done in the classical “smart meter” space such as what was presented at CCC in 2011 wherein the researchers were able to do things like identify what TV show someone was watching and what capabilities like that mean to privacy and safety.

If anyone would like to join in on the fun, let me know.

/Hoff

 


Back To The Future: Network Segmentation & More Moaning About Zoning

July 16th, 2012

A Bit Of Context…


The last 3 years have been very interesting when engaging with large enterprises and service providers as they set about designing, selecting and deploying their “next generation” network architecture. These new networks are deployed in timescales that see them collide with disruptive innovation such as fabrics, cloud, big data and DevOps.

In most cases, these network platforms must account for the nuanced impact of virtualized design patterns, refreshes of programmatic architecture and languages, and the operational model differences these things introduce.  What’s often apparent is that no matter how diligent the review, by the time these platforms are chosen, many tradeoffs are made — especially when it comes to security and compliance — and we arrive at the old adage: “You can get fast, cheap or secure…pick two.”

…And In the Beginning, There Was Spanning Tree…

The juxtaposition of flatter and flatter physical networks, née “fabrics” (compute, network and storage,) with what seems to be a flip-flop transition between belief systems and architects who push for either layer 2 or layer 3 (or encapsulated versions thereof) segmentation at the higher layers, is again aggravated by the continued push for security boundary definition — which yields further segmentation based on policy at the application and information layers.

So what we end up with is the benefits of flatter, any-to-any connectivity at the physical networking layer with a “software defined” and virtualized networking context floating both alongside (Nicira, BigSwitch, OpenFlow) as well as atop it (VMware, Citrix, OpenStack Quantum, etc.) with a bunch of protocols ladled on like some protocol gravy blanketing the Chicken Fried Steak that represents the modern data center.

Oh!  You Mean the Cloud…

Now, there are many folks who don’t approach it this way, and instead abstract away much of what I just described.  In Amazon Web Services’ case as a service provider, they dumb down the network sufficiently and control the abstracted infrastructure to the point that “flatness” is the only thing customers get and if you’re going to run your applications atop, you must keep it simple and programmatic in nature else risk introducing unnecessary complexity into the “software stack.”

The customers who then depend upon these simplified networking services must then absorb the gaps introduced by a lack of features by architecturally engineering around them, becoming more automated, instrumented and programmatic in nature or add yet another layer of virtualized (and generally encrypted) transport and execution above them.

This works if you’re able to engineer your way around these gaps (or make them less relevant,) but generally this is where segmentation becomes an issue, due to security and compliance design patterns which depend on the “complexity” introduced by the very flexible networking constructs available in most enterprise or SP networks.

It’s like a layered cake that keeps self-frosting.

Software Defined Architecture…

You can see the extreme opportunity for Software Defined *anything* then, can’t you? With SDN, let the physical networks NOT be complex but rather more simple and flat and then unify the orchestration, traffic steering, service insertion and (even) security capabilities of the physical and virtual networks AND the virtualization/cloud orchestration layers (from the networking perspective) into a single intelligent control plane…

That’s a big old self-frosting cake.

Basically, this is what AWS has done…but all that intelligence normally provided by the single pane of glass is currently left up to the app owners atop it.  That’s the downside.  Sufficiently enlightened AWS customers are generally aware of this and understand the balance of benefits and limitations of this path.

In an enterprise environment, however, it’s a timing game between the controller vendors, the virtualization/cloud stack providers, the networking vendors, and security vendors …each trying to offer up this capability either as an “integrated” capability or as an overlay…all under the watchful eye of the auditor who is generally unmotivated, uneducated and unnerved by all this new technology — especially since the compliance frameworks and regulatory elements aren’t designed to account for these dramatic shifts in architecture or operation (let alone the threat curve of advanced adversaries.)

Back To The Future…Hey, Look, It’s Token Ring and DMZs!

As I sit with these customers who build these nextgen networks, the moment segmentation comes up, the elegant network and application architectures rapidly crumble into piles of asset-based rubble as what happens next borders on the criminal…

Thanks to compliance initiatives — PCI is a good example — no matter how well scoped, those flat networks become more and more logically hierarchical.  Because SDN is still nascent and we’re lacking that unified virtualized network (and security) control plane, we end up resorting back to platform-specific “less flat” network architectures in both the physical and virtual layers to achieve “enclave” like segmentation.

But with virtualization the problem gets more complex: in an attempt to be agile and cost efficient, and in order to bring data to the workloads to reduce the heavy lifting of the opposite approach, out-of-scope assets can often (and suddenly) be co-resident with in-scope assets…traversing logical and physical constructs that make it much more difficult to threat model, since the level of virtualized context support differs wildly across these layers.

Architects are then left to think how they can effectively take all the awesome performance, agility, scale and simplicity offered by the underlying fabrics (compute, network and storage) and then layer on — bolt on — security and compliance capabilities.

What they discover is that it’s very, very, very platform specific…which is why we see protocols such as VXLAN and NVGRE pop up to deal with them.
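For readers who haven't looked under the hood of those protocols: VXLAN's whole trick is an 8-byte header carrying a 24-bit Virtual Network Identifier (VNI), prepended to the original layer-2 frame and shipped over UDP (IANA port 4789, per RFC 7348). A minimal sketch of that encapsulation:

```python
# Sketch of the VXLAN (RFC 7348) header: 1 flag byte, 3 reserved bytes,
# a 24-bit VNI in bytes 4-6, and a final reserved byte.
import struct

VXLAN_FLAG_VNI_VALID = 0x08  # the "I" flag: VNI field is valid

def vxlan_encap(vni, inner_frame):
    """Prepend a VXLAN header to an inner Ethernet frame."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit value")
    # "!B3xI": flags byte, 3 pad bytes, then a 32-bit word whose top
    # three bytes hold the VNI and whose low byte is reserved (zero).
    header = struct.pack("!B3xI", VXLAN_FLAG_VNI_VALID, vni << 8)
    return header + inner_frame

def vxlan_decap(packet):
    """Return (vni, inner_frame) from a VXLAN-encapsulated payload."""
    flags, word = struct.unpack("!B3xI", packet[:8])
    if not flags & VXLAN_FLAG_VNI_VALID:
        raise ValueError("VNI-valid flag not set")
    return word >> 8, packet[8:]
```

That 24-bit VNI is the entire point: ~16 million segments instead of 4096 VLANs, which is exactly the platform-specific band-aid being described here — segmentation re-imposed on top of the flat fabric.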

Lego Blocks and Pig Farms…

These architects then replicate the design patterns with which they are familiar and start to craft DMZs that are logically segmented in the physical network and then grafted onto the virtual.  So we end up relying on what Gunnar Peterson and I refer to as the “SSL and Firewall” lego block…we front-end collections of “layer 2 connected” assets based on criticality or function, many of which stretch across these fabrics, and locate them behind layer 3 “firewalls” which provide basic zone-based isolation and often VPN connectivity between “trusted” groups of other assets.

In short, rather than build applications that securely authenticate and communicate — or worse yet, even when they do — we pigpen our corralled assets and make our estate fatter instead of flatter.  It’s really a shame.

I’ve made the case in my “Commode Computing” presentation that one of the very first things that architects need to embrace is the following:

…by not artificially constraining the way in which we organize, segment and apply policy (i.e. “put it in a DMZ”) we can think about how design “anti-patterns” may actually benefit us…you can call them what you like, but we need to employ better methodology for “zoning.”

These trust zones or enclaves are reasonable in concept so long as we can ultimately further abstract their “segmentation” and abstract the security and compliance policy requirements by expressing policy programmatically and taking the logical business and functional use-case PROCESSES into consideration when defining, expressing and instantiating said policy.

You know…understand what talks to what and why…
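To make "expressing policy programmatically" less abstract, here's a toy sketch of the idea: declare which process-level flows are legitimate as data, then evaluate any observed flow against that declaration, independent of where the workload happens to sit. The zone and service names are invented for illustration.

```python
# Toy sketch of segmentation policy expressed as data rather than as
# box placement: the policy says what talks to what and over which
# service; anything not declared is denied by default.
ALLOWED_FLOWS = {
    ("web-tier", "app-tier", "https"),
    ("app-tier", "db-tier", "postgres"),
    ("app-tier", "payment-gateway", "https"),  # in-scope for PCI
}

def flow_permitted(src_zone, dst_zone, service):
    """True iff the declared policy allows src to reach dst over service."""
    return (src_zone, dst_zone, service) in ALLOWED_FLOWS
```

The value of this shape is that the policy follows the workload: move "app-tier" from one hypervisor, fabric, or cloud to another and the declaration — not the VLAN or the firewall's physical position — still defines what it may talk to.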

A great way to think about this problem is to apply the notion of application mobility — without VM containers — and how one would instantiate a security “policy” in that context.  In many cases, as we march up the stack to distributed platform application architectures, we’re not able to depend upon the “crutch” that hypervisors or VM packages have begun to give us in legacy architectures that have virtualization grafted onto them.

Since many enterprises are now just starting to better leverage their virtualized infrastructure, there *are* some good solutions (again, platform specific) that unify the physical and virtual networks from a zoning perspective, but the all-up process-driven, asset-centric (app & information) view of “policy” is still woefully lacking, especially in heterogeneous environments.

Wrapping Up…

In enterprise and SP environments where we don’t have the opportunity to start anew, it often feels like we’re so far off from this sort of capability because it requires a shift that makes software defined networking look like child’s play.  Most enterprises don’t do risk-driven, asset-centric, process-mapped modelling, [and SP’s are disconnected from this,] so segmentation falls back to what we know: DMZs with VLANs, NAT, Firewalls, SSL and new protocol band-aids invented to cover gaping arterial wounds.

In environments lucky enough to think about and match the application use cases with the highly-differentiated operational models that virtualized *everything* brings to bear, it’s here today — but be prepared for, and honest about, the fact that the vendor(s) you choose must be strategic and the interfaces between those platforms and external entities VERY well defined…else you risk software defined entropy.

I wish I had more than the 5 minutes it took to scratch this out because there’s SO much to talk about here…

…perhaps later.



Six Degrees Of Desperation: When Defense Becomes Offense…

July 15th, 2012 No comments

English: Defensive and offensive lines in American football (Photo credit: Wikipedia)

One cannot swing a dead cat without bumping into at least one expose in the mainstream media regarding how various nation states are engaged in what is described as “Cyberwar.”

The obligatory shots of darkened rooms filled with pimply-faced spooky characters basking in the green glow of command line sessions furiously typing are dosed with trademark interstitial fade-ins featuring the masks of Anonymous set amongst a backdrop of shots of smoky Syrian streets during the uprising,  power grids and nuclear power plants in lockdown replete with alarms and flashing lights accompanied by plunging stock-ticker animations laid over the trademark icons of financial trading floors.

Terms like Stuxnet, Zeus, and Flame have emerged from the obscure .DAT files of AV research labs and now occupy a prominent spot in the lexicon of popular culture…right alongside the word “Hacker,” which now almost certainly brings with it only the negative connotation it has been (re)designed to impart.

In all of this “Cyberwar” we hear that the U.S. defense complex is woefully unprepared to deal with the sophistication, volume and severity of the attacks we are under on a daily basis.  Further, statistics from the Private Sector suggest that adversaries are becoming more aggressive, motivated, innovative, advanced, and successful in their ability to attack what is described as basically undefended — née undefendable — assets.

In all of this talk of “Cyberwar,” we were led to believe that the U.S. Government — despite hostile acts of “cyberaggression” from “enemies” foreign and domestic — never engaged in pre-emptive acts of Cyberwar.  We were led to believe that despite escalating cases of documented incursions across our critical infrastructure (Aurora, Titan Rain, etc.,) that our response was reactionary, limited in scope and reach and almost purely detective/forensic in nature.

It’s pretty clear that was a farce.

However, what’s interesting — besides the amazing geopolitical, cultural, socio-economic, sovereign, financial and diplomatic issues that war of any sort, including “cyberwar,” brings — is that even in the Private Sector, we’re still led to believe that we’re unable, unwilling or forbidden to do anything but passively respond to attack.

There are some very good reasons for that argument, and some which need further debate.

Advanced adversaries are often innovative and unconstrained in their attack methodologies yet defenders remain firmly rooted in the classical OODA-fueled loops of the past where the A, “act,” generally includes some convoluted mixture of detection, incident response and cleanup…which is often followed up with a second dose when the next attack occurs.

As such, “Defenders” need better definitions of what “defense” means and how a silent discard from a firewall, a TCP RST from an IPS or a blip from Bro is simply not enough.  What I’m talking about here is what defensive linemen look to do when squared up across from their offensive linemen opponents — not to just hold the line to prevent further down-field penetration, but to sack the quarterback or better yet, cause a fumble or error and intercept a pass to culminate in running one in for points to their advantage.

That’s a big difference between holding till fourth down and hoping the offense can manage to not suffer the same fate from the opposition.

That implies there’s a difference between “winning” and “not losing,” with arbitrary values of the latter.

Put simply, it means we should employ methods that make it more and more difficult, costly, timely and non-automated for the attacker to carry out his/her mission…[more] active defense.

I’ve written about this before, in 2009’s “Incomplete Thought: Offensive Computing – The Empire Strikes Back,” wherein I asked people’s opinions on both their response to and definition of “offensive security.”  This was a poor term…so I was delighted when I found my buddy Rich Mogull had taken the time to clarify the vocabulary around this issue in his blog post titled “Thoughts on Active Defense, Intrusion Deception, and Counterstrikes.”

Rich wrote:

…Here are some possible definitions we can work with:

  • Active defense: Altering your environment and system responses dynamically based on the activity of potential attackers, to both frustrate attacks and more definitively identify actual attacks. Try to tie up the attacker and gain more information on them without engaging in offensive attacks yourself. A rudimentary example is throwing up an extra verification page when someone tries to leave potential blog spam, all the way up to tools like Mykonos that deliberately screw with attackers to waste their time and reduce potential false positives.
  • Intrusion deception: Pollute your environment with false information designed to frustrate attackers. You can also instrument these systems/datum to identify attacks. DataSoft Nova is an example of this. Active defense engages with attackers, while intrusion deception can also be more passive.
  • Honeypots & tripwires: Purely passive (and static) tools with false information designed to entice and identify an attacker.
  • Counterstrike: Attack the attacker by engaging in offensive activity that extends beyond your perimeter.

These aren’t exclusive – Mykonos also uses intrusion deception, while Nova can also use active defense. The core idea is to leave things for attackers to touch, and instrument them so you can identify the intruders. Except for counterattacks, which move outside your perimeter and are legally risky.
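The common thread in those definitions is a response that varies with how suspicious the actor looks, rather than a uniform allow/drop. Here's an illustrative sketch of that idea; the signals, weights and thresholds are invented for illustration and are not any vendor's product logic.

```python
# Illustrative "active defense" sketch: score a request's suspicion,
# then vary the response -- tying up probable attackers instead of
# silently dropping them. All thresholds and signals are made up.
def score(request):
    """Naive suspicion score (0.0-1.0) from hypothetical request facts."""
    s = 0.0
    if request.get("known_bad_ip"):
        s += 0.5
    if request.get("path_traversal"):     # e.g. "../" in the URL
        s += 0.4
    if request.get("rate_per_min", 0) > 100:
        s += 0.3
    return min(s, 1.0)

def respond(suspicion):
    """Map a suspicion score to a (hypothetical) graduated response."""
    if suspicion < 0.3:
        return "serve"        # normal response
    if suspicion < 0.6:
        return "challenge"    # extra verification page, a la Rich's example
    if suspicion < 0.9:
        return "tarpit"       # slow, resource-wasting responses
    return "decoy"            # route into instrumented false content
```

Note that everything here stays inside your own perimeter — it frustrates and identifies, which is what separates active defense and deception from the legally risky counterstrike category.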

I think we’re seeing the re-emergence of technology that wasn’t ready for primetime now becoming more prominent in consideration as folks refresh their toolchests looking for answers to the problems that “passive response” presents.  It’s important to understand that tools like these — in isolation — won’t stop many complex attacks, nor are they a silver bullet, but understanding that we’re not limited to cleanup is important.

The language of “active defense,” like Rich’s above, is being spoken more and more.

Traditional networking and security companies such as Juniper* are acquiring upstarts like Mykonos Software in this space.  Mykonos’ mission is to “…change the economics of hacking…by making the attack surface variable and inserting deceptive detection points into the web application…mak[ing] hacking a website more time consuming, tedious and costly to an attacker. Because the web application is no longer passive, it also makes attacks more difficult.”

VC’s like Kleiner Perkins are funding companies whose operating premise is a more active “response” such as the in-stealth company “Shape Security” that expects to “…change the web security paradigm by shifting costs from defenders to hackers.”

Or, as Rich defined above, the notion of “counterstrike” outside one’s “perimeter” is beginning to garner open discussion now that we’ve seen what’s possible in the wild.

In fact, check out the abstract at Defcon 20 from Shawn Henry of newly-unstealthed company “Crowdstrike,” titled “Changing the Security Paradigm: Taking Back Your Network and Bringing Pain to the Adversary”:

The threat to our networks is increasing at an unprecedented rate. The hostile environment we operate in has rendered traditional security strategies obsolete. Adversary advances require changes in the way we operate, and “offense” changes the game.

Shawn Henry: Prior to joining CrowdStrike, Henry was with the FBI for 24 years, most recently as Executive Assistant Director, where he was responsible for all FBI criminal investigations, cyber investigations, and international operations worldwide.

If you look at Mr. Henry’s credentials, it’s clear where the motivation and customer base are likely to flow.

Without turning this little highlight into a major opus — because when discussing this topic it’s quite easy to do so, given the definition and implications of “active defense” — I hope this has scratched an itch and that you’ll spend more time investigating this fascinating topic.

I’m convinced we will see more and more as the cybersword rattling continues.

Have you investigated technology solutions that offer more “active defense?”

/Hoff

* Full disclosure: I work for Juniper Networks who recently acquired Mykonos Software mentioned above.  I hold a position in, and enjoy a salary from, Juniper Networks, Inc. 😉


Investing, Advising & Mentoring…An Observation Of Roles Using Different Lenses

June 25th, 2012 1 comment

As I previously wrote, I attended the GigaOm Structure Conference and was fortunate enough to participate in a chat with Stacey Higginbotham (GigaOm) and Simon Crosby (Bromium.)

In the beginning of our session, after Simon’s “unveiling” of Bromium’s approach to solving some tough security challenges, we engaged in some dialog about those same security challenges [for context] and many broader security topics in general.  Stacey led off by rhetorically asking me if I was an advisor to Bromium.  I answered in the affirmative with one word, “yes.”

To add more color, what “yes” meant was that I have advised leadership and employees of Bromium as to their approach, technology and productization and have access to their “technology preview” (read: beta) program.

What I didn’t clarify is that, like every other opportunity wherein I “advise” individuals, boards, companies or investors (institutional or otherwise,) I do not receive compensation for such activities.  No stock, bonds, gifts, cash, etc.  The only thing that might qualify as compensation is when I have to travel to a remote location that I can’t expense myself and my employer can’t/won’t cover.  Every once in a while, I get a meal out of these activities so we can do a brain-dump outside of normal working hours, so my employer is not impacted.

Those things may, in some people’s eyes, still seem like “compensation.”  I think that’s fair enough.  I’m also required to disclose any position I undertake with my employer to avoid conflict of any sort.

This is interesting to me because I’ve never really thought about further disclosing my “advisory” roles outside of this process, because I never put myself in a position wherein I have a vested (financial) interest in the company’s outcome.  The reason I “advise” is that it allows me early-stage access to very interesting topics that I (cautiously) comment on — both publicly and privately, where appropriate — and everyone involved wins.

What motivated this was a private DM exchange between someone (an analyst who shall remain nameless but to whom I am thankful) who attended Structure and was kind and honest enough to tell me what he thought.  Specifically, he suggested I had “crossed the line” in my public “endorsement” of Bromium.

Check out the thread below.  I found it fascinating.  To me, this seemed to be one part poor communication/disclosure on my part regarding what being an “advisor” entailed and one part complaint that perhaps I was messing up the business model of those who advise for free.

There was one additional point made that to the investment world, there’s a distinction between investing, advising and mentoring wherein “mentoring” was the only category that implied there was no financial compensation.  I’ve never really thought about making the distinction because again I’ve never asked for compensation…so I guess I’ve been a “mentor,” but I’d feel awkward calling myself that.

At any rate, I learned something from the exchange.  Maybe you will, too.

/Hoff

 

Categories: General Rants & Raves