Archive for July, 2012

Brood Parasitism: A Cuckoo Discussion Of Smart Device Insecurity By Way Of Robbing the NEST…

July 18th, 2012
(Image: Eastern Phoebe (Sayornis phoebe) nest. Photo credit: Wikipedia)


I’m doing some research, driven by recent groundswells of some awesome security activity focused on so-called “smart meters.”  Specifically, I am interested in the emerging interconnectedness, consumerization and prevalence of more generic smart devices and home automation systems and what that means from a security, privacy and safety perspective.

I jokingly referred to something like this way back in 2007…who knew it would become more reality than fiction?

You may think this is interesting.  You may think this is overhyped and boorish.  You may even think this is cuckoo…

Speaking of which, back to the title of the blog…

Brood parasitism is defined as:

A method of reproduction seen in birds that involves the laying of eggs in the nests of other birds. The eggs are left under the parental care of the host parents. Brood parasitism may occur between species (interspecific) or within a species (intraspecific). [About.com]

A great example is that of the female European Cuckoo, which lays an egg that mimics those of a host species.  After hatching, the young Cuckoo may actually dispose of the host's eggs by shoving them out of the nest with an evolved physical adaptation — a depression in its back.  Once hatched, the forced-adoptive parent birds, tricked into thinking the hatchling is legitimate, care for the imposter, which may actually grow larger than they are, and then struggle to keep up with its care and feeding.

What does this have to do with “smart device” security?

I’m a huge fan of my NEST thermostat. 🙂 It’s a fantastic device which, using self-learning concepts, manages the heating and cooling of my house.  It does so by learning how my family and I use the controls over time, in combination with knowing when we’re at home and when we’re away.  It allows monitoring of, and control over, my household temperature management over the Internet.  It also has an API <wink wink>.  It uses an ARM Cortex A8 CPU and has both Wi-Fi and ZigBee radios <wink wink>.

…so it knows how I use power.  It knows when I’m at home and when I’m not.  It allows for remote, out-of-band, Internet connectivity.  It uses my Wi-Fi network to communicate.  It will, I am sure, one day intercommunicate with OTHER devices on my network (which, btw, is *loaded* with other devices already).
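For flavor, here’s a minimal sketch of the kind of state poller this enables. Everything in it (endpoint, token, field names) is a hypothetical placeholder for illustration, not the actual Nest interface:

```python
# Hypothetical sketch of a thermostat state poller. The endpoint, token and
# field names are invented for illustration; the real (unofficial) Nest API
# differs and requires reverse-engineered authentication.
import json
import time
import urllib.request

API_URL = "https://thermostat-cloud.example.com/api/v1/device/state"  # hypothetical
TOKEN = "REPLACE_ME"  # hypothetical bearer token

def poll_state() -> dict:
    req = urllib.request.Request(API_URL, headers={"Authorization": f"Bearer {TOKEN}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    while True:
        state = poll_state()
        # "away" and "target_temp" are assumed field names for illustration.
        print(time.strftime("%Y-%m-%d %H:%M"), state.get("away"), state.get("target_temp"))
        time.sleep(300)  # five-minute polls are plenty to build an occupancy profile
```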

So back to my cuckoo analogy of brood parasitism and the bounty of “robbing the NEST…”

I am researching the potential for subverting the control plane of my NEST (amongst other devices) and using that access to gain information regarding occupancy, usage, etc.  I have some ideas for how this information might be (mis)used.
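As a hedged toy example of the (mis)use: collapsing a poll log like the one above into the hours when the house is predictably empty. The field names and threshold are my own assumptions:

```python
# Toy sketch: collapse a poll log of (hour_of_day, away_flag) samples into
# the hours when the house is reliably empty. The 80% threshold is arbitrary.
from collections import defaultdict

def reliably_empty_hours(samples, threshold=0.8):
    """samples: iterable of (hour_of_day, away_bool) from the poller's log."""
    tally = defaultdict(lambda: [0, 0])  # hour -> [away_count, total_count]
    for hour, away in samples:
        tally[hour][0] += int(away)
        tally[hour][1] += 1
    return sorted(h for h, (a, n) in tally.items() if a / n >= threshold)

log = [(9, True), (9, True), (10, True), (10, True), (14, False), (20, False)]
print(reliably_empty_hours(log))  # [9, 10] -> the house is predictably empty mid-morning
```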

Essentially, I’m calling the tool “Cuckoo” and its job is that of its nest-robbing namesake — to be fed illegitimately and outgrow its surrogate trust model in order to do bad things™.

This will dovetail with work that has been done in the classical “smart meter” space, such as the research presented at CCC in 2011, wherein the researchers were able to identify which TV show someone was watching from the meter’s power trace — and explored what capabilities like that mean for privacy and safety.
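The kernel of that CCC result is trace correlation: match an observed power trace against reference signatures. A toy sketch with synthetic data (and none of the researchers’ actual preprocessing):

```python
# Minimal sketch of the idea behind the CCC smart-meter result: compare an
# observed household power trace against reference signatures and report the
# best match. Real traces are far messier; this only shows the correlation step.
import numpy as np

def best_match(observed, references):
    """Return the reference label with the highest normalized cross-correlation."""
    obs = (observed - observed.mean()) / observed.std()
    scores = {}
    for label, ref in references.items():
        r = (ref - ref.mean()) / ref.std()
        # Slide the (shorter) reference signature across the observed trace.
        corr = np.correlate(obs, r, mode="valid") / len(r)
        scores[label] = corr.max()
    return max(scores, key=scores.get), scores

# Toy data: pretend power signatures for two broadcasts (assumed, for illustration).
rng = np.random.default_rng(0)
refs = {"show_a": rng.normal(size=200), "show_b": rng.normal(size=200)}
trace = np.concatenate([rng.normal(size=100),
                        refs["show_a"] + rng.normal(scale=0.1, size=200)])
print(best_match(trace, refs))  # "show_a" should win by a wide margin
```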

If anyone would like to join in on the fun, let me know.

/Hoff



Back To The Future: Network Segmentation & More Moaning About Zoning

July 16th, 2012

A Bit Of Context…


The last 3 years have been very interesting when engaging with large enterprises and service providers as they set about designing, selecting and deploying their “next generation” network architecture. These new networks are deployed in timescales that see them collide with disruptive innovation such as fabrics, cloud, big data and DevOps.

In most cases, these network platforms must account for the nuanced impact of virtualized design patterns, refreshes of programmatic architecture and languages, and the operational model differences these things introduce.  What’s often apparent is that no matter how diligent the review, by the time these platforms are chosen, many tradeoffs are made — especially when it comes to security and compliance — and we arrive at the old adage: “You can get fast, cheap or secure…pick two.”

…And In the Beginning, There Was Spanning Tree…

The juxtaposition of flatter and flatter physical networks, nee “fabrics” (compute, network and storage), with the seemingly flip-flopping belief systems of architects who push for either layer 2 or layer 3 (or encapsulated versions thereof) segmentation at the higher layers, is aggravated further by the continued push for security boundary definition, which yields yet more segmentation based on policy at the application and information layers.

So what we end up with is the benefits of flatter, any-to-any connectivity at the physical networking layer with a “software defined” and virtualized networking context floating both alongside (Nicira, BigSwitch, OpenFlow) as well as atop it (VMware, Citrix, OpenStack Quantum, etc.) with a bunch of protocols ladled on like some protocol gravy blanketing the Chicken Fried Steak that represents the modern data center.

Oh!  You Mean the Cloud…

Now, there are many folks who don’t approach it this way, and instead abstract away much of what I just described.  In Amazon Web Services’ case as a service provider, they dumb down the network sufficiently and control the abstracted infrastructure to the point that “flatness” is the only thing customers get; if you’re going to run your applications atop it, you must keep things simple and programmatic in nature or else risk introducing unnecessary complexity into the “software stack.”

The customers who depend upon these simplified networking services must then absorb the gaps introduced by the lack of features, architecturally engineering around them by becoming more automated, instrumented and programmatic in nature, or by adding yet another layer of virtualized (and generally encrypted) transport and execution above them.

This works if you’re able to engineer your way around these gaps (or make them less relevant), but generally this is where segmentation becomes an issue, due to security and compliance design patterns which depend on the “complexity” introduced by the very flexible networking constructs available in most enterprise or SP networks.

It’s like a layered cake that keeps self-frosting.

Software Defined Architecture…

You can see the extreme opportunity for Software Defined *anything* then, can’t you? With SDN, let the physical networks be simple and flat rather than complex, and then unify the orchestration, traffic steering, service insertion and (even) security capabilities of the physical and virtual networks AND the virtualization/cloud orchestration layers (from the networking perspective) into a single intelligent control plane…

That’s a big old self-frosting cake.
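To make that less abstract, here’s a sketch of “define policy once, push it everywhere.” The controller endpoint and rule schema are invented for illustration; substitute your controller’s real API:

```python
# Sketch of the "single intelligent control plane" idea: express a policy
# once, then render it into flow rules for an OpenFlow-style controller.
# The controller URL and rule schema are assumptions for illustration.
import json
import urllib.request

CONTROLLER = "https://sdn-controller.example.com/flows"  # hypothetical endpoint

def allow(src_zone, dst_zone, port):
    """One abstract policy entry: zone-to-zone, single TCP port."""
    return {"match": {"src_zone": src_zone, "dst_zone": dst_zone,
                      "ip_proto": 6, "tcp_dst": port},
            "action": "forward"}

policy = [
    allow("web", "app", 8443),
    allow("app", "db", 5432),
]

def push(rules):
    # The same rules go to physical switches and vswitches alike; that's the point.
    body = json.dumps({"flows": rules, "default": "drop"}).encode()
    req = urllib.request.Request(CONTROLLER, data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

push(policy)
```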

Basically, this is what AWS has done…but all the intelligence that would sit atop that single pane of glass is currently left up to the app owner.  That’s the downside.  Sufficiently enlightened AWS customers are generally aware of this and understand the balance of benefits and limitations of this path.

In an enterprise environment, however, it’s a timing game between the controller vendors, the virtualization/cloud stack providers, the networking vendors and the security vendors…each trying to offer up this capability either as an “integrated” capability or as an overlay…all under the watchful eye of the auditor, who is generally unmotivated, uneducated and unnerved by all this new technology — especially since the compliance frameworks and regulatory elements aren’t designed to account for these dramatic shifts in architecture or operation (let alone the threat curve of advanced adversaries).

Back To The Future…Hey, Look, It’s Token Ring and DMZs!

As I sit with these customers who build these nextgen networks, the moment segmentation comes up, the elegant network and application architectures rapidly crumble into piles of asset-based rubble as what happens next borders on the criminal…

Thanks to compliance initiatives — PCI is a good example — no matter how well scoped, those flat networks become more and more logically hierarchical.  Because SDN is still nascent and we’re lacking that unified virtualized network (and security) control plane, we end up resorting back to platform-specific “less flat” network architectures in both the physical and virtual layers to achieve “enclave” like segmentation.

But with virtualization the problem gets more complex: in an attempt to be agile and cost efficient, and in order to bring data to the workloads (rather than the heavy lifting of the opposite approach), out-of-scope assets can often (and suddenly) become co-resident with in-scope assets…traversing logical and physical constructs in ways that make threat modeling much more difficult, since the level of virtualized context support differs wildly across these layers.

Architects are then left to think how they can effectively take all the awesome performance, agility, scale and simplicity offered by the underlying fabrics (compute, network and storage) and then layer on — bolt on — security and compliance capabilities.

What they discover is that it’s very, very, very platform specific…which is why we see protocols such as VXLAN and NVGRE pop up to deal with it.
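For reference, what these overlay protocols actually add is small: VXLAN, for instance, wraps frames in an 8-byte header carrying a 24-bit segment ID (the VNI) over UDP port 4789, so a flat L3 fabric can carry roughly 16 million isolated L2 segments. A minimal sketch of that header, per the VXLAN spec:

```python
# Sketch of what VXLAN adds on the wire: an 8-byte header carrying a 24-bit
# VNI, letting a "flat" L3 fabric carry ~16M isolated L2 segments over UDP.
import struct

def vxlan_header(vni: int) -> bytes:
    assert 0 <= vni < 2**24, "VNI is 24 bits"
    flags = 0x08  # I-flag set: the VNI field is valid
    # byte 0: flags, bytes 1-3: reserved, bytes 4-6: VNI, byte 7: reserved
    return struct.pack("!B3x", flags) + struct.pack("!I", vni << 8)

hdr = vxlan_header(5001)
print(hdr.hex())  # 0800000000138900 -> VNI 0x001389 == 5001
```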

Lego Blocks and Pig Farms…

These architects then replicate the design patterns with which they are familiar and start to craft DMZs that are logically segmented in the physical network and then grafted onto the virtual.  So we end up relying on what Gunnar Peterson and I refer to as the “SSL and Firewall” Lego block…we front-end collections of “layer 2 connected” assets based on criticality or function, many of which are stretched across these fabrics, and locate them behind layer 3 “firewalls” which provide basic zone-based isolation and often VPN connectivity between “trusted” groups of other assets.

In short, rather than build applications that securely authenticate and communicate — or, worse yet, even when they do — we pigpen our corralled assets and make our estate fatter instead of flatter.  It’s really a shame.

I’ve made the case in my “Commode Computing” presentation that one of the very first things that architects need to embrace is the following:

…by not artificially constraining the way in which we organize, segment and apply policy (i.e. “put it in a DMZ”) we can think about how design “anti-patterns” may actually benefit us…you can call them what you like, but we need to employ better methodology for “zoning.”

These trust zones or enclaves are reasonable in concept so long as we can ultimately abstract their “segmentation” further and express the security and compliance policy requirements programmatically, taking the logical business and functional use-case PROCESSES into consideration when defining, expressing and instantiating said policy.

You know…understand what talks to what and why…
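Here’s a minimal sketch of what I mean, with segmentation expressed as data and the “why” (the business process) carried alongside the “what.” Zone names and process tags are illustrative:

```python
# Sketch: expressing segmentation as data ("what talks to what and why")
# instead of as box placement. The point is that policy follows the asset
# and the business process, not the subnet it happens to sit in.
POLICY = [
    # (src_zone, dst_zone, tcp_port, business_process)
    ("web",  "app", 8443, "customer-checkout"),
    ("app",  "db",  5432, "customer-checkout"),
    ("mgmt", "db",  22,   "dba-maintenance"),
]

def is_allowed(src_zone, dst_zone, port):
    """Default-deny evaluation; returns the justifying process, or None."""
    for s, d, p, why in POLICY:
        if (s, d, p) == (src_zone, dst_zone, port):
            return why
    return None

print(is_allowed("web", "app", 8443))  # "customer-checkout"
print(is_allowed("web", "db", 5432))   # None -> deny; there is no documented "why"
```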

A great way to think about this problem is to apply the notion of application mobility — without VM containers — and how one would instantiate a security “policy” in that context.  In many cases, as we march up the stack to distributed platform application architectures, we’re not able to depend upon the “crutch” that hypervisors or VM packages have begun to give us in legacy architectures that have virtualization grafted onto them.

Since many enterprises are now just starting to better leverage their virtualized infrastructure, there *are* some good solutions (again, platform specific) that unify the physical and virtual networks from a zoning perspective, but the all-up process-driven, asset-centric (app & information) view of “policy” is still woefully lacking, especially in heterogeneous environments.

Wrapping Up…

In enterprise and SP environments where we don’t have the opportunity to start anew, it often feels like we’re so far off from this sort of capability because it requires a shift that makes software defined networking look like child’s play.  Most enterprises don’t do risk-driven, asset-centric, process-mapped modelling [and SPs are disconnected from this], so segmentation falls back to what we know: DMZs with VLANs, NAT, firewalls, SSL and new protocol band-aids invented to cover gaping arterial wounds.

In environments lucky enough to think about and match the application use cases with the highly-differentiated operational models that virtualized *everything* brings to bear, it’s here today — but be prepared and honest: the vendor(s) you choose must be strategic, and the interfaces between those platforms and external entities VERY well defined…else you risk software defined entropy.

I wish I had more than the 5 minutes it took to scratch this out because there’s SO much to talk about here…

…perhaps later.


Six Degrees Of Desperation: When Defense Becomes Offense…

July 15th, 2012
(Image: Defensive and offensive lines in American football. Photo credit: Wikipedia)

One cannot swing a dead cat without bumping into at least one exposé in the mainstream media regarding how various nation states are engaged in what is described as “Cyberwar.”

The obligatory shots of darkened rooms filled with pimply-faced spooky characters basking in the green glow of command-line sessions, furiously typing, are dosed with trademark interstitial fade-ins featuring the masks of Anonymous, set amongst a backdrop of smoky Syrian streets during the uprising, power grids and nuclear power plants in lockdown replete with alarms and flashing lights, all accompanied by plunging stock-ticker animations laid over the familiar icons of financial trading floors.

Terms like Stuxnet, Zeus, and Flame have emerged from the obscure .DAT files of AV research labs and now occupy a prominent spot in the lexicon of popular culture…right alongside the word “Hacker,” which now almost certainly brings with it only the negative connotation it has been (re)designed to impart.

In all of this “Cyberwar” we hear that the U.S. defense complex is woefully unprepared to deal with the sophistication, volume and severity of the attacks we are under on a daily basis.  Further, statistics from the Private Sector suggest that adversaries are becoming more aggressive, motivated, innovative, advanced, and successful in their ability to attack what are described as basically undefended — nee undefendable — assets.

In all of this talk of “Cyberwar,” we were led to believe that the U.S. Government — despite hostile acts of “cyberaggression” from “enemies” foreign and domestic — never engaged in pre-emptive acts of Cyberwar.  We were led to believe that despite escalating cases of documented incursions across our critical infrastructure (Aurora, Titan Rain, etc.), our response was reactionary, limited in scope and reach, and almost purely detective/forensic in nature.

It’s pretty clear that was a farce.

However, what’s interesting — besides the amazing geopolitical, cultural, socio-economic, sovereign, financial and diplomatic issues that war of any sort, including “cyberwar,” brings — is that even in the Private Sector, we’re still led to believe that we’re unable, unwilling or forbidden to do anything but passively respond to attack.

There are some very good reasons for that argument, and some which need further debate.

Advanced adversaries are often innovative and unconstrained in their attack methodologies, yet defenders remain firmly rooted in the classical OODA-fueled loops of the past, where the A (“act”) generally includes some convoluted mixture of detection, incident response and cleanup…which is often followed up with a second dose when the next attack occurs.

As such, “Defenders” need better definitions of what “defense” means; a silent discard from a firewall, a TCP RST from an IPS or a blip from Bro is simply not enough.  What I’m talking about here is what defensive linemen look to do when squared up across from their offensive linemen opponents — not just hold the line to prevent further down-field penetration, but sack the quarterback, or better yet, force a fumble or interception and run it back for points.

That’s a big difference between holding until fourth down and hoping your own offense can manage not to suffer the same fate against the opposition.

That implies there’s a difference between “winning” and “not losing,” with arbitrary values of the latter.

Put simply, it means we should employ methods that make it more and more difficult, costly, timely and non-automated for the attacker to carry out his/her mission…[more] active defense.

I’ve written about this before, in 2009, in “Incomplete Thought: Offensive Computing – The Empire Strikes Back,” wherein I asked people’s opinion on both their response to and definition of “offensive security.”  That was a poor term…so I was delighted when I found my buddy Rich Mogull had taken the time to clarify vocabulary around this issue in his blog titled “Thoughts on Active Defense, Intrusion Deception, and Counterstrikes.”

Rich wrote:

…Here are some possible definitions we can work with:

  • Active defense: Altering your environment and system responses dynamically based on the activity of potential attackers, to both frustrate attacks and more definitively identify actual attacks. Try to tie up the attacker and gain more information on them without engaging in offensive attacks yourself. A rudimentary example is throwing up an extra verification page when someone tries to leave potential blog spam, all the way up to tools like Mykonos that deliberately screw with attackers to waste their time and reduce potential false positives.
  • Intrusion deception: Pollute your environment with false information designed to frustrate attackers. You can also instrument these systems/datum to identify attacks. DataSoft Nova is an example of this. Active defense engages with attackers, while intrusion deception can also be more passive.
  • Honeypots & tripwires: Purely passive (and static) tools with false information designed to entice and identify an attacker.
  • Counterstrike: Attack the attacker by engaging in offensive activity that extends beyond your perimeter.

These aren’t exclusive – Mykonos also uses intrusion deception, while Nova can also use active defense. The core idea is to leave things for attackers to touch, and instrument them so you can identify the intruders. Except for counterattacks, which move outside your perimeter and are legally risky.
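To ground Rich’s “leave things for attackers to touch, and instrument them” point, here’s a toy honeytoken tripwire. The bait path and alerting are placeholders, and the detection mechanism is an assumption (see the comments):

```python
# Toy sketch of a honeytoken tripwire: plant a tempting-but-fake file and
# alert when anything reads it. Path and alerting are placeholders, and
# atime-based detection assumes the filesystem actually updates atime
# (many mount with relatime/noatime, in which case you'd hook opens instead).
import logging
import os
import time

BAIT = "/srv/share/passwords-backup.xlsx"  # deliberately tempting, entirely fake
logging.basicConfig(level=logging.WARNING)

def plant() -> float:
    if not os.path.exists(BAIT):
        with open(BAIT, "w") as f:
            f.write("nothing of value here\n")
    return os.stat(BAIT).st_atime

def watch(last_atime: float) -> None:
    while True:
        atime = os.stat(BAIT).st_atime
        if atime != last_atime:
            # No legitimate process should ever touch this file.
            logging.warning("tripwire: honeytoken %s was accessed", BAIT)
            last_atime = atime
        time.sleep(5)

if __name__ == "__main__":
    watch(plant())
```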

I think we’re seeing technology that wasn’t ready for primetime re-emerge and become more prominent in consideration as folks refresh their toolchests looking for answers beyond what “passive response” offers.  It’s important to understand that tools like these — in isolation — won’t solve many complex attacks, nor are they a silver bullet, but understanding that we’re not limited to cleanup is important.

The language of “active defense,” like Rich’s above, is being spoken more and more.

Traditional networking and security companies such as Juniper* are acquiring upstarts like Mykonos Software in this space.  Mykonos’ mission is to “…change the economics of hacking…by making the attack surface variable and inserting deceptive detection points into the web application…mak[ing] hacking a website more time consuming, tedious and costly to an attacker. Because the web application is no longer passive, it also makes attacks more difficult.”
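In that spirit, here’s an illustrative sketch of one such “deceptive detection point”: a hidden form field no human should ever submit, with a tarpit response when it’s touched. To be clear, this is my own toy, not how Mykonos actually works:

```python
# Illustrative sketch of a single "deceptive detection point": a hidden form
# field that no human ever fills in, plus a tarpit response when it is
# touched. Flask is just a convenient stand-in for any web framework.
import time
from flask import Flask, abort, request

app = Flask(__name__)

FORM = """
<form method="post">
  <input name="comment">
  <!-- Bait field: hidden from humans by CSS, visible to scrapers/scanners -->
  <input name="admin_override" style="display:none" value="">
  <input type="submit">
</form>
"""

@app.get("/comment")
def show_form():
    return FORM

@app.post("/comment")
def submit():
    if request.form.get("admin_override"):
        time.sleep(10)  # waste the attacker's time before rejecting
        abort(403)      # and flag the client as hostile with high confidence
    return "thanks"

if __name__ == "__main__":
    app.run()
```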

VC’s like Kleiner Perkins are funding companies whose operating premise is a more active “response” such as the in-stealth company “Shape Security” that expects to “…change the web security paradigm by shifting costs from defenders to hackers.”

Or, as Rich defined above, the notion of “counterstrike” outside one’s “perimeter” is beginning to garner open discussion now that we’ve seen what’s possible in the wild.

In fact, check out the abstract from Defcon 20 by Shawn Henry of newly-unstealthed company “Crowdstrike,” titled “Changing the Security Paradigm: Taking Back Your Network and Bringing Pain to the Adversary”:

The threat to our networks is increasing at an unprecedented rate. The hostile environment we operate in has rendered traditional security strategies obsolete. Adversary advances require changes in the way we operate, and “offense” changes the game.

Prior to joining CrowdStrike, Henry was with the FBI for 24 years, most recently as Executive Assistant Director, where he was responsible for all FBI criminal investigations, cyber investigations, and international operations worldwide.

If you look at Mr. Henry’s credentials, it’s clear where the motivation and customer base are likely to flow.

Without turning this little highlight into a major opus — because when discussing this topic it’s quite easy to do so, given the definition and implications of “active defense” — I hope this has scratched an itch and that you’ll spend more time investigating this fascinating topic.

I’m convinced we will see more and more of this as the cyber sword-rattling continues.

Have you investigated technology solutions that offer more “active defense?”

/Hoff

* Full disclosure: I work for Juniper Networks who recently acquired Mykonos Software mentioned above.  I hold a position in, and enjoy a salary from, Juniper Networks, Inc. 😉
