
Archive for July, 2011

Unsafe At Any Speed: The Darkside Of Automation

July 29th, 2011

I’m a huge proponent of automation. Taking rote processes from the hands of humans & leveraging machines of all types to enable higher agility, lower cost and increased efficacy is a wonderful thing.

However, there’s a trade-off: as automation matures and feedback loops become more tightly closed, with higher and higher clock rates leaving less and less time between executions, our ability to detect and recover (let alone prevent) within a cascading failure domain is diminished.

Take three interesting, yet unrelated, examples:

  1. The premise of the W.O.P.R. in War Games — Joshua goes apeshit and almost starts WWIII by invoking a simulated game of global thermonuclear war
  2. The Airbus A380 failure – only the luck of having five pilots on board, and their skill in overriding hundreds of cascading automation failures after an engine failure, prevented a crash that would have killed hundreds of people.*
  3. The AWS EBS outage — the cloud version of Girls Gone Wild; automated replication caught in a FOR…NEXT loop

These weren’t “maliciously initiated” issues; they were accidents.  But how about “events” like Stuxnet?  What about a former Gartner analyst having his home automation (CASA-SCADA) control system hax0r3d!? There’s another obvious one missing, but we’ll get to that in a minute (hint: Flash Crash).

How do we engineer enough failsafe logic up and down the stack to function at the same scale as the decision and controller logic?  How do we integrate and expose enough telemetry, produced and consumed fast enough, to yield actionable results in a timeframe that allows for graceful failure and recovery (née survivability)?
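One piece of failsafe logic that does operate at machine speed is a circuit breaker: if the control loop starts failing faster than some bound, trip, halt the automation and escalate to a human. Here is a toy sketch; the class name, thresholds and escalation hook are invented for illustration and aren’t drawn from any of the systems above.

```python
# Toy circuit breaker for an automated control loop: if too many actions fail
# within a short window, trip the breaker, halt automation and escalate to a
# human. Thresholds and names are illustrative only.
import time


class CircuitBreaker:
    def __init__(self, max_failures=5, window_sec=1.0):
        self.max_failures = max_failures
        self.window_sec = window_sec
        self.failure_times = []      # timestamps of recent failures
        self.tripped = False         # True = automation halted

    def allow_action(self):
        return not self.tripped

    def record_failure(self):
        now = time.time()
        # keep only failures inside the sliding window
        self.failure_times = [t for t in self.failure_times
                              if now - t < self.window_sec]
        self.failure_times.append(now)
        if len(self.failure_times) >= self.max_failures:
            self.tripped = True      # stop automating; page the operators


if __name__ == "__main__":
    breaker = CircuitBreaker(max_failures=3, window_sec=1.0)
    for attempt in range(10):
        if not breaker.allow_action():
            print("Breaker tripped: automation halted, escalating to humans.")
            break
        print(f"attempt {attempt}: automated action failed")
        breaker.record_failure()
```

The hard part, of course, is that the breaker’s own thresholds and telemetry have to keep pace with the loop it’s supposed to protect.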

One last example that is pertinent: high-frequency trading (HFT), highly automated, computer-driven, algorithmic stock trading at speeds measured in millionths of a second.

For a look at how this works, check out James Urquhart’s great Wisdom Of the Clouds blog post: “What Cloud Computing Can Learn From Flash Crash.”

In the use case of HFT, ruthlessly squeezing nanoseconds from the processing loops, removing as much latency as possible from every element of the stack, literally has implications measured in millions of dollars.

Technology vendors are doing many very interesting and innovative things architecturally to achieve these goals, some of them quite audacious, and anything that gets in the way or adds latency is generally not considered “useful.”  Security is usually one of those things.

There are most definitely security technologies that allow for very low-latency insertion of things like firewalls, with single-digit-microsecond latency figures for small packets, but interestingly enough we’re also governed by the annoying laws of physics: propagation delay, serialization delay, TCP/IP protocol overhead and so on all add up.
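To put rough numbers on that, here is a back-of-the-envelope latency budget for a single in-line hop. The link rate, cable length and firewall figure below are illustrative assumptions, not measurements of any particular product.

```python
# Back-of-the-envelope latency budget for one in-line hop on an HFT path.
# All figures are illustrative assumptions, not vendor measurements.

LINK_RATE_BPS = 10e9          # 10 Gbps link
FRAME_BYTES = 64              # small packet, as referenced in the text
FIBER_METERS = 100            # assumed cable run between hosts
PROPAGATION_M_PER_S = 2e8     # roughly 2/3 the speed of light in fiber
FIREWALL_HOP_S = 5e-6         # hypothetical "low single-digit microsecond" firewall

serialization_s = FRAME_BYTES * 8 / LINK_RATE_BPS      # ~51 ns
propagation_s = FIBER_METERS / PROPAGATION_M_PER_S     # ~500 ns

total_s = serialization_s + propagation_s + FIREWALL_HOP_S
print(f"serialization: {serialization_s * 1e9:.0f} ns")
print(f"propagation:   {propagation_s * 1e9:.0f} ns")
print(f"firewall hop:  {FIREWALL_HOP_S * 1e9:.0f} ns")
print(f"total one-way: {total_s * 1e6:.2f} us")
```

Under these assumptions the physics costs hundreds of nanoseconds before any security device is even in the path, and a single in-line firewall hop dwarfs everything else.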

Thus traditional approaches to “in-line” security, both detective and preventative, are not generally sustainable in these environments; providing solutions that scale as well as these HFT systems do requires some deep thought…no small order.

I think this is another good use for “big data” and security data analytics.  Consider very high-speed side-band systems that run alongside these HFT platforms and leverage the logic already present in the transactional trading systems; that could get us closer to solving the challenges of these environments.  Integrate these signaling and telemetry planes with “fabric-enabled” security capabilities and we might get somewhere useful.
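Purely as a hypothetical sketch of the side-band idea (the event fields, threshold and alert hook are all invented), picture a monitor that consumes a mirrored copy of the order flow, say from a span port or a message-bus tap, so it adds no latency to the trading path, and signals a separate control plane when activity looks wrong.

```python
# Hypothetical side-band monitor: consumes a *mirrored* copy of order events,
# so it sits outside the latency-critical path, and signals a control plane
# when the order rate exceeds a bound. Fields and thresholds are invented.
from collections import deque
from dataclasses import dataclass
import time


@dataclass
class OrderEvent:
    symbol: str
    side: str        # "buy" or "sell"
    qty: int
    ts: float        # epoch seconds


class SideBandMonitor:
    def __init__(self, max_orders_per_sec=50_000, window_sec=1.0):
        self.window = deque()            # timestamps inside the sliding window
        self.max_rate = max_orders_per_sec
        self.window_sec = window_sec

    def observe(self, event: OrderEvent) -> None:
        self.window.append(event.ts)
        cutoff = event.ts - self.window_sec
        while self.window and self.window[0] < cutoff:
            self.window.popleft()
        if len(self.window) > self.max_rate:
            self.raise_alarm(event)

    def raise_alarm(self, event: OrderEvent) -> None:
        # In a real deployment this would signal a separate control plane
        # (risk engine, "fabric" policy point), not act in-line.
        print(f"ALERT: order rate exceeded around {event.symbol} at {event.ts}")


if __name__ == "__main__":
    monitor = SideBandMonitor(max_orders_per_sec=5)   # tiny threshold for the demo
    now = time.time()
    for i in range(10):
        monitor.observe(OrderEvent("ACME", "buy", 100, now + i * 0.01))
```

The point of the sketch is the placement, not the logic: detection and telemetry happen off to the side, and only the decision to intervene touches the fast path.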

This tees up nicely my buddy James Arlen’s talk at Blackhat on the insecurity of high-frequency trading systems: “Security when nano seconds count.”  You should plan on checking it out…I know I will.

/Hoff

*H/T to @reillyusa, who also pointed me to “Questions Raised About Airbus Automated Control System” regarding the doomed Air France 447 flight.  Also, serendipitously, @etherealmind posted a link to a story titled “Volkswagen demonstrates ‘Temporary Auto Pilot'”…what could *possibly* go wrong? 😉


More On Security & Big Data…Where Data Analytics and Security Collide

July 22nd, 2011

My last blog post, “InfoSec Fail: The Problem With Big Data Is Little Data,” prattled on a bit about how large data warehouses (or data lakes, per “Big Data Requires A Big, New Architecture”) and the intersection of next-generation data centers, mobility and cloud computing were putting even more stress on “security”:

As Big Data and the databases/datastores it lives in interact with the proliferation of PaaS and SaaS offerings, we have an opportunity to explore better ways of dealing with these problems; this is the benefit of mass centralization of information.

Of course there is an equal and opposite reaction to the “data gravity” property: mobility…and the replication (in chunks) and re-use of the same information across multiple devices.

This is when Big Data becomes Small Data and the ability to protect it gets even harder.

With the enormous amounts of data available, mining it, regardless of its source, and turning it into actionable information (née intelligence) is really a strategic necessity, especially in the world of “security.”

Traditionally we’ve had to use tools such as security information and event management (SIEM) suites or specialized visualization* tools to make sense of what ends up being telemetry that is often disconnected from the transactions, and the value of the assets, from which it emanates.

Even when we do start to be able to integrate and correlate event, configuration, vulnerability or logging data, it’s very IT-centric.  It’s very INFRASTRUCTURE-centric.  It doesn’t really say much about the actual information in use or in transit, or the implications of how it’s being consumed and related to.

This is where Big Data comes into play: pooling collectively sourced “puddles” into a larger data “lake” and then mining it using toolsets such as Hadoop.
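As a minimal sketch of the kind of mining this enables, a Hadoop Streaming job can count failed logins per source address across a whole log lake with two small scripts. The log format assumed here (tab-separated: timestamp, source IP, user, result) is invented for illustration; a real job would follow whatever schema your sources actually emit.

```python
# mapper.py - emits "src_ip\t1" for every failed-login record.
# Assumes (hypothetically) tab-separated lines: timestamp, src_ip, user, result.
import sys

for line in sys.stdin:
    fields = line.rstrip("\n").split("\t")
    if len(fields) >= 4 and fields[3] == "LOGIN_FAILED":
        print(f"{fields[1]}\t1")
```

```python
# reducer.py - sums counts per source IP (Hadoop delivers input sorted by key).
import sys

current_ip, count = None, 0
for line in sys.stdin:
    ip, value = line.rstrip("\n").split("\t")
    if ip != current_ip:
        if current_ip is not None:
            print(f"{current_ip}\t{count}")
        current_ip, count = ip, 0
    count += int(value)
if current_ip is not None:
    print(f"{current_ip}\t{count}")
```

You’d run it with something like `hadoop jar hadoop-streaming-*.jar -files mapper.py,reducer.py -mapper mapper.py -reducer reducer.py -input /data/authlogs -output /reports/failed-logins` (paths are placeholders), and the framework handles distribution, sorting and retrying failed tasks across the cluster.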

We’re starting to see Hadoop commercialized beyond vertical use cases in financial services and healthcare and adopted more broadly for analytics across entire lines of business and industries.  Combine the availability of cheap storage with ever more powerful and cost-effective compute and networking and you’ve got a goldmine ready to tap.

One such solution you’ll hear more about is Zettaset, which commercializes and productizes Hadoop to enable the construction of enormously powerful security data warehouses and analytics.

Zettaset is a key component of a solution offering that does what I describe above for the CISO of a large company, integrating enormous amounts of disparate and seemingly unrelated data to drive managed risk decisions that are fed to humans and automated processes alike.

These data are sourced from all across the business, including IT, and allow teams and interested parties from across the company to slice and dice petabytes of information that previously would have been siloed.  Powerful.

Look for more announcements about this solution around the Blackhat timeframe.  It’s cool stuff.

This is one example where Big Data and “security” are paired in the positive.

/Hoff

* Ken Oestreich (@Fountnhead) tweeted an interesting and pertinent comment regarding my points related to SIEM and visualization tools that summarized the general idea I was getting at in referencing these existing toolsets.

…which of course was underscored by the clearly bored Christian Reilly, who has Citrix’s Cloud strategy already wrapped up tighter than a piñata at a Mexican wedding.

