Archive for the ‘Disruptive Innovation’ Category

Incomplete Thought: The Time Is Now For OCP-like White Box Security Appliances

January 25th, 2015

Over the last couple of years, we’ve seen some transformative innovation erupt in networking.

In no particular order (and with no claim of completeness):

  • Clos architectures and protocols are evolving
  • the debate over Ethernet and IP fabrics is driving toward the outcome that we need both
  • x86 is finding a home in networking at increasing levels of throughput thanks to things like DPDK and optimized IP stacks
  • merchant silicon, FPGAs and ASICs are seeing increased investment as the speeds/feeds move from 10 > 40 > 100 Gb/s per NIC
  • programmable abstraction and the operational models to support it have been proven at scale
  • virtualization and virtualized services are now commonplace architectural primitives in discussions of NG networking
  • Open Source is huge in both orchestration and service delivery
  • entirely new network operating systems like that of Cumulus have emerged to challenge incumbents
  • SDN, NFV and overlays are starting to see production at-scale adoption beyond PoCs
  • automation is starting to take root for everything from provisioning to orchestration to dynamic service insertion and traffic steering (a minimal provisioning sketch follows this list)
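To make that last bullet a bit more concrete, here’s a minimal sketch of template-driven provisioning. The device model, variable names and CLI-style template are hypothetical examples, not any particular vendor’s schema:

```python
# Minimal sketch: rendering a switch configuration from declared intent.
# The intent model and the template below are hypothetical examples.
from jinja2 import Template

LEAF_TEMPLATE = Template("""\
hostname {{ hostname }}
{% for iface in interfaces %}
interface {{ iface.name }}
  description {{ iface.description }}
  mtu 9214
  no shutdown
{% endfor %}
""")

intent = {
    "hostname": "leaf-01",
    "interfaces": [
        {"name": "Ethernet1", "description": "uplink-to-spine-01"},
        {"name": "Ethernet2", "description": "uplink-to-spine-02"},
    ],
}

if __name__ == "__main__":
    # In a real pipeline this output would be pushed to devices via an API or
    # a configuration-management tool rather than printed.
    print(LEAF_TEMPLATE.render(**intent))
```

The point is simply that the configuration is generated from declared intent and reviewed/versioned like any other code, rather than typed by hand into a console.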

Stir in the profound scale-out requirements of mega-scale web/cloud providers and the creation and adoption of Open Compute Project (OCP) compliant network, storage and compute platforms, and there’s a real revolution going on:

The Open Compute Networking Project is creating a set of technologies that are disaggregated and fully open, allowing for rapid innovation in the network space. We aim to facilitate the development of network hardware and software – together with trusted project validation and testing – in a truly open and collaborative community environment.

We’re bringing to networking the guiding principles that OCP has brought to servers & storage, so that we can give end users the ability to forgo traditional closed and proprietary network switches – in favor of a fully open network technology stack. Our initial goal is to develop a top-of-rack (leaf) switch, while future plans target spine switches and other hardware and software solutions in the space.

Now, interestingly, while there are fundamental shifts occurring in the approach to and operations of security — the majority of investment in which is still network-centric — as an industry, we are still used to buying our security solutions as closed appliances or chassis form-factors from vendors with integrated hardware and software.

While vendors offer virtualized versions of their hardware solutions as virtual appliances that can also run on bare metal, these generally have not enjoyed widespread adoption because of the operational challenges (and operational silos) involved in distributing security as a service layer across dedicated appliances or across compute fabrics as an overlay.

But let’s just agree that outside of security, software is eating the world…and that at some point, the voracious appetite of developers and consumers will need to be sated as it relates to security.

Much of the value of security solutions (up to certain watermark levels of performance and latency) is delivered via software which, when coupled with readily-available hardware platforms such as x86 with programmable merchant silicon, can provide some very interesting and exciting solutions at a much lower cost.

So why, then, have we not seen for security what we’ve seen from networking vendors who have released OCP-compliant white-box switching solutions that allow end-users to run whatever software/NOS they desire?

I think it would be cool to see an OCP white box spec for security and let the security industry innovate on the software to power it.

You?

/Hoff


Incomplete Thought: Where Is the Technology Disruption Forcing REAL Change In Security?

January 24th, 2013

In the networking world, we’ve seen how virtualization technologies and operational models such as cloud have impacted the market, vendors and customers in what amounts to an incredibly short span of time.

What’s popped out of that progression is the hugely disruptive impact of Software Defined Networking and corresponding Network Function Virtualization.  These issues are forcing both short and long term disruption in the networking space.  Behemoths have had to pivot…almost overnight.  We haven’t seen this behavior for a while.

I’m curious as to what people see in terms of technology that they feel is truly disruptive to the Security industry.  That means you. 🙂

I understand the many use cases, trends and operational shifts such as BYOD, Mobility, Cloud, etc., as well as the amplification of “older” issues such as DDoS, Malware, WebApp attacks, etc., but I’m curious whether you think we are really seeing disruptive security technology that is truly innovative, versus incremental advancement (on either the offensive or defensive side of the coin.)

You have an opinion?

/Hoff



Why Amazon Web Services (AWS) Is the Best Thing To Happen To Security & Why I Desperately Want It To Succeed

November 29th, 2012

Many people who may only casually read my blog or peer at the timeline of my tweets may come away with the opinion that I suffer from confirmation bias when I speak about security and Cloud.

That is, many conclude that I am pro Private Cloud and against Public Cloud.

I find this deliciously ironic and wildly inaccurate. However, I must also take responsibility for this, as anytime one threads the needle and attempts to present a view from both sides with regard to incendiary topics without planting a polarizing stake in the ground, it gets confusing.

Let me clear some things up.

Digging deeper into what I believe, one would actually find that my blog, tweets, presentations, talks and keynotes highlight deficiencies in current security practices and solutions on the part of providers, practitioners and users in both Public AND Private Cloud, and in my own estimation, deliver an operationally-centric perspective that is reasonably critical and yet sensitive to emergent paths as well as the well-trodden path behind us.

I’m not a developer.  I dabble in little bits of code (interpreted and compiled) for humor and to try and remain relevant.  Nor am I an application security expert for the same reason.  However, I spend a lot of time around developers of all sorts, those that write code for machines whose end goal isn’t to deliver applications directly, but rather help deliver them securely.  Which may seem odd as you read on…

The name of this blog, Rational Survivability, highlights my belief that the last two decades of security architecture and practices — while useful in foundation — require a rather aggressive tune-up of priorities.

Our trust models, architecture, and operational silos have not kept pace with the velocity of the environments they were initially designed to support and unfortunately as defenders, we’ve been outpaced by both developers and attackers.

Since we’ve come to the conclusion that there’s no such thing as perfect security, “survivability” is a better goal.  Survivability leverages “security” and is ultimately a subset of resilience but is defined as the “…capability of a system to fulfill its mission, in a timely manner, in the presence of attacks, failures, or accidents.”  You might be interested in this little ditty from back in 2007 on the topic.

Sharp readers will immediately recognize the parallels between this definition of “survivability,” how security applies within context, and how phrases like “design for failure” align.  In fact, this is one of the calling cards of a company that has become synonymous with (IaaS) Public Cloud: Amazon Web Services (AWS.)  I’ll use them as an example going forward.

So here’s a line in the sand that I think will be polarizing enough:

I really hope that AWS continues to gain traction with the Enterprise.  I hope that AWS continues to disrupt the network and security ecosystem.  I hope that AWS continues to pressure the status quo and I hope that they do it quickly.

Why?

Almost a decade ago, the Open Group’s Jericho Forum published their Commandments.  Designed to promote a change in thinking and operational constructs with respect to security, what they presciently released upon the world describes a point at which one might imagine taking one’s most important assets and connecting them directly to the Internet, and the shifts in thinking required to understand what that would mean to “security”:

  1. The scope and level of protection should be specific and appropriate to the asset at risk.
  2. Security mechanisms must be pervasive, simple, scalable, and easy to manage.
  3. Assume context at your peril.
  4. Devices and applications must communicate using open, secure protocols.
  5. All devices must be capable of maintaining their security policy on an un-trusted network.
  6. All people, processes, and technology must have declared and transparent levels of trust for any transaction to take place.
  7. Mutual trust assurance levels must be determinable.
  8. Authentication, authorization, and accountability must interoperate/exchange outside of your locus/area of control
  9. Access to data should be controlled by security attributes of the data itself
  10. Data privacy (and security of any asset of sufficiently high value) requires a segregation of duties/privileges
  11. By default, data must be appropriately secured when stored, in transit, and in use.

These seem harmless enough today, but were quite unsettling when paired with the notion of “de-perimeterization,” which was often misconstrued to mean the immediate disposal of firewalls.  Many security professionals appreciated the commandments for what they expressed, but the design patterns, availability of solutions and belief systems of traditionalists constrained traction.
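To make a couple of those commandments (numbers 9 and 11 in particular) a bit more concrete, here’s a minimal, purely illustrative sketch of access decisions driven by security attributes of the data itself. The classification labels and rules are hypothetical:

```python
# Minimal sketch of commandment 9: access to data controlled by attributes of
# the data itself rather than by network location. Labels are hypothetical.
from dataclasses import dataclass, field

@dataclass
class DataObject:
    payload: bytes
    attributes: dict = field(default_factory=dict)  # e.g. {"classification": "confidential"}

@dataclass
class Requestor:
    clearances: set = field(default_factory=set)

def may_access(requestor: Requestor, obj: DataObject) -> bool:
    # The decision is driven by the data's own attributes, not by where on the
    # network the request happens to originate.
    required = obj.attributes.get("classification", "public")
    return required == "public" or required in requestor.clearances

record = DataObject(b"quarterly numbers", {"classification": "confidential"})
analyst = Requestor({"confidential"})
guest = Requestor()

assert may_access(analyst, record)
assert not may_access(guest, record)
```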

Interestingly enough, now that the technology, platforms, and utility services have evolved to enable these sorts of capabilities, and in fact have stressed our approaches to date, these exact tenets are what Public Cloud forces us to come to terms with.

If one were to look at what public cloud services like AWS mean when aligned to traditional “enterprise” security architecture, operations and solutions, and map that against the Jericho Forum’s Commandments, it enables such a perfect rethink.

Instead of being focused on implementing “security” to protect applications and information based at the network layer — which is more often than not blind to both, contextually and semantically — public cloud computing forces us to shift our security models back to protecting the things that matter most: the information and the conduits that traffic in it (applications.)

As networks become more abstracted, it means that existing security models do also.  This means that we must think about security programmatically and embed it as a functional delivery requirement of the application.
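By way of illustration, here’s a minimal sketch of what “security as a functional delivery requirement” might look like in practice: policy checks expressed as code and run in the same pipeline that ships the application. The manifest format and the rules themselves are hypothetical:

```python
# Minimal sketch: policy-as-code gating a deployment. The manifest schema and
# the two rules below are hypothetical examples, not a real product's API.
deployment_manifest = {
    "service": "billing-api",
    "listeners": [{"port": 443, "tls": True}],
    "storage": {"encrypted_at_rest": True},
}

def policy_violations(manifest: dict) -> list:
    violations = []
    for listener in manifest.get("listeners", []):
        if not listener.get("tls"):
            violations.append(f"listener on port {listener['port']} is not using TLS")
    if not manifest.get("storage", {}).get("encrypted_at_rest"):
        violations.append("storage is not encrypted at rest")
    return violations

if __name__ == "__main__":
    problems = policy_violations(deployment_manifest)
    if problems:
        # Block the release; the check runs at machine speed, alongside tests.
        raise SystemExit("deployment blocked: " + "; ".join(problems))
    print("policy checks passed; deployment may proceed")
```

The specifics don’t matter; what matters is that the checks travel with the application and execute automatically rather than living in a binder.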

“Security” in complex, distributed and networked systems is NOT a tidy simple atomic service.  It is, unfortunately, represented as such because we choose to use a single noun to represent an aggregate of many sub-services, shotgunned across many layers, each with its own context, metadata, protocols and consumption models.

As the use cases for public cloud obscure and abstract these layers — flatten them — we’re left with the core of what we should focus on:

Build secure, reliable, resilient, and survivable systems of applications, comprised of secure services, atop platforms that are themselves engineered to do the same, in such a way that the information which transits them inherits these qualities.

So if Public Cloud forces one to think this way, how does one relate this to practices of today?

Frankly, enterprise (network) security design patterns are a crutch.  The screened-subnet DMZ pattern with its perimeters is outmoded. As Gunnar Peterson eloquently described, our best attempts at “security” over time are always some variation of firewalls and SSL.  This is the sux0r.  Importantly, this is not stated to blame anyone or suggest that a bad job is being done, but rather that a better one can be.

It’s not like we don’t know *what* the problems are, we just don’t invest in solving them as long term projects.  Instead, we deploy compensation that defers what is now becoming more inevitable: the compromise of applications that are poorly engineered and defended by systems that have no knowledge or context of the things they are defending.

We all know this, but yet looking at most private cloud platforms and implementations, we gravitate toward replicating these traditional design patterns logically after we’ve gone to so much trouble to articulate our way around them.  Public clouds make us approach what, where and how we apply “security” differently because we don’t have these crutches.

Either we learn to walk without them or we simply don’t move forward.

Now, let me be clear.  I’m not suggesting that we don’t need security controls, but I do mean that we need a different and better application of them at a different level, protecting things that aren’t tied to physical topology or addressing schemes…or operating systems (inclusive of things like hypervisors, also.)

I think we’re getting closer.  Beyond infrastructure as a service, platform as a service gets us even closer.

Interestingly, at the same time we see the evolution of computing with Public Cloud, networking is also undergoing a renaissance, and as this occurs, security is coming along for the ride.  Because it has to.

As I was writing this blog (ironically in the parking lot of VMware awaiting the start of a meeting to discuss abstraction, networking and security,) James Staten (Forrester) tweeted something from Werner Vogels’ (@Werner) keynote at AWS re:Invent:

I couldn’t have said it better myself 🙂

So while I may have been, and will continue to be, a thorn in the side of platform providers to improve the “survivability” capabilities that help us get from here to there, I reiterate the title of this scribbling: Amazon Web Services (AWS) Is the Best Thing To Happen To Security & I Desperately Want It To Succeed.

I trust that’s clear?

/Hoff

P.S. There’s so much more I could/should write, but I’m late for the meeting 🙂


Six Degrees Of Desperation: When Defense Becomes Offense…

July 15th, 2012
[Image: Defensive and offensive lines in American football (Photo credit: Wikipedia)]

One cannot swing a dead cat without bumping into at least one expose in the mainstream media regarding how various nation states are engaged in what is described as “Cyberwar.”

The obligatory shots of darkened rooms filled with pimply-faced spooky characters basking in the green glow of command line sessions furiously typing are dosed with trademark interstitial fade-ins featuring the masks of Anonymous set amongst a backdrop of shots of smoky Syrian streets during the uprising,  power grids and nuclear power plants in lockdown replete with alarms and flashing lights accompanied by plunging stock-ticker animations laid over the trademark icons of financial trading floors.

Terms like Stuxnet, Zeus, and Flame have emerged from the obscure .DAT files of AV research labs and now occupy a prominent spot in the lexicon of popular culture…right alongside the word “Hacker,” which now almost certainly brings with it only the negative connotation it has been (re)designed to impart.

In all of this “Cyberwar” we hear that the U.S. defense complex is woefully unprepared to deal with the sophistication, volume and severity of the attacks we are under on a daily basis.  Further, statistics from the Private Sector suggest that adversaries are becoming more aggressive, motivated, innovative, advanced, and successful in their ability to attack what are described as basically undefended — nee’ undefendable — assets.

In all of this talk of “Cyberwar,” we were led to believe that the U.S. Government — despite hostile acts of “cyberaggression” from “enemies” foreign and domestic — never engaged in pre-emptive acts of Cyberwar.  We were led to believe that despite escalating cases of documented incursions across our critical infrastructure (Aurora, Titan Rain, etc.,) that our response was reactionary, limited in scope and reach and almost purely detective/forensic in nature.

It’s pretty clear that was a farce.

However, what’s interesting — besides the amazing geopolitical, cultural, socio-economic, sovereign, financial and diplomatic issues that war of any sort brings, including “cyberwar” — is that even in the Private Sector, we’re still led to believe that we’re unable, unwilling or forbidden to do anything but passively respond to attack.

There are some very good reasons for that argument, and some which need further debate.

Advanced adversaries are often innovative and unconstrained in their attack methodologies yet defenders remain firmly rooted in the classical OODA-fueled loops of the past where the A, “act,” generally includes some convoluted mixture of detection, incident response and cleanup…which is often followed up with a second dose when the next attack occurs.

As such, “Defenders” need better definitions of what “defense” means and how a silent discard from a firewall, a TCP RST from an IPS or a blip from Bro is simply not enough.  What I’m talking about here is what defensive linemen look to do when squared up across from their offensive linemen opponents — not to just hold the line to prevent further down-field penetration, but to sack the quarterback or better yet, cause a fumble or error and intercept a pass to culminate in running one in for points to their advantage.

That’s a big difference between holding till fourth down and hoping the offense can manage to not suffer the same fate from the opposition.

That implies there’s a difference between “winning” and “not losing,” with arbitrary values of the latter.

Put simply, it means we should employ methods that make it more and more difficult, costly, time-consuming and less automatable for the attacker to carry out his/her mission…[more] active defense.

I’ve written about this before in 2009 in “Incomplete Thought: Offensive Computing – The Empire Strikes Back,” wherein I asked people’s opinion on both their response to and definition of “offensive security.”  This was a poor term…so I was delighted when I found my buddy Rich Mogull had taken the time to clarify the vocabulary around this issue in his blog titled “Thoughts on Active Defense, Intrusion Deception, and Counterstrikes.”

Rich wrote:

…Here are some possible definitions we can work with:

  • Active defense: Altering your environment and system responses dynamically based on the activity of potential attackers, to both frustrate attacks and more definitively identify actual attacks. Try to tie up the attacker and gain more information on them without engaging in offensive attacks yourself. A rudimentary example is throwing up an extra verification page when someone tries to leave potential blog spam, all the way up to tools like Mykonos that deliberately screw with attackers to waste their time and reduce potential false positives.
  • Intrusion deception: Pollute your environment with false information designed to frustrate attackers. You can also instrument these systems/datum to identify attacks. DataSoft Nova is an example of this. Active defense engages with attackers, while intrusion deception can also be more passive.
  • Honeypots & tripwires: Purely passive (and static) tools with false information designed to entice and identify an attacker.
  • Counterstrike: Attack the attacker by engaging in offensive activity that extends beyond your perimeter.

These aren’t exclusive – Mykonos also uses intrusion deception, while Nova can also use active defense. The core idea is to leave things for attackers to touch, and instrument them so you can identify the intruders. Except for counterattacks, which move outside your perimeter and are legally risky.

I think we’re seeing technology that wasn’t ready for primetime re-emerge and become more prominent in consideration as folks refresh their toolchests looking for answers to the problems that “passive response” presents.  It’s important to understand that tools like these — in isolation — won’t solve many complex attacks, nor are they a silver bullet, but understanding that we’re not limited to cleanup is important.
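As a concrete (and deliberately simple) illustration of the “active defense” flavor Rich describes, think of the extra-verification-page example above. A minimal sketch using Flask might look like the following; the honeytoken field name, the delay and the responses are all hypothetical:

```python
# Minimal active-defense sketch: a hidden form field that no human fills in,
# plus a delay and an extra verification demand for anything that does.
import time
from flask import Flask, request

app = Flask(__name__)

COMMENT_FORM = """
<form method="post" action="/comment">
  <textarea name="body"></textarea>
  <!-- hidden honeytoken field: real users/browsers never populate this -->
  <input type="text" name="website_url" style="display:none" value="">
  <button type="submit">Post</button>
</form>
"""

@app.route("/comment", methods=["GET"])
def show_form():
    return COMMENT_FORM

@app.route("/comment", methods=["POST"])
def post_comment():
    if request.form.get("website_url"):
        # Likely automated: tie the client up a little and demand extra
        # verification instead of silently dropping the request.
        time.sleep(5)
        return "Please complete the additional verification step to continue.", 429
    return "Comment accepted.", 200
```

Nothing here strikes back at anyone; it just makes the attacker’s automation slower and noisier while producing a high-confidence signal for the defender.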

The language of “active defense,” like Rich’s above, is being spoken more and more.

Traditional networking and security companies such as Juniper* are acquiring upstarts like Mykonos Software in this space.  Mykonos’ mission is to “…change the economics of hacking…by making the attack surface variable and inserting deceptive detection points into the web application…mak[ing] hacking a website more time consuming, tedious and costly to an attacker. Because the web application is no longer passive, it also makes attacks more difficult.”

VCs like Kleiner Perkins are funding companies whose operating premise is a more active “response,” such as the in-stealth company “Shape Security” that expects to “…change the web security paradigm by shifting costs from defenders to hackers.”

Or, as Rich defined above, the notion of “counterstrike” outside one’s “perimeter” is beginning to garner open discussion now that we’ve seen what’s possible in the wild.

In fact, check out the abstract from Shawn Henry of the newly-unstealthed company “CrowdStrike” at Defcon 20, titled “Changing the Security Paradigm: Taking Back Your Network and Bringing Pain to the Adversary”:

The threat to our networks is increasing at an unprecedented rate. The hostile environment we operate in has rendered traditional security strategies obsolete. Adversary advances require changes in the way we operate, and “offense” changes the game.

Prior to joining CrowdStrike, Henry was with the FBI for 24 years, most recently as Executive Assistant Director, where he was responsible for all FBI criminal investigations, cyber investigations, and international operations worldwide.

If you look at Mr. Henry’s credentials, it’s clear where the motivation and customer base are likely to flow.

Without turning this little highlight into a major opus — because when discussing this topic it’s quite easy to do so given the definition and implications of “active defense” — I hope this has scratched an itch and that you’ll spend more time investigating this fascinating topic.

I’m convinced we will see more and more as the cybersword rattling continues.

Have you investigated technology solutions that offer more “active defense?”

/Hoff

* Full disclosure: I work for Juniper Networks who recently acquired Mykonos Software mentioned above.  I hold a position in, and enjoy a salary from, Juniper Networks, Inc. 😉


Unsafe At Any Speed: The Darkside Of Automation

July 29th, 2011

I’m a huge proponent of automation. Taking rote processes from the hands of humans & leveraging machines of all types to enable higher agility, lower cost and increased efficacy is a wonderful thing.

However, there’s a trade off; as automation matures and feedback loops become more closed with higher and higher clock rates yielding less time between execution, our ability to both detect and recover — let alone prevent — within a cascading failure domain is diminished.

Take three interesting, yet unrelated, examples:

  1. The premise of the W.O.P.R. in War Games — Joshua goes apeshit and almost starts WWIII by invoking a simulated game of global thermonuclear war
  2. The Airbus A380 failure – the luck of having 5 pilots on-board and their skill to override hundreds of cascading automation failures after an engine failure prevented a crash that would have killed hundreds of people.*
  3. The AWS EBS outage — the cloud version of Girls Gone Wild; automated replication caught in a FOR…NEXT loop

These weren’t “maliciously initiated” issues, they were accidents.  But how about “events” like Stuxnet?  What about a former Gartner analyst having his home automation (CASA-SCADA) control system hax0r3d!? There’s another obvious one missing, but we’ll get to that in a minute (hint: Flash Crash)

How do we engineer enough failsafe logic up and down the stack that can function at the same scale as the decision and controller logic does?  How do we integrate/expose enough telemetry that can be produced and consumed fast enough to actually allow actionable results in a timeframe that allows for graceful failure and recovery (nee survivability)?
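For what it’s worth, here’s a minimal sketch of the sort of failsafe logic those questions imply: an automation loop wrapped in a rate limiter and a circuit breaker, so a runaway remediation trips open and escalates to a human rather than cascading. The thresholds and the remediate() stub are hypothetical:

```python
# Minimal sketch of failsafe logic around an automation loop: a rate limiter
# plus a circuit breaker so the automation fails "open" to humans.
import time

MAX_ACTIONS_PER_MINUTE = 10       # hypothetical safety thresholds
MAX_CONSECUTIVE_FAILURES = 3

def remediate(event) -> bool:
    """Placeholder for the automated action (re-mirror a volume, restart a
    service, re-route traffic). Pretend it succeeds."""
    return True

def control_loop(events):
    window_start = time.monotonic()
    actions_in_window = 0
    failures = 0
    for event in events:
        now = time.monotonic()
        if now - window_start > 60:
            window_start, actions_in_window = now, 0
        if actions_in_window >= MAX_ACTIONS_PER_MINUTE:
            # Rate limit: stop acting at machine speed and ask for a human.
            raise RuntimeError("rate limit hit: automation paused for review")
        if failures >= MAX_CONSECUTIVE_FAILURES:
            # Circuit breaker: repeated failures suggest the automation itself
            # is the problem, so trip open instead of cascading.
            raise RuntimeError("circuit open: automation suspended")
        actions_in_window += 1
        failures = 0 if remediate(event) else failures + 1
```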

One last example that is pertinent: high frequency trading (HFT) —  highly automated, computer driven, algorithmic-based stock trading at speeds measured in millionths of a second.

Check out how this works:

[Check out James Urquhart’s great Wisdom Of the Clouds blog post: “What Cloud Computing Can Learn From Flash Crash“]

In the use-case of HFT, ruthlessly squeezing nanoseconds from the processing loops — removing as much latency as possible from every element of the stack — literally has implications in the millions of dollars.

Technology vendors are doing many very interesting and innovative things architecturally to achieve these goals — some of them quite audacious — and anything that gets in the way or adds latency is generally not considered “useful.”  Security is usually one of them.

There are most definitely security technologies that allow for very low latency insertion of things like firewalls that boast low single-digit microsecond latency figures (small packet,) but interestingly enough we’re also governed by the annoying laws of physics, and things like propagation delay, serialization delay, TCP/IP protocol overhead, etc. all add up.
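A quick back-of-the-envelope sketch shows why those single-digit microseconds still matter; the physics-imposed pieces of the budget are already in the same ballpark. The 3 µs in-line device figure below is a hypothetical example:

```python
# Back-of-the-envelope latency budget using standard approximations:
# serialization = packet bits / link rate; fibre propagation ~5 ns per metre.
FRAME_BITS = 64 * 8                 # small (64-byte) frame
LINK_RATE_BPS = 10e9                # 10 Gb/s
FIBER_PROP_S_PER_M = 1 / 2e8        # ~5 ns per metre in fibre

serialization_s = FRAME_BITS / LINK_RATE_BPS          # ~51.2 ns per hop
propagation_s = 100 * FIBER_PROP_S_PER_M              # ~500 ns over 100 m
firewall_s = 3e-6                                     # hypothetical 3 us in-line device

total_us = (serialization_s + propagation_s + firewall_s) * 1e6
print(f"serialization: {serialization_s*1e9:.1f} ns, "
      f"propagation (100 m): {propagation_s*1e9:.0f} ns, "
      f"in-line device: {firewall_s*1e6:.1f} us, total ~= {total_us:.2f} us")
```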

Traditional approaches to “in-line” security — both detective and preventative — are thus not generally sustainable in these environments, and they require some deep thought in order to provide solutions that will scale as well as these HFT systems do…no short order.

I think this is another good use for “big data” and security data analytics.  Consider very high speed side-band systems that function along with these HFT systems that could potentially leverage the logic in these transactional trading systems to allow us to get closer to being able to solve the challenges of these environments.  Integrate these signaling and telemetry planes with “fabric-enabled” security capabilities and we might get somewhere useful.
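As a minimal sketch of what such a side-band analytic might look like, consider something that consumes a mirrored copy of the trading telemetry and signals the control plane rather than sitting in-line. The feed, fields and thresholds are all hypothetical:

```python
# Minimal out-of-line anomaly detector over a mirrored telemetry stream.
from collections import deque
from statistics import mean, stdev

WINDOW = 300          # hypothetical: baseline over the last 300 one-second samples
MIN_SAMPLES = 30
Z_THRESHOLD = 6.0

def watch(order_rate_stream):
    """Consume orders-per-second samples mirrored off the fabric and yield
    anomalies out-of-band, adding no latency to the trading path."""
    history = deque(maxlen=WINDOW)
    for rate in order_rate_stream:
        if len(history) >= MIN_SAMPLES:
            mu, sigma = mean(history), stdev(history)
            if sigma and (rate - mu) / sigma > Z_THRESHOLD:
                # Signal the control plane (throttle, alert, isolate) here.
                yield ("anomaly", rate, mu)
        history.append(rate)
```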

This tees up nicely my buddy James Arlen’s talk at Black Hat on the insecurity of high frequency trading systems: “Security when nano seconds count.”  You should plan on checking it out…I know I will.

/Hoff

*H/T to @reillyusa who also pointed me to “Questions Raised About Airbus Automated Control System” regarding the doomed Air France 447 flight.  Also, serendipitously, @etherealmind posted a link to a story titled “Volkswagen demonstrates ‘Temporary Auto Pilot’” — what could *possibly* go wrong? 😉


More On Security & Big Data…Where Data Analytics and Security Collide

July 22nd, 2011

My last blog post, “InfoSec Fail: The Problem With Big Data Is Little Data,” prattled on a bit about how large data warehouses (or data lakes, from “Big Data Requires A Big, New Architecture”) and the intersection of next generation data centers, mobility and cloud computing were putting even more stress on “security”:

As Big Data and the databases/datastores it lives in interact with the proliferation of PaaS and SaaS offers, we have an opportunity to explore better ways of dealing with these problems — this is the benefit of mass centralization of information.

Of course there is an equal and opposite reaction to the “data gravity” property: mobility…and the replication (in chunks) and re-use of the same information across multiple devices.

This is when Big Data becomes Small Data and the ability to protect it gets even harder.

With the enormous amounts of data available, mining it — regardless of its source — and turning it into actionable information (nee intelligence) is really a strategic necessity, especially in the world of “security.”

Traditionally we’ve had to use tools such as security information and event management (SIEM) tools or specialized visualization* suites to make sense of what ends up being telemetry which is often disconnected from the transaction and value of the asset from which it emanates.

Even when we do start to be able to integrate and correlate event, configuration, vulnerability or logging data, it’s very IT-centric.  It’s very INFRASTRUCTURE-centric.  It doesn’t really include much value about the actual information in use/transit or the implication of how it’s being consumed or related to.

This is where using Big Data and collective pools of sourced “puddles” as part of a larger data “lake” and then mining it using toolsets such as Hadoop come into play.
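By way of a minimal sketch of what mining the “lake” can look like mechanically, here’s a Hadoop Streaming style mapper/reducer pair in Python that counts failed logins per source IP across however much authentication log data sits in the store. The log format is hypothetical:

```python
# Minimal Hadoop Streaming sketch: count failed logins per source IP.
# Mapper emits "ip\t1"; reducer sums counts over key-sorted input.
import sys

def mapper():
    for line in sys.stdin:
        # hypothetical format: "<timestamp> auth FAILED user=<u> src=<ip>"
        if " auth FAILED " in line:
            src = line.rsplit("src=", 1)[-1].strip()
            print(f"{src}\t1")

def reducer():
    current, count = None, 0
    for line in sys.stdin:
        key, value = line.rstrip("\n").split("\t")
        if key != current:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = key, 0
        count += int(value)
    if current is not None:
        print(f"{current}\t{count}")

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()
```

Run under Hadoop Streaming, which sorts the mapper output by key before the reduce, the same few dozen lines work identically against a laptop-sized log file or a petabyte-scale data lake.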

We’re starting to see the commercialization of Hadoop move outside of vertical use cases for financial services and healthcare and become more broadly adopted for analytics across entire lines of business, industries and verticals.  Combine the availability of cheap storage with ever more powerful and cost-effective compute and network and you’ve got a goldmine ready to tap.

One such solution you’ll hear more about is Zettaset, which commercializes and productizes Hadoop to enable the construction of enormously powerful data security warehouses and analytics.

Zettaset is a key component of a solution offering that does what I describe above for the CISO of a large company, integrating enormous amounts of disparate and seemingly unrelated data to make managed risk decisions that are fed to humans and automated processes alike.

These data are sourced from all across the business — including IT — and allow the teams and constituent interested parties from across the company to slice and dice petabytes of information which previously would have been siloed.  Powerful.

Look for more announcements about this solution around the Blackhat timeframe.  It’s cool stuff.

This is one example where Big Data and “security” are paired in the positive.

/Hoff

* Ken Oestreich (@Fountnhead) tweeted an interesting and pertinent comment regarding my points related to SIEM and visualization tools that summarized the general idea I was getting at in referencing these existing toolsets:

…which of course was underscored by the clearly-bored Christian Reilly, who has Citrix’s Cloud strategy already wrapped up tighter than a piñata at a Mexican wedding:


Cloud & Virtualization Stacks: Users Fear Lock-In, Ecosystem Fears Lock-Out…

June 7th, 2011
[Image: Cover of “Groundhog Day” (via Amazon)]

I don’t think I’m verbalizing something very well…so I decided to write it down. I still don’t think I’m managing to make my point stick, but perhaps clarity will come by discussion.

Simon Wardley has written about market dynamics and behaviors associated with the emergence and ultimate commoditization of things many, many times, but I’m not sure that I’ve found satisfaction in being able to accurately describe the dysfunctional co-dependency between consumer, leading vendor and ecosystem supporters to my liking.

Here’s an example…

There’s an uneasy tension that seems to often become nothing more than wink-and-nod-subtext in discussions relating to the various stacks offered by leading cloud and virtualization vendors.  Even those efforts with the “open” or “open source” descriptor bolted on for good measure.

It occurs to me that this can be attributed to many things: the business and licensing model of the solution provider, the ultimate “consumer” the offering targets, the area(s) of differentiation around technology, the maturity of the ecosystem, and the amount of self-integration versus vendor support required to successfully operationalize and maintain the solution.

More directly, the tension I refer to is the desire (or at least the oft-verbalized complaint) on the part of “consumers” of cloud and virtualization stacks to not be “locked in” to a single vendor, balanced against the odd juxtaposition — but not entirely unreasonable requirement — of simultaneously not being subject to the “integrator’s dilemma” and having to support it all themselves.

Stir in the ecosystem of ISVs and solutions providers who orbit around these stacks, adding value where they have the “permission” to do so before either the stack provider obviates their existence by baking those features in directly, or simply makes it increasingly more difficult to roadmap given engineering dependencies they can’t control or count on.

I alluded to some of this in my blog titled Cloud Computing, Open* and the Integrator’s Dilemma wherein I mused:

I am just as worried about the fate of OpenStack and its enterprise versus service provider audience and how it’s being perceived as they watch the mad scramble by tech companies to add value and get a seat at the table.

Each of these well-intentioned projects is curated by public cloud operators and technology vendors and is indirectly positioned for the benefit of enterprises, but not really meant for their consumption — at least not if they don’t end up putting enterprises right back where they were trying to escape from in the first place with cloud computing: the integrator’s dilemma.

If you look at the underlying premise of OpenStack — its modularity, flexibility and open design — what you get is the ability to craft a solution finely tuned to an operating environment of your design. Integrate solutions into the stack as you see fit.  Contribute code.  Develop an ecosystem. Integrate, manage, maintain…

This is as much a problem as it is a solution for an enterprise.  This is why, in many cases, enterprises choose to use a single vendor with a single neck to choke in order to avoid having to act as an integrator in the first place, or simply look to outsource to one or more public cloud providers and avoid the issue altogether.

Chances are, most are realistically caught up somewhere in the nether-regions in between the two.

This all sounds so eerily familiar…


The Curious Case Of the MBO Cloud

December 23rd, 2010

I was speaking to an enterprise account manager the other day regarding strategic engagements in Cloud Computing in very large enterprises.

He remarked on the non-surprising parallelism occurring as these companies build and execute on cloud strategies that involve both public and private cloud initiatives.

Many of them are still trying to leverage the value of virtualization and are thus often conservative about their path forward.  Many are blazing new trails.

We talked about the usual barriers to entry for even small PoCs: compliance, security, lack of expertise, budget, etc., and then he shook his head solemnly, stared at the ground and mumbled something about a new threat to the overall progress toward enterprise cloud adoption.

MBO Cloud.

We’ve all heard of public, private, virtual private, hybrid, and community clouds, right? But “MBO Cloud?”

I asked. He clarified:

Cloud computing is such a hot topic, especially with its promise of huge cost savings, agility, and reduced time-to-market for services and goods, that many large companies who might otherwise be unable or unwilling to pilot using a public cloud provider, and who also don’t want to risk much if any capital outlay for software and infrastructure to test private cloud, are taking an interesting turn.

They’re trying to replicate Amazon or Google but not for the right reasons or workloads. They just look at “cloud” as some generic low-cost infrastructure platform that requires some open source and a couple of consultants — or even a full-time team of “developers” assigned to make it tick.

They rush out, buy 10 off-the-shelf white-label commodity multi-CPU/multi-core servers, acquire a plain vanilla NAS or low-end SAN storage appliance, sprinkle on some Xen or KVM, load on some unremarkable random set of open source software packages to test with a tidy web front-end and call it “cloud.”  No provisioning, no orchestration, no self-service portals, no chargeback, no security, no real scale, no operational re-alignment, no core applications…

It costs them next to nothing and it delivers about the same because they’re not designing for business cases that are at all relevant, they’re simply trying to copy Amazon and point to a shiny new rack as “cloud.”

Why do they do this? To gain experience and expertise? To dabble cautiously in an emerging set of technological and operational models?  To offload critical workloads that scale up and down?

Nope.  They do this for two reasons:

  1. Now that they have proven they can “successfully” spin up a “cloud” — however useless it may be — that costs next to nothing, it gives them leverage to squeeze vendors on pricing when and if they are able to move beyond this pile of junk, and
  2. Management By Objectives (MBO) — or a fancy way of saying, “bonus.”  Many C-levels and their ops staff are compensated via bonus on hitting certain objectives. One of them (for all the reasons stated above) is “deliver on the strategy and execution for cloud computing.” This half-hearted effort sadly qualifies.

So here’s the problem…when these efforts flame out and don’t deliver, they will impact the success of cloud in general — everywhere from a private cloud vendor to even potentially public cloud offers like AWS.  Why?  Because as we already know, *anything* that smells at all like failure gets reflexively blamed on cloud these days, and as these craptastic “cloud” PoCs fail to deliver — even on minimal cash outlay — it’s going to be hard to get a second chance given the bad taste left in the mouths of the business and management.

The opposite point could also be made in regard to public cloud services — that these truly “false cloud*” trials, based on poorly architected and executed bubble gum and baling wire, will drive companies to public cloud (however long that may take as compliance and security catch up.)

It will be interesting to see which happens first.

Either way, beware the actual “false cloud” but realize that the motivation behind many of them isn’t the betterment of the business or evolution of IT, it’s the fattening of wallets.

/Hoff

* I’m leveraging “false cloud” here to truly illustrate a point; despite actually useful private cloud initiatives, this is a term unfortunately levied on all private cloud initiatives by certain public cloud providers.


Incomplete Thought: Why We Have The iPhone and AT&T To Thank For Cloud…

December 15th, 2010

I’m not sure this makes any sense whatsoever, but that’s why it’s labeled “incomplete thought,” isn’t it? 😉

A few weeks ago I was delivering my Cloudinomicon talk at the Cloud Security Alliance Congress in Orlando and as I was describing the cyclical nature of computing paradigms and the Security Hamster Sine Wave of Pain, it dawned on me — out loud — that we have Apple’s iPhone and its U.S. carrier, AT&T, to thank for the success of Cloud Computing.

My friends from AT&T perked up when I said that.  Then I explained…

So let me set this up. It will require some blog article ping-pong in order to reference earlier scribbling on the topic, but here’s the very rough process:

  1. I’ve pointed out that there are two fundamental perspectives when describing Cloud and Cloud Computing: the operational provider’s view and the experiential consumer’s view.  To the provider, the IT-centric, empirical and clinical nuances are what matters. To the consumer, anything that connects to the Internet via any computing platform using any app that interacts with any sort of information is also cloud.  There’s probably a business/market view, but I’ll keep things simple for purpose of illustration.  I wrote about this here:  Cloud/Cloud Computing Definitions – Why they Do(n’t) Matter…
  2. As we look at the adoption of cloud computing, the consumption model ultimately becomes more interesting than how the service is delivered (as it commoditizes.) My presentation “The Future of Cloud” focused on the fact that the mobile computing platforms (phones, iPads, netbooks, thin(ner) clients, etc) are really the next frontier.  I pointed out that we have the simultaneous mass re-centralization of applications and data in massive cloud data centers (however distributed ethereally they may be) and the massive distribution of the same applications and data across increasingly more intelligent, capable and storage-enabled mobile computing devices.  I wrote about this here: Slides from My Cloud Security Alliance Keynote: The Cloud Magic 8 Ball (Future Of Cloud)
  3. The iPhone isn’t really that remarkable a piece of technology in and of itself; in fact it capitalizes on and cannibalizes many innovations and technologies that came before it.  However, as I mentioned in my post “Cloud Maturity: Just Like the iPhone, There’s An App For That…”: “The thing I love about my iPhone is that it’s not a piece of technology I think about but rather, it’s the way I interact with it to get what I want done.  It has its quirks, but it works…for millions of people.  Add in iTunes, the community of music/video/application artists/developers and the ecosystem that surrounds it, and voila…Cloud.”

  4. At each and every compute paradigm shift, we’ve seen the value of the network waffle between “intelligent” platform and simple transport depending upon where we were with the intersection of speeds/feeds and ubiquity/availability of access (the collision of Moore’s and Metcalfe’s laws?)  In many cases, we’ve had to rely on workarounds that have hindered the adoption of really valuable and worthwhile technologies and operational models because the “network” didn’t deliver.

I think we’re bumping up against point #4 today.  So here’s where I find this interesting.  If we see the move to the consumerized view of accessing resources from mobile platforms to resources located both on-phone and in-cloud, you’ll notice that even in densely-populated high-technology urban settings, we have poor coverage, slow transit and congested, high-latency, low-speed access — wired and wireless for that matter.

This is a problem. In fact it’s such a problem that if we look backward to about 4 years ago when “cloud computing” and the iPhone became entries in the lexicon of popular culture, this issue completely changed the entire application deployment model and use case of the iPhone as a mobile platform.  Huh?

Do you remember when the iPhone first came out? It was a reasonably capable compute platform with a decent amount of storage. Its network connectivity, however, sucked.

Pair that with the fact that the application strategy was that there were emphatically, per Steve Jobs, not going to be native applications on the iPhone, for many reasons including security.  Every application was basically just a hyperlink to a web application located elsewhere.  The phone was nothing more than a web browser that delivered applications running elsewhere (for the most part, especially when things like Flash weren’t present.) Today we’d call that “The Cloud.”

Interestingly, at this point, the value of the iPhone as an application platform was diminished since it was not highly differentiated from any other smartphone that had a web browser.

Time went by and connectivity was still so awful and unreliable that Apple reversed direction to drive value and revenue in the platform, engaged a developer community, created the App Store and provided for a hybrid model — apps both on-platform and off — in order to deal with this lack of ubiquitous connectivity.  Operating systems, protocols and applications were invented/deployed in order to deal with the synchronization of on- and off-line application and information usage because we don’t have pervasive high-speed connectivity in the form of cellular or wifi such that we otherwise wouldn’t care.

So this gets back to what I meant when I said we have AT&T to thank for Cloud.  If you can imagine that we *did* have amazingly reliable and ubiquitous connectivity from devices like our iPhones — those consumerized access points to our apps and data — perhaps the demand for and/or use patterns of cloud computing would be wildly different from where they are today. Perhaps they wouldn’t, but if you think back to each of those huge compute paradigm shifts — mainframe, mini, micro, P.C., Web 1.0, Web 2.0 — the “network” in terms of reliability, ubiquity and speed has always played a central role in adoption of technology and operational models.

Same as it ever was.

So, thanks AT&T — you may have inadvertently accelerated the back-end of cloud in order to otherwise compensate, leverage and improve the front-end of cloud (and vice versa.)  Now, can you do something about the fact that I have no signal at my house, please?

/Hoff


On Security Conference Themes: Offense *Versus* Defense – Or, Can You Code?

November 22nd, 2010

This morning’s dialog on Twitter from @wmremes and @singe reminded me of something that’s been bouncing around in my head for some time.

Wim blogged about a tweet Jeff Moss made regarding Black Hat DC in which he suggested CFP submissions should focus on offense (versus defense.)

Black Hat (and Defcon) have long focused on presentations which highlight novel emerging attacks.  There are generally not a lot of high-profile “defensive” presentations/talks because for the most part, they’re just not sexy, generally they involve hard work/cultural realignment and the reality that as hard as we try, attackers will always out-innovate and out-pace defenders.

More realistically, offense is sexy and offense sells — and it often sells defense.  That’s why vendors sponsor those shows in the first place.

Along these lines, one will notice that within our industry, the defining criterion for the attack versus defend talks and those that give them, is one’s ability to write code and produce tools that demonstrate the vulnerability via exploit.  Conceptual vulnerabilities paired with non-existent exploits are generally thought of as fodder for academia.  Only when a tool that weaponizes an attack shows up do people pay attention.

Zero days rule by definition. There’s no analog on the defensive side unless you buy into marketing like “…ahead of the threat.” *cough* Defense for offense that doesn’t exist generally doesn’t get the majority of the funding 😉

So it’s no wonder that security “rockstars” in our industry are generally those who produce attack/offensive code which illustrate how a vector can be exploited.  It’s tangible.  It’s demonstrable.  It’s sexy.

On the other hand, most defenders are reconciled to using tools that others wrote — or become specialists in the integration of them — in order to parlay some advantage over the ever-increasing wares of the former.

Think of those folks who represent the security industry in terms of mindshare and get the most amount of press.  Overwhelmingly it’s those “hax0rs” who write cool tools — tools that are more offensive in nature, even if they produce results oriented toward allowing practitioners to defend better (or at least that’s how they’re sold.)  That said, there are also some folks who *do* code and *do* create things that are defensive in nature.

I believe the answer lies in balance; we need flashy exploits (no matter how impractical/irrelevant they may be to a large amount of the population) to drive awareness.  We also need  more practitioner/governance talks to give people platforms upon which they can start to architect solutions.  We need more defenders to be able to write code.

Perhaps that’s what Richard Bejtlich meant when he tweeted: “Real security is built, not bought.”  That’s an interesting statement on lots of fronts. I’m selfishly taking Richard’s statement out of context to support my point, so hopefully he’ll forgive me.

That said, I don’t write code.  More specifically, I don’t write code well.  I have hundreds of ideas of things I’d like to do but can’t bridge the gap between ideation and proof-of-concept because I can’t write code.

This is why I often “invent” scenarios I find plausible, talk about them, and then get people thinking about how we would defend against them — usually in the vacuum of either offensive or defensive tools being available, or at least realized.

Sometimes there aren’t good answers.

I hope we focus on this balance more at shows like Black Hat — I’m lucky enough to get to present my “research” there despite it being defensive in nature but we need more defensive tools and talks to make this a reality.

/Hoff
