Posts Tagged ‘AWS’

Why Amazon Web Services (AWS) Is the Best Thing To Happen To Security & Why I Desperately Want It To Succeed

November 29th, 2012

Many people who may only casually read my blog or peer at the timeline of my tweets may come away with the opinion that I suffer from confirmation bias when I speak about security and Cloud.

That is, many conclude that I am pro Private Cloud and against Public Cloud.

I find this deliciously ironic and wildly inaccurate. However, I must also take some responsibility for it: any time one threads the needle and attempts to present both sides of an incendiary topic without planting a polarizing stake in the ground, confusion follows.

Let me clear some things up.

Digging deeper into what I believe, one would actually find that my blog, tweets, presentations, talks and keynotes highlight deficiencies in current security practices and solutions on the part of providers, practitioners and users in both Public AND Private Cloud, and in my own estimation, deliver an operationally-centric perspective that is reasonably critical and yet sensitive to emergent paths as well as the well-trodden path behind us.

I’m not a developer. I dabble in little bits of code (interpreted and compiled) for humor and to try to remain relevant. Nor am I an application security expert, for the same reason. However, I spend a lot of time around developers of all sorts, including those who write code for machines whose end goal isn’t to deliver applications directly, but rather to help deliver them securely. Which may seem odd as you read on…

The name of this blog, Rational Survivability, highlights my belief that the last two decades of security architecture and practices — while useful as a foundation — require a rather aggressive tune-up of priorities.

Our trust models, architecture, and operational silos have not kept pace with the velocity of the environments they were initially designed to support and unfortunately as defenders, we’ve been outpaced by both developers and attackers.

Since we’ve come to the conclusion that there’s no such thing as perfect security, “survivability” is a better goal.  Survivability leverages “security” and is ultimately a subset of resilience but is defined as the “…capability of a system to fulfill its mission, in a timely manner, in the presence of attacks, failures, or accidents.”  You might be interested in this little ditty from back in 2007 on the topic.

Sharp readers will immediately recognize the parallels between this definition of “survivability,” how security applies within context, and how phrases like “design for failure” align. In fact, this is one of the calling cards of a company that has become synonymous with (IaaS) Public Cloud: Amazon Web Services (AWS). I’ll use them as an example going forward.
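To make “design for failure” concrete: a minimal, hypothetical sketch in Python of the posture it implies. Assume any dependency can fail, and bake recovery into the application itself (the function and its parameters are purely illustrative):

```python
# A minimal sketch of "design for failure": assume any dependency can and
# will fail, and make recovery the application's default posture rather
# than an afterthought bolted on by the network.
import random
import time

def call_with_backoff(fn, attempts=5, base_delay=0.5):
    """Retry a flaky call with jittered exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # survivability has limits; surface the failure
            # Jitter prevents a thundering herd of synchronized retries.
            time.sleep(base_delay * (2 ** attempt) * random.random())
```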

So here’s a line in the sand that I think will be polarizing enough:

I really hope that AWS continues to gain traction with the Enterprise.  I hope that AWS continues to disrupt the network and security ecosystem.  I hope that AWS continues to pressure the status quo and I hope that they do it quickly.

Why?

Almost a decade ago, the Open Group’s Jericho Forum published their Commandments. Designed to promote a change in thinking and operational constructs with respect to security, what they presciently released upon the world describes a point at which one might imagine connecting one’s most important assets directly to the Internet, and the shifts required to understand what that would mean for “security”:

  1. The scope and level of protection should be specific and appropriate to the asset at risk.
  2. Security mechanisms must be pervasive, simple, scalable, and easy to manage.
  3. Assume context at your peril.
  4. Devices and applications must communicate using open, secure protocols.
  5. All devices must be capable of maintaining their security policy on an un-trusted network.
  6. All people, processes, and technology must have declared and transparent levels of trust for any transaction to take place.
  7. Mutual trust assurance levels must be determinable.
  8. Authentication, authorization, and accountability must interoperate/exchange outside of your locus/area of control.
  9. Access to data should be controlled by security attributes of the data itself.
  10. Data privacy (and security of any asset of sufficiently high value) requires a segregation of duties/privileges.
  11. By default, data must be appropriately secured when stored, in transit, and in use.

These seem harmless enough today, but were quite unsettling when paired with the notion of “de-perimeterization,” which was often misconstrued to mean the immediate disposal of firewalls. Many security professionals appreciated the commandments for what they expressed, but the design patterns, availability of solutions and belief systems of traditionalists constrained traction.
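Take commandment 11 as a concrete example. A minimal sketch, assuming Python’s cryptography package (illustrative only; real keys belong in a KMS or HSM, never in source):

```python
from cryptography.fernet import Fernet

# Generate a key; in practice this lives in a key-management service.
key = Fernet.generate_key()
box = Fernet(key)

# The record is secured *before* it is stored or transmitted, so whatever
# untrusted network or disk it lands on inherits nothing worth stealing.
token = box.encrypt(b"customer record")
assert box.decrypt(token) == b"customer record"
```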

Interestingly enough, now that the technology, platforms, and utility services have evolved to enable these sorts of capabilities, and in fact have stressed our approaches to date, these exact tenets are what Public Cloud forces us to come to terms with.

If one were to look at what public cloud services like AWS mean when aligned against traditional “enterprise” security architecture, operations and solutions, and map that against the Jericho Forum’s Commandments, one finds a perfect opportunity for a rethink.

Instead of being focused on implementing “security” to protect applications and information at the network layer — which is more often than not blind to both, contextually and semantically — public cloud computing forces us to shift our security models back to protecting the things that matter most: the information and the conduits that traffic in it (applications).

As networks become more abstracted, so do existing security models. This means that we must think about security programmatically, embedded as a functional delivery requirement of the application.
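What might that look like in practice? A minimal sketch, assuming boto3 and AWS credentials (the VPC id and names are illustrative): the application’s ingress policy ships as code next to the application itself, rather than living in a firewall rulebase across an operational silo.

```python
# "Security as code": the policy that once lived in a firewall rulebase
# is declared alongside the application. Assumes boto3 and AWS
# credentials; the VPC id and names below are purely illustrative.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a security group scoped to this application.
sg = ec2.create_security_group(
    GroupName="web-tier",
    Description="Ingress policy shipped with the application",
    VpcId="vpc-0123456789abcdef0",
)

# Permit only HTTPS from anywhere; everything else is denied by default.
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)
```

Because the policy is versioned and deployed with the application, it can finally carry the application’s context instead of a port number’s.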

“Security” in complex, distributed and networked systems is NOT a tidy simple atomic service.  It is, unfortunately, represented as such because we choose to use a single noun to represent an aggregate of many sub-services, shotgunned across many layers, each with its own context, metadata, protocols and consumption models.

As the use cases for public cloud obscure and abstract these layers — flattening them — we’re left with the core of what we should focus on:

Build secure, reliable, resilient, and survivable systems of applications, composed of secure services, atop platforms that are themselves engineered to do the same, such that the information which transits them inherits these qualities.

So if Public Cloud forces one to think this way, how does one relate this to practices of today?

Frankly, enterprise (network) security design patterns are a crutch. The screened-subnet DMZ pattern with its perimeters is outmoded. As Gunnar Peterson eloquently described, our best attempts at “security” over time are always some variation of firewalls and SSL. This is the sux0r. Importantly, this is not stated to blame anyone or to suggest that a bad job is being done, but rather that a better one can be.

It’s not that we don’t know *what* the problems are; we just don’t invest in solving them as long-term projects. Instead, we deploy compensating controls that defer what is becoming inevitable: the compromise of applications that are poorly engineered and defended by systems that have no knowledge or context of the things they are defending.

We all know this, and yet, looking at most private cloud platforms and implementations, we gravitate toward logically replicating these traditional design patterns after we’ve gone to so much trouble to articulate our way around them. Public clouds make us approach what, where and how we apply “security” differently because we don’t have these crutches.

Either we learn to walk without them, or we simply don’t move forward.

Now, let me be clear. I’m not suggesting that we don’t need security controls; I mean that we need a different and better application of them, at a different level, protecting things that aren’t tied to physical topology or addressing schemes…or operating systems (hypervisors included).

I think we’re getting closer.  Beyond infrastructure as a service, platform as a service gets us even closer.

Interestingly, at the same time we see the evolution of computing with Public Cloud, networking is also undergoing a renaissance, and as this occurs, security is coming along for the ride.  Because it has to.

As I was writing this blog post (ironically, in the parking lot of VMware, awaiting the start of a meeting to discuss abstraction, networking and security), James Staten of Forrester tweeted something from Werner Vogels’ (@werner) keynote at AWS re:Invent:

I couldn’t have said it better myself 🙂

So while I may have been, and will continue to be, a thorn in the side of platform providers, pushing them to improve their “survivability” capabilities and help us get from here to there, I reiterate the title of this scribbling: Amazon Web Services (AWS) Is the Best Thing To Happen To Security & I Desperately Want It To Succeed.

I trust that’s clear?

/Hoff

P.S. There’s so much more I could/should write, but I’m late for the meeting 🙂


With Cloud, The PaaSibilities Are Endless…

January 26th, 2012

I read a very interesting article from ZDNet UK this morning titled “Amazon Cuts Off Stack at the PaaS.”

The gist of the article is that, according to Werner Vogels (@werner), AWS’ CTO, they have no intention of delivering a PaaS service and instead expect an ecosystem of PaaS providers, not unlike Heroku, to flourish atop their platform:

“We want 1,000 platforms to bloom,” said Vogels, before explaining Amazon has “no desire to go and really build a [PaaS].”

That’s all well and good, but it led me to scratch my head, especially with regard to what I *thought* AWS already offered in terms of PaaS with Elastic Beanstalk, which is described thusly in their FAQ:

Q: What is AWS Elastic Beanstalk?

AWS Elastic Beanstalk makes it even easier for developers to quickly deploy and manage applications in the AWS Cloud. Developers simply upload their application, and Elastic Beanstalk automatically handles the deployment details of capacity provisioning, load balancing, auto-scaling, and application health monitoring.

Q: How is AWS Elastic Beanstalk different from existing application containers or platform-as-a-service solutions?

Most existing application containers or platform-as-a-service solutions, while reducing the amount of programming required, significantly diminish developers’ flexibility and control. Developers are forced to live with all the decisions pre-determined by the vendor – with little to no opportunity to take back control over various parts of their application’s infrastructure. However, with AWS Elastic Beanstalk, developers retain full control over the AWS resources powering their application. If developers decide they want to manage some (or all) of the elements of their infrastructure, they can do so seamlessly by using AWS Elastic Beanstalk’s management capabilities.
While these snippets from the FAQ certainly seem to describe infrastructure components that merely enable PaaS (meta-PaaS?), when you combine them with the other elements of AWS’ offerings it sure as heck sounds like PaaS, regardless of what you call it.
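The raw components are also trivially scriptable, which only sharpens the point. A hedged sketch, assuming boto3 (the application name, S3 bucket and solution-stack string are illustrative), of how little stands between “infrastructure components” and something that walks and quacks like PaaS:

```python
# A sketch of a scripted Beanstalk deployment. Assumes boto3 and AWS
# credentials; the names, bucket and solution stack are illustrative.
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

# Register the application and a version pointing at an uploaded bundle.
eb.create_application(ApplicationName="my-app")
eb.create_application_version(
    ApplicationName="my-app",
    VersionLabel="v1",
    SourceBundle={"S3Bucket": "my-app-bundles", "S3Key": "v1.zip"},
)

# One call provisions capacity, load balancing, auto-scaling and health
# monitoring. The solution stack name changes over time; this one is
# purely an example.
eb.create_environment(
    ApplicationName="my-app",
    EnvironmentName="my-app-prod",
    VersionLabel="v1",
    SolutionStackName="64bit Amazon Linux 2 v3.5.0 running Python 3.8",
)
```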
In fact, a Twitter exchange with @GeorgeReese, @krishnan and @jamessaull well summarized the head-scratching.

With all those components, AWS can certainly enable PaaS platforms like Heroku to “flourish.”

However, having all the raw components and simply declining to point at them and say “PaaS” is like having all the components to assemble a bomb, not packaging it as such, and declaring it isn’t dangerous because in that state it won’t go off.

I’d say the potential for going BOOM! is real. It appears Marten Mickos was hinting at the same thing:

However, Mickos disputed Vogels’ claim that Amazon is going to let a thousand platforms bloom.

“He will always say that, and Amazon will slowly take a step higher and higher,” he said, before pointing to Beanstalk as an example. “[But] in my view PaaS has middleware components… and I could agree that it is okay to add [those] to an IaaS.”

In the long term, as I’ve stated before, the value in platforms will be in how easy they make it for developers to create and deliver applications fluidly.

I may not be as good at marketing as some, but that sounds less like an infrastructure-centric business model and much more like an application-centric one.

Moving on up is where it’s at.  I saw the scratching on the cave walls when I wrote “Silent Lucidity: IaaS — Already A Dinosaur. The Evolution of PaaSasarus Rex” back in 2009.

What do you think?  Is AWS being coy?


QuickQuip: Don’t run your own data center if you’re a public IaaS < Sorta...

January 10th, 2012

Patrick Baillie, the CEO of Swiss IaaS provider CloudSigma, wrote a very interesting blog post, published on GigaOm, titled “Don’t run your own data center if you’re a public IaaS.”

Baillie leads off by describing how AWS’ recent outage is evidence of why the complexity of running facilities (data centers) is best segregated from the services running atop them and outsourced to third parties in the business of such things:

Why public IaaS cloud providers should outsource their data centers

While there are some advantages for cloud providers operating data centers in-house, including greater control, capacity, power and security, the challenges, such as geographic expansion, connectivity, location, cost and lower-tier facilities can often outweigh the benefits. In response to many of these challenges, an increasing number of cloud providers are realizing the benefits of working with a third-party data center provider.

It’s a very interesting post, sprinkled throughout with the pros and cons of rolling your own versus outsourcing, but it falls down in carrying the logical burden of some of its assertions.

Perhaps I misunderstood, but the article seemed to focus on single-DC availability, as though (per my friend @CSOAndy’s excellent summarization) “…he missed the obvious reason: you can arbitrage across data centers” and “…was focused on single DC availability. Arbitrage means you just move your workloads automagically.”
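To make the arbitrage point concrete, here is a rough sketch, assuming boto3 and AWS as the substrate (the AMI id, regions and instance type are illustrative): copy the machine image into a healthy region and relaunch the workload there.

```python
# A hypothetical sketch of cross-region workload arbitrage with boto3.
# The ids, regions and instance type are illustrative placeholders.
import boto3

SOURCE_REGION, TARGET_REGION = "us-east-1", "eu-west-1"
SOURCE_AMI = "ami-0123456789abcdef0"

# Copy the machine image into the healthy region...
target_ec2 = boto3.client("ec2", region_name=TARGET_REGION)
copy = target_ec2.copy_image(
    Name="arbitraged-workload",
    SourceImageId=SOURCE_AMI,
    SourceRegion=SOURCE_REGION,
)

# ...wait for it to become available, then relaunch the workload there.
target_ec2.get_waiter("image_available").wait(ImageIds=[copy["ImageId"]])
target_ec2.run_instances(
    ImageId=copy["ImageId"], InstanceType="t2.micro", MinCount=1, MaxCount=1
)
```

Crude as it is, that capability is exactly what an argument built on single-DC availability misses.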

I’ll let you read the setup in its entirety, but check out the conclusion:

In reality, taking a look at public cloud providers, those with legacy businesses in hosting, including Rackspace and GoGrid, tend to run their own facilities, whereas pure-play cloud providers, like my company CloudSigma, tend to let others run the data centers and host the infrastructure.

The business of operating a data center versus operating a cloud is very different, and it’s crucial for such providers to focus on their core competency. If a provider attempts to do both, there will be sacrifices and financial choices with regards to connectivity, capacity, supply, etc. By focusing on the cloud and not the data center, public cloud IaaS providers don’t need to make tradeoffs between investing in the data center over the cloud, thereby ensuring the cloud is continually operating at peak performance with the best resources available.

The points above were punctuated as part of a discussion on Twitter where @georgereese commented, “IaaS is all about economies of scale. I don’t think you win at #cloud by borrowing someone else’s.”

Fascinating. It’s times like these that I invoke the wisdom of the Intertubes and ask “WWWD” (or What Would Werner Do?)

If we weren’t artificially limited in this discussion to IaaS only, it would have been interesting to compare this to SaaS providers like Google or Salesforce, or better yet folks like Zynga, or even to add supporting examples like Heroku (who run atop AWS but are now a part of Salesforce o_O).

I found many of the points raised in the discussion intriguing and good food for thought, but I think that if we’re talking about IaaS, and we leave out AWS (which directly contradicts the proposed model), the logic breaks almost instantly…unless we change the title to “Don’t run your own data center if you’re a [small] public IaaS and need to compete with AWS.”

Interested in your views…

/Hoff


App Stores: From Mobile Platforms To VMs – Ripe For Abuse

March 2nd, 2011

This CNN article titled “Google pulls 21 apps in Android malware scare” describes an alarming trend in which malicious code is embedded in applications which are made available for download and use on mobile platforms:

Google has just pulled 21 popular free apps from the Android Market. According to the company, the apps are malware aimed at getting root access to the user’s device, gathering a wide range of available data, and downloading more code to it without the user’s knowledge.

Although Google has swiftly removed the apps after being notified (by the ever-vigilant “Android Police” bloggers), the apps in question have already been downloaded by at least 50,000 Android users.

The apps are particularly insidious because they look just like knockoff versions of already popular apps. For example, there’s an app called simply “Chess.” The user would download what he’d assume to be a chess game, only to be presented with a very different sort of app.

Wow, 50,000 downloads.  Most of those folks are likely blissfully unaware they are owned.

In my Cloudifornication presentation, I highlighted that the same potential for abuse exists for “virtual appliances” which can be uploaded for public consumption to app stores and VM repositories such as those from VMware and Amazon Web Services.

The feasibility of this vector was deftly demonstrated shortly afterward by the guys at SensePost (“Clobbering the Cloud,” Black Hat), who ran the experiment of uploading a non-malicious “phone home” VM to AWS, which was promptly downloaded and launched…

This is going to be a big problem in the mobile space and potentially just as impactful in cloud/virtual data centers, as people routinely download and put into production virtual machines/virtual appliances whose provenance and integrity are questionable. Who’s going to police these stores?
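Until someone does, consumers can at least police themselves. A minimal sketch, assuming boto3 and AWS credentials, of one such measure: refuse to launch an image unless its publisher is an account you have explicitly vetted (the account id and instance type are hypothetical):

```python
# A sketch of self-policing AMI consumption: check the image owner
# against an allow-list before launching. Assumes boto3; the account id
# and instance type below are hypothetical.
import boto3

TRUSTED_OWNERS = {"123456789012"}  # account ids you have vetted yourself

ec2 = boto3.client("ec2", region_name="us-east-1")

def launch_if_trusted(image_id):
    """Launch an AMI only if it is published by a trusted owner."""
    image = ec2.describe_images(ImageIds=[image_id])["Images"][0]
    if image["OwnerId"] not in TRUSTED_OWNERS:
        raise RuntimeError(
            f"refusing {image_id}: untrusted owner {image['OwnerId']}"
        )
    return ec2.run_instances(
        ImageId=image_id, InstanceType="t2.micro", MinCount=1, MaxCount=1
    )
```

An owner check is no substitute for integrity verification of the image contents, but it at least keeps the anonymous syringe out of your arm.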

(update: I loved Christian Reilly’s comment on Twitter regarding this: “Using a public AMI is the equivalent of sharing a syringe”)

/Hoff
