Why Amazon Web Services (AWS) Is the Best Thing To Happen To Security & Why I Desperately Want It To Succeed

November 29th, 2012

Many people who may only casually read my blog or peer at the timeline of my tweets may come away with the opinion that I suffer from confirmation bias when I speak about security and Cloud.

That is, many conclude that I am pro Private Cloud and against Public Cloud.

I find this deliciously ironic and wildly inaccurate. However, I must also take responsibility for this, as anytime one threads the needle and attempts to present a view from both sides with regard to incendiary topics without planting a polarizing stake in the ground, it gets confusing.

Let me clear some things up.

Digging deeper into what I believe, one would actually find that my blog, tweets, presentations, talks and keynotes highlight deficiencies in current security practices and solutions on the part of providers, practitioners and users in both Public AND Private Cloud. In my own estimation, they deliver an operationally-centric perspective that is reasonably critical, yet sensitive both to emergent paths and to the well-trodden path behind us.

I’m not a developer.  I dabble in little bits of code (interpreted and compiled) for humor and to try to remain relevant.  Nor am I an application security expert, for the same reason.  However, I spend a lot of time around developers of all sorts: those who write code for machines whose end goal isn’t to deliver applications directly, but rather to help deliver them securely.  Which may seem odd as you read on…

The name of this blog, Rational Survivability, highlights my belief that the last two decades of security architecture and practices — while useful as a foundation — require a rather aggressive tune-up of priorities.

Our trust models, architecture, and operational silos have not kept pace with the velocity of the environments they were initially designed to support and unfortunately as defenders, we’ve been outpaced by both developers and attackers.

Since we’ve come to the conclusion that there’s no such thing as perfect security, “survivability” is a better goal.  Survivability leverages “security” and is ultimately a subset of resilience but is defined as the “…capability of a system to fulfill its mission, in a timely manner, in the presence of attacks, failures, or accidents.”  You might be interested in this little ditty from back in 2007 on the topic.

Sharp readers will immediately recognize the parallels between this definition of “survivability,” how security applies within context, and how phrases like “design for failure” align.  In fact, this is one of the calling cards of a company that has become synonymous with (IaaS) Public Cloud: Amazon Web Services (AWS.)  I’ll use them as an example going forward.
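The “design for failure” idea can be made concrete with a minimal sketch. This is an illustrative toy, not AWS’s implementation or any particular library: a circuit breaker that lets a system keep fulfilling its mission, degraded, when a dependency is under attack or failing.

```python
# A minimal "design for failure" sketch: a circuit breaker that keeps a
# system fulfilling its mission when a dependency starts failing.
# All names here are illustrative, not from any particular library.

class CircuitBreaker:
    """Stops calling a failing dependency after `max_failures` errors."""

    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0

    def call(self, func, fallback):
        # Once the breaker is "open", skip the dependency entirely
        # and serve the degraded-but-available fallback instead.
        if self.failures >= self.max_failures:
            return fallback()
        try:
            result = func()
            self.failures = 0  # success resets the failure count
            return result
        except Exception:
            self.failures += 1
            return fallback()

def flaky_service():
    raise RuntimeError("dependency down")

breaker = CircuitBreaker(max_failures=2)
for _ in range(4):
    answer = breaker.call(flaky_service, fallback=lambda: "cached answer")
print(answer)  # the mission continues, degraded: "cached answer"
```

The point is that survivability lives in the application’s design, not in a box in front of it.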

So here’s a line in the sand that I think will be polarizing enough:

I really hope that AWS continues to gain traction with the Enterprise.  I hope that AWS continues to disrupt the network and security ecosystem.  I hope that AWS continues to pressure the status quo and I hope that they do it quickly.

Why?

Almost a decade ago, the Open Group’s Jericho Forum published their Commandments.  Designed to promote a change in thinking and operational constructs with respect to security, what they presciently released upon the world describes a point at which one might imagine taking one’s most important assets and connecting them directly to the Internet, and the shifts required to understand what that would mean for “security”:

  1. The scope and level of protection should be specific and appropriate to the asset at risk.
  2. Security mechanisms must be pervasive, simple, scalable, and easy to manage.
  3. Assume context at your peril.
  4. Devices and applications must communicate using open, secure protocols.
  5. All devices must be capable of maintaining their security policy on an un-trusted network.
  6. All people, processes, and technology must have declared and transparent levels of trust for any transaction to take place.
  7. Mutual trust assurance levels must be determinable.
  8. Authentication, authorization, and accountability must interoperate/exchange outside of your locus/area of control.
  9. Access to data should be controlled by security attributes of the data itself.
  10. Data privacy (and security of any asset of sufficiently high value) requires a segregation of duties/privileges.
  11. By default, data must be appropriately secured when stored, in transit, and in use.
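Commandment 9, for instance, points toward attribute-based access control. Here is a minimal sketch, where the attribute names and the policy are illustrative assumptions rather than any standard’s schema:

```python
# A toy sketch of commandment 9: access decisions driven by security
# attributes carried by the data itself, not by network location.
# The attribute names and policy below are illustrative assumptions.

def may_access(subject, resource):
    """Grant access only if the subject's clearance covers the data's
    classification and the subject belongs to the owning group."""
    levels = ["public", "internal", "confidential"]
    clearance_ok = (levels.index(subject["clearance"])
                    >= levels.index(resource["classification"]))
    group_ok = resource["owner_group"] in subject["groups"]
    return clearance_ok and group_ok

record = {"classification": "confidential", "owner_group": "finance",
          "payload": "Q3 numbers"}
alice = {"clearance": "confidential", "groups": ["finance"]}
bob = {"clearance": "internal", "groups": ["finance"]}

print(may_access(alice, record))  # True: clearance and group both match
print(may_access(bob, record))    # False: insufficient clearance
```

Notice that nothing in the decision depends on which subnet the request came from.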

These seem harmless enough today, but were quite unsettling when paired with the notion of “de-perimeterization,” which was often misconstrued to mean the immediate disposal of firewalls.  Many security professionals appreciated the commandments for what they expressed, but the design patterns, availability of solutions, and belief systems of traditionalists constrained traction.

Interestingly enough, now that the technology, platforms, and utility services have evolved to enable these sorts of capabilities, and in fact have stressed our approaches to date, these exact tenets are what Public Cloud forces us to come to terms with.

If one were to look at what public cloud services like AWS mean for traditional “enterprise” security architecture, operations and solutions, and map that against the Jericho Forum’s Commandments, one finds that it enables a near-perfect rethink.

Instead of being focused on implementing “security” to protect applications and information at the network layer — which is more often than not blind to both, contextually and semantically — public cloud computing forces us to shift our security models back to protecting the things that matter most: the information, and the conduits that traffic in it (applications).

As networks become more abstracted, so do existing security models.  This means we must think about security programmatically, embedded as a functional delivery requirement of the application.
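One hedged sketch of what “security as a functional delivery requirement” could look like: policy expressed as plain data, with a check that ships alongside the application and fails the build when the policy is violated. The rule format here is an illustrative assumption, not any vendor’s schema:

```python
# A sketch of security as a programmatic delivery requirement: the
# application's network policy is plain data, and a check that ships
# with the code fails the build if a rule violates it. The rule format
# is an illustrative assumption, not any particular vendor's schema.

firewall_rules = [
    {"port": 443, "source": "0.0.0.0/0"},   # public HTTPS: allowed
    {"port": 22,  "source": "10.0.0.0/8"},  # SSH only from internal range
]

def violations(rules):
    """Return rules that expose management ports to the whole Internet."""
    management_ports = {22, 3389}
    return [r for r in rules
            if r["port"] in management_ports and r["source"] == "0.0.0.0/0"]

assert violations(firewall_rules) == []  # policy holds for this config

# A bad change is caught programmatically, before deployment:
bad = firewall_rules + [{"port": 22, "source": "0.0.0.0/0"}]
print(len(violations(bad)))  # 1
```

The control is versioned, reviewed, and tested like any other feature, instead of living in a device nobody’s code can see.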

“Security” in complex, distributed and networked systems is NOT a tidy simple atomic service.  It is, unfortunately, represented as such because we choose to use a single noun to represent an aggregate of many sub-services, shotgunned across many layers, each with its own context, metadata, protocols and consumption models.

As the use cases for public cloud obscure and abstract these layers — flattening them — we’re left with the core of what we should focus on:

Build secure, reliable, resilient, and survivable systems of applications, comprised of secure services, atop platforms that are themselves engineered to do the same, in a way in which the information that transits them inherits these qualities.

So if Public Cloud forces one to think this way, how does one relate this to practices of today?

Frankly, enterprise (network) security design patterns are a crutch.  The screened-subnet DMZ pattern, with its perimeters, is outmoded. As Gunnar Peterson eloquently described, our best attempts at “security” over time are always some variation of firewalls and SSL.  This is the sux0r.  Importantly, this is not said to blame anyone or to suggest that a bad job is being done, but rather that a better one can be.

It’s not like we don’t know *what* the problems are; we just don’t invest in solving them as long-term projects.  Instead, we deploy compensating controls that defer what is now becoming inevitable: the compromise of applications that are poorly engineered and defended by systems that have no knowledge or context of the things they are defending.

We all know this, and yet, looking at most private cloud platforms and implementations, we gravitate toward logically replicating these traditional design patterns after we’ve gone to so much trouble to articulate our way around them.  Public clouds make us approach what, where and how we apply “security” differently, because we don’t have these crutches.

Either we learn to walk without them, or we simply don’t move forward.

Now, let me be clear.  I’m not suggesting that we don’t need security controls, but I do mean that we need a different and better application of them, at a different level, protecting things that aren’t tied to physical topology or addressing schemes…or operating systems (inclusive of things like hypervisors).

I think we’re getting closer.  Beyond infrastructure as a service, platform as a service gets us even closer.

Interestingly, at the same time we see the evolution of computing with Public Cloud, networking is also undergoing a renaissance, and as this occurs, security is coming along for the ride.  Because it has to.

As I was writing this blog (ironically, in the parking lot of VMware, awaiting the start of a meeting to discuss abstraction, networking and security), James Staten (Forrester) tweeted something from Werner Vogels’ (@Werner) keynote at AWS re:Invent:

I couldn’t have said it better myself :)

So while I may have been, and will continue to be, a thorn in the side of platform providers, pushing them to improve their “survivability” capabilities to help us get from here to there, I reiterate the title of this scribbling: Amazon Web Services (AWS) Is the Best Thing To Happen To Security & I Desperately Want It To Succeed.

I trust that’s clear?

/Hoff

P.S. There’s so much more I could/should write, but I’m late for the meeting :)

  1. November 29th, 2012 at 14:26 | #1

    Yes, and…

    Agree the issue very much is about how to keep security thinking up to pace with other areas of innovation and how to iterate successful controls. One of the precepts of today’s #IndustrialInternet (http://www.gereports.com/meeting-of-minds-and-machines/) discussion was that progress can be expected if we apply our old solutions to new industries. Can we preserve the nugget of core value found in perimeter thinking when we move forward (e.g. microvisors), and at the same time discard prior applications of it suited only to legacy environments?

  2. M3k0
    November 29th, 2012 at 14:55 | #2

    AWS is a service offering. It’s not a silver bullet. Heck it’s not even for everyone. Maybe it’s just the way you said ‘best thing to happen to security’ but I’m not of this belief at all.

    I agree with the majority of your opinions on how things are and how things have been from a security perspective. That being said, AWS itself has some issues. For example, the MIT/UC San Diego paper from ’09 with PoC attacks against AWS (http://www.cs.tau.ac.il/~tromer/papers/cloudsec.pdf) is still very compelling. Then there’s the whole Patriot Act thing, where the ‘privacy’ of your data can be directly correlated to the security of your data when talking about the U.S. Gov’t.

    Thinking out loud … I don’t know. This is such a new space that I think we need to go beyond looking at just technical security and assess what it is we’re putting in the cloud. Umbrella comments such as ‘best thing’ seem hasty when you consider how much information is compromised almost daily while it’s hosted on/in cloud platforms. If anything, the cloud only seems to shift the focus of security. The challenges, risks, and vectors are all still there.

    • beaker
      November 29th, 2012 at 19:53 | #3

      I’m sorry if it wasn’t obvious, but my use of AWS was an analog.

      Specifically, it was a service representative of Public Cloud, as I stated in the piece.

      You’ve missed the point here. Nowhere did I assess (in this piece) the security of the AWS platform itself,
      but rather the changes it was causing in the security models of people hosting their applications on said
      platform.

      Check out some of my presentations…you’ll see I’ve addressed the point you think I’m making elsewhere,
      but it wasn’t at ALL the point of this blog.

      Thanks for the comment,

      /Hoff

  3. November 29th, 2012 at 15:14 | #4

    Shifting security thinking to rational survivability feels akin to shifting innovation from cat and mouse driven improvements to ‘Ghandain innovations’.

    • beaker
      November 29th, 2012 at 19:44 | #5

      I have no idea what that means, but at least you used “Ghandain” instead of invoking Godwin’s law, so I’ve got that going for me, I guess…

  4. Donny Parrott
    November 29th, 2012 at 18:22 | #6

    Great article. I have been looking for the next thought provoking pot stirring.

    On titles. IT personnel in general are greatly disregarded and not shown the respect of the other referenced professions. Maybe this comes from the lack of standard backed licensing (engineer). However, I counter that many solve problems, design solutions, and invest personal resources beyond the referenced professions. Let them have them.

    Imagine an MD having to develop new antibiotics every 45 days while being deluged with a million infection attempts and hundreds of thousands infections daily.

    But, back on track… I believe many agree with the premise and key points above, yet there are a number of hindering issues: tradition, cost, kingdoms, vendor platitudes, and effort. Along with articulating the methodology and process for the definition, the real struggle is convincing the current security lords to release (or reconfigure) their fiefdoms.

    I am still working through the idea that the network should be untrusted going forward. All data is encrypted and signed prior to transfer to enable transport over any network. Only the source and destination have visibility into the data. This model allows for mobility, as I prepare for untrusted providers to exist between source and destination. This is highly contrary to the current trend of encryption at the border and man-in-the-middle decryption and inspection. This can be accomplished within an environment (with a fair level of overhead), but how do you extend this to a globally distributed consumer?

    So many rabbits to chase…
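The sign-before-transfer half of the model described above can be sketched with nothing but the standard library. This toy covers integrity/authenticity only; a real deployment would use authenticated encryption (e.g. AES-GCM) for confidentiality as well, and the key name below is illustrative:

```python
# A toy sketch of the "signed prior to transfer" half of this model:
# a shared key and HMAC so an untrusted network cannot tamper with
# data unnoticed. Integrity/authenticity only; real deployments would
# add authenticated encryption (e.g. AES-GCM) for confidentiality.
import hmac
import hashlib

SHARED_KEY = b"key-provisioned-out-of-band"  # illustrative

def seal(message: bytes) -> bytes:
    tag = hmac.new(SHARED_KEY, message, hashlib.sha256).digest()
    return tag + message  # 32-byte tag travels with the payload

def open_sealed(blob: bytes) -> bytes:
    tag, message = blob[:32], blob[32:]
    expected = hmac.new(SHARED_KEY, message, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):  # constant-time compare
        raise ValueError("message tampered with in transit")
    return message

blob = seal(b"ledger update")
assert open_sealed(blob) == b"ledger update"

tampered = blob[:32] + b"ledger updatX"
try:
    open_sealed(tampered)
except ValueError as e:
    print(e)  # tampering on the untrusted network is detected
```

Only endpoints holding the key can produce or verify the tag, which is exactly the source-and-destination-only property described above.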

  5. November 30th, 2012 at 04:58 | #7

    Good post Hoff. I have been saying in my blog posts and presentations recently that the public cloud is the best thing that ever happened to application security. I see it in the RFPs that I fill out for our cloud based platform. Once we fill out an RFP and the customer sees a 100% public cloud solution, I then get a huge security questionnaire to fill out that ironically our on-premise built competitors don’t need to answer to. In previous jobs I could never justify the investments in security because the perimeter security and basic app security was good enough to generate revenue. In this new world, we can’t get a customer to sign a contract unless we can pass every major audit there is. So the public cloud has brought application security to the forefront where it has belonged all these years. Thanks for your post.

  6. Will Hogan
    November 30th, 2012 at 04:59 | #8

    You’re funny. I’ve never heard of that law but I’m going to use it now. @beaker

    http://en.wikipedia.org/wiki/Godwin's_law

  7. M3k0
    November 30th, 2012 at 10:48 | #9

    @beaker

    Actually, I knew AWS was being used as an analog. I was building upon what you said in your piece. The document I referenced applies to public cloud in general, not just AWS, even though it mentions AWS directly.

    I also understand the point you’re making. I’m just not entirely sold yet that the PC will be the resounding change you’re hopeful for.

  8. Kevin Neely
    November 30th, 2012 at 11:19 | #10

    @Donny Parrott
    For the last point, I don’t think it’s possible to go blind & encrypt everything until the public cloud vendors start offering some amount of security event data to their customers. Without this, how can the customer alert on the fact that Hoff is accessing customer data from an IP in Idaho while simultaneously downloading email from an IP in Amsterdam, while Twitter says he is sitting outside VMware’s parking lot in Santa Clara?

    When you ask most vendors about log data, they blink a couple times and then say “you can make a request and we will email you a report”. Not very useful. Obviously, I come from the security side, but in my mind, ‘IT’ becomes a coordinator, integrator, and provider of certain foundational and/or infrastructure technologies to enable the rest.
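The kind of alert described above (the same user in Idaho and Amsterdam at once) is often called an impossible-travel check; a minimal sketch, with illustrative coordinates and speed threshold:

```python
# A sketch of the "impossible travel" check described above: two events
# for the same user are suspicious if covering the distance between
# them would require implausible speed. Coordinates and the threshold
# are illustrative.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def impossible_travel(ev1, ev2, max_speed_kmh=1000.0):
    """Flag two (lat, lon, hour) events that imply implausible speed."""
    dist = haversine_km(ev1[0], ev1[1], ev2[0], ev2[1])
    hours = abs(ev2[2] - ev1[2]) or 0.1  # avoid division by zero
    return dist / hours > max_speed_kmh

boise = (43.6, -116.2, 0.0)    # data access from Idaho
amsterdam = (52.4, 4.9, 0.5)   # mail fetch 30 minutes later

print(impossible_travel(boise, amsterdam))  # True
```

The catch, per the comment above, is that the customer can only run this if the provider exposes the login events in the first place.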

  9. November 30th, 2012 at 11:42 | #11

    @beaker
    A mentor heads up security for entities using cloud, such as the London Underground and the CIA. Management has noticed that innovation is too incremental (in security, a game of cat and mouse). By setting up labs overseas, where people innovating with fewer resources rethink from the ground up, they are expecting larger leaps in innovation (and security). http://hbr.org/2010/07/innovations-holy-grail/ar/1

  10. December 3rd, 2012 at 08:55 | #12

    @odedh
    At the risk of oversimplifying your point, you are suggesting the model of zero trust. There are two foundations to this model I want to highlight. First is the assumption that the environment you operate in might be malicious. Second is the assumption that there is nothing else doing the work of protection for you; you have to defend yourself.

    I personally believe that the zero-trust way of operation is the only sustainable mode of operation going forward. We specifically work at the bottom of the stack, taking care of the physical attack vector: making the cloud providers themselves blind to what’s running on their hosts, not the other way around. Our work is a foundation for additional work that needs to be done by the guest VMs and applications, as you stated clearly in your post.

  11. beaker
    December 3rd, 2012 at 19:29 | #13

    @Oded Horovitz

    Hey Oded…I’d really like to follow up and see how you guys are coming along. I think when I did the first post regarding your solution, you were out of the country and we didn’t get a chance to connect.

    /Hoff

  12. December 3rd, 2012 at 22:09 | #14

    You are always welcome in our office; we can run you through a demo of a live system.

  13. March 20th, 2013 at 23:58 | #15

    Amazon Web Services is a great way for tech startups to get access to the computing power they need. I worked in a photo software developer shop and we depended a lot on it. It was so great to see how we could determine how much to spend depending on our levels of activity. Pretty much, we knew how much to turn up the “volume” after making a major email blast or releasing a new version of our software. We had a great run, and I miss those startup days a lot.
