
Transparency: I Do Not Think That Means What You Think That Means…

Ha ha! You fool! You fell victim to one of the classic blunders – the most famous of which is “never get involved in a cloud war in Asia” – but only slightly less well-known is this: “Never go against Werner when availability is on the line!”

As an outsider, it’s easy to play armchair quarterback, point fingers and criticize something as mind-bogglingly marvelous and as massive in size and scope as Amazon Web Services.  After all, they make all that complexity disappear under the guise of a simple web interface to deliver value, innovation and computing wonderment the likes of which are really unmatched.

There’s an awful lot riding on Amazon’s success.  They set the pace by which an evolving industry is now measured in terms of features, functionality, service levels, security, cost and the way in which they interact with customers and the community of ecosystem partners.

An interesting set of observations and explanations has come out of recent events related to degraded performance and availability, and how these events have been handled.

When something bad happens, there are really two ways to play things:

  1. Be as open as possible, as quickly as possible and with as much detail as possible, or
  2. Release information only as needed, when pressured, and keep root causes and their resolutions as guarded as possible

This, of course, is an over-simplification of the options, complicated by the need for privacy, protection of intellectual property, legal issues, compliance or security requirements.  That’s not really any different than any other sort of service provider or IT department, but then again, Amazon’s Web Services aren’t like any other sort of service provider or IT department.

So when something bad happens, it’s been my experience as a customer (and one that admittedly does not pay for their “extra service”) that notifications sometimes take longer than I’d like, status updates are not as detailed as I might like, and root causes are sometimes cloaked in the air of the mysterious “network connectivity problem” — a replacement for the old corporate stand-by of “blame the firewall.”  There’s an entire industry cropping up to help you with these sorts of things.

Something like the BitBucket DDoS issue however, is not a simple “network connectivity problem.”  It is, however, a problem which highlights an oft-played pantomime of problem resolution involving any “managed” service being provided by a third party to which you as the customer have limited access at various critical points in the stack.

This outage represents a disconnect between experience and expectation in how customers perceive the operational underpinnings of AWS’ architecture, and it forces customers to reconsider how all that abstracted infrastructure actually functions in order to deliver what — regardless of what the ToS say — they want to believe it delivers.  This is that perception versus reality gap I mentioned earlier.  It’s not the redonkulous “end-of-cloud” scenarios parroted by the masses of the great un(cloud)washed, but it’s serious nonetheless.

As an example, BitBucket’s woes of more than 20 hours of downtime due to UDP (and later TCP) DDoS floods led to the well-documented realization that support was inadequate, monitoring insufficient and security defenses lacking — from the perspective of both the customer and AWS*.  The reality is that based on what we *thought* we knew about how AWS functioned to protect against these sorts of things, these attacks should never have wrought the damage they did.  It seems AWS was equally surprised.

It’s important to note that these were revelations made in near real-time by the customer, not AWS.

Now, this wasn’t a widespread problem, so it’s understandable to a point why we didn’t hear much from AWS about the issue.  But after it all played out, looking at what AWS has disclosed publicly, the issue appeared still not to be remedied; despite the promise to do better, a follow-on study suggests the problem may not yet be well understood or solved by AWS (See: Amazon EC2 vulnerable to UDP flood attacks.)  (Ed: After I wrote this, I got a notification that this particular issue has been fixed, which is indeed good news.)

Now, releasing details about a vulnerability like this could put many, many customers at risk of similar attacks, but the lack of transparency around the service and its architecture means that we’re left with more questions than answers.  How can a customer (like me) defend themselves against an attack like this today, left in the lurch without knowing what causes it or how to mitigate it?  What happens when the next one surfaces?
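In the meantime, about the only self-service lever a customer can pull is the Security Group itself.  Here’s a minimal sketch, using the classic boto EC2 bindings and a hypothetical group named “web-tier” (the group name and port list are mine, purely for illustration), that flattens a group’s ingress rules back to default-deny and re-opens only the ports the workload actually needs.  As the BitBucket case demonstrated, this does nothing about a flood that hammers you upstream of the Security Group, where only the provider has visibility.

    import boto

    # Assumed application ports; adjust to whatever the workload really needs.
    ALLOWED = [('tcp', 22), ('tcp', 80), ('tcp', 443)]

    conn = boto.connect_ec2()  # credentials come from the environment / boto config
    sg = conn.get_all_security_groups(groupnames=['web-tier'])[0]  # hypothetical group

    # Revoke every existing CIDR-based ingress rule to get back to default-deny...
    for rule in list(sg.rules):
        for grant in list(rule.grants):
            if grant.cidr_ip:  # skip group-to-group grants for simplicity
                sg.revoke(ip_protocol=rule.ip_protocol,
                          from_port=rule.from_port,
                          to_port=rule.to_port,
                          cidr_ip=grant.cidr_ip)

    # ...then re-authorize only the whitelisted ports.
    for proto, port in ALLOWED:
        sg.authorize(ip_protocol=proto, from_port=port, to_port=port,
                     cidr_ip='0.0.0.0/0')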

Can AWS even reliably detect this sort of thing, given the “socialist security” implementation of good-enough security spread across its constituent customers?

Security by obscurity in cloud cannot last as the gold standard.

This is the interesting part about the black-box abstraction that is Cloud, not just for Amazon but for any massively-scaled service provider: the more abstracted the service, the more dependent upon the provider or third parties we become to troubleshoot issues and protect our assets.  In many cases, however, it will simply take much more time to resolve issues, since visibility and transparency are limited to what the provider chooses, or is able, to provide.

We’re still in the early days of knowing what to ask about how security is managed in these massively-scaled, multi-tenant environments, and since in some cases we are contractually prevented from running the tests designed to understand the limits, we’re back to trusting that the provider has it handled…until we determine they don’t.

Put that in your risk management pipe and smoke it.

The networks and systems that make up our cloud providers’ offerings must do a better job of stopping bad things before they reach our instances and workloads, or customers should simply expect that they get what they pay for.  If provider capabilities do not improve, and that shortfall is combined with less visibility and an inability to deploy compensating controls, we’re potentially in a much worse spot than if we had no protection at all.

This is another opportunity to quietly remind folks about the Audit, Assertion, Assessment and Assurance (A6) API that is being brought to life; there will hopefully be some exciting news about this project shortly, but I see A6 as playing a very important role in providing a solution to some of the issues I mention here.  Ready when you are, Amazon.
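To make that concrete, here’s a purely speculative sketch of what asking a provider an A6-style question might look like.  Neither the endpoint nor the assertion namespace below exists; I’ve invented them (and the response shape) to illustrate the point, which is the shape of the exchange: a standard question asked over a standard interface, answered with machine-readable assertions instead of a support ticket.

    import json
    import urllib.request

    # Both the host and the namespace are made up; they stand in for what an
    # A6-style interface might eventually expose.
    PROVIDER = "https://provider.example.com"
    QUERY = "/.well-known/a6/network/ddos-mitigation"

    with urllib.request.urlopen(PROVIDER + QUERY) as resp:
        assertions = json.load(resp)

    for item in assertions.get("assertions", []):
        # e.g. {"control": "udp-flood-detection", "status": "implemented",
        #       "last_assessed": "2009-10-01"}
        print(item["control"], item["status"], item.get("last_assessed"))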

If only it were so simple and transparent:

Inigo Montoya: You are using Bonetti’s Defense against me, ah?
Man in Black: I thought it fitting considering the rocky terrain.
Inigo Montoya: Naturally, you must suspect me to attack with Capa Ferro?
Man in Black: Naturally… but I find that Thibault cancels out Capa Ferro. Don’t you?
Inigo Montoya: Unless the enemy has studied his Agrippa… which I have.

/Hoff

*It’s only fair to mention that depending upon a single provider for service, no matter how good they may be, and not taking advantage of monitoring services (at an extra cost) is a risk decision that comes with consequences, one of them being a longer time to resolution.

  1. October 13th, 2009 at 02:19 | #1

    Another thing that Lori's post got me thinking about in the context of all this was who pays for all this traffic? Presumably the sources will change (though never fully disappear) but if Amazon is blocking such things at ingress will the victim still pay per GB? Will there be the option of a "clean feed" for research, DIY security, etc.? Currently DDoS costs Amazon in terms of reputation but if/when it's a part of every day life will it not turn into DDo$ against the victim?

    Sam

  2. October 13th, 2009 at 02:21 | #2

    @Sam Johnston

    Incidentally this could be a fairly significant differentiator between providers that do charge for ingress data (Amazon) vs those that don't (GoGrid) – it's then in the provider's best interests to stamp out DDoS as quickly as possible rather than turn a blind eye, bill for traffic *and* force them to spin up more instances.

  3. October 13th, 2009 at 02:35 | #3

    @Sam Johnston

    Excellent points, both, Sam. Fades into my EDoS concept, too…

    In my new Cloudifornication preso, I pair a bunch of these sorts of events together to illustrate just how technically and economically frail this arrangement currently is.

  4. October 13th, 2009 at 03:28 | #4

    There's another good example of transparency in the oauth bug that was found last April.

    http://blog.oauth.net/2009/04/22/acknowledgement-

    http://oauth.net/advisories/2009-1

  5. December 31st, 2009 at 09:43 | #5

    @Sam Johnston

    In EC2, if the traffic is stopped by the Security Group (the EC2 network's firewall) then the customer does not pay for it. Once the incoming traffic has made it through the Security Group, the cost is borne by the customer.

    One of the problems in the BitBucket scenario was that BitBucket was unable to even see the DDoS flood that was affecting them because the traffic – as far as they, the customer, could see – did not exist. Only Amazon could see that traffic: the Security Groups had filtered out the traffic from the customer's view.

