The Cloud Is a Fickle Mistress: DDoS&M…

It’s interesting to see how people react when they are reminded that the “Cloud” still depends upon much of the same infrastructure and underlying protocols that we have been using for years.

BGP, DNS, VPNs, routers, switches, firewalls…

While it’s fun to talk about new attack vectors and sexy exploits, it’s the oldies but goodies that will come back to haunt us.


Building more and more of our business’s ability to remain a going concern on infrastructure that was never designed to support it is a scary proposition.  We’re certainly being afforded more opportunity to fix some of these problems as the technology improves, but it’s a patch for an endemic problem, I’m afraid.  We’ve got two ways to look at Cloud:

  • Skipping over the problems we have and “fixing” crappy infrastructure and applications by simply adding mobility and orchestration to move around an issue, or
  • Actually starting to use Cloud as a forcing function to fundamentally change the way we think about, architect, deploy and manage our computing capabilities in a more resilient, reliable and secure fashion

If I were a betting man…

Remember that just because it’s in the “Cloud” doesn’t mean someone’s sprinkled magic invincibility dust on your AppStack…

That web service still has IP addresses and open sockets. It still gets transported over MANY levels of shared infrastructure, from the telcos to the DNS infrastructure…you’re always at someone else’s mercy.

Dan Kaminsky has done a fabulous job reminding us of that.

A more poignant reminder of our dependency on the Same Old Stuff™ is the recent DDoS attack against Cloud provider GoGrid:


Our network is currently the target of a large, distributed DDoS attack that began on Monday afternoon.   We took action all day yesterday to mitigate the impact of the attack, and its targets, so that we could restore service to GoGrid customers.  Things were stabilized by 4 PM PDT and most customer servers were back online, although some of you continued to experience intermittent loss in network connectivity.

This is an unfortunate thing.  It’s also a good illustration of the sorts of things you ought to ask your Cloud service providers about.  With whom do they peer? What is their bandwidth? How many datacenters do they have, and where? What DoS/DDoS countermeasures do they have in place? Have they actually dealt with this before?  Do they drill disaster scenarios like this?

We’re told we shouldn’t have to worry about the underlying infrastructure with Cloud, that it’s abstracted and someone else’s problem to manage…until it’s not.

This is where engineering, architecture and security meet the road.  Your provider’s ability to sustain operations during an attack like this is critical.  Further, how you’ve designed your BCP/DR contingency plans is pretty important, too.  Until we get true portability/interoperability between Cloud providers, it’s still up to you to figure out how to make this all work.  Remember that when you’re assuming those TCO calculations accurately reflect reality.

Big providers like eBay, Amazon, and Microsoft invest huge sums of money and manpower to ensure they are as survivable as they can be during attacks like this.  Do you?  Does your Cloud provider? How many providers do you have?

Again, even Amazon goes down.  At this point, it’s largely been operational issues on their end and not the result of a massive attack. Imagine, however, if someday it is.  What would that mean to you?

As more and more of our applications and information move from inside our networks to beyond the firewall, exposed to a larger audience (or even co-mingled with others’ data), the need for innovation and advancement in security is only going to skyrocket as we start to deal with many of these problems.


Categories: Cloud Computing, Cloud Security
  1. MadKat97
    April 2nd, 2009 at 09:05 | #1

    I think you probably meant "withstand an attack," rather than "sustain an attack" …

  2. Andre Gironda
    April 2nd, 2009 at 09:16 | #2

    You don't have to spend a lot of money on Anti-DDoS gear to ensure survivability. Hire 2 BGP engineers and 2 DNS admins. Get 2 prefixes (preferably provider-independent, but good luck with that these days), 2 routers, and 2 IP transit relationships. Make sure that the IP transit providers support RFC 1997 prefix blackholing. Tag your own IP space (by prefix or IP) when it's targeted by DoS/DDoS with a community and send it to your transit providers. Move DNS names to the other prefix if you have to. DDoSer moves his/her traffic to the new IP/prefix? Great — play cat and mouse. They'll get bored before your paid engineers do.

    Ok maybe that is expensive, but you'd think that somebody would provide a service like this by now. I blame "too many analysts and not enough engineers".
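The cat-and-mouse workflow this comment describes can be sketched in Python. This is purely illustrative: real community-triggered blackholing lives in router configuration, not application code, and the ASNs and prefixes below are documentation examples. The `65535:666` community is the conventional blackhole value (later standardized in RFC 7999); `TransitSession` and `blackhole` are names made up for the sketch.

```python
# Conventional BGP blackhole community (RFC 7999); upstream transits that
# honor it will drop traffic to any prefix announced with this tag.
BLACKHOLE = "65535:666"

class TransitSession:
    """Models one IP transit relationship that honors RFC 1997 communities."""
    def __init__(self, provider):
        self.provider = provider
        self.announcements = {}   # prefix -> set of communities attached

    def announce(self, prefix, communities=()):
        self.announcements[prefix] = set(communities)

    def is_blackholed(self, prefix):
        return BLACKHOLE in self.announcements.get(prefix, set())

def blackhole(prefix, sessions):
    """Tag the attacked prefix with the blackhole community on every transit."""
    for s in sessions:
        s.announce(prefix, [BLACKHOLE])

# Two transit relationships and two prefixes, as the comment recommends.
transits = [TransitSession("AS64500"), TransitSession("AS64501")]
for t in transits:
    t.announce("192.0.2.0/24")       # prefix currently serving traffic
    t.announce("198.51.100.0/24")    # spare prefix to move DNS names to

# DDoS hits the live prefix: drop it at the transit edge, fail over via DNS.
blackhole("192.0.2.0/24", transits)
```

The point of the sketch is the division of labor: the defender only has to re-announce a prefix with one community and update DNS, while the upstream networks absorb the attack traffic.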

  3. April 2nd, 2009 at 09:36 | #3


    No, I actually meant their ability to sustain (operations)…but I'll change it to clarify.



  4. da'dos
    April 4th, 2009 at 06:56 | #4

    @Andre Gironda

    Right – until they start to DoS you by IP address rather than FQDN. Nice try, but the bad guys understand DNS too.

  5. April 5th, 2009 at 15:09 | #5

    Good post. I agree that just because a server is virtual and in the cloud doesn't mean it isn't susceptible to many of the threats faced by a plain vanilla internet-facing web server. By the same token it looks like some of the same techniques we use to protect physical servers should work for virtual servers.

    What would be really helpful is a table showing threats in the first column and our typical response in the physical world in the second column. The third column would be our response in the virtual world of the Cloud. My assumption is that there would be a lot of overlap and that the table would highlight those areas that really are different and should be the focus of attention and research.

    I guess what I am suggesting is something similar to the way that you exploded the SPI stack. What do you think?

  6. April 8th, 2009 at 09:38 | #6

    da’dos :

    @Andre Gironda

    Right – until they start to DoS you by IP address rather than FQDN. Nice try, but the bad guys understand DNS too.

    The BGP blackhole is blocking by IP. I don't think you know what you're talking about. The bad guys don't know what they can't see. Unannounced prefixes and BGP triggered blackhole routes put tools in the hands of defenders. No DNS necessary.

    If you really had an argument, you'd say the OPPOSITE of what you said: adversaries DDoSing a target by chasing its roaming FQDN instead of a fixed IP. But that would require rewriting a specialized DNS client library that kept up with the BGP routing-table churn.
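The "roaming FQDN" attacker this comment imagines can be sketched as a client that re-resolves the name before every connection instead of caching one IP. Everything here is hypothetical: `chase`, the injected `resolve` callback, and `victim.example.net` are invented for the sketch, and documentation IPs stand in for real prefixes.

```python
def chase(fqdn, resolve, attempts=3):
    """Return the sequence of IPs a re-resolving client would target,
    looking the name up fresh before each attempt."""
    return [resolve(fqdn) for _ in range(attempts)]

# Fake resolver simulating a defender moving the name to a new prefix
# mid-attack; a real attacker would issue low-TTL DNS queries instead.
moves = iter(["192.0.2.10", "192.0.2.10", "198.51.100.10"])
targets = chase("victim.example.net", lambda name: next(moves))
```

The sketch shows why this is harder than IP-based flooding: every bot has to pay a DNS round-trip per move and trust a resolver the defender may also control.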
