Search Results

Keyword: ‘EDoS’

More On Clouds & Botnets: MeatClouds, CloudFlux, LeapFrog, EDoS and More!

March 13th, 2009 6 comments

After my "Frogs" talk at Source Boston yesterday, Adam O'Donnell and I chatted about one of the chuckle slides I threw up in the presentation, in which I gave some new names to some (perhaps not new) attack/threat scenarios involving Cloud Computing:


  • MeatCloud - Essentially abusing Amazon's Mechanical Turk and using it to produce the Cloud version of a sweat shop; exploiting the ignorant for fun and profit to perform menial illegal muling tasks on your behalf…think SETI meets underage garment workers…
  • CloudFlux – Take a mess of stolen credit cards, open up a slew of Amazon AWS accounts using them, build/scale to thousands of instances overnight, launch carpet bomb attack (you choose,) tear it down/have it torn down, and move your botnet elsewhere…rinse, lather, repeat…
  • LeapFrog – As we move to hybrid private/public clouds and load balancing/cloudbursting across multiple cloud providers, we'll interconnect Clouds via VPNs to the "trusted internals" of your Cloudbase… Attackers will thank us by abusing these tunnels to penetrate your assets through the, uh, back door.
  • vMotion Poison Potion – When VMware's vCloud makes its appearance and we start to allow vMotion across datacenters and across Clouds (in the clear?,) imagine the fun we'll have as we see attacks against vMotion protocols and VM state…  
  • EDoS – Economic Denial of Sustainability – Covered previously here

Adam mentioned that I might have considered botnets as a great example of a Cloud-based service; he wrote a very cool piece about it on ZDNet here.

I remembered after the fact that I wrote a related blog on the topic several months ago titled "Cloud Computing: Invented by Criminals, Secured by ???" as a riff on something Reuven Cohen wrote.

/Hoff

A Couple Of Follow-Ups On The EDoS (Economic Denial Of Sustainability) Concept…

January 23rd, 2009 25 comments

I wrote about the notion of EDoS (Economic Denial Of Sustainability) back in November.  You can find the original blog post here.

The basic premise of the concept was the following:

I had a thought about how the utility and agility of the cloud computing models such as Amazon AWS (EC2/S3) and the pricing models that go along with them can actually pose a very nasty risk to those who use the cloud to provide service.

That thought got me noodling about how the pay-as-you-go model could be used for nefarious means.

Specifically, this usage-based model potentially enables $evil_person who knows that a service is cloud-based to manipulate service usage billing in orders of magnitude that could be disguised easily as legitimate use of the service but drive costs to unmanageable levels.

If you take Amazon's AWS usage-based pricing model (check out the cost calculator here,) one might envision that instead of worrying about a lack of resources, the elasticity of the cloud could actually provide a surplus of compute, network and storage utility that could be just as bad as a deficit.

Instead of worrying about Distributed Denial of Service (DDoS) attacks from botnets and the like, imagine having to worry about delicately balancing forecasted need with capabilities like Cloudbursting to deal with a botnet designed to make seemingly legitimate requests for service to generate an economic denial of sustainability (EDoS) — where the dynamicism of the infrastructure allows scaling of service beyond the economic means of the vendor to pay their cloud-based service bills.

At any rate, here are a couple of interesting related items:

  1. Wei Yan, a threat researcher for Trend Micro, recently submitted an IEEE journal paper titled "Anti-Virus In-the-Cloud Service: Are We Ready for the Security Evolution?" in which he discusses an interesting concept for cloud-based AV and also cites/references my EDoS concept.  Thanks, Wei!
     
  2. There is a tangential story making the rounds recently about how researcher Brett O'Connor has managed to harness Amazon's EC2 to harvest/host/seed BitTorrent files.

    The relevant quote from the story that relates to EDoS is really about the visibility (or lack thereof) as to how cloud networks in their abstraction are being used and how the costs associated with that use might impact the cloud providers themselves.  Remember, the providers have to pay for the infrastructure even if the "consumers" do not:

    "This means, says Hobson, that hackers and other interested parties can
    simply use a prepaid (and anonymous) debit card to pay the $75 a month
    fee to Amazon and harvest BitTorrent applications at high speed with
    little or no chance of detection…

    It's not clear that O'Connor's clever work-out represents anything new
    in principle, but it does raise the issue of how cloud computing
    providers plan to monitor and manage what their services are being used
    for."

It's likely we'll see additional topics that relate to EDoS soon.

UPDATE: Let me try and give a clear example that differentiates EDoS from DDoS in a cloud context, although ultimately the two concepts are related:

DDoS (and DoS for that matter) attacks are blunt force trauma. The goal, regardless of motive, is to overwhelm infrastructure and remove from service a networked target by employing a distributed number of $evil_doers.  Example: a botnet is activated to swarm/overwhelm an Internet connected website using an asynchronous attack which makes the site unavailable due to an exhaustion of resources (compute, network or storage.)

EDoS attacks are death by 1000 cuts.  EDoS can also utilize distributed $evil_doers as well as single entities, but works by making legitimate web requests at volumes that may appear to be "normal" but are done so to drive compute, network and storage utility billings in a cloud model abnormally high.  Example: a botnet is activated to visit a website whose income results from ecommerce purchases.  The requests are all legitimate but the purchases are never made.  The vendor has to pay the cloud provider for increased elastic use of resources where revenue was never recognized to offset them.

We have anti-DDoS capabilities today with tools that are quite mature.  DDoS is generally easy to spot given huge increases in traffic.  EDoS attacks are not necessarily easy to detect, because the instrumentation and business logic is not present in most applications or stacks of applications and infrastructure to provide the correlation between "requests" and "successful transactions."  In the example above, increased requests may look like normal activity.
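To make that instrumentation gap concrete, here's a rough sketch (in Python, with entirely illustrative window sizes and thresholds) of the request-to-transaction correlation most stacks simply don't perform today:

```python
from collections import deque

class EDoSDetector:
    """Toy sliding-window correlation of request volume vs. completed
    transactions. All thresholds here are illustrative, not tuned."""

    def __init__(self, window=24, min_conversion=0.005, surge_factor=3.0):
        self.history = deque(maxlen=window)   # (requests, purchases) per interval
        self.min_conversion = min_conversion  # historical floor for purchase rate
        self.surge_factor = surge_factor      # multiple of baseline that counts as a surge

    def record_interval(self, requests, purchases):
        self.history.append((requests, purchases))

    def looks_like_edos(self, requests, purchases):
        """Traffic well above baseline AND collapsing conversion => suspicious."""
        if not self.history:
            return False
        baseline = sum(r for r, _ in self.history) / len(self.history)
        conversion = purchases / requests if requests else 0.0
        return requests > self.surge_factor * baseline and conversion < self.min_conversion
```

The point isn't the ten lines of arithmetic — it's that the two inputs (requests and successful transactions) live in different silos, and today nobody is correlating them.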

Given the attractiveness of the cloud to startups and SME/SMBs for cost and agility, this presents a problem.  The SME/SMB customers do not generally invest in this sort of integration, the cloud computing platform providers generally do not have the intelligence and visibility into these applications which they do not own, and typical DDoS tools don't, either.

So DDoS and EDoS ultimately can end with the same outcome: the target withers and ceases to be able to offer service, but I think that EDoS is something significant that should be discussed and investigated.

/Hoff

Cloud Computing Security: From DDoS (Distributed Denial Of Service) to EDoS (Economic Denial of Sustainability)

November 27th, 2008 12 comments

It's Thanksgiving here in the U.S., so in between baking, roasting and watching Rick Astley rickroll millions in the Macy's Thanksgiving Day Parade, I had a thought about how the utility and agility of the cloud computing models such as Amazon AWS (EC2/S3) and the pricing models that go along with them can actually pose a very nasty risk to those who use the cloud to provide service.

That thought — in between vigorous whisking during cranberry sauce construction — got me noodling about how the pay-as-you-go model could be used for nefarious means.

Specifically, this usage-based model potentially enables $evil_person who knows that a service is cloud-based to manipulate service usage billing in orders of magnitude that could be disguised easily as legitimate use of the service but drive costs to unmanageable levels. 

If you take Amazon's AWS usage-based pricing model (check out the cost calculator here,) one might envision that instead of worrying about a lack of resources, the elasticity of the cloud could actually provide a surplus of compute, network and storage utility that could be just as bad as a deficit.

Instead of worrying about Distributed Denial of Service (DDoS) attacks from botnets and the like, imagine having to worry about delicately balancing forecasted need with capabilities like Cloudbursting to deal with a botnet designed to make seemingly legitimate requests for service to generate an economic denial of sustainability (EDoS) — where the dynamicism of the infrastructure allows scaling of service beyond the economic means of the vendor to pay their cloud-based service bills.

Imagine the shock of realizing that the outsourcing to the cloud to reduce CapEx and move to an OpEx model just meant that while availability, confidentiality and integrity of your service and assets are solid, your sustainability and survivability are threatened.
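If you want to see the problem in numbers, here's a back-of-the-envelope model. Every rate below is a made-up placeholder, not an actual AWS price:

```python
# Hypothetical unit prices -- placeholders, not real AWS rates.
PRICE_PER_INSTANCE_HOUR = 0.10
PRICE_PER_GB_OUT = 0.17
REQUESTS_PER_INSTANCE_HOUR = 100_000   # assumed capacity of one instance
GB_PER_MILLION_REQUESTS = 20           # assumed payload weight

def monthly_bill(requests_per_hour, hours=720):
    # Elastic scale-out: more traffic means more instances, automatically.
    instances = max(1, requests_per_hour // REQUESTS_PER_INSTANCE_HOUR + 1)
    compute = instances * PRICE_PER_INSTANCE_HOUR * hours
    gb_out = requests_per_hour * hours / 1_000_000 * GB_PER_MILLION_REQUESTS
    return compute + gb_out * PRICE_PER_GB_OUT

print(round(monthly_bill(50_000)))    # organic traffic: ~$194 at these made-up rates
print(round(monthly_bill(500_000)))   # 10x "legitimate-looking" bot traffic: ~$1,656
```

The service never goes down — the elasticity works exactly as advertised — but the bill scales right along with the botnet.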

I know there exists the ability to control instance sprawl and constrain costs, but imagine if this is managed improperly or inexactly because we can't distinguish between legitimate and targeted resource consumption from an "attack."
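For the record, the blunt version of that cost control looks something like this in today's terms — a sketch using boto3 and a CloudWatch billing alarm, where the threshold, topic ARN and account number are all hypothetical:

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Page somebody when estimated charges blow through the budget ceiling.
cloudwatch.put_metric_alarm(
    AlarmName="monthly-spend-guardrail",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,                 # billing metrics only update a few times a day
    EvaluationPeriods=1,
    Threshold=500.0,              # hypothetical budget ceiling, in USD
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],
)
```

Note what this doesn't do: it tells you that you're spending, not whether the spend is legitimate — which is precisely the gap EDoS lives in.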

If you're in the business of ensuring availability for a large, timed event on the web, are you really going to limit service when you think it might drive revenues?

I think this is where service governors will have to get much more intelligent regarding how services are being consumed and how that affects the transaction supply chain, and embed logic that takes into account financial risk when scaling to ensure EDoS doesn't kill you.
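A governor of that sort might look something like the deliberately naive sketch below, where every input and threshold is a business parameter I've invented for illustration:

```python
def approve_scale_up(requested_instances, hourly_rate,
                     conversion_rate, min_viable_conversion,
                     hourly_budget):
    """Toy service-governor decision: scale only if the projected spend
    fits the budget AND the traffic still looks economically legitimate.
    All inputs and thresholds are illustrative."""
    if requested_instances * hourly_rate > hourly_budget:
        return False  # scaling further costs more than the service can sustain
    if conversion_rate < min_viable_conversion:
        return False  # lots of requests, no revenue: possible EDoS -- don't feed it
    return True
```

The interesting part is the second check: it forces the scaling decision to consult the transaction supply chain, not just CPU load.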

I can't say that I haven't had similar concerns when dealing with scalability and capacity planning in hosted or dedicated co-location environments, but generally storage and compute were not billable service options I had to worry about, only network bandwidth.

Back to the mashed potatoes.

/Hoff

From the X-Files – The Cloud in Context: Evolution from Gadgetry to Popular Culture

November 27th, 2009 4 comments


[This post was originally authored November 27, 2009.  I pushed it back to the top of the stack because I think it’s an interesting re-visitation of the benefits and challenges we are experiencing in Cloud today]

Below is an article I wrote many months ago prior to all the Nicholas Carr “electricity ain’t Cloud” discussions.  The piece was one from a collection that was distributed to “…the Intelligence Community, the DoD, and Congress” with the purpose of giving a high-level overview of Cloud security issues.

The Cloud in Context: Evolution from Gadgetry to Popular Culture

It is very likely that should one develop any interest in Cloud Computing (“Cloud”) and wish to investigate its provenance, one would be pointed to Nicholas Carr’s treatise “The Big Switch” for enlightenment. Carr offers a metaphoric genealogy of Cloud Computing, mapped to, and illustrated by, a keenly patterned set of observations from one of the most important catalysts of a critical inflection point in modern history: the generation and distribution of electricity.

Carr offers an uncannily prescient perspective on the evolution and adaptation of computing by way of this electric metaphor, describing how the scale of technology, socioeconomic, and cultural advances were all directly linked to the disruptive innovation of a shift from dedicated power generation in individual factories to a metered utility of interconnected generators powering distribution grids feeding all. He predicts a similar shift from insular, centralized, private single-function computational gadgetry to globally-networked, distributed, public service-centric collaborative fabrics of information interchange.

This phenomenon will not occur overnight nor has any other paradigm shift in computing occurred overnight; bursts of disruptive innovation have a long tail of adoption. Cloud is not the product or invocation of some singular technology, but rather an operational model that describes how computing will mature.

There is no box with blinking lights that can be simply pointed to as “Cloud” and yet it is clearly more than just timesharing with Internet connectivity. As corporations seek to drive down cost and gain efficiency force-multipliers, they have ruthlessly focused on divining what is core to their businesses, and expensive IT cost-centers are squarely in the crosshairs for rigorous valuation.

To that end, Carr wrote another piece on this very topic titled “IT Doesn’t Matter” in which he argued that IT was no longer a strategic differentiator due to commoditization, standardization, and cost. This was followed by “The End of Corporate Computing” wherein he suggested that IT will simply subscribe to IT services as an outsourced function. Based upon these themes, Cloud seems a natural evolutionary outcome motivated primarily by economics as companies pare down their IT investment — outsourcing what they can and optimizing what is left.

Enter Cloud Computing

The emergence of Cloud as cult-status popular culture also has its muse anchored firmly in the little machines nestled in the hands of those who might not realize that they’ve helped create the IT revolution at all: the consumer. The consumer’s shift to an always-on, many-to-many communication model with unbridled collaboration and unfettered access to resources, sharply contrasts with traditional IT — constrained, siloed, well-demarcated, communication-restricted, and infrastructure-heavy.

Regardless of any value judgment on the fate of Man, we are evolving to a society dedicated to convenience, where we are not tied to the machine, but rather the machine is tied to us, and always on. Your applications and data are always there, consumed according to business and pricing models that are based upon what you use while the magic serving it up remains transparent.

This is Cloud in a nutshell; the computing equivalent to classical Greek theater’s Deus Ex Machina.

For the purpose of this paper, it is important that I point out that I refer mainly to so-called “Public Cloud” offerings; those services provided by parties external to the data owner who provides an “outsourced” service capability on behalf of the consumer.

This graceful surrender of control is the focus of my discussion. Private Clouds — those services that may operate on the corporation’s infrastructure or those of a provider, but are managed under said corporation’s control and policies — offer a different set of benefits and challenges, but not to the degree of Public Cloud.

There are also hybrid and brokered models, but to keep focused, I shall not address these directly.

Cloud Reference Model

A service is generally considered to be “Cloud-based” should it meet the following characteristics and provide for:

  • The abstraction of infrastructure from the resources that deliver it
  • The democratization of those resources as an elastic pool to be consumed
  • A services-oriented, rather than infrastructure- or application-centric, model
  • Self-service, scale-on-demand elasticity and dynamism
  • A utility-like model of consumption and allocation

Cloud exacerbates the issues we have faced for years in the information security, assurance, and survivability spaces and introduces new challenges associated with extreme levels of abstraction, mobility, scale, dynamism and multi-tenancy. It is important that one contemplate the “big picture” of how Cloud impacts the IT landscape and how, given this “service-centric” view, certain things change whilst others remain firmly status quo.

Cloud also provides numerous challenges to the way in which computing and resources are organized, operated, governed and secured, given the focus on:

  • Automated and autonomic resource provisioning and orchestration
  • Massively interconnected and mashed-up data sources, conduits and results
  • Virtualized layers of software-driven, service-centric capability rather than infrastructure or application-specific monoliths
  • Dynamic infrastructure that is aware of and adjusts to the information, applications and services (workloads) running over it, supporting dynamism and abstraction in terms of scale, policy, agility, security and mobility

As a matter of correctness, virtualization as a form of abstraction may exist in many forms and at many layers, but it is not required for Cloud. Many Cloud services do utilize virtualization to achieve scale and I make liberal use of this assumptive case in this paper. As we grapple with the tradeoffs between convenience, collaboration, and control, we find that existing products, solutions and services are quickly being re-branded and adapted as “Cloud” to the confusion of all.

Modeling the Cloud

There exist numerous deployment models, service delivery models and use cases for Cloud, each offering a specific balance of integrated features, extensibility/openness and security, hinged on high levels of automation for workload distribution.

Three archetypal models generally describe cloud service delivery, popularly referred to as the “SPI Model,” where “SPI” refers to Software, Platform and Infrastructure (as a service) respectively.

NIST – Visual Cloud Model

Using the National Institute of Standards and Technology’s (NIST) draft working definition as the basis for the model:

Software as a Service (SaaS)

The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure and accessible from various client devices through a thin client interface such as a Web browser (e.g., web-based email).

The consumer does not manage or control the underlying cloud infrastructure, network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.

Platform as a Service (PaaS)

The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created applications using programming languages and tools supported by the provider (e.g., Java, Python, .Net). The consumer does not manage or control the underlying cloud infrastructure, network, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations.

Infrastructure as a Service (IaaS)

The capability provided to the consumer is to rent processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly select networking components (e.g., firewalls, load balancers).

Understanding the relationship and dependencies between these models is critical. IaaS is the foundation of all Cloud services with PaaS building upon IaaS, and SaaS — in turn — building upon PaaS. We will cover this in more detail later in the document.

Peanut Butter & Jelly — Making the Perfect Cloud Sandwich

Infostructure/Metastructure/Infrastructure

To understand how Cloud will affect security, visualize its functional structure in three layers:

  • The Infrastructure layer represents the traditional compute, network and storage hardware and operating systems familiar to us all. Virtualization platforms also exist at this layer and expose their capabilities northbound.
  • The Infostructure layer represents the programmatic components such as applications and service objects that produce, operate on or interact with the content, information and metadata.
  • Sitting in between Infrastructure and Infostructure is the Metastructure layer. This layer represents the underlying set of protocols and functions, such as DNS, BGP, and IP address management, which “glue” together and enable the applications and content at the Infostructure layer to in turn be delivered by the Infrastructure.

Certain areas of Cloud Computing’s technology underpinnings are making progress, but those things that will ultimately make Cloud the ubiquitous and transparent platform for our entire computing experience remain lacking.

Unsurprisingly, most of the deficient categories of technology or capabilities are those that need to be delivered from standards and consensus-driven action; things that have always posed challenges such as management, governance, provisioning, orchestration, automation, portability, interoperability and security. As security solutions specific to Cloud are generally slow in coming while fast innovating attackers are unconstrained by rules of engagement, it will come as no surprise that we are constantly playing catch up.

Cloud is a gradual adaptation rather than a wholesale re-tooling, and represents another cycle of investment which leaves us to consider where to invest our security dollars to most appropriately mitigate threat and vulnerability:

Typically, we react by cycling between investing in host-based controls > application controls > information controls > user controls > network controls and back again. While our security tools tend to be out of phase and less innovative than the tools of our opposition, virtualization and Cloud may act as much needed security forcing functions that get us beyond solving just the problem du jour.

The need to apply policy to workloads throughout their lifecycle, regardless of state, physical location, or infrastructure from which they are delivered, is paramount. Collapsing the atomic unit of the datacenter to the virtual machine boundary may allow for a simpler set of policy expressions that travel with the VM instance. At the same time, Cloud’s illusion of ubiquity and infinite scale means that we will not know where our data is stored, processed, or used.
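As a thought experiment, the “policy travels with the VM” idea reduces to something like the sketch below; the fields and the placement check are invented purely for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class WorkloadPolicy:
    data_classification: str               # e.g. "public", "internal", "regulated"
    allowed_regions: set = field(default_factory=set)
    encryption_required: bool = True

@dataclass
class VMInstance:
    image_id: str
    policy: WorkloadPolicy                 # policy rides along with the atomic unit

def placement_allowed(vm: VMInstance, region: str) -> bool:
    """Evaluated by the provider's scheduler at every (re)placement,
    wherever the workload happens to land."""
    return region in vm.policy.allowed_regions
```

The simplification comes from expressing policy at the VM boundary once, rather than re-deriving it from whatever physical infrastructure the instance happens to occupy this hour.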

Combine mobility, encryption, distributed resources with multiple providers, and a lack of open standards with economic cost pressure and even basic security capabilities seem daunting. Cloud simultaneously re-centralizes some resources while de-perimeterizing trust boundaries and distributing data. Understanding how the various layers map to traditional non-Cloud architecture is important, especially in relation to the Cloud deployment model used; there are significant trade-offs in integration, extensibility, cost, management, governance, compliance, and security.

Live by the Cloud, Die by the Cloud

Despite a tremendous amount of interest and momentum, Cloud is still very immature — pockets of innovation spread out across a long tail of mostly-proprietary infrastructure-, platform-, and software-as-a-service offerings that do not provide for much in the way of workload portability or interoperability.

Cloud is not limited to lower cost “server” functionality. With the fevered adoption of netbooks, virtualization, low-cost storage services, fixed/mobile convergence, the proliferation of “social networks,” and applications built to take advantage of all of this, Cloud becomes a single pane of glass for our combined computing experience. N.B., these powers are not inherently ours alone; the same upside can be used for wrongdoing.

In an attempt to whet the reader’s appetite regarding how Cloud dramatically impacts the risk modeling, assumptions, and security postures of today, I will provide a reasonably crisp set of examples, chosen to give pause:

Organizational and Operational Misalignment

The way in which most enterprise IT organizations are structured — in functional silos optimized to specialized, isolated functions — is diametrically opposed to the operational abstraction provided by Cloud.

The on-demand, elastic and self-service capabilities through simple interfaces and automated service layers abstract away core technology and support staff alike.

Few IT departments are prepared for what it means to apply controls, manage service levels, implement and manage security capabilities, and address compliance when the IT department is operationally irrelevant in that process. This leaves huge gaps in both identifying and managing risk, especially in outsourced models where ultimately the operational responsibility is “Cloudsourced” but the accountability is not.

The ability to apply specific security controls and measure compliance in mass-marketed Public Cloud services presents very real barriers to entry for enterprises who are heavily regulated, especially when balanced against the human capital (expertise) built-up by organizations.

Monoculture of Operating Systems, Virtualized Components, and Platforms

The standardization (de facto and de jure) on common interfaces to Cloud resources can expose uniform attack vectors that could affect one consumer, or, in the case of multi-tenant Public Cloud offerings, affect many. This is especially true in IaaS offerings where common sets of abstraction layers (such as hypervisors,) prototyped OS/application bundles (usually in the form of virtual machines) and common sets of management functions are used — and used to extend and connect the walled garden internal assets of enterprises to the public or semi-public Cloud environments of service providers operating infrastructure in proxy.

While most attack vectors target applications and information at the Infostructure layer or abuse operating systems and assorted hardware at the Infrastructure layer, the Metastructure layer is beginning to show signs of stress also. Recent attacks against key Metastructure elements such as BGP and DNS indicate that aging protocols do not fare well.

Segmentation and Isolation In Multi-tenant environments

Multi-tenancy in the Cloud (whether in the Public or Private Cloud contexts) brings new challenges to trust, privacy, resiliency and reliability model assertions by providers.  Many of these assertions are based upon the premise that we should trust — without reliably provable models or evidence — that in the absence of relevant illustration, Cloud is simply trustworthy in all of these dimensions, despite its immaturity. Vendors claim “airtight” information, process, application, and service isolation, but short of service level agreements, there is little to demonstrate or substantiate the claims that software-enabled Cloud Computing — however skinny the codebase may be — is any more (or less) secure than what we have today, especially with commercialized and proprietary implementations.

In multi-tenant Cloud offerings, exposures can affect millions, and placing some types of information in the care of others without effective compensating controls may erode the ROI valuation offered by Cloud in the first place, and especially so as the trust boundaries used to demarcate and segregate workloads of different consumers are provided by the same monoculture operating system and virtualization platforms described above.

Privacy of Data/Metadata, Exfiltration, and Leakage

With increased adoption of Cloud for sensitive workloads, we should expect innovative attacks against Cloud assets, providers, operators, and end users, especially around the outsourcing and storage of confidential information. The upshot is that solutions focused on encryption, at rest and in motion, will have the side effect of more and more tools (legitimate or otherwise) losing visibility into file systems, application/process execution, information and network traffic. Key management becomes remarkably relevant once again — on a massive scale.

Recent proof-of-concepts such as so-called side-channel attacks demonstrate how it is possible to determine where a specific virtual instance is likely to reside in a Public multi-tenant Cloud and allow an attacker to instantiate their own instance and cause it to be located such that it is co-resident with the target. This would potentially allow for sniffing and exfiltration of confidential data — or worse, potentially exploit vulnerabilities which would violate the sanctity of isolated workloads within the Cloud itself.

Further, given workload mobility — where the OS, applications and information are contained in an instance represented by a single atomic unit such as a virtual machine image — the potential for accidental or malicious leakage and exfiltration is real. Legal intercept, monitoring, forensics, and attack detection/incident response are heavily impacted, especially at the volume and levels of traffic envisioned by large Cloud providers, creating blind spots in ways we can’t fathom today.

Inability to Deploy Compensating or Detective Controls

The architecture of Cloud services — as abstract as they ought to be — means that in many cases the security of workloads up and down the stack is still dependent upon the underlying platform for enforcement. This is problematic inasmuch as the constructs representing compute, networking and storage resources — and security — are in many cases themselves virtualized.

Further, we are faced with more stealthy and evasive malware that is potentially able to evade detection while not only co-opting (or rootkitting) software and hypervisors, but also exploiting vulnerabilities in firmware and hardware such as CPU chipsets.

These sorts of attack vectors are extremely difficult to detect, let alone defend against. Referring back to the monoculture issue above, a so-called blue-pilled hypervisor, uniform across tens of thousands of compute nodes providing multi-tenant Cloud services, could be catastrophic. It is simply not yet feasible to provide parity in security capabilities between physical and Cloud environments; the maturity of solutions just isn’t there.

These are heady issues and should not be taken lightly when considering what workloads and services are candidates for various Cloud offerings.

What’s old is news again…

Perhaps it is worth adapting familiar attack taxonomies to Cloud.

Botnets that previously required massive malware-originated endpoint compromise in order to function can easily be activated in standardized fashion, in apparently legitimate form, and in large numbers by criminals who wish to harness the organized capabilities of Bots without the effort. Simply use stolen credit cards to establish fake accounts using a provider’s Infrastructure-as-a-Service and hundreds or thousands of distributed images could be activated in a very short timeframe.

Existing security threats such as DoS/DDoS attacks, SPAM and phishing will continue to be a prime set of tools for the criminal ecosystem as it leverages the distributed and well-connected Cloud, as will targeted attacks against telecommuters using both corporate and consumerized versions of Cloud services.

Consider a new take on an old problem based on ecommerce: Click-fraud. I frame this new embodiment as something called EDoS — economic denial of sustainability. Distributed Denial of Service (DDoS) attacks are blunt force trauma. The goal, regardless of motive, is to overwhelm infrastructure and remove from service a networked target by employing a distributed number of attackers. An example of DDoS is where a traditional botnet is activated to swarm/overwhelm an Internet connected website using an asynchronous attack which makes the site unavailable due to an exhaustion of resources (compute, network, or storage.)

EDoS attacks, however, are death by a thousand cuts. EDoS can also utilize distributed attack sources as well as single entities, but works by making legitimate web requests at volumes that may appear to be “normal” but are done so to drive compute, network, and storage utility billings in a cloud model abnormally high.

An example of EDoS as a variant of click fraud is where a botnet is activated to visit a website whose income results from ecommerce purchases. The requests are all legitimate but purchases are never made. The vendor has to pay the cloud provider for increased elastic use of resources but revenue is never recognized to offset them.

We have anti-DDoS capabilities today with tools that are quite mature. DDoS is generally easy to spot given huge increases in traffic. EDoS attacks are not necessarily easy to detect, because the instrumentation and business logic is not present in most applications or stacks of applications and infrastructure to provide the correlation between “requests” and “successful transactions.” In the example above, increased requests may look like normal activity. Many customers do not invest in this sort of integration and Cloud providers generally will not have visibility into applications that they do not own.

Ultimately the most serious Cloud concern is presented by way of the “stacked turtles” analogy: layer upon layer of complex interdependencies at the Infrastructure, Metastructure and Infostructure layers, predicated upon fragile trust models framed upon nothing more than politeness. Without re-engineering these models, strengthening the notion of (id)entity management, authentication and implementing secure protocols, we run the risk of Cloud simply obfuscating the fragility of the supporting layers until something catastrophic occurs.

Combined with where and how our data is created, processed, accessed, stored, and backed up — and by whom and using whose infrastructure — Cloud yields significant concerns related to on-going security, privacy, compliance and resiliency.

Moving Forward – Critical Areas of Focus

The Cloud Security Alliance (http://www.cloudsecurityalliance.org) issued its “Guidance for Critical Areas of Focus” related to Cloud Computing Security and defined fifteen domains of concern:

  • Cloud Architecture
  • Information lifecycle management
  • Governance and Enterprise Risk Management
  • Compliance & Audit
  • General Legal
  • eDiscovery
  • Encryption and Key Management
  • Identity and Access Management
  • Storage
  • Virtualization
  • Application Security
  • Portability & Interoperability
  • Data Center Operations Management
  • Incident Response, Notification, Remediation
  • “Traditional” Security impact (business continuity, disaster recovery, physical security)

The sheer complexity of the interdependencies between the Infrastructure, Metastructure and Infostructure layers makes it almost impossible to recommend focusing on only a select subset of these items since all are relevant and important.

Nevertheless, those items in boldface most deserve initial focus just to retain existing levels of security, resilience, and compliance while information and applications are moved from the walled gardens of the private enterprise into the care of others.

Attempting to retain existing levels of security will consume the majority of Cloud transition effort.  Until we see an expansion of available solutions to bridge the gaps between “traditional” IT and dynamic infrastructure 2.0 capabilities, any company can only focus on the traditional security elements of sound design, encryption, identity, storage, virtualization and application security. Similarly, until a standardized set of methods allow well-defined interaction between the Infrastructure, Metastructure and Infostructure layers, companies will be at the mercy of industry for instrumenting, much less auditing, Cloud elements — yet, as was already stated, the very sameness of standardization creates shared risk.

As with any change of this magnitude, the potential of Cloud lies between its trade-offs. In security terms, this “big switch” surrenders visibility and control so as to gain agility and efficiency. The question is, how to achieve a net positive result?

Well-established enterprise security teams, who optimize their security spend on managing risk rather than purely threat, should not be surprised by Cloud. To these organizations, adapting their security programs to the challenges and opportunities provided by Cloud is business as usual. For organizations unprepared for Cloud, the maturity of the security programs they can buy will quickly be outmoded.

Summary

The benefits of Cloud are many. The challenges are substantial. How we deal with these challenges and their organizational, operational, architectural, and technical impacts will fundamentally change the way in which we think about assessing and assuring the security of our assets.

Amazon Web Services: It’s Not The Size Of the Ship, But Rather The Motion Of the…

October 16th, 2009 3 comments
From Hoff's Preso: Cloudifornication - Indiscriminate Information Intercourse Involving Internet Infrastructure

Carl Brooks (@eekygeeky) gets some fantastic, thought-provoking interviews.  His recent article, in which he interviewed Peter DeSantis, VP of EC2, Amazon Web Services, titled “Amazon would like to remind you where the hype started,” is another great example.

However, this article left a bad taste in my mouth and ultimately invites more questions than it answers. Frankly I felt like there was a large amount of hand-waving in DeSantis’ points that glossed over some very important issues related to security issues of late.

DeSantis’ remarks implied, per the title of the article, that the poor handling of — and AWS’ continuing lack of transparency around — the issues people like me raise is the customer’s fault, owing to hype and overly aggressive, misaligned expectations.

In short, it’s not AWS’ fault they’re so awesome, it’s ours.  However, please don’t remind them they said that when they don’t live up to the hype they help perpetuate.

You can read more about that here: “Transparency: I Do Not Think That Means What You Think That Means…”

I’m going to skip around the article because I do agree with Peter DeSantis on the points he made about the value proposition of AWS which ultimately appear at the end of the article:

“A customer can come into EC2 today and if they have a website that’s designed in a way that’s horizontally scalable, they can run that thing on a single instance; they can use [CloudWatch] to monitor the various resource constraints and the performance of their site overall; they can use that data with our autoscaling service to automatically scale the number of hosts up or down based on demand so they don’t have to run those things 24/7; they can use our Elastic Load Balancer service to scale the traffic coming into their service and only deliver valid requests.”

“All of which can be done self-service, without talking to anybody, without provisioning large amounts of capacity, without committing to large bandwidth contracts, without reserving large amounts of space in a co-lo facility and to me, that’s a tremendously compelling story over what could be done a couple years ago.”

Completely fair.  An excellent way of communicating the AWS value proposition.  I totally agree.  Let’s keep this definition firmly in mind as we go on.
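As an aside, the self-service loop DeSantis describes maps, in today’s terms, to a few API calls — a sketch using boto3, with the group name and target utilization as hypothetical placeholders:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# CloudWatch-driven scaling of the sort described in the quote above:
# track a utilization target and let the group grow/shrink on its own.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-web-tier",        # hypothetical, pre-existing group
    PolicyName="scale-on-cpu",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,                   # illustrative target, in percent
    },
)
```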

Here’s where the story turns into something like a confessional that implies AWS is sadly a victim of their own success:

DeSantis said that the reason for stories like the DDoS on Bitbucket.org (and the non-cloud Sidekick story) is that people have come to expect always-on, easily consumable services.

“People’s expectations have been raised in terms of what they can do with something like EC2. I think people rightfully look at the potential of an environment like this and see the tools, the multi-availability zone, the large inbound transit, the ability to scale out and up and fundamentally assume things should be better,” he said.

That’s absolutely true. We look at what you offer (and how you offered/described it above) and we set our expectations accordingly.

We do assume that things should be better as that’s how AWS has consistently marketed the service.

You can’t reasonably expect to bitch about people’s perception of the service based on how it’s “sold” and then turn around when something negative happens and suggest that it’s the consumers’ fault for setting their expectational compass with the course you set.

It *is* absolutely fair to suggest that there is no excuse for failing to use common sense or to apply good architectural logic to the deployment of services on AWS, but it’s also disingenuous to expect much of the target market to whom you are selling to understand the caveats here when so much is obfuscated by design.  I understand AWS doesn’t say they protect against every threat, but they also do not say they do not…until something happens where that becomes readily apparent 😉

When everything is great AWS doesn’t go around reminding people that bad things can happen, but when bad things happen it’s because of incorrectly-set expectations?

Here’s where the discussion turns to an interesting example — the BitBucket DDoS issue.

For instance, DeSantis said it would be trivial to wash out standard DDOS attacks by using clustered server instances in different availability zones.

Okay, but four things come to mind:

  1. Why did it take 15 hours for AWS to recognize the DDoS in the first place? (They didn’t actually “detect” it, the customer did)
  2. Why did the “vulnerability” continue to exist for days afterward?
  3. While using different availability zones makes sense, it’s been suggested that this DDoS attack was internal to EC2, not externally-generated
  4. While it *is* good practice and *does* make sense, “clustered server instances in different availability zones” costs money

Keep those things in the back of your mind for a moment…

“One of the best defenses against any sort of unanticipated spike is simply having available bandwidth. We have a tremendous amount of inbound transit to each of our regions. We have multiple regions which are geographically distributed and connected to the internet in different ways. As a result of that it doesn’t really take too many instances (in terms of hits) to have a tremendous amount of availability – 2,3,4 instances can really start getting you up to where you can handle 2,3,4,5 Gigabytes per second. Twenty instances is a phenomenal amount of bandwidth transit for a customer,” he said.

So again, here’s where I take issue with this “bandwidth solves all” answer. The solution being proposed by DeSantis here is that a customer should be prepared to launch/scale multiple instances in response to a DoS/DDoS, in effect making it the customers’ problem instead of AWS detecting and squelching it in the first place?

Further, when you think of it, the trickle-down effect of DDoS is potentially good for AWS’ business. If they can absorb massive amounts of traffic, then the more instances you have to scale, the better for them given how they charge.  Also, per my point #3 above, it looks as though the attack was INTERNAL to EC2, so ingress transit bandwidth per region might not have done anything to help here.  It’s unclear to me whether this was a distributed DoS attack at all.

Lori MacVittie wrote a great post on this very thing titled “Putting a Price on Uptime” which basically asks who pays for the results of an attack like this:

A lack of ability in the cloud to distinguish illegitimate from legitimate requests could lead to unanticipated costs in the wake of an attack. How do you put a price on uptime and more importantly, who should pay for it?

This is exactly the point I was raising when I first spoke of Economic Denial Of Sustainability (EDoS) here.  All the things AWS speaks to as solutions cost more money…money which many customers, based upon their expectations of AWS’ service, may be unprepared to spend.  They wouldn’t have much better options (if any) if they were hosting it somewhere else, but that’s hardly the point.

I quote back to something I tweeted earlier: “The beauty of cloud and infinite scale is that you get the benefits of infinite FAIL.”

The largest DDOS attacks now exceed 40Gbps. DeSantis wouldn’t say what AWS’s bandwidth ceiling was but indicated that a shrewd guesser could look at current bandwidth and hosting costs and what AWS made available, and make a good guess.

The tests done here showed the capability to generate 650 Mbps from a single medium instance that attacked another instance which, per Radim Marek, was using another AWS account in another availability zone.  So if the largest DDoS attacks now exceed 40 Gbps and five EC2 instances can handle 5 Gb/s, I’d need eight times that — roughly 40 instances — to absorb an attack of this scale (unknown if this represents a small or large instance.)  Seems simple, right?
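The arithmetic, for what it’s worth, using DeSantis’ own numbers:

```python
attack_gbps = 40            # "the largest DDoS attacks now exceed 40Gbps"
gbps_per_instance = 5 / 5   # per the quote: ~five instances handle ~5 Gb/s
print(attack_gbps / gbps_per_instance)  # 40.0 instances, just to absorb the traffic
```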

Again, this is about absorbing bandwidth against these attacks, not preventing them or defending against them.  This is about not only passing the buck, but squeezing more of them out of you, the customer.

“I don’t want to challenge anyone out there, but we are [a] very, very large environment and I think there’s a lot of data out there that will help you make that case,” he said.

Of course you wish to challenge people, that’s the whole point of your arguments, Peter.

How much bandwidth AWS has is only one part of the issue here.  The other is AWS’ ability to respond to such attacks in reasonable timeframes and prevent them in the first place as part of the service.  That’s a huge part of what I expect from a cloud service.

So let’s do what DeSantis says and set our expectations accordingly.

/Hoff

AMI Secure? (Or: Shared AMIs/Virtual Appliances – Bot or Not?)

October 8th, 2009 6 comments

To some of you, this is going to sound like obvious and remedial advice that you would consider common sense.  This post is not for you.

Some of you — and you know who you are — are going to walk away from this post with a scratching sound coming from inside your skull.

The convenience of pre-built virtual appliances offered up for use in virtualized environments such as VMware’s Virtual Appliance marketplace, or shared/community AMIs on AWS EC2, makes for a tempting reduction of the time spent getting your virtualized/cloud environments up to speed; the images are there just waiting for a quick download and point-and-click activation.  These juicy marketplaces will continue to sprout up with offerings of bundled virtual machines for every conceivable need: LAMP stacks, databases, web servers, firewalls…you name it.  Some are free, some cost money.

There’s a dark side to this convenience. You have no idea as to the trustworthiness of the underlying operating systems or applications contained within these tidy bundles of cloudy joy.  The same could be said for much of the software in use today, but cloud simply exacerbates this situation by adding abstraction, scale and the elastic version of the snuggie that convinces people nothing goes wrong in the cloud…until it does.

While trust in mankind is noble, trust in software is a palm-head-slapper.  Amazon even tells you so:

AMIs are launched at the user’s own risk. Amazon cannot vouch for the integrity or security of AMIs shared by other users. Therefore, you should treat shared AMIs as you would any foreign code that you might consider deploying in your own data center and perform the appropriate due diligence.

Ideally, you should get the AMI ID from a trusted source (a web site, another user, etc). If you do not know the source of an AMI, we recommended that you search the forums for comments on the AMI before launching it. Conversely, if you have questions or observations about a shared AMI, feel free to use the AWS forums to ask or comment.

Remember that in IaaS-based service offerings, YOU are responsible for the security of your instances.  Do you really know where an AMI/VM/VA came from, what’s running on it and why?  Do you have the skills to be able to answer this question?  How would you detect if something was wrong? Are you using hardening tools?  Logging tools?  Does any of this matter if the “box” is rooted anyway?
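One cheap slice of that due diligence can even be automated — a sketch using boto3 that refuses to launch a shared AMI unless it is owned by an account you explicitly trust (the account IDs and AMI ID below are hypothetical placeholders):

```python
import boto3

TRUSTED_OWNERS = {"123456789012", "999988887777"}  # hypothetical trusted accounts

ec2 = boto3.client("ec2")

def ami_is_trusted(ami_id: str) -> bool:
    """True only if the AMI's owning account is on our allowlist.
    (Raises if the AMI ID doesn't exist at all.)"""
    images = ec2.describe_images(ImageIds=[ami_id])["Images"]
    return bool(images) and images[0]["OwnerId"] in TRUSTED_OWNERS

if not ami_is_trusted("ami-0abcd1234example"):
    raise SystemExit("Refusing to launch an AMI of unknown provenance")
```

Provenance isn’t integrity, of course — a trusted owner can still publish a rooted image — but it prunes the obvious garbage.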

As I talk about in my Frogs and Cloudifornication presentations — and as the guys from Sensepost have shown — there’s very little to stop someone from introducing a trojaned/rootkitted AMI or virtual appliance that gets utilized by potentially thousands of people.  Instead of having to compromise clients on the Internet, why not just pwn system images that have the use of elastic cloud resources instead?

Imagine someone using auto-scaling and a common image to spool up hundreds (or more) of instances — infected instances.  Two words: instant Botnet.

There’s no outbound filtering (via security groups) in AWS, so exfiltrating your data would be easy. Registering C&C botnet channels would be trivial, especially over common ports.  Oh, don’t forget that in most IaaS offerings, resource consumption is charged incrementally, so the “owner” gets to pay doubly for the fun — CPU, storage and network traffic could be driven sky high.  Another form of EDoS (economic denial of sustainability.)

Given the fact that we’ve seen even basic DDoS attacks go undetected by these large providers despite their claims, the potential is frightening.

As the AWS admonishment above suggests, apply the same (more, actually) common sense regarding using these shared AMIs and virtual machines as you would were you to download and execute applications on your workstation or visit a website, or…oh, man…this is just a losing proposition. ;(

If you can, please build your own AMIs or virtual machines, or consider trusted sources that can be vetted and for which the provenance and relative integrity can be derived. Please don’t use shared images if you can avoid it.  Please ensure that you know what you’re getting yourself into.

Play safe.

/Hoff

* P.S. William Vambenepe (@vambenepe) reminded me of the other half of this problem when he said (on Twitter) “…it’s not just using someone’s AMI that’s risky. Sharing your AMI can be too http://bit.ly/1qMxgN” — a great post on what happens when people build AMIs/VMs/VAs with, um, unintended residue left over. Check it out.

DDoS – A Moose On Cloud’s Table Or A Pea Under The Mattress?

September 7th, 2009 7 comments

Readers of my blog will no doubt be familiar with Roland Dobbins.  He’s commented on lots of posts here and whilst we don’t always see eye-to-eye, I really respect both his intellect and his style.

So it’s fair to say that Roland is not a shy lad.  Formerly at Cisco and now at Arbor, he’s staked his position (and likely his living) on dealing with a rather unpleasant issue in the highly distributed and networked InterTubes: Distributed Denial of Service (DDoS) attacks.

A recent article in ITWire titled “DDoS, the biggest threat to Cloud Computing” sums up Roland’s focus:

“According to Roland Dobbins, solutions architect for network security specialist Arbor Networks, distributed denial of service attacks are one of the most under-rated and ill-guarded-against security threats to corporate IT, and in particular the biggest threat facing cloud computing.”

DDOS, Dobbins claims, is largely ignored in many discussions around network and cloud computing security. “Most discussions around cloud security are centred around privacy, confidentially, the separation of data from the application logic, but the security elephant in the room that very few people seem to want to talk about is DDOS. This is the number one security threat facing the cloud model,” he told last week’s Ausnog conference in Sydney.

“In cloud computing where infrastructure is shared by potentially millions of users, DDOS attacks have the potential to have much greater impact than against single tenanted architectures,” Dobbins argues. Yet, he says, “The cloud providers emerging as leaders don’t tend to talk much about their resiliency to DDOS attacks.”

Depending upon where you stand, especially if we’re talking about Public Clouds — and large Public Cloud providers such as Google, Amazon, Microsoft, etc. — you might cock your head to one side, raise an eyebrow and focus on the sentence fragment “…and in particular the biggest threat facing cloud computing.”  One of the reasons DDoS is under-appreciated is that in relative frequency — and in the stable of solutions and skill sets to deal with it — DDoS is a long-tail event.

With unplanned outages afflicting almost all major Cloud providers today, the moose on the table seems to be good ol’ internal operational issues at the moment…that’s not to say DDoS won’t become a bigger problem as the models for networked Cloud resources change, but as the model changes, so will the defensive options in the stable.

With the decentralization of data but the mass centralization of data centers featured by these large Cloud providers, one might see how this statement could strike fear into the hearts of potential Cloud consumers everywhere and Roland is doing his best to serve us a warning — a Public (denial of) service announcement.

Sadly, at this point, however, I’m not convinced that DDoS is “the biggest threat facing Cloud Computing” and whilst providers may not “…talk much about their resiliency to DDoS attacks,” some of that may likely be due to the fact that they don’t talk much about security at all.  It also may be due to the fact that in many cases, what we can do to respond to these attacks is directly proportional to the size of your wallet.

Large network and service providers have been grappling with DDoS for years, so have large enterprises.  Folks like Roland have been on the front lines.

Cloud will certainly amplify the issues of DDoS because of how resources — even when distributed and resiliently load balanced in elastic and “perceptively infinitely scalable” ways — are ultimately organized, offered and consumed.  This is a valid point.

But if we look at the heart of most criminal elements exploiting the Internet today (and what will become Cloud,) you’ll find that the great majority want — no, *need* — victims to be available.  If they’re not, there’s no exploiting them.  DDoS is blunt force trauma — with big, messy, bloody blows that everybody notices.  That’s simply not very good for business.

At the end of the day, I think DDoS is important to think about.  I think variations of DDoS are, too.

I think that most service providers are thinking about it and investing in technology from companies such as Cisco and Arbor to deal with it, but as Roland points out, most enterprises are not — and if Cloud has its way, they shouldn’t have to:

Paradoxically, although Dobbins sees DDOS as the greatest threat to cloud computing, he also sees it as the potential solution for organisations grappling with the complexities of securing the network infrastructure.

“One answer is to get rid of all IT systems and hand them over to an organisation that specialises in these things. If the cloud providers are following best practice and have the visibility to enable them to exert control over their networks it is possible for organisation to outsource everything to them.”

For those organisations that do run their own data centres, he suggests they can avail themselves of ‘clean pipe’ services which protect against DDOS attacks. According to Nick Race, head of Arbor Networks Australia, Telstra, Optus and Nextgen Networks all offer such services.

So what about you?  Moose on the table or pea under the mattress?

/Hoff

Cloud Computing Security: (Orchestral) Maneuvers In the Dark?

June 14th, 2009 8 comments

Last week Kevin L. Jackson wrote an insightful article titled: Cloud Computing: The Dawn of Maneuver Warfare in IT Security.  I enjoyed Kevin’s piece but struggled with how I might respond: cheerleader or pundit.  I tried for a bit of both while working in witty references to OMD.*

Kevin’s essay is an interesting — if not hope-filled — glimpse into what IT Security could be as enabled by Cloud Computing and virtualization, were one able to suspend disbelief given the realities of hefty dependencies on archaic protocols, broken trust models and huge gaps in technology and operational culture.  Readers of my blog will certainly recognize this from “The Four Horsemen of the Virtualization Security Apocalypse” and “The Frogs Who Desired a King: A Virtualization and Cloud Computing Security Fable.”

Conversely, I’ve certainly also done my fair share of trying to change the world both by thought and action in the stance of “cheerleader”; I’ve been involved in everything from massive sensornet deployments to developing AI/neural-network-based security technologies, so I think I’ve got a fair idea of what the balance looks like.  The salty pragmatist often triumphs, however…

Kevin’s article represents a futurist’s view, which is in no way a bad thing, but I fear it is too far disconnected from the realities of security and operational maturity outside of the navel:

The lead topic of every information technology (IT) conversation today is cloud computing. The key point within each of those conversations is inevitably cloud computing security.  Although this trend is understandable, the sad part is that these conversations will tend to focus on all the standard security pros, cons and requirements. While protecting data from corruption, loss, unauthorized access, etc. are all still required characteristics of any IT infrastructure, cloud computing changes the game in a much more profound way.

Certainly Cloud is a game changer, but just because the rules change does not mean the players do.  We haven’t solved those issues as they pertain to non-virtualized or Cloud infrastructure, so while sad, it’s a crushing truth we have to address.  Further, to get from “here” to “there,” we do need to focus on these issues because that is how we are measured today; most of us don’t get to start from scratch.

To that point, check out “Incomplete Thought: Cloud Security IS Host-Based…At The Moment” for why this gap exists in the first place.

I should make it clear that this does not mean I necessarily disagree with the exploration of Kevin’s future state; in fact, I’ve written about it in various forms several times.  It is important, however, to separate what Cloud will deliver from a security perspective in the short term from what it can possibly deliver in the long term; this applies to both the cultural and technical perspectives.

I think the most significant challenges I had in reading Kevin’s article revolved around three things:

  1. Mixing tenses in some key spots seems to imply that, out of the box today, Cloud Computing can deliver on the promises Kevin describes.  Given the audience, this can lead to unachievable expectations
  2. A startling disconnect between the public, private and military sectors, compounded by an over-reliance on military analogies as the model for an ideal state of security operations and strategy
  3. Unrealistic portrayals of where we are with the maturity of Cloud/virtualization mobility, portability, interoperability and security capabilities

In the short term, incremental improvements will certainly occur with respect to security, thanks to the “lubricant-like” functionality provided by virtualization and Cloud.

These “improvements,” however, mostly represent gains in the automation of manual processes and a resultant increase in efficiency, rather than a dramatic improvement in survivability or security, given what we have to work with today.

The lack of heterogeneous closed-loop autonomics, governance and orchestration, in conjunction with the fact that a huge amount of infrastructure and applications are not virtualization- or Cloud-ready, means this picture is a vision, not a mission.

Kevin juxtaposes the last few decades of static, Maginot Line IT/Information Security “defense-in-depth” strategy with the unpredictable and “agile, hostile and mobile” notions of military warfighter maneuvers to compare and contrast what he suggests Cloud will deliver with an enlightened state of security capabilities:

Until now, IT security has been akin to early 20th century warfare.  After surveying and carefully cataloging all possible threats, the line of business (LOB) manager and IT professional would debate and eventually settle on appropriate and proportional risk mitigation strategies. The resulting IT security infrastructures and procedures typically reflected a “defense in depth” strategy, eerily reminiscent of the French WWII Maginot Line. Although new threats led to updated capabilities, the strategy of extending and enhancing the protective barrier remained. Often described as an “arms race,” the IT security landscape has settled into ever escalating levels of sophisticated attack versus defense techniques and technologies. Current debate around cloud computing security has seemed to continue without the realization that there is a fundamental change now occurring. Although technologically, cloud computing represents an evolution, strategically it represents the introduction of maneuver warfare into the IT security dictionary.

The concepts of attrition warfare and maneuver warfare dominate strategic options within the military. In attrition warfare, masses of men and material are moved against enemy strongpoints, with the emphasis on the destruction of the enemy’s physical assets. Maneuver warfare, on the other hand, advocates that strategic movement can bring about the defeat of an opposing force more efficiently than by simply contacting and destroying enemy forces until they can no longer fight.

The US Marine Corps concept of maneuver is a “warfighting philosophy that seeks to shatter the enemy’s cohesion through a variety of rapid, focused, and unexpected actions which create a turbulent and rapidly deteriorating situation with which the enemy cannot cope.”   It is important to note, however, that neither is used in isolation.  Balanced strategies combine attrition and maneuver techniques in order to be successful on the battlefield.

The reality is that outside of the military, “shock and awe” doesn’t really work when you’re mostly limited to “compliance and three analysts with a firewall.”  Check out “Security & the Cloud — What Does That Even Mean?”

Here’s where the reality distortion field trumps the rainbows and unicorns:

With cloud computing, IT security can now use maneuver concepts for enhanced defense. By leveraging virtualization, high speed wide area networks and broad industry standardization, new and enhanced security strategies can now be implemented. Defensive options can now include the virtual repositioning of entire datacenters. Through “cloudbursting”, additional compute and storage resources can also be brought to bear in a defensive, forensic or counter-offensive manner. The IT team can now actively “fight through an attack” and not just observe an intrusion, merely hoping that the in-place defenses are deep enough. The military analogy continues in that maneuver concepts must be combined with “defense in depth” techniques into holistic IT security strategies.

Allow me to suggest that “fight[ing] through an attack” by simply redirecting/re-positioning the $victim isn’t any more an effective definition of an “active countermeasure” than waiting the attack out; it’s all defense and no offense, and there is no elimination of the threat.  I’ve written about that a bit: “Incomplete Thought: Offensive Computing – The Empire Strikes Back,” “Thinning the Herd & Chlorinating the Malware Gene Pool…” and “Everybody Wing Chun Tonight & ‘ISPs Providing Defense By Engaging In Offensive Computing’ For $100, Alex.”  Mobility does not imply security.

To wit:

A theoretical example of how maneuver IT security strategies could be used would be in responding to a denial of service attack launched on DISA datacenter-hosted DoD applications. After picking up a grossly abnormal spike in inbound traffic, targeted applications could be immediately transferred to virtual machines hosted in another datacenter. Router automation would immediately re-route operational network links to the new location (IT defense by maneuver). Forensic and counter-cyber attack applications, normally dormant and hosted by a commercial infrastructure-as-a-service (IaaS) provider (a cloudburst), are immediately launched, collecting information on the attack and sequentially blocking zombie machines. The rapid counter would allow for the immediate, and automated, detection and elimination of the attack source.

To pick on this specific example: even given the relatively mature anti-DDoS capabilities we have today without virtualization or Cloud, simply moving resources around in response to an attack does nothing if the assets remain bound to the same IP addresses and hostnames. Fundamentally, the static underpinnings holding the infrastructure together hinder this lofty goal.  You can Cloudburst till the cows come home, but the attacks will simply follow; transfer all those assets to a new virtual datacenter and, for the most part, the bad traffic goes with them. Distributed intelligence can certainly reduce the pain, but with distributed botnets whose node counts can number in the millions, you’re not going to achieve the “…elimination of the attack source.”
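
To make the point concrete, here’s a toy model (purely illustrative, with a hypothetical hostname, site names and bot count) of why maneuver-by-migration doesn’t shed attack load: the botnet addresses the service by name, so wherever the workload lands, the flood lands with it.

```python
# Toy model of "defense by maneuver" against DDoS. The hostname,
# datacenter names and bot count are hypothetical; the point is that
# migrating the workload doesn't shed load when attackers address it
# by name rather than by location.
dns = {"app.example.com": "dc-east"}   # name -> current datacenter
load = {"dc-east": 0, "dc-west": 0}    # attack requests landing per site

def attack_round(bots: int) -> None:
    # Every bot "resolves" the name and sends traffic wherever it points.
    site = dns["app.example.com"]
    load[site] += bots

attack_round(1_000_000)                # the flood hits dc-east
dns["app.example.com"] = "dc-west"     # the "maneuver": migrate and re-route
attack_round(1_000_000)                # the next round simply follows the name
print(load)  # {'dc-east': 1000000, 'dc-west': 1000000}
```

All the maneuver bought the defender in this sketch is a second datacenter absorbing the same flood; the attack source is untouched.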

With these large-scale botnets as an example, the excess capacity and mobility of the $victim could even have worse, unintended ramifications, such as what I wrote about here: Economic Denial Of Sustainability (EDoS)
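
The EDoS math is brutally simple.  Here’s a back-of-envelope sketch (all rates and capacities are assumed placeholders, not actual AWS pricing) of how a botnet issuing well-formed, legitimate-looking requests turns elastic scaling into a bill instead of an outage:

```python
import math

# Back-of-envelope EDoS model. With usage-based pricing, an attacker
# doesn't need to take the service down, just make it scale. All rates
# and capacities below are assumed placeholders, not real pricing.
INSTANCE_HOURLY = 0.10             # $/instance-hour (assumed)
GB_TRANSFER = 0.17                 # $/GB egress (assumed)
REQS_PER_INSTANCE_HOUR = 360_000   # requests one instance absorbs/hour (assumed)

def hourly_cost(reqs_per_hour: float, kb_per_response: float = 50.0) -> float:
    instances = math.ceil(reqs_per_hour / REQS_PER_INSTANCE_HOUR)
    egress_gb = reqs_per_hour * kb_per_response / (1024 * 1024)
    return instances * INSTANCE_HOURLY + egress_gb * GB_TRANSFER

# 100M well-formed requests/hour look legitimate to the autoscaler,
# which happily provisions for them around the clock:
monthly = hourly_cost(100_000_000) * 24 * 30
print(f"~${monthly:,.0f}/month to stay 'up'")  # roughly $600K with these numbers
```

The service never goes down, so the traditional DDoS alarms stay quiet; the victim simply runs out of budget before the attacker runs out of bots.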

In closing, we’ve got two parallel paths of advancing technology: the autonomics of the datacenter and the evolution of security.  I’ll wager that improvements in the former will arrive well out of phase with the latter, not least because of what Kevin closed with:

This revolution, of course, doesn’t come without its challenges.  This is truly a cultural shift. Cloud computing provides choice, and in the context of active defense strategies, these choices must be made in real-time.  While the cloud computing advantages of self-service, automation, visibility and rapid provisioning can enable maneuver security strategies, successful implementation requires cooperation and collaboration across multiple entities, both within and without.
The cloud computing era is also the dawning of a new day in IT security.  In the not too distant future, network and IT security training will include both static and active IT security techniques. Maneuver warfare in IT security is here to stay.

It’s absolutely a cultural issue, but we must strive to be realistic about how well Cloud and security technology and capabilities actually align.  As someone who’s spent the last 15 years in IT/Security, I can say that this is NOT the “…dawning of a new day in IT security”; rather, it’s still dark out and will be for quite some time.  There is indeed opportunity to utilize Cloud and virtualization to react better, faster and more efficiently, but let’s not pretend we’re treating the problem when what we’re doing is making the symptoms less noticeable.

I am absolutely bullish on Cloud, but not on Cloud Security as it stands, at least not until we make headway toward fixing the foundational problems that allow these issues to occur in the first place.

/Hoff

* I thought that out of all of OMD’s tracks, the most apropos titles for this blog post would be “Pandora’s Box,” “Dreaming,” or “The New Stone Age” 😉  Thanks for the motivation, @csoandy

Don’t Hassle the Hoff: Recent Press & Podcast Coverage & Upcoming Speaking Engagements

February 2nd, 2009 No comments


Here is some of the recent coverage from the last couple of months on topics relevant to content on my blog, presentations and speaking engagements.  No particular order or priority.

Press/Technology & Security eZines:

Website/Blog Coverage/Meaningful Links:

I should note that much of my cloud computing writing is being republished over at the SYSCON Cloud Computing Journal with a self-branded mini-site: ChristoferHoff.Sys-Con.com

Podcasts/Webcasts/Video:

I am confirmed to speak at the following upcoming events:

  • Source Boston – Boston, MA – March 11-13
  • TechTarget Threat Management Decisions Summit – New York, NY – March 26
  • Americas Growth Capital InfoSec Conference (keynote) – San Francisco, CA – April 20
  • RSA 2009 (multiple sessions) – San Francisco, CA – April 21-24
  • Virtualization Congress – Las Vegas, NV – May 4-7
  • (there are others being sorted at the moment)

I should/will be attending the following events:

  • Shmoocon
  • Cloud Computing Expo   

/Hoff