Archive for the ‘Clean Pipes’ Category

CloudPassage & Why Guest-Based Footprints Matter Even More For Cloud Security

February 1st, 2011 4 comments

Every day for the last week or so after their launch, I’ve been asked left and right about whether I’d spoken to CloudPassage and what my opinion was of their offering.  In full disclosure, I spoke with them while they were in stealth almost a year ago and offered some guidance, and again the day before their launch last week.

Disappointing as it may be to some, this post isn’t really about my opinion of CloudPassage directly; it is, however, the reaffirmation of the deployment & delivery models for the security solution that CloudPassage has employed.  I’ll let you connect the dots…

Specifically, in public IaaS clouds where homogeneity of packaging, standardization of images and uniformity of configuration enables scale, security has lagged.  This is mostly due to the fact that for a variety of reasons, security itself does not scale (well.)

In an environment where the underlying platform cannot be counted upon to provide “hooks” to integrate security capabilities in at the “network” level, all that’s left is what lies inside the VM packaging:

  1. Harden and protect the operating system [and thus the stuff atop it],
  2. Write secure applications and
  3. Enforce strict, policy-driven information-centric security.

My last presentation, “Cloudinomicon: Idempotent Infrastructure, Building Survivable Systems and Bringing Sexy Back to Information Centricity” addressed these very points. [This one is a version I delivered at the University of Michigan Security Summit]

If we focus on the first item in that list, you’ll notice that generally to effect policy in the guest, you must have a footprint on said guest — however thin — to provide the hooks that are needed to either directly effect policy or redirect back to some engine that offloads this functionality.  There’s a bit of marketing fluff associated with using the word “agentless” in many applications of this methodology today, but at some point, the endpoint needs some sort of “agent” to play.*
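
Purely as an illustration of that pattern (a sketch, not CloudPassage’s actual design; the endpoint URL, token and payload fields below are invented), a thin guest agent really only needs to gather some local state and report it to a cloud-hosted engine for evaluation:

```python
# Minimal sketch of a thin guest agent that reports host state to a
# cloud-hosted policy "grid".  The endpoint, token and payload fields
# are hypothetical -- this illustrates the pattern, not any vendor's
# actual protocol.
import json
import platform
import socket
import urllib.request

GRID_URL = "https://grid.example.com/v1/telemetry"   # hypothetical
API_TOKEN = "replace-with-issued-token"              # hypothetical

def collect_host_state():
    """Gather a small amount of local state worth evaluating centrally."""
    return {
        "hostname": socket.gethostname(),
        "os": platform.platform(),
        "python": platform.python_version(),
        # A real agent would also enumerate listening ports, local
        # firewall rules, package versions, file hashes, etc.
    }

def report(state):
    """Ship the collected state to the grid for policy evaluation."""
    req = urllib.request.Request(
        GRID_URL,
        data=json.dumps(state).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer " + API_TOKEN},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status

if __name__ == "__main__":
    print(report(collect_host_state()))
```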

So that’s where we are today.  The abstraction offered by virtualized public IaaS cloud platforms is pushing us back to the guest-centric-based models of yesteryear.

This will bring challenges with scale, management, efficacy, policy convergence between physical and virtual, and the overall API-driven telemetry required by true cloud solutions.

You can read more about this in some of my other posts on the topic.

Finally, since I used them for eyeballs, please do take a look at CloudPassage — their first (free) offerings are based upon leveraging small-footprint Linux agents and a cloud-based SaaS “grid” to provide vulnerability management and firewall/zoning in public cloud environments.

/Hoff

* There are exceptions to this rule depending upon *what* you’re trying to do, such as anti-malware offload via a hypervisor API, but this is not generally available to date in public cloud.  This will, I hope, one day soon change.


Incomplete Thought: The Other Side Of Cloud – Where The (Wild) Infrastructure Things Are…

March 9th, 2010 3 comments

This is bound to be an unpopular viewpoint.  I’ve struggled with how to write it because I want to inspire discussion, not a religious battle.  It has been hard to keep it an incomplete thought. I’m not sure I have succeeded 😉

I’d like you to understand that I come at this from the perspective of someone who talks to providers of service (Cloud and otherwise) and large enterprises every day.  Take that with a grain of whatever you enjoy ingesting.  I have also read some really interesting viewpoints contrary to mine, many of which I find really fascinating, even though they don’t square with my current interpretation of reality.

Here’s the deal…

While our attention has turned to the wonders of Cloud Computing — specifically the elastic, abstracted and agile delivery of applications and the content they traffic in — an interesting thing occurs to me related to the relevancy of networking in a cloudy world:

All this talk of how Cloud Computing commoditizes “infrastructure” and challenges the need for big iron solutions, really speaks to compute, perhaps even storage, but doesn’t hold true for networking.

The evolution of these elements runs on different curves.

Networking ultimately is responsible for carting bits in and out of compute/storage stacks.  This need continues to reliably intensify (beyond linear) as compute scale and densities increase.  You’re not going to be able to satisfy that need by trying to play packet ping-pong and implement networking in software only on the same devices your apps and content execute on.

As (public) Cloud providers focus on scale/elasticity as their primary disruptive capability in the compute realm, there is an underlying assumption that the networking that powers it is magically and equally scalable, and that you can just replicate everything you do in big iron networking and security hardware and replace it one-for-one with software in the compute stacks.

The problem is that it isn’t and you can’t.

Cloud providers are already hamstrung by how they can offer rich networking and security options in their platforms given architectural decisions they made at launch – usually the pieces of architecture that provide for I/O and networking (such as the hypervisor in IaaS offerings.)  There is very real pain and strain occurring in these networks.  In Cloud IaaS solutions, the very underpinnings of the network will be the differentiation between competitors.  It already is today.

See Where Are the Network Virtual Appliances? Hobbled By the Virtual Network, That’s Where… or Incomplete Thought: The Cloud Software vs. Hardware Value Battle & Why AWS Is Really A Grid… or Big Iron Is Dead…Long Live Big Iron… and I Love the Smell Of Big Iron In the Morning.

With the enormous I/O requirements of virtualized infrastructure and the massive bandwidth requirements that rich applications, video and mobility are starting to place on connectivity, Cloud providers, ISPs, telcos, last-mile operators, and enterprises are pleading for multi-terabit switching fabrics in their datacenters to deal with load *today.*

I was reminded of this today, once again, by the announcement of a 322 Terabit per second switch.  Some people shrugged. Generally these are people who outwardly do not market that they are concerned with moving enormous amounts of data and abstract away much of the connectivity that is masked by what a credit card and web browser provide.  Those that didn’t shrug are those providers who target a different kind of consumer of service.

Abstraction has become a distraction.

Those who need to move huge amounts of data between all those hyper-connected cores running hundreds of thousands of VM’s or processes still know raw networking horsepower as a huge need.

Before you simply think I’m being a shill because I work for a networking vendor (and the one that just announced that big switch referenced above,) please check out the relevant writings on this viewpoint, which I have held for years: we need *both* hardware- and software-based networking to scale efficiently, and the latter simply won’t replace the former.

Virtualization and Cloud exacerbate the network-centric issues we’ve had for years.

I look forward to the pointers to the sustainable, supportable and scalable 322 Tb/s software-based networking solutions I can download and implement today as a virtual appliance.

/Hoff


Amazon Web Services: It’s Not The Size Of the Ship, But Rather The Motion Of the…

October 16th, 2009 3 comments
From Hoff's Preso: Cloudifornication - Indiscriminate Information Intercourse Involving Internet Infrastructure

Carl Brooks (@eekygeeky) gets some fantastic, thought-provoking interviews.  His recent article, in which he interviewed Peter DeSantis, VP of EC2 at Amazon Web Services, titled “Amazon would like to remind you where the hype started,” is another great example.

However, this article left a bad taste in my mouth and ultimately invites more questions than it answers. Frankly, I felt like there was a large amount of hand-waving in DeSantis’ points that glossed over some very important security issues of late.

DeSantis’ remarks implied, per the title of the article, that AWS’ poor handling of and continuing lack of transparency around the issues people like me raise is the customer’s fault, born of hype and overly aggressive, misaligned expectations.

In short, it’s not AWS’ fault they’re so awesome, it’s ours.  However, please don’t remind them they said that when they don’t live up to the hype they help perpetuate.

You can read more about that here: “Transparency: I Do Not Think That Means What You Think That Means…”

I’m going to skip around the article because I do agree with Peter DeSantis on the points he made about the value proposition of AWS which ultimately appear at the end of the article:

“A customer can come into EC2 today and if they have a website that’s designed in a way that’s horizontally scalable, they can run that thing on a single instance; they can use [CloudWatch] to monitor the various resource constraints and the performance of their site overall; they can use that data with our autoscaling service to automatically scale the number of hosts up or down based on demand so they don’t have to run those things 24/7; they can use our Elastic Load Balancer service to scale the traffic coming into their service and only deliver valid requests.”

“All of which can be done self-service, without talking to anybody, without provisioning large amounts of capacity, without committing to large bandwidth contracts, without reserving large amounts of space in a co-lo facility and to me, that’s a tremendously compelling story over what could be done a couple years ago.”

Completely fair.  Excellent way of communicating the AWS value proposition.  I totally agree.  Let’s keep this definition firmly in mind as we go on.

Here’s where the story turns into something like a confessional that implies AWS is sadly a victim of their own success:

DeSantis said that the reason stories like the DDoS on Bitbucket.org (and the non-cloud Sidekick story) get the traction they do is that people have come to expect always-on, easily consumable services.

“People’s expectations have been raised in terms of what they can do with something like EC2. I think people rightfully look at the potential of an environment like this and see the tools, the multi- availability zone, the large inbound transit, the ability to scale out and up and fundamentally assume things should be better. “ he said.

That’s absolutely true. We look at what you offer (and how you offered/described it above) and we set our expectations accordingly.

We do assume that things should be better as that’s how AWS has consistently marketed the service.

You can’t reasonably shape people’s perception of the service based on how it’s “sold” and then, when something negative happens, turn around and suggest that it’s the consumers’ fault for setting their expectational compass with the course you set.

It *is* absolutely fair to suggest that there is no excuse for failing to use common sense or to apply good architectural logic to the deployment of services on AWS, but it’s also disingenuous to expect that much of the target market to whom you are selling understands the caveats here when so much is obfuscated by design.  I understand AWS doesn’t say they protect against every threat, but they also do not say they do not…until something happens where that becomes readily apparent 😉

When everything is great AWS doesn’t go around reminding people that bad things can happen, but when bad things happen it’s because of incorrectly-set expectations?

Here’s where the discussion turns to an interesting example —  the BitBucket DDoS issue.

For instance, DeSantis said it would be trivial to wash out standard DDOS attacks by using clustered server instances in different availability zones.

Okay, but four things come to mind:

  1. Why did it take 15 hours for AWS to recognize the DDoS in the first place? (They didn’t actually “detect” it, the customer did)
  2. Why did the “vulnerability” continue to exist for days afterward?
  3. While using different availability zones makes sense, it’s been suggested that this DDoS attack was internal to EC2, not externally-generated
  4. While it *is* good practice and *does* make sense, “clustered server instances in different avail. zones” costs money

Keep those things in the back of your mind for a moment…

“One of the best defenses against any sort of unanticipated spike is simply having available bandwidth. We have a tremendous amount of inbound transit to each of our regions. We have multiple regions which are geographically distributed and connected to the internet in different ways. As a result of that it doesn’t really take too many instances (in terms of hits) to have a tremendous amount of availability – 2,3,4 instances can really start getting you up to where you can handle 2,3,4,5 Gigabytes per second. Twenty instances is a phenomenal amount of bandwidth transit for a customer.” he said.

So again, here’s where I take issue with this “bandwidth solves all” answer. The solution being proposed by DeSantis here is that a customer should be prepared to launch/scale multiple instances in response to a DoS/DDoS, in effect making it the customers’ problem instead of AWS detecting and squelching it in the first place?

Further, when you think of it, the trickle-down effect of DDoS is potentially good for AWS’ business. If they can absorb massive amounts of traffic, then the more instances you have to scale, the better for them given how they charge.  Also, per my point #3 above, it looks as though the attack was INTERNAL to EC2, so ingress transit bandwidth per region might not have done anything to help here.  It’s unclear to me whether this was a distributed DoS attack at all.

Lori MacVittie wrote a great post on this very thing titled “Putting a Price on Uptime” which basically asks who pays for the results of an attack like this:

A lack of ability in the cloud to distinguish illegitimate from legitimate requests could lead to unanticipated costs in the wake of an attack. How do you put a price on uptime and more importantly, who should pay for it?

This is exactly the point I was raising when I first spoke of Economic Denial Of Sustainability (EDoS) here.  All the things AWS speaks to as solutions cost more money…money which many customers, based upon their expectations of AWS’ service, may be unprepared to spend.  They wouldn’t have much better options (if any) if they were hosting it somewhere else, but that’s hardly the point.

I quote back to something I tweeted earlier “The beauty of cloud and infinite scale is that you get the benefits of infinite FAIL”

The largest DDOS attacks now exceed 40Gbps. DeSantis wouldn’t say what AWS’s bandwidth ceiling was but indicated that a shrewd guesser could look at current bandwidth and hosting costs and what AWS made available, and make a good guess.

The tests done here showed the capability to generate 650 Mbps from a single medium instance that attacked another instance which, per Radim Marek, was using another AWS account in another availability zone.  So if the largest DDoS attacks now exceed 40 Gb/s and an EC2 instance can absorb on the order of 5 Gb/s, I’d need 8 instances to absorb an attack of this scale (unknown if this represents a small or large instance.)  Seems simple, right?
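
As a back-of-the-envelope check on that math (both figures below are assumptions pulled from the discussion above, not AWS specifications):

```python
# Back-of-the-envelope: how many instances to absorb an attack purely
# with bandwidth?  Both figures are assumptions for illustration.
import math

attack_gbps = 40.0        # "the largest DDoS attacks now exceed 40 Gb/s"
per_instance_gbps = 5.0   # assumed absorption capacity per instance

instances_needed = math.ceil(attack_gbps / per_instance_gbps)
print(instances_needed)   # -> 8, and every one of them shows up on the customer's bill
```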

Again, this is about absorbing these attacks with bandwidth, not preventing them or defending against them.  This is not only about passing the buck; it’s about squeezing more of them out of you, the customer.

“ I don’t want to challenge anyone out there, but we are very, very large environment and I think there’s a lot of data out there that will help you make that case.” he said.

Of course you wish to challenge people; that’s the whole point of your arguments, Peter.

How much bandwidth AWS has is only one part of the issue here.  The other is AWS’ ability to respond to such attacks in reasonable timeframes and prevent them in the first place as part of the service.  That’s a huge part of what I expect from a cloud service.

So let’s do what DeSantis says and set our expectations accordingly.

/Hoff

Re-branding Managed Services and SaaS For Security In the Cloud…1995 Never Looked So Shiny

April 28th, 2009 1 comment

I’ve said it before and I’ll say it again: SaaS is not the definition of Cloud Computing.  It’s one element of Cloud Computing.  In the same vein, when you mention “Cloud Security,” it means more than the security features integrated by a SaaS provider to protect their stack.  Oh, it’s an interesting discussion point, but Google and SalesForce.com are not the end-all, be-all of “Cloud Security.”  Unfortunately, they are the face of Cloud Security these days.  Read on as I explain why.

Almost every webinar, presentation and panel I’ve seen in the last six months that promises to discuss “Security Services in the Cloud” usually ends up actually focused on three things:

  1. Managed security services (on-premises or off-premises) of traditional security capabilities/solutions, re-branded as Cloud offerings
  2. Managed services utilizing a SaaS model for one or more security functions, re-branded as Cloud offerings and
  3. A hybrid model involving both managed services of devices/policies and one or more hosted applications (nee SaaS) re-branded as Cloud offerings

Let’s take a look at what these use cases really mean within the context of Cloud Computing.

Managed security services (on-premises or off-premises) of traditional security capabilities/solutions:
Basically, these services are the same old managed services you’ve seen forever with the word “Cloud” stuck somewhere in the description for marketing purposes.  An example is a provider that has NOCs/SOCs and manages security infrastructure on your behalf.  This equipment and software can be located on your premises or externally, and because it’s Internet connected, it’s now magically Cloud based.  These services have nothing to do with protecting Cloud-based services; rather, they suggest that they *use* the Cloud to deliver service.

Managed security services utilizing a SaaS model for one or more security functions:
Any managed services provider who uses a SaaS stack to process information on behalf of their customers via the Internet is re-branding to say they are Cloud based.  The same is true from a security perspective.  Anti-spam, anti-virus, DDoS, URL filtering services, vulnerability management, etc. are all fair game. From Google’s Postini to OpenDNS’ services to Qualys’ vulnerability management, we’re seeing the rampant use of Cloud in these marketing efforts.  Further, vendors who offer some sort of Cloud-based service that has integrated security functionality (as it should) claim to offer “Cloud Security.”  In all of these cases, scaling is traditionally done at the software layer, is generally hidden from the customer, and usually isn’t based on Cloud Computing capabilities at all.

The Hybrid Model
Some providers offer a combination of managed on/off-premise security devices used in conjunction with SaaS offerings to broaden the solution.  There are any number of MSSP’s who have an Internet-based portal (via VPN) and an on- or off-premise set of capabilities involving appliances and SaaS to deliver some combination of service.  This model can extend to fixed or mobile computing services where things like Clean Pipes are provided.

The challenge is trying to understand how, where and why the word “Cloud” ought to be applied to these services.  Now I want to be clear that there’s nothing particularly “wrong” with branding these services as “Cloud” except for the following:

If you look at the definition of Cloud (at least mine,) it involves the following:

  • Abstraction of Infrastructure
  • Resource Democratization
  • Services Oriented
  • Elasticity/Dynamism
  • Utility Model Of Consumption & Allocation

In the case of security solutions, which are generally based on static allocation of resources, static policies, application controls built into an application and, in many cases, dedicated physical appliances (or fixed-utilization shared virtualized instances,) customers can’t log into a control panel and spin up another firewall, IDP or WAF on-demand. In some cases, they don’t even know these resources exist.  Some might argue that is a good thing.  I’m not debating the efficacy of these solutions, but rather how they are put forward.

Also important is that, for the same reasons, customers don’t get to pay for only the resources they use.

So whilst many services/solutions may virtualize the network stack or even policy, the abstraction of infrastructure from resources and resource democratization get a little fuzzy definitionally.  That’s a minor point, really.

What’s really interesting are the two items I highlighted in boldface: Elasticity and the utility model of consumption and allocation.  Traditional security capabilities such as firewalls, IDP, A/V, etc. are generally implemented on physical appliances/networking equipment which, from a provisioning and orchestration perspective, don’t really subscribe to either the notion of self-administered elasticity or the utility model of consumption/allocation whereby the customer is charged only for what they use.
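
To make that distinction concrete, here’s a toy comparison of flat appliance/MSSP pricing against a metered, utility-style security service; every number in it is invented purely for illustration:

```python
# Toy comparison of a fixed-price appliance/MSSP model vs. a metered,
# utility-style security service.  All prices are invented for
# illustration only.
def appliance_monthly_cost(flat_fee=4000.0):
    return flat_fee                      # paid whether you use it or not

def utility_monthly_cost(gb_inspected, price_per_gb=0.05,
                         hours_active=720, price_per_hour=0.10):
    # Charged only for what is actually consumed: traffic inspected and
    # hours the (virtual) security instance was actually running.
    return gb_inspected * price_per_gb + hours_active * price_per_hour

print(appliance_monthly_cost())                                 # 4000.0
print(utility_monthly_cost(gb_inspected=10_000))                # 572.0 (busy month)
print(utility_monthly_cost(gb_inspected=500, hours_active=40))  # 29.0  (quiet month)
```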

To me, if your Cloud Security solution does not provide for all of these definitional elements of Cloud, it’s intellectually dishonest (the definition of marketing? 😉) to call it “Cloud Security.”

This is important because “security” is being thought of from the perspective of SaaS or IaaS, and each of these models has divergent provisioning, orchestration and management methods that don’t really jibe with multi-tenant Cloud models for security.*  As it turns out, the most visible and vocal providers of application services are really the ones peddling “secure cloud” to serve their own messaging needs, and so in SaaS stacks the bundled security integrated into the application is usually a no-cost item.  In other models, it *is* the service that one pays for.

I’ve talked about this quite a bit in my Frogs presentation, in which I demonstrate how the lower down the stack the provider stops (from SaaS down to IaaS,) the more security a customer (or their provider) is generally still responsible for.  Much of this is due to the lack of scale in security technology today and static policies with a network disconnected from context and state and unaware of the dynamism of the layers above it:

SPI Stack Security

Without invoking the grumpy-magic-anachronism-damage +4 spell, I am compelled to mention the following.

Back in 1995 I architected one of the world’s first global managed security services using a combination of multi-layered VPNs from across the globe to a set of four regional Internet gateways through which all Internet traffic was tunneled. We manually scaled each set of dedicated clustered firewalls for each customer based on load.  We didn’t even have centralized management for all these firewalls at the time (Provider-1 and VSX weren’t born yet — we helped in their birth) so everything was pretty much a manual process.  This was better than managing CPE devices and allowed us to add features/functions centrally…you know, like the “Cloud.” 😉

Not much has changed with managed security services and their models today.  While they have better centralized management, virtualized policy and even container-based virtual security functions, we’re still stuck with mostly manual provisioning and a complete disconnect of the security policies from the network and virtualization layers.  Scale is not dynamic.  Neither is pricing.

At the end of the day, from a managed security perspective, be wary of claims of “Cloud Security” and what it means to you.

/Hoff

*This is one of the compelling elements of converged/unified compute fabrics; the ability to tie all the elements together and focus on consistent policy enforcement up and down the stack but for managed security providers, this will take years to make its way into their networks as the revenue models and cost structures for most MSSP’s are simply not aligned with those of virtualization platform providers.  Perhaps we’ll see a bigger uptake of OSS virtualization platforms in order to deliver these converged services.

A Couple Of Follow-Ups On The EDoS (Economic Denial Of Sustainability) Concept…

January 23rd, 2009 25 comments

I wrote about the notion of EDoS (Economic Denial Of Sustainability) back in November.  You can find the original blog post here.

The basic premise of the concept was the following:

I had a thought about how the utility and agility of cloud computing models such as Amazon AWS (EC2/S3) and the pricing models that go along with them can actually pose a very nasty risk to those who use the cloud to provide service.

That thought got me noodling about how the pay-as-you-go model could be used for nefarious means.

Specifically, this usage-based model potentially enables $evil_person who knows that a service is cloud-based to manipulate service usage billing in orders of magnitude that could be disguised easily as legitimate use of the service but drive costs to unmanageable levels.

If you take Amazon's AWS usage-based pricing model (check out the cost calculator here,) one might envision that instead of worrying about a lack of resources, the elasticity of the cloud could actually provide a surplus of compute, network and storage utility that could be just as bad as a deficit.

Instead of worrying about Distributed Denial of Service (DDoS) attacks from botnets and the like, imagine having to worry about delicately balancing forecasted need with capabilities like Cloudbursting to deal with a botnet designed to make seemingly legitimate requests for service to generate an economic denial of sustainability (EDoS) — where the dynamism of the infrastructure allows scaling of service beyond the economic means of the vendor to pay their cloud-based service bills.

At any rate, here are a couple of interesting related items:

  1. Wei Yan, a threat researcher for Trend Micro, recently submitted an IEEE journal paper titled "Anti-Virus In-the-Cloud Service: Are We Ready for the Security Evolution?" in which he discusses an interesting concept for cloud-based AV and also cites/references my EDoS concept.  Thanks, Wei!
     
  2. There is a tangential story making the rounds recently about how researcher Brett O'Connor has managed to harness Amazon's EC2 to harvest/host/seed BitTorrent files.

    The relevant quote from the story that relates to EDoS is really about the visibility (or lack thereof) as to how cloud networks in their abstraction are being used and how the costs associated with that use might impact the cloud providers themselves.  Remember, the providers have to pay for the infrastructure even if the "consumers" do not:

    "This means, says Hobson, that hackers and other interested parties can
    simply use a prepaid (and anonymous) debit card to pay the $75 a month
    fee to Amazon and harvest BitTorrent applications at high speed with
    little or no chance of detection…

    It's not clear that O'Connor's clever work-out represents anything new
    in principle, but it does raise the issue of how cloud computing
    providers plan to monitor and manage what their services are being used
    for."

It's likely we'll see additional topics that relate to EDoS soon.

UPDATE: Let me try and give a clear example that differentiates EDoS from DDoS in a cloud context, although ultimately the two concepts are related:

DDoS (and DoS for that matter) attacks are blunt force trauma. The goal, regardless of motive, is to overwhelm infrastructure and remove from service a networked target by employing a distributed number of $evil_doers.  Example: a botnet is activated to swarm/overwhelm an Internet connected website using an asynchronous attack which makes the site unavailable due to an exhaustion of resources (compute, network or storage.)

EDoS attacks are death by 1000 cuts.  EDoS can also utilize distributed $evil_doers as well as single entities, but works by making legitimate web requests at volumes that may appear to be "normal" but are done so to drive compute, network and storage utility billings in a cloud model abnormally high.  Example: a botnet is activated to visit a website whose income results from ecommerce purchases.  The requests are all legitimate but the purchases are never made.  The vendor has to pay the cloud provider for increased elastic use of resources where revenue was never recognized to offset them.
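
To put rough numbers on that example (all rates and sizes below are invented; they stand in for whatever a provider's metered billing actually charges):

```python
# Crude EDoS cost model: seemingly legitimate requests drive metered
# charges.  All rates and sizes are invented for illustration.
def monthly_bill(requests_per_day, kb_per_response=200,
                 price_per_gb_out=0.15, price_per_million_requests=0.40):
    gb_out = requests_per_day * 30 * kb_per_response / 1_000_000
    req_charge = requests_per_day * 30 / 1_000_000 * price_per_million_requests
    return gb_out * price_per_gb_out + req_charge

baseline = monthly_bill(requests_per_day=100_000)      # organic traffic
attacked = monthly_bill(requests_per_day=5_000_000)    # bot-inflated "legitimate" traffic
print(round(baseline, 2), round(attacked, 2))          # ~91.2 vs ~4560.0; the delta is the EDoS
```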

We have anti-DDoS capabilities today with tools that are quite mature.  DDoS is generally easy to spot given huge increases in traffic.  EDoS attacks are not necessarily easy to detect, because the instrumentation and business logic is not present in most applications or stacks of applications and infrastructure to provide the correlation between "requests" and "successful transactions."  In the example above, increased requests may look like normal activity.
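
A minimal sketch of the kind of business-logic instrumentation I'm describing: flag windows where request volume surges while the conversion rate collapses. The thresholds and sample data are arbitrary placeholders:

```python
# Minimal EDoS heuristic: compare request volume against completed
# transactions per time window and flag windows where the conversion
# rate collapses while traffic surges.  Thresholds are arbitrary.
def edos_suspect(windows, min_requests=10_000, max_conversion=0.001):
    """windows: list of (label, request_count, completed_transactions)."""
    flagged = []
    for label, requests, transactions in windows:
        conversion = transactions / requests if requests else 0.0
        if requests >= min_requests and conversion <= max_conversion:
            flagged.append((label, requests, round(conversion, 5)))
    return flagged

traffic = [
    ("09:00", 4_000, 60),        # normal: ~1.5% of requests convert
    ("10:00", 250_000, 55),      # surge with flat sales -> suspicious
    ("11:00", 5_000, 70),
]
print(edos_suspect(traffic))     # -> [('10:00', 250000, 0.00022)]
```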

Given the attractiveness of startups and SME/SMB's to the cloud for cost and agility, this presents a problem.  The SME/SMB customers do not generally invest in this sort of integration, the cloud computing platform providers generally do not have the intelligence and visibility into these applications (which they do not own), and typical DDoS tools don't, either.

So DDoS and EDoS ultimately can end with the same outcome: the target withers and ceases to be able to offer service, but I think that EDoS is something significant that should be discussed and investigated.

/Hoff

What To Do When Your “Core” Infrastructure Services Aren’t In Your “Core?”

January 21st, 2009 11 comments
Okay.  I am teh lam3r.  I'd be intellectually dishonest if I didn't post this, and it's likely I'll revise it once I get to think about it more, but I've got to get it down.  Thanks to an innocent tweet from @botchagalupe I had an aneurysm epiphany.  Sort of 😉

A little light went on in my head this morning regarding how the cloud, or more specifically layers of clouds and the functions they provide (a-la SOA,) dramatically impact the changing landscape of what we consider "core infrastructure services," our choices on architecture, service provisioning, and how and from whence they are provided.  

Specifically, the synapse fired on the connection between Infrastructure 2.0 as is usually talked about from the perspective of the evolution from the enterprise inside to out versus the deployment of services constructed from scratch to play in the cloud.

You've no doubt seen discussions from Greg Ness (InfoBlox) and Lori Mac Vittie (f5) regarding their interpretation of Infrastructure 2.0 and the notion that by decoupling infrastructure services from their physical affinity we can actually "…enable greater levels of integration between the disparate layers of infrastructure: network, application, the endpoint, and IP address management, necessary to achieve interconnectedness."

Totally agree.  Been there, done that, bought the T-Shirt, but something wasn't clicking as it relates to what this means relative to cloud.

I was slurping down some java this morning and three things popped into my head as I was flipping between Twitter and Google Reader wondering about how I might consider launching a cloud-based service architecture and what impact it would have on my choices for infrastructure and providers.

Here are the three things that I started to think about in regards to what "infrastructure 2.0" might mean to me in this process, beyond the normal criteria related to management, security, scalability, etc…
  1. I always looked at these discussions of Infrastructure 2.0 as ideation/marketing by vendors on how to take products that used to function in the "Infrastructure 1.0" dominion, add a service control plane/channel and adapt them for the inside-out version of the new world order that is cloud. This is the same sort of thing we've dealt with for decades and was highlighted when one day we all discovered the Internet and had to connect to it — although in that case we had standards!
  2. Clouds are often discussed in either microcosmic vacuum or lofty, fluffy immensity and it makes it hard to see the stratosphere for the cirrocumulus.  Our "non-cloud" internal enterprises today are conglomerates of technology integration with pockets of core services which provide the underpinnings for much of what keeps the machinery running.  Cloud computing is similar in approach, but in this regard, it brings home again the point that there is no such thing as "THE Cloud" but rather that the overarching integration challenge lies in the notion of overlays or mash-ups of multiple clouds, their functions, and their associated platforms and API's. 
  3. Further, and as to my last blog post on private clouds and location independence, I really do believe that the notion of internal versus external clouds is moot, but that the definitional nuance of public versus private clouds — and their requisite control requirements — are quite important.  Where, why, how and by whom services are provided becomes challenging because the distinction between inside and out can be really, really fuzzy, even more so if you're entirely cloud based in the first place.
For some reason, my thinking never really coalesced on what relevance these three points have to the delivery of a service (and thus layers of applications) in a purely cloud-based architecture built from scratch, without the encumbrance of legacy infrastructure solutions.

I found this awesome blog post from Mike Brittain via a tweet from @botchagalupe titled "How we built a web hosting infrastructure on EC2" and even though the article is a fascinating read, the single diagram in the post hit me like a hammer in the head…and I don't know why it did, because it's not THAT profound, but it jiggled something loose that is probably obvious to everyone else already:

[Diagram: EC2 hosting architecture, from Mike Brittain's post]
Do you see the first three layers?  Besides the "Internet" as the transport, you'll see two of the most important service delivery functions staring back at you: Akamai's "Site Accelerator Proxy" CDN/Caching/Optimization offering and Neustar's "UltraDNS" distributed, topologically intelligent DNS services.

Both of these critical services (one might say "core infrastructure 2.0" services) are, themselves, cloud-based.  Of course, the entire EC2/S3 environment which hosts the web services is cloud-based, too.
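
To make the wiring concrete: plugging in those cloud-based core services is mostly a matter of DNS delegation and aliasing rather than racking boxes. A hypothetical sketch (every name below is invented) of how the layers chain together:

```python
# Hypothetical DNS records (names invented) showing how the "core"
# services in the diagram get wired in: authoritative DNS lives with a
# managed DNS provider, the public hostname is a CNAME to a CDN/proxy
# edge, and the CDN's origin points at the cloud-hosted front end.
zone = {
    # Delegation: the registrar points the domain at the managed DNS service.
    ("example.com.", "NS"): ["ns1.dns-provider.example.", "ns2.dns-provider.example."],
    # The public hostname resolves to the CDN's edge, not to your servers.
    ("www.example.com.", "CNAME"): ["edge.cdn-provider.example."],
    # The CDN pulls content from the load-balanced instances in the IaaS cloud.
    ("origin.example.com.", "CNAME"): ["lb-1234.cloud-provider.example."],
}

for (name, rtype), values in zone.items():
    print(f"{name:24s} {rtype:6s} {', '.join(values)}")
```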

The reason the light bulb went on for me is that I found that I was still caught in the old school infrastructure-as-a-box line of thought when it came to how I might provide the CDN/Caching and distributed DNS capabilities of my imaginary service.

It's likely I would have dropped right into the weeds and started thinking about which geographic load balancers (boxes) and/or proxies I might deploy somewhere and how (or if) they might integrate with the cloud "hosting/platform provider" to give me the resiliency and dynamic capabilities I wanted, let alone firewalls, IDP, etc.

Do I pick a provider that offers a specific hardware-based load-balancing platform as part of the infrastructure?  Do I pick one that can accommodate the integration of software-based virtual appliances?  Should I care?  With the cloud I'm not supposed to, but I find that I still, for many reasons — good and bad — do.

I never really thought about simply using a cloud-based service as a component in a mash-up of services that already does these things in ways that would be much cheaper, simpler, resilient and scalable than I could construct with "infrastructure 1.0" thinking.   Heck, I could pick 2 or 3 of them, perhaps. 

That being said, I've used outsourced "cloud-based" email filtering, vulnerability management, intrusion detection & prevention services, etc., but there are still some functions that for some reason appear to be sacrosanct in the recesses of my mind.

I think I always just assumed that the stacking of outsourced (commoditized) services across multiple providers would be too complex but in reality, it's not very different from my internal enterprise that has taken decades to mature many of these functions (and consolidate them.)

Despite the relative immaturity of the cloud, it's instantly benefited from this evolution. Now, we're not quite all the way there yet.  We still are lacking standards and that service control plane shared amongst service layers doesn't really exist.

I think it's a huge step to recognize that it's time to get over the bias of applying so called "infrastructure 1.0" requirements to the rules of engagement in the cloud by recognizing that many of these capabilities don't exist in the enterprise, either.

Now, it's highly likely that the two players above (Neustar and Akamai) may very well use the same boxes that *I* might have chosen anyway, but it's irrelevant.  It's all about the service and engineering enough resiliency into the design (and choices of providers) such that I mitigate the risk of perhaps not having that "best of breed" name plate on a fancy set of equipment somewhere.

I can't believe the trap I fell into in terms of my first knee-jerk reaction regarding architecture, especially since I've spent so much of the last 5 years helping architect and implement "cloud" or "cloud-like" security services for outsourced capabilities.

So anyway, you're probably sitting here saying "hey, idiot, this is rather obvious and is the entire underlying premise of this cloud thing you supposedly understand inside and out."  That comment would be well deserved, but I had to be honest and tell you that it never really clicked until I saw this fantastic example from Mike.

Huh.

/Hoff

Cloud Computing: Invented By Criminals, Secured By ???

November 3rd, 2008 10 comments

I was reading Reuven Cohen's "Elastic Vapor: Life In the Cloud Blog" yesterday and he wrote an interesting piece on what is being coined "Fraud as a Service."  Basically, Reuven describes the rise of botnets as the origin of "cloud" based service utilities as chronicled from Uri Rivner's talk at RSA Europe:

I hate to tell you this, it wasn't Amazon, IBM or even Sun who invented cloud computing. It was criminal technologists, mostly from eastern Europe, who did. Looking back to the late 90's and the use of decentralized "warez" darknets. These original private "clouds" are the first true cloud computing infrastructures seen in the wild. Even way back then the criminal syndicates had developed "service oriented architectures" and federated id systems including advanced encryption. It has taken more than 10 years before we actually started to see this type of sophisticated decentralization to start being adopted by traditional enterprises.

The one sentence that really clicked for me was the following:

In this new world order, cloud computing will not just be a requirement for scaling your data center but also protecting it.

Amen. 

One of the obvious benefits of cloud computing is the distribution of applications, services and information.  The natural by-product of this is additional resiliency from operational downtime caused by error or malicious activity.

This benefit is also a forcing function; it will require new security methodologies and technology to allow the security (policies) to travel with the applications and data, as well as to enforce them.

I wrote about this concept back in 2007 as part of my predictions for 2008 and highlighted it again in a post titled: "Thinning the Herd and Chlorinating the Malware Gene Pool" based on some posts by Andy Jaquith:

Grid and distributed utility computing models will start to creep into security

A really interesting by-product of the "cloud compute" model is that as data, storage, networking, processing, etc. get distributed, so shall security.  In the grid model, one doesn't care where the actions take place so long as service levels are met and the experiential and business requirements are delivered.  Security should be thought of in exactly the same way.

The notion that you can point to a physical box and say it performs function 'X' is so last Tuesday.  Virtualization already tells us this.  So, imagine if your security processing isn't performed by a monolithic appliance but instead is contributed to in a self-organizing fashion wherein the entire ecosystem (network, hosts, platforms, etc.) all contribute in the identification of threats and vulnerabilities as well as function to contain, quarantine and remediate policy exceptions.

Sort of sounds like that "self-defending network" schpiel, but not focused on the network and with common telemetry and distributed processing of the problem.  Check out Red Lambda's cGrid technology for an interesting view of this model.

This basically means that we should distribute the sampling, detection and prevention functions across the entire networked ecosystem, not just to dedicated security appliances; each of the end nodes should communicate using a standard signaling and telemetry protocol so that common threat, vulnerability and effective disposition can be communicated up and downstream to one another and one or more management facilities.
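
The quoted passage hinges on end nodes sharing telemetry in a common format; here's a minimal sketch of what such an observation message and its publication might look like. The field names and severity scale are invented for illustration, not drawn from any actual standard:

```python
# Sketch of a common telemetry message that any node (host, switch,
# hypervisor) could emit for upstream correlation.  Field names and
# the severity scale are invented for illustration.
import json
import time
import uuid

def make_observation(node_id, kind, detail, severity):
    return {
        "id": str(uuid.uuid4()),
        "ts": time.time(),
        "node": node_id,       # which element of the ecosystem saw it
        "kind": kind,          # e.g. "scan", "malware", "policy-violation"
        "detail": detail,
        "severity": severity,  # 0..10, normalized across node types
    }

def publish(observation):
    # Stand-in for a real signaling bus (message queue, management facility, etc.)
    print(json.dumps(observation, indent=2))

publish(make_observation("edge-switch-07", "scan",
                         {"src": "203.0.113.9", "ports_probed": 1400},
                         severity=4))
```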

It will be interesting to watch companies, established and emerging, grapple with this new world.

/Hoff

Google’s Chrome: We Got {Secure?} Browsing Bling, Yo.

September 1st, 2008 No comments

From the Department of "Oops, I did it again…"

Back in June/July of 2007, I went on a little rant across several blog posts about how Google was directly entering the "security" business and would eventually begin to offer more than just "secure" search functions, but instead the functional equivalent of "clean pipes" or what has now become popularized as safe "cloud computing."

I called it S^2aaS (Secure Software as a Service) 😉  OK, so I’m not in marketing.

Besides the numerous initiatives by Google focused on adding more "security" to their primary business (search), the acquisition of GreenBorder really piqued my interest.   Then came the Postini buyout.

To be honest, I just thought this was common sense and fit what I understood was the longer term business model of Google.  To me it was writing on the wall.  To others, it was just me rambling.

So in my post from last year titled "Tell Me Again How Google Isn’t Entering the Security Market?  GooglePOPs will Bring Clean Pipes…" I suggested the following:

In fact, I reckon that in the long term we’ll see the evolution of the Google Toolbar morph into a much more intelligent and rich client-side security application proxy service whereby Google actually utilizes client-side security of the Toolbar paired with the GreenBorder browsing environment and tunnel/proxy all outgoing requests to GooglePOPs.

Google will, in fact, become a monster ASP.  Note that I said ASP and not ISP.  ISP is a commoditized function.  Serving applications and content as close to the user as possible is fantastic.  So pair all the client side goodness with security functions AND add GoogleApps and you’ve got what amounts to a thin client version of the Internet.

Now we see what Google’s been up to with their announcement of Chrome (great writeup here,) their foray into the browser market with an open source model and heaps of claimed security and privacy functions built in.  But it’s the bigger picture that’s really telling.

Hullo!  This isn’t about the browser market!  It’s about the transition of how we’re going to experience accessing our information; from where, what and how.  Chrome is simply an illustration of a means to an end.

Take what I said above and pair it with what they say below…I don’t think we’re that far off, folks…

From Google’s Blog explaining Chrome:

…we began seriously thinking about what kind of browser could exist if we started from scratch and built on the best elements out there. We realized that the web had evolved from mainly simple text pages to rich, interactive applications and that we needed to completely rethink the browser. What we really needed was not just a browser, but also a modern platform for web pages and applications, and that’s what we set out to build.

Under the hood, we were able to build the foundation of a browser that runs today’s complex web applications much better. By keeping each tab in an isolated "sandbox", we were able to prevent one tab from crashing another and provide improved protection from rogue sites. We improved speed and responsiveness across the board. We also built a more powerful JavaScript engine, V8, to power the next generation of web applications that aren’t even possible in today’s browsers.

Here come the GooglePipes being fed by the GooglePOPs, being… 😉

/Hoff


GooglePOPs – Cloud Computing and Clean Pipes: Told Ya So…

May 8th, 2008 9 comments

In July of last year, I prognosticated that Google, with its various acquisitions, was entering the security space with the intent not just to include security as a browser feature for search and the odd GoogleApp, but to make it a revenue-generating service delivery differentiator using SaaS via applications and clean-pipes delivery transit in the cloud for Enterprises.

My position even got picked up by thestreet.com.  By now it probably sounds like old news, but…

Specifically, in my post titled "Tell Me Again How Google Isn’t Entering the Security Market? GooglePOPs will Bring Clean Pipes…" I argued (and was ultimately argued with) that Google’s $625M purchase of Postini was just the beginning:

This morning’s news that Google is acquiring Postini for $625 Million dollars doesn’t surprise me at all and I believe it proves the point.

In fact, I reckon that in the long term we’ll see the evolution of the Google Toolbar morph into a much more intelligent and rich client-side security application proxy service whereby Google actually utilizes client-side security of the Toolbar paired with the GreenBorder browsing environment and tunnel/proxy all outgoing requests to GooglePOPs.

What’s a GooglePOP?

These GooglePOPs (Google Points of Presence) will house large search and caching repositories that will — in conjunction with services such as those from Postini — provide a "clean pipes" service to the consumer.  Don’t forget utility services that recent acquisitions such as GrandCentral and FeedBurner provide…it’s too bad that eBay snatched up Skype…

Google will, in fact, become a monster ASP.  Note that I said ASP and not ISP.  ISP is a commoditized function.  Serving applications and content as close to the user as possible is fantastic.  So pair all the client side goodness with security functions AND add GoogleApps and you’ve got what amounts to a thin client version of the Internet.

Here’s where we are almost a year later.  From the Ars Technica post titled "Google turns Postini into Google Web Security for Enterprise:"

The company’s latest endeavor, Google Web Security for Enterprise, is now available, and promises to provide a consistent level of system security whether an end-user is surfing from the office or working at home halfway across town.

The new service is branded under Google’s "Powered by Postini" product line and, according to the company, "provides real-time malware protection and URL filtering with policy enforcement and reporting. An additional feature extends the same protections to users working remotely on laptops in hotels, cafes, and even guest networks." The service is presumably activated by signing in directly to a Google service, as Google explicitly states that workers do not need access to a corporate network.

The race for cloud and secure utility computing continues, with a focus on encapsulated browsing and application delivery environments, regardless of transport/ISP, starting to take shape.

Just think about the traditional model of our enterprise and how we access our resources today turned inside out as a natural progression of re-perimeterization.  It starts to play out on the other end of the information centricity spectrum.

What with the many new companies entering this space and the likes of Google, Microsoft and IBM banging the drum, it’s going to be one interesting ride.

/Hoff

On Bandwidth and Botnets…

October 3rd, 2007 No comments

An interesting story in this morning’s New York Times titled "Unlike U.S., Japanese Push Fiber Over Profit" talked about Japan’s long term investment efforts to build the world’s first all-fiber national network and how Japan leads the world’s other industrialized nations, including the U.S., in low-cost, high speed services centered around Internet access.  Check out this illustration:

[Illustration: 2007 broadband speed and price comparison by country]
The article states that approximately 8 million Japanese subscribe to fiber-enabled service offerings that provide performance roughly 30 times that of a corresponding xDSL offering.

For about $55 a month, subscribers have access to up to 100Mb/s download capacity.

France Telecom is rumored to be rolling out services that offer 2.5Gb/s downloads!

I have Verizon FIOS which is delivered via fiber to my home and subscribe at a 20Mb/s download tier.

What I find very interesting about the emergence of this sort of service is that if you look at a typical consumer’s machine, it’s not well hardened, not monitored and usually easily compromised.  At this rate, the bandwidth of some of these compromise-ready consumers’ home connections is eclipsing that of mid-tier ISP’s!

This is even more true, based on anecdotal evidence, of online gamers who are typically also P2P filesharing participants and early adopters of new shiny kit — it’s a Bot Herder’s dream come true.

At xDSL speeds of a few Mb/s, a couple of infected machines as participants in a targeted synchronized fanning DDoS attack can easily take down a corporate network connected to the Internet via a DS3 (45Mb/s.)  Imagine what a botnet of a couple of 60Mb/s connected endpoints could do — how about a couple of thousand?  Hundreds of thousands?
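
The arithmetic is what makes this scary. A rough sketch, using per-bot uplink figures loosely drawn from the numbers above and an assumed 50% usable-uplink fudge factor:

```python
# Rough aggregate-bandwidth math for a botnet vs. a target's uplink.
# Per-bot uplink figures are taken loosely from the numbers above;
# the 50% "usable" factor is an assumption.
import math

def bots_needed(target_mbps, per_bot_mbps, efficiency=0.5):
    """How many compromised hosts to saturate a target uplink."""
    usable = per_bot_mbps * efficiency     # fraction of each bot's uplink usable in practice
    return math.ceil(target_mbps / usable)

ds3 = 45.0                                    # Mb/s, the corporate uplink above
print(bots_needed(ds3, per_bot_mbps=3.0))     # xDSL-class bots  -> 30
print(bots_needed(ds3, per_bot_mbps=60.0))    # fiber-class bots -> 2
print(f"{2000 * 60.0 * 0.5 / 1000:.0f} Gb/s") # 2,000 fiber bots -> ~60 Gb/s of attack traffic
```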

This is great news for some as this sort of capacity will be economically beneficial to cyber-criminals as it reduces the exposure risk of Botnet Herders; they don’t have to infect nearly the same amount of machines to deliver exponentially higher attack yields given the size of the pipes.  Scary.

I’d suggest that the lovely reverse DNS entries that service providers use to annotate logical hop connectivity will be even more freely used to target these high-speed users; you know, like (fictional):

bigass20MbpsPipe.vzFIOS-05.bstnma.verizon-gni.net (7x.y4.9z.1)
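
A trivial sketch of the kind of PTR-record sifting that sort of annotation invites, using the fictional hostname above (the pattern list is mine, made up for illustration):

```python
# Trivial illustration of sifting reverse-DNS names for hints that a
# host sits on a fat consumer fiber pipe.  The hostname is the
# fictional example above; the pattern list is invented.
import re

FIBER_HINTS = re.compile(r"(fios|ftth|fib(er|re)|\d+mbps)", re.IGNORECASE)

def looks_like_fat_pipe(ptr_name):
    return bool(FIBER_HINTS.search(ptr_name))

print(looks_like_fat_pipe("bigass20MbpsPipe.vzFIOS-05.bstnma.verizon-gni.net"))  # True
print(looks_like_fat_pipe("dyn-pool-12.dsl.example.net"))                        # False
```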

As an interesting anecdote from the service provider perspective, the need for "Clean Pipes" becomes even more important, and providers will be even more financially motivated to prevent abuse of their backbone long-hauls by infected machines.

This, in turn, will drive the need for much more intelligent, higher-throughput infrastructure and security service layers to mitigate the threat, which is forcing folks to take a very hard look at how they architect their networks and apply security.

/Hoff