Archive for the ‘De-Perimeterization’ Category

Overlays: Wasting Away Again In Abstractionville…

May 5th, 2012

I’m about to get in a metal tube and spend 14 hours in the Clouds.  I figured I’d get something off my chest while I sit outside in the sun listening to some Jimmy Buffett.

[Network] overlays.  They bug me.  Let me tell you why.

The Enterprise, when considering “moving to the Cloud” generally takes one of two approaches depending upon culture, leadership, business goals, maturity and sophistication:

  1. Go whole-hog with an all-in Cloud strategy. 
    Put an expiration date on maintaining/investing in legacy apps/infrastructure and instead build an organizational structure, technology approach, culture, and operational model designed around building applications that are optimized for “cloud” — and that means SaaS, PaaS, and IaaS across public, private and hybrid models, with a focus on how application delivery and information (including protecting it) are handled very differently than in legacy deployments, or…
  2. Adopt a hedging strategy to get to Cloud…someday.
    This usually means opportunistically picking low-risk, low-impact, low-hanging fruit that can be tip-toed toward, scraping together the existing “rogue” projects already underway, sprinkling in some BYOD, pointing to a virtualized datacenter and calling a 3-day provisioning window with change control “on-demand” and “Cloud.”  Oh, and then deploying gateways, VPNs, data encryption and network overlays as an attempt to plug holes by paving over them, and calling that “Cloud,” too.

See that last bit?

This is where so-called “software defined networking (SDN),” the myriad of models that utilize “virtualization” and all sorts of new protocols and service delivery mechanisms are being conflated into the “will it blend” menagerie called “Cloud.”  It’s an “eyes wide shut” approach.

Now, before you think I’m being dismissive of “virtualization” or SDN, I’m not.  I believe. Wholesale. But within the context of option #2 above, it’s largely a waste of time, money, and effort.  It’s putting lipstick on a pig.

You either chirp or get off the twig.

Picking door #2 is where the Enterprise looks at shiny new things based on an article in the WSJ or Wired, or a peer-group golf outing, and says “I bet if we added yet another layer of abstraction atop the rapidly abstracting piles of shite we already have, we would be more agile, nimble, efficient and secure.”  We would be “cloud” enabled.

[To a legacy-minded Enterprise,] Cloud is the revenge of VPN and PKI…

The problem is that just like the folks in Maine will advise: “You can’t get there from here.”  I mean, you can, but the notion that you’ll actually pull it off by stacking turtles, applying band-aids and squishing the tyranny of VLANs by surrounding them in layer 3 network overlays and calling this the next greatest thing since sliced bread is, well, bollocks.

Look, I think SDN, protocols like Openflow and VXLAN/NVGRE, etc. are swell.  I think the separation of control and data planes and the notion that I can programmatically operate my network is awesome.  I think companies like Nicira and Bigswitch are doing really interesting things.  I think that Cloudstack, Openstack and VMWare present real opportunity to make things “better.”
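An overlay is, at its core, just encapsulation. As a purely illustrative sketch (mine, not any vendor’s implementation), here is the 8-byte VXLAN header being prepended to an inner Ethernet frame before it rides over the existing layer 3 network:

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned destination port for VXLAN

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the 8-byte VXLAN header to an inner Ethernet frame.

    The result would normally be wrapped in an outer UDP/IP/Ethernet
    header and sent to the remote VTEP; only the VXLAN shim is shown.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit value")
    flags = 0x08 << 24       # 'I' flag set: the VNI field is valid
    vni_field = vni << 8     # 24-bit VNI, low 8 bits reserved
    return struct.pack("!II", flags, vni_field) + inner_frame

# Wrap a dummy 60-byte inner frame into overlay segment 5000
payload = vxlan_encapsulate(b"\x00" * 60, vni=5000)
print(len(payload), "bytes, carried over UDP port", VXLAN_UDP_PORT)
```

Useful plumbing, but notice what it doesn’t do: nothing about the application, the data or the operational model changes; the same traffic is just wearing a new jacket.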

Hey, look, we’re just like Google and Amazon Web Services Now!

But to an Enterprise without a real plan as to what “Cloud” really means to their business, these are largely overlays within the context of #2.  Within the context of #1, they’re simply mom and apple pie and are, for the most part, invisible.  That’s not where the focus actually is.

That said, for a transitional Enterprise, these things give them pause, but should be looked upon as breadcrumbs that indicate a journey, not the destination.  They’re a crutch and another band-aid to solve legacy problems.  They’re really a means to an end.

These “innovations” *are* a step in the right direction.  They will let us do great things. They will let a whole new generation of operational models and a revitalized ecosystem flourish AND it will encourage folks to think differently.  But about what?  And to solve what problem(s)?

If you simply expect to layer them on your legacy infrastructure, operational models and people and call it “Cloud,” you’re being disingenuous.

Ultimately, to abuse an analogy, network overlays are a layover on the itinerary of our journey to the Cloud, but not where we should ultimately land. I see too many companies focusing on the transition…and by the time they get there, the target will have moved.  Again.  Just like it always does.

They’re hot now because they reflect something we should have done a long time ago, but like hypervisors, one day [soon] network overlays will become just a feature and not a focus.

/Hoff

 


CloudPassage & Why Guest-Based Footprints Matter Even More For Cloud Security

February 1st, 2011

Every day for the last week or so after their launch, I’ve been asked left and right about whether I’d spoken to CloudPassage and what my opinion was of their offering.  In full disclosure, I spoke with them when they were in stealth almost a year ago and offered some guidance, and again the day before their launch last week.

Disappointing as it may be to some, this post isn’t really about my opinion of CloudPassage directly; it is, however, the reaffirmation of the deployment & delivery models for the security solution that CloudPassage has employed.  I’ll let you connect the dots…

Specifically, in public IaaS clouds where homogeneity of packaging, standardization of images and uniformity of configuration enables scale, security has lagged.  This is mostly due to the fact that for a variety of reasons, security itself does not scale (well.)

In an environment where the underlying platform cannot be counted upon to provide “hooks” to integrate security capabilities in at the “network” level, all that’s left is what lies inside the VM packaging:

  1. Harden and protect the operating system [and thus the stuff atop it,]
  2. Write secure applications and
  3. Enforce strict, policy-driven information-centric security.

My last presentation, “Cloudinomicon: Idempotent Infrastructure, Building Survivable Systems and Bringing Sexy Back to Information Centricity” addressed these very points. [This one is a version I delivered at the University of Michigan Security Summit]

If we focus on the first item in that list, you’ll notice that generally to effect policy in the guest, you must have a footprint on said guest — however thin — to provide the hooks needed to either directly effect policy or redirect back to some engine that offloads this functionality.  There’s a bit of marketing fluff associated with using the word “agentless” in many applications of this methodology today, but at some point the endpoint needs some sort of “agent” to play.*
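To make that less abstract, here is a minimal sketch of what such a thin guest footprint might do: pull the firewall policy assigned to this host from a (hypothetical) cloud-hosted policy service and render it into local iptables rules.  The endpoint URL, policy schema and rule format are all illustrative assumptions, not any particular vendor’s API:

```python
import json
import subprocess
import urllib.request

# Hypothetical policy endpoint; a real service would authenticate the
# host and verify the policy's integrity before anything is applied.
POLICY_URL = "https://policy-grid.example.com/api/v1/host/firewall"

def fetch_policy(url: str = POLICY_URL) -> list:
    """Pull the firewall rules assigned to this guest."""
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read())["rules"]

def apply_rules(rules: list) -> None:
    """Render the rules into iptables entries (default-deny inbound)."""
    subprocess.run(["iptables", "-P", "INPUT", "DROP"], check=True)
    for rule in rules:
        subprocess.run(
            ["iptables", "-A", "INPUT",
             "-p", rule["proto"],
             "-s", rule["source"],
             "--dport", str(rule["port"]),
             "-j", "ACCEPT"],
            check=True,
        )

if __name__ == "__main__":
    apply_rules(fetch_policy())
```

Call it an “agent” or don’t; it’s still a footprint in the guest, and it’s the only place left to stand when the provider’s network offers no hooks.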

So that’s where we are today.  The abstraction offered by virtualized public IaaS cloud platforms is pushing us back to the guest-centric-based models of yesteryear.

This will bring challenges with scale, management, efficacy, policy convergence between physical and virtual, and the API-driven telemetry that true cloud solutions demand.

You can read more about this in some of my other posts on the topic.

Finally, since I used them for eyeballs, please do take a look at CloudPassage — their first (free) offerings are based upon leveraging small-footprint Linux agents and a cloud-based SaaS “grid” to provide vulnerability management and firewall/zoning in public cloud environments.

/Hoff

* There are exceptions to this rule depending upon *what* you’re trying to do, such as anti-malware offload via a hypervisor API, but this is not generally available to date in public cloud.  This will, I hope, one day soon change.


The Classical DMZ Design Pattern: How To Kill Security In the Cloud

July 7th, 2010

Every day I get asked to discuss how Cloud Computing impacts security architecture and what enterprise security teams should do when considering “Cloud.”

These discussions generally lend themselves to a bifurcated set of perspectives depending upon whether we’re discussing Public or Private Cloud Computing.

This is unfortunate.

From a security perspective, focusing the discussion primarily on the deployment model instead of thinking holistically about the “how, why, where, and who” of Cloud, often means that we’re tethered to outdated methodologies because it’s where our comfort zones are.

When we’re discussing Public Cloud, the security teams are starting to understand that the choice of compensating controls and how they deploy and manage them require operational, economic and architectural changes.  They are also coming to terms with the changes to application architectures as they relate to distributed computing and SOA-like implementations.  It’s uncomfortable and it’s a slow slog forward (for lots of good reasons,) but good questions are asked when considering the security, privacy and compliance impacts of Public Cloud, what can/should be done about them and how things need to change.

When discussing Private Cloud, however, even when a “clean slate design” is proposed, the same teams tend to try to fall back to what they know and preserve the classical n-tier application architecture separated by physical or virtual compensating controls — the classical split-subnet DMZ or perimeterized model of “inside” vs “outside.” They can do this given the direct operational control exposed by highly-virtualized infrastructure.  Sometimes they’re also forced into this given compliance and audit requirements. The issue here is that this discussion centers around molding cloud into the shape of the existing enterprise models and design patterns.
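For illustration, the pattern being preserved looks something like this: three tiers on three subnets, with a handful of explicitly permitted flows between them (the names and addresses below are mine, purely for the sketch):

```python
# A minimal sketch of the classical split-subnet DMZ pattern that
# private cloud designs tend to reproduce: three tiers, each on its own
# subnet, with point-to-point "allowed flows" between zones.
CLASSIC_DMZ = {
    "subnets": {
        "dmz":  "192.0.2.0/24",   # outside-facing web tier
        "app":  "10.1.1.0/24",    # application tier
        "data": "10.1.2.0/24",    # database tier
    },
    "allowed_flows": [
        ("internet", "dmz",  "tcp/443"),
        ("dmz",      "app",  "tcp/8443"),
        ("app",      "data", "tcp/5432"),
    ],
}

def permitted(src_zone: str, dst_zone: str, service: str) -> bool:
    """Anything not explicitly permitted between zones is denied."""
    return (src_zone, dst_zone, service) in CLASSIC_DMZ["allowed_flows"]

print(permitted("internet", "data", "tcp/5432"))  # False: no direct path to the data tier
```

Replicating that zoning model inside a highly-virtualized private cloud is entirely possible, which is exactly why teams default to it, but it optimizes for preserving the perimeter rather than protecting the information.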

This is an issue; trying to simultaneously secure these diametrically-opposed architectural implementations yields cost inefficiencies, security disparity and policy violations, introduces operational risk, and generally means that the ball doesn’t get moved forward in protecting the things that matter most.

Public Cloud Computing is a good thing for the security machine; it forces us to (again) come face-to-face with the ugliness of the problems of securing the things that matter most — our information. Private Cloud Computing — when improperly viewed from the perspective of simply preserving the status quo — can often cause stagnation and introduce roadblocks.  We’ve got to move beyond this.

Public Cloud speaks to the needs (and delivers on) agility, flexibility, mobility and efficiency. These are things that traditional enterprise security are often not well aligned with.  Trying to fit “Cloud” into neat and tidy DMZ “boxes” doesn’t work.  Cloud requires revisiting our choices for security. We should take advantage of it, not try and squash it.

/Hoff


Slides from My Cloud Security Alliance Keynote: The Cloud Magic 8 Ball (Future Of Cloud)

March 7th, 2010

Here are the slides from my Cloud Security Alliance (CSA) keynote from the Cloud Security Summit at the 2010 RSA Security Conference.

The punchline is as follows:

All this iteration and debate on the future of the “back-end” of Cloud Computing — the provider side of the equation — is ultimately less interesting than how the applications and content served up will be consumed.

Cloud Computing provides for the mass re-centralization of applications and data in mega-datacenters while simultaneously incredibly powerful mobile computing platforms provide for the mass re-distribution of (in many cases the same) applications and data.  We’re fixated on the security of the former but ignoring that of the latter — at our peril.

People worry about how Cloud Computing puts their applications and data in other people’s hands. The reality is that mobile computing — and the clouds that are here already and will form because of them — already put, quite literally, those applications and data in other people’s hands.

If we want to “secure” the things that matter most, we must focus BACK on information centricity and building survivable systems if we are to be successful in our approach.  I’ve written about the topics above many times, but this post from 2009 is quite apropos: The Quandary Of the Cloud: Centralized Compute But Distributed Data.  You can find other posts on Information Centricity here.

Slideshare direct link here (embedded below.)


Comments on the PwC/TSB Debate: The cloud/thin computing will fundamentally change the nature of cyber security…

February 16th, 2010

I saw a very interesting post on LinkedIn with the title PwC/TSB Debate: The cloud/thin computing will fundamentally change the nature of cyber security…

PricewaterhouseCoopers are working with the Technology Strategy Board (part of BIS) on a high profile research project which aims to identify future technology and cyber security trends. These statements are forward looking and are intended to purely start a discussion around emerging/possible future trends. This is a great chance to be involved in an agenda setting piece of research. The findings will be released in the Spring at Infosec. We invite you to offer your thoughts…

The cloud/thin computing will fundamentally change the nature of cyber security…

The nature of cyber security threats will fundamentally change as the trend towards thin computing grows. Security updates can be managed instantly by the solution provider so every user has the latest security solution, the data leakage threat is reduced as data is stored centrally, systems can be scanned more efficiently and if Botnets capture end-point computers, the processing power captured is minimal. Furthermore, access to critical data can be centrally managed and as more email is centralised, malware can be identified and removed more easily. The key challenge will become identity management and ensuring users can only access their relevant files. The threat moves from the end-point to the centre.

What are your thoughts?

My response is simple.

Cloud Computing or “Thin Computing” as described above doesn’t change the “nature” of (gag) “cyber security”; it simply changes its efficiency, investment focus, capital model and modality. As to the statement regarding threats moving “…from the end-point to the centre,” the surface area really becomes amorphous and, given the potential monoculture introduced by the virtualization layers underpinning these operations, perhaps expands.

Certainly the benefits described in the introduction above do mean changes to who, where and when risk mitigation might be applied, but those activities are, in most cases, still the same as in non-Cloud and “thick” computing.  That’s not a “fundamental change” but rather an adjustment to a platform shift, just like when we went from mainframe to client/server.  We are still dealing with the remnant security issues (identity management, AAA, PKI, encryption, etc.) from prior  computing inflection points that we’ve yet to fix.  Cloud is a great forcing function to help nibble away at them.

But if you substitute “client/server” (in its evolution from the “mainframe era”) for “cloud/thin computing” above, it all sounds quite familiar.

As I alluded to, there are some downsides to this re-centralization, but it is important to note that I do believe that if we look at what PaaS/SaaS offerings and VDI/thin/Cloud computing offer, it makes us focus on protecting our information and building more survivable systems.

However, there’s a notable bifurcation occurring. Whilst the example above paints a picture of mass re-centralization, incredibly powerful mobile platforms are evolving.  These platforms (such as the iPhone) employ a hybrid approach featuring both native/local on-device applications and storage of data combined with the potential of thin client capability and interaction with distributed Cloud computing services.*

These hyper-mobile and incredibly powerful platforms — and the requirements to secure them in this mixed-access environment — mean that the efficiency gains on one hand are compromised by the need to once again secure diametrically-opposed computing experiences.  It’s a “squeezing the balloon” problem.

The same exact thing is occurring in the Private versus Public Cloud Computing models.

/Hoff

* P.S. Bernard Golden also commented via Twitter regarding the emergence of Sensor nets which also have a very interesting set of implications on security as it relates to both the examples of Cloud and mobile computing elements above.


The Emotion of VMotion…

September 29th, 2009
VMotion - Here's Where We Are Today

A lot has been said about the wonders of workload VM portability.

Within the construct of virtualization, and especially VMware, an awful lot of time is spent on VM Mobility, but as numerous polls and direct customer engagements have shown, the majority (50% and higher) do not use VMotion.  I talked about this in a post titled “The VM Mobility Myth”:

…the capability to provide for integrated networking and virtualization coupled with governance and autonomics simply isn’t mature at this point. Most people are simply replicating existing zoned/perimeterized non-virtualized network topologies in their consolidated virtualized environments and waiting for the platforms to catch up. We’re really still seeing the effects of what virtualization is doing to the classical core/distribution/access design methodology as it relates to how shackled much of this mobility is to critical components like DNS and IP addressing and layer 2 VLANs.  See Greg Ness and Lori MacVittie’s scribblings.

Furthermore, workload distribution (Ed: today) is simply impractical for anything other than monolithic stacks because the virtualization platforms, the applications and the networks aren’t at a point where, from a policy or intelligence perspective, they can easily and reliably self-orchestrate.

That last point about “monolithic stacks” described what I talked about in my last post “Virtual Machines Are the Problem, Not the Solution” in which I bemoaned the bloat associated with VM’s and general purpose OS’s included within them and the fact that VMs continue to hinder the notion of being able to achieve true workload portability within the construct of how programmatically one might architect a distributed application using an SOA approach of loosely coupled services.

Combine the VM bloat — which simply makes these “workloads” too large to practically move in real time — with the annoying laws of physics and the current constraints of virtualization driving the return to big, flat layer 2 network architectures — collapsing core/distribution/access designs and dissolving classical n-tier application architectures — and one might argue that the proposition of VMotion really is a move backward, not forward, as it relates to true agility.

That’s a little contentious, but in discussions with customers and in other social media venues it’s important to think about other designs and options; the fact is that the Metastructure (as it pertains to supporting protocols/services such as DNS which are needed to support this “infrastructure 2.0”) still isn’t where it needs to be in regards to mobility, and even with emerging solutions like long-distance VMotion between datacenters, we’re butting up against the laws of physics (and the costs of the associated bandwidth and infrastructure.)
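A quick back-of-the-envelope check on those laws of physics, ignoring dirty-page re-copying, compression and protocol overhead (the sizes are illustrative only):

```python
def transfer_seconds(size_gb: float, link_gbps: float) -> float:
    """Naive time to push a workload's footprint across a link."""
    return (size_gb * 8) / link_gbps

for size in (4, 40, 400):   # lean VM, bloated VM, monolithic stack plus data
    print(f"{size:>4} GB: {transfer_seconds(size, 1.0):6.0f}s at 1 Gbps, "
          f"{transfer_seconds(size, 10.0):5.0f}s at 10 Gbps")
```

The arithmetic alone explains a lot about why this mobility is romanticized more than it is utilized.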

While we do see advancements in network-driven policy stickiness with the development of elements such as distributed virtual switching, port profiles, software-based vSwitches and virtual appliances (most of which are good solutions in their own right,) this is a network-centric approach.  The policies really ought to be defined by the VM’s themselves (similar to SOA service contracts — see here) and enforced by the network, not the other way around.
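Here is a sketch of what I mean by VM-defined policy: the workload carries a declarative contract describing what it needs, and the network (vSwitch, port profile, distributed firewall) renders and enforces it wherever the VM lands.  The schema and field names are mine and purely illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class ServiceContract:
    """Illustrative policy contract that travels with the workload."""
    workload: str
    tier: str
    ingress: list = field(default_factory=list)
    egress: list = field(default_factory=list)

web = ServiceContract(
    workload="web-frontend-01",
    tier="web",
    ingress=[{"proto": "tcp", "port": 443, "from": "any"}],
    egress=[{"proto": "tcp", "port": 5432, "to": "tier:db"}],
)

def render_acl(contract: ServiceContract) -> list:
    """Translate the contract into vendor-neutral permit rules."""
    rules = [f"permit {r['proto']}/{r['port']} from {r['from']} to {contract.workload}"
             for r in contract.ingress]
    rules += [f"permit {r['proto']}/{r['port']} from {contract.workload} to {r['to']}"
              for r in contract.egress]
    return rules

print("\n".join(render_acl(web)))
```

The contract moves with the VM; the enforcement point is whatever network happens to be underneath it at the moment.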

Further, what isn’t talked about much is something that @joe_shonk brought up, which is that the SAN volumes/storage from which most of these virtual machines boot, upon which their data is stored and in some cases against which they are archived, don’t move, many times for the same reasons.  In many cases we’re waiting on the maturation of converged networking and advances in networked storage to deliver solutions to some of these challenges.

In the long term, the promise of mobility will be delivered by a split into four camps which have overlapping and potentially competitive approaches depending upon who is doing the design:

  1. The quasi-realtime chunking approach of VMotion via the virtualization platform [virtualization architect,]
  2. Integration/distribution and “mobility” at the application/OS layer [application architect,] or
  3. The more traditional network-based load balancing of traffic to replicated/distributed images [network architect.]
  4. Moving or redirecting pointers to large pools of storage where all the images/data(bases) live [Ed. forgot to include this from above]

Depending upon the need and capability of your application(s), virtualization/Cloud platform, and network infrastructure, you’ll likely need a mash-up of all four.  This model really mimics the differences today in architectural approach between SaaS and IaaS models in Cloud and further suggests that folks need to take a more focused look at PaaS.

Don’t get me wrong, I think VMotion is fantastic and the options it can ultimately deliver are intensely useful, but we’re hamstrung by what is really the requirement to forklift — network design, network architecture and the laws of physics.  In many cases we’re fascinated by VM Mobility, but a lot of that romanticization plays on emotion rather than utilization.

So what of it?  How do you use VM mobility today?  Do you?

/Hoff

Calling All Private Cloud Haters: Amazon Just Peed On Your Fire Hydrant…

August 26th, 2009

Werner Vogels brought a smile to my face today with his blog titled “Seamlessly Extending the Data Center – Introducing Amazon Virtual Private Cloud.”  In short:

We have developed Amazon Virtual Private Cloud (Amazon VPC) to allow our customers to seamlessly extend their IT infrastructure into the cloud while maintaining the levels of isolation required for their enterprise management tools to do their work.

In one fell swoop, AWS has:

  • Legitimized Private Cloud as a reasonable, needed, and prudent step toward Cloud adoption for enterprises,
  • Substantiated the value proposition of Private Cloud as a way of removing a barrier to Cloud entry for enterprises, and
  • Validated the ultimate vision toward hybrid Clouds and Inter-Cloud

They made this announcement from the vantage point of operating as a Public Cloud provider — in many cases THE Public Cloud provider of choice for those arguing from an exclusionary perspective that Public Cloud is the only way forward.

Now, AWS’ position on Private Cloud is pretty clear; straight from the horse’s mouth, Werner says “Private Cloud is not the Cloud” (see below) — but it’s also clear they’re willing to sell you some 😉

The cost for VPC isn’t exorbitant, but it’s not free, either, so the business case is clearly there (see the official VPC site) – VPN connectivity is $0.05 per VPN connection-hour, with data transfer rates of $0.10 per GB inbound and ranging from $0.17 per GB to $0.10 per GB outbound depending upon volume (with heavy data replication or intensive workloads people are going to need to watch the odometer.)

I’m going to highlight a couple of nuggets from his post:

We continuously listen to our customers to make sure our roadmap matches their needs. One important piece of feedback that mainly came from our enterprise customers was that the transition to the cloud of more complex enterprise environments was challenging. We made it a priority to address this and have worked hard in the past year to find new ways to help our customers transition applications and services to the cloud, while protecting their investments in their existing IT infrastructure. …

Private Cloud Is Not The Cloud – These CIOs know that what is sometimes dubbed “private cloud” does not meet their goal as it does not give them the benefits of the cloud: true elasticity and capex elimination. Virtualization and increased automation may give them some improvements in utilization, but they would still be holding the capital, and the operational cost would still be significantly higher.

We have been listening very closely to the real requirements that our customers have and have worked closely with many of these CIOs and their teams to understand what solution would allow them to treat the cloud as a seamless extension of their datacenter, where their standard management practices can be applied with limited or no modifications. This needs to be a solution where they get all the benefits of cloud as mentioned above [Ed: eliminates cost, elastic, removes “undifferentiated heavy lifting”] while treating it as a part of their datacenter.

We have developed Amazon Virtual Private Cloud (Amazon VPC) to allow our customers to seamlessly extend their IT infrastructure into the cloud while maintaining the levels of isolation required for their enterprise management tools to do their work.

With Amazon VPC you can:

  • Create a Virtual Private Cloud and assign an IP address block to the VPC. The address block needs to be CIDR block such that it will be easy for your internal networking to route traffic to and from the VPC instance. These are addresses you own and control, most likely as part of your current datacenter addressing practice.
  • Divide the VPC addressing up into subnets in a manner that is convenient for managing the applications and services you want run in the VPC.
  • Create a VPN connection between the VPN Gateway that is part of the VPC instance and an IPSec-based VPN router on your own premises. Configure your internal routers such that traffic for the VPC address block will flow over the VPN.
  • Start adding AWS cloud resources to your VPC. These resources are fully isolated and can only communicate to other resources in the same VPC and with those resources accessible via the VPN router. Accessibility of other resources, including those on the public internet, is subject to the standard enterprise routing and firewall policies.

Amazon VPC offers customers the best of both the cloud and the enterprise managed data center:

  • Full flexibility in creating a network layout in the cloud that complies with the manner in which IT resources are managed in your own infrastructure.
  • Isolating resources allocated in the cloud by only making them accessible through industry standard IPSec VPNs.
  • Familiar cloud paradigm to acquire and release resources on demand within your VPC, making sure that you only use those resources you really need.
  • Only pay for what you use. The resources that you place within a VPC are metered and billed using the familiar pay-as-you-go approach at the standard pricing levels published for all cloud customers. The creation of VPCs, subnets and VPN gateways is free of charge. VPN usage and VPN traffic are also priced at the familiar usage based structure

All the benefits from the cloud with respect to scalability and reliability, freeing up your engineers to work on things that really matter to your business.
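The quoted steps map fairly directly onto API calls.  Here is a minimal sketch using boto3 (which obviously post-dates this announcement); the CIDR blocks, the on-premises router IP and the ASN are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Create the VPC and carve a subnet out of your own address plan
vpc = ec2.create_vpc(CidrBlock="10.20.0.0/16")["Vpc"]
subnet = ec2.create_subnet(VpcId=vpc["VpcId"], CidrBlock="10.20.1.0/24")["Subnet"]

# 2. Attach a VPN gateway on the AWS side of the tunnel
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
ec2.attach_vpn_gateway(VpnGatewayId=vgw["VpnGatewayId"], VpcId=vpc["VpcId"])

# 3. Describe the on-premises IPsec router and stitch the tunnel together
cgw = ec2.create_customer_gateway(
    Type="ipsec.1", PublicIp="203.0.113.10", BgpAsn=65000
)["CustomerGateway"]
vpn = ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGatewayId"],
)["VpnConnection"]

# 4. Instances launched into the subnet are reachable only over the VPN
print("Configure the on-premises router using:", vpn["VpnConnectionId"])
```

Nothing exotic: your own address plan, a VPN gateway on the AWS side, and an IPsec tunnel back to the router you already own.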

Jeff Barr did a great job of giving a little more detail on his blog but also brought up a couple of points I need to noodle on from a security perspective:

Because the VPC subnets are used to isolate logically distinct functionality, we’ve chosen not to immediately support Amazon EC2 security groups. You can launch your own AMIs and most public AMIs, including Microsoft Windows AMIs. You can’t launch Amazon DevPay AMIs just yet, though.

The Amazon EC2 instances are on your network. They can access or be accessed by other systems on the network as if they were local. As far as you are concerned, the EC2 instances are additional local network resources — there is no NAT translation. EC2 instances within a VPC do not currently have Internet-facing IP addresses.

We’ve confirmed that a variety of Cisco and Juniper hardware/software VPN configurations are compatible; devices meeting our requirements as outlined in the box at right should be compatible too. We also plan to support Software VPNs in the near future.

The notion of the VPC and associated VPN connectivity coupled with the “software VPN” statement above reminds me of CohesiveFT’s VPN-Cubed solution.  While this is an IaaS-focused discussion, it’s only fair to bring up Google’s Secure Data Connector, which was announced some moons ago from a SaaS/PaaS perspective, too.

I would be remiss in my musings were I not to also suggest that Cloud brokers and Cloud service providers such as RightScale, GoGrid, Terremark, etc. were on the right path in responding to customers’ needs well before this announcement.

Further, it should be noted that now that the 800lb Gorilla has staked a flag, this will bring up all sorts of additional auditing and compliance questions, as any sort of broad connectivity into and out of security zones and asset groupings always does.  See the PCI debate (How to Be PCI Compliant In the Cloud).

At the end of the day, this is a great step forward — one I am happy to say I’ve been talking about and presenting on (see my Frogs presentation) for the last two years.

/Hoff

Jericho Forum’s Cloud Cube Model…Rubik, Rubric and Righteous!

April 16th, 2009

I’m looking forward to the RSA conference this year; I am going to get to discuss Virtualization and Cloud Computing security a lot.

One of the events I’m really looking forward to is a panel discussion at the Jericho Forum’s event (Wednesday the 22nd, starting at 3pm) with some really good friends of mine and members of the Forum.

We’re going to be discussing Jericho’s Cloud Cube Model:

I think that the Cloud Cube does a nice job describing the multi-dimensional elements of Cloud Computing and frames not only Cloud use cases, but also how they are deployed and utilized.

Here’s why I am excited; if you look at the Cube above and the table I built below in this blog (The Vagaries Of Cloudcabulary: Why Public, Private, Internal & External Definitions Don’t Work…) to get deeper into Cloud definitions, despite the differences in names you will notice some remarkable similarities, especially the notion of how “internal/external” is called out separately from “perimeterized/de-perimeterized.” This is akin to my table’s “Infrastructure located” and “managed by” column headings.  Further, the “outsourced/insourced” dimension maps to my “managed by:” column.  I like the “proprietary/open” dimension, also, which I didn’t include in my table, but I did reference it in my Frogs presentation. I think I’ll extend the table to show that, also.
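Expressed as data rather than a cube, the four dimensions and a sample classification might look like the sketch below (the field names are my shorthand, not the Forum’s official terms):

```python
from dataclasses import dataclass

@dataclass
class CloudCubePosition:
    """Where a given deployment sits along the Cloud Cube's four dimensions."""
    location: str   # "internal" | "external"
    ownership: str  # "proprietary" | "open"
    boundary: str   # "perimeterized" | "de-perimeterized"
    sourcing: str   # "insourced" | "outsourced"

# Example: a public IaaS deployment operated by a third party
public_iaas = CloudCubePosition(
    location="external",
    ownership="proprietary",
    boundary="de-perimeterized",
    sourcing="outsourced",
)
print(public_iaas)
```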


I am very much looking forward to discussing this on the panel.  I’ve been preaching about the Jericho Forum since my religious conversion many years ago.

As I said in my Frogs preso, Cloud Computing is the evolution of the “re-perimeterization” model on steroids.

/Hoff

Google’s Chrome: We Got {Secure?} Browsing Bling, Yo.

September 1st, 2008

From the Department of "Oops, I did it again…"

Back in June/July of 2007, I went on a little rant across several blog posts about how Google was directly entering the "security" business and would eventually begin to offer more than just "secure" search functions, but instead the functional equivalent of "clean pipes" or what has now become popularized as safe "cloud computing."

I called it S^2aaS (Secure Software as a Service) 😉  OK, so I’m not in marketing.

Besides the numerous initiatives by Google focused on adding more "security" to their primary business (search), the acquisition of GreenBorder really piqued my interest.  Then came the Postini buyout.

To be honest, I just thought this was common sense and fit what I understood was the longer term business model of Google.  To me it was writing on the wall.  To others, it was just me rambling.

So in my post from last year titled "Tell Me Again How Google Isn’t Entering the Security Market?  GooglePOPs will Bring Clean Pipes…" I suggested the following:

In fact, I reckon that in the long term we’ll see the evolution of the Google Toolbar morph into a much more intelligent and rich client-side security application proxy service whereby Google actually utilizes client-side security of the Toolbar paired with the GreenBorder browsing environment and tunnel/proxy all outgoing requests to GooglePOPs.

Google will, in fact, become a monster ASP.  Note that I said ASP and not ISP.  ISP is a commoditized function.  Serving applications and content as close to the user as possible is fantastic.  So pair all the client side goodness with security functions AND add GoogleApps and you’ve got what amounts to a thin client version of the Internet.

Now we see what Google’s been up to with their announcement of Chrome (great writeup here,) their foray into the browser market: an open-source model with heaps of claimed security and privacy functions built in.  But it’s the bigger picture that’s really telling.

Hullo!  This isn’t about the browser market!  It’s about the transition of how we’re going to experience accessing our information; from where, what and how.  Chrome is simply an illustration of a means to an end.

Take what I said above and pair it with what they say below…I don’t think we’re that far off, folks…

From Google’s Blog explaining Chrome:

…we began seriously thinking about what kind of browser could exist if we started from scratch and built on the best elements out there. We realized that the web had evolved from mainly simple text pages to rich, interactive applications and that we needed to completely rethink the browser. What we really needed was not just a browser, but also a modern platform for web pages and applications, and that’s what we set out to build.

Under the hood, we were able to build the foundation of a browser that runs today’s complex web applications much better. By keeping each tab in an isolated "sandbox", we were able to prevent one tab from crashing another and provide improved protection from rogue sites. We improved speed and responsiveness across the board. We also built a more powerful JavaScript engine, V8, to power the next generation of web applications that aren’t even possible in today’s browsers.

Here come the GooglePipes being fed by the GooglePOPs, being… 😉

/Hoff


The Final Frontier(?): Virtualizing the DMZ…

June 30th, 2008

Alessandro from virtualization.info and I were chatting today regarding VMware’s latest best-practices document titled "DMZ Virtualization with VMware Infrastructure."

This is a nine page overview that does a reasonably good job of laying out many of the architectural/topological options available when thinking about taking the steps toward virtualizing what some consider the "final frontier" in the proving grounds of production-level virtualization — the (Internet-facing) DMZ.

The whitepaper was timely because I was just finishing up my presentation for Blackhat and was busy creating a similar set of high-level architectural examples to use in my presentation.  I decided to reference those in the document because they quite elegantly represent the starting points that many folks would use as a stepping off point in their virtual DMZ adventures.

…and I think it will be an adventure punctuated perhaps by evolutionary steps as documented in the options presented in the whitepaper.

As I read through the document, I had to remind myself of the fact that this was intended to be a high-level document and not designed to cover the hairy edges of network and security design. 

The whitepaper highlighted some of the reasonable trade-offs in complexity, resiliency, management, functionality, operational expertise, and cost, but given where my head and focus are today, I have to admit that it still gnawed at me; from a security perspective the treatment is still too weak for my liking.

I’ve hinted at why in my original Four Horsemen slide, and I’m going to be speaking for 75 minutes on the topic at Blackhat, so come get your VirtSec boogie on there for a full explanation…

Alessandro got dinged in a comment on his blog for a statement in which he suggested that partially-collapsed as well as fully-collapsed DMZ’s with virtual separation of trust zones "…should be avoided at all costs because they imply the inviolability of the hypervisor (at any level: from the virtual networking to the kernel) something that nor VMware neither any other virtualization vendor can grant."

This appears contradictory to his initial assessment of DMZ virtualization wherein he stated that "…there [is] nothing bad in virtualizing the DMZ as long as we are fully aware of the risks."  In a way, I think I understand exactly where Alessandro is coming from, even if I don’t completely agree with him (or at least I partially do…)

This really paints an altogether unfortunate and yet accurate picture of the circular arguments folks engage in when they combine the following topics in a single argument:

  • Securing virtualization
  • Virtualizing security
  • Security via virtualization

In the same way that we trust our operating system vendors who provide us with the operational underpinnings of our datacenters with the hope that they will approach a reasonable level of "security" in their products, we are basically at the same point with our virtualization (OS) platform providers.

Hope is not a strategy, but it seems we’ve at least accepted it for the time being… ;(

Sure, there are new attack vectors and operational risks, but fixating on the one-dimensional data point of hypervisor infallibility, without being able to really quantify whether you are more or less at risk, and then writing the whole concept off seems a little odd to me.

If you’re truly assessing risk in the potential virtualization of your DMZ, you’ll take the operational/architectural guidelines as well as the subjective business impacts into consideration.  Simply stating that one should or should not virtualize a DMZ without a holistic approach is myopic.

To circle back on the topic, the choice of whether to — and how to — virtualize your DMZ  is really starting to gain traction.  I think the whitepaper took a decent first-pass stab at exploring how one might approach it, but the devil’s in the details — or at least the devil’s 4 horsemen are 😉

/Hoff