Archive

Archive for December, 2008

SPOILER: I Know What Sotirov and Appelbaum's 25C3 Preso Is…

December 29th, 2008 4 comments

UPDATE: HA! So I was *so* close to the real thing! Turns out that instead of 240 Nintendo DS Lites, they used 200 clustered Sony PS3s! I actually guessed that in an email to Sotirov, too! I can't believe you people doubted me!

I initially thought they used the go-kart crashes in Super Mario Brothers to emulate MD5 "collisions."

Check out Ryan Naraine's write-up here.

So Alexander Sotirov and Jacob Appelbaum are giving a presentation tomorrow at 25C3 titled "Making the Theoretical Possible."

There's a summary of their presentation abstract posted via the link above, but the juicy parts are redacted, hiding the true nature of the crippling effects of the 'sploit about to be released upon the world:

[Image: the censored 25C3 presentation abstract]

I have a Beowulf cluster of 240 Nintendo DS Lites running in my basement and, harnessing the capabilities thereof, was able to apply my custom-written graphical binary isomorphic differ algorithm using neural-network-based self-organizing maps and reverse steganography to deduce the obscured content.

I don't wish to be held liable for releasing this content prior to their presentation, nor do I wish to be pursued for any fair-use violations, so I'm hosting the results offshore.

Please click here for the non-redacted image hosted via a mirror site that reveals the content of the abstract.

/Hoff

Categories: Jackassery Tags:

Virtualization? So last Tuesday.

December 27th, 2008 3 comments

This post contains nothing particularly insightful, other than the pronounced giant sucking sound that's left a vacuum in terms of forward motion regarding security and virtualization.

Why?

Three things:
  1. There's an awful lot of focus moving from the (cough) mature space of server virtualization to the myriad options and solutions in client virtualization, as the pendulum of where we focus our efforts swings yet again.

    We're in the throes of yet another "great awakening" in which some of us realize that (gasp!) it's the information we ought to secure, and that the platforms themselves are insecure and should be treated as such. However, we've got so much security invested in the network and servers that we play ping-pong between securing them, bypassing the crown jewels.

    Virtualization has just reinforced that behavior, and as we take stock of where we are in (not) securing these vectors, looking for the next silver bullet, we knee-jerk back to the conduit through which the user interacts with our precious data: the client.

    The client, it seems, is the focus yet again, driven mostly by economics. It's interesting to note that even though the theme of RSA this last go-round was "Information Centricity," someone didn't get the memo.

    Check out this graphic from my post a ways back titled "Security Will Not End Up In the Network…" for why this behavior is not only normal but will unfortunately lead us to always focus on the grass that turns out not to be greener on the other side. I suppose I should really break out the "host" into server and client, accordingly:

    [Image: the "You are here" pendulum graphic from that post]

    Further, and rightfully so, the accelerated convergence of storage and networking thanks to virtualization is causing heads to a-splode in ways that cause security to be nothing more than a shrug and a prayer. What it means to "secure the cloud" is akin to pissing in the wind at the moment. Hey, if you've got to go, you've got to go…

  2. ISV's are in what amounts to a holding pattern waiting for VDCOS, VI4, vSphere with vNetworking and the VMsafe APIs to be released so they can unleash their next round of security software appliances to tackle the problems highlighted in my Four Horsemen of the Virtualization Security Apocalypse series. For platforms other than VMware, we've seen bupkis as it relates to VirtSec innovation.

  3. The "Cloud" has assimilated us all and, combined with the stalling function above, has left us waffling in ambivalence. The industry is so caught up in the momentum of this new promised revenue land that the blinding opportunity, combined with a lack of standards and a slew of new business and technology models, means that innovation is being driven primarily by startups while existing brands jockey to retool.

It's messy.  It's going to get messier, but the good news is that it's a really exciting time.  We're going to see old friends like IAM, IDP, VPNs, and good old fashioned routing and switching tart themselves up, hike up the hemlines and start trolling for dates again as virtualization 2.x, VirtSec and Cloud/Cloud Security make all the problems we haven't solved (but know we need to) relevant and pressing once again.

All those SysAdmin and NetAdmin skills you started with before you became a "security professional" will really help in sorting through all this mud.

There's lots of opportunity to make both evolutionary and revolutionary advancements in solving many of the problems we've been suffering from for decades. Let's work to press forward and not lose sight of where we're going and, more importantly, from whence we've come.

/Hoff
  

Servers and Switches and VMM’s, Oh My! Cisco’s California “Server Switch”

December 21st, 2008 4 comments

From the desk of Cisco's "Virtualization Wow!" Campaign: When is a switch with a server not a virtualization platform?  When it's a server with a switch as a virtualization platform, of course! 😉

I can't help but smile at the announcement that Cisco is bringing to market a blade-based chassis which bundles together Intel's Nehalem-based server processors, the Nexus 5000 switch, and VMware's virtualization and management platform.  From InformationWeek:

Cisco's system, code-named California, likely will be introduced in the spring, according to the sources. It will meld Cisco's Nexus 5000 switch that converges storage and data network traffic, blade servers that employ Intel Nehalem processors, and virtualization management with help from VMware.

This news was actually broken back in the beginning of December by virtualization.info but I shamefully missed it.  It looked like a bunch of others did, too.

This totally makes sense, as virtualization has driven convergence across the compute, network and storage realms and has highlighted the fact that provisioning, automation and governance — up and down the stack — demand a unified approach for management and support.

For me, this is the natural come-about of what I wrote about in July of 2007 in a blog post titled "Cisco & VMware – The Revolution Will Be…Virtualized?":

This [convergence of network, compute and virtualization, Ed.] is interesting for sure, and if you look at the way in which the demand for flexibility of software has combined with generally-available COTS compute stacks and specific network processing where required, the notion that Cisco might partner with VMware or a similar vendor such as SWsoft looks compelling. Of course, with functionality like KVM in the Linux kernel, there's no reason they have to buy or ally…

Certainly there are already elements of virtualization within Cisco's routing, switching and security infrastructure, but many might argue that it requires a refresh in order to meet the requirements of their customers. It seems that their CEO does.

When I last blogged about Cisco's partnership with VMware and (what is now called) the Nexus 1000v/VN-Link, I made reference to the fact that I foresaw the extraction of the VM's from the servers and suggested that we would see VM's running in the Nexus switches themselves.  Cisco representatives ultimately put a stake in the sand and said this would never happen in the comments of that blog post.

Now we know what they meant and it makes even more sense.

So the bundling of the Nexus 5000* (with the initiator), the upcoming protocol for VM-Flow affinity tagging, the integrated/converged compute and storage capabilities, and Intel's SR-IOV/MR-IOV/IOMMU technologies in the Nehalem, all supported by the advances with vNetworking/VN-Link, makes this solution a force to be reckoned with.

Other vendors, especially those rooted in servers and networking such as HP and IBM, are likely to introduce their own solutions, but given the tight coupling of the partnership, investment and technology development between Intel, VMware and Cisco, this combo will be hard to beat. 

Folks will likely suggest that Cisco has no core competency in building, selling or supporting "servers," but given the channel and partnership with VMware — with virtualization abstracting that hardware-centric view in the first place — I'm not sure this really matters.  We'll have to see how accurate I am on this call.

Regardless of the semantic differences of where the CPU execution lives (based on my initial prediction), all the things I've been talking about that seemed tangentially-related but destined to come together seem to have. Further, here we see the resurgence (or at least the redefinition) of Big Iron, all over again…

Remember the Four Horsemen slides and the blog post (the network is the computer, is the network, is the…) where I dared you to figure out where "the network" wasn't in the stack?  This is an even more relevant question today. 

It's going to be a very interesting operational and organizational challenge from a security perspective when your server, storage, networking and virtualization platform all come from a single source.

California Dreamin'…

/Hoff

* Not that the 1000v ain't cool, but that little slide that only appeared once at VMworld about the 5000v and the initiator was just too subtly delicious not to be the real juice in the squeeze. The 1000v obviously has its place and will be a fantastic solution, but for folks who are looking for a one-stop shop for their datacenter blueprint designs heavily leveraging virtualization, this makes nothing but sense.

Categories: Cisco, Virtualization, VMware Tags:

Rogue VM Sprawl? Really?

December 19th, 2008 6 comments

I keep hearing about the impending doom of (specifically) rogue VM sprawl — our infrastructure overrun with the unchecked proliferation of virtual machines running amok across our enterprises.  Oh the horror!

Most of the examples cite the consolidation of server VM's onto virtualized hosts.

I have to ask you though, given what it takes to spin up a VM on a platform such as VMware, how can you have a "rogue" VM sprawling its way across your enterprise!? 

Someone — an authorized administrator — had to have loaded it into inventory, configured its placement on a virtual switch, and spun it up via VirtualCenter or some facsimile thereof depending upon platform.

That's the definition of a rogue?  I can see where this may be a definitional issue, but the marketeers are getting frothed up over this very issue, whispering in your ear constantly about the impending demise of your infrastructure…and undetectable hypervisor rootkits, too.  🙂

It may be that the ease with which a VM *can* be spun up legitimately can lead to the overly-exuberant deployment of VM's without understanding the impact this might have on the infrastructure, but can we please stop grouping stupidity and poor capacity planning/impact analysis with rogueness? They're two different things.

If administrators are firing off VMs that are unauthorized, unhardened, and unaccounted for, you have bigger problems than that of virtualization and you ought to consider firing them off. 

The inventory of active VMs is a reasonably easy thing to keep track of; if it's running, I can see it. I know "where" it is and I can turn it off. To me, the bigger problem is represented by the offline VMs which can live outside that inventory window, just waiting to be reactivated from their hypervisorial hibernation.

But does that represent "rogue?"

You want an example of a threat which represents truly rogue VM "sprawl" that people ought to be afraid of?  OK, here's one, and it happened to me.  I talk about it all the time and people usually say "Oh, man, I never thought of that…"  usually because we're focused on server virtualization and not the client side.

The Setup: It's 9:10am, about 4-5 years ago. As I settle in to read email after getting to work, the klaxon alarms start blaring. The IDS/IPS consoles start going mad. Users can't access network resources. DHCP addresses are being handed out from somewhere internally on the network, from pools allocated to Korean address space.

We take distributed sniffer traces, track back through firewall, IDS and IPS logs, and isolate the MAC address in the CAM tables of the 96-port switch into which the offending DHCP server appears to be plugged, although we can't ping it.

My analyst is now on a mission to unplug the port, so he undocks his laptop and the alarms silence.

I look over at him.  He has a strange look on his face.  He docks his laptop again.  Seconds later the alarms go off again.

The Culprit: Turns out said analyst was doing research at home on our W2K AD/DHCP server hardening scripts. He took our standard W2K server image, loaded it as a VM in VMware Workstation and used it at home to validate functionality.

The image he used had AD/DHCP services enabled.

When he was done at home the night before, he minimized VMware and closed his laptop.

When he came in to work the next morning, he simply docked and went about reading email, forgetting the VMware instance was still running. Doing what it does, it started responding to DHCP requests on the network.

Because he was using shared IP addresses for his VM and was "behind" the personal firewall on his machine, which prohibits ICMP requests based on the policy (but obviously not BOOTP/DHCP), we couldn't ping it or profile the workstation…

Now, that's a rogue VM.  An accidental rogue VM.  Imagine if it were an attacker.  Perhaps he/she was a legitimate user but disgruntled.  Perhaps he/she decided to use wireless instead of wired.  How much fun would that be?
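Purely for illustration, here's roughly what a proactive sweep for this sort of thing looks like: a minimal sketch in Python using scapy, with a hypothetical sanctioned-server list (and no, this isn't the tooling we had at the time):

```python
# Rogue DHCP sweep: broadcast a DHCPDISCOVER and flag any server that
# answers from an address we don't recognize. Requires scapy and root.
from scapy.all import Ether, IP, UDP, BOOTP, DHCP, srp, conf, get_if_hwaddr

SANCTIONED = {"10.1.1.10", "10.1.1.11"}  # hypothetical: your real DHCP servers

conf.checkIPaddr = False  # offers come from the server's IP, not our broadcast
hw = get_if_hwaddr(conf.iface)

discover = (Ether(src=hw, dst="ff:ff:ff:ff:ff:ff") /
            IP(src="0.0.0.0", dst="255.255.255.255") /
            UDP(sport=68, dport=67) /
            BOOTP(chaddr=bytes.fromhex(hw.replace(":", ""))) /
            DHCP(options=[("message-type", "discover"), "end"]))

answered, _ = srp(discover, timeout=5, multi=True, verbose=False)
for _sent, offer in answered:
    if offer[IP].src not in SANCTIONED:
        print("Rogue DHCP server: %s (MAC %s)" % (offer[IP].src, offer[Ether].src))
```

It wouldn't have caught our particular culprit any faster than undocking a laptop did, but it beats waiting for the klaxons.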

Stop with the "rogue (server) VM" FUD, wouldya?

Kthxbye.

/Hoff

Categories: Virtualization Tags:

Using Twitter (Via the Cloud) As a Human-Powered, Second Stage SIEM & IPS

December 18th, 2008 2 comments

Here's the premise that will change the face of network security, compliance, SIEM and IDP forever:

Twitter as a human-powered SIEM and IPS for correlation

This started as a joke I made on Twitter a few weeks ago, but given the astounding popularity of Cloud-based zaniness currently, I'm going open source with my idea and monetizing it in the form of a new startup called CloudCorrelator™.

Here's how it works:

  1. You configure all your network devices and your management consoles (aggregated or not) to point to a virtual machine that you install somewhere in your infrastructure.  It's OVF compliant, so it will work with pretty much any platform.
  2. This VM accepts Syslog, SNMP, raw log formats, and/or XML and will take your streamed message bus inputs, package them up, encrypt them into something we call the SlipStream™, and forward them off to…
  3. …the fantastic cloud-based service called CloudCorrelator™ (running on the ever-popular AWS platform), which normalizes the alerts and correlates them as any SIEM platform does, providing all the normal features you'd expect, but in the cloud, where storage, availability, security and infinite expandability are guaranteed! The CloudCorrelator™ is open source, of course.

    This is where it gets fun…

  4. Based upon your policies, the CloudCorrelator™ sanitizes your SlipStream™ feed and, using the Twitter API, allows Twitter followers to cross-correlate seemingly random events globally, using actual human eyeballs to provide the heuristics and fuzzy-logic analysis across domains.

Why bother sending your SlipStream™ to Twitter?  Well, firstly you can use existing search tools to determine if anyone else is seeing similar traffic patterns across diverse networks.  Take TwitterSearch for example.   Better yet, use the TweetStat Cloud to map relevant cross-pollination of events.

That zero day just became a non-event.
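Since no startup is complete without a technology demonstration, here's a minimal sketch of step 4 in Python. The event fields and SlipStream™ framing are hypothetical (naturally), and the endpoint shown is the 2008-era basic-auth Twitter REST API, which has long since been retired:

```python
# CloudCorrelator(TM) step 4, reduced to its essence: sanitize an event,
# then push it to the public timeline for human-powered correlation.
import re
import urllib.parse
import urllib.request

def sanitize(event):
    """Scrub internal IPs so the public timeline doesn't learn our topology."""
    msg = re.sub(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", "x.x.x.x", event["msg"])
    return "[%s] sig=%s %s" % (event["sensor"], event["sig_id"], msg)

def tweet(status, user, password):
    """POST to the old statuses/update endpoint using HTTP basic auth."""
    mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    mgr.add_password(None, "https://twitter.com/", user, password)
    opener = urllib.request.build_opener(urllib.request.HTTPBasicAuthHandler(mgr))
    data = urllib.parse.urlencode({"status": status[:140]}).encode()
    opener.open("https://twitter.com/statuses/update.json", data)

tweet(sanitize({"sensor": "ids-04", "sig_id": "2003068",
                "msg": "portscan from 10.1.2.3"}),
      "VirtualSIEM", "hunter2")
```

Patent pending, obviously.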

I am accepting VC, press and alpha customer inquiries immediately. The @VirtualSIEM Twitter feed should start showing SlipStream™ parses out of CloudCorrelator™ shortly.

/Hoff

Categories: Jackassery Tags:

Virtual Routing – The Anti-Matter of Network SECURITY…

December 16th, 2008 10 comments

Here's a nod to Rich Miller who pointed over (route the node, not the packet) to a blog entry from Andreas Antonopoulos titled "Virtual Routing – The anti-matter of network routing."

The premise, as brought up by Doug Gourlay from Cisco at the C-Scape conference was seemingly innocuous but quite cool:

"How about using netflow information to re-balance servers in a data center"

Routing: Controlling the flow of network traffic to an optimal path between two nodes

Virtual-Routing or Anti-Routing: VMotioning nodes (servers) to optimize the flow of traffic on the network.

Using netflow information, identify those nodes (virtual servers) that have the highest traffic "affinity" from a volume perspective (or some other desired metric, like desired latency, etc.) and move (VMotion, XenMotion) the nodes around to re-balance the network. For example, bring the virtual servers exchanging the most traffic to hosts on the same switch or even to the same host to minimize traffic crossing multiple switches. Create a whole-data-center mapping of traffic flows, solve for least switch hops per flow and re-map all the servers in the data center to optimize network traffic.

My first reaction was, yup, that makes a lot of sense from a network point of view, and given who made the comment, it does make sense. Then I choked on my own tongue as the security weenie in me started in on the throttling process, reminding me that while this is fantastic from an autonomics perspective, it's missing some serious input variables.

Latency of the "network" and VM spin-up aside, the dirty little secret is that what's being described here is a realistic and necessary component of real time (or adaptive) infrastructure.  We need to get ultimately to the point where within context, we have the ability to do this, but I want to remind folks that availability is only one leg of the stool.  We've got the other nasty bits to concern ourselves with, too.

Let's look at this from two perspectives: the network plumber's and the security wonk's.

From the network plumbers' purview, this sounds like an awesome idea; do what is difficult in non-virtualized environments and dynamically adjust and reallocate the "location" of an asset (and thus flows to/from it) in the network based upon traffic patterns and arbitrary metrics.  Basically, optimize the network for the lowest latency and best performance or availability by moving VM's around and re-allocating them across the virtual switch fabric (nee DVS) rather than adjusting how the traffic gets to the static nodes.

It's a role reversal: the nodes become dynamic and the network becomes more static and compartmentalized.  Funny, huh?
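To make the plumber's dream concrete, here's a back-of-the-napkin sketch (mine, not Cisco's, and purely illustrative) of what such a governor might do: reduce NetFlow exports to pairwise VM traffic volumes, then greedily co-locate the chattiest pairs on the least-loaded hosts:

```python
# Greedy VM re-placement from flow affinity. flows maps (vm_a, vm_b) pairs
# to observed byte counts (e.g., summarized from NetFlow exports).
from collections import defaultdict

def rebalance(flows, hosts, capacity):
    """Return a {vm: host} placement that co-locates chatty VM pairs."""
    placement, load = {}, defaultdict(int)
    # Chattiest pairs first: they benefit most from sharing a host/switch.
    for (a, b), _vol in sorted(flows.items(), key=lambda kv: kv[1], reverse=True):
        # Prefer a host one of the pair already lives on; else the least loaded.
        target = placement.get(a, placement.get(b))
        if target is None or load[target] >= capacity:
            target = min(hosts, key=lambda h: load[h])
        for vm in (a, b):
            if vm not in placement and load[target] < capacity:
                placement[vm] = target
                load[target] += 1
    return placement

# Tiny example: web1 and db1 talk the most, so they land on the same host.
flows = {("web1", "db1"): 9_000_000, ("web2", "db1"): 4_000_000,
         ("web2", "cache1"): 1_000_000}
print(rebalance(flows, hosts=["esx1", "esx2"], capacity=2))
```

Note what the inputs don't include: nothing in there knows whether two VMs are even *allowed* to share a host or segment. Hold that thought.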

The security wonk is unavailable for comment. He's just suffered a coronary event. Segmented network architecture based upon business policy, security, compliance and risk tolerance makes it very difficult to perform this level of automation via service governors today, especially when that segmentation expresses asset criticality, role or function as a function of (gulp) compliance, let's say.

Again, the concept works great in a flat network where asset grouping is, for the most part, irrelevant (hopefully governed by a policy asserting such) and what you're talking about is balancing the compute with network and storage, but the moment you introduce security, compliance and risk management as factors into the decision fabric, things get very, very difficult.

Now, if you're Cisco and VMware, the models for how security engines apply policy consistently across these fluid virtualized networks are starting to take shape, but what we're missing are the set of compacts or contracts that consistently define and enforce these policies no matter where workloads move (and control *if* they can move), and how these requirements factor into the governance layer.

The standardization of governance approaches — even at the network layer — is lacking. There are lots of discrete tools available, but the level of integration and the input streams and output telemetry are not complete.

If you take a look, as an example, at CIRBA's exceptional transformational analytics and capacity management solution, replete with their multi-dimensional array of business process, technical infrastructure and resource mapping, they have no input for risk assessment data, compliance or "security" as variables.

When you look at the utility brought forward by the dynamic, agile and flexible capabilities of virtualized infrastructure, it's hard not to extrapolate all the fantastic things we could do. 

Unfortunately, the crushing weight of what happens when we introduce security, compliance and risk management to the dance means we have a more sobering discussion about those realities.

Here's an example reduced to the ridiculous: we have an interesting time architecting networks to maximize throughput, reduce latency and maximize resilience in the face of what can happen with convergence issues and flapping when we have a "routing" problem.

Can you imagine what might happen when you start bouncing VM's around the network to maximize efficiency while simultaneously, based upon disassociated security policy violations, making unavailable the very resources whose availability you sought to maximize? Fun, eh?

While we're witnessing a phase shift in how we design and model our networks to support more dynamic resources and more templated networks, we can't continue to mention the benefits and simply assume we'll catch up on the magical policy side later.

So for me, Virtual Routing is the anti-matter of network SECURITY, not network routing…or maybe more succinctly, perhaps security doesn't matter at all?

/Hoff

Categories: Cisco, Virtualization, VMware Tags:

Beyond the Sumo Match: Crosby, Herrod, Skoudis and Hoff…VirtSec Death Match @ RSA!

December 15th, 2008 2 comments

Besides the sumo-suit wrestling match I'm organizing between myself and Simon Crosby at the upcoming RSA 2009 show, I'm really excited to announce that there will be another exciting virtualization security (VirtSec) event happening at the show.

Thanks to Tim Mather at RSA, much scheming and planning has paid off:

"In this verbal cage match session, two well known critics of virtualization security take on two virtualization company CTOs as they spar over how best to secure virtualization platforms: who should be responsible for securing it, and how that ultimately impacts customers and attackers.  We have Hoff and Skoudis versus Crosby and Herrod.  Refereeing will be respected analyst, Antonopoulos."

Simon Crosby (Citrix CTO), Steve Herrod (VMware CTO), Ed Skoudis (InGuardians) and myself will have a lively debate moderated by Andreas Antonopoulos (Nemertes) that is sure to entertain and educate folks as to the many fascinating issues surrounding the present and future of VirtSec.  I expect to push the discussion toward cloud security also…

WAR! 😉

Stay tuned for further announcements.

/Hoff

GigaOm’s Alistair Croll on Cloud Security: The Sky Is Falling!…and So Is My Tolerance For Absurdity

December 14th, 2008 3 comments
I just read the latest blog post from Alistair Croll of GigaOm, titled "Cloud Security: The Sky Is Falling!," in which he suggests that we pillow-hugging security wonks ought to loosen our death grips on our data, because not only are we flapping our worry feathers for nothing, but security in "the Cloud" will result in better security than we have today.

It's an interesting assertion, really: that despite no innovative changes in the underpinnings of security technology, no advances in security architecture or models, and no fundamental security operational enhancements besides the notion of enhanced "monitoring," simply outsourcing infrastructure to a third party "in the cloud" will in some way make security "better," whatever version of "the Cloud" you may be describing:

I don’t believe that clouds themselves will cause the security breaches and data theft they anticipate; in many ways, clouds will result in better security. Here’s why:

    • Fewer humans – Most computer breaches are the result of human error; only 20-40 percent stem from technical malfunctions. Cloud operators that want to be profitable take humans out of the loop whenever possible.
    • Better tools – Clouds can afford high-end data protection and security monitoring tools, as well as the experts to run them. I trust Amazon’s operational skills far more than my own.
    • Enforced processes – You could probably get a co-worker to change your company’s IT infrastructure. But try doing it with a cloud provider without the proper authorization: You simply won’t be able to.
    • Not your employees — Most security breaches are committed by internal employees. Cloud operators don’t work for you. When it comes to corporate espionage, employees are a much more likely target.

Of course it takes people to muck things up; it always has and always will. Rushing to embrace a "new" computing model without the introduction of appropriately compensating controls and adapted risk assessment/management methodologies and practices will absolutely introduce new threats, vulnerabilities and risk, at a pace driven by supposed economic incentives that have people initially foaming at their good fortune and then fuming when it all goes bad.

This comes down to the old maxim: "guns don't kill people, people kill people." Certainly "the Cloud" alone won't increase breaches and data theft, but using it without appropriate safeguards will.

This is an issue of squeezing the balloon. The problem doesn't change in volume, it just changes shape.

Those of us concerned about security and privacy in cloud computing models have good reason to be concerned; we have lived with these sorts of disruptive innovations and technologies before, and they really, really screw things up, because the security models and technology we can lean on to manage risk are not adapted to this at all and the velocity of change eclipses our ability to do our jobs competently.

Further bonking things up is the very definition of "the Cloud(s)" in the first place.

Despite the obvious differences in business models, use cases and technical architecture, as well as the non-existence of a singularity called "The Cloud," this article generalizes and marginalizes the security challenges of cloud computing regardless. In fact, it leans so heavily on one leg of the IT stool (people) that it downplays the other two (process and technology) as if they were magically addressed.

To be fair, I can certainly see Alistair's argument holding water within the context of an SME/SMB with no dedicated expertise in security and little or no existing cost burden in IT infrastructure. The premise: let your outsourced vendor provide you with the expertise in security you don't have, as they have a vested interest to do so and can do it better than you.

The argument hinges on two things: that insiders intent on maliciously tampering with "infrastructure" are your biggest risk and that "the cloud" eliminates them, and that infrastructure and business automation, heretofore a highly sought-after element of enterprise modernization efforts, is readily available now and floating about in the cloud despite its general absence in the enterprise.

So here's what's amusing to me:

  1. It takes humans to operate the cloud infrastructure. These human operators, despite automation, still suffer from the same scale and knowledge limitations as those in the real world. Further, the service governance layers that translate business process, context and risk into enforceable policy across a heterogeneous infrastructure aren't exactly mature.

  2. The notion that better tools exist in the cloud that haven't yet been deployed in the larger enterprise seems a little unbelievable. Again, I agree that this may be the case in the SME/SMB, but it's simply not the case in larger enterprises. Given issues such as virtualization (which not all cloud providers depend upon, but bear with me), which can actually limit visibility and reach, I'd like to understand what these tools are and why we haven't heard of them before.

  3. The notion that you can get a co-worker to "…change your company's IT infrastructure" but you can't get the same thing to happen in the cloud is ludicrous. The bulk of breaches result from abuse or escalation of privilege in operating systems and applications, not general "infrastructure," and "the Cloud," having abstracted that general infrastructure from view, leaves the application layer just as ripe for abuse.

  4. Finally, Alistair's premise that the bulk of attacks originate internally is misleading. Alistair's article was written a few days ago; the Intranet Journal article he cites to bolster that claim was written in 2006 and is based upon a study done by CompTIA in 2005. 2005! That's a lifetime by today's standards. Has he read the Verizon breach study that empirically refutes many of his points? (*See below in the extended post.)
As someone who has both been on the receiving end of and designed and operated managed (nee Cloud) security as a service for customers globally, I can offer a number of exceptions to Alistair's assertions regarding operational security prowess in "the Cloud," with this being the most important:

As "the Cloud" provider adds customers, the capability to secure the infrastructure and the data transiting it ultimately becomes an issue of scale, too. The more automation that is added, the more false positives show up, especially in light of the fact that the service provider has little or no context for the information, business processes or business impact that their monitoring tools observe. You can get rid of the low-hanging fruit, but when it comes down to impacting the business, some human gets involved.

The automation that Alistair asserts is one of the most important reasons why Cloud security will be better than non-Cloud security ultimately suffers from the same lack-of-eyeballs problem that the enterprise supposedly has in the first place.

For all the supposed security experts huddled around glowing monitors in CloudSOC's, vigilantly watching over "your" applications and data in the Cloud, the dirty little secret is that they rely on basically the same operational and technical capabilities as enterprises deploy today, but without context for what it is they are supposedly protecting. Some rely on less. In fact, in some cases, unless they're protecting their own infrastructure, they don't do it at all; it's still *your* job to secure the stacks, they just deal with the "pipes."

We're not all Chicken Littles, Alistair. Some of us recognize the train when it's heading toward us at full speed and prefer not to be flattened by it, is all.

/Hoff


Oh Great Security Spirit In the Cloud: Have You Seen My WAF, IPS, IDS, Firewall…

December 10th, 2008 4 comments

I'm working on the sequel to my Four Horsemen of the Virtualization Security Apocalypse presentation.

    It's called "The Frogs Who Desperately Wanted a King: An Information Security Fable of Virtualization, RTI and Cloud Computing Security." (Okay, it also has the words "interpretive dance" in it, but that's for another time…)

    Many of the interesting issues from the Four Horsemen regarding the deficiencies of security solutions and models in virtualized environments carries over directly to operationalizing security in the Cloud. 

    As a caveat, let's focus on a cost-burdened "large" enterprise who's involved in moving from physical to virtual to cloud-based services.

    I'm not trying to make a habit of picking on Amazon AWS, but it's just such a fantastic example for my point, which is quite simply:

    While the cloud allows you to obviate the need for physical compute, network and storage infrastructure, it requires a concerted build-out and potential reinvestment in a software-based security infrastructure which, for most large enterprises, does not consist of the same solutions deployed today.

Why? Let me paint the picture…

In non-virtualized environments, we generally use dedicated appliances or integrated solutions that provide one or more discrete security functions.

These solutions are generally built on hardened OS's, sometimes using custom hardware and sometimes COTS boxes which are tarted up. They are plumbed in between discretely segregated (physical or logical) zones to provide security boundaries defined by arbitrary policies based upon asset classification, role, function, access, etc. We've been doing this for decades. It's the devil we know.

In virtualized environments, we currently experience some real pain when it comes to replicating the same levels of security, performance, resiliency and scale using software-based virtual appliances that take the place of the hardware versions in our physical networks when we virtualize the interconnects within these zones.

There are lots of reasons for this, but the important part is realizing that many of the same hardware solutions are simply not available as virtual appliances, and even when they are, they are often not 1:1 in terms of functionality or capability. Again, I've covered this extensively in the Four Horsemen.

So if we abstract this to its cloudy logical conclusion, and use AWS as the "platform" example, we start to face a real problem for an enterprise that has decade(s) of security solutions, processes and talent focused on globalizing, standardizing and optimizing its existing security infrastructure and is now being forced to re-evaluate not only the technology selection but the overall architecture and operational model that supports it.

Now, it's really important that you know, dear reader, that I accept that one can definitely deploy security controls instantiated as both network and host-based instances in AWS. There are loads of options, including the "firewall" provided by AWS.
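By way of illustration, the AWS-native "firewall" amounts to security groups: port/protocol filters you drive through the API. A minimal sketch using the boto library (the group name and CIDRs are mine, purely illustrative):

```python
# EC2 security groups as "firewall": coarse ingress port/protocol filtering.
# boto reads credentials from the environment (AWS_ACCESS_KEY_ID, etc.).
import boto

conn = boto.connect_ec2()

web = conn.create_security_group("web-dmz", "Front-end web tier")
web.authorize(ip_protocol="tcp", from_port=80, to_port=80, cidr_ip="0.0.0.0/0")
# Management access only from a corporate range; no app-layer smarts here.
web.authorize(ip_protocol="tcp", from_port=22, to_port=22, cidr_ip="10.1.2.0/24")
```

Handy, but that's ingress filtering at the instance boundary, full stop; the WAF/IPS/IDS functions you've deployed elsewhere have no analogue unless you build them into the stack yourself.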

However, the problem is that in the case of building an AMI for AWS supporting a particular function (firewall, WAF, IPS, IDS, etc.) you may not have the same solutions available to you, given the lack of support for a particular distro, the lack of a "port" to a VA/VM, or issues surrounding custom kernels, communication protocols, hardware, etc. You may be limited in many cases to relying on open source solutions.

In fact, when one looks at most of the examples given when describing securing AWS instances, they almost always reference OSS solutions such as Snort, OSSEC, etc. There's absolutely NOTHING wrong with that, but it's only one dimension.

That's going to have a profound effect across many dimensions. In many cases, enterprises have standardized on a particular solution not because it's special from a "security" perspective, but because it's the easiest to manage when you have lots of them and they are supportable 24/7 by vendors with global reach and SLA's that represent the criticality of their function.

That is NOT to say that OSS solutions are not easy to manage or supportable in this fashion, but I believe it's a valid representation of the state of things.

(Why am I getting so defensive about OSS? 😉)

Taking it further, and using my favorite PCI-in-the-Cloud argument: what if the web application firewall that you've spent hundreds of thousands of dollars purchasing, tuning and deploying in support of PCI DSS throughout the corporate enterprise simply isn't available as a software module installable in an AMI in the cloud? Or the firewall? Or the IPS?

In the short term this is a real problem for customers. In the long term, it's another potential life preserver for security ISV's and an opportunity for emerging startups to think about new ways of solving this problem.

/Hoff

Infrastructure 2.0 and Virtualized/Cloud Networking: Wait, Where's My DNS/DHCP Server Again?

December 8th, 2008 5 comments

I read James Urquhart's first blog post written under the Cisco banner today, titled "The network: the final frontier for cloud computing," in which he describes the evolving role of "the network" in virtualized and cloud computing environments.

The gist of his post, which he backs up with examples from Greg Ness' series on Infrastructure 2.0, is that in order to harness the benefits of virtualization and cloud computing, we must automate; from the endpoint to the underlying platforms — including the network — manual processes need to be replaced by automated capabilities:

When was the last time you thought “network” when you heard “cloud computing”? How often have you found yourself working out exactly how you can best utilize network resources in your cloud applications? Probably never, as to date the network hasn’t registered on most peoples’ cloud radars.

This is understandable, of course, as the early cloud efforts try to push the entire concept of the network into a simple “bandwidth” bucket. However, is it right? Should the network just play dumb and let all of the intelligence originate at the endpoints?

The writing is on the wall. The next frontier to get explored in depth in the cloud world will be the network, and what the network can do to make cloud computing and virtualization easier for you and your organization.

If you walked away from James' blog as I did initially, you might be left with the impression that this isn't really about "the network" gaining additional functionality or innovative capabilities, but rather just about tarting up the ability to integrate with virtualization platforms and automate it all.

Doesn't really sound all that sexy, does it? Well, it's really not, which is why even today in non-virtualized environments we don't have very good automation and most processes still come down to Bob at the helpdesk. Virtualization and cloud are simply giving IT a swift kick in the ass to ensure we get a move on to extract as much efficiency and remove as much cost from IT as possible.

Don't be fooled by the simplicity of James' post, however, because there's a huge moose lurking under the table instead of on top of it, and it goes toward the fundamental crux of the battle brewing between all those parties interested in becoming your next "datacenter OS" provider.

There exists one catalytic element that produces very divergent perspectives in IT around what, where, why and who automates things and how, and that's the very definition of "the network" in virtualized and cloud models.

How someone might describe "the network" as either just a "bandwidth bucket" of feeds and speeds or an "intelligent, aware, sentient platform for service delivery" depends upon whether you're really talking about "the network" as a subset or a superset of "the infrastructure" at large.

Greg argues that core network services such as IP address management, DNS, DHCP, etc. are part of the infrastructure, and I agree, but given what we see today I would say that they are decidedly NOT a component of "the network" — they're generally separate and run atop the plumbing. There's interaction, for sure, but one generally relies upon these third-party service functions to deliver service. In fact, that's exactly the sort of thing that Greg's company, Infoblox, sells.

This contributes to part of this definitional quandary.

Now we have this new virtualization layer injected between the network and the rest of the infrastructure which provides a true lever and frictionless capability for some of this automation, but which further confuses the definition of "the network," since so much of the movement and delivery of information is now done at this layer and it's not integrated with the traditional hardware-based network.*

See what I mean in this post titled "The Network Is the Computer…(Is the Network, Is the Computer…)"

This is exactly why you see Cisco's investment in bringing technologies such as VN-Link and the Nexus 1000v virtual switch to virtualized environments; it homogenizes "the network." It claws back the access layer so the network teams can manage the network again (and "automate" it) while Cisco also gets its hooks deeper into the virtualization layer itself.

And that's where this gets interesting to me, because in order to truly automate virtualized and cloud computing environments, one of three things has to happen as it relates to where core/critical infrastructure services live:

1. They will continue to be separate as stand-alone applications/appliances or bundled atop an OS
2. They become absorbed by "the (traditional) network" and extend into the virtualization layer
3. They get delivered as part of the virtualization layer

So if you're like most folks and run Microsoft-based "core network services" for things (at least internally) like DNS, DHCP, etc., what does this mean to you? Well, either you continue as-is via option #1, you transition to integrated services in "the network" via option #2, or you end up with option #3 by the very virtue that you'll upgrade to Windows Server 2008 and Hyper-V anyway.

SO, this means that the level of integration between, say, Cisco and Microsoft will have to become as strong as it is with VMware in order to support the integration of these services as a "network" function, else they'll continue — in those environments at least — as a "bandwidth bucket" that provides an environment that isn't really automated.

In order to hit the sweet spot here, Cisco (and other network providers) need to start offering core network services as part of "the network." This means wrestling them away from the integrated OS solutions or simply buying their way in by acquiring and then integrating these services ($10 says Cisco buys Infoblox…)

We also see emerging vendors such as Arista Networks entering the grid/utility/cloud computing network market with high-density, high-throughput, lower-cost "cloud networking" switches that are more about (at least initially) bandwidth bucketing and high-speed interconnects than integrated and virtualized core services. We'll see how the extensibility of Arista's EOS affects this strategy in the long term.

There *is* another option, and that's where third-party automation, provisioning and governance suites come in, hoping to tame this integration wild west by knitting together this patchwork of solutions.

What's old is new again.

/Hoff

*It should be noted, however, that not all things can or should be virtualized, so physical non-virtualized components pose another interesting challenge, because automating 99% of a complex process isn't a win if the last 1% is a gating function that requires human interaction… You haven't solved the problem; you've just reduced it to fewer steps that still require Bob at the helpdesk.

     

Categories: Cloud Computing, Virtualization Tags: