Infrastructure 2.0 and Virtualized/Cloud Networking: Wait, Where’s My DNS/DHCP Server Again?

I read James Urquhart's first blog post under the Cisco banner today, "The network: the final frontier for cloud computing," in which he describes the evolving role of "the network" in virtualized and cloud computing environments.

The gist of his post, which he backs up with examples from Greg Ness' series on Infrastructure 2.0, is that in order to harness the benefits of virtualization and cloud computing, we must automate. From the endpoint to the underlying platforms, including the network, manual processes need to be replaced by automated capabilities:

When was the last time you thought “network” when you heard “cloud computing”? How often have you found yourself working out exactly how you can best utilize network resources in your cloud applications? Probably never, as to date the network hasn’t registered on most people’s cloud radars.

This is understandable, of course, as the early cloud efforts try to push the entire concept of the network into a simple “bandwidth” bucket. However, is it right? Should the network just play dumb and let all of the intelligence originate at the endpoints?

The writing is on the wall. The next frontier to get explored in depth in the cloud world will be the network, and what the network can do to make cloud computing and virtualization easier for you and your organization.

If you walked away from James' post as I initially did, you might be left with the impression that this isn't really about "the network" gaining additional functionality or innovative capabilities, but rather just about tarting up the ability to integrate with virtualization platforms and automate it all.

Doesn't really sound all that sexy, does it? Well, it's really not, which is why even today, in non-virtualized environments, we don't have very good automation and most processes still come down to Bob at the helpdesk. Virtualization and cloud are simply giving IT a swift kick in the ass to make sure we get a move on extracting as much efficiency, and removing as much cost, from IT as possible.

Don't be fooled by the simplicity of James' post, however, because there's a huge moose lurking under the table instead of on top of it, and it goes to the crux of the battle brewing among all the parties interested in becoming your next "datacenter OS" provider.

One catalytic element produces very divergent perspectives in IT around what gets automated, where, why, how, and by whom, and that's the very definition of "the network" in virtualized and cloud models.

Whether someone describes "the network" as just a "bandwidth bucket" of feeds and speeds or as an "intelligent, aware, sentient platform for service delivery" depends upon whether you're really talking about "the network" as a subset or a superset of "the infrastructure" at large.

Greg argues that core network services such as IP address management, DNS, DHCP, etc. are part of the infrastructure, and I agree, but given what we see today I would say they are decidedly NOT a component of "the network"; they're generally separate and run atop the plumbing. There's interaction, for sure, but one generally relies upon these third-party service functions to deliver service. In fact, that's exactly the sort of thing that Greg's company, Infoblox, sells.

This contributes to part of this definitional quandary.

Now we have a new virtualization layer injected between the network and the rest of the infrastructure. It provides a true lever, and a frictionless capability, for some of this automation, but it further confuses the definition of "the network," since so much of the movement and delivery of information now happens at this layer without being integrated with the traditional hardware-based network.*

See what I mean in this post titled "The Network Is the Computer…(Is the Network, Is the Computer…)".

This is exactly why you see Cisco investing in bringing technologies such as VN-Link and the Nexus 1000V virtual switch to virtualized environments: it homogenizes "the network." It claws back the access layer so the network teams can manage the network again (and "automate" it), while also getting Cisco's hooks deeper into the virtualization layer itself.
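
To make this concrete, here is roughly what a port-profile on the 1000V looks like (the names and VLAN here are purely illustrative, not from any real deployment):

    port-profile type vethernet WEB-SERVERS
      vmware port-group
      switchport mode access
      switchport access vlan 100
      no shutdown
      state enabled

The "vmware port-group" line is the tell: the network team defines the profile in familiar NX-OS syntax, and it surfaces inside vCenter as a port group that the server/VM folks simply consume.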

And that's where this gets interesting to me, because truly automating virtualized and cloud computing environments means one of three things as it relates to where core/critical infrastructure services live:

  1. They will continue to be separate as stand-alone applications/appliances or bundled atop an OS
  2. They become absorbed by "the (traditional) network" and extend into the virtualization layer
  3. They get delivered as part of the virtualization layer

So if you're like most folks and run Microsoft-based "core network services" (at least internally) for things like DNS, DHCP, etc., what does this mean to you? Well, either you continue as-is via option #1, you transition to integrated services in "the network" via option #2, or you end up with option #3 by virtue of the fact that you'll upgrade to Windows Server 2008 and Hyper-V anyway.

So this means that the level of integration between, say, Cisco and Microsoft will have to become as strong as it is with VMware in order to support the integration of these services as a "network" function; otherwise the network will continue, in those environments at least, as a "bandwidth bucket" providing an environment that isn't really automated.

In order to hit the sweet spot here, Cisco (and other network providers) then need to start offering core network services as part of "the network." This means wresting them away from the integrated OS solutions, or simply buying their way in by acquiring and then integrating these services ($10 says Cisco buys Infoblox…)
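
To be fair, a small-scale version of "core network services in the network" has existed in IOS for ages; a branch router happily serving DHCP and answering DNS looks roughly like this (an illustrative sketch with made-up addresses, not a data center recommendation):

    ip dhcp excluded-address 10.1.1.1 10.1.1.10
    !
    ip dhcp pool BRANCH-POOL
       network 10.1.1.0 255.255.255.0
       default-router 10.1.1.1
       dns-server 10.1.1.1
    !
    ip dns server
    ip host intranet.example.com 10.1.1.50

The question is whether that model can scale up, and integrate with the virtualization layer, well enough to displace the dedicated appliances and OS-bundled services.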

We also see emerging vendors such as Arista Networks entering the grid/utility/cloud computing network market with high-density, high-throughput, lower-cost "cloud networking" switches that are (at least initially) more about bandwidth bucketing and high-speed interconnects than about integrated and virtualized core services. We'll see how the extensibility of Arista's EOS affects this strategy in the long term.

There *is* another option, and that's where third-party automation, provisioning, and governance suites come in, hoping to tame this integration wild west by knitting together the patchwork of solutions.
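
Strip away the marketing and most of these suites boil down to glue code along the following lines. Every endpoint and payload below is hypothetical, invented for illustration rather than taken from any vendor, but the pattern is the point: IPAM hands out an address, DNS gets a record, and the virtualization layer provisions the guest:

    # Hypothetical provisioning glue: all URLs and payloads are made up for
    # illustration; substitute whatever your IPAM/DNS/hypervisor APIs expose.
    import requests

    IPAM = "https://ipam.example.com/api"   # hypothetical IPAM endpoint
    DNS  = "https://dns.example.com/api"    # hypothetical DNS endpoint
    VIRT = "https://virt.example.com/api"   # hypothetical hypervisor manager

    def provision_vm(hostname: str, subnet_id: str) -> str:
        # 1. Ask the IPAM system for the next free address in the subnet.
        ip = requests.post(f"{IPAM}/subnets/{subnet_id}/next-free").json()["address"]
        # 2. Register a forward DNS record for the new guest.
        requests.post(f"{DNS}/records",
                      json={"name": hostname, "type": "A", "value": ip})
        # 3. Hand the address to the virtualization layer so the guest
        #    boots already wired into "the network."
        requests.post(f"{VIRT}/vms", json={"name": hostname, "ip": ip})
        return ip

    provision_vm("web01", "prod-vlan-100")

Three systems, three APIs, one script. Whoever owns that script, whether it's the network, the OS vendor, or a third party, effectively owns the automation.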

What's old is new again.

/Hoff

*It should be noted, however, that not all things can or should be virtualized, so physical, non-virtualized components pose another interesting challenge: automating 99% of a complex process isn't a win if the last 1% is a gating function that requires human interaction. You haven't solved the problem; you've just reduced it to fewer steps that still require Bob at the helpdesk.

 

Categories: Cloud Computing, Virtualization
  1. December 8th, 2008 at 11:25 | #1

    I was an early adopter of DHCP for IOS. I always thought DNS services belonged on it, too, but then Linksys and others popularized the concept outside of the data center. The only reason it doesn’t make sense is because of stability. IOS userland apps have been known to cause spurious reboots, and Linksys routers are known to just freeze up, usually because a watchdog process never wakes up after something like a DHCP or DNS service dies.
    Technically, everything in a data center should PXE boot, find iSCSI targets, and then boot from iSCSI. SAN doesn’t really support utility-like computing, so it’s going away, right? If this can all be done on one big iron instead of eight servers, two firewalls, two load balancers, two SSL optimization engines, sixteen storage arrays, four SAN switches, four network switches, and two border routers then I’m all for it.
    To further complicate this conversation, we can talk about DNSSEC. What I’d like to know is why, after 25 or so years, Microsoft and ISC still can’t go a year or two without a major vulnerability in their DNS servers? Then there’s additional issues such as RFC1918 (and other network) DNS PTR lookup leakage. Finally, try explaining Anycast to anyone, including CCIEs.
    I agree that it is hard to guess what will ultimately happen to these collapsing layers. However, Lucent/QIP, Solarwinds, and all those others were epic fail for 1990’s TCP/IP. Third-party products are bad. The last thing I want is to have the world dependent on Vizioncore, Veeam, or *eek* Novell for DNS/DHCP infrastructure.

  2. December 8th, 2008 at 17:18 | #2

    Hoff-
    I agree with you that Core Network Services is not "The Network" but a necessary abstraction layer, one which has been incubating in recent years as networks have become larger and more complex and have been connecting to more dynamic systems and endpoints. Didn't Mark Fabbi have a Logical Network idea (from 2006?). Maybe we call this notion "connectivity intelligence."
    Last week I had a chat with Andreas A (Nemertes) about what connectivity intelligence would mean for networking and an explosion of innovations. For example, don't you think this would be one of the fundamental differences between tired netsec and dynamic virtsec?
    Greg

  3. alex
    December 9th, 2008 at 17:01 | #3

    that is an interesting point

  4. Matt
    December 9th, 2008 at 19:21 | #4

    Why would Cisco buy Infoblox when they have a perfectly functional offering today?
    http://www.cisco.com/en/US/products/sw/netmgtsw/ps1982/

  5. August 21st, 2011 at 21:32 | #5

    Very interesting article! I agree that even if a company sends all of their applications to the cloud, there are a lot of residual network functions that have to remain onsite…routing, L4 port blocking, NAT, DHCP, content filtering, etc. Is it possible to virtualize and automate those functions on a scaled-down virtual infrastructure inside the business's perimeter?
