
Are Flat Networkers Like Flat Earthers Of Yore?

Lori MacVittie is at the Gartner DC conference today and tweeted something extraordinary from one of the sessions focused on SDN (actually, there were numerous juicy tidbits, but this one caught my attention):

Amazing, innit?

To which my response was:

Regardless of how one might “feel” about SDN, the notion of agility in service delivery wherein the network can be exposed and consumed as a service versus a trunk port and some VLANs is…the right thing.  Just because the network is “flat” doesn’t mean its services are, or that the delivery of said services is any less complex.  I just wrote about this here: The Tyranny Of Taming (Network) Traffic: Steering, Service Insertion and Chaining…

“Flat networks” end up being carved right back up into VLANs, and thus L3 routing domains, to provide isolation and security boundaries…and then, to deal with that, we get new protocols to handle VLAN exhaustion, mobility, L2 stretch and…
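For a sense of scale on the exhaustion point: 802.1Q gives you a 12-bit VLAN ID, while an overlay such as VXLAN (named here as just one representative of those newer protocols) carries a 24-bit segment ID. A quick back-of-the-envelope sketch:

```python
# 802.1Q VLAN IDs are 12 bits (values 0 and 4095 reserved), so a "flat"
# network carved back up into VLANs tops out at 4094 segments. Overlays
# such as VXLAN widen the segment ID to 24 bits.
VLAN_ID_BITS = 12
VXLAN_VNI_BITS = 24

usable_vlans = 2 ** VLAN_ID_BITS - 2   # 4094 usable VLAN IDs
vxlan_vnis = 2 ** VXLAN_VNI_BITS       # 16,777,216 VNIs

print(f"usable 802.1Q VLANs: {usable_vlans}")
print(f"VXLAN VNIs:          {vxlan_vnis:,}")
print(f"ratio:               ~{vxlan_vnis // usable_vlans:,}x")
```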

It seems like some of the people at the Gartner DC show (judging from this and other tweets, as I am not there) are abjectly allergic to any abstraction beyond that over which they can physically exercise dominion.

Where have I seen this story before?

/Beaker

  1. Will Hogan
    December 5th, 2012 at 06:12 | #1

    That is such a funny and relevant statement: “Are Flat Networkers Like Flat Earthers Of Yore?”. I think of VCD as the upstart of SDN, and its introduction into my company’s various data centers sent shivers down ‘middle-ware’ network engineers’ backs. They were averse to auto-assigning IP addresses because ‘that’s their job’ and ‘they maintain the spreadsheet’, and how could they be the ‘go to guy’ if they don’t insert themselves by controlling the IPs? They were averse to the VMware guys’ ability to create ad-hoc firewalled subnets with the click of a button – the argument being that if NEs lose such control then troubleshooting will be too difficult, or that the VM guys didn’t know enough about subnet sizing to click the button correctly.

    I for one was all for it. Any automation that curbs hours and hours of configuration, peer reviews, change notices, cable runs, equipment installs, etc., cannot be dismissed.

    Of course, my training request this year will be VMware/VCD/SDN related rather than for some switch or router… My future goal as an NE will be to maintain my needed NE duties while pulling in duties on SDN software such as running VCD or the like.

  2. Donny Parrott
    December 5th, 2012 at 09:33 | #2

    This strikes a common nerve on another front. Why are “VMware Guys” commonly considered to be lesser networking or storage engineers? Many I work with can design, implement, and manage any of the three domains – compute, network, storage…

  3. December 5th, 2012 at 14:27 | #3

    Truly amazing. I guess some people are in denial. I can say, after completing a planning meeting with one of the large networking companies yesterday, that their entire focus in 2013 is on bringing their SDN solutions to market and integrating and supporting them on their physical devices as well. They are obviously not alone in taking this direction, so I’m not sure why @lmacvittie and the attendees are unable to see the writing on the wall…

    • beaker
      December 5th, 2012 at 15:17 | #4

      One bit of clarification, Ward… Lori was simply repeating/replaying what she saw/heard; she was not endorsing that concept.

  4. December 5th, 2012 at 15:00 | #5

    This is an interesting thread. We believe that another layer of abstraction (and also encapsulation) is required (i.e. an infrastructure solution) to ease public cloud adoption, especially for enterprises.

    However, everyone (all the SDN-oriented companies) seems to be focused on a datacenter view… which has some advantages, but also tremendous adoption-related disadvantages due to complexity.

    Our approach is more at the *application* level. The application consists of multiple VMs, networking, storage, etc. The VMs that comprise the application have “supplied” services (e.g. http, https, ssh) and “required” (or consumed) services (e.g. database). All of that is managed in our “SDN” (what we call the IO overlay) layer, which is transparent to the application.
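    To make that model concrete, here’s a simplified, hypothetical sketch of the idea in code (the structure and names below are invented for this comment, not our actual descriptor format):

```python
# Hypothetical application descriptor: each VM declares "supplied" services
# (what it exposes) and "required" services (what it consumes); the overlay
# layer wires them together. Illustrative only, not any vendor's real format.

application = {
    "name": "three-tier-app",
    "vms": [
        {"name": "web01",
         "supplied": [{"service": "http", "port": 80},
                      {"service": "https", "port": 443},
                      {"service": "ssh", "port": 22}],
         "required": [{"service": "database"}]},
        {"name": "db01",
         "supplied": [{"service": "database", "port": 5432}],
         "required": []},
    ],
}

def wire(app):
    """Match each VM's required services to a VM that supplies them."""
    suppliers = {s["service"]: (vm["name"], s["port"])
                 for vm in app["vms"] for s in vm["supplied"]}
    for vm in app["vms"]:
        for req in vm["required"]:
            name, port = suppliers[req["service"]]
            print(f"{vm['name']} -> {req['service']} on {name}:{port}")

wire(application)   # prints: web01 -> database on db01:5432
```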

    So essentially, you can take your VMware or KVM VMs as-is (including their networking configuration – static IPs, DHCP, DNS; you can even do multicast or other proprietary protocols) from your datacenter and simply upload them to any cloud (today we support AWS, Rackspace and HP Cloud). Part of the solution and encapsulation also includes a new hypervisor designed to run inside a cloud VM.

    We are in beta at the moment: http://www.ravellosystems.com. Keen to get your thoughts! 🙂

  5. December 5th, 2012 at 15:20 | #6

    @beaker
    Did not mean to throw Lori under the bus or lump her in with the attendees’ viewpoint; thanks for the clarification, Hoff.

  6. December 5th, 2012 at 22:46 | #7

    Maybe it’s just me, but I think you can have a flat network and still have VLANs and layer 3. All of the networks I have built have been flat – basically a pair of layer 3 core switches, with edge switches connected to them in a mesh design. I use ESRP as my protocol of choice, which combines loop prevention with layer 3 fault tolerance. It’s a snap to manage. The ratio of network changes that go into a firewall/VPN/load balancer vs. a switch for me is about 2000:1.

    For me, a network that is not flat involves multiple layers: instead of 3 hops to get from server to server, maybe you have 5, or 7, or more. Now we have big core switches that can connect 768 wire-speed 10GbE ports in a quarter rack.

    I did some back-of-the-napkin math a few months back in one of my blog posts and calculated that you can get 96 blade chassis (768 quad-socket servers) connected to a pair of switches like this (each blade chassis having 8x10GbE: 4 to each switch, active-active if you prefer; I prefer active-passive) and have ~50,000 CPU cores (16-core Opterons) and ~384TB of memory fairly easily on just one pair of switches (plus the bridging/switching modules in the blade chassis). That’s enough horsepower to drive a lot of stuff, and you can keep the network design really flat. That doesn’t mean you can’t have a few hundred Layer 3 VLANs (in the event that you have tens of thousands of VMs); it’s still fairly flat, with one switching layer.

    After all that, your big switches are still only at half capacity from a ports perspective.

    That’s enough memory for 128,000 VMs @ 3GB RAM each, at a conservative consolidation ratio of roughly 2.5 VMs per CPU core. Too bad you can’t stretch a single VMware cluster even remotely that far. You can’t go beyond 128,000 VMs for this “brick”, since that is the limit of MAC addresses on the switch, so at that point you add another “layer”/zone and route to it, maybe with 4x40Gbps connections or something.
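    If you want to redo the napkin math, it pencils out roughly like this (the 512GB-per-server figure is an assumption picked to land at the ~384TB total, which I didn’t spell out above):

```python
# Redoing the back-of-the-napkin math from the comment above.
chassis = 96
servers = chassis * 8            # 8 quad-socket blades per chassis = 768 servers
cores = servers * 4 * 16         # 4 sockets x 16-core Opterons
ram_tb = servers * 512 / 1024    # assume 512GB per server -> ~384TB total

vms = int(ram_tb * 1024 / 3)     # 3GB RAM per VM
print(f"servers:  {servers}")           # 768
print(f"cores:    {cores}")             # 49,152 (~50,000)
print(f"RAM (TB): {ram_tb:.0f}")        # 384
print(f"VMs:      {vms}")               # 131,072, i.e. right at a 128K MAC table
print(f"VMs/core: {vms / cores:.2f}")   # ~2.67, roughly the 2.5:1 ratio
```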

    The point is you can scale really really high and still keep things fairly simple.

    I find SDN completely overblown, much like the “cloud” in general. IT companies see it as another buzzword they can hop onto in order to push new gear (in some cases; in others, SDN is being integrated into existing equipment – my switches are SDN-aware, though I never plan to use the functionality). The really big cloud players have had SDN for years already. SDN-like functionality (e.g. APIs) has been available for years on some networking products.

    When I look at SDN, it sort of reminds me of when the storage (and networking) folks came out with FCoE/DCB/DCE/etc. That too has been an abortion.

    • beaker
      December 6th, 2012 at 14:29 | #8

      Flat PHYSICAL vs flat LOGICAL.

      This is the question…or rather the point of departure. Conflating the two leads to confusion. And discussions of service insertion/chaining.

      And platform support for either/both.

  7. December 6th, 2012 at 14:43 | #9

    Flat physical or flat virtual – when server 1 needs to talk to server 2, if they aren’t on the same physical switch, most likely they have to go to the core. Whether they go to the core via layer 3 or layer 2 (same VLAN or different VLANs), for me it doesn’t matter. It’s still flat.

    Now if there are multiple layers, e.g. having to go through a layer 3 firewall, then it’s no longer flat to me.

  8. beaker
    December 6th, 2012 at 14:45 | #10

    @nate

    …so what if the Layer 3 firewall is virtual?

  9. Donny Parrott
    December 7th, 2012 at 07:56 | #11

    Truly flat (physical) is the target of the angst, I believe. With a number of the up-and-coming SDN developments, the decision making, routing, and firewalling are being designed at the edge. This will create a paradigm where all “processing” has occurred by the time the packet hits the wire, with only delivery remaining. No ARP, no central firewall, no central router for the virtual environment. All of those functions are distributed to the edge for local processing.

    The physical design still remains important, but becomes more focused on scale and latency. This is where virtual fabrics on IB begin to shine. One of IB’s benefits is point-to-point channels. If technology can complete all processing on data transfers, request a link from processor 1 on server 1 to processor 3 on server 2, and then transmit without middleware…

    There are also discussions of automated networks in correlation with business operations. A backup network could be established and delivered to the needed systems between the hours of 1 and 5 AM, with priority and QoS. When the time slot closes, the network is completely removed.
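    A hypothetical sketch of that time-windowed backup network, against an imaginary controller API (the client class and its calls are invented for illustration; no real SDN controller is implied):

```python
import sched
import time

class FakeSDNController:
    """Stand-in for an SDN controller's northbound API (invented)."""
    def create_network(self, name, members, qos_priority):
        print(f"created {name} for {members} with QoS priority {qos_priority}")

    def delete_network(self, name):
        print(f"removed {name}")

controller = FakeSDNController()
scheduler = sched.scheduler(time.time, time.sleep)

def open_window():
    # 1 AM: bring up the backup network with priority/QoS for the systems that need it
    controller.create_network("backup-net", ["db01", "backup01"], qos_priority="high")

def close_window():
    # 5 AM: the time slot closes and the network is completely removed
    controller.delete_network("backup-net")

# Real deployments would schedule absolute 1 AM / 5 AM timestamps; the window
# is compressed to seconds here so the sketch actually runs.
scheduler.enter(1, 1, open_window)
scheduler.enter(5, 1, close_window)
scheduler.run()
```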
