Search Results

Keyword: ‘infrastructure 2.0’

Ron Popeil and Cloud Computing In Poetic Review…

February 27th, 2009 No comments


The uptake of computing
using the cloud,
would make the king of all marketeers
— Ron Popeil — proud

He's the guy who came out
with the canned spray on hair,
the oven you set and forget
without care

He had the bass fishing rod
you could fit in your pocket,
the Veg-O-Matic appliance
with which you could chop it

Mr. Microphone, it seems, 
was ahead of its time
Karaoke meets Facebook
Oh, how divine!

The smokeless ashtray,
the Cap Snaffler, drain buster
selling you all of the crap
Infomercials could muster

His inventions solved problems
some common, some new
If you ordered them quickly
he might send you two!

Back to the Cloud
and how it's related
to the many wonders
that Sir Ron has created

The cloud fulfills promises
that IT has made:
agility, better service
at a lower pay grade

You can scale up, scale down
pay for just what you use
Elastic infrastructure
what you get's what you choose

We've got public and private,
outside and in,
on-premise, off-premise
thick platforms or thin

The offerings are flooding
the wires en masse
Everything, it now seems,
is some sort of *aaS

You've got infrastructure,
platforms, software and storage.
Integration, SOA 
with full vendor whoreage

Some folks equate
virtualization with cloud
The platform providers
shout this vision out loud

'Course the OS contingent
has something to say
that cloud and virt
is part of their play

However you see it,
and whatever its form
the Cloud's getting bigger
it's starting to storm

Raining down on us all
is computational glory
but I wonder, dear friends,
'bout the end of this story

Will the Cloud truly bring value?
Solve problems that matter?
Or is it about 
vendors' wallets a-fatter?

*I* think the Cloud
has wonderful promise
If the low-hanging IT fruit
can be lifted 'way from us

The Cloud is a function
that's forging new thought
Pushing the boundaries
and theories we've bought

It's profoundly game changing
and as long as we focus
and don't buy into the
hyped hocus pocus

So before we end up
with a Cloud that "slices and dices"
that never gets dull,
mashes, grates, grinds and rices

It's important to state
what problem we're solving
so the Cloud doesn't end up
with its value de-evolving

—-

BTW, if you want to see more of my Cloud and Security poems, just check here.

The Network Is the Computer…(Is the Network, Is the Computer…)

September 17th, 2008 9 comments

If there's one motif emerging from VMworld this year, it's very much the maxim "be careful what you ask for, because you might just get it."

The precipitate convergence of virtualized compute, network and storage is really beginning to take significant form; after five hard years of hungering for the materialization of the technology, enterprise architecture, and market/business readiness, the notion of a virtualized datacenter OS with a value proposition larger than just "cost-optimized infrastructure" has now become a deliciously palpable reality.

The overlap and collision of many leading technology providers' "next generation" datacenter (OS) blueprints is something I have written about before.  In many cases there's reasonable alignment between the overall messaging and promised end result, but the proof is in the implementation pudding. I'm not going to rehash this here because I instead want to pick on something I've been talking about for quite some time.

From a network and security perspective, things are about to (again) fundamentally and profoundly change in terms of how we operationally design, provision, orchestrate, manage, govern and secure our infrastructure, applications and information.  It's important to realize that this goes way beyond just  adding a 'v' to the name of a product. 

What's incredibly interesting is the definition and context of where and what makes up the "network" that transports all our bits and how the resources and transports interact to deliver them securely.

It should be clear that even in a homogeneous platform deployment, there exists an overwhelmingly complex conglomerate of mechanisms that make up the machinery enabling virtualization today.  I think it's fair to say that we're having a difficult time dealing with the non-virtualized model we have today.  Virtualization?  She's a cruel mistress bent on reminding us we've yet to take out the trash as promised.

I'm going to use this post to highlight just how "complexly simple" virtual networking and security have become using as an example the last two days' worth of  announcements, initiatives and demonstrations of technology and solutions from Intel, VMware, Cisco and the dozens of security ISV's we know and love.

Each of the bumps in these virtual wires deserves its own post, which they are going to get, especially VMware's vNetwork/VMsafe, distributed network switch, and Cisco's Nexus 1000v virtual switch announcements.  I'm going to break each of these elemental functions down in much more detail later as they are simply amazing.

Now that networking is abstracted across almost every layer of this model and in many cases managed by separate organizational siloes and technologies, how on earth are we going to instantiate a security policy that is consistent across all strata?  We're used to this problem today in physical appliances, but the isolation and well-definable goesinta/goesouta perimeterized boundaries allow us to easily draw lines around where these policy differentials intersect.

It used to be the devil you knew.  Now it's eleven different devils in disguise.

As you visualize the model below and examine how it applies to your experience, I challenge you to tell me where the "network" lives in this stack and how,  at a minimum, you think you're going to secure it.   This is where all those vendor roadmaps that are colliding and intersecting start to look like a hodgepodge:

[Diagram: where does the "network" live in this virtualized stack?]

In the example model I show here, any one of these elements — potentially present in a single VMware ESX host — can directly or indirectly instantiate a set of networking or security functions and policies that are discrete from one another's purview but ultimately interrelated or even dependent in ways and using  methods we've not enjoyed before.  

In many cases, these layered components are abstracted from one another and managed by separate groups.  We're seeing the re-emergence of network-centricity; it's just that the network is camouflaged in all its cloudy goodness.  This isn't a story where we talk about clearly demarcated hosts that plug into "THE" network, regardless of whether there's a hypervisor in the picture.

 

Here's where it gets fun…

In this model you have agents in the Guest OS interacting with security/networking virtual appliances on the ESX host either inline or via vnetworking APIs (switching or security) which in turn uses a fastpath networking kernel driver  connected to VMware's vSwitch while another VA/VM is connected to a Cisco Nexus 1000v vSwitch implemented as a second distributed virtual network switching fabric which are all running atop an Intel CPU utilizing SR-IOV via VT-d in the chipset which in turn allows VM's to direct attach (bypassing the VMM) to NIC cards with embedded switching connected to your network/storage fabrics…

Mass hysteria, cats and dogs living together…

So I'll ask you again: "Where's the network in that picture?"  Or, more precisely, "where isn't it?"  
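
To make that question a bit more concrete, here's a purely conceptual sketch (Python used only as shorthand; the layer names paraphrase the chain above, and none of this is a real VMware, Cisco or Intel API) of how many discrete places a single "allow web traffic to that VM" policy could end up being instantiated, each potentially owned by a different silo:

    # Conceptual sketch only: places within a single virtualized host where a
    # networking/security policy might have to be expressed. The layer names
    # paraphrase the chain described above; no real vendor API is implied.
    ENFORCEMENT_POINTS = [
        ("guest OS agent",                                        "server/apps team"),
        ("security virtual appliance (inline or via API)",        "security team"),
        ("vNetworking/VMsafe fastpath kernel driver",             "security team"),
        ("VMware vSwitch",                                        "virtualization team"),
        ("Cisco Nexus 1000v distributed vSwitch",                 "network team"),
        ("SR-IOV/VT-d direct-attach NIC with embedded switching", "network team"),
        ("physical network/storage fabric",                       "network team"),
    ]

    policy = "allow tcp/443 to the web VM"
    for layer, owner in ENFORCEMENT_POINTS:
        print("'%s' must be expressed at: %-55s (%s)" % (policy, layer, owner))

    silos = {owner for _, owner in ENFORCEMENT_POINTS}
    print("%d layers, %d organizational silos, one policy" % (len(ENFORCEMENT_POINTS), len(silos)))

That's seven layers and at least three silos for one rule before a packet ever leaves the host; keeping those definitions from drifting apart is the real problem.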

This is so hugely profound, but that may be because I've been exposed to each of the bubbles in this diagram and see how each of them relate or do not.  When you step back and look at how, as an example, Cisco and VMware are both going through strategic sea changes in how they are thinking about networking and security, it's truly amazing, but I think the notion of network intelligence is a little less cut and dried than some might have us believe.

Is this as mind-blowing to you as it is to me?  If not, wait until I rip open the whole vNetworking and Nexus 1000v stuff.  Very, very cool.

/Hoff

Categories: Cisco, Virtualization, VMware Tags:

Storm’s-a-Brewin’: How Many Clouds Are You Going to Need?

July 20th, 2008 1 comment

For the second time in some months, Amazon's S3 (Simple Storage Service), one of the most "invisibly visible" examples of the intersection of Web2.0 and cloud computing, has suffered some noticeable availability hiccups.

Or, if you prefer to use Amazon's vernacular, "elevated error rates" 😉

Many well-known companies such as Twitter rely upon content hosted via Amazon’s S3 which is billed as offering the following capabilities:

Amazon S3 provides a simple web services interface that can be used to store and retrieve any amount of data, at any time, from anywhere on the web. It gives any developer access to the same highly scalable, reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its own global network of web sites. The service aims to maximize benefits of scale and to pass those benefits on to developers.

It’s not realistic to think that infrastructure as complex as this won’t suffer service disruption, but one has to wonder what companies who rely on the purported resiliency of the "cloud" from a single provider do in cases where like it’s namesake, the skies open up and the service takes a dump?

I’ll go one further.  If today you happen to use S3 for content hosting and wanted like-for-like functionality and service resiliency with a secondary provider, would your app. stack allow you to pull it off without downtime?

What happens if your apps are hosted in a cloud, too?

Sounds like a high-pressure front to me…

Next up: "CPE Security Is Dead(?): All Hail Security in the Cloud(?)"

😉

/Hoff

Categories: Cloud Computing Tags:

Self Healing Intrusion Tolerance…

June 22nd, 2008 1 comment

Tim Greene from Computerworld wrote a story last week titled "Security software makes virtual servers a moving target."

This story draws attention to a piece on the same topic that popped up a while ago (see Dark Reading) about some research led by George Mason University professor Arun Sood that is being productized and marketed as "Self Cleansing Intrusion Tolerance (SCIT)."

SCIT is based upon the premise that taking machines (within a virtualized environment) in and out of service rapidly, and additionally substituting the underlying operating system/application combinations, reduces the exposure to attack and hastens the remediation/mitigation process by introducing the notion of what Sood calls "security by diversity."
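
For illustration only, here's a toy sketch of that rotation premise in Python. The image names and the provision/retire calls are hypothetical placeholders rather than any real hypervisor API, but they show the core idea: bound each instance's exposure window and rotate through diverse builds.

    # Toy illustration of the SCIT premise: rotate short-lived, diverse VM images
    # through the "in service" slot so no single instance stays exposed for long.
    # The image list and provision/retire calls are hypothetical placeholders.
    import itertools
    import time

    DIVERSE_IMAGES = ["dns-linux-bind", "dns-bsd-unbound", "dns-linux-nsd"]

    def provision(image):
        print("bringing %s online" % image)
        return image                        # stand-in for a VM handle

    def retire(vm):
        print("tearing down %s (and any foothold an attacker gained on it)" % vm)

    def scit_rotation(exposure_window=5, cycles=3):
        for image in itertools.islice(itertools.cycle(DIVERSE_IMAGES), cycles):
            vm = provision(image)           # fresh, known-good instance takes traffic
            time.sleep(exposure_window)     # bounded time in service
            retire(vm)                      # old instance is destroyed, not cleaned

    if __name__ == "__main__":
        scit_rotation(exposure_window=1)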

Examples are given in the article suggesting the applicability of application types for SCIT:

SCIT is best suited to servers with short transaction times and has been tested with DNS, Web and single-sign-on servers, he says, which can perform effectively even if each virtual server is in use for just seconds.

In today’s multi-tier, SOA, web2.0, cloud-compute, mashup world, with or without the issue of preservation of state across even short-transactional applications, I’m not sure I see the practical utility in this approach.  The high-level concept, yes, the underlying operational reality…not so much.

Some of you might notice the, um, slightly different comparative version of Sood’s acronym reflecting my opinion of this approach in this blog entry’s title… 😉

I think that SCIT’s underlying principles lend themselves well to the notions I champion of resilient and survivable systems, but I think that the mechanical practicality of the proposed solutions — even within the highly dynamic and agile framework of virtualization — simply aren’t realistic today.

Real-time infrastructure with its dynamic orchestration, provisioning, governance, and security is certainly evolving, and we might get to the point where heterogeneous systems are autonomously secured based upon global policy definitions up and down the stack, but we are quite some time away from being able to realize this vision.

You will no doubt notice that the focal element of SCIT is the concept of a security-centric perspective on lifecycle management of VM’s.  It’s quite obvious that VM lifecycle management is a hotly-contested topic for which many of the large infrastructure players are battling. 

Security will simply be a piece of this puzzle, not the focus of it.

This is not to say that this solution is not worthy of consideration as we look out across the horizon, and from a timing perspective it will likely surface again given its "ahead of its deployable time" status, but I'm forced to consider what box I'd check in describing SCIT today:

  • Feature
  • Solution
  • Future

Neat stuff, but if you’re going to take investment and productize something, it’s got to be realistically deployable.  I’d suggest that baking this sort of functionality into the virtualization platforms themselves and allowing for universal telemetry (sort of like this) to allow for either "self cleansing intrusion tolerance" or even "self healing intrusion tolerance" is probably a more reasonable concept. 

/Hoff

Categories: Virtualization Tags:

All Your COTS Multi-Core CPU’s With Non-Optimized Security Software Are Belong To Us…

September 24th, 2007 3 comments

{No, I didn't forget to spell-check the title.  Thanks for the 20+ emails on the topic, though.  For those of you not familiar with the etymology of "…all your base are belong to us," please see here…}

I’ve been in my fair share of "discussions" regarding the perceived value in the security realm of proprietary custom hardware versus that of COTS (Commercial Off The Shelf) multi-core processor-enabled platforms.

Most of these debates center around what can be described as a philosophical design divergence suggesting that, given the evolution and availability of multi-core Intel/AMD COTS processors, the need for proprietary hardware is moot.

Advocates of the COTS approach are usually pure software-only vendors who have no hardware acceleration capabilities of their own.  They often reminisce about the past industry failures of big-dollar fixed function ASIC-based products (while seeming to ignore re-purposeable FPGA’s) to add weight to the theory that Moore’s Law is all one needs to make security software scream.

What’s really interesting and timely about this discussion is the notion of how commoditized the OEM/ODM appliance market has become when compared to the price/performance ratio offered by COTS hardware platforms such as Dell, HP, etc. 

Combine that with the notion of how encapsulated "virtual appliances" provided by the virtualization enablement strategies of folks like VMWare will be the next great equalizer in the security industry and it gets much more interesting…


There’s a Moore’s Law for hardware, but there’s also one for software…didja know?


Sadly, one of the most overlooked points in the multi-core argument is that exponentially faster hardware does not necessarily translate to exponentially improved software performance.  You may get a performance bump by forking multiple single-threaded instances of software or even pinning them via CPU/core affinity by spawning networking off to one processor/core and (for example) firewalling to another, but that's just masking the issue.
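
As a concrete sketch of that workaround (Linux-specific, in Python: fork several single-threaded workers and pin each one to its own core), something like the following spreads instances across cores without making any single instance one bit more parallel:

    # Linux-only sketch: fork one single-threaded worker per core and pin it with
    # CPU affinity. This spreads instances across cores, but no individual worker
    # becomes multi-threaded -- which is exactly the masking described above.
    import multiprocessing
    import os

    def packet_worker(core_id):
        os.sched_setaffinity(0, {core_id})  # pin this process to a single core
        print("worker %d pinned to core %d" % (os.getpid(), core_id))
        # ... a single-threaded inspection loop would run here ...

    if __name__ == "__main__":
        workers = [multiprocessing.Process(target=packet_worker, args=(c,))
                   for c in range(os.cpu_count() or 1)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()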

This is just like tossing a 426 Hemi in a Mini Cooper; your ability to use all that horsepower is limited by what torque/power you can get to the ground when the chassis isn’t designed to efficiently harness it.

For the record, as you’ll see below, I advocate a balanced approach: use proprietary, purpose-built hardware for network processing and offload/acceleration functions where appropriate and ride the Intel multi-core curve on compute stacks for general software execution with appropriately architected software crafted to take advantage of the multi-core technology.

Here’s a good example.

In my last position with Crossbeam, the X-Series platform modules relied on a combination of proprietary network processing featuring custom NPU’s, multi-core MIPS security processors and custom FPGA’s paired with Intel reference design multi-core compute blades (basically COTS-based Woodcrest boards) for running various combinations of security software from leading ISV’s.

What’s interesting about the bulk of the security software from these best-of-breed players you all know and love that run on those Intel compute blades, is that even with two sockets and dual cores per, it is difficult to squeeze large performance gains out of the ISV’s software.

Why?  There are lots of reasons.  Kernel vs. user mode, optimization for specific hardware and kernels, zero-copy network drivers, memory and data plane architectures and the like.  However, one of the most interesting contributors to this problem is the fact that many of the core components of these ISV's software were written 5+ years ago.

While these applications were born as tenants on single- and dual-processor systems, it has become obvious that developers cannot depend upon the increased clock speeds of processors or the availability of multi-core sockets alone to accelerate their single-threaded applications.

To take advantage of the increase in hardware performance, developers must redesign their applications to run in a threaded environment, as multi-core CPU architectures feature two or more processor compute engines (cores) and provide fully parallelized, hyperthreaded execution of multiple software threads.


Enter the Impending Multi-Core Crisis


But there’s a wrinkle with the pairing of this mutually-affected hardware/software growth curve that demonstrates a potential crisis with multi-core evolution.  This crisis will effect the way in which developers evaluate how to move forward with both their software and the hardware it runs on.

This comes from the blog post titled "Multicore Crisis" from SmoothSpan’s Blog:

[Chart: CPU clock speeds over time, flattening out around 2002]

The Multicore Crisis has to do with a shift in the behavior of Moore's Law. The law basically says that we can expect to double the number of transistors on a chip every 18-24 months.  For a long time, it meant that clock speeds, and hence the ability of the chip to run the same program faster, would also double along the same timeline.  This was a fabulous thing for software makers and hardware makers alike.  Software makers could write relatively bloated software (we've all complained at Microsoft for that!) and be secure in the knowledge that by the time they finished it and had it on the market for a short while, computers would be twice as fast anyway.  Hardware makers loved it because with machines getting so much faster so quickly people always had a good reason to buy new hardware.

Alas this trend has ground to a halt!  It's easy to see from the chart above that relatively little progress has been made since the curve flattens out around 2002.  Here we are 5 years later in 2007. The 3GHz chips of 2002 should be running at about 24 GHz, but in fact, Intel's latest Core 2 Extreme is running at about 3 GHz.  Doh!  I hate when this happens!  In fact, Intel made an announcement in 2003 that they were moving away from trying to increase the clock speed and over to adding more cores.  Four cores are available today, and soon there will be 8, 16, 32, or more cores.

What does this mean?  First, Moore's Law didn't stop working.  We are still getting twice as many transistors.  The Core 2 now includes 2 complete CPU's for the price of one!  However, unless you have software that's capable of taking advantage of this, it will do you no good.  It turns out there is precious little software that benefits if we look at articles such as Jeff Atwood's comparison of 4 core vs 2 core performance.  Blah!  Intel says that software has to start obeying Moore's Law.  What they mean is software will have to radically change how it is written to exploit all these new cores.  The software factories are going to have to retool, in other words.

With more and more computing moving into the cloud on the twin afterburners of SaaS and Web 2.0, we're going to see more and more centralized computing built on utility infrastructure using commodity hardware.  That means we have to learn to use thousands of these little cores.  Google did it, but only with some pretty radical new tooling.

This is fascinating stuff and may explain why many of the emerging appliances from leading network security vendors today that need optimized performance and packet processing do not depend solely on COTS multi-core server platforms. 

This is even the case with new solutions that have been written from the ground-up to take advantage of multi-core capabilities; they augment the products (much like the Crossbeam example above) with NPU’s, security processors and acceleration/offload engines.

If you don’t have acceleration hardware, as is the case for most pure software-only vendors, this means that a fundamental re-write is required in order to take advantage of all this horsepower.  Check out what Check Point has done with CoreXL which is their "…multi-core acceleration technology
[that] takes advantage of multi-core processors to provide high levels of
security inspection by dynamically sharing the load across all cores of
a CPU."

We’ll have to see how much more juice can be squeezed from the software and core stacking as the gap narrows on the processor performance increases (see above) as balanced against core density without a  complete re-tooling of software stacks versus doing it in hardware. 

Otherwise, combined with this smoothing/dipping of the Moore's Law hardware curve, not retooling software will mean that proprietary processors may play an increasingly important role as the cycle replays.

Interesting, for sure.

/Hoff

 

Categories: Technology Review Tags:

Reflections on Recent Failures in the Fragile Internet Ecosystem Due to Service Monoculture…

September 2nd, 2007 3 comments

Our digital lives and the transactions that enable them are based upon crumbling service delivery foundations and we’re being left without a leg to stand on…

I’ve blogged about this subject before, and it’s all a matter of perspective, but the latest high-profile Internet-based service failure which has had a crippling effect on users dependent upon its offerings is PayPal

Due to what looks to be a recent roll-out of code gone bad, subscription payment processing went belly-up. 

On September 1st, PayPal advised those affected that the issue should be fixed "…by September 5 or 6, and that all outstanding subscription payments would be collected."  That’s 4 days on top of the downtime sustained already.

This has been a tough last few weeks for parent company eBay as one of its other famous children, Skype, suffered its own highly-visible flame-outs due to an issue the company blamed on overwhelmed infrastructure caused by a Microsoft Patch Tuesday download.  This outage left several million users who were "dependent" upon Skype for communicating with others without a service to do so.

This is getting to the point where the services we take for granted as always being up are showing their vulnerable side, for lots of different reasons.  Some of these services are free, so that introduces a confusing debate relating to service levels and availability when one doesn't pay for said service.

The failures are increasing in frequency and downtime.  Scarier still is that I now count five recent service failures in the last four months that have affected me directly.  Not all of them are Internet-based, but they indicate a reliance on networked infrastructure that is obviously fragile:

1) United Airlines  Flight Operations Computer System Failure
2) San Francisco Power Grid Failure
3) LAX Passenger Screening System Computer System Failure
4) Skype Down for Days, and finally…
5) PayPal Subscription Processing Down

That’s quite a few, isn’t it?  Did you realize these were all during the last few months?

Most of these failures caused me inconvenience at best; some missed flights, inability to blog, failed subscription processing for web services, inability to communicate with folks…none of them life-threatening, and none of them dramatically impacting my ability to earn a wage.  But that’s me and my "luck."  Other people have not been so lucky.

Some have reasonably argued that these services do not represent "critical" infrastructure and at the level of things such as national defense, health and safety, etc. I’d have to agree.  But they could, and if our dependence on these services increases, they will.

As these services evolve and enable the economic plumbing of an entire generation of folks who expect ever-presence and conduct the bulk of their lives online, this sort of thing will turn from an inconvenience to a disaster. 

Even more interesting is that a number of these services are now owned and delivered by what I call service monocultures; eBay provides not only the auction services, but PayPal and Skype, too.  Google gives you mail, apps, search, video, ads and soon wireless and payment.

While the investment these M&A/consolidation activities generate means bigger and better services, it also increases the likelihood of cascading failure domains in an ever-expanding connectedness, especially when they are operated by a single entity.

There’s a lot of run-and-gun architecture servicing these utilities in the software driven world that isn’t as resilient as it ought to be up and down the stack.  We haven’t even scratched the tip of the iceberg on this one folks…it’s going to get nasty.  Web2.0 is just the beginning.

I think we’d have a civil war if YouTube, FaceBook, Orkut or MySpace went down.

What would people do without Google if it were to disappear for 2-3 days?

Yikes.

Knock on (virtual) wood.

/Hoff

Categories: Software as a Service (SaaS) Tags:

Follow-Up to My Cisco/VMWare Commentary

July 28th, 2007 No comments

 

Thanks very much to whomsoever at Cisco linked to my previous post(s) on Cisco/VMware and the Data Center 3.0 on the Cisco Networkers website! I can tell it was a person because they misnamed my blog as "Regional Security" instead of Rational Security… you can find it under the Blogs section here. 😉

The virtualization.info site had an interesting follow-up to the VMware/Cisco posts I blogged about previously.

DataCenter 3.0 is Actually Old?

Firstly, in a post titled "Cisco announces (old) datacenter automation solution," they discuss the legacy of the VFrame product and suggest that VFrame is actually a re-branded and updated version of software from Cisco's acquisition of TopSpin back in 2005:

Cisco is well resoluted to make the most out of virtualization hype: it first declares Datacenter 3.0 initiative (more ambitiously than IDC, which claimed Virtualization 2.0), then it re-launches a technology obtained by TopSpin acquisition in April 2005 and offered since September 2005 under new brand: VFrame.

Obviously the press release doesn't even mention that VFrame just moved from 3.0 (which has existed since May 2004, when TopSpin was developing it) to 3.1 in more than three years.

In the same posting, the ties between Cisco and VMWare are further highlighted:

A further confirmation is given by fact that VMware is involved in VFrame development program since May 2004, as reported in a Cisco confidential presentation of 2005 (page 35).

Cisco old presentation also adds a detail about what probably will be announced at VMworld, and an interesting claim:

…VFrame can provision ESX Servers over SAN.

…VMWare needs Cisco for scaling on blades…

This helps us understand even further why Mr. Chambers will be keynoting at VMWorld '07.

Meanwhile, Cisco Puts its Money where its Virtual Mouth Is

Secondly, VMware announced today that Cisco will invest $150 Million in VMware:

Cisco will purchase $150 million of VMware Class A common shares currently held by EMC Corporation, VMware's parent company, subject to customary regulatory and other closing conditions including Hart-Scott-Rodino (HSR) review. Upon closing of the investment, Cisco will own approximately 1.6 percent of VMware's total outstanding common stock (less than one percent of the combined voting power of VMware's outstanding common stock).  VMware has agreed to consider the appointment of a Cisco executive to VMware's board of directors at a future date.

Cisco's purchase is intended to strengthen inter-company collaboration towards accelerating customer adoption of VMware virtualization products with Cisco networking infrastructure and the development of customer solutions that address the intersection of virtualization and networking technologies.

In addition, VMware and Cisco have entered into a routine and customary collaboration agreement that expresses their intent to expand cooperative efforts around joint development, marketing, customer and industry initiatives.  Through improved coordination and integration of networking and virtualized infrastructure, the companies intend to foster solutions for enhanced datacenter optimization and extend the benefits of virtualization beyond the datacenter to remote offices and end-user desktops.

It should be crystal clear that Cisco and EMC are on a tear with regards to virtualization and that to Cisco, "bits is bits"; virtualizing those bits across the app stack, network, security and storage departments, coupled with a virtualized service management layer, is integral to their datacenter strategy.

It’s also no mystery as to why Mr. Chambers is keynoting @ VMWorld now, either.

/Hoff

Categories: Cisco, Virtualization, VMware Tags:

Cisco Responds to My Data Center Virtualization Post…

July 24th, 2007 2 comments

"…I will squash him like a liiiiittle bug, that Hoff!"

OK, well they weren’t responding directly to my post from last night, but as they say in the big show, "timing is everything."

My last blog entry detailed some navel gazing regarding some interesting long term strategic moves by Cisco to further embrace the virtualized data center and the impact this would have on the current and future product roadmaps.  I found it very telling that Chambers will be keynoting at this year’s VMWorld and what this means for the future.

Not 8 hours after my posting (completely coincidental, I'm sure 😉), the PR machine spit out the following set of announcements from Networkers Cisco Live titled "Cisco Unveils Plans to Transform the Data Center."  You can find more detailed information on Cisco's website here.

This announcement focused on outlining some of the near-term (2 year) proofpoints and touts the introduction of "…New Data Center Products, Services and Programs to Support a Holistic View of the Data Center." 

There’s an enormous amount of data to digest in this announcement, but the interesting bits for me to focus on are the two elements pertaining to security virtualization as well as service composition, provisioning and intelligent virtualized service delivery.   This sort of language is near and dear to my heart.

I’m only highlighting a small subsection of the release as there is a ton of storage, data mobility, multiservice fabric and WAAS stuff in there too.  This is all very important stuff, but I wanted to pay attention to the VFrame Data Center orchestration platform and the ACE XML security gateway functions since they pertain to what I have been writing about recently:

If you can choke back the bile from the  "Data Center v3.0" moniker:

…Cisco announced at a press conference today its vision for next-generation data centers, called Data Center 3.0. The Cisco vision for Data Center 3.0 entails the real-time, dynamic orchestration of infrastructure services from shared pools of virtualized server, storage and network resources, while optimizing application performance, service levels, efficiency and collaboration.

Over the next 24 months, Cisco will deliver innovative new products, programs, and capabilities to help customers realize the Cisco Data Center 3.0 vision. New products and programs announced today support that vision, representing the first steps in helping customers to create next-generation data centers.

Cisco VFrame Data Center

VFrame Data Center (VFrame DC) is an orchestration platform that leverages network intelligence to provision resources together as virtualized services. This industry-first approach greatly reduces application deployment times, improves overall resource utilization, and offers greater business agility. Further, VFrame DC includes an open API, and easily integrates with third party management applications, as well as best-of-breed server and storage virtualization offerings.

With VFrame DC, customers can now link their compute, networking and storage infrastructures together as a set of virtualized services. This services approach provides a simple yet powerful way to quickly view all the services configured at the application level to improve troubleshooting and change management. VFrame DC offers a policy engine for automating resource changes in response to infrastructure outages and performance changes. Additionally, these changes can be controlled by external monitoring systems via integration with the VFrame DC web services application programming interface (API).

I think that from my view of the world, these two elements represent a step in the right direction for Cisco.  Gasp!  Yes, I said it.  While Chambers prides himself on hyping Cisco's sensitivity to "market transitions," it's clear that Cisco gets that virtualization across the network, host and storage is actually a real market.  They're still working the security piece; however they, like Microsoft, mean business when they enter a space, and there's no doubt they're swinging for the fences with VFrame.

I think the VFrame API is critical and how robust it is will determine the success of VFrame.  It’s interesting that VFrame is productized as an appliance, but I think I get what Chambers is going to be talking about at VMWorld — how VFrame will interoperate/interact with VMWare provisioning and management toolsets. 

Interestingly, the UI and template functionality looks a hell of a lot like some others I’ve blogged about and is meant to provide an umbrella management "layer" that allows for discovery, design, provisioning, deployment and automation of services and virtualized components across resource pools of servers, network components, security and storage:

Cisco VFrame Data Center components include:

  • Cisco VFrame Data Center Appliance: Central controller that connects to Ethernet and Fibre Channel networks
  • Cisco VFrame Data Center GUI: Java-based client that accesses application running on VFrame Data Center Appliance
  • Cisco VFrame Web Services Interface and Software Development Kit: Programmable interface that allows scripting of actions for Cisco VFrame Data Center
  • Cisco VFrame Host Agent: Host agent that provides server heartbeat, capacity utilization metrics, shutdown, and other capabilities
  • Cisco VFrame Data Center Macros: Open interface that allows administrators to create custom provisioning actions

That’s ambitious to say the least.

It’s still a raucous debate with me regarding where a lot of this stuff belongs (in the network or as a service layer) and I maintain the latter.  Innovation driven by companies such as 3Tera demonstrate that the best ideas are always copied by the 800 pound gorillas once they become mainstream.

Enhanced Cisco ACE XML Gateway Software

The new Cisco Application Control Engine (ACE) Extensible Markup Language (XML) Gateway software delivers enhanced capabilities for enabling secure Web services, providing customers with better management, visibility, and performance of their XML applications and Web 2.0 services. The new software includes a wide variety of new capabilities and features plus enhanced performance monitoring and reporting, providing improved operations and capacity planning for Web services secured by the Cisco ACE XML Gateway.

I’d say this is a long overdue component for Cisco; since Chambers has been doing nothing but squawking about Web2.0, collaboration, etc., the need to integrate XML security into the security portfolio is a must, especially as we see XML as the Internet-based messaging bus for just about everything these days.

All in all I’d say Cisco is doing a good job of continuing to push the message along and while one shouldn’t see this faint praise as me softening my stance on Cisco’s execution potential, it’s yet to be seen if trying to be everything to everyone will deliver levels of service commensurate with what customers need.

Only time will tell.

/Hoff

 

Categories: Cisco, Networking, Virtualization Tags:

The Evolution of Bipedal Locomotion & the Energetic Economics of Virtualization

July 17th, 2007 5 comments

By my own admission, this is a stream-of-consciousness, wacky, caffeine-inspired rant that came about while I was listening to a conference call.   It’s my ode to paleoanthropology and how we, the knuckledraggers of IT/Security, evolve.

My apologies to anyone who actually knows anything about or makes an honest living from science; I’ve quite possibly offended all of you with this post…

I came across this interesting article posted today on the ScienceDaily website which discusses a hypothesis by a University of Arizona professor, David Raichlen, who suggests that bipedalism, or walking on two legs, evolved simply because it used less energy than quad knuckle-walking.  The energy expended whilst quad-knuckle walking is roughly four times that of walking bipedally!  That's huge.

I’m always looking for good tangential analogs for points I want to reinforce within the context of my line of work, and I found this fantastic fodder for such an exercise.

So without a lot of work on my part, I’m going to post some salient points from the article and leave it up to you to determine how, if at all, the "energetic" evolution of virtualization draws interesting parallels to this very interesting hypothesis; that the heretofore theorized complexity associated with this crucial element of human evolution was, in fact, simply an issue derived from energy efficiency which ultimately led to sustainable survivability and not necessarily because of ecological, behavioral or purely anatomical reasons:

From Here:

The origin of bipedalism, a defining feature of hominids, has been attributed to several competing hypotheses. The postural feeding hypothesis (Hunt 1996) is an ecological model. The behavioral model (Lovejoy 1981) attributes bipedality to the social, sexual and reproductive conduct of early hominids. The thermoregulatory model (Wheeler 1991) views the increased heat loss, increased cooling, reduced heat gain and reduced water requirements conferred by a bipedal stance in a hot, tropical climate as the selective pressure leading to bipedalism.

At its core, server virtualization might be described as a manifestation of how we rationalize and deal with the sliding-window impacts of time and the operational costs associated with keeping pace with the transformation and adaptation of technology in compressed footprints.  One might describe this as the "energy" (figuratively and literally) that it takes to operate our IT infrastructure.

It’s about doing more with less and being more efficient such that the "energy" used to produce and deliver services is small in comparison to the output mechanics of what is consumed.  One could suggest that once the efficiency gains (or savings?) are realized, the energy can be allocated to other more enabling abilities.  Using the ape to human bipedalism analog, one could suggest that bipedalism lead to bigger brains, better hunting/gathering skills, fashioning tools, etc.  Basically the initial step of efficiency gains leads to exponential capabilities over the long term.

So that’s my Captain Obvious declaration relating bipedalism with virtualization.  Ta Da!

From the article as sliced & diced by the fine folks at ScienceDaily:

Raichlen and his colleagues will publish the article, "Chimpanzee locomotor energetics and the origin of human bipedalism" in the online early edition of the Proceedings of the National Academy of Sciences (PNAS) during the week of July 16. The print issue will be published on July 24.

Bipedalism marks a critical divergence between humans and other apes and is considered a defining characteristic of human ancestors. It has been hypothesized that the reduced energy cost of walking upright would have provided evolutionary advantages by decreasing the cost of foraging.

"For decades now researchers have debated the role of energetics and the evolution of bipedalism," said Raichlen. "The big problem in the study of bipedalism was that there was little data out there."

The researchers collected metabolic, kinematic and kinetic data from five chimpanzees and four adult humans walking on a treadmill. The chimpanzees were trained to walk quadrupedally and bipedally on the treadmill.

Humans walking on two legs only used one-quarter of the energy that chimpanzees who knuckle-walked on four legs did. On average, the chimpanzees used the same amount of energy using two legs as they did when they used four legs. However, there was variability among chimpanzees in how much energy they used, and this difference corresponded to their different gaits and anatomy.

"We were able to tie the energetic cost in chimps to their anatomy," said Raichlen. "We were able to show exactly why certain individuals were able to walk bipedally more cheaply than others, and we did that with biomechanical modeling."

The biomechanical modeling revealed that more energy is used with shorter steps or more active muscle mass. Indeed, the chimpanzee with the longest stride was the most efficient walking upright.

"What those results allowed us to do was to look at the fossil record and see whether fossil hominins show adaptations that would have reduced bipedal energy expenditures," said Raichlen. "We and many others have found these adaptations [such as slight increases in hindlimb extension or length] in early hominins, which tells us that energetics played a pretty large role in the evolution of bipedalism."

The point here is not that I’m trying to be especially witty, but rather to illustrate that when we cut through the FUD and marketing surrounding server virtualization and focus on evolution versus revolution, some very interesting discussion points emerge regarding why folks choose to virtualize their server infrastructure.

After I attended the InterOp Data Center Summit, I walked away with a very different view of the benefits and costs of virtualization than I had before.  I think that as folks approach this topic, the realities of how the game changes once we start "walking upright" will provide a profound impact to how we view infrastructure and what the next step might bring.

Server virtualization at its most basic is about economic efficiency (read: energy == power + cooling…) plain and simple.  However, if we look beyond this as the first "step," we’ll see grid and utility computing paired with Web2.0/SaaS take us to a whole different level.  It’s going to push security to its absolute breaking point.

I liked the framing of the problem set with the bipedal analog.  I can’t wait until we come full circle, grow wings and start using mainframes again 😉

Did that make any bloody sense at all?

/Hoff

P.S. I liked Jeremiah’s evolution picture, too:

[Image: Jeremiah's evolution picture]

 

 

Categories: Virtualization Tags:

Does Centralized Data Governance Equal Centralized Data?

June 17th, 2007 4 comments

I’ve been trying to construct a palette of blog entries over the last few months which communicates the need for a holistic network, host and data-centric approach to information security and information survivability architectures. 

I’ve been paying close attention to the dynamics of the DLP/CMF market/feature positioning as well as what’s going on in enterprise information architecture with the continued emergence of WebX.0 and SOA.

That’s why I found this Computerworld article written by Jay Cline very interesting as it focused on the need for a centralized data governance function within an organization in order to manage risk associated with coping with the information management lifecycle (which includes security and survivability.)  The article went on to also discuss how the roles within the organization, namely the CIO/CTO, will also evolve in parallel.

The three primary indicators for this evolution were summarized as:

1. Convergence of information risk functions
2. Escalating risk of information compliance
3. Fundamental role of information.

Nothing terribly earth-shattering here, but the exclamation point of this article is that enabling a centralized data governance organization requires a (gasp!) tricky combination of people, process and technology:

"How does this all add up? Let me connect the dots: Data must soon become centralized, its use must be strictly controlled within legal parameters, and information must drive the business model. Companies that don't put a single, C-level person in charge of making this happen will face two brutal realities: lawsuits driving up costs and eroding trust in the company, and competitive upstarts stealing revenues through more nimble use of centralized information."

Let’s deconstruct this a little because I totally get the essence of what is proposed, but
there’s the insertion of some realities that must be discussed.  Working backwards:

  • I agree that data and its use must be strictly controlled within legal parameters.
  • I agree that a single, C-level person needs to be accountable for the data lifecycle.
  • However, whilst I don't disagree that it would be fantastic to centralize data, I think it's a nice theory but the wrong universe.

Interestingly, Richard Bejtlich focused his response to the article on this very notion, but I can't get past a couple of issues, some of them technical and some of them business-related.

There’s a confusing mish-mash alluded to in Richard’s blog of "second home" data repositories that maintain copies of data that somehow also magically enforce data control and protection schemes outside of this repository while simultaneously allowing the flexibility of data creation "locally."  The competing themes for me is that centralization of data is really irrelevant — it’s convenient — but what you really need is the (and you’ll excuse the lazy use of a politically-charged term) "DRM" functionality to work irrespective of where it’s created, stored, or used.

Centralized storage is good (and selfishly so for someone like Richard) for performing forensics and auditing, but it’s not necessarily technically or fiscally efficient and doesn’t necessarily align to an agile business model.

The timeframe for the evolution of this data centralization was not really established, but we don't have the most difficult part licked yet — the application of either the accompanying metadata describing the information assets we wish to protect OR the ability to uniformly classify and enforce its creation, distribution, utilization and destruction.

Now we’re supposed to also be able to magically centralize all our data, too?  I know that large organizations have embraced the notion of data warehousing, but it’s not the underlying data stores I’m truly worried about, it’s the combination of data from multiple silos within the data warehouses that concerns me and its distribution to multi-dimensional analytic consumers. 

You may be able to protect a DB’s table, row, column or a file, but how do you apply a policy to a distributed ETL function across multiple datasets and paths?
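
One way to picture the gap (a rough sketch, not a claim about how any shipping DLP/ERM product works): for a policy to survive an ETL hop, the classification metadata has to travel with every derived record, and every transform in the path has to be trusted to propagate and honor it.

    # Rough sketch of "policy must travel with the data": each record carries a
    # classification, a transform propagates the strictest classification of its
    # inputs to its outputs, and consumers are checked against it. Entirely
    # illustrative; no real DLP/ERM product or API is implied.

    LEVELS = {"public": 0, "internal": 1, "confidential": 2}

    def tag(value, classification):
        return {"value": value, "classification": classification}

    def join_transform(record_a, record_b):
        """A derived record inherits the most restrictive input classification."""
        strictest = max(record_a["classification"], record_b["classification"],
                        key=LEVELS.get)
        return tag((record_a["value"], record_b["value"]), strictest)

    def enforce(record, consumer_clearance):
        if LEVELS[consumer_clearance] < LEVELS[record["classification"]]:
            raise PermissionError("consumer not cleared for %s data"
                                  % record["classification"])
        return record["value"]

    if __name__ == "__main__":
        customer = tag("Jane Doe", "internal")
        card     = tag("4111********1111", "confidential")
        derived  = join_transform(customer, card)   # an ETL join across silos
        print(enforce(derived, "confidential"))     # cleared consumer: allowed
        try:
            enforce(derived, "internal")            # under-cleared consumer
        except PermissionError as err:
            print("blocked:", err)

Every path that bypasses a transform like this (a one-off extract, a spreadsheet, a mash-up) is a path where the policy silently disappears, which is the crux of the question above.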

ATAMO?  (And Then A Miracle Occurs) 

What I find intriguing about this article is that this so-described pendulum effect of data centralization (data warehousing, BI/DI) and resource centralization (data center virtualization, WAN optimization/caching, thin client computing) seems to be on a direct collision course with the way in which applications and data are being distributed with Web2.0/Service Oriented architectures and delivery underpinnings such as rich(er) client-side technologies like mash-ups and AJAX…

So what I don’t get is how one balances centralizing data when today’s emerging infrastructure
and information architectures are constructed to do just the opposite; distribute data, processing
and data re-use/transformation across the Enterprise?  We’ve already let the data genie out of the bottle and now we’re trying to cram it back in? 
(*please see below for a perfect illustration)

I ask this again within the scope of deploying a centralized data governance organization and its associated technology and processes within an agile business environment. 

/Hoff

P.S. I expect that a certain analyst friend of mine will be emailing me in T-Minus 10, 9…

*Here’s a perfect illustration of the futility of centrally storing "data."  Click on the image and notice the second bullet item…:

[Image: Google Gears]