Archive

Posts Tagged ‘Virtualization’

Virtual Networking Battle Heating Up: Citrix Leads $10 Million Investment In Vyatta

June 9th, 2009 No comments

Those crafty Citrix chaps are at it again.

Last month I reported from Citrix Synergy on discussions I had with Simon Crosby and Ian Pratt about the Citrix/Xen Openswitch, which is Citrix's answer to the Cisco Nexus 1000v married to VMware's vSphere.

Virtualization.com this morning reported that Vyatta — who describe themselves as the “open source alternative to Cisco” — just raised another round of funding, but check out who’s leading it:

Vyatta today announced it has completed its $10 million Series C round of financing led by Citrix Systems. The new funding round also includes existing investors, Comcast Interactive Capital, Panorama Capital, and ArrowPath Venture Partners. As part of the investment, Gordon Payne, senior vice president and general manager of the Delivery Systems Division at Citrix, has joined the Vyatta Board of Directors where he will assist the company in its next phase of development.

Today, Vyatta also announced that it has joined the Citrix Ready product verification program to create solutions for customers deploying cloud computing infrastructures.

Vyatta will use the funds for operating capital as the company scales its sales efforts and accelerates growth across multiple markets.

Vyatta runs on standard x86 hardware and can be virtualized with modern hypervisors, including the Citrix XenServer™ virtualization platform. Vyatta delivers a full set of networking features that allow customers to connect, protect, virtualize, and optimize their networks, improving performance, reducing costs, and increasing manageability and flexibility over proprietary networking solutions. Vyatta has been deployed by hundreds of customers world-wide in both virtual and non-virtual environments.

This is very, very interesting stuff indeed, and it's clear where Citrix has its sights aimed.  This will be good for customers regardless of platform, because it's going to drive innovation even further.

The virtual networking stacks — and what they enable — are really going to start to drive significant competitive advantage across virtualization and Cloud vendors.  It ought to give customers significant pause when it comes to thinking about their choice of platform and integration.

Nicely executed move, Mr. Crosby.

/Hoff

Video Interview – Hoff & Crosby: Who Should Secure Virtual Environments?

May 26th, 2009 No comments

Simon Crosby and I were interviewed by Mike Mimoso of SearchSecurity.com at the RSA conference.  This was after a panel at the America’s Growth Capital conference and prior to our debate which included Steve Herrod of VMware.

It's a two-part video that got a bit munged when the cameraman let the tape run out about halfway through 😉


Part 1 can be found here.

Part 2 can be found here.

Quick Bit: Virtual & Cloud Networking – Where It ISN’T Going…

May 26th, 2009 No comments

In my Four Horsemen presentation, I made reference to one of the challenges with how the networking function is being integrated into virtual environments.  I've gone on to highlight how this is exacerbated in Cloud networking as well.

Specifically, when it comes to understanding how the network plays in virtual and Cloud architectures, it's not where the network *is* in the increasingly complex virtualized, converged and unified computing architectures, it's where networking *isn't.*

What do I mean by this?  Here’s a graphical representation that I built about a year ago.  It’s well out-of-date and overly-simplified, but you get the picture:

There's networking at almost every substrate level — in the physical and virtual construct.  In our never-ending quest to balance performance, agility, resiliency and security, we're ending up with a trade-off I call simplexity: the most complex simplicity in networking we've ever seen.  I wrote about this in a blog post last year titled "The Network Is the Computer…(Is the Network, Is the Computer…)"

If you take a look at some of the more recent blips to appear on the virtual and Cloud networking  radar, you’ll see examples such as:

This list is far from inclusive.  Yes, I know I've left off blade server manufacturers and other players like HP (ProCurve) and Juniper, as well as ADC vendors like F5.  It's not that I don't appreciate their solutions; it's just that I have only a couple of free cycles to write this, and the list above is what's at the top of my stack.

I plan on writing in more detail about the impact some of these technologies are having on next generation datacenters and Cloud deployments, as it’s a really interesting subject for me coming from my background at Crossbeam.

The most startling differences are in approach: either putting the networking (and all its attendant capabilities) back in the hands of the network folks, or allowing the server/virtual server admins to continue to leverage their foothold in the space and manage the network as a component of the converged and virtualized solution as a whole.

My friend @aneel (Twitter) summed it up really well this morning when comparing the Blade Network Technologies VMready offering and the Cisco Nexus 1000v:

huh.. where cisco uses nx1kv to put net control more in hands of net ppl, bnt uses vmready to put it further in server/virt admin hands

Looking at just the small sampling of solutions above, we see the diversity in integrated networking, external fabrics, converged fabrics (including storage) and add-on network processors.

It’s going to be a wild ride kids.  Buckle up.

/Hoff

Incomplete Thought: Storage In the Cloud: Winds From the ATMOS(fear)

May 18th, 2009 1 comment

I never metadata I didn’t like…

I first heard about EMC’s ATMOS Cloud-optimized storage “product” months ago:

EMC Atmos is a multi-petabyte offering for information storage and distribution. If you are looking to build cloud storage, Atmos is the ideal offering, combining massive scalability with automated data placement to help you efficiently deliver content and information services anywhere in the world.

I had lunch with Dave Graham (@davegraham) from EMC a ways back and while he was tight-lipped, we discussed ATMOS in lofty, architectural terms.  I came away from our discussion with the notion that ATMOS was more of a platform and less of a product, with a focus on managing not only stores of data but also the context, metadata and policies surrounding it.  ATMOS tasted like a service provider play with a nod to very large enterprises looking to seriously tread down the path of consolidated and intelligent storage services.

I was really intrigued with the concept of ATMOS, especially when I learned that at least one of the people who works on the team developing it also contributed to the UC Berkeley project called OceanStore from 2005:

OceanStore is a global persistent data store designed to scale to billions of users. It provides a consistent, highly-available, and durable storage utility atop an infrastructure comprised of untrusted servers.

Any computer can join the infrastructure, contributing storage or providing local user access in exchange for economic compensation. Users need only subscribe to a single OceanStore service provider, although they may consume storage and bandwidth from many different providers. The providers automatically buy and sell capacity and coverage among themselves, transparently to the users. The utility model thus combines the resources from federated systems to provide a quality of service higher than that achievable by any single company.

OceanStore caches data promiscuously; any server may create a local replica of any data object. These local replicas provide faster access and robustness to network partitions. They also reduce network congestion by localizing access traffic.

Pretty cool stuff, right?  This just goes to show that plenty of smart people have been working on “Cloud Computing” for quite some time.

Ah, the ‘Storage Cloud.’

Now, while we’ve heard of and seen storage-as-a-service in many forms, including the Cloud, today I saw a really interesting article titled “EMC, AT&T open up Atmos-based cloud storage service:”

EMC Corp.’s Atmos object-based storage system is the basis for two cloud computing services launched today at EMC World 2009 — EMC Atmos onLine and AT&T’s Synaptic Storage as a Service.
EMC’s service coincides with a new feature within the Atmos Web services API that lets organizations with Atmos systems already on-premise “federate” data – move it across data storage clouds. In this case, they’ll be able to move data from their on-premise Atmos to an external Atmos computing cloud.

Boston’s Beth Israel Deaconess Medical Center is evaluating Atmos for its next-generation storage infrastructure, and storage architect Michael Passe said he plans to test the new federation capability.

Organizations without an internal Atmos system can also send data to Atmos onLine by writing applications to its APIs. This is different than commercial graphical user interface services such as EMC’s Mozy cloud computing backup service. “There is an API requirement, but we’re already seeing people doing integration” of new Web offerings for end users such as cloud computing backup and iSCSI connectivity, according to Mike Feinberg, senior vice president of the EMC Cloud Infrastructure Group. Data-loss prevention products from RSA, the security division of EMC, can also be used with Atmos to proactively identify confidential data such as social security numbers and keep them from being sent outside the user’s firewall.

AT&T is adding Synaptic Storage as a Service to its hosted networking and security offerings, claiming to overcome the data security worries many conservative storage customers have about storing data at a third-party data center.

The federation of data across storage clouds using APIs? Information cross-pollination and collaboration? Heavy, man.

Take plays like Cisco's UCS with VMware's virtualization, stir in VN-Tag with DLP/ERM solutions, and sit it all on top of ATMOS…from an architecture perspective, you've got an amazing platform for service delivery that allows for some slick application of information-centric policy.  Sure, getting this all to stick will take time, but these are exactly the issues we're grappling with in our discussions related to portability of applications and information.
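
To make the "writing applications to its APIs" bit a little more concrete, here's a rough sketch of what an information-centric object write against a generic, REST-style storage cloud might look like.  To be clear, the endpoint, header names and auth scheme below are hypothetical placeholders I've made up for illustration (this is NOT the actual Atmos API; go read the real documentation for that), but the shape of the idea is the same: the metadata and policy travel with the object itself.

    # Illustrative sketch only: a REST-style object write with metadata attached.
    # The endpoint, header names and token are hypothetical placeholders, not the
    # real Atmos (or AT&T Synaptic) API; consult the vendor documentation for that.
    import json
    import requests

    STORAGE_ENDPOINT = "https://storage.example.com/rest/objects"  # hypothetical
    API_TOKEN = "replace-with-real-credentials"                    # hypothetical

    def put_object_with_metadata(payload, metadata):
        """Store an object along with the metadata/policy tags that travel with it."""
        headers = {
            "Authorization": "Bearer %s" % API_TOKEN,    # auth schemes vary by provider
            "Content-Type": "application/octet-stream",
            "x-meta-tags": json.dumps(metadata),         # hypothetical metadata header
        }
        resp = requests.post(STORAGE_ENDPOINT, data=payload, headers=headers, timeout=30)
        resp.raise_for_status()
        return resp.headers.get("Location")              # URI of the newly stored object

    # Policy (classification, retention, geo-placement) rides along with the
    # information itself rather than being bolted onto the container after the fact.
    object_uri = put_object_with_metadata(
        b"...customer record...",
        {"classification": "confidential", "retention": "7y", "geo": "us-only"},
    )
    print(object_uri)

Once the policy is attached to the information itself, federating that object to another cloud, or pointing DLP at it, becomes a matter of honoring the tags rather than guessing at whatever container the data happens to live in.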

Settling Back Down to Earth

This brings up a really important set of discussions that I keep harping on as the cold winds of reality start to blow.

From a security perspective, storage is the moose on the table that nobody talks about.  In virtualized environments we’re interconnecting all our hosts to islands of centralized SANs and NAS.  We’re converging our data and storage networks via CNAs and unified fabrics.

In multi-tenant Cloud environments all our data ends up being stored similarly, with the trust that segregation and security are appropriately applied.  Ever wonder how storage architectures never designed to do these sorts of things at scale can actually do so securely?  Whose responsibility is it to manage the security of these critical centerpieces of our evolving "centers of data"?

So besides my advice that security folks need to run out and get their CCIE certs, perhaps you ought to sign up for a storage security class, too.  You can also start by reading this excellent book by Himanshu Dwivedi titled “Securing Storage.”

What are YOU doing about securing storage in your enterprise or Cloud engagements?  If your answer is LUN masking, here are four Excedrin; call me after the breach.

/Hoff

Security and the Cloud – What Does That Even Mean?

May 18th, 2009 1 comment

I was chatting with Pete Lindstrom this morning about how difficult it is to frame meaningful discussion around what security and Cloud Computing means.

In my Four Horsemen presentation I reflected on the same difficulty as it relates to security and virtualization.  I arrived at separating the discussion into three parts:

Securing virtualization refers to what we need to do in order to ensure the security of the underlying virtualization platform itself.

Virtualizing security refers to how we operationalize and virtualize security capabilities — those we already have and new, evolving solutions — in order to secure our virtualized resources.

Security via virtualization refers to the security benefits, above and beyond what we might expect from non-virtualized environments, that we gain through the deployment of virtualization.

In reality, we need to break down the notion of security and Cloud Computing into similar chunks.  The reason for this is that, much like in the virtualization realm, we're struggling less with security technology solutions (as there really are few) than with the operational, organizational and compliance issues that come with this new uncharted (or poorly chartered) territory.

Further, it's important that we distinguish between offering security services from the Cloud as a platform and how we secure the Cloud as a platform itself…I've chatted about that previously.

Thus we need to understand what it means to secure — or have a provider secure — the underlying Cloud platform, how we can then apply solutions from a collective catalog of compensating controls to apply security to our Cloud resources and ultimately how we can achieve parity or even better security through Cloud Computing.

I find it disturbing that folks often come away with the opinion that I am anti-Cloud.  That's something I must obviously work on, but suffice it to say that I am incredibly passionate about Cloud Computing and about ensuring that we achieve an appropriate balance of security and survivability alongside its myriad of opportunity.

To illustrate this, I offer the talking slide from my Frogs presentation on the security benefits Cloud presents to an organization as a forcing function as it thinks about embracing Cloud Computing.  I present this slide before the security issues slide.  Why?  Because I think Cloud can be harnessed as a catalyst for moving things forward in the security realm and used as a lever to get things done:

Looking at the list of benefits, they actually highlight what I think are the top three concerns organizations have with Cloud Computing.  I believe they revolve around understanding how Cloud services provide for the following:

  • Preserving confidentiality, integrity and availability
  • Maintaining appropriate levels of identity and access control
  • Ensuring appropriate audit and compliance capability

These aren’t exactly new problems.  They are difficult problems, especially when combined with new business models and technology, but ones we need to solve.  Cloud can help.

So, what does “securing the Cloud” mean and how do we approach discussing it?

I think the most rational approach is the one the Cloud Security Alliance is taking by framing the issues around the things that matter most, pointing out how these issues with which we are familiar are both similar and different when talking about Cloud Computing.  While others still argue with defining the Cloud, we’re busy trying to get in front of the issues we know we already have.

If you haven’t had a chance to take a look at the guidance, please do!  You can discuss it here on our Google Group.

In the meantime, ponder this: Valeo utilizing Google Apps across its 30,000 users. Funny, I remember talking about CapGemini and Google doing this very thing back in 2007: Google Makes Its Move To The Corporate Enterprise Desktop – Can It Do It Securely?

Check out some of the comments in that post. Crow, anyone?

/Hoff

The UFC and UCS: Cisco Is Brock Lesnar

March 17th, 2009 7 comments

Lesnar vs. Mir

My favorite sport is mixed martial arts (MMA).

MMA is a combination of various arts and features athletes who come from a variety of backgrounds and combine many disciplines that they bring to the ring.

You’ve got wrestlers, boxers, kickboxers, muay thai practitioners, jiu jitsu artists, judoka, grapplers, freestyle fighters and even the odd karate kid.

Mixed martial artists are often better versed in one style/discipline than another given their strengths and background but as the sport has evolved, not being well-rounded means you run the risk of being overwhelmed when paired against an opponent who can knock you out, take you down, ground and pound you, submit you or wrestle/grind you into oblivion.  

The UFC (Ultimate Fighting Championship) is an organization which has driven the popularity and mainstream adoption of MMA as a recognizable and sanctioned sport and has given rise to some of the most notable MMA match-ups in recent history.

One of those match-ups included the introduction of Brock Lesnar — an extremely popular “professional” wrestler — who has made the  transition to MMA.  It should be noted that Brock Lesnar is an aberration of nature.  He is an absolute monster:  6’3″ and 276 pounds.  He is literally a wall of muscle, a veritable 800 pound gorilla.

In his first match, he was paired up against a veteran in MMA and former heavyweight champion, Frank Mir, who is an amazing grappler known for vicious submissions.  In fact, he submitted Lesnar with a nasty kneebar as Lesnar’s ground game had not yet evolved.  This is simply part of the process.  Lesnar’s second fight was against another veteran, Heath Herring, who he manhandled to victory.  Following the Herring fight, Lesnar went on to fight one of the legends of the sport and reigning heavyweight champion, Randy Couture.  

Lesnar’s skills had obviously progressed and he looked great against Couture and ultimately won by a TKO.

So what the hell does the UFC have to do with the Unified Computing System (UCS)?

Cisco UCS Components

Cisco is to UCS as Lesnar is to the UFC.

Everyone wrote Lesnar off after he entered the MMA world and especially after the first stumble against an industry veteran.

Imagine the surprise when his mass, athleticism, strength, intelligence and tenacity combined with a well-versed strategy paid off as he’s become an incredible force to be reckoned with in the MMA world as his skills progressed.  Oh, did I mention that he’s the World Heavyweight Champion now?

Cisco comes to the (datacenter) cage much as Lesnar did: an 800 pound gorilla incredibly well-versed in one set of disciplines, looking to expand into others and become just as versatile and skilled in a remarkably short period of time.  Cisco comes to win, not compete.  Yes, Lesnar stumbled in his first outing.  Now he's the World Heavyweight Champion.  Cisco will have their hiccups, too.

The first elements of UCS have emerged.  The solution suite, with the help of partners, will refine the strategy and broaden the offerings into a much more well-rounded approach.  Some of Cisco's competitors who are bristling at Cisco's UCS vision/strategy are quick to criticize them and reduce UCS to simply an ill-executed move "…entering the server market."

I’ve stated my opinions on this short-sighted perspective:

Yes, yes. We’ve talked about this before here. Cisco is introducing a blade chassis that includes compute capabilities (heretofore referred to as a ‘blade server.’)  It also includes networking, storage and virtualization all wrapped up in a tidy bundle.

So while that looks like a blade server (quack!) and walks like a blade server (quack! quack!), that doesn't mean it's going to be positioned, talked about or sold like a blade server (quack! quack! quack!)

What's my point?  What Cisco is building is just another building block of virtualized INFRASTRUCTURE. Necessary infrastructure to ensure control and relevance as their customers' networks morph.

My point is that what Cisco is building is the natural by-product of converged technologies with an approach that deserves attention.  It *is* unified computing.  It’s a solution that includes integrated capabilities that otherwise customers would be responsible for piecing together themselves…and that’s one of the biggest problems we have with disruptive innovation today: integration.

 

The knee-jerk dismissals we've witnessed since yesterday from competitors downplaying the impact of UCS are very similar to the way many people reacted to Lesnar: they suggested he was one-dimensional with no core competencies beyond wrestling, discounting his ability to rapidly improve and overwhelm the competition.

Everyone seems to be focused on the 5100 — the “blade server” — and not the solution suite of which it is a single piece; a piece of a very innovative ecosystem, some Cisco, some not.  Don’t get lost in the “but it’s just a blade server and HP/IBM/Dell can do that” diatribe.  It’s the bigger picture that counts.

The 5100 is simply that — one very important piece of the evolving palette of tools which offer the promise of an integrated solution to a radically complex set of problems.

Is it complete?  Is it perfect?  Do we have all the details? Can they pull it off themselves?  The answer right now is a simple “No.”  But it doesn’t have to be.  It never has.

There’s a lot of work to do, but much like a training camp for MMA, that’s why you bring in the best partners with which to train and improve and ultimately you get to the next level.

All I know is that I’d hate to be in the Octagon with Cisco just like I would with Lesnar.

/Hoff

NWC’s Wittmann: Security in Virtualized Environments Overstated: Just Do It!

April 30th, 2007 2 comments

In the April 2007 edition of Network Computing magazine, Art Wittmann talks about server virtualization, its impact on data center consolidation and the overall drivers and benefits virtualization offers.

What’s really interesting is that while he rambles on about the benefits of power, cooling and compute cycle-reclamation, he completely befuddled me with the following statement in which he suggests that:

    "While the security threat inherent in virtualization is
     real, it’s also overstated."

I’ll get to the meaty bits in a minute as to why I think this is an asinine comment, but first a little more background on the article.

In addition to illustrating everything wrong with the way in which IT has traditionally implemented security — bolting it on after the fact rather than baking it in — it shows the recklessness with which the adoption of technology is cavalierly evangelized without an appropriate level of security and without an overall understanding of the risk such a move creates.

Wittmann manages to do this with an attitude that seeks to suggest that the speed-bump security folks and evil vendors (or in his words: nattering nabobs of negativity) are just intent on making a mountain out of a molehill.

It seems that NWC approaches the evaluation of technology and products in terms of five areas: performance, manageability, scalability, reliability and security.  He lists how virtualization has proven itself in the first four categories, but oddly sums up the fifth category (security) by ranting not about the security things that should or have been done, but rather about how it's all overblown and a conspiracy by security folks to sell more kit and peddle more FUD:

"That leaves security as the final question.  You can bet that everyone who can make a dime on questioning the security of virtualization will be doing so; the drumbeat has started and is increasing in volume. 

…I think it’s funny that he’s intimating that we’re making this stuff up.  Perhaps he’s only read the theoretical security issues and not the practical.  While things like Blue Pill are sexy and certainly add sizzle to an argument, there are some nasty security issues that are unique to the virtualized world.  The drumbeat is increasing because these threats and vulnerabilities are real and so is the risk that companies that "just do it" are going to discover.

But while the security threat is real –and you should be concerned about it — it’s also overstated.  If you can eliminate 10 or 20 servers running outdated versions of NT in favor of a single consolidated pair of servers, the task of securing the environment should be simpler or at least no more complex.  If you’re considering a server consolidation project, do it.  Be mindful of security, but don’t be dissuaded by the nattering nabobs of negativity."

As far as I am concerned, this is irresponsible and reckless journalism and displays an ignorance of the impact that technology can have when implemented without appropriate security baked in. 

Look, if we don’t have security that works in non-virtualized environments, replicating the same mistakes in a virtualized world isn’t just as bad, it’s horrific.   While it should be simpler or at least no more complex, the reality is that it is not.  The risk model changes.  Threat vectors multiply.  New vulnerabilities surface.  Controls multiply.  Operational risk increases.

We end up right back where we started; with a mess that the lure of cost and time savings causes us to rush into without doing security right from the start.

Don't just do it.  Understand the impact that a lack of technology, controls, process, and policies will have on your business before you're held accountable for what Wittmann suggests you do today with reckless abandon.  Your auditors certainly will.

/Hoff