
Archive for the ‘Infrastructure 2.0’ Category

Incomplete Thought: Storage In the Cloud: Winds From the ATMOS(fear)

May 18th, 2009 1 comment

I never metadata I didn’t like…

I first heard about EMC’s ATMOS Cloud-optimized storage “product” months ago:

EMC Atmos is a multi-petabyte offering for information storage and distribution. If you are looking to build cloud storage, Atmos is the ideal offering, combining massive scalability with automated data placement to help you efficiently deliver content and information services anywhere in the world.

I had lunch with Dave Graham (@davegraham) from EMC a ways back and while he was tight-lipped, we discussed ATMOS in lofty, architectural terms.  I came away from our discussion with the notion that ATMOS was more of a platform and less of a product, with a focus on managing not only stores of data, but also the context, metadata and policies surrounding it.  ATMOS tasted like a service provider play with a nod to very large enterprises who were looking to seriously tread down the path of consolidated and intelligent storage services.

I was really intrigued by the concept of ATMOS, especially when I learned that at least one of the people on the team developing it also contributed to the UC Berkeley project called OceanStore from 2005:

OceanStore is a global persistent data store designed to scale to billions of users. It provides a consistent, highly-available, and durable storage utility atop an infrastructure comprised of untrusted servers.

Any computer can join the infrastructure, contributing storage or providing local user access in exchange for economic compensation. Users need only subscribe to a single OceanStore service provider, although they may consume storage and bandwidth from many different providers. The providers automatically buy and sell capacity and coverage among themselves, transparently to the users. The utility model thus combines the resources from federated systems to provide a quality of service higher than that achievable by any single company.

OceanStore caches data promiscuously; any server may create a local replica of any data object. These local replicas provide faster access and robustness to network partitions. They also reduce network congestion by localizing access traffic.

Pretty cool stuff, right?  This just goes to show that plenty of smart people have been working on “Cloud Computing” for quite some time.

Ah, the ‘Storage Cloud.’

Now, while we’ve heard of and seen storage-as-a-service in many forms, including the Cloud, today I saw a really interesting article titled “EMC, AT&T open up Atmos-based cloud storage service:”

EMC Corp.’s Atmos object-based storage system is the basis for two cloud computing services launched today at EMC World 2009 — EMC Atmos onLine and AT&T’s Synaptic Storage as a Service.
EMC’s service coincides with a new feature within the Atmos Web services API that lets organizations with Atmos systems already on-premise “federate” data – move it across data storage clouds. In this case, they’ll be able to move data from their on-premise Atmos to an external Atmos computing cloud.

Boston’s Beth Israel Deaconess Medical Center is evaluating Atmos for its next-generation storage infrastructure, and storage architect Michael Passe said he plans to test the new federation capability.

Organizations without an internal Atmos system can also send data to Atmos onLine by writing applications to its APIs. This is different than commercial graphical user interface services such as EMC’s Mozy cloud computing backup service. “There is an API requirement, but we’re already seeing people doing integration” of new Web offerings for end users such as cloud computing backup and iSCSI connectivity, according to Mike Feinberg, senior vice president of the EMC Cloud Infrastructure Group. Data-loss prevention products from RSA, the security division of EMC, can also be used with Atmos to proactively identify confidential data such as social security numbers and keep them from being sent outside the user’s firewall.

AT&T is adding Synaptic Storage as a Service to its hosted networking and security offerings, claiming to overcome the data security worries many conservative storage customers have about storing data at a third-party data center.

The federation of data across storage clouds using APIs? Information cross-pollination and collaboration? Heavy, man.

Take plays like Cisco’s UCS with VMware’s virtualization, stir in VN-Tag with DLP/ERM solutions, and sit it all on top of ATMOS…from an architecture perspective, you’ve got an amazing platform for service delivery that allows for some slick application of information-centric policy.  Sure, getting this all to stick will take time, but these are the issues we’re grappling with in our discussions related to portability of applications and information.
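Just to ground the “write applications to its APIs” idea from the article above, here’s a minimal sketch of pushing an object into a cloud store and copying it to a second endpoint, i.e., the crude version of “federation.” The endpoints, the x-meta-* headers and the bearer-token auth are illustrative placeholders, not the actual Atmos onLine API.

```python
# Illustrative only: a generic REST object-store client that shows the
# shape of "write to the API" cloud storage.  Base URLs, metadata headers
# and auth are placeholders, NOT the real Atmos onLine interface.

from typing import Optional
import requests


class ObjectStore:
    def __init__(self, base_url: str, token: str):
        self.base_url = base_url.rstrip("/")
        self.headers = {"Authorization": f"Bearer {token}"}  # stand-in auth

    def put(self, name: str, data: bytes,
            metadata: Optional[dict] = None) -> None:
        """Store an object plus user metadata -- the metadata is the part
        policy engines and DLP tools would key off of."""
        headers = dict(self.headers)
        for key, value in (metadata or {}).items():
            headers[f"x-meta-{key}"] = str(value)  # hypothetical metadata header
        resp = requests.put(f"{self.base_url}/objects/{name}",
                            data=data, headers=headers)
        resp.raise_for_status()

    def get(self, name: str) -> bytes:
        resp = requests.get(f"{self.base_url}/objects/{name}",
                            headers=self.headers)
        resp.raise_for_status()
        return resp.content


def federate(name: str, src: ObjectStore, dst: ObjectStore) -> None:
    """Naive 'federation': read from the on-premise store, write to the
    external cloud.  A real platform would move policy and metadata along
    with the bits instead of just copying them."""
    dst.put(name, src.get(name), metadata={"origin": "on-premise"})


# Usage sketch (hypothetical endpoints):
# onprem = ObjectStore("https://atmos.internal.example.com", token="...")
# cloud  = ObjectStore("https://storage.provider.example.net", token="...")
# federate("imaging/scan-0001.dcm", onprem, cloud)
```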

Settling Back Down to Earth

This brings up a really important set of discussions that I keep harping on as the cold winds of reality start to blow.

From a security perspective, storage is the moose on the table that nobody talks about.  In virtualized environments we’re interconnecting all our hosts to islands of centralized SANs and NAS.  We’re converging our data and storage networks via CNAs and unified fabrics.

In multi-tenant Cloud environments all our data ends up being stored similarly with the trust that segregation and security are appropriately applied.  Ever wonder how storage architectures never designed to do these sorts of things at scale can actually do so securely? Whose responsibility is it to manage the security of these critical centerpieces of our evolving “centers of data”?

So besides my advice that security folks need to run out and get their CCIE certs, perhaps you ought to sign up for a storage security class, too.  You can also start by reading this excellent book by Himanshu Dwivedi titled “Securing Storage.”

What are YOU doing about securing storage in your enterprise or your Cloud engagements?  If your answer is LUN masking, here are four Excedrin; call me after the breach.

/Hoff

The VM Mobility Myth

April 25th, 2009 11 comments

It finally dawned on me that if I have a few hundred to a thousand people sitting in front of me at one of my presentations, I should take advantage of that collective intelligence to perform a little selfish information gathering.

I’ve had an opinion for quite some time that the rampant squawking and generalizations regarding hyper-mobility, suggesting VM sprawl and uncontrolled instance spawning, were nothing more than FUD given where we are today with the technology and platforms that supposedly enable it.

We constantly hear how organizations big and small are suffering (or will) from the evils of virtualization by way of VMs and information turning up everywhere, putting your data and assets at risk. It gets worse with the multi-tenancy issues surrounding a move to “The Cloud,” they say.

So in a couple of my panels at RSA, I asked for some sanity and fact checking.

Informally, 95% of those in attendance at the two RSA panels I engaged run VMware in production. I asked how many of those in the audience, in cases OTHER than failure, take advantage of VM mobility (such as VMotion) or some other technological capability to provide autonomic mobility of VMs in their enterprises.

About 5 people (in crowds of 100+ and 500+ respectively) raised their hands.  Given that I asked this question the second time in front of a huge audience at RSA while sitting next to the CTOs of Citrix and VMware, I’m sure they were pretty surprised by the answer, too.

The reality is that in these environments — even extremely complex and large examples — there simply isn’t that much mobility, and customers are more interested in resilience than in agility in terms of what this feature brings. That’s a really interesting and important point.

The reason for this is pretty simple: the capability to provide for integrated networking and virtualization coupled with governance and autonomics simply isn’t mature at this point. Most people are simply replicating existing zoned/perimeterized, non-virtualized network topologies in their consolidated virtualized environments and waiting for the platforms to catch up. We’re really still seeing the effects of what virtualization is doing to the classical core/distribution/access design methodology as it relates to how shackled much of this mobility is to critical components like DNS, IP addressing and layer 2 VLANs.  See Greg Ness and Lori MacVittie’s scribblings.

Furthermore, workload distribution is simply impractical for anything other than monolithic stacks because the virtualization platforms, the applications, and the networks aren’t at a point where, from a policy or intelligence perspective, they can easily and reliably self-orchestrate.

Don’t get me wrong, autonomics and business process/governance feedback loops are most definitely coming — and are absolutely required for Cloud — but they’re not here and not used much today.  This is the hard stuff we’ve skipped over because it’s really freaking hard.  Don’t believe me?  See how long folks like HP have been at their “Adaptive Enterprise” solutions.  That’s why unified fabrics make so much sense; you can get your arms around automating much, much more with a consistent set of enforceable policies and SLAs.

So the next time someone brings up this epidemic of runaway VMs, ask them to kindly provide you with empirical data demonstrating it; just because something *might* happen doesn’t mean it *does* happen.

So much of the purported risks associated with virtualization and Cloud are things based on what might happen. There’s a huge difference between possibility and probability. One of them is used for prudent analysis and risk assessment, the other for selling you something. I’ll let you figure out which is which.

The management, visibility and security tools and capabilities are arriving on our doorsteps. When and if this sort of problem actually becomes a problem, it’s quite likely we’ll have a good set of solutions to deal with it.

Until then, challenge these assertions and fears, and ask for proof not pandering to panic.

I Can Haz TCG IF-MAP Support In Your Security Product, Please…

November 10th, 2008 3 comments

In my previous post titled "Cloud Computing: Invented By Criminals, Secured By ???" I described the need for a new security model, methodology and set of technologies in the virtualized and cloud computing realms built to deal with the dynamic and distributed nature of evolving computing:

This basically means that we should distribute the sampling, detection and prevention functions across the entire networked ecosystem, not just to dedicated security appliances; each of the end nodes should communicate using a standard signaling and telemetry protocol so that common threat, vulnerability and effective disposition can be communicated up and downstream to one another and one or more management facilities.

Greg Ness from Infoblox reminded me in the comments of that post of something I was very excited about when it became news at InterOp this last April: the Trusted Computing Group's (TCG) extension to the Trusted Network Connect (TNC) architecture called IF-MAP.

IF-MAP is a standardized real-time publish/subscribe/search mechanism which utilizes a client/server, XML-based SOAP protocol to provide information about network security objects and events, including their state and activity:

IF-MAP extends the TNC architecture to support standardized, dynamic data interchange among a wide variety of networking and security components, enabling customers to implement multi-vendor systems that provide coordinated defense-in-depth.
 
Today’s security systems – such as firewalls, intrusion detection and prevention systems, endpoint security systems, data leak protection systems, etc. – operate as “silos” with little or no ability to “see” what other systems are seeing or to share their understanding of network and device behavior. 

This limits their ability to support coordinated defense-in-depth.  In addition, current NAC solutions are focused mainly on controlling network access, and lack the ability to respond in real-time to post-admission changes in security posture or to provide visibility and access control enforcement for unmanaged endpoints.  By extending TNC with IF-MAP, the TCG is providing a standard-based means to address these issues and thereby enable more powerful, flexible, open network security systems.

While TNC was initially designed to support NAC solutions, extending that capability so that any security product can subscribe to a common telemetry and information exchange/integration protocol is a fantastic idea.
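To make the publish/subscribe/search idea a bit more tangible, here’s a rough sketch of a client publishing an event about an IP address to a MAP server. Session setup (newSession) is skipped, and the server URL, credentials and XML are simplified placeholders; the authoritative schema and operations live in the TCG TNC IF-MAP specification.

```python
# Illustrative sketch only: an IF-MAP-style publish over SOAP/XML.
# The server URL, credentials, session-id and simplified envelope below
# are placeholders; consult the TCG TNC IF-MAP spec for the real schema
# and the newSession/subscribe/search/poll operations.

import requests

MAP_SERVER = "https://mapserver.example.com/ifmap"   # hypothetical endpoint

PUBLISH_BODY = """<?xml version="1.0" encoding="UTF-8"?>
<env:Envelope xmlns:env="http://www.w3.org/2003/05/soap-envelope"
              xmlns:ifmap="urn:example:ifmap">  <!-- placeholder namespace -->
  <env:Body>
    <ifmap:publish session-id="SESSION-ID-FROM-NEWSESSION">
      <update>
        <ip-address value="10.1.2.3" type="IPv4"/>
        <metadata>
          <event name="port-scan-detected" discoverer-id="ids-sensor-7"/>
        </metadata>
      </update>
    </ifmap:publish>
  </env:Body>
</env:Envelope>"""


def publish_event() -> str:
    """POST the publish request; a MAP server indexes the metadata so that
    subscribed clients (firewalls, NAC, DLP, IPS, etc.) are notified of the
    change on their next poll."""
    resp = requests.post(
        MAP_SERVER,
        data=PUBLISH_BODY.encode("utf-8"),
        headers={"Content-Type": "application/soap+xml"},
        auth=("map-client", "secret"),    # basic auth as a stand-in
    )
    resp.raise_for_status()
    return resp.text


if __name__ == "__main__":
    print(publish_event())
```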


I'm really interested in how many vendors outside of the NAC space are including IF-MAP in their roadmaps. While IF-MAP has potential in conventional non-virtualized infrastructure, I see a tremendous need for it in our move to Infrastructure 2.0 with virtualization and Cloud Computing.

Integrating, for example, IF-MAP with VM-Introspection capabilities (in VMsafe, XenAccess, etc.) would be fantastic, as you could tie the control planes of the hypervisors, management infrastructure, and provisioning/governance engines with that of security and compliance in near real-time.
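As a thought experiment, the glue could look something like the sketch below: a hypothetical introspection callback (the actual VMsafe/XenAccess APIs are not modeled) that turns a guest-level observation into metadata handed to whatever publishes into the MAP server, so the security and compliance control planes can react to what the hypervisor sees.

```python
# Hypothetical glue between hypervisor introspection and an IF-MAP client.
# Nothing here is a real VMsafe or XenAccess API; it only shows how a
# guest-level observation could be mapped onto MAP-style metadata.

from dataclasses import dataclass
from typing import Callable


@dataclass
class GuestEvent:
    vm_uuid: str       # as reported by the virtualization management layer
    guest_ip: str      # discovered via introspection
    observation: str   # e.g. "unexpected-kernel-module"


def on_introspection_event(event: GuestEvent,
                           publish: Callable[[dict], None]) -> None:
    """Translate a guest-level observation into metadata keyed to the VM's
    IP address; `publish` is any callable that forwards the dict to an
    IF-MAP client (such as a wrapper around the previous sketch)."""
    publish({
        "identifier": {"ip-address": event.guest_ip},
        "metadata": {
            "event": event.observation,
            "discoverer-id": f"hypervisor-introspection:{event.vm_uuid}",
        },
    })


# Example wiring with a stand-in publisher that just prints:
if __name__ == "__main__":
    on_introspection_event(
        GuestEvent("0a1b2c3d-vm", "10.1.2.3", "unexpected-kernel-module"),
        publish=print,
    )
```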

You can read more about the TCG's TNC IF-MAP specification here.

/Hoff