
Archive for December, 2008

CloudSQL – Accessing Datastores in the Sky using SQL…

December 2nd, 2008
I think this is definitely a precursor of things to come and introduces some really interesting security discussions to be had regarding the portability, privacy and security of datastores in the cloud.

Have you heard of Zoho?  No?  Zoho is a SaaS vendor that describes itself thusly:

Zoho is a suite of online applications (services) that you sign up for and access from our Website. The applications are free for individuals and some have a subscription fee for organizations. Our vision is to provide our customers (individuals, students, educators, non-profits, small and medium sized businesses) with the most comprehensive set of applications available anywhere (breadth); and for those applications to have enough features (depth) to make your user experience worthwhile.

Today, Zoho announced the availability of CloudSQL, which is middleware that allows customers who use Zoho's SaaS apps to "…access their data on Zoho SaaS applications using SQL queries."

From their announcement:

Zoho CloudSQL is a technology that allows developers to interact with business data stored across Zoho Services using the familiar SQL language. In addition, JDBC and ODBC database drivers make writing code a snap – just use the language construct and syntax you would use with a local database instance. Using the latest Web technology no longer requires throwing away years of coding and learning.

Zoho CloudSQL allows businesses to connect and integrate the data and applications they have in Zoho with the data and applications they have in house, or even with other SaaS services. Unlike other methods for accessing data in the cloud, CloudSQL capitalizes on enterprise developers’ years of knowledge and experience with the widely‐used SQL language. This leads to faster deployments and easier (read: less expensive) integration projects.

Basically, CloudSQL is interposed between the suite of Zoho applications and the backend datastores and functions as an intermediary receiving SQL queries against the pooled data sets using standard SQL commands and dialects. Click on the diagram below for a better idea of what this looks like.

[Diagram: Zoho CloudSQL architecture]
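
Since the announcement specifically calls out JDBC and ODBC drivers, below is a minimal sketch of what querying a Zoho datastore through CloudSQL might look like from Java. To be clear, the driver class name, JDBC URL, credentials, and the Accounts table and its columns are placeholders of my own invention rather than Zoho's documented interface; the point is simply that the calling code looks no different than it would against a local database.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class CloudSqlExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical driver class and JDBC URL; the real values would come
        // from Zoho's CloudSQL documentation.
        Class.forName("com.zoho.cloudsql.jdbc.Driver");
        String url = "jdbc:zohocloudsql://cloudsql.zoho.example/CRM";

        try (Connection conn = DriverManager.getConnection(url, "apiUser", "apiKey");
             PreparedStatement stmt = conn.prepareStatement(
                     // Standard SQL issued against a hosted Zoho dataset (hypothetical schema).
                     "SELECT account_name, annual_revenue FROM Accounts WHERE region = ?")) {
            stmt.setString(1, "EMEA");
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("account_name")
                            + " -> " + rs.getLong("annual_revenue"));
                }
            }
        }
    }
}
```
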
What's really interesting about allowing native SQL access is that it then allows much easier information interchange between apps/databases in an enterprise's "private cloud(s)" and the Zoho "public" cloud.

Further, it means that your data is more "portable" as it can be backed up, accessed, and processed by applications other than Zoho's.  Imagine if they were to extend the SQL exposure to other cloud/SaaS providers…this is where it will get really juicy. 

This sort of thing *will* happen.  Customers will see the absolute utility of exposing their cloud-based datastores and sharing them amongst business partners, much in the spirit of how it's done today, but with the datastores (or chunks of them) located off-premises.

That's all good and exciting, but obviously security questions/concerns immediately surface regarding such things as authentication, encryption, access control, input sanitization, privacy and compliance…

Today our datastores typically live inside the fortress with multiple layers of security and proxied access from applications, shielded from direct access, and yet we still have basic issues with attacks such as SQL injection.  Imagine how much fun we can have with this!
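
To make that concern concrete: the difference between a query that invites injection and one that doesn't is small, but with a SQL interface exposed to the cloud that discipline now has to hold on a WAN-facing endpoint rather than a proxied internal one. The table and column names below are hypothetical; this is just the familiar contrast in Java/JDBC terms.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class InjectionContrast {

    // Vulnerable: attacker-supplied input is concatenated straight into the SQL text.
    // Against a cloud-exposed SQL endpoint this is the same old problem, just reachable
    // from anywhere the service is.
    static ResultSet vulnerable(Connection conn, String userSuppliedName) throws Exception {
        Statement stmt = conn.createStatement();
        return stmt.executeQuery(
                "SELECT * FROM Contacts WHERE last_name = '" + userSuppliedName + "'");
    }

    // Safer: the input travels as a bound parameter, never as SQL text.
    static ResultSet parameterized(Connection conn, String userSuppliedName) throws Exception {
        PreparedStatement stmt = conn.prepareStatement(
                "SELECT * FROM Contacts WHERE last_name = ?");
        stmt.setString(1, userSuppliedName);
        return stmt.executeQuery();
    }
}
```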

The best I could find regarding security and Zoho came from their FAQ, which doesn't exactly inspire confidence given that they address logical/software security by suggesting that anti-virus software is the best line of defense for protecting your data, that "data encryption" will soon be offered as an "option," and that (implied) SSL will make you secure:

6. Is my data secured?

Many people ask us this question. And rightly so; Zoho has invested alot of time and money to ensure that your information is secure and private. We offer security on multiple levels including the physical, software and people/process levels; In fact your data is more secure than walking around with it on a laptop or even on your corporate desktops.

Physical: Zoho servers and infrastructure are located in the most secure types of data centers that have multiple levels of restrictions for access including: on-premise security guards, security cameras, biometric limited access systems, and no signage to indicate where the buildings are, bullet proof glass, earthquake ratings, etc.

Hardware: Zoho employs state of the art firewall protection on multiple levels eliminating the possibility of intrusion from outside attacks

Logical/software protection: Zoho deploys anti-virus software and scans all access 24 x7 for suspicious traffic and viruses or even inside attacks; All of this is managed and logged for auditing purposes.

Process: Very few Zoho staff have access to either the physical or logical levels of our infrastructure. Your data is therefore secure from inside access; Zoho performs regular vulnerability testing and is constantly enhancing its security at all levels. All data is backed up on multiple servers in multiple locations on a daily basis. This means that in the worst case, if one data center was compromised, your data could be restored from other locations with minimal disruption. We are also working on adding even more security; for example, you will soon be able to select a "data encryption" option to encrypt your data en route to our servers, so that in the unlikely event of your own computer getting hacked while using Zoho, your documents could be securely encrypted and inaccessible without a "certificate" which generally resides on the server away from your computer.

Fun times ahead, folks.

/Hoff

Virtual Jot Pad: The Cloud As a Fluffy Offering In the Consumerization Of IT?

December 2nd, 2008

This is a post that's bigger than a thought on Twitter but almost doesn't deserve a blog post, yet for some reason I just felt the need to write it down.  This may be one of those "well, duh" sorts of posts, but I can't quite verbalize what is tickling my noggin here.

As far as I can tell, the juicy bits stem from the intersection of cloud cost models, cloud adopter profile by company size/maturity and the concept of the consumerization of IT.

I think 😉

This thought was spawned by a couple of interesting blog posts:

  1. James Urquhart's posts titled "The Enterprise barrier-to-exit in cloud computing" and "What is the value of IT convenience," which led me to…
  2. Billy Marshall from rPath and his post titled "The Virtual Machine Tsunami."

These blogs are about different things entirely but come full circle around to the same point.

James first shed some interesting light on the business taxonomy, the sorts of IT use cases and classes of applications and operations that drive businesses and their IT operations to the cloud, distinguishing between the economically driven early adopters of the cloud among SMBs and larger, more mature enterprises in his discussion with George Reese from O'Reilly via Twitter:

George and I were coming at the problem from two different angles. George was talking about many SMB organizations, which really can't justify the cost of building their own IT infrastructure, but have been faced with a choice of doing just that, turning to (expensive and often rigid) managed hosting, or putting a server in a colo space somewhere (and maintaining that server). Not very happy choices.

Enter the cloud. Now these same businesses can simply grab capacity on demand, start and stop billing at their leisure and get real world class power, virtualization and networking infrastructure without having to put an ounce of thought into it. Yeah, it costs more than simply running a server would cost, but when you add the infrastructure/managed hosting fees/colo leases, cloud almost always looks like the better deal.

I, on the other hand, was thinking of medium to large enterprises which already own significant data center infrastructure, and already have sunk costs in power, cooling and assorted infrastructure. When looking at this class of business, these sunk costs must be added to server acquisition and operation costs when rationalizing against the costs of gaining the same services from the cloud. In this case, these investments often tip the balance, and it becomes much cheaper to use existing infrastructure (though with some automation) to deliver fixed capacity loads. As I discussed recently, the cloud generally only gets interesting for loads that are not running 24X7.

This existing investment in infrastructure therefore acts almost as a "barrier-to-exit" for these enterprises when considering moving to the cloud. It seems to me highly ironic, and perhaps somewhat unique, that certain aspects of the cloud computing market will be blazed not by organizations with multiple data centers and thousands upon thousands of servers, but by the little mom-and-pop shop that used to own a couple of servers in a colo somewhere that finally shut them down and turned to Amazon. How cool is that?

That's a really interesting differentiation that hasn't been made as much as it should, quite honestly.  In the marketing madness that has ensued, you get the feeling that everyone, including large enterprises, is rushing willy-nilly to the cloud and outsourcing the majority of their compute loads, not just the cloudbursting overflow.

Billy Marshall's post offers some profound points including one that highlights the oft-reported and oft-harder-to-prove concept of VM sprawl and the so-called "frictionless" model of IT, but with a decidedly cloud perspective. 

What was really interesting was the little incandescent bulb that began to glow when I read the following after reading James' post:

Amazon EC2 demand continues to skyrocket. It seems that business units are quickly sidestepping those IT departments that have not yet found a way to say “yes” to requests for new capacity due to capital spending constraints and high friction processes for getting applications into production (i.e. the legacy approach of provisioning servers with a general purpose OS and then attempting to install/configure the app to work on the production implementation which is no doubt different than the development environment).

I heard a rumor that a new datacenter in Oregon was underway to support this burgeoning EC2 demand. I also saw our most recent EC2 bill, and I nearly hit the roof. Turns out when you provide frictionless capacity via the hypervisor, virtual machine deployment, and variable cost payment, demand explodes. Trust me.

I've yet to figure out whether the notion of frictionless capacity is a good thing if your ability to capacity plan is outpaced by a consumption model and a capacity yield that can just continue to climb without constraint.  At what point do the cost savings over infrastructure whose costs were bounded by the resource constraints of physical servers become eclipsed by runaway use?
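
A back-of-the-envelope way to frame that crossover, echoing James' point about loads that don't run 24x7, is a simple break-even calculation. The numbers below are made up for illustration and are not anyone's actual pricing:

```java
public class CloudBreakEven {
    public static void main(String[] args) {
        // Illustrative numbers only; not actual EC2 or data-center pricing.
        double monthlyCostOwned = 400.0;  // amortized server + power/cooling/colo per month
        double cloudRatePerHour = 0.80;   // on-demand instance-hour rate
        double hoursPerMonth = 24 * 30;

        // Instance-hours per month at which renting costs as much as owning.
        double breakEvenHours = monthlyCostOwned / cloudRatePerHour;

        System.out.printf("Break-even: %.0f instance-hours/month (%.0f%% utilization)%n",
                breakEvenHours, 100 * breakEvenHours / hoursPerMonth);
        // Below that utilization the cloud is cheaper; let "frictionless" demand sprawl
        // toward 24x7 across many instances and the bounded, owned infrastructure wins.
    }
}
```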

I guess I'll have to wait to see his bill 😉

Back to James' post, he references an interchange on Twitter with George Reese (whose post on "20 Rules for Amazon Cloud Security" I am waiting to fully comment on) in which George commented:

"IT is a barrier to getting things done for most businesses; the Cloud reduces or eliminates that barrier."

…which is basically the same thing Billy said, in a Nick Carr kind of way.  The key question here is: for whom?  As it relates to the SMB, I'd agree with this statement, but the thing that really sunk in was that the statement just doesn't yet jibe for larger enterprises.  In James' second post, he drives this home:

I think these examples demonstrate an important decision point for IT organizations, especially during these times of financial strife. What is the value of IT convenience? When is it wise to choose to pay more dollars (or euros, or yen, or whatever) to gain some level of simplicity or focus or comfort? In the case of virtualization, is it always wise to leverage positive economic changes to expand service coverage? In the case of cloud computing, is it always wise to accept relatively high price points per CPU hour over managing your own cheaper compute loads?

Is the cloud about convenience or true business value?  Is any opportunity to eliminate a barrier, whether or not that barrier actually acts as a logical check and balance within the system, simply enough to drive business to the cloud?

I know the side-stepping-IT bit has been discussed ad nauseam within the context of cloud, namely when describing agility, flexibility, and economics, but it never really occurred to me that the cloud (much in the way you might talk about an iPhone) is now being marketed as another instantiation of the democratization, commoditization and consumerization of IT, almost as an application itself, and not just a means to an end.

I think the thing that was interesting to me in looking at this issue from two perspectives is the differentiation between the SMB and the larger enterprise: their respective "how, what and why" cloud use cases are very much different.  That's probably old news to most, but I usually don't think about the SMB in my daily goings-on.

Just like the iPhone and its adoption for "business use," the larger enterprise is exercising discretion in what's being dumped onto the cloud with a more measured approach due, in part, to managing risk and existing sunk costs, while the SMB is running to embrace it at full speed, not necessarily realizing the hidden costs.

/Hoff

Categories: Cloud Computing

Application Delivery Control: More Hardware Or Function Of the Hypervisor?

December 1st, 2008

Update: Oops.  I forgot to announce that I'm once again putting on my Devil's Advocacy cap. It fits nicely and the contrasting color makes my eyes pop. ;)

It should be noted that obviously I recognize that dedicated hardware offers performance and scale capabilities that in many cases are difficult (if not impossible) to replicate in virtualized software instantiations of the same functionality.

However, despite spending the best part of two years raising awareness as to the issues surrounding scalability, resiliency, performance, etc. of security software solutions in virtualized environments via my Four Horsemen of the Virtualization Security Apocalypse presentation, perception is different than reality, and many network capabilities will simply consolidate into the virtualization platforms until the next big swing of the punctuated equilibrium.

This is another classic example of "best of breed" versus "good enough," and in many cases this debate becomes a corner-case argument of speeds and feeds and the context/location of the network topology you're talking about. There's simply no way to sprinkle enough specialized hardware around to get the pervasive autonomics across the entire fabric/cloud without a huge chunk of it existing in the underlying virtualization platform or underlying network infrastructure.

THIS is the real scaling problem that software can address (by penetration) that specialized hardware cannot.

There will always be a need for dedicated hardware for specific purposes, and if you have an infrastructure service issue that requires massive hardware to support traffic loads until the sophistication and technology within the virtualization layer catch up, by all means use it!  In fact, just today, after writing this piece, Joyent announced they use f5 BigIPs to power their IaaS cloud service…

In the longer term, however, application delivery control (ADC) will ultimately become a feature of the virtual networking stack provided by software as part of a larger provisioning/governance/autonomics challenge addressed by the virtualization layer.  If you're going to get as close as possible to this new atomic unit of measurement, the VM, you're going to have to decide where the network ends and the virtualization layer begins…across every cloud you expect to host your apps and those they may transit.


I've been reading Lori McVittie's f5 DevCentral blog for quite some time.  She and Greg Ness have been feeding off one another's commentary in their discussion on "Infrastructure 2.0" and the unique set of challenges that the dynamic nature of virtualization and cloud computing place on "the network" and the corresponding service layers that tie applications and infrastructure together.

The interesting thing to me is that while I do not disagree that the infrastructure must adapt to the liquidity, agility and flexibility enabled by virtualization and become more instrumented as to the things running atop it, much of the functionality Greg and Lori allude to will ultimately become a function of the virtualization and cloud layers themselves*.

One of the more interesting memes is the one Lori summarized this morning in her post titled "Managing Virtual Infrastructure Requires an Application Centric Approach," wherein she lays out the case for infrastructure becoming "application" centric based upon the "highly dynamic" nature of virtualized and cloud computing environments:

…when applications are decoupled from the servers on which they are deployed and the network infrastructure that supports and delivers them, they cannot be effectively managed unless they are recognized as individual components themselves.

Traditional infrastructure and its associated management intrinsically ties applications to servers and servers to IP addresses and IP addresses to switches and routers. This is a tightly coupled model that leaves very little room to address the dynamic nature of a virtual infrastructure such as those most often seen in cloud computing models.

We've watched as SOA was rapidly adopted and organizations realized the benefits of a loosely coupled application architecture. We've watched the explosion of virtualization and the excitement of de-coupling applications from their underlying server infrastructure. But in the network infrastructure space, we still see applications tied to servers tied to IP addresses tied to switches and routers.

That model is broken in a virtual, dynamic infrastructure because applications are no longer bound to servers or IP addresses. They can be anywhere at any time, and infrastructure and management systems that insist on binding the two together are simply going to impede progress and make managing that virtual infrastructure even more painful.

It's all about the application. Finally.

…and yet the applications themselves, despite how integrated they may be, suffer from the same horizontal management problem as the network today does.  So I'm not so sure about the finality of the "it's all about the application" because we haven't even solved the "virtual infrastructure management" issues yet.

The gap between where we are today and the infrastructure 2.0/application-centric focus of tomorrow is illustrated nicely by Billy Marshall from rPath in his post titled "The Virtual Machine Tsunami," in which he describes how we're really still stuck with the VM as the unit of application management:

Bottom line, we are all facing an impending tsunami of VMs unleashed by an unprecedented liquidity in system capacity which is enabled by hypervisor based cloud computing. When the virtual machine becomes the unit of application management, extending the legacy, horizontal approaches for management built upon the concept of a physical host with a general purpose OS simply will not scale. The costs will skyrocket.

The new approach will have vertical management capability based upon the concept of an application as a coordinated set of version managed VMs. This approach is much more scalable for 2 reasons. First, the operating system required to support an application inside a VM is one-tenth the size of an operating system as a general purpose host atop a server. One tenth the footprint means one tenth the management burden – along with some related significant decrease in the system resources required to host the OS itself (memory, CPU, etc.). Second, strong version management across the combined elements of the application and the system software that supports it within the VM eliminates the unintended consequences associated with change. These unintended consequences yield massive expenses for testing and certification when new code is promoted from development to production across each horizontal layer (OS, middleware, application). Strong version management across these layers within an isolated VM eliminates these massive expenses.

So we still have all the problems of managing the applications atomically, but I think there's some general agreement between these two depictions.
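
One crude way to picture "an application as a coordinated set of version managed VMs" is a manifest that pins each VM image and its role to a single application version, so that promotion from development to production moves the whole set at once. The structure below is my own illustration, not rPath's format:

```java
import java.util.List;
import java.util.Map;

public class AppManifest {
    // One version-pinned VM image playing a named role in the application.
    record VmImage(String role, String imageId, String version) {}

    public static void main(String[] args) {
        // The application, not the individual host or OS, is the unit of management:
        // a named, versioned, coordinated set of VMs.
        Map<String, List<VmImage>> application = Map.of(
                "order-service-2.3.1", List.of(
                        new VmImage("web", "img-web-minimal",   "2.3.1"),
                        new VmImage("app", "img-jvm-appliance", "2.3.1"),
                        new VmImage("db",  "img-db-appliance",  "2.3.1")));

        // Promoting the application means promoting this whole set atomically,
        // rather than patching OS, middleware and app layers independently.
        application.forEach((appVersion, vms) ->
                vms.forEach(vm -> System.out.println(appVersion + " -> " + vm)));
    }
}
```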

However, where it gets interesting is where Lori essentially paints the case that "the network" today is unable to properly provide for the delivery of applications:

And that's what makes application delivery focused solutions so important to both virtualization and cloud computing models in which virtualization plays a large enabling role.

Because application delivery controllers are more platforms than they are devices; they are programmable, adaptable, and internally focused on application delivery, scalability, and security. They are capable of dealing with the demands that a virtualized application infrastructure places on the entire delivery infrastructure. Where simple load balancing fails to adapt dynamically to the ever changing internal network of applications both virtual and non-virtual, application delivery excels.

It is capable of monitoring, intelligently, the availability of applications not only in terms of whether it is up or down, but where it currently resides within the data center. Application delivery solutions are loosely coupled, and like SOA-based solutions they rely on real-time information about infrastructure and applications to determine how best to distribute requests, whether that's within the confines of a single data center or fifteen data centers.

Application delivery controllers focus on distributing requests to applications, not servers or IP addresses, and they are capable of optimizing and securing both requests and responses based on the application as well as the network.

They are the solution that bridges the gap that lies between applications and network infrastructure, and enables the agility necessary to build a scalable, dynamic delivery system suitable for virtualization and cloud computing.

This is where I start to squint a little, because Lori's really taking the notion of "application intelligence" and painting what amounts to a router/switch in an application delivery controller as a "platform" as she attempts to drive a wedge between an ADC and "the network."

Besides the fact that "the network" is also rapidly evolving to adapt to this more loosely-coupled model, and that the traditional networking functions, the virtualization layer, and the infrastructure service layers are becoming more integrated and aware thanks to the homogenizing effect of the hypervisor, I'll ask the question I asked Lori on Twitter this morning:

Why won't this ADC functionality simply show up in the hypervisor?  If you ask me, that's exactly the goal.  vCloud, anyone?  Amazon EC2?  Azure?
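
Stripped to its essence, the ADC function in question (application-level health checks plus distributing requests to wherever healthy instances currently live) is conceptually small enough to imagine as a service of the hypervisor's virtual networking stack rather than a separate box; the hard part is doing it at line rate and at scale. A toy sketch of that core loop, with hypothetical endpoints and no claim to ADC-class performance:

```java
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class TinyAppAwareBalancer {
    private final List<String> instances;          // wherever the application currently runs
    private final AtomicInteger next = new AtomicInteger();

    TinyAppAwareBalancer(List<String> instances) {
        this.instances = instances;
    }

    // Application-level health check: probe an HTTP endpoint, not just a port or a ping.
    static boolean healthy(String baseUrl) {
        try {
            HttpURLConnection c = (HttpURLConnection) new URL(baseUrl + "/health").openConnection();
            c.setConnectTimeout(500);
            return c.getResponseCode() == 200;
        } catch (Exception e) {
            return false;
        }
    }

    // Distribute requests round-robin, but only to instances that answer as applications.
    String pickTarget() {
        List<String> live = instances.stream().filter(TinyAppAwareBalancer::healthy).toList();
        if (live.isEmpty()) {
            throw new IllegalStateException("no healthy instances");
        }
        return live.get(Math.abs(next.getAndIncrement()) % live.size());
    }

    public static void main(String[] args) {
        // Hypothetical VM endpoints; in a hypervisor-integrated model this list would be
        // fed by the virtualization layer as VMs move, spawn, or disappear.
        TinyAppAwareBalancer lb = new TinyAppAwareBalancer(
                List.of("http://10.0.0.11:8080", "http://10.0.0.12:8080"));
        System.out.println("route next request to: " + lb.pickTarget());
    }
}
```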

If we take the example of Cisco and VMware, the coupled vision of the networking and virtualization 800 lb gorillas is exactly the same as she pens above; but it goes further because it addresses the end-to-end orchestration of infrastructure across the network, compute and storage fabrics.

So, why do we need yet another layer of network routers/switches called "application delivery controllers" as opposed to having this capability baked into the virtualization layer or ultimately the network itself?

That's the whole point of cloud computing and virtualization, right?  To decouple the resources from the hardware delivering them while putting more and more of that functionality into the virtualization layer?

So, can you really make the case for deploying more "application-centric" routers/switches (which is what an application delivery controller is), regardless of how aware they may be?

/Hoff