Posts Tagged ‘Cloud Computing’

Cloud Computing, Open* and the Integrator’s Dilemma

April 11th, 2011

My esteemed co-tormentor of Twitter, Christian Reilly (@reillyusa,) did a fantastic job of describing the impact — or more specifically the potential lack thereof — of Facebook’s OpenCompute initiative on the typical enterprise as compared to the real target audience, the service provider and manufacturers of equipment for service providers:

…I genuinely believe that for traditional service providers who are making investments in new areas and offerings, XaaS providers, OEM hardware vendors and those with plans to become giants in the next generation(s) of Systems Integrators that the OpenCompute project is a huge step forward and will be a fantastic success story over the next few years as the community and its innovations grow and tangible benefits emerge.

I think Christian has it dead on; the trickle-down effect of large service providers leveraging innovation in facilities and compute construction to squeeze maximum cost efficiencies (based on power, density, cooling, and space) from their services will be good for everyone, but it's quite important to recognize why and how:

…consider that today’s public cloud services and co-location providers are today’s equivalent of commercial airlines, providing their own multi-tenant services, price structures and user experiences on top of just a handful of airframe and engine manufacturers. OpenCompute has the potential to influence the efficiency and effectiveness of those manufacturers by helping to contribute towards ideas and potentially standards that can be adopted across the industry.

Specific to the adoption of OpenCompute as an enterprise blueprint, he widened the bifurcation between “private clouds operated by service providers as public clouds” (my words) and “private clouds operated by enterprises for their own use” with a telling analog:

Bottom line ? To today’s large corporate IT shops; those who either have, or will continue to operate on-premise or co-located “private cloud” environments, the excitement levels around the OpenCompute project (if anyone actually hears of it at all) will be all-to-familiarly low as sadly, to wake some of these sleeping giants, it will take more than a poke from the very same company who’s website their IT teams are trying to prevent employees from accessing.

This is the point of departure for OpenCompute — it’s not framed for or designed for enterprise consumption.  In an altogether fascinating description of why Facebook open-sourced its data center design, the Huffington Post summarized it thus:

“[The Open Compute Project] really is a big deal because it constitutes a general shift in terms of what how we look at technology as a competitive advantage,” O’Grady said. “For Facebook, the evidence is piling up that they don’t consider technology to be a competitive advantage. They view their competitive advantage in the marketplace to be their users.”

Here we see the general abstraction of technology in line with Nick Carr’s premise that “IT Doesn’t Matter:”

“Sharing its blueprints may gain Facebook not only free manpower, but cheaper equipment. The company’s bet, analysts say, is that giving away intellectual property will help it foster an ecosystem of competing vendors that will drive down the cost of parts.”

With that in mind, I am just as worried about the fate of OpenStack and its enterprise versus service provider audience, and how it's being perceived as enterprises watch the mad scramble by tech companies to add value and get a seat at the table.

Each of these well-intentioned projects is curated by public cloud operators and technology vendors and is indirectly positioned for the benefit of enterprises, but not really meant for their consumption — at least not without putting enterprises right back where they were trying to escape from in the first place with cloud computing: the integrator's dilemma.

If you look at the underlying premise of OpenStack — its modularity, flexibility and open design — what you get is the ability to craft a solution finely tuned to an operating environment of your design. Integrate solutions into the stack as you see fit.  Contribute code.  Develop an ecosystem. Integrate, manage, maintain…

This is as much a problem as it is a solution for an enterprise.  This is why, in many cases, enterprises choose to use a single vendor with a single neck to choke in order to avoid having to act as an integrator in the first place, or simply look to outsource to one or more public cloud providers and sidestep the problem entirely.

Chances are, most are realistically caught up somewhere in the nether-regions in between the two.

I wish to make it clear that I am very much a proponent of Open*, but I realize that the lack of direct enterprise involvement in standards bodies and "open" initiatives, along with the reluctance to share information and experience for fear of losing competitive advantage, is what drives enterprises to Closed* in the first place; they want to lessen their development and integration burdens, and the Lego erector-set approach in many ways scares conservative, risk-averse CxOs away from projects like this.

I think this is where we'll see more of these "clouds in a box" being paired with managed services to keep it all humming, regardless of where it lives. [See infrastructure solutions from Dell, VCE, HP, Oracle, etc., paired with "cloud" distributions layered atop.]

Let’s hope we see enterprise success stories built on leveraging OpenCompute and OpenStack…it will be good for all of us.

/Hoff

Update: I just saw that my colleague, James Urquhart, wrote a blog post titled "Cloud disrupts, creates channel opportunities" in which he details the channel's role in this integration challenge. Spot on.


OpenFlow & SDN – Looking forward to SDNS: Software Defined Network Security

April 8th, 2011

As facetious as the introductory premise of my Commode Computing presentation is, the main message — the automation of security capabilities up and down the stack — really is something I’m passionate about.

Ultimately, I made the point that "security" needs to be as programmatic/programmable, agile, scalable and flexible as the workloads (and stacks) it is designed to protect. "Security" in this context extends well beyond the network, but the network provides such a convenient way of defining templated containers against which we can construct and enforce policies across a wide variety of deployment and delivery models.

So as I watch OpenFlow (and Software Defined Networking) mature, I'm really, really excited to recognize the potential for a slew of innovative ways we can leverage and extend this approach to networking [monitoring and enforcement] in order to achieve greater visibility, scale, agility, performance, efficacy and reduced costs associated with security.  The more programmatic and instrumented the network becomes, the more capable our security options will become.
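
To make "security as code" a bit more concrete, here is a minimal sketch of what flow-level enforcement driven from an OpenFlow controller can look like. It uses the Ryu controller framework and OpenFlow 1.3 purely for illustration (one of several controller options, and one that postdates this post); the blocklist and priority value are hypothetical placeholders.

```python
# A minimal sketch of flow-level security enforcement pushed programmatically
# from an OpenFlow controller. Framework: Ryu, speaking OpenFlow 1.3.
# BLOCKED_IPV4_SOURCES is a hypothetical stand-in for a real policy feed.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

BLOCKED_IPV4_SOURCES = ["198.51.100.7", "203.0.113.9"]  # hypothetical policy

class FlowLevelEnforcer(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        # Every time a switch connects, install a high-priority drop rule
        # for each blocked source: policy as code, not box-by-box config.
        datapath = ev.msg.datapath
        parser = datapath.ofproto_parser
        for src in BLOCKED_IPV4_SOURCES:
            match = parser.OFPMatch(eth_type=0x0800, ipv4_src=src)
            # An empty instruction list means matching packets are dropped.
            drop = parser.OFPFlowMod(datapath=datapath, priority=1000,
                                     match=match, instructions=[])
            datapath.send_msg(drop)
```

Run it with ryu-manager and point a switch (physical or Open vSwitch) at the controller; the policy lives in code and is pushed to every switch that connects, rather than being configured box by box.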

I'm busy reading many of the research activities associated with OpenFlow security and digesting where vendors are in their approaches to leveraging this technology for security.  It may be just my perspective, but it's a little sparse today — not disappointingly so — with a huge greenfield opportunity for really innovative stuff when paired with advancements we're seeing in virtualization and cloud computing.

I’ll relate more of my thoughts and discoveries as time goes on.  If you’ve got some cool ideas/concepts/products in this area (I don’t care who you work for,) post ’em here in the comments, please!

In the meantime, check out: www.openflow.org to get your feet wet.

/Hoff

Reminders to self to perform more research on (I think I’m going to do my next presentation series on this):

  • AAA for messages between OpenFlow Switch and Controllers
  • Flood protection for controllers
  • Spoofing/MITM between switch/controllers (specifically SSL/TLS)
  • Flow-through (ha!)/support of OpenFlow in virtual switches (see 1000v and Open vSwitch)
  • (per above) Integration with VN-Tag (like) flow-VM (workload) tagging
  • Integration of Netflow data from OpenFlow flow tables
  • State/flow-table convergence for security decisions with/without cut-through given traffic steering
  • Service insertion overlays for security control planes
  • Integration with 802.1x (and protocol extensions such as TrustSec)
  • Telemetry integration with NAC and vNAC
  • Anti-DDoS implications

Incomplete Thought: Cloudbursting Your Bubble – I call Bullshit…

April 5th, 2011

My wife is in the midst of an extended multi-phasic, multi-day delivery process of our fourth child.  In between bouts of her moaning, breathing and ultimately sleeping, I’m left to taunt people on Twitter and think about Cloud.

Reviewing my hot-button list of terms that are annoying me presently, I hit upon a favorite: Cloudbursting.

It occurred to me that this term triggers a visceral reaction that makes me want to punch kittens.  It's used by people to describe a use case in which workloads that run first and foremost within the walled gardens of an enterprise magically burst forth into public cloud based upon a lack of capacity internally and a plethora of available capacity externally.

I call bullshit.

Now, allow me to qualify that statement.

Ben Kepes suggests that cloud bursting makes sense to an enterprise “Because you’ve spent a gazillion dollars on on-prem h/w that you want to continue using. BUT your workloads are spiky…” such that an enterprise would be focused on “…maximizing returns from on-prem. But sending excess capacity to the clouds.”  This implies the problem you’re trying to solve is one of scale.

I just don’t buy this.

Either you build a private cloud that gives you the scale you need in the first place, one in which you pattern your operational models after public cloud, and/or you design a solid plan to migrate, interconnect or extend platforms to the public [commodity] cloud, thereby not bursting but completely migrating capacity. You don't stop somewhere in the middle with the same old crap internally and a bright, shiny public cloud you "burst things to" when you get your capacity knickers in a twist:

The investment and skillsets needed to reconcile two often diametrically-opposed operational models don't maximize returns; they bifurcate and diminish efficiencies and blur cost allocation models, making both internal IT and public cloud look grotesquely inaccurate.

Christian Reilly suggested I had no legs to stand on making these arguments:

Fair enough, but…

Short of workloads such as HPC in which scale really is a major concern, if a large enterprise has gone through all of the issues relevant to running tier-1 applications in a public cloud, why on earth would it "burst" to the public cloud versus executing on a strategy that has those workloads run there in the first place?

Christian came up with another ringer during this exchange, one that I wholeheartedly agree with:

Ultimately, I agree so strongly with this because of the architectural, operational and compliance complexity associated with all the mechanics one needs to allow for interoperable, scalable, secure and manageable workloads between an internal enterprise's operational domain (cloud or otherwise) and the public cloud.

The (in)ability to replicate capabilities exactly across these two models means that gaps arise — gaps that unfairly amplify the immaturity of cloud for certain things and its stellar capabilities in others.  It's no wonder people get confused.  Things like security, networking, application intelligence…

NOTE: I make a wholesale differentiation between a strategy that includes a structured hybrid cloud approach of controlled workload placement/execution and a purely overflow/capacity-driven movement of workloads.*

There are many workloads that simply won’t or can’t *natively* “cloudburst” to public cloud due to a lack of supporting packaging and infrastructure.**  Some of them are pretty important.  Some of them are operationally mission critical. What then?  Without an appropriate way of understanding the implications and complexity associated with this issue and getting there from here, we’re left with a strategy of “…leave those tier-1 apps to die on the vine while we greenfield migrate new apps to public cloud.”  That doesn’t sound particularly sexy, useful, efficient or cost-effective.

There are overlay solutions that can allow an enterprise to leverage utility computing services as an abstracted delivery platform and fluidly interconnect with a public cloud, but one must understand what's involved architecturally as part of that hybrid model, what the benefits are and where the penalties lie.  Public cloud needs the same rigor in its due diligence.

[update] My colleague James Urquhart summarized well what I meant by describing the difference in DC-to-DC (cloud or otherwise) workload execution as either end of a spectrum: VM-centric package mobility or adopting a truly distributed application architecture.  If you're somewhere in the middle, things like cloudbursting get really hairy.  As we move from IaaS -> PaaS, some of these issues may evaporate as the former (VMs) becomes less relevant and the latter (applications deployed directly to platforms) more prevalent.

Check out this zinger from JP Morgenthal which much better conveys what I meant:

If your Tier-1 workloads can run in a public cloud and satisfy all your requirements, THAT’S where they should run in the first place!  You maximize your investment internally by scaling down and ruthlessly squeezing efficiency out of what you have as quickly as possible — writing those investments off the books.

That’s the point, innit?

Cloud bursting — today — is simply a marketing term.

Thoughts?

/Hoff

* This may be the point that requires more clarity, especially in the wake of examples raised on Twitter after I posted this, such as eBay and Netflix as successful "cloudbursting" applications.  My response is that these fine companies hardly resemble a typical enterprise and that they're also investing in a model that fundamentally changes the way they operate.

** I should point out that I am referring to the use case of heterogeneous cloud platforms such as VMware to AWS (either using an import/conversion function and/or via VPC) versus a more homogeneous platform interlock, such as when the enterprise runs vSphere internally and looks to migrate VMs over to a VMware vCloud-powered cloud provider using something like vCloud Director Connector, for example.  Either way, the point still stands: if you can run a workload and satisfy your requirements outright on someone else's stack, why do it on yours?


Budget Icebergs, Fiscal Anchors and a Boat (Fed)RAMP to Nowhere?

April 4th, 2011

It’s often said that in order to make money, you have to spend money; invest in order to succeed.

However, when faced with the realities of budget shortfalls and unsavory economic pressure, it seems the easiest thing to do is simply hunt around for top-line blips on the budget radar and kill them, regardless of the long-term implications that ceasing investment in efficiency and transparency programs has on reducing bottom-line pain.

To wit, Alex Howard reports “Congress weighs deep cuts to funding for federal open government data platforms“:

Data.gov, IT.USASpending.gov, and five other websites that offer platforms for open government transparency are facing imminent closure. A comprehensive report filed by Jason Miller, executive editor of Federal News Radio, confirmed that the United States Office of Management and Budget is planning to take open government websites offline over the next four months because of a 94% reduction in federal government funding in the Congressional budget…

Cutting these funds would also shut down the Fedspace federal social network and, notably, the FedRAMP cloud computing cybersecurity programs. Unsurprisingly, open government advocates in the Sunlight Foundation and the larger community have strongly opposed these cuts.

Did you catch the last paragraph?  They’re kidding, right?

After demonstrable empirical data showing how Vivek Kundra and his team's plans for streamlining government IT are already saving money (and will continue to do so,) this is what we get?  Slash and burn?  I attempted to search for the investment made thus far in FedRAMP using the IT Dashboard, but couldn't execute an appropriate search.  Anyone know the answer?

Read more on the topic from Daniel Schuman: "Budget Technopocalypse: Proposed Congressional Budgets Slash Funding for Data Transparency."

Now, I’m not particularly fond of how the initial FedRAMP drafts turned out, but I’m optimistic that the program will evolve, will ultimately make a difference and lead to more assured, cost-efficient and low-friction adoption of Cloud Computing. We need FedRAMP to succeed and we need continued investment in it to do so.  Let’s not throw the baby out with the Cloud water.

We literally cannot afford for FedRAMP (or these other transparency programs) to be cut — these are the very programs that will lead to long-term efficiency and fiscal responsibility across the U.S. Government.  They will ultimately make us more competitive.

Vote with your clicks.  Support the Sunlight Foundation.

/Hoff


FYI: New NIST Cloud Computing Reference Architecture

March 31st, 2011

In case you weren't aware, NIST has a wiki for collaboration on Cloud Computing.  You can find it here.

They also have a draft of their v1.0 Cloud Computing Reference Architecture, which builds upon their prior definitional work and has pretty graphics.  You can find that here (dated 3/30/2011).

/Hoff


AWS’ New Networking Capabilities – Sucking Less ;)

March 15th, 2011

I still haven't had my coffee and this is far from a complete analysis, but it's pretty darned exciting…

One of the biggest challenges facing public Infrastructure-as-a-Service cloud providers has been offering datacenter networking capabilities with flexibility and control comparable to those present in traditional data center environments.

I'm not talking about complex L2/L3 configurations or converged data/storage networking topologies; I'm speaking of basic addressing and edge functionality (routing, VPN, firewall, etc.)  Furthermore, interconnecting public cloud compute/storage resources (in a private, non-Internet-facing role) to a corporate datacenter has been less than easy.

Today Jeff Barr asploded another of his famous blog announcements, one which not only goes a long way toward solving these two issues, but clearly puts AWS on track to continue pressing VMware on the overlay networking capabilities present in its vCloud Director vShield Edge/App model.

The press release (and Jeff’s blog) were a little confusing because they really focus on VPC, but the reality is that this runs much, much deeper.

I rather liked Shlomo Swidler’s response to that same comment to me on Twitter 🙂

This announcement is fundamentally about the underlying networking capabilities of EC2:

Today we are releasing a set of features that expand the power and value of the Virtual Private Cloud. You can think of this new collection of features as virtual networking for Amazon EC2. While I would hate to be innocently accused of hyperbole, I do think that today's release legitimately qualifies as massive, one that may very well change the way that you think about EC2 and how it can be put to use in your environment.

The features include:

  • A new VPC Wizard to streamline the setup process for a new VPC.
  • Full control of network topology including subnets and routing.
  • Access controls at the subnet and instance level, including rules for outbound traffic.
  • Internet access via an Internet Gateway.
  • Elastic IP Addresses for EC2 instances within a VPC.
  • Support for Network Address Translation (NAT).
  • Option to create a VPC that does not have a VPN connection.

You can now create a network topology in the AWS cloud that closely resembles the one in your physical data center including public, private, and DMZ subnets. Instead of dealing with cables, routers, and switches you can design and instantiate your network programmatically. You can use the AWS Management Console (including a slick new wizard), the command line tools, or the APIs. This means that you could store your entire network layout in abstract form, and then realize it on demand.
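
To make the "store your entire network layout in abstract form, and then realize it on demand" idea concrete, here's a minimal, hedged sketch using the Python boto3 SDK (a later AWS SDK than existed when this was written); the region, CIDR blocks and the public/private split are hypothetical placeholders, not a reference design.

```python
# Sketch: a VPC topology (public + private subnets, Internet Gateway,
# default route for the public side) expressed as code via boto3.
# All CIDRs and the region below are hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# The VPC itself -- the abstract container for the whole topology.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

# A DMZ-style public subnet and an internal private subnet.
public_id = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")["Subnet"]["SubnetId"]
private_id = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.2.0/24")["Subnet"]["SubnetId"]

# Internet Gateway plus a default route, associated with the public subnet only.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=public_id)

print(f"Realized VPC {vpc_id}: public {public_id}, private {private_id}")
```

The private subnet simply never gets a route to the Internet Gateway, which is the essence of the public/private/DMZ layout described above.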

That's pretty bad-ass and goes a long way toward giving enterprises a not-so-gentle kick in the butt regarding getting over the lack of network provisioning flexibility.  This will also shine when combined with the VMDK import capabilities — which are admittedly limited today in terms of preserving networking configuration.  Check out Christian Reilly's great post "AWS – A Wonka Surprise" regarding how the VMDK-Import and overlay networking elements collide.  This gets right to the heart of what we were discussing.

Granted, I have not dug deeply into the routing capabilities (support for dynamic protocols, multiple next-hop gateways, etc.) or how this maps (if at all) to VLAN configurations, and Shlomo did comment that there are limitations around VPC today that are pretty significant: "AWS VPC gotcha: No RDS, no ELB, no Route 53 in a VPC," "AWS VPC gotcha: multicast and broadcast still doesn't work inside a VPC," and "No Spot Instances, no Tiny Instances (t1.micro), and no Cluster Compute Instances (cc1.*)." Still, it's an awesome first step toward answering the pleas I highlighted in my blog post titled "Dear Public Cloud Providers: Please Make Your Networking Capabilities Suck Less. Kthxbye"

Thank you, Santa. 🙂

On Twitter, Dan Glass' assessment was concise, more circumspect and slightly less enthusiastic — though I'm not exactly sure I'd describe my reaction as bordering on fanboi:

…to which I responded that clearly there is room for improvement in L3+ and security.  I expect we’ll see some 😉

In the long term, regardless of how this was framed from an announcement perspective, AWS’ VPC as a standalone “offer” should just go away – it will just become another networking configuration option.

While many of these capabilities are basic in nature, it shows that AWS is paying attention to the fact that if it wants enterprise business, it's going to have to start offering service capabilities that make the transition to its platform more like what enterprises are used to.

Great first step.

Now, about security…while outbound filtering via ACLs is cool and all…call me.

/Hoff

P.S. As you'll likely see emerging in the comments, there are other interesting solutions to this overlay networking/connectivity challenge – CohesiveF/T and CloudSwitch come to mind…


Incomplete Thought: Cloud Capacity Clearinghouses & Marketplaces – A Security/Compliance/Privacy Minefield?

March 11th, 2011

With my focus on cloud security, I’m always fascinated when derivative business models arise that take the issues associated with “mainstream” cloud adoption and really bring issues of security, compliance and privacy into even sharper focus.

To wit, Enomaly recently launched SpotCloud – a Cloud Capacity Clearinghouse & Marketplace in which cloud providers can sell idle compute capacity and consumers may purchase said capacity based upon “…location, cost and quality.”

Got a VM-based workload?  Want to run it cheaply for a short period of time?

…Have any security/compliance/privacy requirements?

To me, "quality" means something different than simply availability…it means service levels, security, privacy, transparency and visibility.

While you can select the geographic location where your VM will run, the identity of the cloud provider is not disclosed as part of offering an "opaque inventory."  This begs the question of how the suppliers are vetted and assessed for security, compliance and privacy.  According to the SpotCloud FAQ, the answer is only a vague "We fully vet all market participants."

There are two very interesting question/answer pairings on the SpotCloud FAQ that relate to security and service availability:

How do I secure my SpotCloud VM?

User access to VM should be disabled for increased security. The VM package is typically configured to automatically boot, self configure itself and phone home without the need for direct OS access. VM examples available.

Are there any SLA’s, support or guarantees?

No, to keep the costs as low as possible the service is offered without any SLA, direct support or guarantees. We may offer support in the future. Although we do have a phone and are more than happy to talk to you…

:: shudder ::

For now, I would assume that this means that if your workloads are at all mission critical, sensitive, subject to compliance requirements or traffic in any sort of sensitive data, this sort of exchange option may not be for you. I don’t have data on the use cases for the workloads being run using SpotCloud, but perhaps we’ll see Enomaly make this information more available as time goes on.

I would further assume that the criteria for provider selection might be expanded to include certification, compliance and security capabilities — all the more reason for these providers to consider something like CloudAudit which would enable them to provide supporting materials related to their assertions. (*wink*)

To be clear, from a marketplace perspective, I think this is a really nifty idea — sort of the cloud-based SETI-for-cost version of the Mechanical Turk.  It takes the notion of “utility” and really makes one think of the options.  I remember thinking the same thing when Zimory launched their marketplace in 2009.

I think ultimately this further amplifies the message that we need to build survivable systems, write secure code and continue to place an emphasis on the security of information deployed using cloud services. Duh-ja vu.

This sort of use case also raises an interesting set of questions as to what these monolithic apps are intended to provide — surely they traffic in some sort of information — information that comes from somewhere?  The oft-touted massively scalable compute "front-end" overlay of public cloud oftentimes means that the scale-out architectures leveraged to deliver service connect back to something else…

You likely see where this is going…

At any rate, I think these marketplace offerings will, for the foreseeable future, serve a specific type of consumer trafficking in specific types of information/service — it’s yet another vertical service offering that cloud can satisfy.

What do you think?

/Hoff


App Stores: From Mobile Platforms To VMs – Ripe For Abuse

March 2nd, 2011

This CNN article titled “Google pulls 21 apps in Android malware scare” describes an alarming trend in which malicious code is embedded in applications which are made available for download and use on mobile platforms:

Google has just pulled 21 popular free apps from the Android Market. According to the company, the apps are malware aimed at getting root access to the user’s device, gathering a wide range of available data, and downloading more code to it without the user’s knowledge.

Although Google has swiftly removed the apps after being notified (by the ever-vigilant “Android Police” bloggers), the apps in question have already been downloaded by at least 50,000 Android users.

The apps are particularly insidious because they look just like knockoff versions of already popular apps. For example, there’s an app called simply “Chess.” The user would download what he’d assume to be a chess game, only to be presented with a very different sort of app.

Wow, 50,000 downloads.  Most of those folks are likely blissfully unaware they are owned.

In my Cloudifornication presentation, I highlighted that the same potential for abuse exists for “virtual appliances” which can be uploaded for public consumption to app stores and VM repositories such as those from VMware and Amazon Web Services:

The feasibility of this vector was deftly demonstrated shortly afterward by the guys at SensePost (Clobbering the Cloud, Blackhat), who uploaded a non-malicious "phone home" VM to AWS that was promptly downloaded and launched…

This is going to be a big problem in the mobile space and potentially just as impactful in cloud/virtual datacenters, as people routinely download and put into production virtual machines/virtual appliances whose provenance and integrity are questionable.  Who's going to police these stores?
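
Until someone does police them, the minimum bar is checking provenance yourself before you boot anything you've downloaded. Here's a minimal, hedged sketch of verifying an appliance image against a digest the publisher distributes out-of-band; the file path and expected digest below are hypothetical placeholders.

```python
# Verify a downloaded virtual appliance against a publisher-supplied SHA-256
# digest before deploying it. The path and expected digest are hypothetical.
import hashlib
import sys

APPLIANCE_PATH = "downloads/vendor-appliance.ova"   # hypothetical download
PUBLISHED_SHA256 = "replace-with-the-digest-from-the-publisher"

def sha256_of(path, chunk_size=1024 * 1024):
    """Stream the file so multi-gigabyte images don't exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha256_of(APPLIANCE_PATH)
    if actual != PUBLISHED_SHA256:
        sys.exit(f"Digest mismatch ({actual}); do NOT deploy this image.")
    print("Digest matches the published value; basic provenance check passed.")
```

It's not a substitute for vetting what the appliance actually does, but it at least ties what you run to what the publisher claims to have shipped.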

(update: I loved Christian Reilly’s comment on Twitter regarding this: “Using a public AMI is the equivalent of sharing a syringe”)

/Hoff


Video Of My CSA Presentation: “Commode Computing: Relevant Advances In Toiletry & I.T. – From Squat Pots to Cloud Bots – Waste Management Through Security Automation”

February 19th, 2011

This is probably my favorite presentation I have ever given.  It was really fun.  I got so much positive feedback on what amounts to a load of crap. 😉

This video is from the Cloud Security Alliance Summit at the 2011 RSA Security Conference in San Francisco.  I followed Marc Benioff from Salesforce and Vivek Kundra, CIO of the United States.

Here is a PDF of the slides if you are interested.

Part 1:

Part 2:


My Warm-Up Acts at the RSA/Cloud Security Alliance Summit Are Interesting…

February 8th, 2011

Besides a panel or two and another circus-act talk with Rich Mogull, I’m thrilled to be invited to present again at the Cloud Security Alliance Summit at RSA this year.

One of my previous keynotes at a CSA event was well received: Cloudersize – A cardio, strength & conditioning program for a firmer, more toned *aaS

Normally when negotiating to perform at such a venue, I have my people send my diva list over to the conference organizers.  You know, the normal stuff: only red M&M’s, Tupac walkout music, fuzzy blue cashmere slippers and Hoffaccinos on tap in the green room.

This year, understanding we are all under pressure given the tough economic climate, I relaxed my requirements and instead took a deal for a couple of ace warm-up speakers to goose the crowd prior to my arrival.

Here’s who Jim managed to scrape up:

9:00AM – 9:40AM // Keynote: “Cloud 2: Proven, Trusted and Secure”
Speaker: Marc Benioff, CEO, Salesforce.com

9:40AM – 10:10AM // Keynote: Vivek Kundra, CIO, White House

10:10AM – 10:30AM // Presentation: “Commode Computing: Relevant Advances In Toiletry – From Squat Pots to Cloud Bots – Waste Management Through Security Automation”
Presenting: Christofer Hoff, Director, Cloud & Virtualization Solutions, Cisco Systems

I guess I can’t complain 😉

See you there. Bring rose petals and Evian as token gifts to my awesomeness, won’t you?

/Hoff
