Archive

Archive for the ‘Microsoft’ Category

Video Of My Cloudifornication Presentation [Microsoft BlueHat v9]

August 16th, 2010 2 comments

In advance of publishing a more consolidated compilation of various recordings of my presentations, I thought I’d post this one.

This is from Microsoft’s BlueHat v9 and is from my “Cloudifornication: Indiscriminate Information Intercourse Involving Internet Infrastructure” presentation.

The direct link is here in case you have scripting disabled.

The follow-on to this is my latest presentation – “Cloudinomicon: Idempotent Infrastructure, Building Survivable Systems, and Bringing Sexy Back To Information Centricity.”


Microsoft Azure Going “Down Stack,” Adding IaaS Capabilities. AWS/VMware WAR!

February 4th, 2010 4 comments

It’s very interesting to see that now that infrastructure-as-a-service (IaaS) players like Amazon Web Services are clawing their way “up the stack” and adding more platform-as-a-service (PaaS) capabilities, that Microsoft is going “down stack” and providing IaaS capabilities by way of adding RDP and VM capabilities to Azure.

From Carl Brooks’ (@eekygeeky) article today:

Microsoft is expected to add support for Remote Desktops and virtual machines (VMs) to Windows Azure by the end of March, and the company also says that prices for Azure, now a baseline $0.12 per hour, will be subject to change every so often.

Prashant Ketkar, marketing director for Azure, said that the service would be adding Remote Desktop capabilities as soon as possible, as well as the ability to load and run virtual machine images directly on the platform. Ketkar did not give a date for the new features, but said they were the two most requested items.

This move begins a definite trend away from the original concept for Azure in design and execution. It was originally thought of as a programming platform only: developers would write code directly into Azure, creating applications without even being aware of the underlying operating system or virtual instances. It will now become much closer in spirit to Amazon Web Services, where users control their machines directly. Microsoft still expects Azure customers to code for the platform and not always want hands on control, but it is bowing to pressure to cede control to users at deeper and deeper levels.

One major reason for the shift is that there are vast arrays of legacy Windows applications users expect to be able to run on a Windows platform, and Microsoft doesn’t want to lose potential customers because they can’t run applications they’ve already invested in on Azure. While some users will want to start fresh, most see cloud as a way to extend what they have, not discard it.
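As a quick back-of-the-envelope aside (my math, not the article’s): that $0.12/hour baseline for a single always-on instance works out to roughly $88 a month before storage and bandwidth, which is the number enterprises will inevitably compare against their own racks.

```python
# Back-of-the-envelope math on the quoted Azure baseline rate.
# Assumptions: one always-on instance, ~730 hours in a month;
# storage and bandwidth charges are extra and not included here.
rate_per_hour = 0.12
hours_per_month = 730
print(f"~${rate_per_hour * hours_per_month:.2f}/month per instance")  # ~$87.60
```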

This sets the path to allow those enterprise customers running Hyper-V internally to take those VMs and run them on (or in conjunction with) Azure.

Besides the obvious competition with AWS in the public cloud space, there’s also a private cloud element. As it stands now, one of the primary differentiators for VMware from the private-to-public cloud migration/portability/interoperability perspective is the concept that if you run vSphere in your enterprise, you can take the same VMs without modification and move them to a service provider who runs vCloud (based on vSphere.)

This is a very interesting and smart move by Microsoft.

/Hoff


Just A Reflective Bookmark: Microsoft’s Azure…The Dark Horse Emergeth…

November 17th, 2009 3 comments

I’ve said it before, I’ll say it again:

Don’t underestimate Microsoft and the potential disruption Azure will deliver.*

You might not get Microsoft’s strategy for Azure. Heck, much of Microsoft may not get Microsoft’s strategy for Azure, but one thing is for sure: Azure will be THE platform for products, solutions and services across all mediums from Redmond moving forward. Ray Ozzie said it best at PDC:

The vision of Azure, said Ozzie, is “three screens and a cloud,” meaning internet-based data and software that plays equally well on PCs, mobile devices, and TVs.

I think the underlying message here is that while we often think of Cloud from the perspective of interacting with “data,” we should not overlook how mobility, voice and video factor into the equation…

According to Ozzie, Azure will go live in production on January 1st and “six data centers in North America, Europe, and Asia will come online.” (I wonder when Amazon will announce APAC support…)

Azure will be disruptive, especially for Windows-heavy development shops, and the notion of secure data access/integration between public and private clouds is not lost on them, either:

Microsoft also announced another of its city-based code names. Sydney is a security mechanism that lets businesses exchange data between their servers and the Azure cloud. Entering testing next year, Sydney should allow a local application to talk to a cloud application. It will help businesses that want to run most of an application in Microsoft’s data center, but that want to keep some sensitive parts running on their own servers.

It will be interesting to see how “Sydney” manifests itself as compared to AWS’s Virtual Private Cloud.

Competitors know that Azure is no joke, either, which is why we see a certain IaaS provider adding .NET framework support as well as Cloud Brokers (bridges) such as RightScale announcing support for Azure. Heck, even GoGrid demo’d “interoperability” with Azure. Many others are announcing support, including the Federal Government via Vivek Kundra, who joined Ozzie to announce that the 2009 Pathfinder Innovation Challenge will be hosted on Azure.

Stir in the fact that Microsoft is also extending its ecosystem of supported development frameworks and languages: at PDC, Matt Mullenweg from WordPress (Automattic, to be specific) is developing on Azure. This shows how Azure will support things like PHP and MySQL as well as .NET (now called AppFabric Access Control.)

Should be fun.

Hey, I wonder (*wink*) if Microsoft will be interested in participating in the A6 Working Group to provide transparency and visibility that some of their IaaS/PaaS competitors (*cough* Amazon *cough*) who are clawing their way up the stack do not…

/Hoff

*To be fair, a year ago when Azure was announced, I don’t think any of us got it, and I simply ignored it for the most part. Not the case any longer; it makes a ton of sense if they can execute.

Don’t Hassle the Hoff: Recent Press & Podcast Coverage & Upcoming Speaking Engagements

October 26th, 2009 1 comment


Here is some of the recent coverage from the last month or so on topics relevant to content on my blog, presentations and speaking engagements.  They’re in no particular order or priority, and I haven’t kept a good record, unfortunately.

Press/Technology & Security eZines/Website/Blog Coverage/Meaningful Links:

Podcasts/Webcasts/Video:

Recent Speaking Engagements/Confirmed to speak at the following upcoming events:

  • Enterprise Architecture Conference, D.C.
  • Intel Security Summit 2009, Hillsboro OR
  • SecTor 2009, Toronto CA
  • EMC Innovation Forum, Franklin MA
  • NY Technology Forum, NY, NY
  • Microsoft Bluehat v9, Redmond WA
  • Office of the Comptroller of the Currency, San Antonio TX
  • Intercloud Working Group, GooglePlex CA 😉
  • CSC Leading Edge Forum, VA
  • DojoCon, VA

I also forgot to thank Eric Siebert for putting together the VMware Top 20 blog list (and putting me on it), and to mention that Rational Survivability made the Datamation 2009 Top 200 Tech Blogs list.

/Hoff

The Cloud For Clunkers Program…Security, Portability, Interoperability and the Economics of Cloud Providers

August 8th, 2009 1 comment

Introducing the “Cloud For Clunkers Program”

Cloud providers are advertising the equivalent of the U.S. Government’s “Cash for Clunkers” program:

“You give up your tired, inefficient, polluting, hard-to-maintain and costly data centers and we’ll give you PFM in the form of a global, seamless, elastic computing capability for less money and with free undercoating.”  The value proposition is fantastic: cost-savings, agility, the illusion of infinite scale, flexibility, reliability, and “green.”

There are some truly amazing Cloud offerings making their way to market and it’s interesting to see that the parallels offered up by the economic incentives in both examples are generating a tremendous amount of interest.

It remains to be seen whether this increase in interest is a short-term burst that simply shortens the cycle for early adopters or whether it will deliver sustainable attention over time and drive people to the “showroom floor” who weren’t considering kicking the tires in the first place.

As compelling as the offer of Cloud may be, in order to pull off incentivizing large enterprises to think differently, it requires an awful lot going on under the covers to provide this level of abstracted awesomeness; a ton of heavy lifting and the equipment and facilities to go with it.

To get ready for the gold rush, most of the top-tier IaaS/PaaS Cloud providers are building data processing MegaCenters around the globe in order to provide these services, investing billions of dollars to do so…all supposedly so you don’t have to.

Remember, however, that service providers make money by squeezing the most out of you while providing as little as they need to in order to ensure the circle of life continues.  Note, this is not an indictment of that practice, as $deity knows I’ve done enough of it myself, but just because it has the word “Cloud” in front of it does not make it any different as a business case.  Live by the ARPU, die by the ARPU.

Cloudiness Is Next To Godliness…

What happens then, when something outside of the providers’ control changes the ability or desire to operate from one of these billion-dollar Cloud centers?  No, I don’t mean like a natural disaster or an infrastructure failure.  I mean something far more insidious.

Like what, you say?  Funny you should ask.  The Data Center Knowledge blog details how Microsoft is employing the teleportation equivalent of vMotion by pMotioning (physically) an entire Azure Cloud data center to deal with changing tax codes, thanks to a game of chicken with a local state government:

“Due to a change in local tax laws, we’ve decided to migrate Windows Azure applications out of our northwest data center prior to our commercial launch this November,” Microsoft says on its Windows Azure blog (link via OakLeaf Systems). “This means that all applications and storage accounts in the “USA – Northwest” region will need to move to another region in the next few months, or they will be deleted.” Azure applications will shift to the USA – Southwest region, which is housed in Microsoft’s 470,000 square foot San Antonio data center, which opened last September.

The move underscores how the economics of data center site location can change quickly – and how huge companies are able to rapidly shift operations to chase the lowest operating costs.

Did you see the part that said “…all applications and storage accounts in the “USA – Northwest” region will need to move to another region in the next few months, or they will be deleted”?  Sounds rather un-Cloudlike, no?  Remember the Coghead shutdown?

Large-scale providers and their MegaCenters face some amazing challenges such as the one presented above.  As these issues become public and exposed to due diligence, they are in turn causing enterprises to take stock of how they evaluate their migration to Cloud.  They aren’t particularly new issues; it’s just that people are having a hard time reconciling reality with the confusing anecdote of Cloudy goodness that requires zero-touch and just works…always.

Om Malik chronicled some of these challenges:

And while cloud computing is all the rage in Washington D.C., it seems the state of Washington doesn’t much care for cloud computing. Instead of buying cloud computing services from home-grown cloud computing giant Amazon, (or newly emergent cloud player, Microsoft), the state has opted to build a brand-new, $180 million data center, despite reservations from some state representatives. Microsoft is moving the data center that houses its Azure cloud services to San Antonio, Texas, from Quincy, Wash. — mostly because of unfavorable tax policies. Apparently, the data centers are no longer covered by sales tax rebates — a costly proposition for Microsoft, which plans to spend many millions on new hardware for the Azure-focused data center.

By the way, Washington is the second state that has decided to build its own data center. In June, Massachusetts decided that it was going to build a $100 million data center. The Sox Nation is home to Nick Carr, author of “The Big Switch,” arguably the most influential book on cloud computing and its revolutionary capabilities.

These aforementioned states are examples of a bigger trend: Most large organizations are still hesitant to go all in when it comes to cloud computing. That’s partly because the cloud revolution still has a long way to go. But much of it is fear of the unknown.

Some of that “unknown” is more about being “unsolved” since we understand many of the challenges but simply don’t have solutions to them yet.

But I Don’t Want My Data In Hoboken!

I’ve spoken about this before, but while a provider may be pressured to move an entire datacenter (or even workloads within it) for their own selfish needs, what might that mean to customers in terms of privacy, security, SLA and compliance requirements?

We have no doubt all heard of requirements that prevent certain data from leaving geographic boundaries.  What if one of these moves came into conflict with regulations such as these?  What happens if the location chosen to replace the existing one creates a legal exception?

This is clearly an inflection point for Cloud and underscores the need to drive for policy-driven portability and interoperability sooner than later.

Even if we have the technical capability to make our workloads portable, we’re not in a position to instantiate policy as an expression of business logic to govern whether they should, can, or ought to be moved.

If we can’t/don’t/won’t work to implement open standards that provide for workload security, portability and interoperability, with the functionality for “consumers” to assert requirements and “providers” to attest to their capabilities based upon a common expression of such, this will surely add to the drive for large enterprises to consider either wholly-private or virtual private Clouds in order to satisfy their needs under an umbrella they can control.
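To make that notion of requirements-assertion and capability-attestation a bit more concrete, here’s a minimal sketch of what such a policy check might look like; the attribute names and schema below are entirely hypothetical, not anything from an actual standard:

```python
# Illustrative sketch only: a workload "asserts" placement requirements and a
# candidate provider/region "attests" to its capabilities; policy decides
# whether a move is allowed. Names and attributes are hypothetical.
WORKLOAD_POLICY = {
    "data_residency": {"EU"},          # data may not leave these jurisdictions
    "certifications": {"ISO27001"},    # provider must attest to these
    "max_latency_ms": 50,
}

def may_move(workload_policy: dict, provider_attestation: dict) -> bool:
    """Return True only if the provider's attested capabilities satisfy the policy."""
    if provider_attestation["region"] not in workload_policy["data_residency"]:
        return False
    if not workload_policy["certifications"] <= set(provider_attestation["certifications"]):
        return False
    return provider_attestation["latency_ms"] <= workload_policy["max_latency_ms"]

# A provider shifting workloads to a cheaper region would fail this check
# if the new region violates the data-residency constraint.
print(may_move(WORKLOAD_POLICY,
               {"region": "US", "certifications": ["ISO27001"], "latency_ms": 30}))  # False
```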

I’ll Take “Go With What You Know” For $200, Alex

In the short term, customers who are mature in their consolidation, virtualization, optimization and automation practices and are looking to utilize IaaS/PaaS services from third-party providers will likely demand homogeneity from one or two key providers with a global footprint, potentially in combination with their own footprint, while they play the waiting game for open standards.

The reason for the narrowing of providers and platforms is simple: continuity of service across all dimensions and the ability to control one’s fate, even if it means vendor lock-in driven by feature/function maturity.

Randy Bias alluded to this in a recent post titled “Bifurcating Clouds,” in which he highlighted some of the differences in the spectrum of Cloud providers and the platforms they operate from.  There are many choices when it comes to virtualization and Cloud operating platforms, but customers are becoming much more educated about what those choices entail and oftentimes arrive at the conclusion that cost isn’t always the most pressing driver.  The Total Cloud Ownership* calculation is a multi-dimensional problem…

This poses an interesting set of challenges for service providers looking to offer IaaS/PaaS Cloud services: build your own or re-craft available OSS platforms and drive for truly open standards, or latch on to a market leader’s investment and roadmap and adopt it as such.

Ah, Lock-In.  Smells Like Teen Spirit…

From the enterprises’ perspective, many are simply placing bets that the provider they’ve chosen for their “internal” virtualization and consolidation platform will also be the one to lead them to Cloud as service providers adopt the same solution.

This would at least — in the absence of an “open standard” — give customers the ability to provide for portability should a preferred provider decide to move operations to somewhere which may or may not satisfy business requirements; they could simply pick another provider that runs on the same platform instead.  You get De Facto portability…and the ever-present “threat” of vendor lock-in.

It’s what happens when you play spin the bottle with your data, I’m afraid.

So before you trade in your clunker, it may make sense to evaluate whether it’s simply cheaper in the short term to keep on paying the higher gas tax and drive it into the ground, pull the motor for a rebuild and get another 100,000 miles out of the old family truckster, or go for broke and get the short-term cash back without knowing what it might really cost you down the road.

This is why private Cloud and virtual private Clouds make sense.  It’s not about location, it’s about control.

Both hands on the wheel…10 and 2, kids….10 and 2.

/Hoff

*I forgot to credit Vinnie Mirchandani from Deal Architect and his blog entry here for the Total Cloud Ownership coolness. Thanks to @randybias for the reminder.

Categories: Cloud Computing, Microsoft

Observations on “Securing Microsoft’s Cloud Infrastructure”

June 1st, 2009 1 comment

I was reading a blog post from Charlie McNerney, Microsoft’s GM, Business & Risk Management, Global Foundation Services on “Securing Microsoft’s Cloud Infrastructure.”

Intrigued, I read the white paper to first get a better understanding of the context for his blog post and to also grok what he meant by “Microsoft’s Cloud Infrastructure.”  Was he referring to Azure?

The answer, per the whitepaper, is that Microsoft — along with everyone else in the industry — now classifies all of its online Internet-based services as “Cloud:”

Since the launch of MSN® in 1994, Microsoft has been building and running online services. The GFS division manages the cloud infrastructure and platform for Microsoft online services, including ensuring availability for hundreds of millions of customers around the world 24 hours a day, every day. More than 200 of the company’s online services and Web portals are hosted on this cloud infrastructure, including such familiar consumer-oriented services as Windows Live™ Hotmail® and Live Search, and business-oriented services such as Microsoft Dynamics® CRM Online and Microsoft Business Productivity Online Standard Suite from Microsoft Online Services. 

Before I get to the part I found interesting, I think that the whitepaper (below) does a good job of providing a 30,000-foot view of how Microsoft applies lessons learned from its operational experience and the SDL to its “Cloud” offerings.  It’s something designed to market the fact that Microsoft wants us to know they take security seriously.  Okay.

Here’s what I found interesting in Charlie’s blog post; it appears in the last two sentences (boldfaced):

The white paper we’re releasing today describes how our coordinated and strategic application of people, processes, technologies, and experience with consumer and enterprise security has resulted in continuous improvements to the security practices and policies of the Microsoft cloud infrastructure.  The Online Services Security and Compliance (OSSC) team within the Global Foundation Services division that supports Microsoft’s infrastructure for online services builds on the same security principles and processes the company has developed through years of experience managing security risks in traditional software development and operating environments. Independent, third-party validation of OSSC’s approach includes Microsoft’s cloud infrastructure achieving both SAS 70 Type I and Type II attestations and ISO/IEC 27001:2005 certification. We are proud to be one of the first major online service providers to achieve ISO 27001 certification for our infrastructure. We have also gone beyond the ISO standard, which includes some 150 security controls. We have developed 291 security controls to date to account for the unique challenges of the cloud infrastructure and what it takes to mitigate some of the risks involved.

I think it’s admirable that Microsoft is sharing its methodologies and ISMS objectives, and it’s a good thing that they have adopted ISO standards and secured SAS 70 attestation as a baseline.

However, I would be interested in understanding what 291 security controls means to a security posture versus, say, 178.  It sounds a little like Twitter follower counts.

I can’t really explain why those last two sentences stuck in my craw, but they did.

I’d love to know more about what Microsoft considers those “unique challenges of the cloud infrastructure” as well as the risk assessment framework(s) used to manage/mitigate them — I’m assuming they’ve made great STRIDEs in doing so. 😉

/Hoff

No, Mary Jo, Private Cloud is NOT Just A Euphemism For On-Premise Datacenter…

April 29th, 2009 2 comments

Mary Jo Foley asked the question in her blog titled: ‘Private cloud’ = just another buzzword for on-premise datacenter?

What’s really funny is that she’s not really asking.  She’s already made her mind up:

Whether or not they admit it publicly (or just express their misgivings relatively privately), Microsoft officials know the “private cloud” is just the newest way of talking about an on-premise datacenter. Sure, it’s not exactly the same mainframe-centric datacenter IT admins may have found themselves outfitting a few years ago. But, in a nutshell, server + virtualization technology + integrated security/management/billing  = private cloud.

Microsoft’s “official” description of the distinction between private and public clouds basically says as much. From a press release the company issued this morning:

The private cloud: “By employing techniques like virtualization, automated management, and utility-billing models, IT managers can evolve the internal datacenter into a ‘private cloud’ that offers many of the performance, scalability, and cost-saving benefits associated with public clouds. Microsoft provides the foundation for private clouds with infrastructure solutions to match a range of customer sizes, needs and geographies.”

The public cloud: “Cloud computing is expanding the traditional web-hosting model to a point where enterprises are able to off-load commodity applications to third-party service providers (hosters) and, in the near future, the Microsoft Azure Services Platform. Using Microsoft infrastructure software and Web-based applications, the public cloud allows companies to move applications between private and public clouds.”

Firstly, Microsoft defines their notion of Public and Private Clouds based upon the limits of their product offerings.  In their terms, Private Clouds = Hyper-V, Public Clouds = Azure, and never the two shall meet. So using these definitions, sure, Private Clouds are just “on-premise datacenters.”  She ought to know.  She wrote about it here, and I responded in a post titled “Incomplete Thought: Looking At An “Open & Interoperable Cloud” Through Azure-Colored Glasses.”

Private Clouds aren’t just virtualized datacenters with chargeback/billing.

As I’ve said here many, many times, this sort of definition is short-sighted, inaccurate and limiting:

  • Private Clouds: Even A Blind Squirrel Finds A Nut Once In A While
  • The Vagaries Of Cloudcabulary: Why Public, Private, Internal & External Definitions Don’t Work…
  • Internal v. External/Private v. Public/On-Premise v. Off-Premise: It’s all Cloud But How You Get There Is Important.
  • Private Clouds: Your Definition Sucks
  • Mixing Metaphors: Private Clouds Aren’t Defined By Their Location…

Can we stop butchering this term now, please?

So no, Private Cloud is NOT just a euphemism for on-premise datacenters.

/Hoff

I’m Sorry, But Did Someone Redefine “Open” and “Interoperable” and Not Tell Me?

February 26th, 2009 3 comments

I've got a problem with the escalation of VMware's marketing abuse of the terms "open," "interoperable," and "standards."  I'm a fan of VMware, but this is getting silly.


When a vendor like VMware crafts an architecture, creates a technology platform, defines an API, gets providers to subscribe to offering it as a service, and does all of this with the full knowledge that it REQUIRES their platform to really function, and THEN calls it "open" and "interoperable" because an API exists, that is intellectually dishonest.  Implying it's a "standard" available regardless of platform is about as transparent as saran wrap.


We are talking about philosophically and diametrically-opposed strategies between virtualization platform players here, not minor deltas along the bumpy roadmap highway.  What's at stake is fundamentally the success or failure of these companies.  Trying to convince the world that VMware, Microsoft, Citrix, etc. are going to huddle for a group hug is, well, insulting.

This recent article in the Register espousing VMware's strategy really highlighted some of these issues as it progressed. Here's the first bit which I agree with:

There is, they fervently say, no other enterprise server and data centre virtualisation play in town. Businesses wanting to virtualise their servers inside a virtualising data centre infrastructure have to dance according to VMware's tune. Microsoft's Hyper-V music isn't ready, they say, and open source virtualisation is lagging and doesn't have enterprise credibility.

Hyperbole aside, I'd agree with most of that.  We can easily start a religious debate here, but let's not for now.  It gets smelly where the article starts talking about vCloud, which, given VMware's protectionist stance based on fair harbor tactics, amounts to nothing more (still) than a vision.  None of the providers will talk about it because they are under NDA.  We don't really know what vCloud means yet:

Singing the vcloud API standard song is very astute. It reassures all people already on board and climbing on board the VMware bandwagon that VMware is open and not looking to lock them in. Even if Microsoft doesn't join in this standardisation effort with a whole heart, it doesn't matter so long as VMware gets enough critical mass.

How do you describe having to use VMware's platform and API as VMware "…not looking to lock them in?" Of course they are!  

To fully leverage the power of the InterCloud in this model, it really amounts to either an ALL VMware solution or settling for basic connectors for coarse-grained networked capability.

Unless you have feature-parity or true standardization at the hypervisor and management layers, it's really about interconnectivity not interoperability.  Let's be honest about this.

By having external cloud suppliers and internal cloud users believe that cloud federation through VMware's vCloud infrastructure is realistic then the two types of cloud user will bolster and reassure each other. They want it to happen and, if it does, then Hyper-V is locked out unless it plays by the VMware-driven and VMware partner-supported cloud standardisation rules, in which case Microsoft's cloud customers are open to competitive attack. It's unlikely to happen.

"Federation" in this context really only applies to lessening/evaporating the difference between public and private clouds, not clouds running on different platforms.  That's, um, "lock-in."


Standards are great, especially when they're yours. Now we're starting to play games.  VMware should basically just kick their competitors in the nuts and say this to us all:

"If you standardize on VMware, you get to leverage the knowledge, skills, and investment you've already made — regardless of whether you're talking public vs. private.  We will make our platforms, API's and capabilities as available as possible.  If the other vendors want to play, great.  If not, your choice as a customer will determine if that was a good decision for them or not."

Instead of dancing around trying to muscle Microsoft into playing nice (which they won't) or insulting our intelligence by handwaving that you're really interested in free love versus world domination, why don't you just call a spade a virtualized spade?

And by the way, if it weren't for Microsoft, we wouldn't have this virtualization landscape to begin with…not because of the technology contributions to virtualization, but rather because the inefficiencies of single app/OS/hardware affinity using Microsoft OS's DROVE the entire virtualization market in the first place!

Microsoft is no joke.  They will maneuver to outpace VMware. Hyper-V and Azure will be a significant threat to VMware in the long term, and this old Microsoft joke will come back to haunt VMware's abuse of the words above:

Q: How many Microsoft engineers does it take to change a lightbulb?  
A: None, they just declare darkness a standard.

Is it getting dimmer in here?


/Hoff

Xen.Org Launches Community Project To Bring VM Introspection to Xen

October 29th, 2008 No comments

Hat-tip to David Marshall for the pointer.

In what can only be described as the natural evolution of Xen's security architecture, news comes of a Xen community project to integrate a VM Introspection API and accompanying security functionality into Xen.  Information is quite sparse, but I hope to get more information from the project leader, Stephen Spector, shortly. (*Update: Comments from Stephen below)

This draws obvious parallels to VMware's VMsafe/vNetwork APIs, which will yield significant differentiation and ease in integrating security capabilities with VMware infrastructure when solutions start turning up in Q1'09.

From the Xen Introspection Project wiki:

The purpose of the Xen Introspection Project is to design an API for performing VM introspection and implement the necessary functionality into Xen. It is anticipated that the project will include the following activities (in loose order): (1) identification of specific services/functions that introspection should support, (2) discussion of how that functionality could be achieved under the Xen architecture, (3) prioritization of functionality and activities, (4) API definition, and (5) implementation.

Some potential applications of VM introspection include security, forensics, debugging, and systems management.


It is important to note that this is not the first VMI project for Xen.  There is also the Georgia Tech XenAccess project, led by Bryan Payne, which is a library that allows a privileged domain to gain access to the runtime state of another domain.  XenAccess focuses (initially) on memory introspection but is adaptable to disk I/O as well.
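For a sense of what that kind of privileged-domain access implies, here's a minimal sketch of the sort of API surface an introspection library might expose; everything below is hypothetical and is not XenAccess's (or Xen's) actual interface:

```python
# Hypothetical sketch only: the rough API surface a VM introspection library
# might expose to a privileged domain (dom0). This is NOT XenAccess's real API.
from abc import ABC, abstractmethod

class GuestIntrospection(ABC):
    """Read-only view of a running guest's state, obtained from the privileged domain."""

    @abstractmethod
    def read_memory(self, paddr: int, length: int) -> bytes:
        """Read guest-physical memory (memory introspection)."""

    @abstractmethod
    def read_kernel_symbol(self, name: str) -> int:
        """Resolve a guest kernel symbol to an address (requires a guest profile)."""

    @abstractmethod
    def read_disk_block(self, block: int) -> bytes:
        """Read a block from the guest's virtual disk (disk introspection)."""

# A security monitor built on such an interface could, for example, start at
# read_kernel_symbol("init_task") and walk the guest's process list to spot
# hidden processes, all without installing an agent inside the guest.
```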


I wonder if we'll see XenAccess fold into the VMI Xen project?

Astute readers will also remember my post titled "The Ghost of Future's Past: VirtSec Innovation Circa 2002," in which I reviewed work done by Mendel Rosenblum and Tal Garfinkel (both of VMware fame) on the LiveWire project, which outlined VMI for isolation and intrusion detection.



What's old is new again.

Given my position advocating VMI and the need for inclusion of this capability in all virtualization platforms, versus that of Simon Crosby, Citrix's (XenSource) CTO, in our debate on the matter, I'll be interested to see how this project develops and whether Citrix contributes.

Microsoft desperately needs a similar capability in Hyper-V if they are to be successful in ensuring security beyond VMM integrity in their platform, and if I were a betting man, despite their proclivity for open-closedness, I'd say we'll see something to this effect soon.

I look forward to more information and charting the successful evolution of both the Xen Introspection Project and XenAccess.

/Hoff

Update: I reached out to Stephen Spector and he was kind enough to respond to a couple of points raised in this blog (paraphrased from a larger email):

Bryan Payne from Georgia Tech will be participating in the project, and there is some other work going on at the University of Alaska at Fairbanks. The leader for the project is Stephen Brueckner from NYC-AT.

As for participation, Citrix has people already committed and I have 14 people who have asked to take part.

Sounds like the project is off to a good start! 

Categories: Citrix, Microsoft, Virtualization, VMware

Secure Services in the Cloud (SSaaS/Web2.0) – InternetOS Service Layers

July 13th, 2007 2 comments

The last few days of activity involving Google and Microsoft have really catalyzed some thinking and demonstrated some very intriguing indicators as to how the delivery of applications and services is dramatically evolving. 

I don’t mean the warm and fuzzy marketing fluff.  I mean some real anchor technology investments by the big boys putting their respective stakes in the ground as they invest hugely in redefining their business models to set up for the future.

Enterprises large and small are really starting to pay attention to the difference between infrastructure and architecture and this has a dramatic effect on the service providers and supply chain who interact with them.

It’s become quite obvious that there is huge business value associated with divorcing the need for "IT" to focus on physically instantiating and locating "applications" on "boxes" and instead delivering "services" with the Internet/network as the virtualized delivery mechanism.

Google v. Microsoft – Let’s Get Ready to Rumble!

My last few posts on Google’s move to securely deliver a variety of applications and services represent the uplift of the "traditional" perspective of backoffice SaaS offerings such as Salesforce.com, but they also highlight the migration of desktop applications and utility services to the "cloud."

This is really executing on the thin-client, Internet-centric vision from back in the day o’ the bubble, when we saw a ton of Internet-borne services such as storage, backup, etc. using the "InternetOS" as the canvas for service.

So we’ve talked about Google.  I maintain that their strategy is to ultimately take on Microsoft — including backoffice, utility and desktop applications.  So let’s look @ what the kids from Redmond are up to.

What Microsoft is developing toward with their vision of a CloudOS was just recently expounded upon by one Mr. Ballmer.

Not wanting to lose mindshare or share of wallet, Microsoft is maneuvering to give the customer control over how they want to use applications and, more importantly, how they might be delivered.  Microsoft Live bridges the gap between the traditional desktop and the "cloud," putting that capability into the latter.

Let’s explore that a little:

In addition to making available its existing services, such as mail and instant messaging, Microsoft also will create core infrastructure services, such as storage and alerts, that developers can build on top of. It’s a set of capabilities that have been referred to as a "Cloud OS," though it’s not a term Microsoft likes to use publicly.

Late last month, Microsoft introduced two new Windows Live Services, one for sharing photos and the other for all types of files. While those services are being offered directly by Microsoft today, they represent the kinds of things that Microsoft is now promising will also be made available to developers.

Among the other application and infrastructure components Microsoft plans to open are its systems for alerts, contact management, communications (mail and messenger) and authentication.

As it works to build out the underlying core services, Microsoft is also offering up applications to partners, such as Windows Live Hotmail, Windows Live Messenger and the Spaces blogging tool.

Combine the advent of "thinner" end-points (read: mobility products) with high-speed, lower-latency connectivity and we can see why this model is attractive and viable.  I think this battle is heating up and the consumer will benefit.

A Practical Example of SaaS/InternetOS Today?

So if we take a step back from Google and Microsoft for a minute, let’s take a snapshot of how one might compose, provision, and deploy applications and data as a service using a similar model over the Internet with tools other than Live or Google Gears.

Let me give you a real-world example — deliverable today — of this capability with a functional articulation of this strategy: on-demand services and applications provided via virtualized datacenter delivery architectures using the Internet as the transport.  I’m going to use a mashup of two technologies: Yahoo Pipes and 3Tera’s AppLogic.

Yahoo Pipes is "…an interactive data aggregator and manipulator that lets you mashup your favorite online data sources."  Assuming you have data from various sources you want to present, an application environment such as Pipes will allow you to dynamically access, transform and present this information any way you see fit.

This means that you can create what amounts to applications and services on demand.
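As a rough illustration of the consumption side of that idea, here’s a minimal sketch of pulling a published pipe’s output and re-presenting it; the pipe ID is made up, and the URL parameters and result envelope are from memory, so treat them as assumptions rather than gospel:

```python
# Minimal sketch: consume the JSON output of a (hypothetical) published pipe
# and re-present it. The pipe ID, URL parameters and item fields are assumed.
import json
from urllib.request import urlopen

PIPE_URL = "http://pipes.yahoo.com/pipes/pipe.run?_id=EXAMPLE_PIPE_ID&_render=json"

def fetch_pipe_items(url: str = PIPE_URL):
    with urlopen(url) as resp:
        feed = json.load(resp)
    # Pipes wrapped results in a value/items envelope (per my recollection).
    return feed.get("value", {}).get("items", [])

for item in fetch_pipe_items():
    print(item.get("title"), "->", item.get("link"))
```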

Let’s agree, however, that while you have the data integration/presentation layer, in many cases you would traditionally require a complex collection of infrastructure in which this source data is housed, accessed, maintained and secured.

However, rather than worry about where and how the infrastructure is physically located, let’s use the notion of utility/grid computing to dynamically make available an on-demand architecture that is modular, reusable and flexible, making my service delivery a reality — using the Internet as a transport.

Enter 3Tera’s AppLogic:

3Tera’s AppLogic is used by hosting providers to offer true utility computing. You get all the control of having your own virtual datacenter, but without the need to operate a single server.

  • Deploy and operate applications in your own virtual private datacenter
  • Set up infrastructure, deploy apps and manage operations with just a browser
  • Scale from a fraction of a server to hundreds of servers in days
  • Deploy and run any Linux software without modifications
  • Get your life back: no more late night rushes to replace failed equipment

In fact, BT is using them as part of the 21CN project which I’ve written about many times before.

So check out this vision, assuming the InternetOS as a transport.  It’s the drag-and-drop, point-and-click Metaverse of virtualized application and data combined with on-demand infrastructure.

You first define the logical service composition and provisioning through 3Tera with a visual drag-drop canvas, defining firewalls, load-balancers, switches, web servers, app. servers, databases, etc.  Then you click the "Go" button.  AppLogic provisions the entire thing for you without you even necessarily knowing where these assets are.
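Purely to illustrate what that logical composition captures, here’s the same idea expressed declaratively; this is a sketch of my own, not AppLogic’s actual format or API:

```python
# Illustrative only: the "canvas" as data. A provisioning engine could walk this
# and instantiate/wire each tier without the user knowing where the hardware is.
SERVICE_COMPOSITION = {
    "firewall":      {"allow": ["tcp/443"], "in_front_of": "load_balancer"},
    "load_balancer": {"algorithm": "round-robin", "pool": "web_tier"},
    "web_tier":      {"instances": 4, "image": "linux-web", "talks_to": "app_tier"},
    "app_tier":      {"instances": 2, "image": "linux-app", "talks_to": "database"},
    "database":      {"instances": 1, "image": "linux-db", "storage_gb": 200},
}

def provision(composition: dict) -> None:
    """Pretend 'Go' button: in a real grid, each component would be placed and wired."""
    for name, spec in composition.items():
        print(f"provisioning {name}: {spec}")

provision(SERVICE_COMPOSITION)
```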

Then, use something like Pipes to articulate how data sources can be accessed, consumed and transformed to deliver the requisite results.  All over the Internet, transparently and securely.

Very cool stuff.

Here are some screen-caps of Pipes and 3Tera.

[Screenshot: Yahoo Pipes]

[Screenshot: 3Tera AppLogic]