Archive for June, 2009

Virtual Networking Battle Heating Up: Citrix Leads $10 Million Investment In Vyatta

June 9th, 2009 No comments

Those crafty Citrix chaps are at it again.

Last month I reported from Citrix Synergy about discussions I had with Simon Crosby and Ian Pratt about the Citrix/Xen Open vSwitch, which is Citrix’s answer to the Cisco Nexus 1000V married to VMware’s vSphere.

Virtualization.com this morning reported that Vyatta — who describe themselves as the “open source alternative to Cisco” — just raised another round of funding, but check out who’s leading it:

Vyatta today announced it has completed its $10 million Series C round of financing led by Citrix Systems. The new funding round also includes existing investors, Comcast Interactive Capital, Panorama Capital, and ArrowPath Venture Partners. As part of the investment, Gordon Payne, senior vice president and general manager of the Delivery Systems Division at Citrix, has joined the Vyatta Board of Directors where he will assist the company in its next phase of development.

Today, Vyatta also announced that it has joined the Citrix Ready product verification program to create solutions for customers deploying cloud computing infrastructures.

Vyatta will use the funds for operating capital as the company scales its sales efforts and accelerates growth across multiple markets.

Vyatta runs on standard x86 hardware and can be virtualized with modern hypervisors, including the Citrix XenServer™ virtualization platform. Vyatta delivers a full set of networking features that allow customers to connect, protect, virtualize, and optimize their networks, improving performance, reducing costs, and increasing manageability and flexibility over proprietary networking solutions. Vyatta has been deployed by hundreds of customers world-wide in both virtual and non-virtual environments.

This is very, very interesting stuff indeed, and it’s clear where Citrix has its sights aimed.  This will be good for customers regardless of platform, because it’s going to drive innovation even further.

The virtual networking stacks — and what they enable — are really going to start to drive significant competitive advantage across virtualization and Cloud vendors.  It ought to give customers significant pause when it comes to thinking about their choice of platform and integration.

Nicely executed move, Mr. Crosby.

/Hoff

SQUIRREL! I’m joining Cisco.

June 9th, 2009 10 comments

From the Cisco Data Center Networks Blog:

So, for me, one of the best parts of working here at Cisco is the opportunity to work with some incredibly smart folks.  Today, I can add one more person to that group of folks—Christofer Hoff is joining the Cisco Data Center Solutions team.  Chris has built a solid reputation in the industry for domain expertise, forward thinking and incisive commentary blended with a healthy dose of wit.  I know Chris has the tenacity of a squirrel chasing an acorn, and I am personally quite pleased to welcome Chris to the team as I see he will add both depth and breadth to our efforts.  So, if you are not familiar with Chris, definitely check out his blog, Rational Survivability and you can also follow him on Twitter as @Beaker.

Thanks for the warm welcome, Omar.  I’m beyond psyched. Besides getting to work with some awesome friends, I finally get to hug a Nexus 7000.  Getting my fingers back in the pie with cutting-edge technology, partners and customers should translate into even more interesting things to discuss when appropriate.  I can’t wait.

To answer your question before you ask it: “Yes, Same blog time. Same blog channel. Now with extra datacenter fu.”

/Hoff

Categories: Career, Cisco

The Nines Have It…

June 8th, 2009 4 comments

There are numerous cliches and buzzwords we hear daily that creep into our lexicon without warrant of origin or meaning.

One of them that you’re undoubtedly used to hearing relates to the measurement of availability expressed as a percentage: the dreaded “nines.”

I read a story this morning on the launch of the “Stratus Trusted Cloud” that promises the following:

Since it is built on the industry’s most robust, scalable, fully redundant architecture, Stratus delivers unmatched performance, availability and security with 99.99% SLAs.

It’s interesting to note what 99.99% availability means within the context of an SLA — “four nines” means the equivalent of 52.6 minutes of resource unavailability per year.  That may sound perfectly wonderful, and it may even lead some to conclude that this exceeds what many enterprises can deliver today (I’m interested in the veracity of these claims).  However, I would ask you to consider this point:

I don’t have access to the contract/SLA to know whether this metric refers to total availability (including both planned and unplanned downtime) or counts only unplanned downtime.

This is pretty important, especially in light of what we’ve seen with other large and well-established Cloud service providers who offer similar or better SLAs (with or without real fiscal repercussions) and have experienced unplanned outages for hours on end.

Is four nines good enough for your most critical applications?  Do you measure this today?  Does it even matter?

/Hoff


Here’s a handy availability reference table from Wikipedia that you can print out:

Availability %           | Downtime per year | Downtime per month* | Downtime per week
90%                      | 36.5 days         | 72 hours            | 16.8 hours
95%                      | 18.25 days        | 36 hours            | 8.4 hours
98%                      | 7.30 days         | 14.4 hours          | 3.36 hours
99%                      | 3.65 days         | 7.20 hours          | 1.68 hours
99.5%                    | 1.83 days         | 3.60 hours          | 50.4 minutes
99.8%                    | 17.52 hours       | 86.23 minutes       | 20.16 minutes
99.9% (“three nines”)    | 8.76 hours        | 43.2 minutes        | 10.1 minutes
99.95%                   | 4.38 hours        | 21.56 minutes       | 5.04 minutes
99.99% (“four nines”)    | 52.6 minutes      | 4.32 minutes        | 1.01 minutes
99.999% (“five nines”)   | 5.26 minutes      | 25.9 seconds        | 6.05 seconds
99.9999% (“six nines”)   | 31.5 seconds      | 2.59 seconds        | 0.605 seconds

* For monthly calculations, a 30-day month is used.
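
If you’d rather derive these figures than print them out, here’s a minimal Python sketch of the arithmetic behind the table (my illustration, not part of the Wikipedia reference), assuming a 365-day year, the 30-day month from the footnote, and a 7-day week:

```python
# Minimal sketch (illustrative, not from the Wikipedia table): derive the
# allowed downtime for a given availability percentage. Assumes a 365-day
# year, a 30-day month (per the footnote above) and a 7-day week.

def downtime_hours(availability_pct: float, period_hours: float) -> float:
    """Allowed downtime, in hours, for a given availability percentage."""
    return (1 - availability_pct / 100.0) * period_hours

YEAR, MONTH, WEEK = 365 * 24, 30 * 24, 7 * 24

for pct in (99.0, 99.9, 99.99, 99.999):
    print(f"{pct}%: "
          f"{downtime_hours(pct, YEAR) * 60:8.1f} min/year, "
          f"{downtime_hours(pct, MONTH) * 60:6.2f} min/month, "
          f"{downtime_hours(pct, WEEK) * 60:6.2f} min/week")

# 99.99% ("four nines") works out to roughly 52.6 minutes per year, the
# figure referenced in the SLA discussion above.
```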

Most CIO’s Not Sold On Cloud? Good, They Shouldn’t Be…

June 7th, 2009 13 comments

I find it amusing that there is so much drama surrounding the notion of Cloud adoption.

There are those who paint Cloud as the savior of today’s IT great unwashed and others who claim it’s simply hype and not ready for prime time.

They’re both right and Cloud adoption is exactly where it should be today.

Here’s a great illustration: “Cloud or Fog? Two-Thirds of UK CIOs and CFOs Not Yet Sold on Cloud“:

Sixty-seven per cent of Chief Information Officers and Chief Financial Officers in UK enterprises say they are either not planning to adopt cloud computing (35 per cent) or are unsure (32 per cent) of whether their company will adopt cloud computing during the next two years, according to a major new report from managed hosting (http://www.ntteuropeonline.com/) specialists NTT Europe Online.

Whose perspective you share comes down to well-established market dynamics relating to technology adoption and should not come as a surprise to anyone.

One of the best-known examples of this can be visualized by a graphical representation of what Geoffrey Moore describes in his book “Crossing the Chasm: Marketing and Selling High-Tech Products to Mainstream Customers“:

[Image: the technology adoption lifecycle curve]

Because I’m lazy, I’ll just refer you to the Wikipedia entry which describes “the Chasm” and the technology adoption lifecycle:

In Crossing the Chasm, Moore begins with the diffusion of innovations theory from Everett Rogers, and argues there is a chasm between the early adopters of the product (the technology enthusiasts and visionaries) and the early majority (the pragmatists). Moore believes visionaries and pragmatists have very different expectations, and he attempts to explore those differences and suggest techniques to successfully cross the “chasm,” including choosing a target market, understanding the whole product concept, positioning the product, building a marketing strategy, choosing the most appropriate distribution channel and pricing.

Crossing the Chasm is closely related to the Technology adoption lifecycle where five main segments are recognized; innovators, early adopters, early majority, late majority and laggards. According to Moore, the marketer should focus on one group of customers at a time, using each group as a base for marketing to the next group. The most difficult step is making the transition between visionaries (early adopters) and pragmatists (early majority). This is the chasm that he refers to. If a successful firm can create a bandwagon effect in which the momentum builds and the product becomes a de facto standard. However, Moore’s theories are only applicable for disruptive or discontinuous innovations. Adoption of continuous innovations (that do not force a significant change of behavior by the customer) are still best described by the original Technology adoption lifecycle. Confusion between continuous and discontinuous innovation is a leading cause of failure for high tech products.

Cloud is firmly entrenched in the Chasm, clawing its way out as the market matures*.

By my estimate, it will arrive at the early majority phase over the next 18-24 months.

Those who are today evangelizing Cloud Computing are the “technology enthusiasts” and “visionaries” of the “innovator” and “early adopter” phases, respectively.  If you look at the article I quoted at the top of this post, CIOs are generally NOT innovators or early adopters, so…

Don’t be put off or overly excited when you see hyperbolic references to Cloud adoption, because depending upon who you are and whom you’re talking about, you’ll likely always get a different perspective, for completely natural reasons.

/Hoff

* To be clear, I wholeheartedly agree with James Urquhart that “Cloud” is not a technology; it’s an operational model. So as not to confuse people: within the context of the “technology adoption curve” above, “model” or “paradigm” works just as well.  It doesn’t really have to be limited to a pure technology.

The Six Worst Cloud Security Mistakes? I Can Do You One Better…

June 6th, 2009 2 comments

I recently read a story from Kelly Jackson Higgins of Dark Reading outlining what are described as the “Six Worst Cloud Security Mistakes”:

  1. Assuming the cloud is less secure than your own data center.
  2. Not verifying, testing, or auditing the security of your cloud-based service provider.
  3. Failing to vet your cloud provider’s viability as a business.
  4. Assuming you’re no longer responsible for securing data once it’s in the cloud.
  5. Putting insecure apps in the cloud and expecting that to make them more secure.
  6. Having no clue that your business units are already using some cloud-based services.

A very interesting list, for sure, and a reasonable set of potential “mistakes” to ponder, but I’m really having trouble with one in particular.

The one that’s getting my goose honking is #1: Assuming the cloud is less secure than your own data center.

Really? I maintain that this generalization about Cloud being more or less secure than one’s own capabilities is a silly thing to argue; let’s see why.

We start off with what I think is a strange bit of contradiction:

It’s only natural for security pros to be control freaks. Being charged with securing a company’s data and intellectual property requires a healthy dose of paranoia and protectionism. But sometimes that leads to false impressions about cloud security. “One common mistake is that as soon as you talk about the cloud, [organizations] assume it’s less secure than their own IT security operation,” says Chenxi Wang, principal analyst at Forrester Research. “More control does not necessarily lead to more security.”

Assuming that one of the reasons a company might consider outsourcing its IT security operations to a third-party [Cloud] provider IS that the provider can exercise at least as much control as the company can itself, it occurs to me this sort of statement can be interpreted many ways.  Here’s one, for example.

I find myself confused by the highlighted sentence regarding control and security within the context of what is written.  In fact, if you read the next paragraph, it seems to imply that because a Cloud provider has more control, it can offer better security:

In fact, with services such as Google’s SaaS, data loss is less likely because the information is accessible from anywhere and anytime without saving it to an easily lost or stolen USB stick or CD, according to Eran Feigenbaum, director of security for Google Apps. And Google’s security-patching process is more streamlined than a typical enterprise because its server architecture is homogeneous, he says. “Many attacks [come from a] lack of patch management and server misconfiguration…For Google, when the time comes to patch, we can do so across the entire platform in a uniform fashion,” he said.

I’ll say it again: SaaS is a convenient way of dumbing down “Cloud Computing” to a singular instance/application/service, but it completely ignores Platform and Infrastructure as a Service offerings, which are wildly different animals, especially from a security perspective.  Please see my latest commentary about this in my response to Bruce Schneier’s equation of SaaS with Cloud Computing to the exclusion of PaaS/IaaS.

I’ve made the point before that comparing managing/patching a single application and its supporting infrastructure in a SaaS offering to an enterprise that would otherwise have to support not only that service but potentially hundreds more is a completely unfair comparison.  If you want to compare apples to apples, I’d maintain that any organization with a mature security program whose only charter was to support (securely) a single application could do it just as well as a SaaS provider, all other things being equal.

The differences here become scale and multi-tenancy in the case of the Cloud provider, and I think those issues actually make a Cloud environment more difficult to secure.

Also, suggesting with the Google example that “data loss is less likely” because it’s “accessible from anywhere” and doesn’t involve “…lost or stolen USB stick(s) or CD(s)” seems awfully arbitrary, given that one of the most interesting data loss/leakage incidents in recent Cloud history came from Google’s Docs offering due to an operator (Google) system misconfiguration.  USB sticks and CDs are also a very narrow definition of data loss/leakage.

Then there’s the more global view SaaS and other cloud providers have, Feigenbaum says. “As an enterprise, you only see a small slice of what’s affecting you [threat-wise],” Feigenbaum said during a panel on cloud security at the RSA Conference in April. “A cloud provider can have the economy of scale for a holistic vision…the cloud shifts security and also makes it better,” he said.

I don’t have anything to argue about here; a wider perspective and better visibility is a good thing.  Again, however, this depends upon the type of service, what is being monitored and protected, on behalf of whom and from whom.

But that doesn’t mean you should blindly trust your cloud provider, though the larger ones do tend to have a better handle on threats due to their size, Forrester’s Wang says. “These people deal with security issues at more complex levels than your own IT team sees on a daily basis,” Wang says. “It’s a misconception to say cloud security is definitely less capable or more problematic.”

No, you shouldn’t blindly trust your providers, but that last statement suggests we should similarly trust that providers do a better job and deal with security issues at more complex levels?  What does that even mean? Please do NOT tell me that a SAS 70 Type II is your answer.  Just as “It’s a misconception to say cloud security is definitely less capable or more problematic,” I can just as easily suggest the converse is true without evidence.

I would like to see the empirical data that backs up that set of statements, and the common metrics I can use to measure across providers and enterprises alike.  Thought so.

Thus far, security has been one of the main hurdles to adoption of cloud-based services, says Michelle Dennedy, chief governance officer for cloud computing at Sun Microsystems. “Trust in the cloud, more than technical abilities, has been hindering adoption,” Dennedy says. “But the cloud can be more secure than a private environment in many cases.”

Michelle is definitely correct; trust represents a fundamental issue with Cloud adoption, and it rolls both ways.  Asking us to “trust but verify” when what we’re being asked to trust can’t easily be verified poses a very difficult scenario indeed.

By the way, I think the worst Cloud Security mistake is not knowing what Cloud Security even means.

/Hoff

What Do You Mean When You Say “Open” ?

June 6th, 2009 1 comment

I saw a great post from Seth Godin wherein he highlighted many interpretations of “open.” Here are some of them:

  • open source : a program whose source code is made available for use or modification as users or other developers see fit. If a car goes open source, then you’re permitting others to copy your engine and body design, improve it, put their improvements back into the pool and share some more.
  • open infrastructure: Amazon’s cloud is an example of this. You build the pipes and allow people to rent them to build their own systems on.
  • open architecture: A system (hardware or software) where people can learn how it works and then build things to plug in to extend it. The IBM PC had an open architecture, which meant that people could build sound cards or other devices to plug in (without asking IBM’s permission).
  • open standards: relying on rules that are widely used, consensus based, published and maintained by recognized industry standards organizations. It means that you’re not in charge, the standards guys are. Bluetooth is an example of attempting this, so is USB.
  • open access: APIs that make it easy for people to get at the data on your platform (twitter is a great example, so is Google maps.)

These are just a few.

I hear this word a lot in our industry.  It’s one that people need to stop abusing, or at least start clarifying in terms of context, much like “free” or “Cloud.”

As Seth asked “What kind of open are you looking for?”

/Hoff
*image from mag3737’s Flickr Photostream

Categories: Jackassery

Dear Mr. Schneier, If Cloud Is Nothing New, Why Are You Talking So Much About It?

June 3rd, 2009 13 comments


Update: Please see this post if you’re wondering why I edited this piece.

I read a recent story in the Guardian from Bruce Schneier titled “Be Careful When You Come To Put Your Trust In the Clouds” in which he suggests that Cloud Computing is “…nothing new.”

Fundamentally it’s hard to argue with that title, as clearly we’ve got issues with security and trust models as they relate to Cloud Computing, but the body of the piece seems to be at odds with Schneier’s ever-grumpy dismissal of Cloud Computing in the first place.  We need transparency and trust: got it.

Many of the things Schneier says make perfect sense, whilst others just make me scratch my head.  Let’s look at a couple of them:

This year’s overhyped IT concept is cloud computing. Also called software as a service (Saas), cloud computing is when you run software over the internet and access it via a browser. The salesforce.com customer management software is an example of this. So is Google Docs. If you believe the hype, cloud computing is the future.

Clearly there is a lot of hype around Cloud Computing, but I believe it’s important — especially as someone who spends a lot of time educating and evangelizing — that people like myself and Schneier effectively separate the hype from the hope and try and paint a clearer picture of things.

To that point, Schneier does his audience a disservice by dumbing down Cloud Computing to nothing more than outsourcing via SaaS.  Throwing the baby out with the rainwater seems a little odd to me and while it’s important to relate to one’s audience, I keep sensing a strange cognitive dissonance whilst reading Schneier’s opining on Cloud.

Firstly, and as I’ve said many times, Cloud Computing is more than just Software as a Service (SaaS).  SaaS is clearly the more mature and visible set of offerings in the evolving Cloud Computing taxonomy today, but one could argue that players like Amazon with their Infrastructure as a Service (IaaS) offering, or even the aforementioned Google and Salesforce.com with their Platform as a Service (PaaS) offerings, might take umbrage at Schneier’s suggestion that Cloud is simply some “…software over the internet” accessed “…via a browser.”

Overlooking IaaS and PaaS is clearly a huge miss here and it calls into question the point Schneier makes when he says:

But, hype aside, cloud computing is nothing new . It’s the modern version of the timesharing model from the 1960s, which was eventually killed by the rise of the personal computer. It’s what Hotmail and Gmail have been doing all these years, and it’s social networking sites, remote backup companies, and remote email filtering companies such as MessageLabs. Any IT outsourcing – network infrastructure, security monitoring, remote hosting – is a form of cloud computing.

The old timesharing model arose because computers were expensive and hard to maintain. Modern computers and networks are drastically cheaper, but they’re still hard to maintain. As networks have become faster, it is again easier to have someone else do the hard work. Computing has become more of a utility; users are more concerned with results than technical details, so the tech fades into the background.

<sigh> Welcome to the evolution of technology and disruptive innovation.  What’s the point?

Fundamentally, as we look beyond speeds and feeds, Cloud Computing — at all layers and offering types — is making huge headway and driving innovation in the evolution of automation, autonomics and the applied theories of dealing with massive scale in the compute, network and storage realms.  Sure, the underlying problems — and even some of the approaches — aren’t new in theory, but they are in practice.  The end result may very well be that a consumer of the service sees nothing technologically new, because those elements are abstracted away, but the economic, cultural, business and operational differences are startling.

If we look at what makes up Cloud Computing, the five elements I always point to are:

[Image: the five key ingredients of Cloud Computing]

Certainly the first three are present today — and have been for some while — in many different offerings.  However, combining the last two (on-demand, self-service scale and dynamism with new economic models of consumption and allocation) is quite different, especially when done at extreme levels of scale with multi-tenancy.

So let’s get to the meat of the matter: security and trust.

But what about security? Isn’t it more dangerous to have your email on Hotmail’s servers, your spreadsheets on Google’s, your personal conversations on Facebook’s, and your company’s sales prospects on salesforce.com’s? Well, yes and no.

IT security is about trust. You have to trust your CPU manufacturer, your hardware, operating system and software vendors – and your ISP. Any one of these can undermine your security: crash your systems, corrupt data, allow an attacker to get access to systems. We’ve spent decades dealing with worms and rootkits that target software vulnerabilities. We’ve worried about infected chips. But in the end, we have no choice but to blindly trust the security of the IT providers we use.

Saas moves the trust boundary out one step further – you now have to also trust your software service vendors – but it doesn’t fundamentally change anything. It’s just another vendor we need to trust.

Fair enough.  So let’s chalk one up here to “Cloud is nothing new — we still have to put our faith and trust in someone else.”  Got it.  However, by again excluding the notion of PaaS and IaaS, Bruce fails to recognize the differences in both responsibility and accountability that these differing models bring; limiting Cloud to SaaS, while convenient for a cute argument, does not a complete case make:

[Image: how responsibility and accountability shift across the SaaS, PaaS and IaaS models]

How much responsibility you are required to transfer, and/or feel comfortable transferring, depends upon the provider and the deployment model; the risks associated with an IaaS-based service can be radically different from those of a SaaS offering. With SaaS, security can be thought of from a monolithic perspective — that of the provider; they are responsible for it.  In the case of PaaS and IaaS, these trade-offs become more apparent, and you’ll find that the “outsourcing” of responsibility is diminished whilst the mantle of accountability is not.  This is pretty important if you want to be generic in your definition of “Cloud.”

Here’s where I see Bruce going off the rails from his “Cloud is nothing new” rant, much in the same way I’d expect he would suggest that virtualization is nothing new, either:

There is one critical difference. When a computer is within your network, you can protect it with other security systems such as firewalls and IDSs. You can build a resilient system that works even if those vendors you have to trust may not be as trustworthy as you like. With any outsourcing model, whether it be cloud computing or something else, you can’t. You have to trust your outsourcer completely. You not only have to trust the outsourcer’s security, but its reliability, its availability, and its business continuity.

You don’t want your critical data to be on some cloud computer that abruptly disappears because its owner goes bankrupt . You don’t want the company you’re using to be sold to your direct competitor. You don’t want the company to cut corners, without warning, because times are tight. Or raise its prices and then refuse to let you have your data back. These things can happen with software vendors, but the results aren’t as drastic.


Trust is a concept as old as humanity, and the solutions are the same as they have always been. Be careful who you trust, be careful what you trust them with, and be careful how much you trust them. Outsourcing is the future of computing. Eventually we’ll get this right, but you don’t want to be a casualty along the way.

So I see a huge contradiction.  How we secure our data — or allow others to — is very different in the Cloud; it *is* something new in its practical application.   There are profound operational, business and technical (let alone regulatory, legal, governance, etc.) differences that do pose new challenges. Yes, we should take the best practices related to “outsourcing” that we’ve built over time and apply them to Cloud.  However, the collision of virtualization, converged fabrics and Cloud Computing is pushing the boundaries of all we know.

Per the examples above, our challenges are significant.  The tech industry thrives on the ebb and flow of evolutionary punctuated equilibrium; what’s old is always new again, so it’s important to remember a couple of things:

  1. Harking back (a whopping 60 years) to the “dawn of time” of the IT/computing industry to make the case that things “aren’t new” is sort of silly, and simply proves you’re the tallest and loudest guy in a room full of midgets.  Here’s your sign.
  2. I don’t see any suggestions in all these rants about mainframes for how to make this better, only FUD.
  3. If “outsourcing is the future of computing” and we are to see both evolutionary and revolutionary disruptive innovation, shouldn’t we do more than simply hope that “…eventually we’ll get this right?”

The past certainly repeats itself, which explains why every 20 years bell-bottoms come back in style…but ignoring the differences in application, however incremental, is a bad idea.  In many regards we have not learned from our mistakes or fail to recognize patterns, but you can’t drive forward by only looking in the rear view mirror, either.

Regards,

/Hoff

Observations on “Securing Microsoft’s Cloud Infrastructure”

June 1st, 2009 1 comment

I was reading a blog post from Charlie McNerney, Microsoft’s GM of Business & Risk Management, Global Foundation Services, on “Securing Microsoft’s Cloud Infrastructure.”

Intrigued, I read the white paper to first get a better understanding of the context for his blog post and to also grok what he meant by “Microsoft’s Cloud Infrastructure.”  Was he referring to Azure?

The answer, per the whitepaper, is that Microsoft — along with everyone else in the industry — now classifies all of its online Internet-based services as “Cloud”:

Since the launch of MSN® in 1994, Microsoft has been building and running online services. The GFS division manages the cloud infrastructure and platform for Microsoft online services, including ensuring availability for hundreds of millions of customers around the world 24 hours a day, every day. More than 200 of the company’s online services and Web portals are hosted on this cloud infrastructure, including such familiar consumer-oriented services as Windows Live™ Hotmail® and Live Search, and business-oriented services such as Microsoft Dynamics® CRM Online and Microsoft Business Productivity Online Standard Suite from Microsoft Online Services. 

Before I get to the part I found interesting, I think the whitepaper (below) does a good job of providing a 30,000-foot view of how Microsoft applies lessons learned from its operational experience and the SDL to its “Cloud” offerings.  It’s something designed to market the fact that Microsoft wants us to know it takes security seriously.  Okay.

Here’s what I found interesting in Charlie’s blog post; it appears in the last two sentences (boldfaced):

The white paper we’re releasing today describes how our coordinated and strategic application of people, processes, technologies, and experience with consumer and enterprise security has resulted in continuous improvements to the security practices and policies of the Microsoft cloud infrastructure.  The Online Services Security and Compliance (OSSC) team within the Global Foundation Services division that supports Microsoft’s infrastructure for online services builds on the same security principles and processes the company has developed through years of experience managing security risks in traditional software development and operating environments. Independent, third-party validation of OSSC’s approach includes Microsoft’s cloud infrastructure achieving both SAS 70 Type I and Type II attestations and ISO/IEC 27001:2005 certification. We are proud to be one of the first major online service providers to achieve ISO 27001 certification for our infrastructure. We have also gone beyond the ISO standard, which includes some 150 security controls. We have developed 291 security controls to date to account for the unique challenges of the cloud infrastructure and what it takes to mitigate some of the risks involved.

I think it’s admirable that Microsoft is sharing its methodologies and ISMS objectives, and it’s a good thing that they have adopted ISO standards and secured SAS 70 attestation as a baseline.

However, I would be interested in understanding what 291 security controls means to a security posture versus, say, 178.  It sounds a little like Twitter follower counts.

I can’t really explain why those last two sentences stuck in my craw, but they did.

I’d love to know more about what Microsoft considers those “unique challenges of the cloud infrastructure” as well as the risk assessment framework(s) used to manage/mitigate them — I’m assuming they’ve made great STRIDEs in doing so. 😉

/Hoff