Archive

Posts Tagged ‘Cloud Security’

Comments on the PwC/TSB Debate: The cloud/thin computing will fundamentally change the nature of cyber security…

February 16th, 2010 2 comments

I saw a very interesting post on LinkedIn with the title PwC/TSB Debate: The cloud/thin computing will fundamentally change the nature of cyber security…

PricewaterhouseCoopers are working with the Technology Strategy Board (part of BIS) on a high profile research project which aims to identify future technology and cyber security trends. These statements are forward looking and are intended to purely start a discussion around emerging/possible future trends. This is a great chance to be involved in an agenda setting piece of research. The findings will be released in the Spring at Infosec. We invite you to offer your thoughts…

The cloud/thin computing will fundamentally change the nature of cyber security…

The nature of cyber security threats will fundamentally change as the trend towards thin computing grows. Security updates can be managed instantly by the solution provider so every user has the latest security solution, the data leakage threat is reduced as data is stored centrally, systems can be scanned more efficiently and if Botnets capture end-point computers, the processing power captured is minimal. Furthermore, access to critical data can be centrally managed and as more email is centralised, malware can be identified and removed more easily. The key challenge will become identity management and ensuring users can only access their relevant files. The threat moves from the end-point to the centre.

What are your thoughts?

My response is simple.

Cloud Computing, or “Thin Computing” as described above, doesn’t change the “nature” of (gag) “cyber security;” it simply changes its efficiency, investment focus, capital model and modality. As to the statement regarding threats moving “…from the end-point to the centre,” the attack surface really becomes amorphous and, given the potential monoculture introduced by the virtualization layers underpinning these operations, perhaps expands.

Certainly the benefits described in the introduction above do mean changes to who, where and when risk mitigation might be applied, but those activities are, in most cases, still the same as in non-Cloud and “thick” computing.  That’s not a “fundamental change” but rather an adjustment to a platform shift, just like when we went from mainframe to client/server.  We are still dealing with the remnant security issues (identity management, AAA, PKI, encryption, etc.) from prior  computing inflection points that we’ve yet to fix.  Cloud is a great forcing function to help nibble away at them.

But, if you substitute “client/server,” in relation to its evolution from the “mainframe era,” for “cloud/thin computing” above, it all sounds quite familiar.

As I alluded to, there are some downsides to this re-centralization, but it is important to note that I do believe that if we look at what PaaS/SaaS offerings and VDI/Thin/Cloud computing offer, they make us focus on protecting our information and building more survivable systems.

However, there’s a notable bifurcation occurring. Whilst the example above paints a picture of mass re-centralization, incredibly powerful mobile platforms are evolving.  These platforms (such as the iPhone) employ a hybrid approach, featuring native/local on-device applications and data storage combined with the potential for thin-client interaction with distributed Cloud computing services.*

These hyper-mobile and incredibly powerful platforms — and the requirements to secure them in this mixed-access environment — mean that the efficiency gains on one hand are compromised by the need to once again secure diametrically-opposed computing experiences.  It’s a “squeezing the balloon” problem.

The same exact thing is occurring in the Private versus Public Cloud Computing models.

/Hoff

* P.S. Bernard Golden also commented via Twitter regarding the emergence of sensor nets, which have a very interesting set of security implications for both the Cloud and mobile computing examples above.


The Automated Audit, Assertion, Assessment, and Assurance API (A6) Becomes: CloudAudit

February 12th, 2010 No comments

I’m happy to announce that the Automated Audit, Assertion, Assessment, and Assurance API (A6) working group is organizing under the brand of “CloudAudit.”  We’re doing so to enable reaching a broader audience, ensure it is easier to find us in searches and generally better reflect the mission of the group.  A6 remains our byline.

We’ve refined how we are describing and approaching solving the problems of compliance, audit, and assurance in the cloud space, and part of that is reflected in our re-branding.  You can find the genesis of A6 here in this series of posts. Meanwhile, you can keep track of all things CloudAudit at our new home: http://www.CloudAudit.org.

The goal of CloudAudit is to provide a common interface that allows Cloud providers to automate the Audit, Assertion, Assessment, and Assurance (A6) of their environments and allow authorized consumers of their services to do likewise via an open, extensible and secure API.  CloudAudit is a volunteer cross-industry effort from the best minds and talent in Cloud, networking, security, audit, assurance, distributed application and system architecture backgrounds.

Our execution mantra is to:

  • Keep it simple, lightweight and easy to implement; offer primitive definitions & language structure using HTTP(S)
  • Allow for extension and elaboration by providers and choice of trusted assertion validation sources, checklist definitions, etc.
  • Not require adoption of other platform-specific APIs
  • Provide interfaces to Cloud naming and registry services

The benefits to the cloud provider are clear: a single reference model that allows automation of many functions that today cost the business dearly in both manpower and time.  The base implementation is being designed to require little to no programmatic change to implement.  For the consumer and interested/authorized third parties, it allows on-demand examination of the same set of functions.
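To make the HTTP(S) angle concrete, here is a rough sketch of what an authorized consumer’s on-demand query might look like. The namespace layout, control IDs and manifest fields below are illustrative placeholders of my own (the working group hasn’t finalized any of this); the point is the shape of the interaction: plain HTTPS GETs against a well-known root.

    import json
    import urllib.request

    # Hypothetical CloudAudit-style namespace: assertions live as plain
    # HTTP(S) resources under a well-known root on the provider's side.
    BASE = "https://provider.example.com/.well-known/cloudaudit"

    def fetch_assertion(framework, control_id):
        """Pull the provider's assertion/evidence manifest for one control."""
        url = "%s/%s/%s/manifest.json" % (BASE, framework, control_id)
        req = urllib.request.Request(url, headers={"Accept": "application/json"})
        with urllib.request.urlopen(req) as resp:  # HTTPS handles transport security
            return json.load(resp)

    # An authorized consumer or third-party assessor examines evidence on
    # demand instead of waiting on a point-in-time paper audit:
    manifest = fetch_assertion("pcidss", "1.1.6")
    print(manifest.get("assertion"), manifest.get("evidence_uri"))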

Mapping to compliance, regulatory, service level, configuration, security and assurance frameworks as well as third party trust brokers is part of what A6 will also deliver.  CloudAudit is working closely with other alliance and standards body organizations such as the Cloud Security Alliance and ENISA.

If you want to know who’s working on making this a reality, there are hundreds of interested parties: consumers as well as providers such as Akamai, Amazon Web Services, Microsoft, NetSuite, Rackspace, Savvis, Terremark, Sun, VMware, and many others.

If you would like to get involved, please join the CloudAudit Working Group or visit the homepage here.

Here is the slide deck from the 2/12/10 working group call (our second) and a link to the WebEx playback of the call.


Microsoft Azure Going “Down Stack,” Adding IaaS Capabilities. AWS/VMware WAR!

February 4th, 2010 4 comments

It’s very interesting to see that now that infrastructure-as-a-service (IaaS) players like Amazon Web Services are clawing their way “up the stack” and adding more platform-as-a-service (PaaS) capabilities, Microsoft is going “down stack” and providing IaaS capabilities by adding RDP and VM support to Azure.

From Carl Brooks’ (@eekygeeky) article today:

Microsoft is expected to add support for Remote Desktops and virtual machines (VMs) to Windows Azure by the end of March, and the company also says that prices for Azure, now a baseline $0.12 per hour, will be subject to change every so often.

Prashant Ketkar, marketing director for Azure, said that the service would be adding Remote Desktop capabilities as soon as possible, as well as the ability to load and run virtual machine images directly on the platform. Ketkar did not give a date for the new features, but said they were the two most requested items.

This move begins a definite trend away from the original concept for Azure in design and execution. It was originally thought of as a programming platform only: developers would write code directly into Azure, creating applications without even being aware of the underlying operating system or virtual instances. It will now become much closer in spirit to Amazon Web Services, where users control their machines directly. Microsoft still expects Azure customers to code for the platform and not always want hands on control, but it is bowing to pressure to cede control to users at deeper and deeper levels.

One major reason for the shift is that there are vast arrays of legacy Windows applications users expect to be able to run on a Windows platform, and Microsoft doesn’t want to lose potential customers because they can’t run applications they’ve already invested in on Azure. While some users will want to start fresh, most see cloud as a way to extend what they have, not discard it.

This paves the way for enterprise customers running Hyper-V internally to take those VMs and run them on (or in conjunction with) Azure.

Besides the obvious competition with AWS in the public cloud space, there’s also a private cloud element. As it stands now, one of the primary differentiators for VMware from the private-to-public cloud migration/portability/interoperability perspective is that if you run vSphere in your enterprise, you can take the same VMs without modification and move them to a service provider who runs vCloud (based on vSphere).

This is a very interesting and smart move by Microsoft.

/Hoff


Where Are the Network Virtual Appliances? Hobbled By the Virtual Network, That’s Where…

January 31st, 2010 15 comments

Allan Leinwand from GigaOm wrote a great article asking “Where are the network virtual appliances?” This was followed up by another excellent post by Rich Miller.

Allan sets up the discussion describing how we’ve typically plumbed disparate physical appliances into our network infrastructure to provide discrete network and security capabilities such as load balancers, VPNs, SSL termination, firewalls, etc.  He then goes on to describe the stunted evolution of virtual appliances:

To be sure, some networking devices and appliances are now available in virtual form.  Switches and routers have begun to move toward virtualization with VMware’s vSwitch, Cisco’s Nexus 1000v, the open source Open vSwitch and routers and firewalls running in various VMs from the company I helped found, Vyatta.  For load balancers, Citrix has released a version of its Netscaler VPX software that runs on top of its virtual machine, XenServer; and Zeus Systems has an application traffic controller that can be deployed as a virtual appliance on Amazon EC2, Joyent and other public clouds.

Ultimately, I think it prudent for discussion’s sake to separate routing, switching and load balancing (connectivity) from functions such as DLP, firewalls and IDS/IPS (security), as lumping them together obscures the real problem: the latter is completely dependent upon the capabilities and functionality of the former.  This is what Allan almost gets to when describing his lament with the virtual appliance ecosystem today:

Yet the fundamental problem remains: Most networking appliances are still stuck in physical hardware — hardware that may or may not be deployed where the applications need them, which means those applications and their associated VMs can be left with major gaps in their infrastructure needs. Without a full-featured and stateful firewall to protect an application, it’s susceptible to various Internet attacks.  A missing load balancer that operates at layers three through seven leaves a gap in the need to distribute load between multiple application servers. Meanwhile, the lack of an SSL accelerator to offload processing may lead to performance issues and without an IDS device present, malicious activities may occur.  Without some (or all) of these networking appliances available in a virtual environment, a VM may find itself constrained, unable to take full advantage of the possible economic benefits.

I’ve written about this many, many times. In fact, almost three years ago I created a presentation called “The Four Horsemen of the Virtualization Security Apocalypse,” which described in excruciating detail how network virtual appliances were a big ball of fail and would be for some time. I further suggested that many of the “best-of-breed” products would ultimately become “good enough” features in virtualization vendors’ hypervisor platforms.

Why?  Because there are some very real problems with virtualization (and Cloud) as it relates to connectivity and security:

  1. Most of the virtual network appliances, especially those “ported” from versions that usually run on dedicated physical hardware (COTS or proprietary), do not provide feature, performance, scale or high-availability parity; most are hobbled or require per-platform customization or re-engineering in order to function.
  2. The resilience and high-availability options of today’s off-the-shelf virtual connectivity do not pair well with the mobility and dynamism of de-coupled virtual machines; VMs are ultimately temporal, and networks don’t like topological instability due to key components moving or disappearing.
  3. The performance and scale of virtual appliances still suffer when competing for I/O and resources on the same physical hosts as the guests they attempt to protect.
  4. Virtual connectivity is generally a function of the VMM (or a loadable module/domain therein). The architecture of the VMM has dramatic impact upon the architecture of the software designed to provide the connectivity, and vice versa.
  5. Security solutions are incredibly topology-sensitive.  Given the scenario in #1, when a VM moves or is distributed across the pooled infrastructure, unless the security capabilities are already present on the physical host or the connectivity and security layers share a control plane (or at least can exchange telemetry), things will simply break.
  6. Many virtualization (and especially cloud) platforms do not support protocols or topologies that many connectivity and security virtual appliances require to function (such as multicast for load balancing).
  7. It’s very difficult to mimic the in-line path requirements in virtual networking environments that would otherwise force traffic passing through the connectivity layers (layers 2 through 7) up through various policy-driven security layers (virtual appliances).
  8. There is no common methodology to express what security requirements the connectivity fabrics should ensure are available prior to allowing a VM to spool up, let alone move.
  9. Virtualization vendors who provide solutions for the enterprise have rich networking capabilities natively as well as with third-party connectivity partners, including VM and VMM introspection capabilities. As I wrote about here, mass-market Cloud providers such as Amazon Web Services or Rackspace Cloud have severely crippled networking.
  10. Virtualization and cloud vendors generally force many security-versus-performance tradeoffs when implementing introspection capabilities in their platforms: third-party code running in the kernel, scheduler prioritization issues, I/O limitations, etc.
  11. Many of the basic networking capabilities are being pushed lower into silicon (into the CPUs themselves), which makes virtual appliances even further removed from the guts that enable them.
  12. Physical appliances (in the enterprise) exist en masse.  Many of them provide highly scalable solutions for the specific functions Allan refers to.  The need exists, given the limitations I describe above, to provide for integration/interaction between them, the VMM and any virtual appliances in order to offload certain functions as well as provide coverage between the physical and the logical.

What does this mean?  It means that ultimately, to ensure their own survival, virtualization and cloud providers will depend less upon virtual appliances and add more of the basic connectivity AND security capabilities into the VMMs themselves, as it’s the only way to guarantee performance, scalability and resilience and to satisfy the security requirements of customers. There will be new generations of protocols, APIs and control planes that emerge to provide for this capability, but this will drive the same old integration battles we’re supposed to be absolved from with virtualization and Cloud.

Connectivity and security vendors will offer virtual replicas of their physical appliances to gain a foothold in virtualized/cloud environments, intercepting traffic (think basic traps/ACLs) and then interacting with higher-performing physical appliance security service overlays or embedded line cards in service chassis.  This is especially true in enterprises, but it poses many challenges in software-only, mass-market cloud environments, where what you’ll continue to get is simply basic connectivity and security with limited networking functionality.  This implies more and more security will be pushed into the guest and application logic layers to deal with this disconnect.

This is exactly where we are today with Cloud providers like Amazon Web Services: basic ingress-only filtering with a very simplistic, limited and abstracted set of both connectivity and security capabilities.  See “Dear Public Cloud Providers: Please Make Your Networking Capabilities Suck Less. Kthxbye.”  Will they add more functionality?  Perhaps. The question is whether they can afford to, in order to limit the impact that connectivity and security variability/instability can bring to an environment.
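To put a finer point on how simplistic that abstraction is, here is roughly the entirety of the network security model as seen from the API, sketched with today’s boto3 SDK rather than the 2010-era tooling (so treat the calls as anachronistic illustration): a named bag of ingress permits, and that’s it.

    import boto3

    ec2 = boto3.client("ec2")

    # The whole "firewall" is a named collection of ingress permits.
    sg = ec2.create_security_group(
        GroupName="web-tier",
        Description="Basic ingress-only filtering; nothing more",
    )

    # Permit tcp/80 from anywhere. There is no IDS hook, no layer-7
    # policy, no in-line path into which you can insert an appliance.
    ec2.authorize_security_group_ingress(
        GroupId=sg["GroupId"],
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 80,
            "ToPort": 80,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }],
    )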

That said, it’s certainly achievable, if you are willing and able, to construct a completely software-based networking environment.  These environments require a complete approach and stack re-write, though, along with an operational expertise that will be hard to support for those who have spent the last 20 years working in a different paradigm, and that’s a huge piece of this problem.

The connectivity layer — however integrated into virtualized and cloud environments it may seem — continues to limit how and what the security layers can do, and will for some time, thus limiting the uptake of virtual network and security appliances.

Situation normal.

/Hoff


Hacking Exposed: Virtualization & Cloud Computing…Feedback Please

January 30th, 2010 26 comments

Craig Balding, Rich Mogull and I are working on a book due out later this year.

It’s the latest in the McGraw-Hill “Hacking Exposed” series.  We’re focusing on virtualization and cloud computing security.

We have a very interesting set of topics to discuss but we’d like to crowd/cloud-source ideas from all of you.

The table of contents reads like this:

Part I: Virtualization & Cloud Computing:  An Overview
Case Study: Expand the Attack Surface: Enterprise Virtualization & Cloud Adoption
Chapter 1: Virtualization Defined
Chapter 2: Cloud Computing Defined

Part II: Smash the Virtualized Stack
Case Study: Own the Virtualized Enterprise
Chapter 3: Subvert the CPU & Chipsets
Chapter 4: Harass the Host, Hypervisor, Virtual Networking & Storage
Chapter 5: Victimize the Virtual Machine
Chapter 6: Conquer the Control Plane & APIs

Part III: Compromise the Cloud
Case Study: Own the Cloud for Fun and Profit
Chapter 7: Undermine the Infrastructure
Chapter 8: Manipulate the Metastructure
Chapter 9: Assault the Infostructure

Part IV: Appendices

We’ll have a book-specific site up shortly, but if you’d like to see certain things covered (technology, operational, organizational, etc.) please let us know in the comments below.

Also, we’d like to solicit a few critical folks to provide feedback on the first couple of chapters. Email me/comment if interested.

Thanks!

/Hoff, Craig and Rich.


MashSSL – An Excellent Idea You’ve Probably Never Heard Of…

January 30th, 2010 No comments

I’ve been meaning to write about MashSSL for a while, as it occurs to me that this is a particularly elegant solution to some very real challenges we have today.  Trusting the browser, the operator of said browser or a web service when using multi-party web applications is a fatal flaw.

We’re struggling with how to deal with authentication in distributed web and cloud applications. MashSSL seems as though it’s a candidate for the toolbox of solutions:

MashSSL allows web applications to mutually authenticate and establish a secure channel without having to trust the user or the browser. MashSSL is a Layer 7 security protocol running within HTTP in a RESTful fashion. It uses an innovation called “friend in the middle” to turn the proven SSL protocol into a multi-party protocol that inherits SSL’s security, efficiency and mature trust infrastructure.

Make sure you check out the sections on “Why and How,” especially the “MashSSL Overview” section which explains how it works.
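I won’t reproduce their protocol here, but the underlying trick (running the SSL handshake inside HTTP rather than beneath it) is easy to illustrate. This sketch uses Python’s in-memory TLS machinery; the post() transport is a hypothetical stand-in for the HTTP legs between the two web applications, and true mutual authentication would additionally load a client certificate via load_cert_chain().

    import ssl

    def handshake_over_http(post, hostname):
        """Drive a TLS handshake whose records travel as HTTP payloads.
        `post` is a hypothetical transport: it ships our pending handshake
        bytes to the peer web app and returns the peer's bytes in reply."""
        ctx = ssl.create_default_context()
        incoming, outgoing = ssl.MemoryBIO(), ssl.MemoryBIO()
        tls = ctx.wrap_bio(incoming, outgoing, server_hostname=hostname)
        while True:
            try:
                tls.do_handshake()
                post(outgoing.read())  # ship our final handshake flight
                return tls             # secure channel, browser never trusted
            except ssl.SSLWantReadError:
                # Flush our handshake records out over HTTP, feed the peer's
                # response back into the TLS engine, then try again.
                incoming.write(post(outgoing.read()))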

I should mention the code is also open source.

/Hoff

Cloud: Security Doesn’t Matter (Or, In Cloud, Nobody Can Hear You Scream)

January 25th, 2010 9 comments

In the Information Security community, many of us have long since come to the conclusion that we are caught in what I call my “Security Hamster Sine Wave Of Pain.”  Those of us who have been doing this a while recognize that InfoSec is a zero-sum game; it’s about staving off the inevitable and trying to ensure we can deal with the residual impact in the face of being “survivable” versus being “secure.”

While we can (and do) make incremental progress in certain areas, the collision of disruptive innovation and massive consumerization of technology with the slow churn of security vendor roadmaps, dissolving budgets, natural marketspace commoditization and the unfortunate velocity of attacker innovation yields the constant realization that we’re not motivated or incentivized to do the right thing or manage risk.

Instead, we’re poked in the side and haunted by the four letter word of our industry: compliance.

Compliance is often dismissed as irrelevant in the consumer space and associated instead with government or large enterprise, but as privacy continues to erode and breaches make the news, the fact that we’re putting more and more of our information — of all sorts — in the hands of others to manage is again beginning to stoke an upsurge in efforts to somehow measure and manage visibility against a standardized baseline of general, common sense and minimal efforts to guard against badness.

Ultimately, it doesn’t matter how “secure” Cloud providers suggest they are.  It doesn’t matter what breakthroughs in technology sprout up in the face of this new model of compute. The only measure that counts in the long run is how compliant you are.  That’s what will determine the success of Cloud.  Don’t believe me? Look at how the leading vendors in Cloud are responding today to their biggest (potential) customers — taking the “one size fits all” model of mass-market Cloud and beginning to chop it up and create one-offs in order to satisfy…compliance.

Why?  Because it’s easier to deal with the vagaries of trust, isolation and multi-tenant environments by eliminating multi-tenancy to increase trust and isolation. If an auditor/examiner doesn’t understand or cannot measure your compliance with those things he/she is tasked to evaluate you against, you’re sunk.

The only thing that will budge the needle on this issue is how agile those who craft the regulatory guidelines are, or how clearly you can demonstrate that your compensating controls mitigate risk where the service provider’s cannot. Given the nature and behavior of those involved in this space and where we are with putting our eggs in a vaporous basket, I wouldn’t hold my breath.  Movement in this area is glacial at best and in many cases out of touch with the realities of just how disruptive Cloud Computing is.  All it will take is one monumental cock-up due to a true Cloudtastrophe and the Cloud will hit the fan.

As I have oft suggested, the core issue we need to tackle in Cloud is trust, since the graceful surrender of such is at the heart of what Cloud requires.  Trust is composed of Security, Control, Service Levels and Compliance.  It’s relatively easy to establish where we are today with the first three, but the last one is MIA.  We’re just *now* seeing movement in the form of SIGs to deal with virtualization.  Cloud?

When the best you have is a SAS-70, it’s time to weep.  Conversely, wishing for more regulation will simply extend the cycle.

What can you do?  Simple. Help educate your auditors and examiners. Read the Cloud Security Alliance’s guidelines. Participate in making the Automated Audit, Assertion, Assessment, and Assurance API (A6) a success so we can at least gain back some visibility and transparency which helps demonstrate compliance, since that’s how we’re measured.  Ultimately, if you’re able, focus on risk assessment in helping to advise your constituent business customers on how to migrate to Cloud Computing safely.

There are TONS of things one can do in order to make up for the shortcomings of Cloud security today.  The problem is, most of them erode the benefits of Cloud: agility, flexibility, cost savings, and dynamism.  We need to make both the business and our auditors aware of these tradeoffs, because we’re stuck.  We need the regulators and examiners to keep pace with technology — as painful as that might be in the short term — to guarantee our success in the long term.

Manage compliance, don’t let it manage you because a Cloud is a terrible thing to waste.

/Hoff


Recording & Playback of WebEx A6 Working Group Kick-Off Call from 1/8/2010 Available

January 10th, 2010 No comments

If you’re interested in the great discussion and presentations we had during the kickoff call for the A6 (Automated Audit, Assertion, Assessment, and Assurance API) Working Group, there are two options to listen/view the WebEx recording:

Topic: A6 API Working Group – Kickoff Call-20100108 1704
Create time: 1/8/10 10:07 am
File size: 33.23MB
Duration: 1 hour 1 minute
Description: Streaming recording link:
https://ciscosales.webex.com/ciscosales/ldr.php?AT=pb&SP=MC&rID=41631852&rKey=178e8b04941e5672
Download recording link:
https://ciscosales.webex.com/ciscosales/lsr.php?AT=dw&SP=MC&rID=41631…

MAKE SURE YOU VIEW THE CHAT WINDOW << It contains some really excellent discussion points.

We had two great presentations from representatives of the OGF OCCI group and CSC’s Trusted Cloud Team.

I’ll be setting up regular calls shortly and a few people have reached out to me regarding helping form the core team to begin organizing the working group in earnest.

You can also follow along via the Google Group here.

/Hoff

In need of a cool logo for the group by the way… 😉

To Achieve True Cloud (X/Z)en, One Must Leverage Introspection

January 6th, 2010 No comments

Back in October 2008, I wrote a post detailing efforts around the Xen community to create a standard security introspection API (Xen.Org Launches Community Project To Bring VM Introspection to Xen):

The Xen Introspection Project is a community effort within Xen.org to leverage the existing research presented above with other work not yet public to create a standard API specification and methodology for virtual machine introspection.

That blog was focused on introspection for virtualization proper, but since many of the larger cloud providers utilize Xen virtualization as an underpinning of their service architecture, and since as an industry we’re suffering from a lack of visibility and deployable security capabilities, VM and VMM introspection is quite relevant to cloud computing.

I thought I’d double back and see where we are.

It looks as though there’s been quite a bit of recent activity from the folks at Georgia Tech (XenAccess Project) and the University of Alaska at Fairbanks (Virtual Introspection for Xen) referenced in my previous blog.  The vCloud API proffered via the DMTF seems to also leverage (at least some of) the VMsafe API capabilities present in VMware‘s vSphere virtualization platform.

While details are, for obvious reasons, sketchy, I am encouraged in speaking with representatives from a few cloud providers who are keenly interested in including these capabilities in their offerings.  Wouldn’t that be cool?

Adoption and inclusion of introspection capabilities will overcome some of the inherent security and visibility limitations we face in highly-virtualized multi-tenant environments due to networking constraints for integrating security functionality that I wrote about here.
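If you’re wondering what that buys us in practice, the sketch below shows the idea. To be clear, the introspect module and everything on it are hypothetical stand-ins of my own invention, not the XenAccess, VMsafe or Xen project APIs; the point is that inspection happens from outside the guest, where in-guest malware can’t reach it.

    # Illustrative only: "introspect" is a hypothetical module, not an
    # actual XenAccess/VMsafe/Xen API.
    import introspect

    # Attach from the privileged/security domain. No agent runs inside
    # the guest, so there is nothing in-guest for a rootkit to disable.
    vm = introspect.attach(domain="customer-web-01")

    # Walk the guest kernel's process list from the outside; malware
    # hiding processes within the guest can't lie to code it can't touch.
    for proc in vm.processes():
        if proc.name not in vm.baseline:
            print("unexpected process %s (pid %d)" % (proc.name, proc.pid))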

I plan a follow-on blog in more detail once I finish some interviews.

/Hoff


The Great Cloud Security Challenge: I Triple-Dog-Dare You…

December 27th, 2009 15 comments

I TRIPLE-DOG-DARE You!

There’s an awful lot of hyperbole being flung back and forth about the general state of security and Cloud-based services.

I’ve spent enough time highlighting both the practical and hypothetical (many of which actually have been realized) security issues created and exacerbated by Cloud up and down the stack, from IaaS to SaaS.

It seems, however, that there are a select few who ignore the issues brought to light and suggest that Cloud providers are at a state of maturity wherein they not only offer parity, but better security than the “average” IT shop.  What’s interesting is that while I agree that “Cloud Security is not insurmountable,” neither is non-Cloud security — but it sure as hell hasn’t progressed much in 40 years.

What’s missing is context.  What’s missing are the very risk assessment methodologies they reference in their tales of fancy.  What’s missing is that in the cases where they suggest security is not an obstacle to Cloud, there’s usually not much in the way of sensitive data or applications involved.

Ignore the U.S. CIO’s words of wisdom when he discusses the reality of security and moving to the Cloud. Ignore the CIOs and CISOs of the Fortune 500. Ignore everything in my Cloudifornication presentation and recent issues related to such. Ignore pragmatism.

Take my challenge instead. Here’s my dare (a scripted sketch of the setup follows the list):

  1. I’ll pay for an AWS EC2 instance for a month
  2. You choose the OS and LAMP stack components you’ll deploy in this AMI
  3. You harden it however you see fit, but ensure the web server can be reached via port 80 from the Internet*
  4. You put a .txt file somewhere on a readable filesystem (mounted) or create a row in a DB accessible via the web server
  5. This .txt file or row in the DB contains the following: Your name, (billing) address, social security number, credit card number, mother’s maiden name and your bank’s ABA routing number and checking account number
  6. I’ll invite some people I know to test your hypothesis for you
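For the impatient, steps 2 through 5 boil down to something like this, sketched with today’s boto3 SDK (the AMI and security group IDs are placeholders you’d swap for your hardened LAMP build and a group permitting tcp/80 inbound):

    import boto3

    ec2 = boto3.resource("ec2")

    # Steps 4 and 5, done at boot via user-data: drop the bait file in
    # the web root (path assumes a stock Apache layout; adjust to taste).
    bait = ("#!/bin/bash\n"
            "echo 'name, billing address, SSN, card number, maiden name, "
            "ABA routing + account number' > /var/www/html/jackpot.txt\n")

    instance, = ec2.create_instances(
        ImageId="ami-00000000",            # placeholder: your hardened LAMP AMI
        InstanceType="t2.micro",
        MinCount=1,
        MaxCount=1,
        SecurityGroupIds=["sg-00000000"],  # placeholder: must allow tcp/80 in
        UserData=bait,
    )
    print("Come and get it:", instance.id)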

Let’s see if they want to put their money (literally) where their mouths are.  After all, they claim that Cloud providers will be able to secure their applications and data.

I triple-dog-dare you.

The only diatribes we ought to be spared are those that don’t themselves offer the balance of reality, responsibility and maturity they accuse others of lacking.

It’s not that Cloud deployments *can’t* be at least as secure as non-Cloud deployments with appropriate adjustments.  My issue with these wanderlust expressions is the implication that today Cloud providers not only achieve parity but exceed it — and that they have some capability or technology the rest of us do not — which, given the challenges we have, is simply not credible.

I’m all for evangelism, but generalizing about the state of security (in Cloud or otherwise) is a complete waste of electrons.  Yes, Cloud brings us opportunity and acts as a forcing function and we *will* see improvements, but NOT because we put blinders on and pretend that the delivery model (Cloud) will fix 40 years of legacy computing challenges — especially since Cloud is built upon most of them in the first place!

See here.

/Hoff

* Feel free to use SSL if it makes you feel any better.