
Archive for the ‘Cloud Security’ Category

Incomplete Thought: The Curious Case Of the Carnival Cloud

June 1st, 2011
[Image: the former Thunderbolt roller coaster, Coney Island (via Wikipedia)]

As a follow-on example to my blog titled The Curious Case of the MBO Cloud, in which I detailed the mad rush of companies to deploy something — anything — that looks like a “cloud” to achieve an artificial benchmark of cloudiness, I present you the oft-experienced “Carnival Cloud.”

What’s a Carnival Cloud?

Have you been to Disneyland?  Perhaps a Six Flags theme park?  Perhaps even Coney Island?  If so,  you’ve likely marveled at the engineering and physics associated with some of the rides.  Seeing a train of interconnected people haulers hurtling around a steel (or wooden) substructure of vomit-inducing gravity abuse is a marvelous thing.

I saw a documentary on the construction of some of these modern wonders.  Amazing.  Well modeled, precision engineered, expertly designed, with a remarkably good track record of safety.  Yes, sometimes bad things happen, but rarely do you see a roller coaster at a theme park decommissioned due to inherent design flaws.

I do, however, sometimes reflect on these rides and the notion that one willingly slaps down some money, straps in, gets rocketed around at 60+ mph pulling G’s rivaling those of some fighter aircraft, and realizes there’s really no Plan B should something go ass over teakettle.

One simply has to trust in the quality of the design, the skill of the engineering, the ongoing maintenance efforts and the 16-year-old who pushes the go-button after your crotch-squashing safety bar is slammed into your lap and morphs your unmentionables.

…keep your hands inside the ride at all times…

We trust that we’ll come out the other side, giving little pause to the fact that such hopes rest on these rides being heavily automated, computer controlled and instrumented to detect failure or flaw.

Now, have you been to a local fair or carnival?

Contrast your experience at Six Flags — often with the same level of risk — with a local carnival ride that’s unloaded from the back of a 1972 Chevy Stepside, bolted together (and repeatedly unbolted) with some hand tools and the sticky residue of lubricating oil and cotton candy.

I don’t know about you, but putting my life in the hands of a carnie who does double duty as the bearded lady doesn’t inspire confidence, especially when the Tilt-A-Whirl has the potential to demonstrate the unfortunate ramifications of cascading failure domains in physics that run well beyond the fact that some of these operators can’t count past the 4 missing fingers they have on their right hand.

This isn’t saying that carnivals aren’t fun.  This isn’t implying that toothliness ain’t close to Godliness (especially when the afterlife is one missing bolt away.) This isn’t saying you shouldn’t go, thereby missing the nacho cheese-dipped pretzel logs and colon-clogging funnel cakes.

Nay, the point is to simply suggest that there’s a lot to be said for solid infrastructure — regardless of where it’s sourced — that’s performant, safe, operated and maintained in a ruthlessly diligent manner, instrumented well and is subject to constant rigor and inspection.

Relating that to cloud, your experience — depending upon your appetite for risk (and nacho cheese) — will dictate whether your E-ticket ride ends well or as a footnote on MSNBC.

Mmmm. Aebleskivers and giant stuffed armadillos.  Anyone got change for a $20?

/Hoff

Enhanced by Zemanta

The State Of the Art In Cloud Security…

May 31st, 2011

…is still firewalls and SSL.

Cloud: The “revenge of (overlay) VPN and PKI”
/Sad Panda

Quick Ping: VMware’s Horizon App Manager – A Big Bet That Will Pay Off…

May 17th, 2011

It is so tempting to write about VMware‘s overarching strategy of enterprise and cloud domination, but this blog entry really speaks to an important foundational element in their stack of offerings which was released today: Horizon App Manager.

Check out @Scobleizer’s interview with Noel Wasmer (Dir. of Product Management for VMware) on the ins-and-outs of HAM.

Frankly, federated identity and application entitlement is not new.

Connecting and extending identities from inside the enterprise using native directory services to external applications (SaaS or otherwise) is also not new.

What’s “new” with VMware’s Horizon App Manager is that we see the convergence and well-sorted integration of a service-driven federated identity capability that ties together enterprise “web” and “cloud” (*cough*)-based SaaS applications with multi-platform device mobility powered by the underpinnings of freshly-architected virtualization and cloud architecture.  All delivered as a service (SaaS) by VMware for $30 per user/per year.

[Update: @reillyusa and I were tweeting back and forth about the inside -> out versus outside -> in integration capabilities of HAM.  The SAML Assertions/OAuth integration seems to suggest this is possible.  Moreover, as I alluded to above, solutions exist today which integrate classical VPN capabilities with SaaS offers that provide SAML assertions and SaaS identity proxying (access control) to well-known applications like SalesForce.  Here’s one, for example.  I simply don’t have any hands-on experience with HAM or any deeper knowledge than what’s publicly available to comment further — hence the “Quick Ping.”]
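For readers unfamiliar with how that inside -> out federation works mechanically, here’s a minimal, self-contained sketch of the assert-and-verify handshake an identity proxy performs. It uses an HMAC over JSON claims as a stand-in for real SAML’s XML-DSig signatures; the key and audience names are illustrative, not anything from HAM’s actual implementation:

```python
import hashlib
import hmac
import json
import time

# Shared secret stands in for the IdP's signing key pair; a real SAML
# deployment uses XML-DSig with the IdP's X.509 certificate instead.
IDP_KEY = b"demo-idp-signing-key"

def issue_assertion(subject: str, audience: str, ttl: int = 300) -> dict:
    """IdP side: assert that `subject` authenticated, for one `audience`."""
    claims = {"sub": subject, "aud": audience, "exp": int(time.time()) + ttl}
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(IDP_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify_assertion(assertion: dict, audience: str) -> bool:
    """SP side: check the signature, audience restriction, and expiry."""
    payload = json.dumps(assertion["claims"], sort_keys=True).encode()
    expected = hmac.new(IDP_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, assertion["sig"]):
        return False
    c = assertion["claims"]
    return c["aud"] == audience and c["exp"] > time.time()

a = issue_assertion("alice@example.com", "salesforce")
print(verify_assertion(a, "salesforce"))   # True
print(verify_assertion(a, "other-app"))    # False: audience mismatch
```

The point of the audience restriction is exactly the SaaS access-control proxying described above: an assertion minted for one application can’t be replayed against another.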

Horizon App Manager really is a foundational component that will tie together the various components of VMware’s stack of offers for seamless operation, including such products/services as Zimbra, Mozy, SlideRocket, CloudFoundry, View, etc.  I predict even more interesting integration potential with components such as elements of the vShield suite — providing identity-enabled security policies and entitlement at the edge to provision services in vCloud Director deployments, for example (esp. now that they’ve acquired NeoAccel for SSL VPN integration with Edge.)

“Securely extending the enterprise to the Cloud” (and vice versa) is a theme we’ll hear more and more from VMware.  Whether it’s thin clients, virtual machines, SaaS applications, PaaS capabilities, etc., fundamentally what we all know is that for the enterprise to be able to assert control to enable “security” and compliance, we need entitlement.

I think VMware — as a trusted component in most enterprises — has the traction to encourage the growth of their supported applications in their catalog ecosystem which will in turn make the enterprise excited about using it.

This may not seem like it’s huge — especially to vendors in the IAM space or even Microsoft — but given the footprint VMware has in the enterprise and where they want to go in the cloud, it’s going to be big.

/Hoff

(P.S. It *is* interesting to note that this is a SaaS offer with an enterprise virtual appliance connector.  It’s rumored this came from the TriCipher acquisition.  I’ll leave that little nugget as a tickle…)

(P.P.S. You know what I want? I want a consumer version of this service so I can use it in conjunction with or in lieu of 1Password. Please.  Don’t need AD integration, clearly)



More On Cloud and Hardware Root Of Trust: Trusting Cloud Services with Intel® TXT

May 6th, 2011

Whilst at CloudConnect I filmed some comments with Intel, RSA, Terremark and HyTrust on Intel’s Trusted Execution Technology (TXT) and its implications in the Cloud Computing space specific to “trusted cloud” and using the underlying TPM present in many of today’s compute platforms.

The 30-minute session got cut down into more consumable sound bites, but combined with the other speakers, it does a good job setting the stage for more discussions regarding this important technology.

I’ve written previously on cloud and TXT with respect to measured launch environments and work done by RSA, Intel and VMware: More On High Assurance (via TPM) Cloud Environments. Hopefully we’ll see more adoption soon.
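As a quick refresher on the mechanics behind those measured launch environments: TXT/TPM attestation hinges on the one-way “extend” operation applied to Platform Configuration Registers (PCRs), where each new value is a hash of the old value concatenated with the new measurement. A toy sketch (real PCR banks and event-log layouts differ; the boot-stage names here are purely illustrative):

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new PCR = H(old PCR || H(measurement))."""
    digest = hashlib.sha256(measurement).digest()
    return hashlib.sha256(pcr + digest).digest()

# PCRs start at all zeroes at reset; each boot stage measures the next
# component before handing off control (firmware -> bootloader -> hypervisor).
pcr = bytes(32)
for stage in [b"firmware", b"bootloader", b"hypervisor"]:
    pcr = pcr_extend(pcr, stage)

# Because the chain is one-way and order-sensitive, a remote verifier
# (e.g. an attestation service) can compare the final value against a
# whitelist of known-good measurements -- the basis of "trusted cloud."
print(pcr.hex())
```

Nothing can be “un-extended,” which is why a single tampered boot component changes the final value and breaks attestation.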


Hacking The Cloud – Popular Science!?

May 6th, 2011

OK, that’s not really a question; it’s a bit of a giddy, self-referential, fanboi-ish announcement.

In the April 2011 “How It Works” issue of Popular Science, a magazine I’ve loved since I was a kid, Marie Pacella wrote a great story on security and cloud computing.

I was thrilled to be included in several sections and for once I’m not bashful about tooting my own horn — this is so cool to me personally!

The sections that went through editing got cut down quite a bit, but originally the drafts included some heavier details on the mechanics and some more meaty sections on technical elements (and theoretical stuff.)  I think Marie and the editors did a great job.  The graphics were awesome, also.

At any rate, if you subscribe to the magazine or better yet have the iPad application (which is awesome,) you can check it out.

Don’t have an iPad or the magazine?  Read the story here.

 


On the CA/Ponemon Security of Cloud Computing Providers Study…

April 29th, 2011

CA recently sponsored the second in a series of Ponemon Institute cloud computing security surveys.

The first, released in May 2010, was focused on responses from practitioners: “Security of Cloud Computing Users – A Study of Practitioners in the US & Europe.”

The latest titled “Security of Cloud Computing Providers Study (pdf),” released this week, examines “cloud computing providers'” perspectives on the same.  You can find the intro here.

While the study breaks down the survey in detail in Appendix 1, I would kill to see the respondent list so I could use the responses from some of these “cloud providers” to quickly make assessments for my short list of those not to engage with.

I suppose it’s not hard to believe that security is not a primary concern, but given all the hype surrounding claims of “cloud is more secure than the enterprise,” it’s rather shocking to think that this sort of behavior is reflective of cloud providers.

Let’s see why.

This survey qualifies those surveyed as such:

We surveyed 103 cloud service providers in the US and 24 in six European countries for a total of 127 separate providers. Respondents from cloud provider organizations say SaaS (55 percent) is the most frequently offered cloud service, followed by IaaS (34 percent) and PaaS (11 percent). Sixty-five percent of cloud providers in this study deploy their IT resources in the public cloud environment, 18 percent deploy in the private cloud and 18 percent are hybrid.

…and offers these most “salient” findings:

  • The majority of cloud computing providers surveyed do not believe their organization views the security of their cloud services as a competitive advantage. Further, they do not consider cloud computing security as one of their most important responsibilities and do not believe their products or services substantially protect and secure the confidential or sensitive information of their customers.
  • The majority of cloud providers believe it is their customer’s responsibility to secure the cloud and not their responsibility. They also say their systems and applications are not always evaluated for security threats prior to deployment to customers.
  • Buyer beware – on average providers of cloud computing technologies allocate 10 percent or less of their operational resources to security and most do not have confidence that customers’ security requirements are being met.
  • Cloud providers in our study say the primary reasons why customers purchase cloud resources are lower cost and faster deployment of applications. In contrast, improved security or compliance with regulations is viewed as an unlikely reason for choosing cloud services. The majority of cloud providers in our study admit they do not have dedicated security personnel to oversee the security of cloud applications, infrastructure or platforms.

  • Providers of private cloud resources appear to attach more importance and have a higher level of confidence in their organization’s ability to meet security objectives than providers of public and hybrid cloud solutions.

  • While security as a “true” service from the cloud is rarely offered to customers today, about one-third of the cloud providers in our study are considering such solutions as a new source of revenue sometime in the next two years.

Ultimately, CA summarized the findings as such:

“The focus on reduced cost and faster deployment may be sufficient for cloud providers now, but as organizations reach the point where increasingly sensitive data and applications are all that remains to migrate to the cloud, they will quickly reach an impasse,” said Mike Denning, general manager, Security, CA Technologies. “If the risk of breach outweighs potential cost savings and agility, we may reach a point of “cloud stall” where cloud adoption slows or stops until organizations believe cloud security is as good as or better than enterprise security.”

I have so much I’d like to say with respect to these summary findings and the details within the reports, but much of it I already have.  I don’t think these findings are reflective of the larger cloud providers I interact with, which is another reason I would love to see who these “cloud providers” were beyond the breakdown of their service offerings that was presented.

In the meantime, I’d like to refer you to these posts I wrote for reflection on this very topic:

/Hoff


OpenFlow & SDN – Looking forward to SDNS: Software Defined Network Security

April 8th, 2011

As facetious as the introductory premise of my Commode Computing presentation is, the main message — the automation of security capabilities up and down the stack — really is something I’m passionate about.

Ultimately, I made the point that “security” needs to be as programmatic/programmable, agile, scalable and flexible as the workloads (and stacks) it is designed to protect. “Security” in this context extends well beyond the network, but the network provides such a convenient way of defining templated containers against which we can construct and enforce policies across a wide variety of deployment and delivery models.

So as I watch OpenFlow (and Software Defined Networking) mature, I’m really, really excited to recognize the potential for a slew of innovative ways we can leverage and extend this approach to networking [monitoring and enforcement] in order to achieve greater visibility, scale, agility, performance, efficacy and reduced costs associated with security.  The more programmatic and instrumented the network becomes, the more capable our security options will become also.

I’m busy reading many of the research activities associated with OpenFlow security and digesting where vendors are in their approaches to leveraging this technology for security.  It may be just my perspective, but the landscape is a little sparse today — not disappointingly so — with a huge greenfield opportunity for really innovative stuff when paired with advancements we’re seeing in virtualization and cloud computing.

I’ll relate more of my thoughts and discoveries as time goes on.  If you’ve got some cool ideas/concepts/products in this area (I don’t care who you work for,) post ’em here in the comments, please!

In the meantime, check out: www.openflow.org to get your feet wet.

/Hoff

Reminders to self to perform more research on (I think I’m going to do my next presentation series on this):

  • AAA for messages between OpenFlow Switch and Controllers
  • Flood protection for controllers
  • Spoofing/MITM between switch/controllers (specifically SSL/TLS)
  • Flow-through (ha!)/support of OpenFlow in virtual switches (see 1000v and Open vSwitch)
  • (per above) Integration with VN-Tag (like) flow-VM (workload) tagging
  • Integration of Netflow data from OpenFlow flow tables
  • State/flow-table convergence for security decisions with/without cut-through given traffic steering
  • Service insertion overlays for security control planes
  • Integration with 802.1x (and protocol extensions such as TrustSec)
  • Telemetry integration with NAC and vNAC
  • Anti-DDoS implications
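To make the “programmatic network security” idea above concrete, here’s a toy flow-table sketch. This is not the OpenFlow wire protocol or any controller’s real API — just an illustration of how controller-pushed, priority-ordered match/action rules could enforce a segmentation policy (all addresses, ports and field names are made up):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Match:
    """A wildcardable match over a tiny subset of OpenFlow's match fields."""
    src_ip: Optional[str] = None
    dst_ip: Optional[str] = None
    dst_port: Optional[int] = None

    def hits(self, pkt: dict) -> bool:
        # None acts as a wildcard, mirroring OpenFlow's wildcarded fields.
        return all(
            getattr(self, field) in (None, pkt[field])
            for field in ("src_ip", "dst_ip", "dst_port")
        )

@dataclass(frozen=True)
class FlowRule:
    priority: int
    match: Match
    action: str  # "forward" or "drop"

# Rules a controller might push to segment a database tier: only the app
# server may reach the DB on 5432; all other traffic to the DB is dropped.
table = sorted(
    [
        FlowRule(200, Match(src_ip="10.0.1.5", dst_ip="10.0.2.10", dst_port=5432), "forward"),
        FlowRule(100, Match(dst_ip="10.0.2.10"), "drop"),
        FlowRule(0, Match(), "forward"),  # table-miss entry: default allow
    ],
    key=lambda rule: -rule.priority,
)

def apply_policy(pkt: dict) -> str:
    """Highest-priority matching rule wins, as in an OpenFlow flow table."""
    return next(rule.action for rule in table if rule.match.hits(pkt))

print(apply_policy({"src_ip": "10.0.1.5", "dst_ip": "10.0.2.10", "dst_port": 5432}))  # forward
print(apply_policy({"src_ip": "10.0.9.9", "dst_ip": "10.0.2.10", "dst_port": 5432}))  # drop
```

The security-relevant part is that the rules are data, not box config — which is exactly what makes the controller channel (AAA, spoofing, flooding, per the list above) such an attractive attack surface.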

Budget Icebergs, Fiscal Anchors and a Boat (Fed)RAMP to Nowhere?

April 4th, 2011
[Image: United States Capitol in daylight (via Wikipedia)]

It’s often said that in order to make money, you have to spend money; invest in order to succeed.

However, when faced with the realities of budget shortfalls and unsavory economic pressure, it seems the easiest thing to do is simply hunt around for top-line blips on the budget radar and kill them, regardless of the long-term implications that ceasing investment in efficiency and transparency programs has for reducing bottom-line pain.

To wit, Alex Howard reports “Congress weighs deep cuts to funding for federal open government data platforms”:

Data.gov, IT.USASpending.gov, and five other websites that offer platforms for open government transparency are facing imminent closure. A comprehensive report filed by Jason Miller, executive editor of Federal News Radio, confirmed that the United States Office of Management and Budget is planning to take open government websites offline over the next four months because of a 94% reduction in federal government funding in the Congressional budget…

Cutting these funds would also shut down the Fedspace federal social network and, notably, the FedRAMP cloud computing cybersecurity programs. Unsurprisingly, open government advocates in the Sunlight Foundation and the larger community have strongly opposed these cuts.

Did you catch the last paragraph?  They’re kidding, right?

After demonstrable empirical data showing that Vivek Kundra and his team’s plans for streamlining government IT are already saving money (and will continue to do so,) this is what we get?  Slash and burn?  I attempted to search for the investment made thus far in FedRAMP using the IT Dashboard, but couldn’t execute an appropriate search.  Anyone know the answer?

Read more on the topic by Daniel Schuman: “Budget Technopocalypse: Proposed Congressional Budgets Slash Funding for Data Transparency.”

Now, I’m not particularly fond of how the initial FedRAMP drafts turned out, but I’m optimistic that the program will evolve, will ultimately make a difference and lead to more assured, cost-efficient and low-friction adoption of Cloud Computing. We need FedRAMP to succeed and we need continued investment in it to do so.  Let’s not throw the baby out with the Cloud water.

We literally cannot afford for FedRAMP (or these other transparency programs) to be cut — these are the very programs that will lead to long term efficiency and fiscally-responsible programs across the U.S. Government.  They will ultimately make us more competitive.

Vote with your clicks.  Support the Sunlight Foundation.

/Hoff


FYI: New NIST Cloud Computing Reference Architecture

March 31st, 2011

In case you weren’t aware, NIST has a WIKI for collaboration on Cloud Computing.  You can find it here.

They also have a draft of their v1.0 Cloud Computing Reference Architecture, which builds upon the prior definitional work we’ve seen before and has pretty graphics.  You can find that here (dated 3/30/2011).

/Hoff


Dedicated AWS VPC Compute Instances – Strategically Defensive or Offensive?

March 28th, 2011

Chugging right along on the feature enhancement locomotive, following the extension of networking capabilities of their Virtual Private Cloud (VPC) offerings last week (see: AWS’ New Networking Capabilities – Sucking Less 😉 ,) Amazon Web Services today announced the availability of dedicated (both on-demand and reserved) compute instances within a VPC:

Dedicated Instances are Amazon EC2 instances launched within your Amazon Virtual Private Cloud (Amazon VPC) that run on hardware dedicated to a single customer. Dedicated Instances let you take full advantage of the benefits of Amazon VPC and the AWS cloud – on-demand elastic provisioning, pay only for what you use, and a private, isolated virtual network, all while ensuring that your Amazon EC2 compute instances will be isolated at the hardware level.

That’s interesting, isn’t it?  I remember writing the post “Calling All Private Cloud Haters: Amazon Just Peed On Your Fire Hydrant…” and chuckling when AWS announced VPC back in 2009, in which I suggested that VPC:

  • Legitimized Private Cloud as a reasonable, needed, and prudent step toward Cloud adoption for enterprises,
  • Substantiated the value proposition of Private Cloud as a way of removing a barrier to Cloud entry for enterprises, and
  • Validated the ultimate vision toward hybrid Clouds and Inter-Cloud

That got some hackles up.

So this morning, people immediately started squawking on Twitter about how this looked remarkably like (or didn’t) private cloud or dedicated hosting.  This is why, about two years ago, I generated this taxonomy that pointed out the gray area of “private cloud” — the notion of who manages it, who owns the infrastructure, where it’s located and who it’s consumed by:

I did a lot of this work well before I utilized it in the original Cloud Security Alliance Guidance architecture chapter I wrote, but that experience refined what I meant a little more clearly.  This version was produced PRIOR to the NIST guidance, which is why you don’t see mention of “community cloud”:

  1. Private
    Private Clouds are provided by an organization or their designated service provider and offer a single-tenant (dedicated) operating environment with all the benefits and functionality of elasticity* and the accountability/utility model of Cloud.  The physical infrastructure may be owned by and/or physically located in the organization’s datacenters (on-premise) or that of a designated service provider (off-premise) with an extension of management and security control planes controlled by the organization or designated service provider respectively.
    The consumers of the service are considered “trusted.”  Trusted consumers of service are those who are considered part of an organization’s legal/contractual umbrella including employees, contractors, & business partners.  Untrusted consumers are those that may be authorized to consume some/all services but are not logical extensions of the organization.
  2. Public
    Public Clouds are provided by a designated service provider and may offer either a single-tenant (dedicated) or multi-tenant (shared) operating environment with all the benefits and functionality of elasticity and the accountability/utility model of Cloud.
    The physical infrastructure is generally owned by and managed by the designated service provider and located within the provider’s datacenters (off-premise.)  Consumers of Public Cloud services are considered to be untrusted.
  3. Managed
    Managed Clouds are provided by a designated service provider and may offer either a single-tenant (dedicated) or multi-tenant (shared) operating environment with all the benefits and functionality of elasticity and the accountability/utility model of Cloud.  The physical infrastructure is owned by and/or physically located in the organization’s datacenters with an extension of management and security control planes controlled by the designated service provider.  Consumers of Managed Clouds may be trusted or untrusted.
  4. Hybrid
    Hybrid Clouds are a combination of public and private cloud offerings that allow for transitive information exchange and possibly application compatibility and portability across disparate Cloud service offerings and providers utilizing standard or proprietary methodologies regardless of ownership or location.  This model provides for an extension of management and security control planes.  Consumers of Hybrid Clouds may be trusted or untrusted.

* Note: the benefits of elasticity don’t imply massive scale, which in many cases is not a relevant requirement for an enterprise.  Also, ultimately I deprecated the “managed” designation because it was a variation on a theme, but you can tell that ultimately the distinction I was going for between private and hybrid is the notion of OR versus AND designations in the various criteria.
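The taxonomy above really boils down to a handful of axes — who manages, who owns, where it lives, and who consumes. As a rough illustration (my simplification for this post, not a normative mapping), one could encode it as:

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    managed_by_org: bool      # the org itself vs. a designated service provider
    owned_by_org: bool        # infrastructure ownership
    on_premise: bool          # physical location of the infrastructure
    trusted_consumers: bool   # inside the org's legal/contractual umbrella

def classify(d: Deployment) -> str:
    """Rough mapping onto the four models above; the boundaries follow
    the 'OR' criteria described in the text, simplified for illustration."""
    if not d.trusted_consumers and not d.owned_by_org and not d.on_premise:
        return "public"
    if d.owned_by_org and d.on_premise and not d.managed_by_org:
        return "managed"
    if d.trusted_consumers:
        return "private"
    return "hybrid"

print(classify(Deployment(True, True, True, True)))      # private
print(classify(Deployment(False, False, False, False)))  # public
print(classify(Deployment(False, True, True, False)))    # managed
```

The gray area the taxonomy was built to expose shows up immediately: flip any one axis and the label can change, which is exactly why "is it private cloud?" arguments generate more heat than light.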

AWS’ dedicated VPC options now give you another ‘OR’ option when thinking about who manages the infrastructure your workloads run on, who owns it, and more importantly, where it’s located.  More specifically, the notion of ‘virtual’ cloud becomes less and less important as the hybrid nature of interconnectedness of resources starts to make more sense — regardless of whether you use overlay solutions like CloudSwitch, “integrated” solutions from vendors like VMware or Citrix, or offerings from AWS.  In the long term, the answer will probably be “D) all of the above.”

Providing dedicated compute atop a hypervisor for which you are the only tenant will be attractive to many enterprises who have trouble coming to terms with sharing memory/CPU resources with other customers.  This dedicated functionality costs a pretty penny – $87,600 a year – and, as Simon Wardley pointed out, it has an interesting effect inasmuch as it puts a price tag on isolation.
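For what it’s worth, that $87,600 figure appears to correspond to a flat hourly charge billed around the clock (assuming the oft-cited $10/hour region-wide dedicated fee; the breakdown below is my back-of-the-envelope reading, not AWS’ published pricing sheet):

```python
# The dedicated-tenancy figure cited above works out to a flat
# $10/hour fee, accrued every hour of the year:
hourly_fee = 10            # USD/hour (assumed region-wide dedicated charge)
hours_per_year = 24 * 365  # 8,760 hours
print(hourly_fee * hours_per_year)  # 87600
```

Framing it as dollars-per-hour-of-isolation is what makes Wardley’s observation sting: the premium is a standing cost whether or not the isolated capacity is doing useful work.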

Here’s the interesting thing that goes to the title of this post:

Is this a capability that AWS really expects to be utilized as they further blur the lines between public, private and hybrid cloud models OR is it a defensive strategy hinged on the exorbitant costs to further push enterprises into shared compute and overlay security models?

Specifically, one wonders if this is a strategically defensive or offensive move?

A single tenant atop a hypervisor atop dedicated hardware — that will go a long way toward addressing one concern: noisy (and nosy) neighbors.

Now, keep in mind that if an enterprise’s threat modeling and risk management frameworks are reasonably rational, they’ll realize that this is compute/memory isolation only.  Clearly the network and storage infrastructure is still shared, but the “state of the art” in today’s cloud of overlay encryption (file systems and SSL/IPSec VPNs) will likely address those issues.  Shared underlying cloud management/provisioning/orchestration is still an interesting area of concern.

So this will be an interesting play for AWS. Whether they’re using this to take a hammer to the existing private cloud models or just to add another dimension in service offering (logical, either way), I think in many cases enterprises will pay this tax to further satisfy compliance requirements by removing the compute multi-tenancy boogeyman.

/Hoff
