
Archive for April, 2010

Dear SaaS Vendors: If Cloud Is The Way Forward & Companies Shouldn’t Spend $ On Privately-Operated Infrastructure, When Are You Moving Yours To Amazon Web Services?

April 30th, 2010

We’re told repeatedly by Software as a Service (SaaS)* vendors that infrastructure is irrelevant, that CapEx spending is for fools, and that Cloud Computing has fundamentally changed the way we will consume computing resources forever.

Why is it then that many of the largest SaaS providers on the planet (including firms like Salesforce.com, Twitter, Facebook, etc.) continue to build their software and choose to run it in their own datacenters on their own infrastructure?  In fact, many of them are on a tear involving multi-hundred million dollar (read: infrastructure) private datacenter build-outs.

I mean, SaaS is all about the software and service delivery, right?  IaaS/PaaS is the perfect vehicle for the delivery of scalable software, right?  So why do you continue to try to convince *us* to move our software to you, and yet *you* don’t/won’t/can’t move your software to someone else like AWS?

Hypocricloud: SaaS firms telling us we’re backwards for investing in infrastructure when they don’t eat the dog food they’re dispensing (AKA we’ll build private clouds and operate them, but tell you they’re a bad idea, in order to provide public cloud offerings to you…)

Quid pro quo, Agent Starling.

/Hoff

* I originally addressed this to Salesforce.com via Twitter in response to Peter Coffee’s blog here but repurposed the title to apply to SaaS vendors in general.


You Can’t Secure The Cloud…

April 30th, 2010

That’s right. You can’t secure “The Cloud” and the real shocker is that you don’t need to.

You can and should, however, secure your assets and the elements within your control that are delivered by cloud services and cloud service providers. That assumes, of course, that the delivery/deployment model exposes interfaces that let you do so, and that you’ve appropriately assessed them against your requirements and appetite for risk.

That doesn’t mean it’s easy, cheap or agile, and lest we forget, just because you can “secure” your assets does not mean you’ll achieve “compliance” with those mandates against which you might be measured.

Even if the abstraction of cloud (and/or virtualization) means your investments shift primarily toward software-based solutions, and your processes and procedures must adjust to the operational impact, you can generally effect compensating controls (preventative and/or detective) that give you security on par with what you might deploy today in a non-Cloud offering.

Yes, it’s true. It’s absolutely possible to engineer solutions across most cloud services today that meet or exceed the security provided within the walled gardens of your enterprise.

The realities of that statement come crashing down, however, when people confuse possibility with the capability to execute whilst not disrupting the business and not requiring wholesale re-architecture of applications, security, privacy, operations, compliance, economics, organization, culture and governance.

Not all of that is bad.  In fact, most of it is long overdue.

I think what is surprising is how many people (or at least vendors) simply expect the “platform” or service providers to do all of this for them across the entire portfolio of services in an enterprise.  In my estimation that will never happen, at least not if one expects anything more than commodity-based capabilities at a cheap price while simultaneously being “secure.”

Vendors conflate the various value propositions of cloud (agility, low cost, scalability, security) and suggest you can achieve all four simultaneously and in equal proportions.  This is the fallacy of Cloud Computing.  There are trade-offs to be found with every model and Cloud is no different.

If we’ve learned anything from enterprise modernization over the last twenty years, it’s that nothing comes for free — and that even when it appears to, there’s always a tax to pay on the back-end of the delivery cycle.  Cloud computing is a series of compromises; it’s all about gracefully losing control over certain elements of the operational constructs of the computing experience. That’s not a bad thing, but it’s a painful process for many.

I really enjoy the forcing function of Cloud Computing; it makes us re-evaluate and sharpen our focus on providing service — at least it’s supposed to.  I look forward to using Cloud Computing as a lever to continue to help motivate industry, providers and consumers to begin to fix the material defects that plague IT and move the ball forward.

This means not worrying about securing the cloud, but rather understanding what you should do to secure your assets regardless of where they call home.

/Hoff


Introducing The HacKid Conference – Hacking, Networking, Security, Self-Defense, Gaming & Technology for Kids & Their Parents

April 26th, 2010

This is mostly a cross-post from the official HacKid.org website, but I wanted to drive as many eyeballs to it as possible.

The gist of the idea for HacKid (sounds like “hacked,” get it?) came about when I took my three daughters aged 6, 9 and 14 along with me to the Source Security conference in Boston.

It was fantastic to have them engage with my friends, colleagues and audience members as well as ask all sorts of interesting questions regarding the conference.

It was especially gratifying to have them in the audience when I spoke twice. There were times the iPad I gave them was more interesting, however. ;)

The idea really revolves around providing an interactive, hands-on experience for kids and their parents which includes things like:

  • Low-impact martial arts/self-defense training
  • Online safety (kids and parents!)
  • How to deal with cyberbullies
  • Gaming competitions
  • Introduction to Programming
  • Basic to advanced network/application security
  • Hacking hardware and software for fun
  • Build a netbook
  • Make a podcast/vodcast
  • Lockpicking
  • Interactive robot building (Lego Mindstorms?)
  • Organic snacks and lunches
  • Website design/introduction to blogging
  • Meet law enforcement
  • Meet *real* security researchers ;)

We’re just getting started, but the enthusiasm and offers from volunteers and sponsors have been overwhelming!

If you have additional ideas for cool things to do, let us know via @HacKidCon (Twitter) or better yet, PLEASE go to the Wiki and read about how the community is helping to make HacKid a reality and contribute there!

Thanks,

/Hoff

Categories: HacKid, Security Conferences

The Four Horsemen Of the Virtualization (and Cloud) Security Apocalypse…

April 25th, 2010

I just stumbled upon this YouTube video interview (link here, embedded below) that I did right after my talk at Blackhat 2008, titled “The 4 Horsemen of the Virtualization Security Apocalypse (PDF).” [There's a better narrative explaining the 4 Horsemen here.]

I found it interesting because while the material was rather “new” back then, if you apply s/virtualization/cloud/, especially from the perspective of heavily virtualized or cloud computing environments, it’s even more relevant today!  Virtualization and the abstraction it brings to network architecture, design and security make for interesting challenges.  Not much has changed in two years, sadly.

We need better networking, security and governance capabilities! ;)

Same as it ever was.

/Hoff


Incomplete Thought: “The Cloud in the Enterprise: Big Switch or Little Niche?”

April 19th, 2010

Joe Weinman wrote an interesting post in advance of his panel at Structure ’10 titled “The Cloud in the Enterprise: Big Switch or Little Niche?” wherein he explored the future of Cloud adoption.

In this post, while framing the discussion with Nick Carr’s (in)famous “Big Switch” utility analogy, he asks the question:

So will enterprise cloud computing represent The Big Switch, a dimmer switch or a little niche?

…to which I respond:

I think it will be analogous to the theory of punctuated equilibrium, wherein we see patterns not unlike classical damped oscillations: many big swings ultimately settling down until another disruption causes big swings again.  In transition we see niches appear until they get subsumed in the uptake.

Or, in other words, as I posted on Twitter: “…lots of little switches AND big niches.”

Go see Joe’s panel. Better yet, comment on your thoughts here. ;)

/Hoff


Patching the (Hypervisor) Platform: How Do You Manage Risk?

April 12th, 2010

Hi. Me again.

In 2008 I wrote a blog titled “Patching the Cloud,” which I followed up with material examples in 2009 in another titled “Redux: Patching the Cloud.”

These blogs focused mainly on virtualization-powered IaaS/PaaS offerings, and whilst they targeted “Cloud Computing,” they applied equally to the heavily virtualized enterprise.  To that point, I wrote another in 2008 titled “On Patch Tuesdays For Virtualization Platforms.”

The operational impacts of managing change control, vulnerability management and threat mitigation have always intrigued me, especially at scale.

I was reminded this morning of the importance of the question posed above as VMware released a series of security advisories detailing ten vulnerabilities across many products, some of which are remotely exploitable. While security vulnerabilities in hypervisors are not new, it’s unclear to me how many heavily-virtualized enterprises or Cloud providers actually deal with what it means to patch this critical layer of infrastructure.

Once virtualized, we expect/assume that VMs and the guest OSes within them will operate with functional equivalence when compared to non-virtualized instances. We have, however, seen that this is not always the case. It’s rare, but it happens that OSes and applications, once virtualized, suffer from issues that cause faults in the underlying virtualization platform itself.

So here’s the $64,000 question – feel free to answer anonymously:

While virtualization is meant to effectively isolate the hardware from the resources atop it, the VMM/Hypervisor itself maintains a delicate position arbitrating this abstraction.  When the VMM/Hypervisor needs patching, how do you regression test the impact across all your VM images (across test/dev, production, etc.)?  More importantly, how are you assessing/measuring compound risk across shared/multi-tenant environments with respect to patching and its impact?
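One way to at least bound the regression-testing problem is brute force: boot every reference image against the patched hypervisor in a staging environment and run an in-guest health probe before promoting the patch. Here's a minimal sketch in Python; the image names and the probe are hypothetical placeholders, and a real pipeline would drive the virtualization platform's own API (vSphere, libvirt, etc.) rather than a stub:

```python
# Hypothetical sketch: smoke-testing a library of VM images after a
# hypervisor patch. Image names, the boot/probe function, and the
# report format are illustrative only.

def boot_and_probe(image):
    """Stand-in for booting an image on the patched host and running
    a health probe inside the guest. Returns True on success."""
    # In practice: clone the image to an isolated test cluster running
    # the patched hypervisor, boot it, and run an in-guest test suite.
    return not image.endswith("-legacy")  # placeholder result

def regression_report(images):
    """Partition an image inventory into pass/fail after patching."""
    results = {img: boot_and_probe(img) for img in images}
    failed = sorted(img for img, ok in results.items() if not ok)
    return {"tested": len(results), "failed": failed}

inventory = ["web-frontend", "db-primary", "erp-legacy", "build-agent"]
print(regression_report(inventory))  # {'tested': 4, 'failed': ['erp-legacy']}
```

Even this naive approach only answers the functional question; the compound-risk question across tenants has no equivalent brute-force answer.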

/Hoff

P.S. It occurs to me that after I wrote the blog last night on ‘high assurance (read: TPM-enabled)’ virtualization/cloud environments with respect to change control, the reference images for trusted launch environments would be impacted by patches like this. How are we going to scale this from a management perspective?


More On High Assurance (via TPM) Cloud Environments

April 11th, 2010

Back in September 2009 after presenting at the Intel Virtualization (and Cloud) Security Summit and urging Intel to lead by example by pushing the adoption and use of TPM in virtualization and cloud environments, I blogged a simple question (here) as to the following:

Does anyone know of any Public Cloud Provider (or Private for that matter) that utilizes Intel’s TXT?

Interestingly the replies were few; mostly they were along the lines of “we’re considering it,” “…it’s on our long radar,” or “…we’re unclear if there’s a valid (read: economically viable) use case.”

At this year’s RSA Security Conference, however, EMC/RSA, Intel and VMware made an announcement regarding a PoC of their “Trusted Cloud Infrastructure,” describing efforts to utilize technology across the three vendors’ portfolios to make use of the TPM:

The foundation for the new computing infrastructure is a hardware root of trust derived from Intel Trusted Execution Technology (TXT), which authenticates every step of the boot sequence, from verifying hardware configurations and initialising the BIOS to launching the hypervisor, the companies said.

Once launched, the VMware virtualisation environment collects data from both the hardware and virtual layers and feeds a continuous, raw data stream to the RSA enVision Security Information and Event Management platform. The RSA enVision is engineered to analyse events coming through the virtualisation layer to identify incidents and conditions affecting security and compliance.

The information is then contextualised within the Archer SmartSuite Framework, which is designed to present a unified, policy-based assessment of the organisation’s security and compliance posture through a central dashboard, RSA said.

It should be noted that in order to take advantage of said solution, the following components are required: a future release of RSA’s Archer GRC console, the upcoming Intel Westmere CPU and a soon-to-be-released version of VMware’s vSphere.  In other words, this isn’t available today and will require upgrades up and down the stack.
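For context on what a hardware root of trust actually buys you: the chain works by folding a hash of each boot component into a platform register before that component executes, so a change anywhere in the stack changes the final value. The sketch below is a deliberately simplified model of the TPM 1.2 PCR-extend operation (new = SHA-1(old || measurement)); real TXT involves multiple PCRs, localities and signed attestation quotes, and the component byte strings here are invented:

```python
import hashlib

def extend(pcr, component_bytes):
    """TPM-style PCR extend: new = SHA1(old || SHA1(component)).
    Simplified model of the TPM 1.2 extend operation."""
    measurement = hashlib.sha1(component_bytes).digest()
    return hashlib.sha1(pcr + measurement).digest()

def measure_boot(components):
    """Fold each boot-chain component into a single PCR value, in order."""
    pcr = b"\x00" * 20  # PCRs start zeroed at power-on
    for blob in components:
        pcr = extend(pcr, blob)
    return pcr

# "Golden" value recorded from a known-good boot of the same stack.
golden = measure_boot([b"bios-v1.2", b"hypervisor-v4.0", b"vmm-config"])

# A later boot with a tampered hypervisor yields a different PCR, so
# remote attestation against the golden value fails.
tampered = measure_boot([b"bios-v1.2", b"hypervisor-evil", b"vmm-config"])
print(tampered == golden)  # False: the chain detects the change
```

This is also why the change-control question matters so much: every legitimate patch to the BIOS, hypervisor or configuration changes the golden value and forces the reference measurements to be re-certified.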

Sam Johnston today pointed me toward an announcement from Enomaly referencing the “High Assurance Edition” of ECP, which laid claim to assurance using the TPM beyond the boundary of the VMM to include the guest OS and the management system:

Enomaly’s Trusted Cloud platform provides continuous security assurance by means of unique, hardware-assisted mechanisms. Enomaly ECP High Assurance Edition provides both initial and ongoing Full-Stack Integrity Verification to enable customers to receive cryptographic proof of the correct and secure operation of the cloud platform prior to running any application on the cloud.

Full-Stack Integrity Verification provides the customer with hardware-verified proof that the cloud stack (encompassing server hardware, hypervisor, guest OS, and even ECP itself) is intact and has not been tampered with. Specifically, the customer obtains cryptographically verifiable proof that the hardware, hypervisor, etc. are identical to reference versions that have been certified and approved in advance. The customer can therefore be assured, for example, that:

  • The hardware has not been modified to duplicate data to some storage medium of which the application is not aware
  • No unauthorized backdoors have been inserted into the cloud management system
  • The hypervisor has not been modified (e.g. to copy memory state)
  • No hostile kernel modules have been injected into the guest OS

This capability therefore enables customers to deploy applications to public clouds with confidence that the confidentiality and integrity of their data will not be compromised.

Of particular interest was Enomaly’s enticement of service providers with the following claim:

…with Enomaly’s patented security functionality, can deliver a highly secure Cloud Computing service – commanding a higher price point than commodity public cloud providers.

I’m looking forward to exploring more regarding these two example solutions as they see the light of day (and how long this will take given the need for platform-specific upgrades up and down the stack) as well as whether or not customers are actually willing to pay — and providers can command — a higher price point for what these components may offer.  You can bet certain government agencies are interested.

There are potentially numerous benefits with the use of this technology including security, compliance, assurance, audit and attestation capabilities (I hope also to incorporate more of what this might mean into the CloudAudit/A6 effort) but I’m very interested as to the implications on (change) management and policy, especially across heterogeneous environments and the extension and use of TPM’s across mobile platforms.

Of course, researchers are interested in these things too; see Rutkowska et al. and “Attacking Intel Trusted Execution Technology” as an example.

/Hoff


Good Interview/Resource Regarding CloudAudit from SearchCloudComputing…

April 6th, 2010

The guys from SearchCloudComputing gave me a ring and we chatted about CloudAudit. The interview that follows is a distillation of that discussion and goes a long way toward answering many of the common questions surrounding CloudAudit/A6.  You can find the original here.

What are the biggest challenges when auditing cloud-based services, particularly for the solution providers?

Christofer Hoff: One of the biggest issues is their lack of understanding of how the cloud differs from traditional enterprise IT. They’re learning as quickly as their customers are. Once they figure out what to ask and potentially how to ask it, there is the issue surrounding, in many cases, the lack of transparency on the part of the provider to be able to actually provide consistent answers across different cloud providers, given the various delivery and deployment models in the cloud.

How does the cloud change the way a traditional audit would be carried out?

Hoff: For the most part, a good amount of the questions that one would ask specifically surrounding the infrastructure is abstracted and obfuscated. In many cases, a lot of the moving parts, especially as they relate to the potential to being competitive differentiators for that particular provider, are simply a black box into which operationally you’re not really given a lot of visibility or transparency.
If you were to host in a colocation provider, where you would typically take a box, the operating system and the apps on top of it, you’d expect, given who controls what and who administers what, to potentially see a lot more, as well as there to be a lot more standardization of those deployed solutions, given the maturity of that space.

How did CloudAudit come about?

Hoff: I organized CloudAudit. We originally called it A6, which stands for Automated Audit Assertion Assessment and Assurance API. And as it stands now, it’s less in its first iteration about an API, and more specifically just about a common namespace and interface by which you can use simple protocols with good authentication to provide access to a lot of information that essentially can be automated in ways that you can do all sorts of interesting things with.

How does it work exactly?

Hoff: What we wanted to do is essentially keep it very simple, very lightweight and easy to implement without cloud providers having to make a lot of programmatic changes. Although we’re not prescriptive about how they do it (because each operation is different), we expect them to figure out how they’re going to get the information into this namespace, which essentially looks like a directory structure.

This kind of directory/namespace is really just an organized repository. We don’t care what is contained within those directories: .pdf, text documents, links to other websites. It could be a .pdf of a SAS 70 report with a signature that refers back to the issuing governing body. It could be logs, it could be assertions such as firewall=true. The whole point here is to allow these providers to agree upon the common set of minimum requirements.
We have aligned the first set of compliance-driven namespaces to that of the Cloud Security Alliance‘s compliance control-mapping tool. So the first five namespaces pretty much run the gamut of what you expect to see most folks concentrating on in terms of compliance: PCI DSS, HIPAA, COBIT, ISO 27002 and NIST 800-53…Essentially, we’re looking at both starting with those five compliance frameworks, and allowing cloud providers to set up generic infrastructure-focused type or operational type namespaces also. So things that aren’t specific to a compliance framework, but that you may find of interest if you’re a consumer, auditor, or provider.
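To make the "directory structure" idea concrete, here is a hypothetical sketch of what such a namespace might look like as data and how an auditor's tool might resolve one control. The namespace paths, control numbers and assertion keys below are invented for illustration; the real namespaces are defined by the CloudAudit working group:

```python
# Hypothetical CloudAudit-style namespace: a tree keyed by compliance
# framework, where leaves are simple assertions (e.g. firewall=true)
# or pointers to evidence documents. All names are illustrative.

namespace = {
    "org.cloudaudit.control.pcidss": {
        "1.1": {"firewall": "true"},               # simple assertion
        "12.8": {"evidence": "sas70-report.pdf"},  # pointer to a document
    },
    "org.cloudaudit.control.iso27002": {
        "10.4": {"antimalware": "true"},
    },
}

def lookup(ns, framework, control):
    """Resolve one control's entry, as an auditor's tool might;
    returns None when the provider publishes nothing for it."""
    return ns.get(framework, {}).get(control)

print(lookup(namespace, "org.cloudaudit.control.pcidss", "1.1"))
```

In the actual design the tree is served over plain HTTP(S) with standard authentication, so any tool that can fetch a URL can consume it; the point of the sketch is just the shape of the data, not the transport.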

Who are the participants in CloudAudit?

Hoff: We have both pretty much the largest cloud providers as well as virtualization platform and cloud platform providers on the planet. We’ve got end users, auditors, system integrators. You can get the list off of the CloudAudit website. There are folks from CSC, Stratus, Akamai, Microsoft, VMware, Google, Amazon Web Services, Savvis, Terremark, Rackspace, etc.

What are your short-term and long-term goals?

Hoff: Short-term goals are those that we are already trucking toward: to get this utilized as a common standard by which cloud providers, regardless of location — that could be internal private cloud or could be public cloud — essentially agree on the same set of standards by which consumers or interested parties can poll for information.

In the long-term, we wish to be able to improve visibility and transparency, which will ultimately drive additional market opportunities because, for example, if you have various levels of authentication, anywhere from anonymous to system administrator to auditor to fully trusted third party, you can imagine there’ll be a subset of anonymized information available that would actually allow a cloud broker or consumer to poll multiple cloud providers and actually make decisions based upon those assertions as to whether or not they want to do business with that cloud provider.

…It gives you an opportunity to shop wisely and ultimately compare services, or allow that to be done in an automated fashion. And while CloudAudit does not seek to make an actual statement regarding compliance, you will ultimately be provided with enough information to allow either automated tools or at least auditors to get back to the business of auditing rather than data collection. Because this data gathering can be automated, it means that instead of having a PCI audit once every year, or every 6 months, you can have it on a schedule that is much more temporal and on-demand.
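The broker scenario described above reduces to something very simple once assertions are machine-readable: poll each provider's published (possibly anonymized) assertion set and keep only those satisfying the consumer's policy. A sketch, with invented provider names and assertion keys:

```python
# Hypothetical broker logic: filter providers by published assertions.
# Provider names and assertion keys are illustrative only; a real
# broker would fetch these over the CloudAudit interface.

providers = {
    "cloud-a": {"pci_dss": True,  "encryption_at_rest": True},
    "cloud-b": {"pci_dss": False, "encryption_at_rest": True},
    "cloud-c": {"pci_dss": True,  "encryption_at_rest": False},
}

def shortlist(catalog, required):
    """Return providers whose published assertions satisfy every
    requirement -- a placement decision made without a manual audit."""
    return sorted(
        name for name, asserts in catalog.items()
        if all(asserts.get(k) for k in required)
    )

print(shortlist(providers, ["pci_dss", "encryption_at_rest"]))  # ['cloud-a']
```

The interesting part isn't the filter itself but that the inputs are provider-asserted and auditable, which is exactly where the auditor's interpretive role comes back in.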

What will solution providers and resellers be able to take from it? How is it to their benefit to get involved?

Hoff: The cloud service providers themselves, for the most part, are seeing this as a tremendous opportunity to not only reduce cost, but also make this information more visible and available…The reality is, in many cases, to be frank, folks that make a living auditing actually spend the majority of their time in data collection rather than actually looking at and providing good, actual risk management, risk assessment and/or true interpretation of the actual data. Now the automation of that, whether it’s done on a standard or on an ad-hoc basis, could clearly put a crimp in their ability to collect revenues. So the whole point here is their “value-add” needs to be about helping customers to actually manage risk appropriately vs. just kind of becoming harvesters of information. It behooves them to make sure that the type of information being collected is in line with the services they hope to produce.

What needs to be done for this to become an industry standard?

Hoff: We’ve already written a normative spec that we hope to submit to the IETF. We have cross-section representation across industry, we’re building namespaces, specifications, and those are not done in the dark. They’re done with a direct contribution of the cloud providers themselves, because they understand how important it is to get this information standardized. Otherwise, you’re going to be having ad-hoc comparisons done of services which may not portray your actual security services capabilities or security posture accurately. We have a huge amount of interest, a good amount of participation, and a lot of alliances that are already bubbling with other cloud standards.

Cloud computing changes the game for many security services, including vulnerability management, penetration testing and data protection/encryption, not just audits. Is the CloudAudit initiative a piece of a larger cloud security puzzle?

Hoff: If anything, it’s a light bulb in the darkness. For us, it’s allowing these folks to adjust their tools to be able to consume the data that’s provided as part of the namespace within CloudAudit, and then essentially in the same way, we suggest human auditors focus more on interpreting that data rather than gathering it.
If gathering that data was unavailable to most of the vendors who would otherwise play in that space, due to either just that data not being presented or it being a violation of terms of service or acceptable use policy, the reality is that this is another way for these tool vendors to get back into the game, which is essentially then understanding the namespaces that we have, being able to modify their tools (which shouldn’t take much, since it’s already a standard-based protocol), and be able to interpret the namespaces to actually provide value with the data that we provide.
I think it’s an overall piece here, but again we’re really the conduit or the interface by which some of these technologies need to adapt. Rather than doing a one-off by one-off basis for every single cloud provider, you get a standardized interface. You only have to do it once.

Where should people go to get involved?

Hoff: If people want to get involved, it’s an open project. You can go to cloudaudit.org. There you’ll find links about us. There’ll be a link to the forum. The forum itself is currently a Google group, which you can sign up for and participate in. We have calls every Monday, which are posted on the forum and tell you how to connect. You can also replay the last of the many calls that we’ve had already, as we record them each time so that people have both the audio and visual versions of what we produce and how we’re going about this. It’s very transparent and very open, and we enjoy people getting involved. If you have something to add, please do.
