
Posts Tagged ‘Amazon Web Services’

The Security Hamster Sine Wave Of Pain: Public Cloud & The Return To Host-Based Protection…

July 7th, 2010

This is a revisitation of a blog I wrote last year: Incomplete Thought: Cloud Security IS Host-Based…At The Moment

I use my ‘Security Hamster Sine Wave of Pain’ to illustrate the cyclical nature of security investment and deployment models over time and how disruptive innovation and technology impacts the flip-flop across the horizon of choice.

To wit: most mass-market Public Cloud providers such as Amazon Web Services rely on highly-abstracted and limited exposure of networking capabilities.  This means that most traditional network-based security solutions are impractical or non-deployable in these environments.

Network-based virtual appliances, which generally expect to be deployed in-line with the assets they protect, are at a disadvantage given this topological dependency.

So what we see are security solution providers simply re-marketing their network-based solutions as host-based solutions instead…or confusing things with Barney announcements.

Take a press release today from SourceFire:

Snort and Sourcefire Vulnerability Research Team(TM) (VRT) rules are now available through the Amazon Elastic Compute Cloud (Amazon EC2) in the form of an Amazon Machine Image (AMI), enabling customers to proactively monitor network activity for malicious behavior and provide automated responses.

Leveraging Snort installed on the AMI, customers of Amazon Web Services can further secure their most critical cloud-based applications with Sourcefire’s leading protection. Snort and Sourcefire(R) VRT rules are also listed in the Amazon Web Services Solution Partner Directory, so that users can easily ensure that their AMI includes the latest updates.

As far as I can tell, this means you can install a ‘virtual appliance’ of Snort/Sourcefire as a standalone AMI, but there’s no real description of how one might actually implement it in an environment that isn’t topologically friendly to this sort of network-based implementation constraint.*

Since you can’t easily “steer traffic” through an IPS in the AWS model, and can’t leverage promiscuous mode or taps, what does this packaging actually mean in practice?  Also, if one has a few hundred AMIs containing applications spread out across multiple availability zones/regions, how does a solution like this scale (from both a performance and a management perspective)?

I’ve spoken/written about this many times:

Where Are the Network Virtual Appliances? Hobbled By the Virtual Network, That’s Where… and

Dear Public Cloud Providers: Please Make Your Networking Capabilities Suck Less. Kthxbye

Ultimately, expect that Public Cloud will force the return to host-based HIDS/HIPS deployments — the return to agent-based security models.  This poses just as many operational challenges as those I allude to above.  We *must* have better ways of tying together network and host-based security solutions in these Public Cloud environments that make sense from an operational, cost, and security perspective.

/Hoff


* I “spoke” with Marty Roesch on the Twitter and he filled in the gaps associated with how this version of Snort works – there’s a host-based packet capture element with a “network” redirect to a stand-alone AMI:

@Beaker AWS->Snort implementation is IDS-only at the moment, uses software packet tap off customer app instance, not topology-dependent

and…

they install our soft-tap on their AMI and send the traffic to our AMI for inspection/detection/reporting.

It will be interesting to see how performance nets out using this redirect model.
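
To make the redirect model concrete, here’s a minimal sketch of what a host-based “soft tap” that relays captured traffic to a standalone sensor instance could look like. This is an illustration only, not Sourcefire’s actual mechanism; the sensor address, port, and framing are assumptions.

# Hypothetical "soft tap": capture frames on the application instance and
# relay copies to a separate sensor instance (e.g., a standalone IDS AMI).
# Linux-only, requires root; addresses and framing are illustrative.
import socket

SENSOR = ("10.0.0.99", 9999)   # hypothetical standalone IDS instance
IFACE = "eth0"
ETH_P_ALL = 0x0003             # capture every protocol on the interface

# A packet socket sees all frames on the interface, promiscuous-style,
# without needing any support from the provider's network.
cap = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(ETH_P_ALL))
cap.bind((IFACE, 0))

# Ordinary UDP socket used to ship copies of the captured frames off-box.
relay = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

while True:
    frame, _ = cap.recvfrom(65535)
    # Truncate to stay under a typical MTU; a real tap would encapsulate
    # and fragment properly instead of silently dropping the tail.
    relay.sendto(frame[:1400], SENSOR)

Even in this toy form you can see where the performance question comes from: every byte the application sends or receives gets copied and re-sent across the network a second time.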


Incomplete Thought: The DevOps Disconnect

May 31st, 2010

DevOps — what it means and how it applies — is a fascinating topic that inspires all sorts of interesting reactions from people, polarized by their interpretation of what this term really means.

At CloudCamp Denver, adjacent to Gluecon, Aaron Peterson of Opscode gave a lightning talk titled “Operations as Code.”  I’ve seen this presentation online before, but listened intently as Aaron presented.  You can see John Willis’ version on Slideshare here.  Adrian Cole (@adrianfcole) of jClouds fame (and now Opscode) and I had an awesome hour-long discussion afterwards that was the genesis for this post.

“Operations as Code” (I’ve also seen it described as “Infrastructure as Code”) is a fantastically sexy and intriguing phrase.  When boiled down, what I extract is that the DevOps “movement” is less about developers becoming operators than about the notion that developers can be part of the process, helping enable operations/operators to automate, repeatably and with discipline, processes that are otherwise manual and prone to error.

[Ed: great feedback from Andrew Shafer: “DevOps isn’t so much about developers helping operations, it’s about operational concerns becoming more and more programmable, and operators becoming more and more comfortable and capable with that.” Further, John Allspaw (@allspaw) added some great commentary below – talking about DevOps really being about tools + culture + communication. Adam Jacob from Opscode *really* banged out a great set of observations in the comments also. All good perspective.]

Automate, automate, automate.

While I find the message of DevOps totally agreeable, it’s the messaging that causes me concern, not because of the groups it includes, but those that it leaves out.  I find that the most provocative elements of the DevOps “manifesto” (sorry) are almost religious in nature.  That’s to be expected as most great ideas are.

In many presentations promoting DevOps, developers are shown to have evolved in practice and methodology, but operators (of all kinds) are described as being stuck in the dark ages. DevOps evangelists paint a picture that compares and contrasts the Agile-based, reusable componentized, source-controlled, team-based and integrated approach of “evolved” development environments with that of loosely-scripted, poorly-automated, inefficient, individually-contributed, undisciplined, non-source-controlled operations.

You can see how this might be just a tad off-putting to some people.

In Aaron’s presentation, the most interesting concept to me is the definition of “infrastructure.” Take the example from his slides, wherein various “infrastructure” roles are described.  What should be evident is that to many — especially those in enterprise (virtualized or otherwise) or non-Cloud environments — these software-only components represent only a fraction of what makes up “infrastructure.”

The loadbalancer role, as an example, makes total sense if you’re using HAProxy or Zeus ZXTM. What happens if it’s an F5 or Cisco appliance?

What about the routers, switches, firewalls, IDS/IPS, WAFs, SSL engines, storage, XML parsers, etc. that make up the underpinning of the typical datacenter?  The majority of these elements — as most of them exist today — do not present consistent interfaces for automation/integration. Most of them utilize proprietary/closed APIs for management that make automation cumbersome if not impossible across a large environment.
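
To put the gap in code terms, here’s a rough sketch, with entirely made-up names, of why a software “loadbalancer role” automates cleanly while a hardware appliance usually doesn’t:

# Illustrative only: applying a loadbalancer "role" to a software component
# reduces to templating a config file and reloading a service, while the
# hardware path means writing a device-specific driver against a proprietary
# management interface (if one is exposed at all). All names are hypothetical.

class FakeNode:
    """Stand-in for a managed host; real tooling would use SSH or an agent."""
    def write_file(self, path, contents):
        print(f"--- would write {path} ---\n{contents}")

    def run(self, command):
        print(f"--- would run: {command}")

def apply_haproxy_role(node, backends):
    # Software path: the whole role is text plus a service reload.
    servers = "\n".join(f"    server {name} {addr}:80 check"
                        for name, addr in backends.items())
    node.write_file("/etc/haproxy/haproxy.cfg", "backend web\n" + servers)
    node.run("service haproxy reload")

def apply_f5_role(appliance, backends):
    # Hardware path: no file to template and no common interface to target.
    raise NotImplementedError("requires a vendor-specific device driver")

if __name__ == "__main__":
    apply_haproxy_role(FakeNode(), {"web1": "10.0.0.11", "web2": "10.0.0.12"})

The software path repeats cleanly across hundreds of nodes; the hardware path is exactly where automation across a large environment gets cumbersome.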

Many will react to that statement by suggesting that this is why Cloud Computing is the great equalizer — that by abstracting the “complexity” of these components into a more “simplified” set of software resources versus hardware, it solves this problem and without the hardware-centric focus of infrastructure and the operations mess that revolves around it today, we’re free to focus on “building the business versus running the business.”

I’d agree.  The problem is that these are two different worlds.  The mass-market IaaS/PaaS providers who provide abstracted representations of infrastructure are still the corner-cases when compared to the majority of service providers who are entering the Cloud space specifically focused on serving the enterprise, and the enterprise — even those that are heavily virtualized — still very dependent upon hardware.

This is where the DevOps messaging miss comes — at least as it’s described today. DevOps is really targeted (today) toward the software homogeneity of public, mass-market Cloud environments (such as Amazon Web Services), where infrastructure can be defined as abstract, software-only component roles rather than the complex mish-mash of hardware-focused IT in the enterprise as it currently stands. This may be plainly obvious to some, but the messaging of DevOps is obscuring the message, which is unfortunate.

DevOps is promoted today as a target operational end-state without explicitly defining that the requirements for success really do depend upon the level of abstraction in the environment; it’s really focused on public Cloud Computing.  In and of itself, that’s not a bad thing at all, but it’s a “marketing” miss when it comes to engaging with a huge audience who wants and needs to get the DevOps religion.

You can preach to the choir all day long, but that’s not going to move the needle.

My biggest gripe with the DevOps messaging is with the name itself. If you expect to truly automate “infrastructure as code,” we really should call it NetSecDevOps. Leaving the network and security teams — and the infrastructure they represent — out of the loop until they are either subsumed by software (won’t happen) or get religion (probable but a long-haul exercise) is counter-productive.

Take security, for example. By design, 95% of security technologies/solutions are not easily automatable, or are built to require human interaction given their mission and their lack of intelligence/correlation with other tools.  How do you automate around that?  It’s really here that my statement that “security doesn’t scale” is apropos. Believe me, I’m not making excuses for the security industry, nor am I suggesting this is how it ought to be, but it is how it currently exists.

Of course we’re seeing the next generation of datacenters in the enterprise become more abstract. With virtualization and cloud-like capabilities being delivered with automated provisioning, orchestration and governance by design for BOTH hardware and software and the vision of private/public cloud integration baked into enterprise architecture, we’re actually on a path where DevOps — at its core — makes total sense.

I only wish that (NetSec)DevOps evangelists — and companies such as Opscode — would address this divide up-front and start to reach out to the enterprise world to help make DevOps a goal these teams aspire to rather than something to rub their noses in.  Further, we need a way for the community to contribute things like Chef recipes that allow for flow-through role definition support for hardware-based solutions that do have exposed management interfaces (Ed: Adrian referred to these in a tweet as ‘device’ recipes).
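
As a thought experiment, one of those ‘device’ recipes might boil down to something like the sketch below: a thin, idempotent shim over whatever management interface the appliance does expose. The endpoint, payload, and resource layout are assumptions for illustration, not any vendor’s real API.

# Hypothetical "device recipe": declare the desired state of an appliance's
# server pool and let a small shim converge it through the device's exposed
# management interface. The URL and JSON schema here are made up.
import json
import urllib.request

DEVICE_API = "https://lb-appliance.example.internal/api/pools/web"  # hypothetical

def get_current_members():
    with urllib.request.urlopen(DEVICE_API) as resp:
        return set(json.load(resp)["members"])

def converge_pool(desired_members):
    # Idempotent: only push a change when actual state differs from desired,
    # which is the property that makes a role definition safe to re-run.
    current = get_current_members()
    if current == set(desired_members):
        return "nothing to do"
    body = json.dumps({"members": sorted(desired_members)}).encode()
    req = urllib.request.Request(DEVICE_API, data=body, method="PUT",
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)
    return "converged"

if __name__ == "__main__":
    print(converge_pool(["10.0.0.11", "10.0.0.12"]))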

/Hoff



Amazon Web Services Hires a CISO – Did You Know?

May 18th, 2010

Just to point out a fact many/most of you may not be aware of: Amazon Web Services hired (transferred, perhaps, since he was already an AWS insider) Stephen Schmidt as their CISO earlier this year.  He brings a team along with him, too.

That’s a very, very good thing. I, for one, am very glad to see it. Combine that with folks like Steve Riley and I’m enthusiastic that AWS will make some big leaps when it comes to visibility, transparency and interaction with the security community.

See. Christmas wishes can come true! Thanks, Santa! 😉

You can find more about Mr. Schmidt by checking out his LinkedIn profile.

/Hoff


The Hypervisor Platform Shuffle: Pushing The Networking & Security Envelope

May 14th, 2010

Last night we saw coverage by Jo Maitland (not Carl Brooks; sorry, Jo) of an announcement from Rackspace that they were transitioning their IaaS Cloud offerings from the FOSS Xen platform to the commercially-supported Citrix XenServer:

Jaws dropped during the keynote sessions [at Citrix Synergy] when Lew Moorman, chief strategy officer and president of cloud services at Rackspace said his company was moving off Xen and over to XenServer, for better support. Rackspace is the second largest cloud provider after Amazon Web Services. AWS still runs on Xen.

People really shouldn’t be that surprised. What we’re playing witness to is the evolution of the next phase of provider platform selection in Cloud environments.

Many IaaS providers (read: the early-point market leaders) are re-evaluating their choices of primary virtualization platforms and some are actually adding support for multiple offerings in order to cast the widest net and meet specific requirements of their more evolved and demanding customers.  Take Terremark, known for their VMware vCloud-based service, who is reportedly now offering services based on Citrix:

Hosting provider Terremark announced a cloud-based compliance service using Citrix technology. “Now we can provide our cloud computing customers even greater levels of compliance at a lower cost,” said Marvin Wheeler, chief strategy officer at Terremark, in a statement.

Demand for services will drive hypervisor-specific platform choices on the part of providers, with networking and security really driving many of those opportunities. IaaS providers who offer bare-metal boot infrastructure that allows the flexibility of multiple operating environments (read: hypervisors) will start to make inroads.  This isn’t a mass-market provider’s game, but it’s also not a niche if you consider the enterprise as a target market.

Specifically, the constraints associated with networking and security (via the hypervisor) limit the very flexibility and agility that IaaS/PaaS clouds are designed to provide. What many developers, security and enterprise architects want is the ability to replicate more flexible enterprise virtualized networking (such as multiple interfaces/IPs) and security capabilities (such as APIs) in Public Cloud environments.

Support of specific virtualization platforms can enable these capabilities whether they are open or proprietary (think Open vSwitch versus Cisco Nexus 1000v, for instance.)  In fact, Citrix just announced a partnership with McAfee to address integrated security between the ecosystem and native hypervisor capabilities. See Simon Crosby’s announcement here titled “Taming the Four Horsemen of the Virtualization Security Apocalypse” (it’s got a nice title, too 😉)

To that point, I made some comments on Twitter that describe these points at a high level.

I wrote about this in my post titled “Where Are the Network Virtual Appliances? Hobbled By the Virtual Network, That’s Where…” and what it means technically in my “Four Horsemen of the Virtualization Security Apocalypse” presentation.  Funny how these things come back around into the spotlight.

I think we’ll see other major Cloud providers reconsider their platform architecture from the networking and security perspectives in the near term.

/Hoff


Dear SaaS Vendors: If Cloud Is The Way Forward & Companies Shouldn’t Spend $ On Privately-Operated Infrastructure, When Are You Moving Yours To Amazon Web Services?

April 30th, 2010

We’re told repetitively by Software as a Service (SaaS)* vendors that infrastructure is irrelevant, that CapEx spending is for fools and that Cloud Computing has fundamentally changed the way we will, forever, consume computing resources.

Why is it then that many of the largest SaaS providers on the planet (including firms like Salesforce.com, Twitter, Facebook, etc.) continue to build their software and choose to run it in their own datacenters on their own infrastructure?  In fact, many of them are on a tear involving multi-hundred million dollar (read: infrastructure) private datacenter build-outs.

I mean, SaaS is all about the software and service delivery, right?  IaaS/PaaS is the perfect vehicle for the delivery of scalable software, right?  So why do you continue to try to convince *us* to move our software to you and yet *you* don’t/won’t/can’t move your software to someone else like AWS?

Hypocricloud: SaaS firms telling us we’re backwards for investing in infrastructure when they don’t eat the dog food they’re dispensing (AKA we’ll build private clouds and operate them, but tell you they’re a bad idea, in order to provide public cloud offerings to you…)

Quid pro quo, agent Starling.

/Hoff

* I originally addressed this to Salesforce.com via Twitter in response to Peter Coffee’s blog here but repurposed the title to apply to SaaS vendors in general.


Good Interview/Resource Regarding CloudAudit from SearchCloudComputing…

April 6th, 2010

The guys from SearchCloudComputing gave me a ring and we chatted about CloudAudit. The interview that follows is a distillation of that discussion and goes a long way toward answering many of the common questions surrounding CloudAudit/A6.  You can find the original here.

What are the biggest challenges when auditing cloud-based services, particularly for the solution providers?

Christofer Hoff: One of the biggest issues is their lack of understanding of how the cloud differs from traditional enterprise IT. They’re learning as quickly as their customers are. Once they figure out what to ask and potentially how to ask it, there is the issue surrounding, in many cases, the lack of transparency on the part of the provider to be able to actually provide consistent answers across different cloud providers, given the various delivery and deployment models in the cloud.

How does the cloud change the way a traditional audit would be carried out?

Hoff: For the most part, a good amount of what one would ask specifically surrounding the infrastructure is abstracted and obfuscated. In many cases, a lot of the moving parts, especially as they relate to the potential to be competitive differentiators for that particular provider, are simply a black box into which, operationally, you’re not really given a lot of visibility or transparency.
If you were to host in a colocation provider, where you would typically take a box, the operating system and the apps on top of it, you’d expect, given who controls what and who administers what, to potentially see a lot more, as well as there to be a lot more standardization of those deployed solutions, given the maturity of that space.

How did CloudAudit come about?

Hoff: I organized CloudAudit. We originally called it A6, which stands for Automated Audit Assertion Assessment and Assurance API. And as it stands now, it’s less in its first iteration about an API, and more specifically just about a common namespace and interface by which you can use simple protocols with good authentication to provide access to a lot of information that essentially can be automated in ways that you can do all sorts of interesting things with.

How does it work exactly?

Hoff: What we wanted to do is essentially keep it very simple, very lightweight and easy to implement without cloud providers having to make a lot of programmatic changes. Although we’re not prescriptive about how they do it (because each operation is different), we expect them to figure out how they’re going to get the information into this namespace, which essentially looks like a directory structure.

This kind of directory/namespace is really just an organized repository. We don’t care what is contained within those directories: .pdf, text documents, links to other websites. It could be a .pdf of a SAS 70 report with a signature that refers back to the issuing governing body. It could be logs, it could be assertions such as firewall=true. The whole point here is to allow these providers to agree upon the common set of minimum requirements.
We have aligned the first set of compliance-driven namespaces to that of the Cloud Security Alliance‘s compliance control-mapping tool. So the first five namespaces pretty much run the gamut of what you expect to see most folks concentrating on in terms of compliance: PCI DSS, HIPAA, COBIT, ISO 27002 and NIST 800-53… Essentially, we’re looking at both starting with those five compliance frameworks, and allowing cloud providers to set up generic infrastructure-focused or operational namespaces also. So things that aren’t specific to a compliance framework, but that you may find of interest if you’re a consumer, auditor, or provider.
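
[Ed: to make the “directory structure” idea concrete, here is a rough sketch of how a provider might lay such a namespace out on disk so it can be served over HTTP with simple authentication. The root path, directory names, and file names below are illustrative, not the normative CloudAudit specification.]

# Illustrative only: a CloudAudit-style namespace as plain directories and
# files. Directory names and the root location are assumptions.
from pathlib import Path

ROOT = Path("cloudaudit")   # would be exposed by the provider over HTTP(S)

# The first five compliance-driven namespaces mentioned above.
NAMESPACES = ["pcidss", "hipaa", "cobit", "iso27002", "nist800-53"]

for ns in NAMESPACES:
    (ROOT / "compliance" / ns).mkdir(parents=True, exist_ok=True)

# Entries can be documents, links, or simple assertions such as firewall=true.
(ROOT / "compliance" / "pcidss" / "assertions.txt").write_text("firewall=true\n")
(ROOT / "compliance" / "pcidss" / "sas70.url").write_text(
    "https://provider.example.com/reports/sas70.pdf\n")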

Who are the participants in CloudAudit?

Hoff: We have pretty much the largest cloud providers on the planet, as well as virtualization platform and cloud platform providers. We’ve got end users, auditors, system integrators. You can get the list off of the CloudAudit website. There are folks from CSC, Stratus, Akamai, Microsoft, VMware, Google, Amazon Web Services, Savvis, Terremark, Rackspace, etc.

What are your short-term and long-term goals?

Hoff: Short-term goals are those that we are already trucking toward: to get this utilized as a common standard by which cloud providers, regardless of location — that could be internal private cloud or could be public cloud — essentially agree on the same set of standards by which consumers or interested parties can pull for information.

In the long-term, we wish to be able to improve visibility and transparency, which will ultimately drive additional market opportunities because, for example, if you have various levels of authentication, anywhere from anonymous to system administrator to auditor to fully trusted third party, you can imagine there’ll be a subset of anonymized information available that would actually allow a cloud broker or consumer to poll multiple cloud providers and actually make decisions based upon those assertions as to whether or not they want to do business with that cloud provider.

…It gives you an opportunity to shop wisely and ultimately compares services or allow that to be done in an automated fashion. And while CloudAudit does not seek to make an actual statement regarding compliance, you will ultimately be provided with enough information to allow either automated tools or at least auditors to get back to the business of auditing rather than data collection. Because this data gathering can be automated, it means that instead of having a PCI audit once every year, or every 6 months, you can have it on a schedule that is much more temporal and on-demand.
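
[Ed: on the consumer or broker side, the polling described above might look something like this sketch. The provider URLs and the assertions.txt layout are hypothetical and follow the illustrative namespace shown earlier, not the normative spec.]

# Illustrative broker-side sketch: poll several providers' CloudAudit-style
# namespaces over HTTP and keep only those asserting a required control.
import urllib.request

PROVIDERS = {
    "provider-a": "https://provider-a.example.com/cloudaudit",
    "provider-b": "https://provider-b.example.com/cloudaudit",
}

def fetch_assertions(base_url):
    # Reads simple key=value assertions from the (hypothetical) PCI namespace.
    url = f"{base_url}/compliance/pcidss/assertions.txt"
    with urllib.request.urlopen(url, timeout=10) as resp:
        text = resp.read().decode()
    return dict(line.split("=", 1) for line in text.splitlines() if "=" in line)

def providers_asserting(control, value="true"):
    matches = []
    for name, base in PROVIDERS.items():
        try:
            if fetch_assertions(base).get(control) == value:
                matches.append(name)
        except OSError:
            pass  # unreachable provider: treat as "no assertion available"
    return matches

if __name__ == "__main__":
    print(providers_asserting("firewall"))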

What will solution providers and resellers be able to take from it? How is it to their benefit to get involved?

Hoff: The cloud service providers themselves, for the most part, are seeing this as a tremendous opportunity to not only reduce cost, but also make this information more visible and available…The reality is, in many cases, to be frank, folks that make a living auditing actually spend the majority of their time in data collection rather than actually looking at and providing good, actual risk management, risk assessment and/or true interpretation of the actual data. Now the automation of that, whether it’s done on a standard or on an ad-hoc basis, could clearly put a crimp in their ability to collect revenues. So the whole point here is their “value-add” needs to be about helping customers to actually manage risk appropriately vs. just kind of becoming harvesters of information. It behooves them to make sure that the type of information being collected is in line with the services they hope to produce.

What needs to be done for this to become an industry standard?

Hoff: We’ve already written a normative spec that we hope to submit to the IETF. We have cross-section representation across industry, we’re building namespaces, specifications, and those are not done in the dark. They’re done with a direct contribution of the cloud providers themselves, because they understand how important it is to get this information standardized. Otherwise, you’re going to be having ad-hoc comparisons done of services which may not portray your actual security services capabilities or security posture accurately. We have a huge amount of interest, a good amount of participation, and a lot of alliances that are already bubbling with other cloud standards.

Cloud computing changes the game for many security services, including vulnerability management, penetration testing and data protection/encryption, not just audits. Is the CloudAudit initiative a piece of a larger cloud security puzzle?

Hoff: If anything, it’s a light bulb in the darkness. For us, it’s allowing these folks to adjust their tools to be able to consume the data that’s provided as part of the namespace within CloudAudit, and then essentially in the same way, we suggest human auditors focus more on interpreting that data rather than gathering it.
If gathering that data was unavailable to most of the vendors who would otherwise play in that space, due to either just that data not being presented or it being a violation of terms of service or acceptable use policy, the reality is that this is another way for these tool vendors to get back into the game, which is essentially then understanding the namespaces that we have, being able to modify their tools (which shouldn’t take much, since it’s already a standard-based protocol), and be able to interpret the namespaces to actually provide value with the data that we provide.
I think it’s an overall piece here, but again we’re really the conduit or the interface by which some of these technologies need to adapt. Rather than working on a one-off basis with every single cloud provider, you get a standardized interface. You only have to do it once.

Where should people go to get involved?

Hoff: If people want to get involved, it’s an open project. You can go to cloudaudit.org. There you’ll find links about us. There’ll be a link to the forum. The forum itself is currently a Google group, which you can sign up for and participate in. We have calls every Monday, which are posted on the forum and tell you how to connect. You can also replay any of the many calls that we’ve had already, as we record them each time so that people have both the audio and visual versions of what we produce and how we’re going about this. It’s very transparent and very open and we enjoy people getting involved. If you have something to add, please do.



Microsoft Azure Going “Down Stack,” Adding IaaS Capabilities. AWS/VMware WAR!

February 4th, 2010

It’s very interesting to see that now that infrastructure-as-a-service (IaaS) players like Amazon Web Services are clawing their way “up the stack” and adding more platform-as-a-service (PaaS) capabilities, Microsoft is going “down stack” and providing IaaS capabilities by way of adding RDP and VM capabilities to Azure.

From Carl Brooks’ (@eekygeeky) article today:

Microsoft is expected to add support for Remote Desktops and virtual machines (VMs) to Windows Azure by the end of March, and the company also says that prices for Azure, now a baseline $0.12 per hour, will be subject to change every so often.

Prashant Ketkar, marketing director for Azure, said that the service would be adding Remote Desktop capabilities as soon as possible, as well as the ability to load and run virtual machine images directly on the platform. Ketkar did not give a date for the new features, but said they were the two most requested items.

This move begins a definite trend away from the original concept for Azure in design and execution. It was originally thought of as a programming platform only: developers would write code directly into Azure, creating applications without even being aware of the underlying operating system or virtual instances. It will now become much closer in spirit to Amazon Web Services, where users control their machines directly. Microsoft still expects Azure customers to code for the platform and not always want hands on control, but it is bowing to pressure to cede control to users at deeper and deeper levels.

One major reason for the shift is that there are vast arrays of legacy Windows applications users expect to be able to run on a Windows platform, and Microsoft doesn’t want to lose potential customers because they can’t run applications they’ve already invested in on Azure. While some users will want to start fresh, most see cloud as a way to extend what they have, not discard it.

This sets the path to allow those enterprise customers running Hyper-V internally to take those VMs and run them on (or in conjunction with) Azure.

Besides the obvious competition with AWS in the public cloud space, there’s also a private cloud element. As it stands now, one of the primary differentiators for VMware from the private-to-public cloud migration/portability/interoperability perspective is the concept that if you run vSphere in your enterprise, you can take the same VMs without modification and move them to a service provider who runs vCloud (based on vSphere.)

This is a very interesting and smart move by Microsoft.

/Hoff
