From the X-Files – The Cloud in Context: Evolution from Gadgetry to Popular Culture

November 27th, 2009 4 comments


[This post was originally authored November 27, 2009.  I pushed it back to the top of the stack because I think it’s an interesting re-visitation of the benefits and challenges we are experiencing in Cloud today]

Below is an article I wrote many months ago prior to all the Nicholas Carr “electricity ain’t Cloud” discussions.  The piece was one from a collection that was distributed to “…the Intelligence Community, the DoD, and Congress” with the purpose of giving a high-level overview of Cloud security issues.

The Cloud in Context: Evolution from Gadgetry to Popular Culture

It is very likely that should one develop any interest in Cloud Computing (“Cloud”) and wish to investigate its provenance, one would be pointed to Nicholas Carr’s treatise “The Big Switch” for enlightenment. Carr offers a metaphoric genealogy of Cloud Computing, mapped to, and illustrated by, a keenly patterned set of observations from one of the most important catalysts of a critical inflection point in modern history: the generation and distribution of electricity.

Carr offers an uncannily prescient perspective on the evolution and adaptation of computing by way of this electric metaphor, describing how the scale of technology, socioeconomic, and cultural advances were all directly linked to the disruptive innovation of a shift from dedicated power generation in individual factories to a metered utility of interconnected generators powering distribution grids feeding all. He predicts a similar shift from insular, centralized, private single-function computational gadgetry to globally-networked, distributed, public service-centric collaborative fabrics of information interchange.

This phenomenon will not occur overnight nor has any other paradigm shift in computing occurred overnight; bursts of disruptive innovation have a long tail of adoption. Cloud is not the product or invocation of some singular technology, but rather an operational model that describes how computing will mature.

There is no box with blinking lights that can be simply pointed to as “Cloud” and yet it is clearly more than just timesharing with Internet connectivity. As corporations seek to drive down cost and gain efficiency force-multipliers, they have ruthlessly focused on divining what is core to their businesses, and expensive IT cost-centers are squarely in the crosshairs for rigorous valuation.

To that end, Carr wrote another piece on this very topic titled “IT Doesn’t Matter,” in which he argued that IT was no longer a strategic differentiator due to commoditization, standardization, and cost. This was followed by “The End of Corporate Computing,” wherein he suggested that companies will simply subscribe to IT services as an outsourced function. Based upon these themes, Cloud seems a natural evolutionary outcome motivated primarily by economics as companies pare down their IT investment — outsourcing what they can and optimizing what is left.

Enter Cloud Computing

The emergence of Cloud as cult-status popular culture also has its muse anchored firmly in the little machines nestled in the hands of those who might not realize that they’ve helped create the IT revolution at all: the consumer. The consumer’s shift to an always-on, many-to-many communication model with unbridled collaboration and unfettered access to resources, sharply contrasts with traditional IT — constrained, siloed, well-demarcated, communication-restricted, and infrastructure-heavy.

Regardless of any value judgment on the fate of Man, we are evolving to a society dedicated to convenience, where we are not tied to the machine, but rather the machine is tied to us, and always on. Your applications and data are always there, consumed according to business and pricing models that are based upon what you use while the magic serving it up remains transparent.

This is Cloud in a nutshell; the computing equivalent to classical Greek theater’s Deus Ex Machina.

For the purpose of this paper, it is important that I point out that I refer mainly to so-called “Public Cloud” offerings; those services provided by parties external to the data owner, who provide an “outsourced” service capability on behalf of the consumer.

This graceful surrender of control is the focus of my discussion. Private Clouds — those services that may operate on the corporation’s infrastructure or a provider’s but are managed under said corporation’s control and policies — offer a different set of benefits and challenges, but not to the degree of Public Cloud.

There are also hybrid and brokered models, but to keep focused, I shall not address these directly.

Cloud Reference Model

A service is generally considered to be “Cloud-based” should it meet the following characteristics and provide for:

  • The abstraction of infrastructure from the resources that deliver it
  • The democratization of those resources as an elastic pool to be consumed
  • A services-oriented model rather than an infrastructure- or application-centric one
  • Self-service, on-demand scale, elasticity and dynamism
  • A utility-like model of consumption and allocation

Cloud exacerbates the issues we have faced for years in the information security, assurance, and survivability spaces and introduces new challenges associated with extreme levels of abstraction, mobility, scale, dynamism and multi-tenancy. It is important that one contemplate the “big picture” of how Cloud impacts the IT landscape and how, given this “service-centric” view, certain things change whilst others remain firmly status quo.

Cloud also provides numerous challenges to the way in which computing and resources are organized, operated, governed and secured, given the focus on:

  • Automated and autonomic resource provisioning and orchestration
  • Massively interconnected and mashed-up data sources, conduits and results
  • Virtualized layers of software-driven, service-centric capability rather than infrastructure or application-specific monoliths
  • Dynamic infrastructure that is aware of and adjusts to the information, applications and services (workloads) running over it, supporting dynamism and abstraction in terms of scale, policy, agility, security and mobility

As a matter of correctness, virtualization as a form of abstraction may exist in many forms and at many layers, but it is not required for Cloud. Many Cloud services do utilize virtualization to achieve scale, and I make liberal use of this assumptive case in this paper. As we grapple with the tradeoffs between convenience, collaboration, and control, we find that existing products, solutions and services are quickly being re-branded and adapted as “Cloud,” to the confusion of all.

Modeling the Cloud

There exist numerous deployment models, service delivery models and use cases for Cloud, each offering a specific balance of integrated features, extensibility/openness and security hinged on high levels of automation for workload distribution.

Three archetypal models generally describe cloud service delivery, popularly referred to as the “SPI Model,” where “SPI” refers to Software, Platform and Infrastructure (as a service) respectively.

NIST – Visual Cloud Model

Using the National Institute of Standards and Technology’s (NIST) draft working definition as the basis for the model:

Software as a Service (SaaS)

The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure and accessible from various client devices through a thin client interface such as a Web browser (e.g., web-based email).

The consumer does not manage or control the underlying cloud infrastructure, network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.

Platform as a Service (PaaS)

The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created applications using programming languages and tools supported by the provider (e.g., Java, Python, .Net). The consumer does not manage or control the underlying cloud infrastructure, network, servers, operating systems, or storage, but has control over the deployed applications and possibly the application hosting environment configurations.

Infrastructure as a Service (IaaS)

The capability provided to the consumer is to rent processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly select networking components (e.g., firewalls, load balancers).

Understanding the relationship and dependencies between these models is critical. IaaS is the foundation of all Cloud services with PaaS building upon IaaS, and SaaS — in turn — building upon PaaS. We will cover this in more detail later in the document.
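
To make that division of responsibility concrete, here is a minimal sketch (not from the original article) that expresses the SPI split described above as data. The stack element names and the exact cut points are simplifying assumptions chosen for illustration.

```python
# Minimal sketch of the SPI responsibility split described above.
# The stack element names are illustrative, not an exhaustive taxonomy.
from enum import Enum

class ServiceModel(Enum):
    SAAS = "Software as a Service"
    PAAS = "Platform as a Service"
    IAAS = "Infrastructure as a Service"

# Stack elements ordered from the facility up to the data.
STACK = ["facility", "network", "compute", "storage", "hypervisor",
         "operating_system", "middleware", "application", "data"]

# First stack element the *consumer* controls in each model; everything
# below it is the provider's responsibility (per the NIST-style text above).
CONSUMER_CONTROL_STARTS_AT = {
    ServiceModel.IAAS: "operating_system",   # OS, storage, deployed apps
    ServiceModel.PAAS: "application",        # deployed apps and their config
    ServiceModel.SAAS: "data",               # only limited settings and data
}

def consumer_scope(model: ServiceModel) -> list[str]:
    """Return the stack elements the consumer manages in a given model."""
    start = STACK.index(CONSUMER_CONTROL_STARTS_AT[model])
    return STACK[start:]

if __name__ == "__main__":
    for model in ServiceModel:
        print(f"{model.value}: consumer manages {consumer_scope(model)}")
```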

Peanut Butter & Jelly — Making the Perfect Cloud Sandwich

Infostructure/Metastructure/Infrastructure

To understand how Cloud will affect security, visualize its functional structure in three layers:

  • The Infrastructure layer represents the traditional compute, network and storage hardware and operating systems familiar to us all. Virtualization platforms also exist at this layer and expose their capabilities northbound.
  • The Infostructure layer represents the programmatic components such as applications and service objects that produce, operate on or interact with the content, information and metadata.
  • Sitting in between Infrastructure and Infostructure is the Metastructure layer. This layer represents the underlying set of protocols and functions, such as DNS, BGP, and IP address management, which “glue” together and enable the applications and content at the Infostructure layer to be delivered in turn by the Infrastructure (see the sketch after this list).
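
As a quick illustration of the three-layer model, the following sketch maps a handful of example components onto the layers described above; the component list is an assumption chosen for demonstration, not a complete taxonomy.

```python
# Illustrative mapping of example components onto the three layers named
# above; the component list is an assumption for demonstration only.
LAYER_OF = {
    # Infrastructure: compute, network, storage hardware, OS, virtualization
    "hypervisor": "Infrastructure",
    "storage_array": "Infrastructure",
    "operating_system": "Infrastructure",
    # Metastructure: the "glue" protocols and functions
    "dns": "Metastructure",
    "bgp": "Metastructure",
    "ip_address_management": "Metastructure",
    # Infostructure: applications, service objects, content and metadata
    "web_application": "Infostructure",
    "message_queue_payload": "Infostructure",
}

def layer_of(component: str) -> str:
    """Return which of the three layers a component belongs to."""
    return LAYER_OF.get(component.lower(), "unknown")

print(layer_of("BGP"))   # -> Metastructure
```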

Certain areas of Cloud Computing’s technology underpinnings are making progress, but those things that will ultimately make Cloud the ubiquitous and transparent platform for our entire computing experience remain lacking.

Unsurprisingly, most of the deficient categories of technology or capabilities are those that need to be delivered through standards and consensus-driven action; things that have always posed challenges, such as management, governance, provisioning, orchestration, automation, portability, interoperability and security. As security solutions specific to Cloud are generally slow in coming while fast-innovating attackers are unconstrained by rules of engagement, it will come as no surprise that we are constantly playing catch-up.

Cloud is a gradual adaptation rather than a wholesale re-tooling, and represents another cycle of investment which leaves us to consider where to invest our security dollars to most appropriately mitigate threat and vulnerability:

Typically, we react by cycling between investing in host-based controls > application controls > information controls > user controls > network controls and back again. While our security tools tend to be out of phase and less innovative than the tools of our opposition, virtualization and Cloud may act as much needed security forcing functions that get us beyond solving just the problem du jour.

The need to apply policy to workloads throughout their lifecycle, regardless of state, physical location, or infrastructure from which they are delivered, is paramount. Collapsing the atomic unit of the datacenter to the virtual machine boundary may allow for a simpler set of policy expressions that travel with the VM instance. At the same time, Cloud’s illusion of ubiquity and infinite scale means that we will not know where our data is stored, processed, or used.
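
A minimal sketch of what policy collapsed to the virtual machine boundary might look like, assuming a hypothetical tagging scheme rather than any real orchestration API: the policy is carried as part of the workload’s atomic unit and evaluated wherever the instance is placed.

```python
# Hypothetical sketch of policy expressed at the VM boundary so it can
# travel with the workload regardless of physical location or provider.
# The attribute names and enforcement hook are assumptions, not a real API.
from dataclasses import dataclass, field

@dataclass
class WorkloadPolicy:
    classification: str                    # e.g. "confidential"
    allowed_regions: set[str]              # where the instance may run
    encryption_required: bool = True
    ingress_allowed_from: set[str] = field(default_factory=set)

@dataclass
class WorkloadInstance:
    image_id: str
    policy: WorkloadPolicy                 # policy is part of the atomic unit

def may_place(instance: WorkloadInstance, region: str) -> bool:
    """Placement check an orchestrator could run before starting the VM."""
    return region in instance.policy.allowed_regions

vm = WorkloadInstance(
    image_id="web-frontend-v42",
    policy=WorkloadPolicy(classification="confidential",
                          allowed_regions={"us-east", "eu-west"}),
)
print(may_place(vm, "ap-south"))   # -> False: the policy follows the workload
```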

Combine mobility, encryption, and resources distributed across multiple providers with a lack of open standards and economic cost pressure, and even basic security capabilities seem daunting. Cloud simultaneously re-centralizes some resources while de-perimeterizing trust boundaries and distributing data. Understanding how the various layers map to traditional non-Cloud architecture is important, especially in relation to the Cloud deployment model used; there are significant trade-offs in integration, extensibility, cost, management, governance, compliance, and security.

Live by the Cloud, Die by the Cloud

Despite a tremendous amount of interest and momentum, Cloud is still very immature — pockets of innovation spread out across a long-tail of mostly-proprietary infrastructure-, platform-, and software-as-a-service offerings that do not provide for much in the way of workload portability or interoperability.

Cloud is not limited to lower cost “server” functionality. With the fevered adoption of netbooks, virtualization, low-cost storage services, fixed/mobile convergence, the proliferation of “social networks,” and applications built to take advantage of all of this, Cloud becomes a single pane of glass for our combined computing experience. N.B., these powers are not inherently ours alone; the same upside can be used for wrongdoing.

In an attempt to whet the reader’s appetite with regard to how Cloud dramatically impacts the risk modeling, assumptions, and security postures of today, I will provide a reasonably crisp set of examples, chosen to give pause:

Organizational and Operational Misalignment

The way in which most enterprise IT organizations are structured — in functional silos optimized to specialized, isolated functions — is diametrically opposed to the operational abstraction provided by Cloud.

The on-demand, elastic and self-service capabilities through simple interfaces and automated service layers abstract away core technology and support staff alike.

Few IT departments are prepared for what it means to apply controls, manage service levels, implement and manage security capabilities, and address compliance when the IT department is operationally irrelevant in that process. This leaves huge gaps in both identifying and managing risk, especially in outsourced models where ultimately the operational responsibility is “Cloudsourced” but the accountability is not.

The ability to apply specific security controls and measure compliance in mass-marketed Public Cloud services presents very real barriers to entry for enterprises who are heavily regulated, especially when balanced against the human capital (expertise) built-up by organizations.

Monoculture of Operating Systems, Virtualized Components, and Platforms

The standardization (de facto and de jure) on common interfaces to Cloud resources can expose uniform attack vectors that could affect one consumer, or, in the case of multi-tenant Public Cloud offerings, affect many. This is especially true in IaaS offerings where common sets of abstraction layers (such as hypervisors), prototyped OS/application bundles (usually in the form of virtual machines) and common sets of management functions are used — and used to extend and connect the walled garden internal assets of enterprises to the public or semi-public Cloud environments of service providers operating infrastructure in proxy.

While most attack vectors target applications and information at the Infostructure layer or abuse operating systems and assorted hardware at the Infrastructure layer, the Metastructure layer is beginning to show signs of stress also. Recent attacks against key Metastructure elements such as BGP and DNS indicate that aging protocols do not fare well.

Segmentation and Isolation In Multi-tenant environments

Multi-tenancy in the Cloud (whether in the Public or Private Cloud contexts) brings new challenges to trust, privacy, resiliency and reliability model assertions by providers. Many of these assertions are based upon the premise that we should trust — without reliably provable models or evidence — that in the absence of relevant illustration, Cloud is simply trustworthy in all of these dimensions, despite its immaturity. Vendors claim “airtight” information, process, application, and service, but short of service level agreements, there is little to demonstrate or substantiate the claims that software-enabled Cloud Computing — however skinny the codebase may be — is any more (or less) secure than what we have today, especially with commercialized and proprietary implementations.

In multi-tenant Cloud offerings, exposures can affect millions, and placing some types of information in the care of others without effective compensating controls may erode the ROI valuation offered by Cloud in the first place, and especially so as the trust boundaries used to demarcate and segregate workloads of different consumers are provided by the same monoculture operating system and virtualization platforms described above.

Privacy of Data/Metadata, Exfiltration, and Leakage

With increased adoption of Cloud for sensitive workloads, we should expect innovative attacks against Cloud assets, providers, operators, and end users, especially around the outsourcing and storage of confidential information. The upshot is that solutions focused on encryption, at rest and in motion, will have the side effect of more and more tools (legitimate or otherwise) losing visibility into file systems, application/process execution, information and network traffic. Key management becomes remarkably relevant once again — on a massive scale.
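
To illustrate why key management returns “on a massive scale,” here is a minimal envelope-encryption sketch using the Python cryptography package (a choice of my own; the article does not prescribe any library or scheme): every stored object gets its own data key, and all of those keys chain back to a master key that someone must generate, protect, rotate and escrow.

```python
# Minimal envelope-encryption sketch: each object gets its own data key,
# and the data key is wrapped by a master key that must be managed
# (rotated, escrowed, access-controlled) somewhere -- hence the key
# management problem at Cloud scale. Requires: pip install cryptography
from cryptography.fernet import Fernet

master_key = Fernet.generate_key()      # in practice: held by a KMS/HSM
master = Fernet(master_key)

def encrypt_object(plaintext: bytes) -> tuple[bytes, bytes]:
    """Return (wrapped_data_key, ciphertext) for one stored object."""
    data_key = Fernet.generate_key()
    ciphertext = Fernet(data_key).encrypt(plaintext)
    return master.encrypt(data_key), ciphertext

def decrypt_object(wrapped_key: bytes, ciphertext: bytes) -> bytes:
    data_key = master.decrypt(wrapped_key)
    return Fernet(data_key).decrypt(ciphertext)

wrapped, blob = encrypt_object(b"confidential workload data")
print(decrypt_object(wrapped, blob))
```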

Recent proof-of-concepts such as so-called side-channel attacks demonstrate how it is possible to determine where a specific virtual instance is likely to reside in a Public multi-tenant Cloud and allow an attacker to instantiate their own instance and cause it to be located such that it is co-resident with the target. This would potentially allow for sniffing and exfiltration of confidential data — or worse, potentially exploit vulnerabilities which would violate the sanctity of isolated workloads within the Cloud itself.

Further, given workload mobility — where the OS, applications and information are contained in an instance represented by a single atomic unit such as a virtual machine image — the potential for accidental or malicious leakage and exfiltration is real. Legal intercept, monitoring, forensics, and attack detection/incident response are heavily impacted, especially at the volume and levels of traffic envisioned by large Cloud providers, creating blind spots in ways we can’t fathom today.

Inability to Deploy Compensating or Detective Controls

The architecture of Cloud services — as abstract as they ought to be — means that in many cases the security of workloads up and down the stack is still dependent upon the underlying platform for enforcement. This is problematic inasmuch as the constructs representing compute, networking and storage resources — and security — are in many cases themselves virtualized.

Further, we are faced with stealthier and more evasive malware that is potentially able to evade detection while co-opting (or rootkitting) not only software and hypervisors, but also exploiting vulnerabilities in firmware and hardware such as CPU chipsets.

These sorts of attack vectors are extremely difficult to detect, let alone defend against. Referring back to the monoculture issue above, a so-called blue-pilled hypervisor, uniform across tens of thousands of compute nodes providing multi-tenant Cloud services, could be catastrophic. It is simply not yet feasible to provide parity in security capabilities between physical and Cloud environments; the maturity of solutions just isn’t there.

These are heady issues and should not be taken lightly when considering what workloads and services are candidates for various Cloud offerings.

What’s old is news again…

Perhaps it is worth adapting familiar attack taxonomies to Cloud.

Botnets that previously required massive malware-originated endpoint compromise in order to function can now be activated in standardized fashion, in apparently legitimate form, and in large numbers by criminals who wish to harness the organized capabilities of Bots without the effort. Simply use stolen credit cards to establish fake accounts with a provider’s Infrastructure-as-a-Service, and hundreds or thousands of distributed images can be activated in a very short timeframe.

Existing security threats such as DoS/DDoS attacks, SPAM and phishing will continue to be a prime set of tools for the criminal ecosystem, which will leverage the distributed and well-connected Cloud for them as well as for targeted attacks against telecommuters using both corporate and consumerized versions of Cloud services.

Consider a new take on an old problem based on ecommerce: click-fraud. I frame this new embodiment as something called EDoS — economic denial of sustainability. Distributed Denial of Service (DDoS) attacks are blunt force trauma. The goal, regardless of motive, is to overwhelm infrastructure and remove a networked target from service by employing a distributed number of attackers. An example of DDoS is where a traditional botnet is activated to swarm/overwhelm an Internet-connected website using an asynchronous attack, making the site unavailable due to an exhaustion of resources (compute, network, or storage).

EDoS attacks, however, are death by a thousand cuts. EDoS can also utilize distributed attack sources as well as single entities, but it works by making legitimate web requests at volumes that may appear to be “normal” yet are made to drive compute, network, and storage utility billings in a cloud model abnormally high.

An example of EDoS as a variant of click fraud is where a botnet is activated to visit a website whose income results from ecommerce purchases. The requests are all legitimate but purchases are never made. The vendor has to pay the cloud provider for the increased elastic use of resources, but no revenue is ever recognized to offset that cost.

We have anti-DDoS capabilities today with tools that are quite mature. DDoS is generally easy to spot given huge increases in traffic. EDoS attacks are not necessarily easy to detect, because the instrumentation and business logic needed to correlate “requests” with “successful transactions” are not present in most applications or stacks of applications and infrastructure. In the example above, increased requests may look like normal activity. Many customers do not invest in this sort of integration, and Cloud providers generally will not have visibility into applications that they do not own.
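
A toy sketch of the missing instrumentation described above: correlating billed requests against successful transactions per window and flagging windows whose conversion rate collapses. The thresholds, field names and baseline are assumptions for illustration only.

```python
# Toy illustration of the request-vs-transaction correlation discussed
# above: EDoS traffic looks "normal" per request, but the ratio of billed
# requests to completed purchases collapses. Thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class WindowStats:
    requests: int          # billable requests served in the window
    transactions: int      # completed purchases in the same window

BASELINE_CONVERSION = 0.02      # historical purchases-per-request rate
MIN_REQUESTS = 10_000           # ignore windows too small to judge

def looks_like_edos(window: WindowStats) -> bool:
    """Flag a window whose conversion rate falls far below baseline."""
    if window.requests < MIN_REQUESTS:
        return False
    conversion = window.transactions / window.requests
    return conversion < BASELINE_CONVERSION * 0.1

print(looks_like_edos(WindowStats(requests=250_000, transactions=12)))     # True
print(looks_like_edos(WindowStats(requests=250_000, transactions=6_000)))  # False
```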

Ultimately the most serious Cloud concern is presented by way of the “stacked turtles” analogy: layer upon layer of complex interdependencies at the Infrastructure, Metastructure and Infostructure layers, predicated upon fragile trust models framed upon nothing more than politeness. Without re-engineering these models, strengthening the notion of (id)entity management, authentication and implementing secure protocols, we run the risk of Cloud simply obfuscating the fragility of the supporting layers until something catastrophic occurs.

Combined with where and how our data is created, processed, accessed, stored, and backed up — and by whom and using whose infrastructure — Cloud yields significant concerns related to on-going security, privacy, compliance and resiliency.

Moving Forward – Critical Areas of Focus

The Cloud Security Alliance (http://www.cloudsecurityalliance.org) issued its “Guidance for Critical Areas of Focus” related to Cloud Computing Security and defined fifteen domains of concern:

  • Cloud Architecture
  • Information lifecycle management
  • Governance and Enterprise Risk Management
  • Compliance & Audit
  • General Legal
  • eDiscovery
  • Encryption and Key Management
  • Identity and Access Management
  • Storage
  • Virtualization
  • Application Security
  • Portability & Interoperability
  • Data Center Operations Management
  • Incident Response, Notification, Remediation
  • “Traditional” Security impact (business continuity, disaster recovery, physical security)

The sheer complexity of the interdependencies between the Infrastructure, Metastructure and Infostructure layers makes it almost impossible to recommend focusing on only a select subset of these items since all are relevant and important.

Nevertheless, those items in boldface most deserve initial focus just to retain existing levels of security, resilience, and compliance while information and applications are moved from the walled gardens of the private enterprise into the care of others.

Attempting to retain existing levels of security will consume the majority of Cloud transition effort. Until we see an expansion of available solutions to bridge the gaps between “traditional” IT and dynamic infrastructure 2.0 capabilities, companies can only focus on the traditional security elements of sound design, encryption, identity, storage, virtualization and application security. Similarly, until a standardized set of methods allows well-defined interaction between the Infrastructure, Metastructure and Infostructure layers, companies will be at the mercy of industry for instrumenting, much less auditing, Cloud elements — yet, as was already stated, the very sameness of standardization creates shared risk.

As with any change of this magnitude, the potential of Cloud lies between its trade-offs. In security terms, this “big switch” surrenders visibility and control so as to gain agility and efficiency. The question is, how do we achieve a net positive result?

Well-established enterprise security teams that optimize their security spend on managing risk, rather than purely reacting to threats, should not be surprised by Cloud. To these organizations, adapting their security programs to the challenges and opportunities provided by Cloud is business as usual. For organizations unprepared for Cloud, whatever maturity of security programs they can buy will quickly be outmoded.

Summary

The benefits of Cloud are many. The challenges are substantial. How we deal with these challenges and their organizational, operational, architectural, and technical impacts will fundamentally change the way in which we think about assessing and assuring the security of our assets.

Attribution is the new black…what’s in a name, anyway?

February 26th, 2015 No comments

Attribution is hard.  It’s as much art as it is science.  It’s also very misunderstood.

So, as part of my public service initiative, I created and then unintentionally crowdsourced the most definitive collection of reality-based constructs reflecting the current state of this term of art.

Here you go:

  • Faptribution => The process of trying to reach PR climax on naming an adversary before anyone else does
  • Pattribution => The art of self-congratulatory back patting that goes along with attributing an actor(s) to a specific campaign or breach.
  • Flacktribution => The process of dedicating your next press release to the concept that, had the victim only used $our_software, none of this would have happened. (Per Nick Selby)
  • Maptribution => when you really just have no fucking idea and play “pin the tail on the donkey” with a world map. (Per Sam Johnston)
  • Craptribution => The collective negative social media and PR feedback associated with Snaptribution (Per Gunter Ollmann)
  • Masturbution => When you feel awesome about it, but nobody else gives a flying f$ck (Per Paul Stamp, but ‘betterized’ by me)
  • Snaptribution => naming the threat actor so quickly you can’t possibly be right but you are first. Also known as premature faptribution. (Chris Wysopal)

May you go forth with the confidence to assess the quality, scope and impact of any attribution using these more specific definitions.

/Hoff

Categories: Uncategorized Tags:

The Active Response Continuum & The Right To Cyber Self Defense…

February 24th, 2015 6 comments

At the 2015 Kaspersky Security Analyst Summit, I kicked off the event with a keynote titled: “Active Defense and the A.R.T. of W.A.R.”

The A.R.T. of W.A.R. stands for “Active Response Techniques of Weaponization and Resilience.”

You can read about some of what I discussed here.  I will post the presentation shortly and Kaspersky will release the video also.  The video of my talk is here (I am walking out, hoodie up, like I’m in a fight per the show thematic):

While thematically I used the evolution of threat actors, defensive security practices, operations and technology against the backdrop of the evolution of modern mixed martial arts (the theme of the conference), the main point was really the following:

If we now face threat actors who have access to the TTPs of nation states but are not nation states themselves, and they are attacking enterprises that do not/cannot utilize these TTPs, and our only current “best practices” references against said actors are framed within the context of “cyberwar” and able to be acted upon only by representatives of a nation state, then it will be impossible for anyone outside of that circle to actively defend our interests, intellectual property and business with an appropriate and contextualized framing of the use of force.

It is extremely easy to take what I just mentioned above and start picking it apart without the very context to which I referred.

The notion of “Active Defense” is shrouded in interpretive nuance — and usually immediately escalates to the most extreme use case of “hacking back” or “counter-hacking.”  As I laid out in the talk — leaning heavily on the work of Dave Dittrich in this area — there are levels of intrusion as well as levels of response, and the Rubik’s Cube of choices allows for ways of responding that include more than filing a breach report and re-imaging endpoints.

While the notion of “active” and “passive” are loaded terms without context, I think it’s important that we — as the technical community — be allowed to specifically map those combinations of intrusion and response and propose methodologies against which air cover of legal frameworks and sovereignty can be laid.  Not having this conversation is unacceptable.

Likewise unacceptable is the disingenuous representation that organizations (in the private sector) who specialize in one of the most important areas of discussion here — attribution — magically find all their information by accident on Pastebin.  Intelligence — threat, signals, human, etc. — is a very specialized and delicate practice, but as it stands today, there are 4-5 companies that operate in this space with ties to the public sector/DoD/IC and are locked in their own “arms race” to be the first to attribute a name, logo and theme song to every attack.

It’s fair to suggest they operate in spaces along the continuum that others do not.  But these are things we really don’t talk about because they exist in the grey fringe.

Much of that information, and many of those sources, are proprietary, and while we see executive orders and governmental offices being spun up to exchange “threat intelligence,” the reality is that even if we nail attribution, there’s nothing most of us can do about it…and I mean that technologically and operationally.

We have documents such as the Tallinn Manual and the Army Cyber Command Field Manual for Electromagnetic Warfare that govern these discussions in their realms — yet in the Enterprise space, we have only things like the CFAA.

This conversation needs to move forward.  It’s difficult, it’s hairy and it’s going to take a concerted effort…but it needs a light shone upon it.

/Hoff

Categories: Active Defense Tags:

Incomplete Thought: The Time Is Now For OCP-like White Box Security Appliances

January 25th, 2015 1 comment

Over the last couple of years, we’ve seen some transformative innovation erupt in networking.

In no particular order OR completeness:

  • CLOS architectures and protocols are evolving
  • the debate over Ethernet and IP fabrics is driving toward the outcome that we need both
  • x86 is finding a home in networking at increasing levels of throughput thanks to things like DPDK and optimized IP stacks
  • merchant silicon, FPGA and ASICs are seeing increased investment as the speeds/feeds move from 10 > 40 > 100 Gb/s per NIC
  • programmable abstraction and the operational models to support it have been proven at scale
  • virtualization and virtualized services are now commonplace architectural primitives in discussions for NG networking
  • Open Source is huge in both orchestration as well as service delivery
  • Entirely new network operating systems like that of Cumulus have emerged to challenge incumbents
  • SDN, NFV and overlays are starting to see production at-scale adoption beyond PoCs
  • automation is starting to take root for everything from provisioning to orchestration to dynamic service insertion and traffic steering

Stir in the profound scale-out requirements of mega-scale web/cloud providers and the creation and adoption of Open Compute Platform compliant network, storage and compute platforms, and there’s a real revolution going on:

The Open Compute Networking Project is creating a set of technologies that are disaggregated and fully open, allowing for rapid innovation in the network space. We aim to facilitate the development of network hardware and software – together with trusted project validation and testing – in a truly open and collaborative community environment.

We’re bringing to networking the guiding principles that OCP has brought to servers & storage, so that we can give end users the ability to forgo traditional closed and proprietary network switches – in favor of a fully open network technology stack. Our initial goal is to develop a top-of-rack (leaf) switch, while future plans target spine switches and other hardware and software solutions in the space.

Now, interestingly, while there are fundamental shifts occurring in the approach to and operations of security — the majority of investment in which is still network-centric — as an industry, we are still used to buying our security solutions as closed appliances or chassis form-factors from vendors with integrated hardware and software.

While vendors offer virtualized versions of their hardware solutions as virtual appliances that can also run on bare metal, these have generally not enjoyed widespread adoption because of the operationally-siloed challenges involved in distributing security as a service layer across dedicated appliances or across compute fabrics as an overlay.

But let’s just agree that outside of security, software is eating the world…and that at some point, the voracious appetite of developers and consumers will need to be sated as it relates to security.

Much of the value (up to certain watermark levels of performance and latency) of security solutions is delivered via software which, when coupled with readily-available hardware platforms such as x86 with programmable merchant silicon, can provide some very interesting and exciting solutions at a much lower cost.

So why then, given what we’ve seen from networking vendors who have released OCP-compliant white-box switching solutions that allow end-users to run whatever software/NOS they desire, have we not seen the same for security?

I think it would be cool to see an OCP white box spec for security and let the security industry innovate on the software to power it.

You?

/Hoff

 

 

 

J-Law Nudie Pics, Jeremiah, Privacy and Dropbox – An Epic FAIL of Mutual Distraction

September 2nd, 2014 No comments

From the “It can happen to anyone” department…

A couple of days ago, prior to the announcement that hundreds of celebrities’ nudie shots were liberated from their owners and posted to the Web, I customized some Growl notifications on my Mac to provide some additional realtime auditing of some apps I was interested in.  One of the applications I enabled was Dropbox synch messaging so I could monitor some sharing activity.

Ordinarily, these two events would not be related except I was also tracking down a local disk utilization issue that was vexing me on a day-to-day basis, as my local SSD storage would ephemerally increase/decrease by GIGABYTES and I couldn’t figure out why.

So this evening, quite literally as I was reading RSnake’s interesting blog post titled “So your nude selfies were just hacked,” a Growl notification popped up informing me that several new Dropbox files were completing synchronization.

Puzzled because I wasn’t aware of any public shares and/or remote folders I was synching, I checked the Dropbox synch status and saw a number of files that were unfamiliar — and yet the names of the files certainly piqued my interest…they appeared to belong to a very good friend of mine given their titles. o_O

I checked the folder these files were resting in — gigabytes of them — and realized it was a shared folder that I had set up 3 years ago to allow a friend of mine to share a video from one of our infamous Jiu Jitsu smackdown sessions from the RSA Security Conference.  I hadn’t bothered to unshare said folder for years, especially since my cloud storage quota kept increasing while my local storage didn’t.

As I put 1 and 1 together, I realized that for at least a couple of years, Jeremiah (Grossman) had been using this dropbox share folder titled “Dropit” as a repository for file storage, thinking it was HIS!

This is why gigs of storage were appearing/disappearing from my local storage when he added/removed files, but I didn’t see the synch messages and thus didn’t see the filenames.

I jumped on Twitter and engaged Jer in a DM session (see below) where I was laughing so hard I was crying…he eventually called me and I walked him through what happened.

Once we came to terms with what had happened (and with how much fun I could have with this), Jer ultimately copied the files off the share and I unshared the Dropbox folder.

We agreed it was important to share this event because like previous issues each of us have had, we’re all about honest disclosure so we (and others) can learn from our mistakes.

The lessons learned?

  1. Dropbox doesn’t make it clear whether a folder that’s shared and mounted is yours or someone else’s — they look the same.
  2. Ensure you know where your data is synching to!  Services like Dropbox, iCloud, Google Drive, SkyDrive, etc. make it VERY easy to forget where things are actually stored!
  3. Check your logs and/or enable things like Growl notifications (on the Mac) to ensure you can see when things are happening
  4. Unshare things when you’re done.  Audit these services regularly.
  5. Even seasoned security pros can make basic security/privacy mistakes; I shared a folder and didn’t audit it and Jer put stuff in a folder he thought was his.  It wasn’t.
  6. Never store nudie pics on a folder you don’t encrypt — and as far as I can tell, Jer didn’t…but I DIDN’T CLICK…HONEST!

Jer and I laughed our asses off, but imagine if this had been confidential information or embarrassing pictures and I wasn’t his friend.

If you use Dropbox or similar services, please pay attention.

I don’t want to see your junk.

/Hoff

P.S. Thanks for being a good sport, Jer.

P.P.S. I about died laughing sending these DMs:

Jer-Twitter

 

How To Be a Cloud Mogul(l) – Our 2014 RSA “Dueling Banjos/Cloud/DevOps” Talk

March 27th, 2014 No comments

Rich Mogull (Securosis) and I have given a standing set of talks over the last 5-6 years at the RSA Security Conference that focus on innovation, disruption and ultimately making security practitioners more relevant in the face of all this churn.

We’ve always offered practical peeks of what’s coming and what folks can do to prepare.

This year, we (I should say mostly Rich) built a bunch of Ruby code that leveraged stuff running in Amazon Web Services (and using other Cloud services) to show how security folks with little “coding” capabilities could build and deploy this themselves.

Specifically, this talk was about SecDevOps — using principles that allow for automated and elastic cloud services to do interesting security things that can be leveraged in public and private clouds using Chef and other assorted mechanisms.

I also built a bunch of stuff using the RackSpace Private Cloud stack and Chef, but didn’t have the wherewithal or time to demonstrate it — and doing live demos over a tethered iPad connection to AWS meant that if it sucked, it was Rich’s fault.

You can find the presentation here (it clearly doesn’t include the live demos):

Dueling Banjos – Cloud vs. Enterprise Security: Using Automation and (Sec)DevOps NOW

/Hoff

 

On the Topic Of ‘Stopping’ DDoS.

March 10th, 2014 11 comments

The insufferable fatigue of imprecise language with respect to “stopping” DDoS attacks caused me to tweet something that my pal @CSOAndy suggested was just as pedantic and wrong as that against which I railed:

The long and short of Andy’s displeasure with my comment was:

to which I responded:

…and then…

My point, ultimately, is that in the context of DDoS mitigation such as offload scrubbing services, unless one prevents the attacker(s) from generating traffic, the attack is not “stopped.”  If a scrubbing service redirects traffic and absorbs it, and the attacker continues to send packets, the “attack” continues because the attacker has not been stopped — he/she/they have been redirected.

Now, has the OUTCOME changed?  Absolutely.  Has the intended victim possibly been spared the resultant denial of service?  Quite possibly.  Could there even now possibly be extra “space in the pipe?” Uh huh.

Has the attack “stopped” or ceased?  Nope.  Not until the spice stops flowing.

Nuance?  Pedantry?  Sure.

Wrong?  I don’t think so.

/Hoff

Categories: Uncategorized Tags:

The Easiest $20 I ever saved…

March 2nd, 2014 5 comments

During the 2014 RSA Conference, I participated on a repeating panel with Bret Hartman, CTO of Cisco’s Security Business Unit, and Martin Brown from BT.  The first day was moderated by Jon Oltsik, while on the second day the three of us were left to, um, self-moderate.

It occurred to me that during our very lively (and packed) second day wherein the audience was extremely interactive,  I should boost the challenge I made to the audience on day one by offering a little monetary encouragement in answering a question.

Since the panel was titled “Network Security Smackdown: Which Technologies Will Survive?,” I offered a $20 kicker to anyone who could come up with a legitimate counter example — give me one “network security” technology that has actually gone away in the last 20 years.

<chirp chirp>

Despite Bret trying to pocket the money and many folks trying valiantly to answer, I still have my twenty bucks.

I’ll leave the conclusion as an exercise for the reader.

/Hoff

Categories: General Rants & Raves Tags:

NGFW = No Good For Workloads…

February 13th, 2014 3 comments

So-called Next Generation Firewalls (NGFW) are those that extend “traditional port firewalls” with the added context of policy with application visibility and control to include user identity while enforcing security, compliance and productivity decisions to flows from internal users to the Internet.

NGFW, as defined, is a campus and branch solution. Campus and Branch NGFW solves the “inside-out” problem — applying policy from a number of known/identified users on the “inside” to a potentially infinite number of applications and services “outside” the firewall, generally connected to the Internet. They function generally as forward proxies with various network insertion strategies.

Campus and Branch NGFW is NOT a Data Center NGFW solution.

Data Center NGFW is the inverse: it solves the “outside-in” problem, applying policy from a potentially infinite number of unknown (or potentially unknown) users/clients on the “outside” to a nominally diminutive number of well-known applications and services “inside” the firewall that are generally exposed to the Internet.  Data Center NGFWs function generally as reverse proxies with various network insertion strategies.
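
A rough sketch of the two policy shapes being contrasted, using illustrative field names rather than any vendor’s configuration schema: the C&B rule keys on a known internal user group against an enormous universe of external applications, while the DC rule keys on arbitrary external clients against a small set of well-known exposed services.

```python
# Rough sketch of the two policy shapes contrasted above; the rule fields
# are illustrative, not any vendor's configuration schema.
from dataclasses import dataclass

@dataclass
class CampusBranchRule:          # "inside-out": known users -> unknown apps
    user_group: str              # identified internal users
    application: str             # one of tens of thousands of external apps
    action: str                  # allow / deny / limit

@dataclass
class DataCenterRule:            # "outside-in": unknown clients -> known apps
    client_scope: str            # e.g. "any Internet source"
    exposed_service: str         # one of a small set of well-known apps
    protections: tuple[str, ...] # e.g. WAF profile, DoS limits

cb = CampusBranchRule("engineering", "social-media-app-1234", "deny")
dc = DataCenterRule("any", "customer-portal", ("waf:strict", "ddos:rate-limit"))
print(cb, dc, sep="\n")
```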

Campus and Branch NGFWs need to provide application visibility and control across potentially tens of thousands of applications, many of which are evasive.

Data Center NGFWs need to provide application visibility and control across a significantly fewer number of well-known managed applications, many of which are bespoke.

There are wholesale differences in performance, scale and complexity between “inside-out” and “outside-in” firewalls.  They solve different problems.

The things that make a NGFW supposedly “special” and different from a “traditional port firewall” in a Campus & Branch environment are largely irrelevant in the Data Center.  Speaking of which, you’d find it difficult to find solutions today that are simply “traditional port firewalls”; the notion that firewalls integrated with IPS, UTM, ALGs, proxies, integrated user authentication, application identification/granular control (AVC), etc., are somehow incapable of providing the same outcome is now largely a marketing distinction.

While both sets of NGFW solutions share a valid deployment scenario at the “edge” or perimeter of a network (C&B or DC), a further differentiation in DC NGFW is the notion of deployment in the so-called “core” of a network.  The requirements in this scenario mean that comparing the two deployment scenarios is comparing apples and oranges.

Firstly, the notion of a “core” is quickly becoming an anachronism from the perspective of architectural references, especially given the advent of collapsed network tiers and fabrics as well as the impact of virtualization, cloud and network virtualization (nee SDN) models.  Shunting a firewall into these models is often difficult, no matter how many interfaces it has.  Flows are also asynchronous and oftentimes stateless.

Traditional Data Center segmentation strategies are becoming a blended mix of physical isolation (usually for compliance and/or peace of mind o_O) with a virtualized overlay provided in the hypervisor and/or virtual appliances.  Traffic patterns have also shifted: machine-to-machine flows in the east-west direction via intra-enclave “pods” are now far more common.  Dumping all flows through one firewall (or a cluster of them) at the “core” does what, exactly — besides adding latency and often times obscured or unnecessary inspection?

Add to this the complexity of certain verticals in the DC where extreme low-latency “firewalls” are needed with requirements at 5 microseconds or less.  The sorts of things people care about enforcing from a policy perspective aren’t exactly “next generation.”  Or, then again, how about DC firewalls that work at the mobile service provider eNodeB, mobile packet core or Gi with specific protocol requirements not generally found in the “Enterprise?”

In these scenarios, claims that a Campus & Branch NGFW is tuned to defend against “outside-in” application level attacks against workloads hosted in a Data Center is specious at best.  Slapping a bunch of those Campus & Branch firewalls together in a chassis and calling it a Data Center NGFW invokes ROFLcoptr.

Show me how a forward-proxy optimized C&B NGFW deals with a DDoS attack (assuming the pipe isn’t flooded in the first place.)  Show me how a forward-proxy optimized C&B NGFW deals with application level attacks manipulating business logic and webapp attack vectors across known-good or unknown inputs.

They don’t.  So don’t believe the marketing.

I haven’t even mentioned the operational model and expertise deltas needed to manage the two.  Or integration between physical and virtual zoning, or on/off-box automation and visibility to orchestration systems such that policies are more dynamic and “virtualization aware” in nature…

In my opinion, NGFW is being redefined by the addition of functionality that again differentiates C&B from DC based on use case.  Here are JUST two of them:

  • C&B NGFW is becoming what I call C&B NGFW+, specifically the addition of advanced anti-malware (AAMW) capabilities at the edge to detect and prevent infection as part of the “inside-out” use case.  This includes adjacent solutions that include other components and delivery models.
  • DC NGFW is becoming DC NGFW+, specifically the addition of (web) application security capabilities and DoS/DDoS capabilities to prevent (generally) externally-originated attacks against internally-hosted (web) applications.  This, too, requires the collaboration of other solutions specifically designed to enable security in this use case.

There are hybrid models that often require BOTH solutions to adequately protect against client infection, distribution and exploitation in the C&B in order to prevent attacks against DC assets connected over the WAN or a VPN.

Pretending both use cases are the same is farcical.

It’s unlikely you’ll see a shift in analyst “Enchanted Dodecahedrons” relative to the functionality/definition of NGFW because…strangely…people aren’t generally buying Campus and Branch NGFW for their datacenters; they’re trying to solve different problems.  At different levels of scale and performance.

A Campus and Branch NGFW is “No Good For Workloads” in the Data Center.  

/Hoff

Maslow’s Hierarchy Of Security Product Needs & Vendor Selection…

November 21st, 2013 1 comment

Interpretation is left as an exercise for the reader ;)  This went a tad bacterial (viral is too strong of a description) on Twitter:

maslow-v2_9

 

Categories: General Rants & Raves Tags:

My Information Security Magazine Cover Story: “Virtualization security dynamics get old, changes ahead”

November 4th, 2013 2 comments

This month’s Search Security (nee Information Security Magazine) cover story was penned by none other than yours truly and titled “Virtualization security dynamics get old, changes ahead”

I hope you enjoy the story; it’s a retrospective regarding the beginnings of security in the virtual space, where we are now, and where we’re headed.

I tried very hard to make this a vendor-neutral look at the state of the union of virtual security.

I hope that it’s useful.

You can find the story here.

/Hoff
