Archive for the ‘Compliance’ Category

Hey, Uh, Someone Just Powered Off Our Firewall Virtual Appliance…

June 11th, 2009 11 comments

I’ve covered this before in more complex terms, but I thought I’d reintroduce the topic due to a very relevant discussion I just had recently (*cough cough*).

So here’s an interesting scenario in virtualized and/or Cloud environments that make use of virtual appliances to provide security capabilities*:

Since virtual appliances (VAs) are just virtual machines (VMs), what happens when a SysAdmin, accidentally or maliciously, spins down or migrates the one that happens to be your shiny new firewall protecting the production VMs behind it?  Brings new meaning to the phrase “failing closed.”

Without getting into the vagaries of vendor-specific mobility-enabled/enabling technologies, one of the issues with VMs/VAs is that there’s no good way of designating one as “more important” or functionally differentiated (say, “security” or “critical application”) in a manner that would ensure a higher priority for service availability (read: don’t spin this down unless…) or provide a topological dependency hierarchy in virtualized network constructs.
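
To make the gap concrete, here’s a minimal sketch of the kind of guard rail that’s missing. The tags, names and functions are hypothetical and don’t map to any real hypervisor or cloud API, which is rather the point:

```python
# Hypothetical sketch only: none of this maps to a real hypervisor or cloud
# API. It illustrates the missing concept of functionally differentiating
# VMs ("security", "critical-app") so that lifecycle operations against
# them require explicit, elevated intent instead of a casual right-click.

CRITICAL_ROLES = {"security", "critical-app"}


class GuardError(Exception):
    """Raised when a lifecycle operation targets a critical VM without override."""


def power_off(vm: dict, requested_by: str, force: bool = False) -> None:
    """Refuse casual power-off of VMs tagged with a critical role."""
    critical = set(vm.get("tags", [])) & CRITICAL_ROLES
    if critical and not force:
        raise GuardError(
            f"{vm['name']} is tagged {sorted(critical)}; power-off requested "
            f"by {requested_by} requires an explicit override (force=True)."
        )
    print(f"Powering off {vm['name']} (requested by {requested_by})")


# The firewall VA is protected; the ordinary web server is not.
firewall = {"name": "edge-fw-01", "tags": ["security"]}
web_srv = {"name": "web-07", "tags": []}

power_off(web_srv, "sysadmin")            # proceeds
try:
    power_off(firewall, "sysadmin")       # blocked without force=True
except GuardError as err:
    print("DENIED:", err)
```

Whether such a guard would live in the hypervisor, the orchestration layer or an external policy engine is exactly the kind of design question the platforms haven’t answered yet.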

In physical environments, system administrators (the server folks) are segregated from access to network and security appliances; that isn’t the case in virtual environments. In Cloud environments (especially public, multi-tenant ones), where we are often reliant solely upon virtual security capabilities because we have no option for physical alternatives, this is an interesting corner case.

We’ve talked a lot about visibility, audit and policy management in virtual environments and this is a poignant example.

/Hoff

*Despite the silly notion the Google dudes tried to suggest, I don’t equate virtualization with Cloud as one and the same.

Cloud Catastrophes (Cloudtastrophes?) Caused by Clueless Caretakers?

March 22nd, 2009 4 comments
You'll ask "How?" every time...

Enter the dawn of the Cloudtastrophe…

I read a story today penned by Maureen O’Gara titled “Carbonite Loses Cloud-Based Data, Sues Storage Vendor.”

I thought this was going to be another story regarding a data breach (loss) of customer data by a Cloud Computing service vendor.

What I found, however, was another hyperbolic illustration of how vendors’ messaging of the Cloud has set expectations for service and reliability that are out of alignment with reality when you combine a lack of sound enterprise architecture, proper contingency planning, solid engineering and common sense with the economic lubricant of the Cloud.

Stir in a little journalistic sensationalism, and you’ve got CloudWow!

Carbonite, the online backup vendor, says it lost data belonging to over 7,500 customers in a number of separate incidents in a suit filed in Massachusetts charging Promise Technology Inc with supplying it with $3 million worth of defective storage, according to a story in Saturday’s Boston Globe.

The catastrophe is the latest in a series of cloud failures.

The widgetry was supposed to detect disk failures and transfer the data to a working drive. It allegedly didn’t.

The story says Promise couldn’t fix the errors and “Carbonite’s senior engineers, senior management and senior operations personnel…spent enormous amounts of time dealing with the problems.”

Carbonite claims the data losses caused “serious damage” to its business and reputation for reliability. It’s demanding unspecified damages. Promise told the Globe there was “no merit to the allegations.”

Carbonite, which sells largely to consumers and small businesses and competes with EMC’s Mozy, tells its customers: “never worry about losing your files again.”

The abstraction of infrastructure and democratization of applications and data that Cloud Computing services can bring does not mean that all services are created equal.  It does not make our services or information more secure (or less for that matter.)  Just because a vendor brands themselves as a “Cloud” provider does not mean that “their” infrastructure is any more implicitly reliable, stable or resilient than traditional infrastructure or that proper enterprise architecture as it relates to people, process and technology is in place.  How the infrastructure is built and maintained is just as important as ever.

If you take away the notion of Carbonite being a “Cloud” vendor, would this story read any differently?

We’ve seen a few failures recently of Cloud-based services, most of them sensationally lumped into the Cloud pile: Google, Microsoft, and even Amazon; most of the stories about them relate the impending doom of the Cloud…

Want another example of how Cloud branding, the Web 2.0 experience and blind faith make for another FUDtastic “catastrophe in the cloud?”  How about the well-known service Ma.gnolia?

There was a meltdown at bookmark sharing website Ma.gnolia Friday morning. The service lost both its primary store of user data, as well as its backup. The site has been taken offline while the team tries to reconstruct its databases, though some users may never see their stored bookmarks again.

The failure appears to be catastrophic. The company can’t say to what extent it will be able to restore any of its users’ data. It also says the data failure was so extensive, repairing the loss will take “days, not hours.”

So we find that a one-man shop was offering a service that people liked, and it died a horrible death.  Because it was branded as a Cloud offering, it “seemed” bigger than it was.  This is where perception definitely was not reality, and now we’re left with a fluffy bad taste in our mouths.

Again, what this illustrates is that just because a service is “Cloud-based” does not imply it’s any more reliable or resilient than one that is not. It’s just as important that, as enterprises look to move to the Cloud, they perform as much due diligence on their providers as makes sense. We’ll see a weeding out of the ankle-biters in Cloud Computing.

Nobody ever gets fired for buying IBM…

What we’ll also see is that even though we’re not supposed to care what our Cloud providers’ infrastructure is powered by and how, we absolutely will in the long term and the vendors know it.

This is where people start to freak out about how standards and consolidation will kill innovation in the space, but it’s also where the realities of running a business come crashing down on early adopters.

Large enterprises will move to providers who can demonstrate that their services are solid, by way of co-branding with the reputations of the infrastructure providers coupled with compliance to “standards.”

The big players like IBM see this as an opportunity and as early as last year introduced a Cloud certification program:

IBM to Validate Resiliency of Cloud Computing Infrastructures

Will Consult With Businesses of All Sizes to Ensure Resiliency, Availability, Security; Drive Adoption of New Technology

ARMONK, NY – 24 Nov 2008: In a move that could spur the rise of the nascent computing model known as “cloud,” IBM (NYSE: IBM) today said it would introduce a program to validate the resiliency of any company delivering applications or services to clients in the cloud environment. As a result, customers can quickly and easily identify trustworthy providers that have passed a rigorous evaluation, enabling them to more quickly and confidently reap the business benefits of cloud services.

Cloud computing is a model for network-delivered services, in which the user sees only the service and does not view the implementation or infrastructure required for its delivery. The success to date of cloud services like storage, data protection and enterprise applications, has created a large influx of new providers. However, unpredictable performance and some high-profile downtime and recovery events with newer cloud services have created a challenge for customers evaluating the move to cloud.

IBM’s new “Resilient Cloud Validation” program will allow businesses who collaborate with IBM on a rigorous, consistent and proven program of benchmarking and design validation to use the IBM logo: “Resilient Cloud” when marketing their services.

Remember the “Cisco Powered Network” program?  How about a “Cisco Powered Cloud?”  See how GoGrid advertises that its load balancers are F5s?

In the long term, much like the Capital One credit card commercials that challenge your credit card company by asking “What’s in your wallet?”, you can expect to start asking the same thing about your Cloud providers’ offerings.

/Hoff

 

How To Be PCI Compliant in the Cloud…

March 15th, 2009 9 comments

I kicked off a bit of a dust storm some months ago when I wrote a tongue-in-cheek post titled "Please Help Me: I Need a QSA to Assess PCI/DSS Compliance In the Cloud."  It may have been a little contrived, but it asked some really important questions and started some really good conversations on my blog and elsewhere.

At SourceBoston I sat in on Mike Dahn's presentation titled "Cloud Compliance and Privacy," in which he did an excellent job outlining the many issues surrounding PCI and compliance and their relevance to Cloud Computing.

Shortly thereafter, I was speaking to Geva Perry and James Urquhart on their "Overcast" podcast and the topic of PCI and Cloud came up. 

Geva asked me whether, after my rant on PCI and Cloud, I was saying that one could never be PCI compliant in the Cloud.  I basically answered that one could be PCI compliant in the Cloud, depending upon the services used/offered by the provider and what sort of data you trafficked in.

Specifically, Geva made reference to the latest announcement by Rackspace regarding their Mosso Cloud offering and PCI compliance, in which they tout that by using Mosso, a customer can be "PCI Compliant."  Since I hadn't seen the specifics of the offering, I deferred my commentary, but here's what I found:

Cloud Sites, Mosso|The Rackspace Cloud’s Flagship offering, is officially the very first cloud hosting solution to enable an Internet merchant to pass PCI Compliance scans for both McAfee’s PCI scans and McAfee Secure Site scans. 

This achievement occurred just after Computer World published an article where some CIO’s shared their concern that Cloud Computing is still limited to “things that don’t require full levels of security.”  This landmark breakthrough may be the beginning of an answer to those fears, as Mosso leads Cloud Hosting towards a solid future of trust and reliability.

Mosso's blog featured an example of a customer — The Spreadsheet Store — who allegedly attained PCI compliance by using Mosso's offering. Pay very close attention to the bits below:

“We are making the Cloud business-ready.  Online merchants, like The Spreadsheet Store can now benefit from the scalability of the Cloud without compromising the security of online transactions,” says Emil Sayegh, General Manager of Mosso|The Rackspace Cloud.  “We are thrilled to have worked with The Spreadsheet Store to prepare the Cloud for their online transactions.”

The Spreadsheet Store set up their site using aspdotnetstorefront, “Which is, in our opinion, the best shopping cart solution on the market today,” says Murphy.  “It also happens to be fully compatible with Mosso.”  Using Authorize.Net, a secure payment gateway, to handle credit card transaction, The Spreadsheet Store does not store any credit card information on the servers.  Murphy and team use MaxMind for fraud prevention, Cardinal Commerce for MasterCard Secure Code and Verified by Visa, McAfee for PCI and daily vulnerability scans, and Thawte for SSL certification.

So after all of those lofty words relating to "…preparing the Cloud for…online transactions," what you can decipher is that Mosso doesn't seem to provide services to The Spreadsheet Store which are actually in scope for PCI in the first place!*

The Spreadsheet Store redirects that functionality to a third-party card processor!

So what this really means is that if you utilize a Cloud-based offering, don't traffic in data that is within PCI scope, and instead redirect to/use someone else's service to process and store credit card data, then it's much easier to become PCI compliant.  Um, duh.
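
For the visually inclined, here's a rough sketch of the pattern being described. The endpoints, merchant IDs, field names and callback shape below are made up for illustration; this is not Mosso's or Authorize.Net's actual integration. The point is simply that the cloud-hosted storefront never receives the card number at all:

```python
# Rough sketch of the "keep card data away from the cloud host" pattern.
# All endpoints and field names are hypothetical; a real gateway
# integration has its own fields, signatures and anti-tamper requirements.

from flask import Flask, request, render_template_string

app = Flask(__name__)

CHECKOUT_PAGE = """
<form action="https://gateway.example.com/hosted-pay" method="POST">
  <!-- Card fields post straight to the gateway; they never touch our cloud servers -->
  <input type="hidden" name="merchant_id" value="DEMO-MERCHANT">
  <input type="hidden" name="order_id" value="{{ order_id }}">
  <input name="card_number" placeholder="Card number">
  <input name="expiry" placeholder="MM/YY">
  <button type="submit">Pay</button>
</form>
"""


@app.route("/checkout/<order_id>")
def checkout(order_id):
    # Our cloud-hosted storefront only renders the form; no PAN is ever received.
    return render_template_string(CHECKOUT_PAGE, order_id=order_id)


@app.route("/gateway-callback", methods=["POST"])
def gateway_callback():
    # The gateway reports the outcome; we store a reference, never the card.
    order_id = request.form["order_id"]
    txn_ref = request.form["transaction_ref"]
    print(f"Order {order_id} settled, gateway reference {txn_ref}")
    return "ok"


if __name__ == "__main__":
    app.run()
```

The cardholder data flows browser-to-gateway, which is exactly why the heavy lifting of PCI lands on the gateway rather than on the merchant's cloud servers.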

The goofiest bit here is that in Mosso's own "PCI How-To" (warning: PDF) primer, they basically establish that you cannot be PCI compliant by using them if you traffic in credit card information:

Cloud Sites is not currently designed for the storage or archival of credit card information.  In order to build a PCI compliant e-commerce solution, Cloud Sites needs to be paired up with a payment gateway partner.

Doh!

I actually wrote quite a detailed breakdown of this announcement for this post yesterday, but I awoke to find my buddy Craig Balding had already done a stellar job of that (curses, timezones!)  I'll refer you to his post on the matter, but here's the gem in all of this.  Craig summed it up perfectly:

The fact that Mosso is seeking ways to help their customers off-load as much PCI compliance requirements to other 3rd parties is fine – it makes business sense for them and their merchant customers.  It’s their positioning of the effort as a “landmark breakthrough” and that they are somehow pioneers which leads to generalisations rooted in misunderstandings that is the problem.
Next time you hear someone say ‘Cloud Provider X is PCI compliant’, ask the golden PCI question: is their Cloud receiving, processing, storing or transmitting Credit Card data (as defined by the PCI DSS)?  If they say ‘No’, you’ll know what that really means…marketecture.

There's some nifty marketing for you, eh?

* Except for the fact that the web servers housed at Mosso must undergo regularly-scheduled vulnerability scans — which Mosso doesn't do, either.

PCI Security Standards Council to Form Virtualization SIG…

January 24th, 2009 1 comment

I'm happy to say that there appears to be some good news on the PCI DSS front with the promise of a SIG being formed this year for virtualization.  This is a good thing. 

You'll remember my calls for better guidance for both virtualization and ultimately cloud computing from the council given the proliferation of these technologies and the impact they will have on both security and compliance.

In that light, news comes from Troy Leach, technical director of the PCI Security Standards Council via a kind note to me from Michael Hoesing:

A PCI SSC Special Interest Group (SIG) for virtualization is most likely coming this year but we don't have any firm dates or objectives as of yet.  We will be soliciting feedback from our Participating Organizations which is comprised of more than 500 companies (which include Vmware, Microsoft, Dell, etc) as well as industry subject matter experts such as the 1,800+ security assessors that currently perform assessments as either a Qualified Security Assessor or Approved Scanning Vendor (ASV).

The PCI SSC Participating Organization program allows industry stakeholders an opportunity to provide feedback on all standards and supporting procedures.  Information to join as a Participating Organization can be found here on our website.

This is a good first step.  If you've got input, make sure to contribute!

/Hoff

Categories: Compliance, PCI, Virtualization, VMware

CloudSQL – Accessing Datastores in the Sky using SQL…

December 2nd, 2008 5 comments
I think this is definitely a precursor of things to come and introduces some really interesting security discussions to be had regarding the portability, privacy and security of datastores in the cloud.

Have you heard of Zoho?  No?  Zoho is a SaaS vendor that describes itself thusly:

Zoho is a suite of online applications (services) that you sign up for and access from our Website. The applications are free for individuals and some have a subscription fee for organizations. Our vision is to provide our customers (individuals, students, educators, non-profits, small and medium sized businesses) with the most comprehensive set of applications available anywhere (breadth); and for those applications to have enough features (depth) to make your user experience worthwhile.

Today, Zoho announced the availability of CloudSQL, which is middleware that allows customers who use Zoho's SaaS apps to "…access their data on Zoho SaaS applications using SQL queries."

From their announcement:

Zoho CloudSQL is a technology that allows developers to interact with business data stored across Zoho Services using the familiar SQL language. In addition, JDBC and ODBC database drivers make writing code a snap – just use the language construct and syntax you would use with a local database instance. Using the latest Web technology no longer requires throwing away years of coding and learning.

Zoho CloudSQL allows businesses to connect and integrate the data and applications they have in Zoho with the data and applications they have in house, or even with other SaaS services. Unlike other methods for accessing data in the cloud, CloudSQL capitalizes on enterprise developers’ years of knowledge and experience with the widely‐used SQL language. This leads to faster deployments and easier (read: less expensive) integration projects.

Basically, CloudSQL is interposed between the suite of Zoho applications and the backend datastores and functions as an intermediary receiving SQL queries against the pooled data sets using standard SQL commands and dialects. Click on the diagram below for a better idea of what this looks like.

[Diagram: Zoho CloudSQL sits between the Zoho application suite and its backend datastores, brokering SQL queries against the pooled data.]
What's really interesting about allowing native SQL access is the ability to then allow much easier information interchange between apps/databases on an enterprise's "private cloud(s)" and the Zoho "public" cloud.

Further, it means that your data is more "portable" as it can be backed up, accessed, and processed by applications other than Zoho's.  Imagine if they were to extend the SQL exposure to other cloud/SaaS providers…this is where it will get really juicy. 
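
Just to ground that, here's roughly what the developer experience looks like when a cloud datastore is exposed over ODBC. This is a generic sketch, not Zoho's actual driver or connection details; the DSN, credentials and table are placeholders:

```python
# Sketch only: the general shape of ODBC access to a cloud-hosted datastore.
# The DSN, credentials and table below are hypothetical placeholders, not
# Zoho's actual driver or connection details.

import pyodbc  # assumes the provider's ODBC driver is installed locally

conn = pyodbc.connect("DSN=CloudSQLDemo;UID=demo_user;PWD=demo_secret")
cursor = conn.cursor()

# Plain SQL against data that physically lives in the provider's cloud.
cursor.execute("SELECT customer, SUM(amount) FROM invoices GROUP BY customer")
for customer, total in cursor.fetchall():
    print(customer, total)

conn.close()
```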

This sort of thing *will* happen.  Customers will see the absolute utility of exposing their cloud-based datastores and sharing them amongst business partners, much in the spirit of how it's done today, but with the datastores (or chunks of them) located off-premises.

That's all good and exciting, but obviously security questions/concerns immediately surface regarding such things as authentication, encryption, access control, input sanitization, privacy and compliance…

Today our datastores typically live inside the fortress with multiple layers of security and proxied access from applications, shielded from direct access, and yet we still have basic issues with attacks such as SQL injection.  Imagine how much fun we can have with this!
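
If you need a refresher on why that matters once SQL becomes the interface to your cloud-resident data, here's the oldest trick in the book, shown with nothing more than the generic Python DB-API (nothing Zoho-specific here):

```python
# Generic Python DB-API illustration (sqlite3 stands in for any backend):
# string-built SQL is injectable, parameterized SQL is not.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_input = "alice' OR '1'='1"   # classic injection payload

# BAD: the payload rewrites the WHERE clause and returns every row.
injectable = f"SELECT email FROM users WHERE name = '{user_input}'"
print(conn.execute(injectable).fetchall())

# BETTER: the driver binds the value; the payload is treated as a weird name.
parameterized = "SELECT email FROM users WHERE name = ?"
print(conn.execute(parameterized, (user_input,)).fetchall())
```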

The best I could find regarding security and Zoho came from their FAQ, which doesn't exactly inspire confidence: it addresses logical/software security by suggesting that anti-virus software is the best line of defense for protecting your data, notes that "data encryption" will soon be offered as an "option," and implies that SSL will make you secure:

6. Is my data secured?

Many people ask us this question. And rightly so; Zoho has invested alot of time and money to ensure that your information is secure and private. We offer security on multiple levels including the physical, software and people/process levels; In fact your data is more secure than walking around with it on a laptop or even on your corporate desktops.

Physical: Zoho servers and infrastructure are located in the most secure types of data centers that have multiple levels of restrictions for access including: on-premise security guards, security cameras, biometric limited access systems, and no signage to indicate where the buildings are, bullet proof glass, earthquake ratings, etc.

Hardware: Zoho employs state of the art firewall protection on multiple levels eliminating the possibility of intrusion from outside attacks

Logical/software protection: Zoho deploys anti-virus software and scans all access 24 x7 for suspicious traffic and viruses or even inside attacks; All of this is managed and logged for auditing purposes.

Process: Very few Zoho staff have access to either the physical or logical levels of our infrastructure. Your data is therefore secure from inside access; Zoho performs regular vulnerability testing and is constantly enhancing its security at all levels. All data is backed up on multiple servers in multiple locations on a daily basis. This means that in the worst case, if one data center was compromised, your data could be restored from other locations with minimal disruption. We are also working on adding even more security; for example, you will soon be able to select a "data encryption" option to encrypt your data en route to our servers, so that in the unlikely event of your own computer getting hacked while using Zoho, your documents could be securely encrypted and inaccessible without a "certificate" which generally resides on the server away from your computer.
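
For contrast, here's what it looks like when the customer, not the provider, holds the key. This is a minimal, generic sketch using a symmetric cipher; it is not a description of Zoho's forthcoming "data encryption" option, whose mechanics weren't published:

```python
# Minimal, generic sketch of customer-held-key encryption before anything
# leaves the premises, using the `cryptography` package's Fernet recipe.
# This is NOT Zoho's planned "data encryption" option; it just shows what
# changes when the key never leaves the customer.

from cryptography.fernet import Fernet

key = Fernet.generate_key()   # stays with the customer, never the provider
cipher = Fernet(key)

document = b"Q3 revenue projections - internal only"
ciphertext = cipher.encrypt(document)

# Only ciphertext is ever uploaded; without the key the provider (or an
# attacker who compromises it) sees opaque bytes.
print(ciphertext[:32], b"...")
print(cipher.decrypt(ciphertext))
```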

Fun times ahead, folks.

/Hoff

Security Will Not End Up In the Network…

June 3rd, 2008 9 comments

It’s not the destination, it’s the journey, stupid.

You can’t go a day without reading from the peanut gallery that it is "…inevitable that network security will eventually be subsumed into the network fabric."  I’m not picking on Rothman specifically, but he’s been banging this drum loudly of late.

For such far-reaching, profound and prophetic statements, claims like these are strangely myopic and inaccurate…and then they’re exactly right.

Confused?

Firstly, it’s sort of silly and obvious to trumpet that "network security" will end up in the "network."  Duh.  What’s really meant is that "information security" will end up in the network, but that’s sort of goofy, too. You’ll even hear that "host-based security" will end up in the network…so let’s just say that what’s being angled at here is that security will end up in the network.

These statements are often framed within a temporal bracket that simply ignores the bigger picture and reads like a eulogy.  The reality is that historically we have come to accept that security and technology are cyclic and yet we continue to witness these terminal predictions defining an end state for security that has never arrived and never will.

Let me make plain my point: there is no final resting place for where and how security will "end up."

I’m visual, so let’s reference a very basic representation of my point.  This graph represents the cyclic transition over time of where and how we invest in security.

We ultimately transition between host-based security, information-centric security and network security over time.

We do this little shuffle based upon the effectiveness and maturity of technology, economics, cultural, societal and regulatory issues and the effects of disruptive innovation.  In reality, this isn’t a smooth sine wave at all; it’s actually more a classic damped oscillation a la the punctuated equilibrium theory I’ve spoken about before, but it’s easier to visualize this way.
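
(For the pedants in the audience, the classic damped oscillation I’m alluding to has the textbook form below. Treat it purely as a sketch of the curve’s shape, not a model anyone has fitted to security budgets.)

```latex
x(t) = A\, e^{-\lambda t} \cos(\omega t + \varphi)
```

Here A is the size of the initial swing, the decay term governs how quickly the swings settle, and the frequency sets how fast we cycle between host, information and network.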

[Graph: a sine-like "You Are Here" curve showing security investment cycling between host, information and network over time.]

Our investment strategy and where security is seen as being "positioned" reverses direction over time and continues ad infinitum.  This has proven itself time and time again yet we continue to be wowed by the prophetic utterances of people who on the one hand talk about these never-ending cycles and yet on the other pretend they don’t exist by claiming the "death" of one approach over another. 
 

Why?

To answer that let’s take a look at how the cyclic pendulum effect of our focus on security trends from the host to the information to the network and back again by analyzing the graph above.

  1. If we take a look at the arbitrary "starting" point indicated by the "You Are Here" dot on the sine wave above, I suggest that over the last 2-3 years or so we’ve actually headed away from the network as the source of all things security.   

    There are lots of reasons for this; economic, ideological, technological, regulatory and cultural.  If you want to learn more about this, check out my posts on how disruptive Innovation fuels strategic transience.

    In short, the network has not been able to (and never will) deliver the efficacy, capabilities or cost-effectiveness desired to secure us from evil, so instead we look at actually securing the information itself.  The security industry messaging of late is certainly bearing testimony to that fact.  Check out this year’s RSA conference…
     

  2. As we focus then on information centricity, we see the resurgence of ERM, governance and compliance come into focus.  As policies proliferate, we realize that this is really hard and we don’t have effective and ubiquitous data classification, policy affinity and heterogeneous enforcement capabilities.  We shake our heads at the ineffectiveness of the technology we have and hear the cries of pundits everywhere that we need to focus on the things that really matter…

    In order to ensure that we effectively classify data at the point of creation, we recognize that we can’t do this automagically and we don’t have standardized schemas or metadata across structured and unstructured data, so we’ll look at each other, scratch our heads and conclude that the applications and operating systems need modification to force fit policy, classification and enforcement.

    Rot roh.
     

  3. Now that we have the concept of policies and classification, we need the teeth to ensure it, so we start to overlay emerging technology solutions on the host in applications and via the OS’s that are unfortunately non-transparent and affect the users and their ability to get their work done.  This becomes labeled as a speed bump and we grapple with how to make this less impacting on the business since security has now slowed things down and we still have breaches because users have found creative ways of bypassing technology constraints in the name of agility and efficiency…
     
  4. At this point, the network catches up in its ability to process closer to "line speed," and some of the data classification functionality from the host commoditizes into the "network" — which by then is as much in the form of appliances as it is routers and switches — and always will be.  So as we round this upturn focusing again on being "information centric," with the help of technology, we seek to use our network investment to offset impact on our users.
     
  5. Ultimately, we get the latest round of "next generation" network solutions which promise to deliver us from our woes, but as we "pass go and collect $200" we realize we’re really at the same point we were at point #1.

‘Round and ’round we go.

So, there’s no end state.  It’s a continuum.  The budget and operational elements of who "owns" security and where it’s implemented simply follow the same curve.  Throw in disruptive innovation such as virtualization, and the entire concept of the "host" and the "network" morphs and we simply realize that it’s a shift in period on the same graph.

So all this pontification that it is "…inevitable that network security will eventually be subsumed into the network fabric" is only as accurate as what phase of the graph you reckon you’re on.  Depending upon how many periods you’ve experienced, it’s easy to see how some who have not seen these changes come and go could be fooled into not being able to see the forest for the trees.

Here’s the reality we actually already know and should not come to you as a surprise if you’ve been reading my blog: we will always need a blended investment in technology, people and process in order to manage our risk effectively.  From a technology perspective, some of this will take the form of controls embedded in the information itself, some will come from the OS and applications and some will come from the network.

Anyone who tells you differently has something to sell you or simply needs a towel for the back of his or her ears…

/Hoff

Asset Focused, Not Auditor Focused

May 3rd, 2008 5 comments

Gunnar Peterson wrote a great piece the other day on the latest productization craze in InfoSec – GRC (Governance, Risk Management and Compliance) wherein he asks "GRC – To Be or To Do?"

I don’t really recall when or from whence GRC sprung up as an allegedly legitimate offering, but to me it seems like a fashionably over-sized rug under which the existing failures of companies to effectively execute on the individual G, R, and C initiatives are conveniently being swept.

I suppose the logic goes something like this: "If you can’t effectively govern, manage risk or measure compliance it must be because what you’re doing is fragmented and siloed.  What you need is a product/framework/methodology that takes potentially digestible deliverables and perspectives and "evolves" them into a behemoth suite instead?"

I do not dispute that throughout most enterprises, the definitions, approaches and processes in managing each function are largely siloed and fragmented and I see the attractiveness of integrating and standardizing them, but  I am unconvinced that re-badging a control and policy framework collection constitutes a radical new approach. 

GRC appears to be a way to sell more products and services under a fancy new name to address problems rather than evaluate and potentially change the way in which we solve them.  Look at who’s pushing this: large software companies and consultants as well as analysts looking to pin their research to something more meaningful.

From a first blush, GRC isn’t really about governance or managing risk.  It’s audit-driven compliance all tarted up.

It’s a more fashionable way of getting all your various framework and control definitions in one place and appealing to an auditor’s desire for centralized "stuff" in order to document the effectiveness of controls and track findings against some benchmark.  I’m not really sure where the business-driven focus comes into play?

It’s also sold as a more efficient way of reducing the scope and costs of manual process controls.  Fine.  Can’t argue with that.  I might even say it’s helpful, but at what cost?

Gunnar said:

GRC (or Governance, Risk Management, and Compliance for the uninitiated) is all the rage, but I have to say I think that again Infosec has the wrong focus.

Instead of Risk Management helping to deliver transparent Governance and as a natural by-product demonstrate compliance as a function of the former, the model’s up-ended with compliance driving the inputs and being mislabeled.

As I think about it, I’m not sure GRC would be something a typical InfoSec function would purchase or use unless forced which is part of the problem.  I see internal audit driving the adoption which given today’s pressures (especially in public companies) would first start in establishing gaps against regulatory compliance.

If the InfoSec function is considering an approach that focuses on protecting the things that matter most and managing risk to an acceptable level, one that is not compliance-driven but rather built upon a business- and asset-driven view, then rather than make a left turn, consider what Gunnar suggested:

Personally, I am happy sticking to classic infosec knitting – delivering confidentiality, integrity, and availability through authentication, authorization, and auditing. But if you are looking for a next generation conceptual horse to bet on, I don’t think GRC is it, I would look at information survivability. Hoff’s information survivability primer is a great starting point for learning about survivability.

Why survivability is more valuable over the long haul than GRC is that survivability is focused on assets not focused on giving an auditor what they need, but giving the business what it needs.

Seminal paper on survivability by Lipson, et al. "survivability solutions are best understood as risk management strategies that first depend on an intimate knowledge of the mission being protected." Make a difference – asset focus, not auditor focus.

For obvious reasons, I am compelled to say "me, too."

I would really like to talk to someone in a large enterprise who is using one of these GRC suites — I don’t really care which department you’re from.  I just want to examine my assertions and compare them against my efforts and understanding.

/Hoff

Pondering Implications On Standards & Products Due To Cold Boot Attacks On Encryption Keys

February 22nd, 2008 4 comments

You’ve no doubt seen the latest handiwork of Ed Felten and his team from the Princeton Center for Information Technology Policy regarding cold boot attacks on encryption keys:

Abstract: Contrary to popular assumption, DRAMs used in most modern computers retain their contents for seconds to minutes after power is lost, even at operating temperatures and even if removed from a motherboard. Although DRAMs become less reliable when they are not refreshed, they are not immediately erased, and their contents persist sufficiently for malicious (or forensic) acquisition of usable full-system memory images. We show that this phenomenon limits the ability of an operating system to protect cryptographic key material from an attacker with physical access. We use cold reboots to mount attacks on popular disk encryption systems — BitLocker, FileVault, dm-crypt, and TrueCrypt — using no special devices or materials. We experimentally characterize the extent and predictability of memory remanence and report that remanence times can be increased dramatically with simple techniques. We offer new algorithms for finding cryptographic keys in memory images and for correcting errors caused by bit decay. Though we discuss several strategies for partially mitigating these risks, we know of no simple remedy that would eliminate them.
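
To make the "finding cryptographic keys in memory images" part a little more tangible, here’s a toy sketch of the crudest possible heuristic: flag high-entropy windows in a dump as candidate key material. To be clear, this is not the Princeton team’s algorithm (theirs exploits key-schedule structure and corrects for bit decay); it just illustrates why keys stand out statistically in an acquired image.

```python
# Toy illustration only: a crude sliding-window entropy scan over a memory
# image to flag candidate key material. The paper's actual algorithms are
# far smarter (they look for AES/RSA key-schedule structure and correct
# for bit decay), but the underlying observation is the same: key material
# stands out statistically in an acquired dump.

import math
import os
from collections import Counter


def shannon_entropy(chunk: bytes) -> float:
    counts = Counter(chunk)
    total = len(chunk)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())


def candidate_key_regions(image: bytes, window: int = 32, threshold: float = 4.5):
    """Yield (offset, bytes) for windows whose byte entropy looks key-like."""
    for offset in range(0, len(image) - window + 1, 16):
        chunk = image[offset:offset + window]
        if shannon_entropy(chunk) >= threshold:
            yield offset, chunk


# Fake "memory image": mostly zeroed pages with a random 32-byte blob buried inside.
image = bytes(4096) + os.urandom(32) + bytes(4096)

for offset, chunk in candidate_key_regions(image):
    print(f"possible key material at offset {offset:#x}: {chunk.hex()[:16]}...")
```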

Check out the video below (if you have scripting disabled, here’s the link.)  Fascinating and scary stuff.

Would a TPM implementation mitigate this if the keys weren’t stored (even temporarily) in RAM?

Given the surge lately toward full disk encryption products, I wonder how the market will react to this.  I am interested in both the broad industry impact and response from vendors.  I won’t be surprised if we see new products crop up in a matter of days advertising magical defenses against such attacks as well as vendors scrambling to do damage control.

This might be a bit of a reach, but equally as interesting to me are the potential implications for DoD/Military crypto standards such as FIPS 140-2 (I believe the draft of 140-3 is circulating…)  In the case of certain products at specific security levels, it’s obvious based on the video that one wouldn’t necessarily need physical access to a crypto module (or RAM) in order to potentially attack it.

It’s always amazing to me when really smart people think of really creative, innovative and (in some cases) obvious ways of examining what we all take for granted.

I see your “More on Data Centralization” & Raise You One “Need to Conduct Business…”

June 19th, 2007 1 comment

Bejtlich continues to make excellent points regarding his view on centralizing data within an enterprise.  He cites the increase in litigation over inadequate eDiscovery investment and the mounting pressures of compliance.

All good points, but I’d like to bring the discussion back to the point I was trying to make initially and here’s the perfect perch from which to do it.  Richard wrote:

Christofer Hoff used the term "agile" several times in his good blog post. I think "agile" is going to be thrown out the window when corporate management is staring at $50,000 per day fines for not being able to produce relevant documents during ediscovery. When a company loses a multi-million dollar lawsuit because the judge issued an adverse inference jury instruction, I guarantee data will be centralized from then forward."

…how about when a company loses the ability to efficiently and effectively conduct business because they spend so much money and time on "insurance policies" against which a balanced view of risk has not been applied?  Oh, wait.  That’s called "information security." 😉

Fear.  Uncertainty.  Doubt.  Compliance.  Ugh.  Rinse, lather, repeat.

I’m not taking what you’re proposing lightly, Richard, but the notion of agility, time to market, cost transformation and enhancing customer experience are being tossed out with the bathwater here. 

Believe it or not, we have to actually have a sustainable business in order to "secure" it. 

It’s fine to be advocating Google Gears and all these other Web 2.0 applications and systems. There’s one force in the universe that can slap all that down, and that’s corporate lawyers. If you disagree, whom do you think has a greater influence on the CEO: the CTO or the corporate lawyer? When the lawyer is backed by stories of lost cases, fines, and maybe jail time, what hope does a CTO with plans for "agility" have?

But going back to one of your own mantras, if you bake security into your processes and SDLC in the first place, then the CEO/CTO/CIO and legal counsel will already have assessed the company’s position and balanced the risk scorecard to ensure that they have exercised the appropriate due care.

The uncertainty and horrors associated with the threat of punitive legal impacts have, are, and will always be there…and they will continue to be exploited by those in the security industry to buy more stuff and justify a paycheck.

Given the business we’re in, it’s not a surprise that the perspective presented is very, very siloed and focused on the potential "security" outcomes of what happens if we don’t start centralizing data now; everything looks like a nail when you’re a hammer.

However, you still didn’t address the other two critical points I made previously:

  1. The underlying technology associated with decentralization of data and applications is at complete odds with the "curl up in a fetal position and wait for the sky to fall" approach
  2. The only reason we have security in the first place is to ensure survivability and availability of service — and make sure that we stay in business.  That isn’t really a technical issue at all, it’s a business one.  I find it interesting that you referenced this issue as the CTO’s problem and not the CIO.

As to your last point, I’m convinced that GE — with the resources, money and time it has to bear on a problem — can centralize its data and resources…they can probably get cold fusion out of a tuna fish can and a blow pop, but for the rest of us on planet Earth, we’re going to have to struggle along trying to cram all the ‘agility’ and enablement we’ve just spent the last 10 years giving to users back into the compliance bottle.

/Hoff