
How the DOD/Intel Communities Can Help Save Virtualization from the Security Trash Heap…

September 3rd, 2007

If you’ve been paying attention closely over the last year or so, you will have noticed louder-than-normal sucking sounds coming from the virtualization sausage machine as it grinds the various ingredients driving virtualization’s re-emergence and popularity together to form the ideal tube of tasty technology bologna. 

{I rather liked that double entendre, but if you find it too corrosive, feel free to substitute your own favorite banger marque in its stead. 😉 }

Virtualization is a hot topic; from clients to servers, applications to datastores, and networking to storage, virtualization is coming back full circle from its MULTICS and LPAR roots and promises to change everything.  Again.

Unfortunately, one of the things virtualization isn't changing quickly enough (for my liking) or in visible enough ways is the industry's approach to engineering security into the virtualization product lifecycle early enough to let us deploy a more secure product out of the box.

Sadly, most of the commercial virtualization offerings, as well as the open source platforms, have offered little guidance on how to secure VMs beyond the common-sense approach of securing non-virtualized instances, and the security industry has produced little more than a handful of innovative solutions to the problems virtualization introduces or in some way intensifies.

You can imagine then the position that leaves customers.

I’m from the Government and I’m here to help…

However, here's where innovation from what some might consider an unlikely source may save this go-round from becoming another security wreck stacked in the IT boneyard: the DoD and Intelligence communities and a high-profile partnering strategy for virtualized security.

Both the DoD and the Intelligence agencies are driven, just like the private sector, to improve efficiency, cut costs, consolidate operationally and still maintain an ever-vigilant, high level of security.

An example of this dictate is the Global Information Grid (GIG). The GIG represents:

"…a net-centric system operating in a global context to provide processing, storage, management, and transport of information to support all Department of Defense (DoD), national security, and related Intelligence Community missions and functions - strategic, operational, tactical, and business - in war, in crisis, and in peace.

GIG capabilities will be available from all operating locations: bases, posts, camps, stations, facilities, mobile platforms, and deployed sites. The GIG will interface with allied, coalition, and non-GIG systems."

One of the core components of the GIG is building the capability and capacity to securely collapse and consolidate what are today physically separate computing enclaves (computers, networks and data), segregated according to the classification and sensitivity of the information they carry and the clearances of the personnel permitted to access it.

Multi-Level Security Marketing…

This represents the notion of multilevel security, or MLS.  I am going to borrow liberally from this site authored by Dr. Rick Smith to provide a quick overview, as the concepts and challenges of MLS are really critical to fully appreciating what I'm about to describe.  Oddly enough, the concepts and the work are also 30+ years old, and you'd recognize the constructs as those you'll find in your CISSP test materials… You remember the Bell-LaPadula model, don't you?

The MLS Problem

We use the term multilevel because the defense community has classified both people and information into different levels of trust and sensitivity. These levels represent the well-known security classifications: Confidential, Secret, and Top Secret. Before people are allowed to look at classified information, they must be granted individual clearances that are based on individual investigations to establish their trustworthiness. People who have earned a Confidential clearance are authorized to see Confidential documents, but they are not trusted to look at Secret or Top Secret information any more than any member of the general public. These levels form the simple hierarchy shown in Figure 1. The dashed arrows in the figure illustrate the direction in which the rules allow data to flow: from "lower" levels to "higher" levels, and not vice versa.

[Figure 1: The hierarchical security levels]

When speaking about these levels, we use three different terms:

  • Clearance level indicates the level of trust given to a person with a security clearance, or a computer that processes classified information, or an area that has been physically secured for storing classified information. The level indicates the highest level of classified information to be stored or handled by the person, device, or location.
  • Classification level indicates the level of sensitivity associated with some information, like that in a document or a computer file. The level is supposed to indicate the degree of damage the country could suffer if the information is disclosed to an enemy.
  • Security level is a generic term for either a clearance level or a classification level.
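
To make the hierarchy and the flow rule concrete, here's a minimal sketch (in Python, purely illustrative and not drawn from any MLS product) of how a clearance might be compared against a classification and how the "data may only flow upward" rule could be expressed:

```python
# Toy model of the hierarchical security levels and the one-way flow rule.
# Purely illustrative; real MLS systems also handle compartments, labels and
# trusted subjects, none of which are modeled here.

LEVELS = ["Unclassified", "Confidential", "Secret", "Top Secret"]

def rank(level: str) -> int:
    """Position of a level in the hierarchy (larger = more sensitive)."""
    return LEVELS.index(level)

def clearance_dominates(clearance: str, classification: str) -> bool:
    """A subject may see an object only if their clearance is at least as high."""
    return rank(clearance) >= rank(classification)

def flow_allowed(source: str, destination: str) -> bool:
    """Data may flow from a lower level to an equal or higher level, never down."""
    return rank(destination) >= rank(source)

print(clearance_dominates("Secret", "Confidential"))      # True: Secret clearance covers Confidential data
print(clearance_dominates("Confidential", "Top Secret"))  # False: no peeking upward
print(flow_allowed("Confidential", "Top Secret"))         # True: upward flow is permitted
print(flow_allowed("Top Secret", "Secret"))               # False: downward flow is what MLS forbids
```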

The defense community was the first and biggest customer for computing technology, and computers were still very expensive when they became routine fixtures in defense organizations. However, few organizations could afford separate computers to handle information at every different level: they had to develop procedures to share the computer without leaking classified information to uncleared (or insufficiently cleared) users. This was not as easy as it might sound. Even when people "took turns" running the computer at different security levels (a technique called periods processing), security officers had to worry about whether Top Secret information may have been left behind in memory or on the operating system's hard drive. Some sites purchased computers to dedicate exclusively to highly classified work, despite the cost, simply because they did not want to take the risk of leaking information.

Multiuser systems, like the early timesharing systems, made such sharing particularly challenging. Ideally, people with Secret clearances should be able to work at the same time others were working on Top Secret data, and everyone should be able to share common programs and unclassified files. While typical operating system mechanisms could usually protect different user programs from one another, they could not prevent a Confidential or Secret user from tricking a Top Secret user into releasing Top Secret information via a Trojan horse.

When a user runs the word processing program, the program inherits that user's access permissions to the user's own files. Thus the Trojan horse circumvents the access permissions by performing its hidden function when the unsuspecting user runs it. This is true whether the function is implemented in a macro or embedded in the word processor itself. Viruses and network worms are Trojan horses in the sense that their replication logic is run under the context of the infected user. Occasionally, worms and viruses may include an additional Trojan horse mechanism that collects secret files from their victims. If the victim of a Trojan horse is someone with access to Top Secret information on a system with lesser-cleared users, then there's nothing on a conventional system to prevent leakage of the Top Secret information. Multiuser systems clearly need a special mechanism to protect multilevel data from leakage.
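
This is exactly what the Bell-LaPadula model mentioned earlier formalizes: mandatory rules that apply no matter whose permissions a program inherits. A rough sketch of the two checks, again only as an illustration and not anyone's actual implementation:

```python
# Illustrative sketch of the two mandatory Bell-LaPadula checks.
# Discretionary (user-granted) permissions are deliberately ignored here;
# that is the point: a Trojan horse inherits those, but cannot bypass these rules.

LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def may_read(subject_level: str, object_level: str) -> bool:
    """Simple security property: no read up."""
    return LEVELS[subject_level] >= LEVELS[object_level]

def may_write(subject_level: str, object_level: str) -> bool:
    """Star (*) property: no write down."""
    return LEVELS[subject_level] <= LEVELS[object_level]

# A word processor carrying a Trojan horse runs on behalf of a Top Secret user
# and tries to copy a Top Secret document into a Confidential file:
print(may_read("Top Secret", "Top Secret"))     # True  -- the user may read their own data
print(may_write("Top Secret", "Confidential"))  # False -- the leak is blocked by "no write down"
```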

Think about the challenge of consolidating modern-day multiuser Windows operating systems (virtualized or not) onto a single compute platform while also collapsing multiple networks of various classifications (including the Internet) onto a single network transport, all with ZERO tolerance for breach.

What's also different here from the compartmentalization requirements of "basic" virtualization is that the segmentation and isolation are driven critically by the classification and sensitivity of the data itself and by the clearance of those trying to access it.

To wit:

VMware and General Dynamics are partnering to provide the NSA with the next evolution of their High Assurance Platform (HAP) to solve the following problem:

… users with multiple security clearances, such as members of the U.S. Armed Forces and Homeland Security personnel, must use separate physical workstations. The result is a so-called "air gap" between systems to access information in each security clearance level in order to uphold the government's security standards.

VMware said it will provide an extra layer of security in its virtualization software, which lets these users run the equivalent of physically isolated machines with separate levels of security clearance on the same workstation.

HAP builds on the current solution based on VMware, called NetTop, which allows simultaneous access to classified information on the same platform in what the agency refers to as low-risk environments.

For HAP, VMware has added a thin API of fewer than 5,000 lines of code to its virtualization software that can evolve over time. NetTop is more static and has to go through a lengthy re-approval process as changes are made. "This code can evolve over time as needs change and the accreditation process is much quicker than just addressing what's new."

HAP encompasses standard Intel-based commercial hardware that could range from notebooks and desktops to traditional workstations. Government agencies will see a minimum 60 percent reduction in their hardware footprints and greatly reduced energy requirements.

HAP will allow for one system to maintain up to six simultaneous virtual machines. In addition to Windows and Linux, support for Sun's Solaris operating system is planned."

This could yield some readily apparent opportunities for improving the security of virtualized environments in many sensitive applications.  There are also other products on the market that offer this sort of functionality, such as Googgun's Cocoon and Raytheon's Guard offerings, but they are complex and costly and geared for non-commercial spaces.  Also, with VMware's market-force dominance and near ubiquity, this capability has real potential to bleed over into the commercial space.

Today we see MLS systems featured in low-risk environments, but it's still not uncommon to see an operator tasked with using three or four different computers, sometimes located in physically isolated facilities.

While these physical air gaps offer a level of protection against unauthorized access, the approach is costly, complex and inefficient, and it does not provide the real-time access needed to support the complex missions of today's intelligence operatives, coalition forces or battlefield warfighters.

It may sound like a simple and mundane problem to solve, but in today's distributed and collaborative Web 2.0 world (one the DoD/Intel crowd is beginning to utilize), it is proving more and more difficult to achieve.  Couple the information compartmentalization issue with the recent virtualization security grumblings: breaking out of VM jails, hypervisor rootkits and exploiting VM APIs for fun and profit…

This functionality offers many opportunities for more secure virtualization deployments that utilize MLS-capable OSes in conjunction with strong authentication, encryption, memory firewalling and process isolation.  We've seen the first steps toward that already.
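
As a thought experiment (and emphatically not a description of how NetTop or HAP are actually built), the same label-dominance idea maps naturally onto virtualization: give each VM and each shared resource a security label, and let the virtualization layer refuse any attachment that would bridge levels. A hypothetical sketch:

```python
# Hypothetical sketch of label-aware VM placement; the names and the strict
# "same level only" policy are assumptions for illustration, not the behavior
# of any shipping hypervisor.

from dataclasses import dataclass

LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

@dataclass
class Labeled:
    name: str
    level: str

def may_attach(vm: Labeled, resource: Labeled) -> bool:
    """Allow a VM to share a resource (virtual switch, datastore) only at its
    own level, so no shared resource ever bridges two classifications."""
    return LEVELS[vm.level] == LEVELS[resource.level]

secret_switch = Labeled("vswitch-secret", "Secret")
for vm in (Labeled("vm-secret-desktop", "Secret"), Labeled("vm-unclass-desktop", "Unclassified")):
    verdict = "allow" if may_attach(vm, secret_switch) else "deny"
    print(f"{vm.name} -> {secret_switch.name}: {verdict}")
# vm-secret-desktop -> vswitch-secret: allow
# vm-unclass-desktop -> vswitch-secret: deny
```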

I look forward to what this may bring to the commercial space and the development of more secure virtualization platforms in general.  It’s building on decades of work in the information assurance space, but it’s getting closer to being cost-effective and reasonable enough for deployment.

/Hoff

  1. September 4th, 2007 at 10:02 | #1

    A good post on a topic that needs more discussion and greater understanding in the commercial security space due to confidentiality/privacy concerns.
    While I won't comment on Raytheon Guard, I must correct your inclusion of Googgun/Trustifier's Cocoon in your statement "…they are complex and costly and geared for non-commercial spaces." This is a common assumption about this level of security.
    The primary goal of Trustifier research was to bring military-grade security to the mainstream commercial space by removing the barriers of complexity and associated costs, which we have done. Cocoon is simply one application of the Trustifier Security System, which includes virtualization. Trustifier can be applied to any existing IT infrastructure to add internal controls and core data-centric security where they are lacking.
    Trustifier solves the scalability, usability and networkability issues found in traditional MLS solutions through a paradigm shift in TCB kernel design and implementation. Designed to address and solve the problems of delivering MLS in highly networked heterogeneous environments (even the Global Information Grid (GIG)) in a unique way, Trustifier's design and implementation make maintaining classified information even in large environments natural, efficient and easy. Trustifier is indeed suitable for commercial enterprise, working with all COTS systems and applications.

  2. NRC
    September 8th, 2007 at 13:22 | #2

    This is good news, if a little late in coming. The problem, as I know you are more than aware, is that this advance will come a little late for most companies. The commercial Gartner zombies are already on the move within many enterprises, chanting in monotone voices "grrrr must virtualise". Network and security engineering teams are being driven to develop corporate security solutions for virtualised environments today. It is not untypical for the bad boys to have their hacking tools out before the vendors bolt on some security – when will they ever learn!!
    A number of analysts I know are now heading off, overconfident that host intrusion detection will fix all their issues, but I can't help thinking that even in the layered security models recommended by the world's leading "experts" we are missing the fundamental issues. In today's bladecenter solutions we have potentially hundreds of virtual systems running inside the box; traditional network-based security hasn't had a look-in, and the time to deploy carefully tuned HIDS is unrealistic in many areas. In addition, the risk of unauthorised and unknown guest operating systems being launched is a great one. In many ways hacking the NAS systems is a really neat way of attacking lots of systems at once, and probably simpler than attacking individual VMs. Like many Microsoft features which are designed to ease use but in turn make attacks easier, VMware VMotion sets out the template of large NAS shares for storing all virtual disks – isn't that nice: a one-stop shop to change hundreds of virtual systems in one fell swoop.
    Like the rest of the world we are still all waiting for the birth of the secure operating system, and I think it will be slightly longer still before application developers stop carefully adding bugs to their code, so I guess right now, aside from bricking up the data center, we should grab hold – the virtualisation train is not scheduled to stop soon and there are rocky tracks ahead.

  3. April 27th, 2008 at 17:41 | #3

    If a thing is "Top Secret," who exactly can call it top secret? If something is a secret, it must mean someone knows about the secret in order to call it a secret.
    Basically, all of this is said in order to make you think: who actually runs the entire show?
    It makes me laugh how you can have another country's land actually protected within another country; surely if it is your country it should be owned by that country and not have parts rented out to other places?

  4. Eric Marceau
    May 16th, 2008 at 15:30 | #4

    The last post made a statement with all the assertion of authority possible. I can't let it stand unchallenged, regardless of how long ago it was made.
    "This is good news if a little late in coming."
    It seems to me that when National Security is at stake, until the "ultimate" solution comes along, every attempt at developing the ultimate will fall short, and those responsible for National Security will, at all times, apply due diligence and consider the speed of adoption of a better solution, not WHETHER they should adopt the solution, regardless of the extent of prior investment. It is the one field where NO shortcomings, ONCE identified and IF they CAN be overcome, are allowed to continue unchecked.
    I believe they need to apply due diligence and give Trustifier a proper workout, so as to determine for themselves the extent to which it addresses any and all concerns.
    To the same extent, those entrusted with responsible management of other people's funds (and credit) cannot ignore the potential simplicity with which Trustifier could address the full scope of risks of non-compliance within the current regulatory environment.

  5. February 7th, 2009 at 19:31 | #5

    I definitely agree with you guys. I think there isn't enough emphasis put on this topic. I frankly am shocked that anyone could say host intrusion would solve all of their problems. I think host intrusion does greatly improve matters, but I don't think it's perfect by any means.