
Archive for the ‘VM HyperJacking’ Category

More On Cloud and Hardware Root Of Trust: Trusting Cloud Services with Intel® TXT

May 6th, 2011

Whilst at CloudConnect I filmed some comments with Intel, RSA, Terremark and HyTrust on Intel’s Trusted Execution Technology (TXT) and its implications in the Cloud Computing space specific to “trusted cloud” and using the underlying TPM present in many of today’s compute platforms.

The 30-minute session got cut down into more consumable sound bites, but combined with the other speakers' segments, it does a good job of setting the stage for more discussions regarding this important technology.

I’ve written previously on cloud and TXT with respect to measured launch environments and work done by RSA, Intel and VMware: More On High Assurance (via TPM) Cloud Environments. Hopefully we’ll see more adoption soon.
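For those who haven't ground through the TCG specs, the primitive that makes a "measured launch" trustworthy is almost embarrassingly simple: each component in the launch chain is hashed into a TPM Platform Configuration Register, and a PCR can only ever be *extended*, never set. Here's a minimal sketch of the extend operation in C, using OpenSSL's SHA-1 since TPM 1.2 PCRs are SHA-1-sized; the function and chain names are mine for illustration, not from any TPM library:

```c
#include <stdio.h>
#include <string.h>
#include <openssl/sha.h>

#define PCR_LEN SHA_DIGEST_LENGTH  /* 20 bytes: TPM 1.2 PCRs are SHA-1 sized */

/* Extend: PCR_new = SHA1(PCR_old || measurement). The TPM offers no way
   to set a PCR directly, which is what makes the chain tamper-evident. */
static void pcr_extend(unsigned char pcr[PCR_LEN],
                       const unsigned char *measurement, size_t len)
{
    SHA_CTX ctx;
    SHA1_Init(&ctx);
    SHA1_Update(&ctx, pcr, PCR_LEN);
    SHA1_Update(&ctx, measurement, len);
    SHA1_Final(pcr, &ctx);
}

int main(void)
{
    unsigned char pcr[PCR_LEN] = {0};   /* PCRs reset to zero at power-on */

    /* Hash each stage of the (hypothetical) launch chain in order. */
    const char *chain[] = { "bios", "vmm-loader", "hypervisor" };
    for (size_t i = 0; i < sizeof(chain) / sizeof(chain[0]); i++)
        pcr_extend(pcr, (const unsigned char *)chain[i], strlen(chain[i]));

    /* A verifier compares the final value against a known-good one. */
    for (int i = 0; i < PCR_LEN; i++)
        printf("%02x", pcr[i]);
    printf("\n");
    return 0;
}
```

Because every new value folds in the old one, a change to any stage of the chain, in any order, yields a different final PCR; that's the property TXT leans on to attest that the hypervisor you're running is the one you think it is. (Build with `gcc pcr.c -lcrypto`.)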


How the DOD/Intel Communities Can Help Save Virtualization from the Security Trash Heap…

September 3rd, 2007

If you’ve been paying attention closely over the last year or so, you will have noticed louder-than-normal sucking sounds coming from the virtualization sausage machine as it grinds the various ingredients driving virtualization’s re-emergence and popularity together to form the ideal tube of tasty technology bologna. 

{I rather liked that double entendre, but if you find it too corrosive, feel free to substitute your own favorite banger marque in its stead. 😉 }

Virtualization is a hot topic; from clients to servers, applications to datastores, and networking to storage, virtualization is coming back full circle from its MULTICS and LPAR roots and promises to change everything.  Again.

Unfortunately, one of the things virtualization isn't changing quickly enough (for my liking), or in visible enough ways, is the industry's approach to engineering security into the virtualization product lifecycle early enough to let us deploy a more secure product out of the box.

Sadly, most of the commercial virtualization offerings as well as the open source platforms have lacked much in the way of guidance as to how to secure VMs beyond the common-sense approach of securing non-virtualized instances, and the security industry has been slow to produce more than a few innovative solutions to the problems virtualization introduces or in some ways intensifies.

You can imagine then the position that leaves customers.

I’m from the Government and I’m here to help…

However, here’s where some innovation from what some might consider an unlikely source may save this go-round from another security wreck stacked in the IT boneyard: the DoD and Intelligence communities and a high-profile partnering strategy for virtualized security.

Both the DoD and the Intelligence agencies are driven, just like the private sector, to improve efficiency, cut costs, consolidate operationally, and still maintain an ever-vigilant, high level of security.

An example of this dictate is the Global Information Grid (GIG). The GIG represents:

"…a net-centric system
operating in a global context to provide processing, storage,
management, and transport of information to support all Department of
Defense (DoD), national security, and related Intelligence Community
missions and functions-strategic, operational, tactical, and
business-in war, in crisis, and in peace.

GIG capabilities
will be available from all operating locations: bases, posts, camps,
stations, facilities, mobile platforms, and deployed sites. The GIG
will interface with allied, coalition, and non-GIG systems.

One of the core components of the GIG is building the capability and capacity to securely collapse and consolidate what are today physically separate computing enclaves (computers, networks and data), which are segregated based upon the classification and sensitivity of the information they hold and the clearances of the personnel permitted to access it.

Multi-Level Security Marketing…

This represents the notion of multilevel security, or MLS.  I am going to borrow liberally from this site authored by Dr. Rick Smith to provide a quick overview, as the concepts and challenges of MLS are really critical to fully appreciating what I'm about to describe.  Oddly enough, the concept and the work are also 30+ years old, and you'd recognize the constructs as those you'll find in your CISSP test materials…you remember the Bell-LaPadula model, don't you?

The MLS Problem

We use the term multilevel because the defense community has classified both people and information into different levels of trust and sensitivity. These levels represent the well-known security classifications: Confidential, Secret, and Top Secret. Before people are allowed to look at classified information, they must be granted individual clearances that are based on individual investigations to establish their trustworthiness. People who have earned a Confidential clearance are authorized to see Confidential documents, but they are not trusted to look at Secret or Top Secret information any more than any member of the general public. These levels form the simple hierarchy shown in Figure 1. The dashed arrows in the figure illustrate the direction in which the rules allow data to flow: from "lower" levels to "higher" levels, and not vice versa.

Figure 1: The hierarchical security levels

When speaking about these levels, we use three different terms:

  • Clearance level indicates the level of trust given to a person with a security clearance, or a computer that processes classified information, or an area that has been physically secured for storing classified information. The level indicates the highest level of classified information to be stored or handled by the person, device, or location.
  • Classification level indicates the level of sensitivity associated with some information, like that in a document or a computer file. The level is supposed to indicate the degree of damage the country could suffer if the information is disclosed to an enemy.
  • Security level is a generic term for either a clearance level or a classification level.
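If Bell-LaPadula has gone fuzzy since your CISSP cram session, the two rules that matter here fit in a handful of lines: "no read up" (the simple security property) and "no write down" (the *-property, which is what blocks the Trojan horse leak described below). A toy sketch in C with invented names; real MLS enforcement lives in a kernel's mandatory access control layer, not in application code:

```c
#include <stdio.h>
#include <stdbool.h>

/* The hierarchical levels from Figure 1, lowest to highest. */
typedef enum { UNCLASSIFIED, CONFIDENTIAL, SECRET, TOP_SECRET } level_t;

/* Simple security property: no read up. */
static bool may_read(level_t subject_clearance, level_t object_classification)
{
    return subject_clearance >= object_classification;
}

/* *-property: no write down. Even a Trojan horse running with a
 * Top Secret user's permissions can't copy data into a file at a
 * lower level -- the leak that discretionary controls can't stop. */
static bool may_write(level_t subject_clearance, level_t object_classification)
{
    return subject_clearance <= object_classification;
}

int main(void)
{
    printf("Confidential user reads Secret doc:    %s\n",
           may_read(CONFIDENTIAL, SECRET) ? "allowed" : "denied");
    printf("Top Secret process writes Secret file: %s\n",
           may_write(TOP_SECRET, SECRET) ? "allowed" : "denied");
    return 0;
}
```

Both checks print "denied," and the second one is the interesting half: it's enforced regardless of what the user (or the code running as the user) *wants* to do.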

The defense community was the first and biggest customer for computing technology, and computers were still very expensive when they became routine fixtures in defense organizations. However, few organizations could afford separate computers to handle information at every different level: they had to develop procedures to share the computer without leaking classified information to uncleared (or insufficiently cleared) users. This was not as easy as it might sound. Even when people "took turns" running the computer at different security levels (a technique called periods processing), security officers had to worry about whether Top Secret information may have been left behind in memory or on the operating system's hard drive. Some sites purchased computers to dedicate exclusively to highly classified work, despite the cost, simply because they did not want to take the risk of leaking information.

Multiuser systems, like the early timesharing systems, made such sharing particularly challenging. Ideally, people with Secret clearances should be able to work at the same time others were working on Top Secret data, and everyone should be able to share common programs and unclassified files. While typical operating system mechanisms could usually protect different user programs from one another, they could not prevent a Confidential or Secret user from tricking a Top Secret user into releasing Top Secret information via a Trojan horse.

When a user runs the word processing program, the program inherits that user's access permissions to the user's own files. Thus the Trojan horse circumvents the access permissions by performing its hidden function when the unsuspecting user runs it. This is true whether the function is implemented in a macro or embedded in the word processor itself. Viruses and network worms are Trojan horses in the sense that their replication logic is run under the context of the infected user. Occasionally, worms and viruses may include an additional Trojan horse mechanism that collects secret files from their victims. If the victim of a Trojan horse is someone with access to Top Secret information on a system with lesser-cleared users, then there's nothing on a conventional system to prevent leakage of the Top Secret information. Multiuser systems clearly need a special mechanism to protect multilevel data from leakage.

Think about the challenges of supporting modern-day multiuser Windows operating systems (virtualized or not) together on a single compute platform, while also consolidating multiple networks of various classifications (including the Internet) onto a single network transport, all with ZERO tolerance for breach.

What's also different here from the compartmentalization requirements of "basic" virtualization is that the segmentation and isolation are critically driven by the classification and sensitivity of the data itself and the clearance of those trying to access it.

To wit:

VMware and General Dynamics are partnering to provide the NSA with the next evolution of their High Assurance Platform (HAP) to solve the following problem:

… users with multiple security clearances, such as members of the U.S. Armed Forces and Homeland Security personnel, must use separate physical workstations. The result is a so-called "air gap" between systems to access information in each security clearance level in order to uphold the government's security standards.

VMware said it will provide an extra layer of security in its virtualization software, which lets these users run the equivalent of physically isolated machines with separate levels of security clearance on the same workstation.

HAP builds on the current solution based on VMware, called NetTop, which allows simultaneous access to classified information on the same platform in what the agency refers to as low-risk environments.

For HAP, VMware has added a thin API of fewer than 5,000 lines of code to its virtualization software that can evolve over time. NetTop is more static and has to go through a lengthy re-approval process as changes are made. "This code can evolve over time as needs change and the accreditation process is much quicker than just addressing what's new."

HAP encompasses standard Intel-based commercial hardware that could range from notebooks and desktops to traditional workstations. Government agencies will see a minimum 60 percent reduction in their hardware footprints and greatly reduced energy requirements.

HAP will allow for one system to maintain up to six simultaneous virtual machines. In addition to Windows and Linux, support for Sun's Solaris operating system is planned.

This could yield some readily apparent opportunities for improving the security of virtualized environments in many sensitive applications.  There are also other products on the market that offer this sort of functionality, such as Googun's Coccoon and Raytheon's Guard offerings, but they are complex, costly and geared for non-commercial spaces.  Also, with VMware's market-force dominance and near ubiquity, this capability has the real potential of bleeding over into the commercial space.

Today we see MLS systems featured in low risk environments, but it’s still not uncommon to see an operator tasked with using 3-4 different computers which are sometimes located in physically isolated facilities.

While this offers a level of security that has physical air gaps to help protect against unauthorized access, it is costly, complex, inefficient and does not provide for the real-time access needed to support the complex mission of today’s intelligence operatives, coalition forces or battlefield warfighters.

It may sound like a simple and mundane problem to solve, but in today's distributed and collaborative Web 2.0 world (which is one that the DoD/Intel crowd are beginning to utilize) we find it more and more difficult to achieve.  Couple the information compartmentalization issue with the recent virtualization security grumblings: breaking out of VM jails, hypervisor rootkits and exploiting VM APIs for fun and profit…

This functionality has many opportunities to provide for more secure virtualization deployments that will utilize MLS-capable OS's in conjunction with strong authentication, encryption, memory firewalling and process isolation.  We've seen the first steps toward that already.
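To make "memory firewalling" slightly less hand-wavy: the general discipline (whether enforced by a guest agent, the VMM, or the chipset) is that a page should never be writable and executable at the same time, so injected code has nowhere to run. A userland approximation of the idea on POSIX follows; this is the concept in miniature, not a sketch of any particular vendor's mechanism:

```c
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t sz = (size_t)sysconf(_SC_PAGESIZE);

    /* Stage a buffer that is writable but NOT executable (W^X). */
    unsigned char *buf = mmap(NULL, sz, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    memcpy(buf, "\xc3", 1);   /* x86 'ret' -- a stand-in payload */

    /* To ever execute the page, write access must be given up first.
       A memory firewall enforces (and vets) this transition system-wide,
       so code smuggled in through a data write can't simply run. */
    if (mprotect(buf, sz, PROT_READ | PROT_EXEC) != 0) {
        perror("mprotect");
        return 1;
    }

    puts("page flipped from writable to executable, never both");
    munmap(buf, sz);
    return 0;
}
```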

I look forward to what this may bring to the commercial space and the development of more secure virtualization platforms in general.  It’s building on decades of work in the information assurance space, but it’s getting closer to being cost-effective and reasonable enough for deployment.

/Hoff

HyperJackStacking? Layers of Chewy VMM Goodness — the BLT of Security Models

August 27th, 2007

So Mogull is back on the bench and I’m glad to see him blogging again. 

As I type this, I'm listening to James Blunt's new single "1973," which is unfortunately about where Rich's timing seems to be on this topic.  'Salright though.  Can't blame him.  He's been out scouting the minors for a while, so being late to practice is nothing to be too wound up about.

<If you can't tell, I'm being sarcastic.  I only wish that Rich was when he told me that his favorite TexMex place in his hometown is called the "Pink Taco."  That's all I'm going to say about that…>

The notion of the HyperJackStack (Hypervisor Jacking & Stacking) is actually a problem set that has been discussed at length, and in the continuum of those discussions it came up quite a while ago.

To put it bluntly, I believe the discussion — for right or wrong — stepped over this naughty little topic months ago in lieu of working from the bottom up for the purpose of exposing fundamental architectural deficiencies (or at least their potential) in the core of virtualization technology.  This is an argument that parallels dissecting a BLT sandwich…you're approaching the center of a symmetric stack, so which end you start at is almost irrelevant.

The good/bad VMM/HV problem has really been relegated to a push-pin on the to-do board of all of the virtualization vendors, and this particular problem has been framed by said vendors to apparently be solved first operationally from the management plane and THEN dealt with from the security perspective.

So Rich argues that, after boning up on Joanna and Thom's research, they're arguing completely the wrong case for the dangers of virtualized rootkits.  Instead of worrying about the undetectability of this or that (pills and poultry be damned), one should be focused on establishing the relative disposition of *any* VMM/hypervisor running in/on a host:

Problem is, they're looking at the wrong problem. I will easily concede that detecting virtualization is always possible, but that's not the real problem. Long-term virtualization will be normal, not an exception, so detecting if you're virtualized won't buy you anything. The bigger problem is detecting a malicious hypervisor, either the main hypervisor or maybe some wacky new malicious hypervisor layered on top of the trusted hypervisor.

To Rich’s credit, I think that this is a huge problem and one that deserves to be solved.  That does not mean that I think one is the "right" versus "wrong" problem to solve, however.  Nor does it mean this hasn’t been discussed.  I’ve talked about it many times already.  Maybe not as eloquently…

The flexibility of virtualization is what expands the threat surface and its vectors; you can spin up, move or kill a VM across an enterprise with a point-click.  So the first thing to do, before trying to determine whether a VMM/HV is malicious, is to detect its presence and layering in the first place…this is where Thom/Joanna's research really does make sense.

You’re approaching this from a different direction, is all.
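For the curious, the presence/layering checks being argued over are things you can poke at yourself. The polite check reads the CPUID hypervisor-present bit that well-behaved VMMs advertise; the adversarial check measures the latency penalty on instructions a hypervisor must intercept. A rough x86 sketch using GCC builtins (a serious detector, per Ptacek and company, does far more careful statistical work than this):

```c
#include <stdio.h>
#include <stdint.h>
#include <cpuid.h>      /* GCC's __get_cpuid */
#include <x86intrin.h>  /* __rdtsc */

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* Polite check: CPUID leaf 1, ECX bit 31 is zero on bare metal;
       cooperative hypervisors set it. A rootkit simply won't. */
    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 31)))
        puts("hypervisor bit set (a VMM is admitting it's there)");

    /* Adversarial check: CPUID causes an unconditional VM exit under
       hardware virtualization, so its latency jumps when a VMM is
       present. Timing it against a baseline is the heart of the
       "you're bound to be detectable" thesis. */
    uint64_t t0 = __rdtsc();
    for (int i = 0; i < 1000; i++)
        __get_cpuid(0, &eax, &ebx, &ecx, &edx);
    uint64_t avg = (__rdtsc() - t0) / 1000;

    printf("avg CPUID latency: %llu cycles (hundreds suggests metal, "
           "thousands suggests a trap)\n", (unsigned long long)avg);
    return 0;
}
```

The catch, of course, is that a hypervisor also controls the time sources you'd measure with, which is exactly why this turns into an arms race rather than a clean yes/no.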

Thom responded here, and I have to agree with his overall posture: the notion of putting hooks into the VMM/HV to allow for "external" detection mechanisms solely for the sake of VMM/HV rootkit detection is unlikely given the threat.  But we are already witness to engineered capacities that allow for "plug-ins" such as Blue Lane's, which function alongside the HV/VMM, and there's nothing saying one couldn't adapt a similar function for this sort of detection (and/or prevention) as a value-add.

Ultimately though, I think that the point of response boils down to the definition of the mechanisms used in the detection of a malicious VMM/HV.  I ask you, Rich: please define what distinguishes a "malicious" VMM/HV from one steeped in goodness.

In practice, this sounds like it will come down to yet another iteration of the signature-driven IPS circle jerk to fingerprint/profile disposition.  We'll no doubt see anomaly and behavioral analysis used here, and then we'll have hashing, memory firewalls, etc…it's going to be the Hamster Wheel all over again.  For the same reason we have trouble validating security and compliance state with anything more than cursory checks at 30,000 feet today, you'll face the same issue with virtualization, only worse.

I've got one for you…how about escaping from the VM "jail" entirely?  Ed Skoudis over @ IntelGuardians just did an interview with the PaulDotCom boys on this very topic…

I believe one must start from the bottom and work up; they’re trying to make up for the fact that this stuff wasn’t properly thought through in this iteration and are trying to expose the issue now. In fact, look at what Intel just announced today with vPro:

New in this product is Intel Trusted Execution Technology (Intel TXT, formerly codenamed LaGrande). Intel TXT protects data within virtualized computing environments, an important feature as IT managers are considering the adoption of new virtualization-enabled computer uses. Used in conjunction with a new generation of the company's virtualization technology – Intel Virtualization Technology for Directed I/O – Intel TXT ensures that virtual machine monitors are less vulnerable to attacks that cannot be detected by today's conventional software-security solutions. By isolating assigned memory through this hardware-based protection, it keeps data in each virtual partition protected from unauthorized access from software in another partition.

So no, Ptacek and Joanna aren’t fighting the "wrong" battle, they’re just fighting one that garners much more attention, notoriety, and terms like "HyperJackStack" than the one you’re singling out.  😉

/Hoff

P.S. Please invest in a better setup for your blog…I can’t trackback to you (you need Halo or something) and your comment system requires registration…bah!  Those G-Boys have you programmed… 😉

Oh SNAP! VMware acquires Determina! Native Security Integration with the Hypervisor?

August 19th, 2007

Hot on the trails of becoming gigagillionaires, the folks at VMware make my day with this.  Congrats to the folks @ Determina.

Methinks that for the virtualization world, it’s a very, very good thing.  A step in the right direction.

I’m going to prognosticate that this means that Citrix will buy Blue Lane or Virtual Iron next (see bottom of the post) since their acquisition of XenSource leaves them with the exact same problem that this acquisition for VMware tries to solve:

VMware Inc., the market leader in virtualization software, has acquired Determina Inc., a Silicon Valley maker of host intrusion prevention products.

…the security of virtualized environments has been something of an unknown quantity due to the complexity of the technology and the ways in which hypervisors interact with the host OS. Determina's technology is designed specifically to protect the OS from malicious code, regardless of the origin of the attack, so it would seem to be a sensible fit for VMware, analysts say.

In his analysis of the deal, Gartner's MacDonald sounded many of the same notes. "By potentially integrating Memory Firewall into the ESX hypervisor, the hypervisor itself can provide an additional level of protection against intrusions. We also believe the memory protection will be extended to guest OSs as well: VMware's extensive use of binary emulation for virtualization puts the ESX hypervisor in an advantageous position to exploit this style of protection," he wrote.

I've spoken a lot recently about how much I've been dreading the notion that security is doomed to repeat itself with the accelerated take-off of server virtualization, since we haven't solved many of the most basic security problem classes.  Malicious code is getting more targeted and more intelligent, and when you combine an emerging market using hot technology without an appropriate level of security…

Basically, my concerns have stemmed from the observation that if we can't do a decent job protecting physically separate yet interconnected network elements with all the security fu we have, what's going to happen when the "…network is the computer" (or vice versa)?  Just search for "virtualization" via the Lijit widget above for more posts on this…

Some options for securing virtualized guest OS's in a VM are pretty straightforward:

  1. Continue to deploy layered virtualized security services across VLAN segments of which each VM is a member (via IPSs, routers, switches, UTM devices…)
  2. Deploy software like Virtual Iron's, which looks like a third-party vSwitch IPS on each VM
  3. Integrate something like Blue Lane's ESX plug-in, which interacts with and at the VMM level
  4. As chipset-level security improves, enable it
  5. Deploy HIPS as part of every guest OS.

Each of these approaches has its own sets of pros and cons, and quite honestly, we’ll probably see people doing all five at the same time…layered defense-in-depth.  Ugh.

What was really annoying to me, however, is that it really seemed that in many cases, the VM solution providers were again expecting that we’d just be forced to bolt security ON TO our VM environments instead of BAKING IT IN.  This was looking like a sad reality.

I'll get into details in another post about Determina's solution, but I am encouraged by VMware's acquisition of a security company that will be integrated into their underlying solution set.  I don't think it's a panacea, but quite honestly, the roadmap for solving these sorts of problems was blowing in the wind for VMware up until this point.

"Further, by
using the LiveShield capabilities, the ESX hypervisor could be used
‘introspectively’ to shield the hypervisor and guest OSs from attacks
on known vulnerabilities in situations where these have not yet been
patched. Both Determina technologies are fairly OS- and
application-neutral, providing VMware with an easy way to protect ESX
as well as Linux- and Windows-based guest OSs."

Quite honestly, I'd hoped they would buy Blue Lane, since the ESX hypervisor is now going to be a crowded space for them…

We’ll see how well this gets integrated, but I smiled when I read this.

Oh, and before anyone gets excited, I’m sure it’s going to be 100% undetectable! 😉

/Hoff

Joanna Rutkowska’s Amazing Undetectable Virtualization HyperMalware Preso @ Blackhat…

August 1st, 2007

I’m sorry, what?

EOM.

 

/Hoff


For Sale / Special Price: One (Un)detectable Hyperjacking PillWare: $416,000. Call Now While Supplies Last!

June 29th, 2007

Joanna Rutkowska of "Invisible Things" Blue Pill hypervisor rootkit fame has a problem.  It's about 6-foot-something, dresses in all black, and knows how to throw down both in prose and in practice.

Joanna and crew maintain that they have a roughed-out prototype that supports their assertion that their HyperJacking malware is undetectable.  Ptacek and his merry band of Exploit-illuminati find this a hard pill to swallow and reckon they have a detector that can detect the "undetectable."

They intend to prove it.  This is awesome!  It's like the Jackson/Liddell UFC fight.  You don't really know who to "root" for, you just want to be witness to the ensuing carnage!

We’ve got a stare down.  Ptacek and crew have issued a challenge that they expect — with or without Joanna’s participation — to demonstrate successfully at BlackHat Vegas:

Joanna, we respectfully request terms under which you'd agree to an "undetectable rootkit detection challenge". We'll concede almost anything reasonable; we want the same access to the (possibly-)infected machine that any antivirus software would get.

The backstory:

  • Dino Dai Zovi, under Matasano colors, presented a hypervisor rootkit ("Vitriol") for Intel's VT-x extensions at Black Hat last year, at the same time as Joanna presented BluePill for AMD's SVM.

  • We concede: Joanna's rootkit is cooler than ours. I particularly liked using the debug registers to grab network traffic out of the drivers. We stopped weaponizing Vitriol.

  • Peter Ferrie, the Symantec branch of our Black Hat team, releases a kick-ass paper on hypervisor detection. Peter's focus is on fingerprinting software hypervisors (like VMware), but he also comes up with a clever way to detect hardware virtualization.

  • Nate Lawson, Dino, and I are, simultaneously, working on hardware rootkit detection techniques.

  • Nate, Peter, Dino, and I join up to defend our thesis at Black Hat: if you surreptitiously "hyperjack" an OS, enabling hardware virtualization (or replacing or infecting an existing hypervisor), you introduce so many subtle changes in system behavior, timing and otherwise, that you're bound to be detectable.

…and Joanna respondeth, signaling her "readiness" and conditions for the acceptance of said challenge:

Thomas Ptacek and company just came up with this funny challenge to test our Blue Pill rootkit. And, needless to say, the Invisible Things Lab team is ready to take their challenge, however with some additional requirements that would assure the fairness of the contest.

First, we believe that 2 machines are definitely not enough, because the chance of a correct guess using a completely random (read: unreliable) detection method is 50%. Thus we think that the reasonable number is 5 machines. Each of them could be in a state 0 or 1 (i.e. infected or not). On each of these machines we install two files: bluepill.exe and bluepill.sys.

The .sys file is digitally signed, so it loads without any problem (we could use one of our methods for loading unsigned code on Vista that we're planning to demonstrate at BH, but this is not part of the challenge, so we will use the official way).

The bluepill.exe takes one argument which is 0 or 1. If it's 1 it loads the driver and infects the machine. If it's 0 it also loads the driver, but the driver does not infect the machine.

So, on each of the 5 machines we run bluepill.exe with a randomly chosen argument, being 0 or 1. We make sure that at least one machine is not infected and that at least one machine is infected.

After that the detection team runs their detector.exe executable on each machine. This program can not take any arguments and must return only one value: 0 or 1. It must act autonomously — no human assistance when interpreting the results.

The goal of the detection team is to correctly mark each machine as either being infected (1) or not (0). The chance of a blind guess is:

1/(2^5-2) ≈ 3%

The detector can not cause a system crash or halt the machine — if it does, they lose. The detector can not consume a significant amount of CPU time (say > 90%) for more than, say, 1 sec. If it does, then it's considered disturbing for the user and thus impractical.

The source code of our rootkit as well as the detector should be provided to the judges at the beginning of the contest. The judges will compile the rootkit and the detector and will copy the resulting binaries to all test machines.

After the completion of the contest, regardless of who wins, the sources for both the rootkit and the detector will be published on the Internet — for educational purposes, to allow others to research this subject.

Our current Blue Pill has been in development for only about 2 months (please note that we do not have rights to use the previous version developed for COSEINC) and it is more of a prototype, with primary use for our training in Vegas, rather than a "commercial grade rootkit". Obviously we will be discussing all the limitations of this prototype during our training. We believe that we would need about 6 months of full-time work by 2 people to turn it into such a commercial grade creature that would win the contest described above. We're ready to do this, but we expect that somebody compensate us for the time spent on this work. We would expect an industry standard fee for this work, which we estimate to be $200 USD per hour per person.

If Thomas Ptacek and his colleagues are so certain that they found a panacea for virtualization-based malware, then I'm sure that they will be able to find sponsors willing to financially support this challenge.

As a side note, the description for our new talk for Black Hat Vegas has just been published yesterday.
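For what it's worth, her arithmetic holds up on both counts. The blind-guess odds across five machines, with the all-infected and all-clean configurations excluded, are

\[
\frac{1}{2^5 - 2} \;=\; \frac{1}{30} \;\approx\; 3\%
\]

and the consulting bill that gives this post its title is just the quoted rate multiplied out (assuming a 2,080-hour work year, so 1,040 hours per person for six months):

\[
2\ \text{people} \times 1040\ \text{hours} \times \$200/\text{hour} = \$416{,}000
\]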

So, if you get past the polynomial math, the boolean logic expressions, and the fact that she considers this challenge "funny," reading between the HyperLines, you’ll extract the following:

  1. The Invisible Things team has asserted for some time that their rootkit is 100% undetectable
  2. They've worked for quite some time on their prototype, however it's not "commercial grade"
  3. In order to ensure success in winning the competition and thus proving the assertion, they need to invest time in polishing the rootkit
  4. They need 5 laptops to statistically smooth the curve
  5. The detector can't impact performance of the test subjects
  6. All works will be open-sourced at the conclusion of the challenge (perhaps Alan Shimel can help here! 😉 ) and, oh, yeah…
  7. They have no problem doing this, but someone needs to come up with $416,000 to subsidize the effort to prove what has already been promoted as fact

That last requirement is, um, unique.

Nate Lawson, one of the challengers, is less than impressed with this codicil and respectfully summarizes:

The final requirement is not surprising. She claims she has put four person-months of work into the current Blue Pill and it would require twelve more person-months for her to be confident she could win the challenge. Additionally, she has all the experience of developing Blue Pill for the entire previous year.

We've put about one person-month into our detector software and have not been paid a cent to work on it. However, we're confident even this minimal detector can succeed, hence the challenge. Our Blackhat talk will describe the fundamental principles that give the detector the advantage.

If Joanna's time estimate is correct, it's about 16 times harder to build a hypervisor rootkit than to detect it. I'd say that supports our findings.

I'm not really too clear on Nate's last sentence, as I didn't major in logic in high school, but to be fair, this doesn't actually discredit Joanna's assertion; she didn't say it was merely difficult to detect HV rootkits, she said it was impossible.  Effort and possibility are two entirely different things.

This is going to be fun.  Can’t wait to see it @ BlackHat.

See you there!

/Hoff

