Archive

Archive for September, 2007

Poetic Weekly Security Review

September 28th, 2007 2 comments

Another week has come and gone
and still the Internet hums along.
Despite predictions that are quite dour
like taking down our nation’s power.

Government security made the press
vendors, hackers, the DHS.
Google Apps and Cross-Site-Scripting,
through our mail the perps are sifting.

TJX, Canadians found, deployed Wifi
that wasn’t sound.

VMWare’s bugs in DHCP
shows there’s risk virtually

HD Moore’s become quite adroit
at extending the reach of Metasploit
hacking tools found a new home
run ’em on your cool iPhone!

Speaking of iPhone
Apple’s played a trick,
hack your phone
it becomes a brick!

Missile silos for sale, that’s a fact,
but it seems the auctioneer’s been hacked!
Applied to Gap as would-be clerks?
They lost your data, careless jerks!

Microsoft updated computers in stealth
which affected the poor machines’ good health
It seems the risk analysis battle’s won
who needs ISO 2-7-00-1?

Maynor was back in the news,
as his sick days he did abuse.
He claimed to contract Pleurisy,
but was at home with Halo3.

More fun’s in store with M&A
another deal, another day;
Huawei and 3Com getting hitched
who knows if TippingPoint gets ditched?

It’s never boring in InfoSec
Like watching a slow-mo car-crash wreck.
I wish you well my fellow geek
until this time, same place, next week.

/Hoff

Categories: Jackassery Tags:

Opening VMM/HyperVisors to Third Parties via API’s – Goodness or the Apocalypse?

September 27th, 2007 2 comments

This is truly one of those times that we’re just going to have to hold our breath and wait and see…

Prior to VMworld, I blogged about the expected announcement by Cisco and VMware that the latter would be opening the HyperVisor to third party vendors to develop their virtual switches for ESX.

This is extremely important in the long term because security vendors today who claim to have security solutions for virtualized environments are basically doing nothing more than making virtual appliance versions of their software that run as yet another VM on a host alongside critical applications.

These virtual appliances/applications are the same as you might find running on their stand-alone physical appliance counterparts, and they have no native access to the HyperVisor (or the vSwitch).  Most of them therefore rely upon enabling promiscuous mode on vSwitches to gain visibility into inter-VM traffic, which uncorks a nasty security genie of its own.  Furthermore, they impose load and latency on the VMs as they compete for resources with the very assets they seek to protect.
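To make that trade-off concrete, here’s a rough sketch (mine, not any vendor’s) of what an in-VM security appliance is effectively reduced to once promiscuous mode is enabled on the vSwitch port group: bind a raw socket and inspect every frame on the shared segment. The interface name is a placeholder and a real product obviously does far more than print headers; the point is the blunt-instrument visibility and the per-frame work the appliance VM has to do.

```python
# Minimal sketch of the "promiscuous mode on the vSwitch" approach to inter-VM
# visibility. Linux-only (AF_PACKET), run as root inside the appliance VM.
# The guest NIC must also be in promiscuous mode (e.g. `ip link set eth1 promisc on`),
# and the interface name "eth1" is an illustrative assumption.
import socket
import struct

ETH_P_ALL = 0x0003  # capture every ethertype

def sniff_inter_vm_traffic(iface: str = "eth1") -> None:
    s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(ETH_P_ALL))
    s.bind((iface, 0))
    while True:
        frame, _ = s.recvfrom(65535)
        dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
        # Every guest-to-guest frame on this vSwitch segment is now visible --
        # and so is the cost of copying and inspecting it, which competes for
        # cycles with the very VMs the appliance is supposed to protect.
        print(f"{src.hex(':')} -> {dst.hex(':')} "
              f"ethertype=0x{ethertype:04x} len={len(frame)}")

if __name__ == "__main__":
    sniff_inter_vm_traffic()
```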

The only exception to this that I know of currently is Blue Lane, who actually implements their VirtualShield product as a HyperVisor plug-in, which gives them tremendous advantages over products like Reflex and Catbird (all of which I will cover further in a follow-on post.)  Ed: I have been advised that this statement needs revision based upon recent developments — I will, as I mention, profile a comparison of Blue Lane, Catbird and Reflex in a follow-on post.  Sorry for the confusion.

At any rate, the specific vSwitch announcement described above was not forthcoming at the show, but a more important rumbling became obvious on the show floor after speaking with several vendors such as Cisco, Blue Lane, Catbird and Reflex; VMware was quietly beginning to provide third parties with access to the HyperVisor by exposing APIs, per this ZDNet article titled "VMware shares secrets in security drive":

Virtualization vendor VMware has quietly begun sharing some of its software secrets with the IT security industry under an unannounced plan to create better ways of securing virtual machines.

VMware has traditionally restricted access to its hypervisor code and, while the vendor has made no official announcement about the API sharing program tentatively called "Vsafe," VMware founder and chief scientist Mendel Rosenblum said that the company has started sharing some APIs (application program interfaces) with security vendors.

I know I should be happy about this, and I am, but now that we’re getting closer to the potential for better VM security, the implementation deserves some scrutiny.  We don’t have that yet because most of the vSafe detail is still hush-hush.

This is a double-edged sword.  While it represents a fantastic opportunity to expose functionality and provide visibility into the very guts of the VMM, allowing third-party software to interact with and control the HyperVisor and its dependent VMs/guest OSes, opening the kimono also creates a huge new attack surface for malicious use.

"We would like at a high level for (VMware’s platform) to be a better
place to run," he said. "To try and realize that vision, we have been
partnering with experts in security, like the McAfees and Symantecs,
and asking them about the security issues in a virtual world."

I’m not quite sure I follow that logic.  McAfee and Symantec are just as clueless as the bulk of the security world when it comes to security issues in a virtual world.  Their answer is usually "do what you normally do and please make sure to buy a license for our software on each VM!" 

The long-term play for McAfee and Symantec can’t be to continue to deploy bloatware on every VM.  Users won’t put up with the performance hit or the hit to their wallets.  They will have to re-architect to take advantage of the VMM APIs just like everyone else, but they have a more desperate timeframe:

Mukil Kesavan, a VMware intern studying at the University of Rochester, demonstrated his research into the creation of a host-based antivirus scanning solution for virtualized servers at the conference. Such a solution would enable people to pay for a single antivirus solution across a box running multiple virtual servers, rather than having to buy an antivirus solution for each virtual machine.

Licensing is going to be very critical to companies like these two very shortly as it’s a "virtual certainty" that the cost savings enjoyed by consolidating physical servers will place pressure on reducing the software licensing that goes along with it — and that includes security.
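I don’t have the details of Kesavan’s prototype, but the economics are easy to sketch: one scanner on the host walking each guest’s filesystem replaces N in-guest agents and, presumably, N licenses. The mount paths and the hash-based "signature database" below are purely hypothetical stand-ins for a real engine, which would inspect guest memory and filesystems through hypervisor APIs rather than simple mounts.

```python
# Toy illustration of host-based scanning across multiple guests -- not
# Kesavan's actual prototype. Paths and "signatures" are made up.
import hashlib
from pathlib import Path

KNOWN_BAD_SHA256 = {
    # hypothetical signature set; a real engine uses far more than file hashes
    "0" * 64,
}

def scan_guest(mount_point: Path) -> list[Path]:
    """Scan one guest's (read-only) filesystem mount; return flagged files."""
    hits = []
    for f in mount_point.rglob("*"):
        if f.is_file():
            digest = hashlib.sha256(f.read_bytes()).hexdigest()  # toy: whole-file read
            if digest in KNOWN_BAD_SHA256:
                hits.append(f)
    return hits

def scan_all_guests(guest_mounts: dict[str, Path]) -> None:
    # One scanner (and one license) covering N guests, versus one agent per VM.
    for vm_name, mount in guest_mounts.items():
        hits = scan_guest(mount)
        print(f"{vm_name}: {len(hits)} suspicious file(s)")

if __name__ == "__main__":
    scan_all_guests({
        "web01": Path("/mnt/guests/web01"),  # hypothetical read-only mounts
        "db01":  Path("/mnt/guests/db01"),
    })
```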


Rosenblum says that some of the traditional tools used to protect a hardware server work just as well in a virtualized environment, while others "break altogether."


"We’re trying to fix the things that break, to bring ourselves up to
the level of security where physical machines are," he said. "But we
are also looking to create new types of protection."

Rosenblum said the APIs released as part of the initiative
offer security vendors a way to check the memory of a processor, "so
they can look for viruses or signatures or other bad things."

Others allow a security vendor to check the calls an
application within a virtual machine is making, or at the packets the
machine is sending and receiving, he said.

I think Rosenblum’s statement is interesting in a couple of ways:

  1. His goal, as quoted, is to fix the things that virtualization breaks and bring security up to the level of physical servers.  Unlike every other statement from VMware spokesholes, this statement suggests that virtualized environments are less secure than physical ones.  Huh.
  2. I think this area of focus — when combined with the evolution of the Determina acquisition — will yield excellent security gains.  Extending monitoring and visibility into the isolated memory spaces of the virtual processors in a VM means that we may be able to counter attacks without having to depend solely on the virtual switches; it gives you application-level visibility without the need for another agent.

The Determina acquisition is really a red herring for VMware.  Determina’s "memory firewall" seeks to protect a system "…from buffer overflow attacks, while still allowing the system to run at high speeds. It also developed "hot-patching" technology–which allows servers to be patched on the fly, while they are still running."  I’ve said before that this acquisition was an excellent move.  Let’s hope the integration goes well.

If you imagine this chunk built into the VMM, combining exposed VMM APIs with a lightweight VMM running from hardware (flash), embedded natively into a server, less the bloated service console, it really is starting to head down an interesting path.  This is what ESX Server 3i is designed to provide:

ESX Server 3i has considerable advantages over its predecessors from a security standpoint. In this latest release, which will be available in November, VMware has decoupled the hypervisor from the service console it once shipped with. This console was based on a version of the Red Hat Linux operating system.


As such, ESX 3i is a mere 32MB in size, rather than 2GB.

Some 50 percent of the vulnerabilities that VMware was patching in prior versions of its software were attributable to the Red Hat piece, not the hypervisor.

"Our hope is that those vulnerabilities will all be gone in 3i," Rosenblum said.

Given Kris Lamb’s vulnerability distribution data from last week, I can imagine that everyone hopes that these vulnerabilities will all be gone, too.  I wonder if Kris can go back and isolate how many of the vulns listed as "First Party" were attributable to the service console (the underlying RH Linux OS) accompanying the HyperVisor.  This would be good to know.  Kris? 😉

At any rate, life’s about trade-offs and security’s no different.  I think that as we see the API’s open up, so will more activity designed to start tearing at the fleshy underbelly of the VMM’s.  I wonder if we’ll see attacks specific to flash hardware when 3i comes out?

/Hoff

(P.S. Not to leave XenSource or Viridian out of the mix…I’m sure that their parent companies (Citrix & Microsoft), which have quite a few combined security M&A transactions behind them, are not dragging their feet on security portfolio integration, either.)

 

Categories: Virtualization, VMware Tags:

Wartermarking & DRM Round 2: Amazon.com Watermarking Their MP3’s…

September 26th, 2007 No comments

About a month ago, I posted about a CNET article by Matt Rosoff which suggested that digital watermarking would replace DRM.  My suggestion was that it was pretty obvious watermarking won’t "replace" DRM; it is merely another accepted application of it.

Here’s a really interesting story from Gizmodo about how, as mentioned in the article, Amazon is now claiming to be DRM-free whilst embedding digital watermarks into the MP3s customers purchase.  The article is titled "Still DRM Free: Amazon’s MP3s Contain Watermarks, But Not the Privacy-Invading Variety."

Interestingly, the author (Adam Frucci) shows an image featuring the audio waveforms of the original recording, the watermarked encoding, and the resultant subtracted watermark artifacts.

Amazon.com’s new MP3 store watermarks its MP3s, but only with information stating where the songs were purchased, not who did the purchasing, according to the online uberstore.

That’s the good news. The bad news is that this issue has inspired me to ramble about the stupidity of the whole idea of watermarking tracks with identifying info.

I mean, what would be the point? Most music that gets widely pirated comes from scene groups that do rips from CDs, not from people who legally purchase music online. It’s the same thing I never understood about DRM: it only takes one copy getting ripped or spread around for something to be easily accessed in the pirate-o-sphere, so why waste so much time keeping normal people from sharing? I mean, even if they did find some Kanye song in a girl’s shared Soulseek folder and it was ID’d with some dude’s name, what does that prove? Not much. In any case, Amazon doesn’t look to be doing anything of the sort, so bravo to that, and another kudos to them for selling only straight-up MP3s. Now just get all the labels on board and we’ll have the music store we’ve all been clamoring for for so long.

I agree with the author that if we assume the watermark just describes where the song was purchased, it does little good beyond the concept raised in the previous article I referenced above regarding what Universal plans to use watermarking for:

Universal can then use this data to help decide whether the risk of piracy outweighs the increased sales from DRM-free MP3 files, segmenting this decision by particular markets. For example, it might find that new Top 40 singles are more likely to find their way onto file-trading networks than classic rock from the 1970s.

But that’s really not the reason for this post.  The reason for this post is the bold-faced, underlined text in the first paragraph of the quote above: "according to the online uberstore."  The author is simply going on Amazon’s word that the artifacts only contain purchase-origin data and nothing regarding the purchaser?

I find it odd that he’s not particularly concerned with validating Amazon’s claims and is willing to take it at face value that this is all the watermarks contain in order to support such a lofty title for the article.
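For what it’s worth, a first-pass validation isn’t rocket science; it’s essentially what that waveform image depicts. If you had a known-clean copy of the same track, you could decode both to PCM, subtract, and see whether anything non-trivial remains. A rough sketch follows (file names are hypothetical, both files are assumed already decoded to 16-bit WAV at the same rate, and it only tells you that something is embedded, not what it encodes):

```python
# Rough first-pass check for embedded artifacts: subtract a known-clean
# reference from the purchased copy and measure what's left over. Real MP3
# decodes would also need sample alignment and lossy-codec noise accounted for.
import array
import math
import wave

def residual_rms(reference_wav: str, purchased_wav: str) -> float:
    def samples(path: str) -> array.array:
        with wave.open(path, "rb") as w:
            assert w.getsampwidth() == 2, "expects 16-bit PCM"
            return array.array("h", w.readframes(w.getnframes()))

    ref, bought = samples(reference_wav), samples(purchased_wav)
    n = min(len(ref), len(bought))
    diff = [bought[i] - ref[i] for i in range(n)]
    return math.sqrt(sum(d * d for d in diff) / n)

if __name__ == "__main__":
    # hypothetical file names
    rms = residual_rms("reference.wav", "amazon_purchase.wav")
    print(f"residual RMS: {rms:.1f} (0 would mean bit-identical audio)")
```

It still wouldn’t prove what the payload says, but it would at least tell you whether "only where it was purchased" is all that’s actually in there.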

/Hoff

 

Categories: Uncategorized Tags:

What Do the Wicked Witch of the East and a Stranded House Ditched on the Freeway Have to Do with Rogue Virtualization Deployments?

September 26th, 2007 1 comment

OK, hang on tight.  This one’s full of non-sequiturs, free-form associations, housemoving debacles, and several "Wizard of Oz" references…

First comes the setup, courtesy of BoingBoing, wherein a man who has permission to take one route moving his house down a specific freeway takes another route instead without telling anyone:

Apparently some guy ditched his house on the Hollywood Freeway, and it’s been there since Saturday.
Patrick Richardson’s now immobile home was being moved Saturday from Santa Monica to Santa Clarita when several mishaps, including a roof-shredding blow while attempting to pass beneath an overpass, slowed its progress and it fell off its trailer.

Richardson, 45, got an oversized load permit from the California Department of Transportation. But instead of following the authorized Santa Monica-San Diego-Golden State freeways route, authorities said, he headed through downtown Los Angeles and then onto the Hollywood Freeway.

In the downtown area, the wheels started falling off, California Highway Patrol Officer Jason McCutcheon said.

Now the punchline courtesy of ComputerWorld wherein IT managers describe taking their own interesting routes unannounced whilst adopting virtualization. 

A couple of these choice snippets seem to indicate that many corporate IT managers are ignoring posted routes, choosing different off-ramps, and often experiencing the virtual equivalent of losing their roofs, feeling the wheels come off, and leaving their infrastructure stuck on the information superhighway:

IT managers at some companies can feel forced to hide plans from end users and vendors in order to overcome potential objections to virtualization, said IT professionals and analysts attending Computerworld’s Infrastructure Management World (IMW) conference, held earlier this month in Scottsdale, Ariz.

In some cases, end users object to virtualization because they’re concerned that virtual machines lack the security and performance of dedicated servers.

Companies are taking a variety of measures to overcome such obstacles, including adopting “don’t ask, don’t tell” policies in order to get virtual applications running without notifying users and vendors.

Some IT professionals at the conference defended decisions to keep users out of the loop, while others said such dishonest dealings could prove tricky.

“It’s not like we’re hiding anything,” said Wendy Saadi, a virtualization project manager for the city government of Mesa, Ariz. “My users don’t care what servers we run their applications on, for the most part, as long as it all works.”

However, Saadi noted that an initial effort by a small Mesa IT team to implement virtualization without notifying users — or the rest of the IT organization — did force a change in direction.

“When we first started, [the small team] watched training videos about how to virtualize everything without asking anyone first,” Saadi said. “So they did that, and we were getting a reputation [among users and other Mesa IT managers] as ‘that’ server group. We put the brakes on everything.”

Software vendors are also erecting barriers to efforts to set up virtual computing systems, according to IMW attendees.

Some vendors won’t support their software at all if it’s run on virtual machines, they said. Those that do support virtualized deployments have widely varied pricing schemes.

David Hodge, manager of computer systems at Systech Inc., a Woodridge, Ill.-based vendor of billing and dispatch software for concrete mixers, is one IT staffer who doesn’t tell his vendors and end users about virtualization projects right away. However, his employer is a software vendor that prohibits users from virtualizing its software.

“We’re one of those vendors that doesn’t allow our customers to do virtualization, but I’m off in my corner doing it,” he acknowledged. “It makes my job easier to just put it out there and then tell [users] later. I eventually do tell them, but just not during the initial period.”

Herb…cleanup, aisle seven!

[Image: the Wicked Witch of the East, crushed beneath Dorothy’s house]
Wow.  This is why trying to fix social problems with technology will never work.  The last time we tried to mix magic and housemoving we got this:

Sure, it all ended well, but the Scarecrow (InfoSec), the Lion (compliance/audit) and the Tin Man (IT) went through hell to get there…I guess there’s no place like /var/home.

Clicking our heels ain’t gonna make stuff like this better anytime soon.  We need to get our arms around the policies regarding virtualization deployments *before* they start happening, or else you can expect to be pulling folks out from under the collapsed weight of their datacenters.

…if I only had a brain…you got all the references, right?  I knew that you would!

/Hoff

Categories: Virtualization Tags:

Amrit: I Love You, Man…But You’re Still Not Getting My Bud Lite

September 26th, 2007 1 comment

I’ve created a monster!

Well, a humble, well-spoken and intelligent monster who — like me — isn’t afraid to admit that sometimes it’s better to let go than grip the bat too tight.  That doesn’t happen often, but when it does, it’s a wonderful thing.

I reckon that despite having opinions, perhaps sometimes it’s better to listen with two holes and talk with one, shrugging off the almost autonomic hardline knee-jerks of defensiveness that come from having spent years of single-minded dedication to cramming good ideas down people’s throats.

It appears Amrit’s been speaking to my wife, or at least they read the same books.

So it is with the utmost humility that I take full credit for nudging along Amrit’s renaissance and spiritual awakening as evidenced in this, his magnum opus of personal growth titled "Embracing Humility – Enlightened Information Security," wherein a dramatic battle of the Ego and Id is played out in daring fashion before the world:


Too often in IT ego drives one to be rigid and stubborn. This results in a myopic and distorted perspective of technology that can limit one’s ability to gain an enlightened view of dynamic and highly volatile environments. This defect is especially true of information security professionals that tend towards ego driven dispositions that create obstacles to agility. Agility is one of the key foundational tenets to achieving an enlightened perspective on information security; humility enables one to become agile.  Humility, which is far different from humiliation, is the wisdom to realize one’s own ignorance, insignificance, and limitations of intellect, without which one cannot see the truth.

19th century philosopher Herbert Spencer captured this sentiment in an oft-cited quote: “There is a principle which is a bar against all information, which is proof against all arguments and which cannot fail to keep a man in everlasting ignorance – that principle is contempt prior to investigation.”

The security blogging community is one manifestation of the information security profession, based upon which one could argue that security professionals lack humility and generally propose contempt for an idea prior to investigation. I will relate my own experience to highlight this concept.

Humility and the Jericho Forum
I was one of the traditionalists that was vehemently opposed to the ideas, at least my understanding of the ideas, put forth by the Jericho Forum. In essence all I heard was “de-perimeterization”, “Firewalls are dead and you do not need them”, and “Perfect security is achieved through the end-point” – I lacked the humility required to properly investigate their position and debated against their ideas blinded by ego and contempt. Reviewing the recent spate of blog postings related to the Jericho Forum, I take solace in knowing that I was not alone in my lack of humility. The reality is that there is a tremendous amount of wisdom in realizing that the traditional methods of network security need to be adjusted to account for a growing mobile workforce, coupled with a dramatic increase in contractors, service providers and non-payrolled actors, all of which demand access to organizational assets, be it individuals, information or infrastructure. In the case of the Jericho Forum’s ideas, I lacked humility and it limited my ability to truly understand their position, which limits my ability to broaden my perspectives on information security.


Good stuff.

It takes a lot of chutzpah to privately consider changing one’s stance on matters: letting go of preconceived notions and embracing a sense of openness and innovation.  It’s quite another thing to do it publicly.  I think that’s very cool.  It’s always been a refreshing study in personal growth when I’ve done it.

I know it’s still very hard for me to do in certain areas, but my kids — especially my 3 year old — remind me everyday just how fun it can be to be wrong and right within minutes of one another without any sense of shame.

I’m absolutely thrilled if any of my posts on Jericho and the ensuing debate have made Amrit or anyone else consider for a moment that perhaps there are other alternatives worth exploring in the way in which we think, act and take responsibility for what we do in our line of work.

I could stop blogging right now and…

Yeah, right.  Stiennon, batter up!

/Hoff

(P.S. Just to be clear, I said "batter" not "butter"…I’m not that open minded…)

Prediction: Google Will Acquire ThePudding…Parsing Voice Calls for Targeted Ad Delivery…

September 24th, 2007 5 comments

A couple of weeks ago I blogged about the potential coming of the GooglePhone as a follow-on to all things Google and their impending World Domination Tour™.

The highlight of the GooglePhone rambling was my fun little illustration of how, if Google won the spectrum auction and became a mobile operator, they would offer free wireless service on the GooglePhone, underwritten by ad revenues and utilizing some unique applications of their new and existing services:

So, without the dark overlord overtones, let’s say that Google wins the auction.  They become a mobile operator — or they can likely lease that space back to others with some element of control over the four conditions above.  Even if you use someone else’s phone and resold service, Google wins.

This means that they pair the GooglePhone with the newly acquired GoogleFi (as I call it), served securely cached out of converged IMS GooglePOPs which I blogged about earlier.  If the GooglePhone has some form of WiFi capabilities, I would expect it will have the split capability to use that network connectivity, also.

…but here’s the rub.  Google makes its dough from serving Ads.  What do you think will subsidize the on-going operation and assumed "low cost" consumer service for the GooglePhone?

Yup.  Ads.

So, in between your call to Aunt Sally (or perhaps before, during or after) you’ll get an Ad popping up on your phone for sales on Geritol.  An SMS will be sent to your GooglePhone which will be placed in your GoogleMail inbox.  It’ll then pop up GoogleMaps directing you to the closest store.  When you get to the store, you can search directly for the Geritol product you want by comparing it to pictures provided by Google Photos and interact in realtime with a pharmacist using Google Talk, whereupon you’ll be able to pay for said products with Google Checkout.

All. From. Your. GooglePhone.

All driven, end-to-end, through GoogleNet.  Revenue is shared throughout the entire transaction and supply chain driven from that one little ad.

I got a ton of emails suggesting I was a little GoogleMad and that the blue/underlined section above was neither possible nor sustainable from a business model perspective.  To address the former point regarding the technical possibility of what amounts to electronic parsing of audio — of course it is possible.  I’ve blogged about that before in my DRM/DLP/Watermarking discussions.

To the latter point regarding using this as a base for a business model, check this out from TechCrunch today:

The New York Times is reporting today on a new service called ThePudding that provides free, PC-based phone calls to anywhere in the US or Canada.

The big catch: computers in Fremont, CA will eavesdrop on and analyze every word of your conversation so they can serve up advertisements tailored to the topic at hand.

So all this takes is a move to a platform like the GooglePhone (what’s a "PC" today, anyway?) to enable this in the mobile market…looks like these guys were born to be bought!

Users initiate a phone call simply by visiting ThePudding’s website (currently in private beta) and entering a phone number into the browser. After the call begins, advertisements tailored to the conversation will begin to appear on screen. The NYT has a good screenshot of what these advertisements will look like here.

That’s the exact model I suggested in the underlined section above!  Quite honestly, with the "privacy specter" aside, this would be pimp!  It’s the natural voice-operated semantic web!
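Conceptually, the ad-serving half of this is trivial once a transcript exists; the hard (and presumably proprietary) part is the real-time speech-to-text in front of it. Here’s a toy sketch of the downstream matching, with a completely made-up ad inventory and keyword list, just to show how little machinery is needed once the words are in hand:

```python
# Toy sketch of transcript-to-ad matching. The speech-to-text front end is
# assumed to exist; ad inventory and keywords are entirely made up.
AD_INVENTORY = {
    "pharmacy": ["geritol", "pharmacy", "prescription", "vitamins"],
    "travel":   ["flight", "hotel", "vacation", "airline"],
    "autos":    ["car", "lease", "dealership", "hybrid"],
}

def match_ads(transcript: str, max_ads: int = 2) -> list[str]:
    words = set(transcript.lower().split())
    scored = []
    for ad, keywords in AD_INVENTORY.items():
        score = sum(1 for k in keywords if k in words)
        if score:
            scored.append((score, ad))
    # highest keyword overlap wins; ties broken arbitrarily
    return [ad for _, ad in sorted(scored, reverse=True)[:max_ads]]

if __name__ == "__main__":
    call_snippet = "Aunt Sally said the pharmacy had a sale on Geritol and vitamins"
    print(match_ads(call_snippet))  # -> ['pharmacy']
```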

Phone conversations are monitored only by computers, not actual human beings. The company also does not record any of the conversations or log any of the topics discussed. Therefore, advertisements are tailored to each particular phone call and not to trends in users’ calling behavior.

ThePudding has already experienced a fair amount of backlash, with some calling it a terrible idea because users will not be comfortable enough with allowing their phone conversations to be monitored. There is also the concern that niche users will not be swayed by this completely free offering, because they already pay very little for services like Skype. However, ThePudding may be a potential acquisition target for Skype itself, which may be interested in developing an ad-based revenue model.

While Skype is mentioned, I’d add a whole host of others to this list if they’re smart…

Despite the criticism, ThePudding does not seem all that different to me from a privacy perspective than Gmail. If users are comfortable with letting computers analyze their email messages and display targeted advertisements alongside them, why won’t they be comfortable with allowing the same thing with their verbal communications? Perhaps there is an important psychological factor at play here that will always make people unwilling to let strangers monitor what they actually speak. But consumers are caring less and less about how much information they provide online about themselves to unverified companies, so it doesn’t seem implausible to me that with time many people will overcome their anxieties about this type of service.

I totally agree.

While ThePudding is currently only available through the web browser on PCs, the company has plans to expand into mobile (and to display advertisements on the screens of handheld devices).

ThePudding is a service of Pudding Media, which was founded by two Israelis with experience in military intelligence and telecommunications. The company is based in San Jose, California.

So whether it’s Google, Skype, Yahoo or Cisco, you can expect this technology to make its way into/onto communications platforms in the near future; it’s a natural extension of data mining.  We get targeted ads today in search engines; unified communications is next.  I wonder who’s going to pony up the cash.  I still bet on Google — it’s a natural integration into GrandCentral!

…still waiting for my GooglePhone, although the iPhone would be a pretty damned good platform for this, too 😉

/Hoff

P.S. Did you see that Google is now sinking its own transpacific oceanic fiber cable…

Categories: Google Tags:

All Your COTS Multi-Core CPU’s With Non-Optimized Security Software Are Belong To Us…

September 24th, 2007 3 comments

{No, I didn’t forget to spell-check the title.  Thanks for the 20+ emails on the topic, though.  For those of you not familiar with the etymology of "…all your base are belong to us," please see here…}

I’ve been in my fair share of "discussions" regarding the perceived value in the security realm of proprietary custom hardware versus that of COTS (Commercial Off The Shelf) multi-core processor-enabled platforms.

Most of these debates center around what can be described as a philosophical design divergence suggesting that, given the evolution and availability of multi-core Intel/AMD COTS processors, the need for proprietary hardware is moot.

Advocates of the COTS approach are usually pure software-only vendors who have no hardware acceleration capabilities of their own.  They often reminisce about the past industry failures of big-dollar, fixed-function ASIC-based products (while seeming to ignore re-purposeable FPGAs) to add weight to the theory that Moore’s Law is all one needs to make security software scream.

What’s really interesting and timely about this discussion is the notion of how commoditized the OEM/ODM appliance market has become when compared to the price/performance ratio offered by COTS hardware platforms such as Dell, HP, etc. 

Combine that with the notion of how encapsulated "virtual appliances" provided by the virtualization enablement strategies of folks like VMWare will be the next great equalizer in the security industry and it gets much more interesting…


There’s a Moore’s Law for hardware, but there’s also one for software…didja know?


Sadly, one of the most overlooked points in the multi-core argument is that exponentially faster hardware does not necessarily translate to exponentially improved software performance.  You may get a performance bump by forking multiple single-threaded instances of software, or even pinning them to specific cores via CPU affinity by spawning networking off to one processor/core and (for example) firewalling to another, but that’s just masking the issue.

This is just like tossing a 426 Hemi in a Mini Cooper; your ability to use all that horsepower is limited by what torque/power you can get to the ground when the chassis isn’t designed to efficiently harness it.

For the record, as you’ll see below, I advocate a balanced approach: use proprietary, purpose-built hardware for network processing and offload/acceleration functions where appropriate and ride the Intel multi-core curve on compute stacks for general software execution with appropriately architected software crafted to take advantage of the multi-core technology.

Here’s a good example.

In my last position with Crossbeam, the X-Series platform modules relied on a combination of proprietary network processing featuring custom NPU’s, multi-core MIPS security processors and custom FPGA’s paired with Intel reference design multi-core compute blades (basically COTS-based Woodcrest boards) for running various combinations of security software from leading ISV’s.

What’s interesting about the bulk of the security software from these best-of-breed players you all know and love that runs on those Intel compute blades is that, even with two sockets and two cores per socket, it is difficult to squeeze large performance gains out of the ISVs’ software.

Why?  There are lots of reasons: kernel vs. user mode, optimization for specific hardware and kernels, no-packet-copy network drivers, memory and data plane architectures, and the like.  However, one of the most interesting contributors to this problem is the fact that many of the core components of these ISVs’ software were written 5+ years ago.

While these applications were born as tenants on single- and dual-processor systems, it has become obvious that developers cannot depend upon the increased clock speeds of processors or the availability of multi-core sockets alone to accelerate their single-threaded applications.

To take advantage of the increase in hardware performance, developers must redesign their applications to run in a threaded environment, as multi-core CPU architectures feature two or more processor compute engines (cores) and provide fully parallelized, hyperthreaded execution of multiple software threads.
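To make that concrete, here’s a deliberately simplified sketch (mine, not any ISV’s) of the difference between the two models, using Python’s multiprocessing module as a stand-in for whatever threading framework a real inspection engine would use. The inspect() routine is a placeholder for actual security processing; the only point is that the serial loop leaves every core but one idle, while the worker pool spreads the same queue across all of them.

```python
# Illustration only: serial vs. multi-core processing of a work queue.
# "inspect" stands in for real packet/content inspection; sizes are arbitrary.
import hashlib
import multiprocessing as mp
import os
import time

def inspect(payload: bytes) -> str:
    # placeholder for expensive per-packet / per-object security processing
    digest = payload
    for _ in range(2000):
        digest = hashlib.sha256(digest).digest()
    return digest.hex()[:8]

def run_serial(items):
    # one thread of execution: extra cores sit idle
    return [inspect(i) for i in items]

def run_parallel(items):
    # same work, spread across however many cores the box has
    with mp.Pool(processes=os.cpu_count()) as pool:
        return pool.map(inspect, items)

if __name__ == "__main__":
    work = [os.urandom(1500) for _ in range(400)]  # roughly MTU-sized blobs
    for name, fn in (("serial", run_serial), ("parallel", run_parallel)):
        t0 = time.time()
        fn(work)
        print(f"{name:8s} {time.time() - t0:.2f}s on {os.cpu_count()} cores")
```

Faster clock speeds used to make the serial version quicker for free; more cores only help the second shape, and retrofitting a five-plus-year-old single-threaded engine into that shape is exactly the re-write these ISVs face.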


Enter the Impending Multi-Core Crisis


But there’s a wrinkle in this mutually-dependent hardware/software growth curve that demonstrates a potential crisis with multi-core evolution.  This crisis will affect the way in which developers evaluate how to move forward with both their software and the hardware it runs on.

This comes from the blog post titled "Multicore Crisis" from SmoothSpan’s Blog:

[Chart from SmoothSpan: CPU clock speeds by year, flattening out after roughly 2002]

The Multicore Crisis has to do with a shift in the behavior of Moore’s Law.  The law basically says that we can expect to double the number of transistors on a chip every 18-24 months.  For a long time, it meant that clock speeds, and hence the ability of the chip to run the same program faster, would also double along the same timeline.  This was a fabulous thing for software makers and hardware makers alike.  Software makers could write relatively bloated software (we’ve all complained at Microsoft for that!) and be secure in the knowledge that by the time they finished it and had it on the market for a short while, computers would be twice as fast anyway.  Hardware makers loved it because with machines getting so much faster so quickly people always had a good reason to buy new hardware.

Alas this trend has ground to a halt!  It’s easy to see from the chart above that relatively little progress has been made since the curve flattens out around 2002.  Here we are 5 years later in 2007.  The 3GHz chips of 2002 should be running at about 24 GHz, but in fact, Intel’s latest Core 2 Extreme is running at about 3 GHz.  Doh!  I hate when this happens!  In fact, Intel made an announcement in 2003 that they were moving away from trying to increase the clock speed and over to adding more cores.  Four cores are available today, and soon there will be 8, 16, 32, or more cores.

What does this mean?  First, Moore’s Law didn’t stop working.  We are still getting twice as many transistors.  The Core 2 now includes 2 complete CPU’s for the price of one!  However, unless you have software that’s capable of taking advantage of this, it will do you no good.  It turns out there is precious little software that benefits if we look at articles such as Jeff Atwood’s comparison of 4 core vs 2 core performance.  Blah!  Intel says that software has to start obeying Moore’s Law.  What they mean is software will have to radically change how it is written to exploit all these new cores.  The software factories are going to have to retool, in other words.

With more and more computing moving into the cloud on the twin afterburners of SaaS and Web 2.0, we’re going to see more and more centralized computing built on utility infrastructure using commodity hardware.  That means we have to learn to use thousands of these little cores.  Google did it, but only with some pretty radical new tooling.

This is fascinating stuff and may explain why many of the emerging appliances from leading network security vendors today that need optimized performance and packet processing do not depend solely on COTS multi-core server platforms. 

This is even the case with new solutions that have been written from the ground-up to take advantage of multi-core capabilities; they augment the products (much like the Crossbeam example above) with NPU’s, security processors and acceleration/offload engines.

If you don’t have acceleration hardware, as is the case for most pure software-only vendors, this means that a fundamental re-write is required in order to take advantage of all this horsepower.  Check out what Check Point has done with CoreXL, which is their "…multi-core acceleration technology [that] takes advantage of multi-core processors to provide high levels of security inspection by dynamically sharing the load across all cores of a CPU."

We’ll have to see how much more juice can be squeezed from software and core-stacking as processor performance gains narrow (see above), balanced against core density, without a complete re-tooling of software stacks versus doing it in hardware.

Otherwise, combined with this smoothing/dipping of the Moore’s Law hardware curve, not retooling software will mean that proprietary processors may play an increasingly important role as the cycle replays.

Interesting, for sure.

/Hoff

 

Categories: Technology Review Tags:

Can We End the “Virtualization Means You’re Less/More Secure” Intimation?

September 22nd, 2007 1 comment

I’d like to frame this little ditty with a quote that Marcus Ranum gave in a face-off between him and Bruce Schneier in this month’s Information Security Magazine, wherein he states:

"Will the future be more secure? It’ll be just as insecure as it possibly can, while still continuing to function. Just like it is today."

Keep that in mind as you read this post on virtualization security, won’t you?

Over the last few months we’ve had a serious malfunction in the supply chain management of Common Sense.  It’s simply missing from the manifest in many cases.

Such is the case wherein numerous claims of undelivered security common sense are being filed: instead of shipping clue in boxes filled with virtualization goodness, all we get are those styrofoam marketing peanuts suggesting that we’re either "more" or "less" secure.  More or less compared to what, exactly?

It’s unfortunate that it’s still not clear enough at this point that we’re at a crossroads with virtualization.

I believe it’s fair to suggest that the majority of us know that the technology represents fantastic opportunities, but vendors and customers alike continue to cover their ears, eyes and mouths, ignoring certain realities inherent in the adoption of any new application or technology when it comes to assessing the risk associated with deploying it.

Further, generalizations about virtualization being "more" or "less" secure than non-virtualized platforms represent an exercise in tail-chasing; more and more, they amount to specious claims delivered, in many cases, without substantiated backing…

Here’s a perfect example of this sort of thing from a CMP ChannelWeb story titled "Plotting Security Strategy in a Virtual World":

"In many ways, securing virtual servers is little different from securing physical servers, said Patrick Lin, senior director of product management at VMware."

We’ve talked about this before.  This is true, except (unfortunately) for the fact that we’ve lost a tremendous amount of visibility from the network security practitioner’s perspective: now the "computer is the network," and many of the toolsets and technologies have not been adapted to accommodate a virtualized instantiation of controls and detection mechanisms such as firewalls, IDS/IPS and other typical gateway security functions.

"At the end of the day, they are just Windows machines," Lin said. "When you turn a physical server into a virtual server, it’s no more vulnerable that it was before. There are not new avenues of attack all of a sudden."

That statement is false, misleading and ironic given the four vulnerabilities we just saw reported over the last 3 days that are introduced onto a virtual host thanks to the VMware software which enables the virtualization capabilities.

If you aren’t running VMWare’s software, then you’re not vulnerable to exploit from these vulnerabilities.  This is an avenue of attack.  This represents a serious vulnerability.  This is a new threat/attack surface "all of a sudden."

[Ed: I simply had to add this excerpt from Kris Lamb’s fantastic blog post (IBM/ISS X-Force) that summarized empirically the distribution of VMware-specific vulnerabilities from 1999-2007.]

We pulled all known vulnerabilities across all of VMware’s products since 1999. I then focused on categorizing by year, by severity, by impact, by vector and by whether the vulnerability was in VMware’s proprietary first-party components or in third-party components that they use in their products.

Once I pulled all the data, sorted and structured it in various ways, it summarized like this:

VMware Vulns by Year   Total Vulns   High Risk Vulns   Remote Vulns   First-Party Code   3rd-Party Code
Vulns in 1999                    1                 1              0                  1                0
Vulns in 2000                    1                 1              0                  1                0
Vulns in 2001                    2                 0              0                  2                0
Vulns in 2002                    1                 1              1                  1                0
Vulns in 2003                    9                 5              5                  5                4
Vulns in 2004                    4                 2              0                  2                2
Vulns in 2005                   10                 5              5                  4                6
Vulns in 2006                   38                13             27                 10               28
Vulns in 2007                   34                18             19                 22               12
TOTALS                         100                46             57                 48               52


So what are some of the interesting trends?

  • There have been 100 vulnerabilities disclosed across all of VMware’s virtualization products since 1999.
  • 57% of the vulnerabilities discovered in VMware products are remotely accessible, while 46% are high risk vulnerabilities.
  • 72% of all the vulnerabilities in VMware virtualization products have been discovered since 2006.
  • 48% of the vulnerabilities in VMware virtualization products are in first-party code and 52% are in third-party code that their non-hosted Linux-based products use.
  • Starting in 2007, the number of vulnerabilities discovered in first-party VMware components almost doubled that of vulnerabilities discovered in third-party VMware components. 2007 is the first time where first-party VMware vulnerabilities were greater than third-party VMware vulnerabilities.

How do I interpret these trends?

  • It is clear that with the increase in popularity, relevance and deployment of virtualization starting in 2006, vulnerability discovery energies have increasingly focused on finding ways to exploit virtualization technologies.
  • Combine the vulnerabilities in virtualization software, vulnerabilities in operating systems and applications that still exist independent of the virtualization software, the new impact of virtual rootkits and break-out attacks with the fact that in a virtual environment all your exploitation risks are now consolidated into one physical target where exploiting one system could potentially allow access and control of multiple systems on that server (or the server itself). In total, this adds up to a more complex and risky security environment.
  • Virtualization does not equal security!

I’ve already blog-leeched enough of Kris’ post, so please read his blog entry to see the remainder of his findings, but I think this does a really good job of putting to rest some of the FUD associated with this point.
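If you want to sanity-check the headline percentages, they fall straight out of the per-year rows in the table above; a quick sketch:

```python
# Re-derive the summary percentages from the per-year table above.
# Columns per year: (total, high_risk, remote, first_party, third_party)
VULNS = {
    1999: (1, 1, 0, 1, 0),
    2000: (1, 1, 0, 1, 0),
    2001: (2, 0, 0, 2, 0),
    2002: (1, 1, 1, 1, 0),
    2003: (9, 5, 5, 5, 4),
    2004: (4, 2, 0, 2, 2),
    2005: (10, 5, 5, 4, 6),
    2006: (38, 13, 27, 10, 28),
    2007: (34, 18, 19, 22, 12),
}

total, high, remote, first, third = (sum(col) for col in zip(*VULNS.values()))
since_2006 = sum(v[0] for year, v in VULNS.items() if year >= 2006)

print(f"total vulnerabilities: {total}")                  # 100
print(f"remotely accessible:   {remote / total:.0%}")     # 57%
print(f"high risk:             {high / total:.0%}")       # 46%
print(f"disclosed since 2006:  {since_2006 / total:.0%}") # 72%
print(f"first-party: {first / total:.0%}  third-party: {third / total:.0%}")  # 48% / 52%
```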

Let’s continue to deconstruct Mr. Lin’s commentary:

Even so, server virtualization vendors are taking steps to ensure that their technology is itself up-to-date in terms of security.

OK, I’ll buy that based upon my first hand experience thus far.  However, here’s where the example takes a bit of a turn as it seeks to prove that a certification somehow suggests that a virtualization platform is more secure:

Lin said VMware Server ESX is currently certified at Common Criteria Level 2 (CCL2), a security standard, and is in the process of applying for CCL4 for its Virtual Infrastructure 3 (VI3) product suite.

Just to be clear, Common Criteria Evaluation Assurance Levels don’t guarantee the security of a product; they demonstrate that, under controlled evaluation, a product configured in a specific way meets certain assurance requirements that provide a higher level of confidence that the product’s security functions will be performed correctly and effectively.

CC certification does not mean that a system is not vulnerable to, or is resilient against, many prevalent classes of attack.  Also, in many ways, CC provides the opposite of being "up-to-date" from a security perspective, because once a system has been certified, modifying it substantially with security code additions or even patches renders the certification invalid and requires re-test.

If you’re interested, here’s a story describing some interesting aspects of CC certification and what it may mean for a product’s security and resilience titled "Common Criteria is Bad for You."

I’m working very hard to pull together a document which outlines exposure and risk associated with deploying virtualization with as much contextual and temporal relevance as I can muster.  Numerous other people are, also.  This way we can quantify the issues at hand rather than listening to marketing squawk boxes yelp from their VoxHoles about how secure/insecure virtualization platforms are without examples or solutions…

In the meantime, as a friendly piece of advice, might I suggest that virtualization vendors such as VMware kindly pay closer attention to how security information regarding virtualization is communicated?

Statements like those above, made by product managers who are not security domain experts, only serve to erode the trust and momentum you’re trying to gain.

There are certainly areas in which virtualization provides interesting, useful, unique and (in some cases) enhanced security over what is found in non-virtualized environments.  The converse also holds true.

Let’s work on communicating these differences in specifics instead of generalities.

/Hoff

Categories: Virtualization, VMware Tags:

More Security Prose – Weekly Security Review

September 22nd, 2007 6 comments

This week in security,
it’s time to review.
What new vulnerability
are you subject to?

Let’s scan Full Disclosure
and find us a bug.
Some new crafty malware
from a cyber-crook thug?

What poor security choice
has some CSO made?
First the VA, then Pfizer, 
now A-mer-iTrade?

All things virtual are scary
vulns are real, take a look
and the TSA’s profiling
your choices of book

Some MIT looney
with a fake bomb on her chest
almost got lit up
by New England’s best

Compliance and legal
are all such a mess
Sarbanes-Oxley and HIPAA
PCI’s DSS

Raytheon bought Oakley,
Shimel got GoogleJacked
while some poor Joe from CITI
had his LimeWire hacked

Peer to Peer and those BotNets
will be our dear network’s death
The next malware vector is
ye olde PDF!

Maynor’s been holed up
with guns, pills and code
Now the statutes are lifted
he’s blowing his load

Curphey’s gone Blue
Ptacek’s gone MIA
Newby’s gone English
Mogull’s rejoined the fray

McAfee’s Dewalt
went on a tirade
seems that cybercrime’s
bigger than the world’s whole drug trade

De-perimeterization,
the Jericho way
doesn’t mean sell your firewall
on Craigslist or eBay

To model or measure
metrics or SWOT
Just don’t define Lindstrom
as something he’s not

Rothman’s now helping
Grandma secure her kit

from malware like trojans and botnets
and shit

Pescatore says we need Security-three-point-oh.
InfoSec costs too much and has nowhere to go
He casually proffers his bold Gartner bet
by the year 2010 we’ll be ahead of the threat.

That’s it boys and girls
till I rhyme once again
Stay happy, stay secure
and now…
EOM

Categories: Poetry Tags:

Virtualization Threat Surface Expands: We Weren’t Kidding…

September 21st, 2007 No comments

First the Virtualization Security Public Service Announcement:

By now you’ve no doubt heard that Ryan Smith and Neel Mehta from IBM/ISS X-Force have discovered vulnerabilities in VMware’s DHCP implementation that could allow "…specially crafted packets to gain system-level privileges" and allow an attacker to execute arbitrary code on the system with elevated privileges, thereby gaining control of the system.

Further, Dark Reading details that Rafal Wojtczuk (whose last name’s spelling is a vulnerability in and of itself!) from McAfee discovered the following vulnerability:

A vulnerability that could allow a guest operating system user with administrative privileges to cause memory corruption in a host process, and potentially execute arbitrary code on the host. Another fix addresses a denial-of-service vulnerability that could allow a guest operating system to cause a host process to become unresponsive or crash.

…and yet another from the Goodfellas Security Research Team:

An additional update, according to the advisory, addresses a security vulnerability that could allow a remote hacker to exploit the library file IntraProcessLogging.dll to overwrite files in a system. It also fixes a similar bug in the library file vielib.dll.

It is important to note that these vulnerabilities have been mitigated by VMWare at the time of this announcement.  Further information regarding mitigation of all of these vulnerabilities can be found here.


You can find details regarding these vulnerabilities via the National Vulnerability Database here:

CVE-2007-0061 – The DHCP server in EMC VMware Workstation before 5.5.5 Build 56455 and 6.x before 6.0.1 Build 55017, Player before 1.0.5 Build 56455 and Player 2 before 2.0.1 Build 55017, ACE before 1.0.3 Build 54075 and ACE 2 before 2.0.1 Build 55017, and Server before 1.0.4 Build 56528 allows remote attackers to execute arbitrary code via a malformed packet that triggers "corrupt stack memory."

CVE-2007-0062 – Integer overflow in the DHCP server in EMC VMware Workstation before 5.5.5 Build 56455 and 6.x before 6.0.1 Build 55017, Player before 1.0.5 Build 56455 and Player 2 before 2.0.1 Build 55017, ACE before 1.0.3 Build 54075 and ACE 2 before 2.0.1 Build 55017, and Server before 1.0.4 Build 56528 allows remote attackers to execute arbitrary code via a malformed DHCP packet that triggers a stack-based buffer overflow.

CVE-2007-0063 – Integer underflow in the DHCP server in EMC VMware Workstation before 5.5.5 Build 56455 and 6.x before 6.0.1 Build 55017, Player before 1.0.5 Build 56455 and Player 2 before 2.0.1 Build 55017, ACE before 1.0.3 Build 54075 and ACE 2 before 2.0.1 Build 55017, and Server before 1.0.4 Build 56528 allows remote attackers to execute arbitrary code via a malformed DHCP packet that triggers a stack-based buffer overflow.

CVE-2007-4496 – Unspecified vulnerability in EMC VMware Workstation before 5.5.5 Build 56455 and 6.x before 6.0.1 Build 55017, Player before 1.0.5 Build 56455 and Player 2 before 2.0.1 Build 55017, ACE before 1.0.3 Build 54075 and ACE 2 before 2.0.1 Build 55017, and Server before 1.0.4 Build 56528 allows authenticated users with administrative privileges on a guest operating system to corrupt memory and possibly execute arbitrary code on the host operating system via unspecified vectors.

CVE-2007-4155 – Absolute path traversal vulnerability in a certain ActiveX control in vielib.dll in EMC VMware 6.0.0 allows remote attackers to execute arbitrary local programs via a full pathname in the first two arguments to the (1) CreateProcess or (2) CreateProcessEx method.


I am happy to see that VMware moved on these vulnerabilities (I do not have the timeframe of the disclosure and mitigation available).  I am convinced that their security team and product managers truly take this sort of thing seriously.

However, this just goes to show you that as virtualization platforms see broader, more visible mainstream adoption, exploitable vulnerabilities will continue to follow as those who follow the money begin to pick up the scent.

This is another phrase that’s going to make me a victim of my own Captain Obvious Award, but it seems like we’ve been fighting this premise for too long now.  I recognize that this is not the first set of security vulnerabilities we’ve seen from VMware, but I’m going to highlight them for a reason.

It seems that due to a lack of well-articulated vulnerabilities that extended beyond theoretical assertions or POC’s, the sensationalism of research such as Blue Pill has desensitized folks to the emerging realities of virtualization platform attack surfaces.

I’ve blogged about this over the last year and a half, with the latest found here and an interview here.  It’s really just an awareness campaign.  One I’m more than willing to wage given the stakes.  If that makes me the noisy canary in the coal mine, so be it.

These very real examples are why I feel it’s ludicrous to take seriously any comments that suggest by generalization that virtualized environments are "more secure" by design; it’s software, just like anything else, and it’s going to be vulnerable.

I’m not trying to signal that the sky is falling, just the opposite.  I do, however, want to make sure we bring these issues to your attention.

Happy Patching!

/Hoff