Archive

Archive for August, 2007

Harvard Business Review: Excellent Data Breach Case Study…

August 25th, 2007 6 comments

I read the Harvard Business Review frequently and find that the quality of writing and insight it provides is excellent.  This month’s (September 2007) edition is no exception, as it features a timely data breach case study written by Eric McNulty titled "Boss, I Think Someone Stole Our Customer Data."

The HBR case studies are well framed because they ultimately ask you, the reader, to decide what you would do in the situation and provide many — often diametrically opposed — opinions from industry experts.

This month’s commentators were Bill Boni (CISO, Motorola), James E. Lee (SVP, ChoicePoint), John Coghlan (former President & CEO of Visa), and Jay Foley (Executive Director of the Identity Theft Resource Center).

The fictitious company profiled is Flayton Electronics, a regional electronics chain with 32 stores across six states.  The premise of the fictitious data breach focuses on how Flayton Electronics decides what to do, how to interact with LEO, and how/if to communicate the alleged data breach, which potentially exposed thousands of their customers’ credit cards.

What I liked about the article is the collection of classic quote gems that highlight the absolute temporal absurdity of PCI compliance and the false sense of security it provides to the management of companies — especially in response to a breach.

You know, "We’re compliant, thus we’re secure, ergo we’re at less risk."

Now, I’m not suggesting that compliance initiatives don’t make things "better," in some sense, but they don’t necessarily make a company more "secure."  I think the case study demonstrates that well enough and the readership of this blog certainly doesn’t need to be convinced.

So, why write about it then?  The quote snippets below illustrate reality — sometimes hysterically.  You’ll have to read the entire story to gain true context and to appreciate the angst this sort of thing brings, but I chuckled a couple of times when reading these quotes:

“What’s our potential exposure?” Brett inquired matter-of-factly. Quietly he wondered whether the firm’s PCI compliance would provide sufficient protection.

“Why do we have to notify customers at all?” Brett asked, genuinely puzzled. “Haven’t the banks already informed them that their accounts have been compromised?”

“What about some kind of coincidence?” Brett was grasping at straws. “Perhaps 1,500 of our customers just had the same bad luck?”

“We’re still trying to determine what happened,” the CIO offered meekly.

“But we are sure that our PCI systems were working, right?” Brett pushed.

“Becoming PCI compliant is complicated,” Sergei hedged, “especially when you’re constantly improving your own technology.” He ran through a laundry list of the complexities of recent improvements. At any given moment, Sergei had three or four high-priority tech projects in various stages of implementation. It was a constant juggling act.

Brett, in a rare display of anger, pounded his fist on Sergei’s desk. “Are you saying, Sergei, that we’re not actually PCI compliant?”

Sergei stiffened. “We meet about 75% or so of the PCI requirements. That’s better than average for retailers of our size.” The response was defensive but honest.

“How have we been able to get away with that?” Brett growled. He knew that PCI compliance, which was mandated by all the major credit card companies, required regular scans by an outside auditor to ensure that a company’s systems were working—with stiff penalties for failure.

“They don’t scan us every day,” Sergei demurred. “Compliance really is up to us, to me, in the end.”

Sergei reported finding a hole—a disabled firewall that was supposed to be part of the wireless inventory-control system, which used real-time data from each transaction to trigger replenishment from the distribution center and automate reorders from suppliers.

“How did the firewall get down in the first place?” Laurie snapped.

“Impossible to say,” said Sergei resolutely. “It could have been deliberate or accidental. The system is relatively new, so we’ve had things turned off and on at various times as we’ve worked out the bugs. It was crashing a lot for a while. Firewalls can often be problematic.”

Sounds like a typical Monday morning staff meeting to me…I think you could be a fly on the wall in many mid-size (or large, for that matter) companies and hear this same set of quotes — regardless of how many millions of dollars the company may have spent on compliance initiatives.  It is indeed sad to see how many of these folks don’t realize that "compliance" is merely the floor, not the ceiling.  <sigh>

If you pay close attention to the dynamics of the management team within the story, you’ll bear witness to all seven distinct stages of the data breach grieving process:

  • Shock or Disbelief

  • Denial

  • Bargaining

  • Guilt

  • Anger

  • Depression

  • Acceptance and Hope

I’m not really aiming for a punchline here, but I will suggest that you read the entire story to appreciate the tale in the grandest of its context.  The commentary from the industry experts is also very interesting…

/Hoff

P.S. I think it’s very cool that HBR allows you to access these stories without paying or registering, and allows one to use up to 500 words on blogs and the like for the non-commercial purpose of summarizing the story.  Nice policy.

Categories: PCI

I Know It’s Been 4 Months Since I Said it, but “NO! DLP is (Still) NOT the Next Big Thing In Security!”

August 24th, 2007 5 comments

Nope.  Haven’t changed my mind.  Sorry.  Harrington stirred it up and Chuvakin reminded me of it.

OK, so way back in April, on the cusp of one of my normal rages against the (security) machine, I blogged how Data Leakage Protection (DLP) is doomed to be a feature and not a market.

I said the same thing about NAC, too.  Makin’ friends and influencin’ people.  That’s me!

Oh my, how the emails flew from the VP’s of Marketing & Sales from the various "Flying V’s" (see below).  Good times, good times.

Here’s snippets of what I said:


Besides having the single largest collection of vendors that begin with the letter ‘V’ in one segment of the security space (Vontu, Vericept, Verdasys, Vormetric…what the hell!?) it’s interesting to see how quickly content monitoring and protection functionality is approaching the inflection point of market versus feature definition.

The "evolution" of the security market marches on.

Known by many names, what I describe as content monitoring and protection (CMP) is also known as extrusion prevention, data leakage or intellectual property management toolsets.  I think for most, the anchor concept of digital rights management (DRM) within the Enterprise becomes the glue that makes CMP attractive and compelling; knowing what and where your data is and how its distribution needs to be controlled is critical.

The difficulty with this technology is that, just like any other feature, it needs a delivery mechanism.  Usually this means yet another appliance; one that’s positioned either as close to the data as possible or right back at the perimeter in order to profile and control data based upon policy before it leaves the "inside" and goes "outside."

I made the point previously that I see this capability becoming a feature in a greater amalgam of functionality; I see it becoming table stakes included in application delivery controllers, FW/IDP systems and the inevitable smoosh of WAF/XML/Database security gateways (which I think will also further combine with ADC’s.)

I see CMP becoming part of UTM suites.  Soon.

That being said, the deeper we go to inspect content in order to make decisions in context, the more demanding the requirements for the applications and "appliances" that perform this functionality become.  Making line speed decisions on content, in context, is going to be difficult to solve.

CMP vendors are making a push seeing this writing on the wall, but it’s sort of like IPS or FW or URL Filtering…it’s going to smoosh.

Websense acquired PortAuthority.  McAfee acquired Onigma.  Cisco will buy…
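To make that "line speed decisions on content" problem concrete, here’s a toy sketch (mine, not any vendor’s) of the simplest policy decision a CMP/DLP box makes inline: find what looks like a card number in an outbound payload and Luhn-check it.  Now imagine doing that, in context, across every protocol, at line rate:

```python
import re

# Toy CMP/DLP policy check (mine, not any vendor's): flag outbound
# payloads that appear to contain a payment card number. A regex finds
# candidates; the Luhn checksum culls most false positives.
CANDIDATE = re.compile(r"\b(?:\d[ -]?){12,15}\d\b")

def luhn_ok(digits: str) -> bool:
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:       # double every second digit from the right
            d = d * 2 - 9 if d > 4 else d * 2
        total += d
    return total % 10 == 0

def leaks_card(payload: str) -> bool:
    for m in CANDIDATE.finditer(payload):
        if luhn_ok(re.sub(r"[ -]", "", m.group())):
            return True      # policy hit: alert, block, or quarantine
    return False

print(leaks_card("po#9 4111 1111 1111 1111 thx"))  # True  (test PAN)
print(leaks_card("call me at 617-555-0100 x99"))   # False
```

And that’s before you get to structured vs. unstructured data, encodings, or encrypted channels, which is rather the point.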

I Never Metadata I Didn’t Like…

I didn’t even bother to go into the difficulty and differences in classifying, administering, controlling and auditing structured versus unstructured data, nor did I highlight the differences between those solutions on the market who seek to protect and manage information from leaking "out" (the classic perimeter model) versus management of all content ubiquitously regardless of source or destination.  Oh, then there’s the whole encryption in motion, flight and rest thing…and metadata, can’t forget that…

Yet I digress…let’s get back to industry dynamics.  It seems that Uncle Art is bound and determined to make good on his statement that in three years there will be no stand-alone security companies left.  At this rate, he’s going to buy them all himself!

As we no doubt already know, EMC acquired Tablus. Forrester seems to think this is the beginning of the end of DLP as we know it.  I’m not sure I’d attach *that* much gloom and doom to this specific singular transaction, but it certainly makes my point:

  August 20, 2007

EMC/RSA Drafts Tablus For Deeper Data-Centric Security: The Beginning Of The End Of The Standalone ILP Market

by Thomas Raschke, with Jonathan Penn, Bill Nagel, Caroline Hoekendijk

EXECUTIVE SUMMARY

EMC expects Tablus to play a key role in its information-centric security and storage lineup. Tablus’ balanced information leak prevention (ILP) offering will benefit both sides of the EMC/RSA house, boosting the latter’s run at the title of information and risk market leader. Tablus’ data classification capabilities will broaden EMC’s Infoscape beyond understanding unstructured data at rest; its structured approach to data detection and protection will provide a data-centric framework that will benefit RSA’s security offerings like encryption and key management. While holding a lot of potential, this latest acquisition by one of the industry’s heavyweights will require comprehensive integration efforts at both the technology and strategic level. It will also increase the pressure on other large security and systems management vendors to address their organization’s information risk management pain points. More importantly, it will be remembered as the turning point that led to the demise of the standalone ILP market as we know it today.

So Mogull will probably (still) disagree, as will the VP’s of Marketing/Sales working for the Flying-V’s who will no doubt barrage me with email again, but it’s inevitable.  Besides, when an analyst firm agrees with you, you can’t be wrong, right Rich!?

/Hoff


Anyone interested in an ISO17799-Aligned Set of IT/Information Security P&P’s – Great Rational Starter Kit for a Security Program!

August 22nd, 2007 13 comments

I have spent a lot of time, sweat and tears in prior lives chipping away at building a template set of IT/Information Security policies and procedures that were aligned to (and audited against) various regulatory requirements and the 10 Domains/127 Controls of ISO17799.

This consolidated set of P&P’s is intact and well written.  Actual business people have been able to read, understand and (gasp!) comply with them.  I know, "impossible!" you say.  Nay, ’tis rational is all…

As part of my effort to give back, I thought that many of you may be at a point where, while you have lots of P&P’s specific to your business, not having to reinvent the wheel (drafting this sort of polished package yourself, or paying someone to do it) might be useful.

The P&P’s are a complete package that outline at a high-level the basis of an ISO-aligned security program; you could basically search/replace and be good to go for what amounts to 99% of the basic security coverage you’d need to address most elements of a well-stocked security pantry.

You can use this "English" high-level summary set to point to indexed detailed P&P mechanics or standards that are specific to your organization.

Would this be of some use to you?  I would need to do some work to take care of some rough spots and sanitize the Word doc, but if there is enough interest I’ll do it and post it for whomsoever would like it.  Just to be clear, the P&P’s are already written; I’ll just make them SEARCH/REPLACE friendly.
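(Ed: To make "SEARCH/REPLACE friendly" concrete, the idea is just placeholder tokens you substitute with your own values.  A throwaway sketch; the token and file names below are hypothetical, not from the actual doc:)

```python
# Throwaway sketch of the "SEARCH/REPLACE friendly" idea: the template
# carries placeholder tokens and you substitute your own values. Token
# and file names here are hypothetical, not from the actual document.
replacements = {
    "{{COMPANY}}": "Acme Widgets, Inc.",
    "{{SECURITY_OFFICER}}": "Director of Information Security",
    "{{REVIEW_CYCLE}}": "annually",
}

with open("iso17799_pp_template.txt", encoding="utf-8") as f:
    text = f.read()

for token, value in replacements.items():
    text = text.replace(token, value)

with open("iso17799_pp_yourco.txt", "w", encoding="utf-8") as f:
    f.write(text)
```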

I’m not trying to tease anyone, I just don’t want to do the up-front work if nobody is interested.

Let me know in the comments; no need to leave website links (for obvious reasons) just let me know by your comment if this is something you’d like.  If I get enough demand, I’ll "get her done!"

OK, good enough.  Thanks for the comments.  I’ll post it up in the next few days.  Thanks guys.

/Hoff

Wells Fargo System “Crash” Spools Up Phishing Attempts But Did It Also Allow for Bypassing Credit/Debit Card Anti-Fraud Systems?

August 22nd, 2007 3 comments

Wellsfargo
Serendipity is a wonderful thing.  I was in my local MA bank branch on Monday arranging for a wire transfer from my local account to a Wells Fargo account I maintain in CA.  I realized that I didn’t have the special ABA routing code that WF uses for wire transfers, so I hopped on the phone to call customer service to get it.  We don’t use this account much at all, but we wanted to put some money in it to keep the balance up, which negates the service fee.

The wait time for customer service was longer than normal and I sat for about 20 minutes until I was connected to a live operator.  I told him what I wanted and he was able to give me the routing code, but I also needed the physical address of the branch that my account calls home.  He informed me that he couldn’t give me that information.

The reason he couldn’t give me that information was that the WF "…computer systems have been down for the last 18 hours."  He also told me that "…we lost a server somewhere; people couldn’t even use their ATM cards yesterday."

This story was covered here on Computerworld and was followed up with another article which described how Phishers and the criminal element were spooling up their attacks to take advantage of this issue:

August 21, 2007 (IDG News Service) — Wells Fargo & Co. customers may have a hard time getting an up-to-date balance statement today, as the nation’s fifth-largest bank continues to iron out service problems related to a Sunday computer failure.

The outage knocked the company’s Internet, telephone and ATM banking services offline for several hours, and Wells Fargo customers continued to experience problems today.

Wells Fargo didn’t offer many details about the system failure, but it was serious enough that the company had to restore from backup.

"Using our backup facilities, we restored Internet banking service in about one hour and 40 minutes," the company said in a statement today. "We thank the hundreds of team members in our technology group for working so hard to resolve this problem."

Other banking services such as point-of-sale transactions, loan processing and wire transfers were also affected by the outage, and while all systems are now fully operational, some customers may continue to see their Friday bank balances until the end of the day, Wells Fargo said.

I chuckled uneasily because I continue to be directly impacted by critical computer systems failures: two air-travel meltdowns (the United Airlines outage and the TSA/ICE failure at LAX), the Skype outage, and now this one.  I didn’t get a chance to blog about it other than a comment on another blog, but if I were you, I’d not stand next to me in a lightning storm anytime soon!  I guess this is what happens when you’re a convenient subscriber to World 2.0?

I’m sure WF will suggest this is because of Microsoft and Patch Tuesday, too… 😉

So I thought this would be the end of this little story (until the next time).  However, the very next day, my wife came to me alarmed because she found a $375 charge on the same account as she was validating that the wire went through.

She asked me if I made a purchase on the WF account recently and I had not as we don’t use this account much.  Then I asked her who the vendor was.  The charge was from Google.com.  Google.com?

Huh?  I asked her to show me the statement; there was no reference transaction number, no phone number and the purchase description was "general merchandise."

My wife immediately called WF anti-fraud and filed a fraudulent activity report.  The anti-fraud representative described the transaction as "odd" because there was no contact information available for the vendor.

She mentioned that she was able to see that the vendor executed both an auth. (testing to see that funds were available) followed then by a capture (actually charging), but told us that unfortunately she couldn’t get any more details because the computer systems were experiencing issues due to the recent outage!
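For those unfamiliar with the auth/capture mechanics the rep described, here’s a stripped-down sketch of the two-step flow; the class and names are mine and purely illustrative:

```python
# Stripped-down sketch of the auth/capture flow the rep described
# (class and names are mine, purely illustrative). An "auth" verifies
# funds and places a hold; a later "capture" actually posts the charge.
class CardAccount:
    def __init__(self, balance: float) -> None:
        self.balance = balance
        self.holds: dict[str, float] = {}

    def authorize(self, auth_id: str, amount: float) -> bool:
        available = self.balance - sum(self.holds.values())
        if amount <= available:
            self.holds[auth_id] = amount   # funds verified, hold placed
            return True
        return False                       # decline

    def capture(self, auth_id: str) -> bool:
        amount = self.holds.pop(auth_id, None)
        if amount is None:
            return False                   # no matching authorization
        self.balance -= amount             # the charge actually posts
        return True

acct = CardAccount(balance=500.00)
print(acct.authorize("txn-001", 375.00))   # True: step 1, funds tested
print(acct.capture("txn-001"))             # True: step 2, card charged
print(acct.balance)                        # 125.0
```

Anti-fraud checks normally gate both steps, which is exactly why a mystery vendor sailing through both during a systems outage smells so bad.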

This is highly suspicious to me.

Whilst the charge has been backed out, I am concerned that this is a little more than serendipity and coincidence. 

Were the WF anti-fraud and charge validation processes compromised during this "crash" and/or did their failure allow for fraudulent activity to occur?

Check your credit/debit card bills if you are a Wells Fargo customer!

/Hoff

Take5 (Episode #5) – Five Questions for Allwyn Sequeira, SVP of Product Operations, Blue Lane

August 21st, 2007 18 comments

This fifth episode of Take5 interviews Allwyn Sequeira, SVP of Product Operations for Blue Lane.  

First a little background on the victim:

Allwyn Sequeira is Senior Vice President of Product Operations at Blue Lane Technologies, responsible for managing the overall product life cycle, from concept through research, development and test, to delivery and support. He was previously the Senior Vice President of Technology and Operations at netVmg, an intelligent route control company acquired by InterNap in 2003, where he was responsible for the architecture, development and deployment of the industry-leading flow control platform. Prior to netVmg, he was founder, Chief Technology Officer and Executive Vice President of Products and Operations at First Virtual Corporation (FVC), a multi-service networking company that had a successful IPO in 1998. Prior to FVC, he was Director of the Network Management Business Unit at Ungermann-Bass, the first independent local area network company. Mr. Sequeira has previously served as a Director on the boards of FVC and netVmg.


Mr. Sequeira started his career as a software developer at HP in the Information Networks Division, working on the development of TCP/IP protocols. During the early 1980s, he worked on the CSNET project, an early realization of the Internet concept. Mr. Sequeira is a recognized expert in data networking, with twenty-five years of experience in the industry, and has been a featured speaker at industry-leading forums like Networld+Interop, Next Generation Networks, ISP Con and RSA Conference.

Mr. Sequeira holds a Bachelor of Technology degree in Computer Science from the Indian Institute of Technology, Bombay, and a Master of Science in Computer Science from the University of Wisconsin, Madison.

Allwyn, despite all this good schoolin’ forgot to send me a picture, so he gets what he deserves 😉
(Ed: Yes, those of you quick enough were smart enough to detect that the previous picture was of Brad Pitt and not Allwyn.  I apologize for the unnecessary froth-factor.)

Questions:

1) Blue Lane has two distinct product lines, VirtualShield and PatchPoint.  The former is a software-based solution which provides protection for VMware Infrastructure 3 virtual servers as an ESX VM plug-in, whilst the latter offers a network appliance-based solution for physical servers.  How are these products different from virtual-switch IPSs like Virtual Iron’s or in-line network-based IPSs?

IPS technologies have been charged with the incredible mission of trying to protect everything from anything.  Overall they’ve done well, considering how much the perimeter of the network has changed and how sophisticated hackers have become. Much of their core technology, however, was relevant and useful when hackers could be easily identified by their signatures. As many have proclaimed, those days are coming to an end.

A defense department official recently quipped, "If you offer the same protection for your toothbrushes and your diamonds you are bound to lose fewer toothbrushes and more diamonds."  We think that data center security similarly demands specialized solutions.  The concept of an enterprise network has become so ambiguous when it comes to endpoints, devices, supply chain partners, etc., that we think it’s time to think more realistically in terms of trusted, yet highly available zones within the data center.

It seems clear at this point that different parts of the network need very different security capabilities.  Servers, for example, need highly accurate solutions that do not block or impede good traffic and can correct bad traffic, especially when it comes to closing network-facing vulnerability windows.  They need to maintain availability with minimal latency for starters, and that has been a sort of Achilles heel for signature-based approaches.  Of course, signatures also bring considerable management burdens over and beyond their security capabilities.

No one is advocating turning off the IPS, but rather approaching servers with more specialized capabilities.  We started focusing on servers years ago and established very sophisticated application and protocol intelligence, which has allowed us to correct traffic inline without the noise, suspense and delay that general purpose network security appliance users have come to expect.

IPS solutions depend on deep packet inspection typically at the perimeter based on regexp pattern matching for exploits.  Emerging challenges with this approach have made alert and block modes absolutely necessary as most IPS solutions aren’t accurate enough to be trusted in full library block. 

Blue Lane uses a vastly different approach.  We call it deep flow inspection/correction for known server vulnerabilities based on stateful decoding up to layer 7.  We can alert, block and correct, but most of our deployments are in correct mode, with our full capabilities enabled.  From an operational standpoint we have substantially different impacts.

A typical IPS may have 10K signatures while experts recommend turning on just a few hundred.  That kind of marketing shell game (find out what really works) means that there will be plenty of false alarms, false positives and negatives and plenty of tuning.  With polymorphic attacks signature libraries can increase exponentially while not delivering meaningful improvements in protection. 

Blue Lane supports about 1000 inline security patches across dozens of very specific server vulnerabilities, applications and operating systems.  We generate very few false alarms and minimal latency.  We don’t require ANY tuning.  Our customers run our solution in automated, correct mode.

The traditional static signature IPS category has evolved into an ASIC war between some very capable players for the reasons we just discussed.  Exploding variations of exploits and vectors means that exploit-centric approaches will require more processing power.

Virtualization is pulling the data center into an entirely different direction, driven by commodity processors.  So of course our VirtualShield solution was a much cleaner setup with a hypervisor; we can plug into the hypervisor layer and run on top of existing hardware, again with minimal latency and footprint.

You don’t have to be a Metasploit genius to evade IPS signatures.  Our higher layer 7 stateful decoding is much more resilient. 
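(Ed: To make the signature-brittleness point concrete, here’s a toy illustration of my own — not Blue Lane’s code — of how a trivial mutation evades a static exploit pattern while a normalize-then-inspect approach still catches it.  The "signature" and payloads are invented:)

```python
import re

# Toy illustration of exploit-signature brittleness. The "signature"
# and payloads are invented; no real IPS rule is being quoted.
signature = re.compile(rb"SELECT \* FROM users WHERE", re.IGNORECASE)

original = b"GET /q?id=1;SELECT * FROM users WHERE 1=1 HTTP/1.1"
mutated  = b"GET /q?id=1;SELECT/**/*/**/FROM/**/users/**/WHERE 1=1 HTTP/1.1"

for payload in (original, mutated):
    print("BLOCK" if signature.search(payload) else "pass", payload)
# -> BLOCK for the original, "pass" for the comment-stuffed variant.

# A vulnerability-centric engine decodes/normalizes before judging,
# which is far harder to sidestep with cosmetic mutations:
normalized = re.sub(rb"/\*.*?\*/", b" ", mutated)
print("BLOCK" if signature.search(normalized) else "pass", normalized)
# -> BLOCK again once the SQL-comment obfuscation is stripped.
```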

2) With zero-days on the rise, pay-for-play vulnerability research and now Zero-Bay (WabiSabiLabi) vulnerability auctions and the like, do you see an uptake in customer demand for vulnerability shielding solutions?

Exploit-signature technologies are meaningless in the face of evanescent, polymorphic threats, resulting in 0-day exploits. Slight modifications to signatures can bypass IPSes, even against known vulnerabilities.  Blue Lane technology provides 0-day protection for any variant of an exploit against known vulnerabilities.  No technology can provide ultimate protection against 0-day exploits based on 0-day vulnerabilities. However, this requires a different class of hacker.

3) As large companies start to put their virtualization strategies in play, how do you see customers addressing securing their virtualized infrastructure?  Do they try to adapt existing layered security methodologies and where do these fall down in a virtualized world?

I explored this topic in depth at the Next Generation Data Center conference last week.  Also, your readers might be interested in listening to a recent podcast: The Myths and Realities of Virtualization Security: An Interview.

To summarize, there are a few things that change with virtualization that folks need to be aware of.  It represents a new architecture.  The hypervisor layer represents the un-tethering and clustering of VMs, and centralized control.  It introduces a new virtual network layer.  There are entirely new states of servers, not anticipated by traditional static security approaches (like instant create, destroy, clone, suspend, snapshot and revert to snapshot).

Then you’ll see unprecedented levels of mobility and new virtual appliances and black boxing of complex stacks including embedded databases.  Organizations will have to work out who is responsible for securing this very fluid environment.  We’ll also see unprecedented scalability with Infiniband cores attaching LAN/SAN out to 100’s of ESX hypervisors and thousands of VMs.

Organizations will need the capability to shield these complex, fluid environments, because trying to keep track of individual VMs, states, patch levels and locations will make tuning an IPS for polymorphic attacks look like child’s play in comparison.  Effective solutions will need to be highly accurate, low-latency solutions deployed in correct mode.  Gone will be the days of man-to-man blocking and tuning.  Here to stay are the days of zone defense.

4) VMware just purchased Determina and intends to integrate their memory firewall IPS product as an ESX VM plug-in.  Given your early partnership with VMware, are you surprised by this move?  Doesn’t this directly compete with the VirtualShield offering?

I wouldn’t read too much into this.  Determina hit the wall on sales, primarily because its original memory wall technology was too intrusive, and fell short of handling new vulnerabilities/exploits.

This necessitated the LiveShield product, which required ongoing updates, destroying the value proposition of not having to touch servers, once installed. So, this is a technology/people acquisition, not a product line/customer-base acquisition.

VMware was smart to get a very bright set of folks, with deep memory/paging/OS, and a core technology that would do well to be integrated into the hypervisor for the purpose of hypervisor hardening, and interVM isolation. I don’t see VMware entering the security content business soon (A/V, vulnerabilities, etc.). I see Blue Lane’s VirtualShield technology integrated into the virtual networking layer (vSwitch), as a perfect complement to anything that will come out of the Determina acquisition.

5) Citrix just acquired XenSource.  Do you have plans to offer VirtualShield for Xen? 

A smart move on Citrix’s part to get back into the game. Temporary market caps don’t matter. Virtualization matters. If Citrix can make this a two or three horse race, it will keep the VMware, Citrix, Microsoft triumvirate on their toes, delivering better products, and net good for the customer.

Regarding BlueLane, and Citrix/Xensource, we will continue to pay attention to what customers are buying as they virtualize their data centers. For now, this is a one horse show 🙂

Quick Post of a Virtualization Security Presentation: “Virtualization and the End of Network Security As We Know It…”

August 20th, 2007 7 comments

"Virtualization and the End of Network Security As We Know It…
The feel good hit of the summer!"

Ye olde blog gets pinged quite a lot with searches and search engine redirects for folks looking for basic virtualization and virtualized security information. 

I had to drum up a basic high-level virtualization security presentation for the ISSA Charlotte Metro gathering back in April and I thought I may as well post it.

It’s in .PDF format.  If you want it in .PPT or Keynote, let me know, I’ll be glad to send it to you.  If it’s useful or you need some explanation regarding the visual slides, please get back to me and I’ll be more than glad to address anything you want.  I had 45 minutes to highlight how folks were and might deal with "securing virtualization by virtualizing security."

Yes, some of it is an ad for the company I used to work for, which specializes in virtualized security service layers (Crossbeam), but I’m sure you can see how it is relevant in the preso.  You’ll laugh, you’ll cry, you’ll copy/paste the text and declare your own brilliance.  Here’s the summary slide so those of you who haven’t downloaded this yet will know the sheer genius you will be missing if you don’t:

[Summary slide]

At any rate, it’s not earth shattering but does a decent job at the high level of indicating some of the elements regarding virtualized security.  I apologize for the individual animation slide page build-ups.  I’ll re-upload without them when I can get around to it.  (Ed: Done.  I also uploaded the correct version 😉)

Here’s the PDF.

/Hoff

(As of 1:45pm EST the next day (updated from 11pm EST, 5.5 hours after posting), you lot have downloaded this over 380 times (up from the 150 I first reported).  Since there are no comments, it’s either the biggest piece of crap I’ve ever produced or you are all just so awe-stricken you are unable to type.  Newby, you are not allowed to respond to this rhetorical question…)

Oh SNAP! VMware acquires Determina! Native Security Integration with the Hypervisor?

August 19th, 2007 12 comments

Hot on the trail of becoming gigagillionaires, the folks at VMware make my day with this.  Congrats to the folks @ Determina.

Methinks that for the virtualization world, it’s a very, very good thing.  A step in the right direction.

I’m going to prognosticate that this means that Citrix will buy Blue Lane or Virtual Iron next (see bottom of the post) since their acquisition of XenSource leaves them with the exact same problem that this acquisition for VMware tries to solve:

VMware Inc., the market leader in virtualization software, has acquired Determina Inc., a Silicon Valley maker of host intrusion prevention products.

…the security of virtualized environments has been something of an unknown quantity due to the complexity of the technology and the ways in which hypervisors interact with the host OS. Determina’s technology is designed specifically to protect the OS from malicious code, regardless of the origin of the attack, so it would seem to be a sensible fit for VMware, analysts say.

In his analysis of the deal, Gartner’s MacDonald sounded many of the same notes. "By potentially integrating Memory Firewall into the ESX hypervisor, the hypervisor itself can provide an additional level of protection against intrusions. We also believe the memory protection will be extended to guest OSs as well: VMware’s extensive use of binary emulation for virtualization puts the ESX hypervisor in an advantageous position to exploit this style of protection," he wrote.

I’ve spoken a lot recently about how much I’ve been dreading the notion that security was doomed to repeat itself with the accelerated take-off of server virtualization, since we haven’t solved many of the most basic security problem classes.  Malicious code is getting more targeted and more intelligent, and when you combine an emerging market using hot technology without an appropriate level of security…

Basically, my concerns have stemmed from the observation that if we can’t do a decent job protecting physically separate yet interconnected network elements with all the security fu we have, what’s going to happen when the "…network is the computer" (or vice versa)?  Just search for "virtualization" via the Lijit Widget above for more posts on this…

Some options for securing virtualized guest OSs in a VM are pretty straightforward:

  1. Continue to deploy layered virtualized security services across VLAN segments of which each VM is a member (via IPS’s, routers, switches, UTM devices…)
  2. Deploy software like Virtual Iron’s which looks like a third party vSwitch IPS on each VM
  3. Integrate something like Blue Lane’s ESX plug-in, which interacts with and at the VMM level
  4. As chipset level security improves, enable it
  5. Deploy HIPS as part of every guest OS.

Each of these approaches has its own set of pros and cons, and quite honestly, we’ll probably see people doing all five at the same time…layered defense-in-depth.  Ugh.

What was really annoying to me, however, is that it really seemed that in many cases, the VM solution providers were again expecting that we’d just be forced to bolt security ON TO our VM environments instead of BAKING IT IN.  This was looking like a sad reality.

I’ll get into details in another post about Determina’s solution, but I am encouraged by VMware’s acquisition of a security company which will be integrated into their underlying solution set.  I don’t think it’s a panacea, but quite honestly, the roadmap for solving these sorts of problems was blowing in the wind for VMware up until this point.

"Further, by
using the LiveShield capabilities, the ESX hypervisor could be used
‘introspectively’ to shield the hypervisor and guest OSs from attacks
on known vulnerabilities in situations where these have not yet been
patched. Both Determina technologies are fairly OS- and
application-neutral, providing VMware with an easy way to protect ESX
as well as Linux- and Windows-based guest OSs."

Quite honestly, I hoped they would have bought Blue Lane since the ESX Hypervisor is now going to be a crowded space for them…

We’ll see how well this gets integrated, but I smiled when I read this.

Oh, and before anyone gets excited, I’m sure it’s going to be 100% undetectable! 😉

/Hoff

Watermarking and DRM – One Replacing the Other?

August 17th, 2007 5 comments

[Photo: DRM protester]
I sat staring at my screen today with a squinty look in my eyes and a sour puss as my wife asked me why I looked so funny.  "Meh!" I replied tersely.

The real answer was that I was pondering a question asked by the title of a topical piece penned by CNET’s Matt Rosoff which begged: "Watermarking to Replace DRM?"

I think the reason I looked so perturbed is that it was an overtly stupid (er, innocent) question, given that it’s pretty obvious that watermarking won’t "replace" DRM; it is merely another accepted application of it.

It doesn’t take much to remember that the ‘M’ in ‘DRM’ stands for management.  Tracking how files move around is part of the M.  Why is this any different?  The point of monitoring anything is to (a) gather intelligence, which can then be used to (b) implement a control or effect a disposition based upon said intelligence.

It’s interesting that in many cases we risk giving up our ‘R’ but that’s a topic for a different post.

So here’s the premise of watermarking — something I think most of us understand:


So what’s watermarking? It’s the insertion of extra data into an audio stream that can help identify where that audio came from. It’s not enough to attach data to a digital audio file—users can just burn that file to a CD and then re-rip it, changing the file format and stripping off all the data associated with the original file. (This is also the classic way users get around DRM.) Instead, the data is inserted into the audio track itself. It’s inaudible to human ears, but detectible by various other tools.
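The concept is easier to see in code.  Here’s a minimal LSB-style sketch of my own; real systems use robust, perceptually shaped spread-spectrum embedding that survives the burn/re-rip trick, which this toy absolutely does not:

```python
# Minimal illustration of the watermarking concept: hide an identifier
# in the least-significant bits of audio samples. Real systems use
# perceptually shaped, re-rip-resistant spread-spectrum schemes; this
# toy survives nothing but makes the "data lives in the track" point.
def embed(samples: list[int], tag: bytes) -> list[int]:
    bits = [(byte >> i) & 1 for byte in tag for i in range(8)]
    out = samples[:]
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit       # overwrite sample's LSB
    return out

def extract(samples: list[int], length: int) -> bytes:
    out = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (samples[b * 8 + i] & 1) << i
        out.append(byte)
    return bytes(out)

track = list(range(1000, 1064))            # stand-in PCM samples
marked = embed(track, b"UMG42")            # inaudible-ish LSB tweak
print(extract(marked, 5))                  # b'UMG42'
```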

What I found interesting from a security and technology perspective was the following:


In the case of Universal, the watermarking data won’t identify each individual file—a method that would allow the company to trace pirated files back to their first purchaser. Instead, it will only identify the particular song. Eventually, Universal will look at popular file-trading networks, and see which of the DRM-free songs released through its experimental program ended up on these networks.

Firstly, I don’t believe the first sentence.  Sorry, I’m a skeptic.  Secondly, this technology and its application isn’t new at all.  I have it on very, very good authority that existing technology has been used in this exact manner for the last several years by the RIAA in order to track and monitor P2P file swapping which includes audio.  It’s used by government and military operators, also.

How do you think those subpoenas get issued specifically against those 12 year old girls swapping Shakira MP3’s?  They can definitively link a specifically watermarked MP3 with the IP address of the downloader after it’s injected into the network and consumed…by using watermarking.

(Ed: Comments below by Jordan suggest that this practice is not used heavily.  I cannot dispute this assertion, but I maintain that the technology has been used in this manner.  See the comments for an interesting perspective.)

It’s the same technology used by DLP and DRM solutions in the enterprise today.  So, watermarking is just another means to the end.  Period.

This is the funny part of the story:

Universal can then use this data to help decide whether the risk of piracy outweighs the increased sales from DRM-free MP3 files, segmenting this decision by particular markets. For example, it might find that new Top 40 singles are more likely to find their way onto file-trading networks than classic rock from the 1970s.

Sure it will… 😉  I feel all warm and fuzzy now.

/Hoff

* Picture Credit: CNET

Categories: Uncategorized

Citrix Buying XenSource — It’s About Time(ing)

August 16th, 2007 No comments

This will be short and sweet.  Citrix’s announcement that they will drop a swell $500 Million to acquire XenSource on the tail of VMware’s IPO makes nothing but sense.  The timing is interesting; waiting for VMware’s IPO validated the move, but one has to wonder if it jacked up the price any.

I can’t wait to see how this maps out over time across Citrix’s product lines which are still fairly siloed at this point.  Leveraging XenSource’s technology is a force multiplier across many elements of their offerings. It’s clear what the first moves will be, but I’m really interested in the longer term play.

At any rate, this is a fantastic strategic move for Citrix; these guys are poised to continue their march to take on Cisco as they become a robust platform for application and content delivery.*   If you take a look at their M&A activity over the last few years, it’s on a direct collision course with Cisco in many vectors. 

The big difference is, you can bolt their solution on instead of having to bake it in, and these guys already have a footprint and expertise in the server and client consolidation markets.

Orthogonally, I wonder what effect this might have on F5?  Any thoughts there?

Then there’s Microsoft.  This may be a huge opportunity for other players such as SWsoft to reinforce defensive positioning by shoring up relationships that otherwise might have gone XS’s way.

It’s going to get messy boys and girls.

This acquisition certainly has its challenges, but it really positions Citrix well, with XenSource as a complement to their existing product offerings.

/Hoff

*It gets more interesting strategically from a defensive position given Cisco’s recent investment of $150M in VMware prior to their IPO and my commentary on the matter here.

Categories: Citrix, Virtualization

On-Demand SaaS Vendors Able to Secure Assets Better than Customers?

August 16th, 2007 4 comments

I’m a big advocate of software as a service (SaaS) — have been for years.  This evangelism started for me almost 5 years ago when I became a Qualys MSSP customer, listening to Philippe Courtot espouse the benefits of SaaS for vulnerability management.  This was an opportunity to more efficiently, effectively and cheaply manage my VA problem.  They demonstrated how they were good custodians of the data (my data) that they housed and how I could expect they would protect it.

I did not, however, feel *more* secure because they housed my VA data.  I felt secure enough in how they housed it that it would not fall into the wrong hands.  It’s called an assessment of risk and exposure.  I performed it and was satisfied it matched my company’s appetite and business requirements.

Not one to appear unclear on where I stand, I maintain that SaaS can bring utility, efficiency, cost effectiveness, enhanced capabilities and improved service levels to a corporation depending upon who, what, why, how, where and when the service is deployed.  Sometimes it can bring a higher level of security to an organization, but so can an armed squadron of pissed-off Oompa Loompas — it’s all a matter of perspective.

I suggest that attempting to qualify the benefits of SaaS by generalizing in any sense is, well, generally a risky thing to do.  It often turns what could be a valid point of interest into a point of contention.

Such is the case with a story I read in a UK edition of IT Week by Phil Muncaster titled "On Demand Security Issues Raised."  In this story, the author describes the methods by which the security posture of SaaS vendors may be measured, comparing the value, capabilities and capacity of the various options and the venues for evaluating a SaaS MSSP: hire an external contractor, or rely on the MSSP to furnish you the results of an internally generated assessment.

I think this is actually a very useful and valid discussion to have — whom to trust and why?  In many cases, these vendors house sensitive and sometimes confidential data regarding an enterprise, so security is paramount.  One would suggest that anyone looking to engage an MSSP of any sort, especially one offering a critical SaaS, would perform due diligence in one form or another before signing on the dotted line.

That’s not really what I wanted to discuss, however.

What I *did* want to address was the comment in the article coming from Andy Kellett, an analyst for Burton, that read thusly:

"Security is probably less a problem than in the end-user organisations
because [on-demand app providers] are measured by the service they provide,"
Kellett argued.

I *think* I probably understand what he’s saying here…that security is "less of a problem" for an MSSP because the pressures of the implied penalties associated with violating an SLA are so motivating to get security "right" that the MSSP can do it far more effectively and efficiently than a customer could.

This is a selling point, I suppose?  Do you, dear reader, agree?  Does the implication of outsourcing security actually mean that you "feel" or can prove that you’re more secure or better secured than you could do yourself by using a SaaS MSSP?

"I don’t agree the end-user organisation’s pen tester of choice
should be doing the testing. The service provider should do it and make that
information available."

Um, why?  I can understand not wanting hundreds of scans against my service in an unscheduled way, but what do you have to hide?  You want me to *trust* you that you’re more secure or holding up your end of the bargain?  Um, no thanks.  It’s clear that this person has never seen the results of an internally generated PenTest and how real threats can be rationalized away into nothingness…

Clarence So of Salesforce.com agreed, adding that most chief information officers today understand that software-as-a-service (SaaS) vendors are able to secure data more effectively than they can themselves.

Really!?  It’s not just that they gave into budget pressures, agreed to transfer the risk and reduce OpEx and CapEx?  Care to generalize more thoroughly, Clarence?  Can you reference proof points for me here?  My last company used Salesforce.com, but as the person who inherited the relationship, I can tell you that I didn’t feel at all more "secure" because SF was hosting my data.  In fact, I felt more exposed.

"I’m sure training companies have their own motives for advocating the need
for in-house skills such as penetration testing," he argued. "But any
suggestions the SaaS model is less secure than client-server software are well
wide of the mark."

…and any suggestion that they are *more* secure is pure horsecock marketing at its finest.  Prove it.  And please don’t send me your SAS-70 report as your example of security fu.

So just to be clear, I believe in SaaS.  I encourage its use if it makes good business sense.  I don’t, however, agree that you will automagically be *more* secure.  You may be just *as* secure, but it should be more cost-effective to deploy and manage.  There may very well be cases (I can even think of some) where one could be more or less secure, but I’m not into generalizations.

Whaddya think?

/Hoff