Archive

Archive for September, 2007

Google Makes Its Move To The Corporate Enterprise Desktop – Can It Do It Securely?

September 10th, 2007 4 comments

Coming (securely?) soon to a managed enterprise desktop near you: Google Apps.  As discussed previously in my GooglePOP post demonstrating how Google will become the ASP of choice, outsourcing and IT consultancy Capgemini announced it is going to offer Google Apps as a managed SaaS desktop option to its corporate enterprise customers, the Guardian reports today:

Google has linked up with IT consultancy and outsourcing specialist CapGemini to target corporate customers with its range of desktop applications, in the search engine’s most direct move against the dominance of Microsoft.

CapGemini, which already runs the desktops of more than a million corporate workers, will provide its customers with "Google Apps" such as email, calendar, spreadsheets and word processing.

"Microsoft
is an important partner to us as is IBM," said the head of partnerships
at CapGemini’s outsourcing business, Richard Payling. "In our client
base we have a mix of Microsoft users and Lotus Notes users and we now
have our first Google Apps user. But CapGemini is all about freedom,
giving clients choice of the most appropriate technology that is going
to fit their business environment."

Google’s applications such as
its Google Docs word processing and spreadsheet service allow several
people to work on one document and see changes in real time.

"If
you look at the traditional desktop it is very focused on personal
productivity," said Robert Whiteside, Google enterprise manager, UK and
Ireland. "What Google Apps brings is team productivity."

…If you’re wondering how they’re going to make money from all this:

CapGemini will collect the £25 ($50) licence fee charged by Google for its applications, which launched in February.

It will make further revenues from helping clients use the new applications, providing helpdesk services and maintenance. It will also provide help with corporate security, especially for applications such as email, as well as storage and back-up services.

CapGemini expects customers to mix and match products, providing some users with expensive Microsoft tools and others with cheaper and lower-spec Google Apps.

You can check out the differences between the free and for-pay versions here.

Besides being a very good idea from an SaaS "managed services" perspective, it shows that Google (and global outsourcers) see a target market waiting to unfold in the corporate enterprise space based upon the collaboration sale.

What’s really interesting from a risk management perspective, continuing to ride the theme of Google’s Global Domination, is that Google’s SaaS play will draw focus on the application of security as regulatory compliance issues continue to bite at the heels of productivity gains offered by the utility of centrally hosted collaboration-focused toolsets such as GoogleApps.

Interestingly, Nick Carr points out that GoogleApps’ "outsourced" application hosting capability hasn’t caught on with the large corporate enterprise set largely due to "enterprise readiness," security and compliance concerns, a suggestion that Steve Jones, a Capgemini outsourcing executive who oversees the firm’s work with software-as-a-service applications, maintains is not an issue:

"[Carr] asked Jones about the commonly heard claim that Google Apps, while
fine for little organizations, isn’t "enterprise-ready." He scoffed at
the notion, saying that the objection is just a smokescreen that some
CIOs are "hiding behind." Google Apps, he says, is "already being used
covertly" in big companies, behind the backs of IT staffers. The time
has come, he argues, to bring Apps into the mainstream of IT management
in order to ensure that important data is safeguarded and compliance
requirements are met. Jones foresees "a lot of big companies"
announcing the formal adoption of Apps.

Remember, these applications and their data are hosted on Google’s infrastructure.  Think about the audit, privacy, security and compliance implications of that; folks who utilize ASP services are perhaps used to this, but the question is, what can Google do to show its hosting model is secure enough?  After all, Hoff’s 9th law applies:

[Image: Hoff’s 9th law]

Since Google’s app suite isn’t quite complete yet, Microsoft’s not in any immediate danger of seeing its $12 billion Office empire crumble, but it’s got to start somewhere…

/Hoff

Off to VMWorld 2007 Next Week…

September 8th, 2007 1 comment

Hey gang.

I’m off to VMWorld 2007 this coming week. 

I’ll unfortunately miss the first day on the 11th, but will be there the 12th and 13th. 

I’m really surprised and saddened by the apparent lack of security practitioner participation (by informal poll) — there are reportedly 10,000 attendees at this show, and given the impact virtualization is already having and will continue to have on our industry, I am surprised we won’t see more of you there.

When Cisco’s Chambers is keynoting, it’s likely somewhere you ought to attend, because it means something big is occurring, just in case you haven’t read those smoke signals before.

Ah well, guess it’s all up to me 😉

Any of you who may be attending, give me a ping via email; it comes to my phone.

See you there!

/Hoff

Categories: Travel Tags:

Security Haiku…Or Is It Alliterative Iambic Pentameter?

September 6th, 2007 13 comments

Uncle Mike suggested that I be tasked with something worthy of my "innovation" title.

I thought that while I let something else percolate around in my little brain, I should flex my creative muscle a little and demonstrate the value I add to the security community.

It’s all about giving back, people.

Had I adequately prepared, I would have had 3-4 coffees prior to writing this, but I’m in Reston, VA and it seems you need a jet car to get anywhere.  I should have chartered that chopper.

So I am stuck here, decaffeinated and trying to get this other idea out of my brain and down on "paper" before my head explodes.

(Read to the cadence of ‘Twas the Night Before Christmas)

Remember when firewalls were firewalls, my friend?
it suggested our security problems would end.
They promised the perimeter breach to abate,
but alas became products we just loved to hate.

The attackers got smarter, and the exploits malicious,
the perimeter’s holes made the threatscape pernicious.
Sadly the breaches were never quite stopped,
whilst we measured our value in per packets dropped!

IDS soon was added, let us know we were sunk
yet we kept buying more costly security junk.
So we took the bit blocking, tuned our IDS mess,
yet again our risk metrics still didn’t trend less

As we patiently waited for our career ascension,
it seems IDS died, but LONG LIVE PREVENTION!
While signatures worked and were certainly handy
NBA as a feature would surely be dandy.

We looked for the good stuff and blocked bad behavior,
but NBA wasn’t our security savior.
But now we blocked traffic all up/down the stack
we were sure to have something to repel an attack.

UTM came along, married IPS to AV,
our security god boxes hummed along merrily.
And finally it came, our salvation arrived
NAC promised to secure us from all the bad guys.

Pre-auth, and post-auth, we had tons of checks,
It still didn’t fix it, we need 802-dot-one-X!
Admission or Access, we must have control,
and deeper we went down the NAC rabbit hole.

So Cisco blew that one, and we all looked confused
should we turn on that feature that nobody used?
But relax, do not worry, we’ll secure that border,
find another new feature, want fries with that order?

Stand your watch, remain valiant, stand that post at your station,
for the next frontier’s here…YES!  Virtualization!
Like perimeter viagra, from our security Pfizer,
we’re all solid now, all hail…Hypervisor!

Blue Pills and Red Pills, detection’s a bust,
but protecting our VM’s security’s a must!
What to do, what to do…what next shall I add?
What new valley startup will become the next fad

Is it content, DRM, or perhaps DLP?
Ask Rothman, ask Mogull, just please, don’t ask me.

/Hoff

Categories: Jackassery Tags:

CIS Releases Virtual Machine Security Guidelines

September 5th, 2007 1 comment

The Center for Internet Security has released its v1.0 guidelines for generic virtual machine security.  I will say that this is a basic, concise and generally helpful overview of practical things one might consider when deploying, configuring and beginning to secure a virtual machine.

It also does a good job of describing general threat classes and mitigation considerations.
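The guidelines themselves are prose, but to give a flavor of the kind of practical check they encourage, here is a minimal Python sketch that scans a VMware-style .vmx configuration file for a few commonly recommended guest-isolation settings.  The specific keys and values are illustrative assumptions on my part, not items lifted from the CIS document.

import sys

# Illustrative settings only: these keys/values are assumptions, not the CIS checklist.
RECOMMENDED = {
    "isolation.tools.copy.disable": "TRUE",   # discourage guest<->host clipboard copy
    "isolation.tools.paste.disable": "TRUE",  # discourage guest<->host clipboard paste
    "logging": "TRUE",                        # keep per-VM logging enabled
}

def parse_vmx(path):
    """Parse a .vmx file (simple key = "value" lines) into a dict."""
    settings = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            settings[key.strip()] = value.strip().strip('"')
    return settings

def audit(path):
    """Report recommended settings that are missing or set to unexpected values."""
    settings = parse_vmx(path)
    for key, want in RECOMMENDED.items():
        have = settings.get(key, "<unset>")
        status = "OK" if have.upper() == want else "REVIEW"
        print(f"{status:6} {key} = {have} (recommended: {want})")

if __name__ == "__main__":
    audit(sys.argv[1])

The point isn’t these particular keys; it’s that this class of guidance lends itself to simple, repeatable configuration checks rather than one-off heroics.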

CIS’ summary and representation of this document, its scope and audience are accurately represented by this paragraph from the text:

Recommendations contained in the Products ("Recommendations") result from a consensus-building process that involves many security experts and are generally generic in nature. The Recommendations are intended to provide helpful information to organizations attempting to evaluate or improve the security of their networks, systems, and devices. Proper use of the Recommendations requires careful analysis and adaptation to specific user requirements. The Recommendations are not in any way intended to be a "quick fix" for anyone’s information security needs.

This first effort is focused on non-vendor-specific virtualization platforms, and CIS is planning on releasing a similar set of documents that speak specifically to securing VMware’s ESX virtualization platform.  They suggest they will also consider other virtualization platforms such as XenSource.

You can read more on the background of this work on the Computerworld Blog.

/Hoff

Categories: Virtualization Tags:

Oh, Wait…Now We Should Take Virtualization Security Seriously, Mr. Wittmann?

September 4th, 2007 7 comments

Back in April, when apparently virtualization and securing the mechanics thereof appeared not to be that interesting, Art Wittmann wrote a piece in Network Computing titled "Strategy Session: Server Consolidation: Just Do It"

You may remember that I responded rather vehemently to this article because of a quote that unreasonably marginalized the security impact that virtualization and consolidation have in the data center, and because it suggested that the security "hype" surrounding virtualization was due to "nattering nabobs of negativity" (that would be you and me) who were just being our old obstructionist security selves.  Art said:

"While the security threat inherent in virtualization is real, it’s also overstated."

Overstated? Here are a couple of other choice quotes from his article:

"That
leaves security as the final question.  You can bet that everyone who
can make a dime on questioning the security of virtualization will be
doing so; the drumbeat has started and is increasing in volume.

…which apparently meant that Art was dancing to a different beat, and…

If you can eliminate 10 or 20 servers running outdated versions of NT in favor of a single consolidated pair of servers, the task of securing the environment should be simpler or at least no more complex.  If you’re considering a server consolidation project, do it.  Be mindful of security, but don’t be dissuaded by the nattering nabobs of negativity."

I’m not sure Art ever deployed an ESX cluster with virtualized storage and networking, because if he had, I don’t think he would suggest that it’s "…simpler or at least no more complex."

Furthermore, in terms of security issues of late, I guess that besides the BluePill debacle, evading VM Jails and API exploitation just aren’t serious enough glimpses of what is coming down the pike to warrant concern?

Why am I dragging this back up to the surface?  Because I am one of those "nattering nabobs" who has spent the last year-plus drawing attention to the very issues Art previously suggested were overstated, issues that now fly proudly as a badge of honor on the NWC Virtualization Immersion Center Blog in a posting titled (strangely enough) "Taking Virtualization Security Seriously":

Virtualization security has been on the minds of a lot of IT folks lately. There’s no doubt that virtualization changes the security game – and because it involves new software – the potential for new exploits exists.

While I’m happy to see that Art has softened his tune and admitted that virtualization security is important and not "overstated," I find it ironic that he, himself, is now dancing to the same drumbeat that all of those money-hungry vendor scum and nattering nabobs were shuffling along to when we were just hyping this all up…

Now that I’ve gotten rid of that bitter little pill, I will say that I think Joe Hernick’s article titled "Virtualization Security Heats Up" (he seems to write for InformationWeek as well) did a good job of summarizing what I’ve been writing about for the last year specifically regarding virtualization security, and you should read it…but be warned, you might come away feeling a little less secure.

If you want to replay the most recent articles I wrote regarding virtualization and security, you can check out the listing here. I’m glad that Art and Crew are drawing attention to virtualization and the security ramifications thereof.  That’s a good thing.

/Hoff

Categories: Virtualization Tags:

Generalizing About Security/Privacy as a Competitive Advantage is a Waste of Perfectly Good Electrons

September 4th, 2007 6 comments

Curphey gets right to the point in this blog post by decrying that security and privacy do not constitute a competitive advantage for the companies that invest in them, because consumers have shown time and time again that, despite breaches of security, privacy and trust, they continue to do business with those companies.  I think.

He tends to blur the lines between corporate and consumer "advantage" without really defining either, but does manage to go so far as to hammer the point home with allegory that unites the arguments of security ROI, global warming and the futility of IT overall.  Time for coffee and some happy pills, Mark? 😉

Just for reference, let’s see how those goofy Oxfordians define "advantage":

advantage |ədˈvantij| noun a condition or circumstance that puts one in a favorable or superior position : companies with a computerized database are at an advantage | she had an advantage over her mother’s generation. • the opportunity to gain something; benefit or profit : you could learn something to your advantage | he saw some advantage in the proposal. • a favorable or desirable circumstance or feature; a benefit : the village’s proximity to the town is an advantage. • Tennis a player’s score in a game when they have won the first point after deuce (and will win the game if they win the next point). verb [ trans. ] put in a favorable or more favorable position.

Keep that in your back pocket for a minute.

OK, Mark, I’ll bite:

Many security vendors army of quota carrying foot soldiers brandish their excel sheets that prove security is important and why you should care. They usually go on to show irrefutable numbers demonstrating security ROI models and TCO. I think its all “bull shitake”!

…and those armies of security drones are fueled by things like compliance mandates put forth by legislation as a direct result of things like breaches, so it’s obviously important to someone.  Shitake or not, those "someones" are also buying.

You’ve already doomed this argument by polarizing it with the intractable death ray of ROI.  We’ve already gone ’round and ’round on the definition of "value" as it relates to ROI and security, so a good majority of folks have already signed off and aren’t reading past this point…yet I digress.

Wired has the scoop:

Privacy is fast becoming the trendy concept in online marketing. An increasing number of companies are flaunting the steps they’ve taken to protect the privacy of their customers. But studies suggest consumers won’t pay even 25 cents to protect their data.

Why should consumers pay anything to protect their data!?  Security and privacy are table-stakes expectations (see below) on the consumer front.  Companies invest millions in security and compliance initiatives driven by legislation brought on by representatives in local, state and federal government to help make it so.  Furthermore, if someone utilizes my credit card to commit fraud, I’m not responsible; it’s written off!  If you change the accountability model, you can bet consumers would be a little more concerned with protecting their data.  I wager they’d pay a hell of a lot more than $0.25 for it, too.

They aren’t, because despite being inconvenienced, they don’t care.  They don’t have to.  But before you assume I’m just agreeing with your point, read on.

After the TJX debacle I remember seeing predictions that people will vote with their feet. Of course they didn’t, sales actually went up 9%. The same argument was made for Ruby Tuesdays who lost some credit cards. It just doesn’t happen. Lake Chad and disasters on a global scale continue to plague us due to climate change yet still people refuse to stop buying SUV’s.

See previous paragraph above.   When bad things happen, consumers expect that someone will put the hammer down and things will get better.  New legislation.  More safeguards.  Extended protection. They often do. 

Furthermore, with your argument, one could suggest that security/privacy have now become a competitive advantage for TJX since, given their uptake and revenues, the following definition seems to apply:

Competitive advantage (CA) is a position that a firm occupies in its competitive landscape. Michael Porter posits that a competitive advantage, sustainable or not, exists when a company makes economic rents, that is, their earnings exceed their costs (including cost of capital). That means that normal competitive pressures are not able to drive down the firm’s earnings to the point where they cover all costs and just provide minimum sufficient additional return to keep capital invested. Most forms of competitive advantage cannot be sustained for any length of time because the promise of economic rents drives competitors to duplicate the competitive advantage held by any one firm.

It looks to me that, based upon your argument, TJX benefited not only from their renewed investment in security/privacy but from the breach itself!  I think the last statement resonates with the Carr commentary you cite (below), but you aren’t talking about "sustainable" competitive advantage.  Or are you?

Right, wrong or indifferent, this is how it works.  Corporate incrementalism is an acceptable go-to-market strategy for bolstering one’s position over a competitor; it’s the entire long-tail approach to marketing.  You can’t be surprised by this?

This is why we have hybrid SUV’s now…

Nicholas Carr discusses this in IT Doesn’t Matter. To start with technologies can become competitive differentials like the railroads or the telephone. But once everyone has it, the paying field levels and it becomes table stakes. Its a competitive disadvantage if you aren’t in the game (i.e. insecure) but the economic cost of developing a service or technology that is so compelling as to become an advantage ain’t on the radar (for the most part).

So, getting back to what I thought was your original premise, and escaping the low-earth orbit of the affliction of the human condition, global warming and ROI… 🙁

For the sake of argument, let’s assume that I agree with your lofty generalizations that security and privacy do not represent a competitive advantage.  Please turn off your firewall now.  Deactivate your anti-virus and anti-spam.  Turn off that IDS/IPS.  Remove those WebApp firewall-enabled load balancers…

Yes, IT (and security/privacy) are table stakes (as I established above) but NOT having them would be a competitive disadvantage. THAT is the point.  It’s a referential argument and a silly one at that.

…almost as silly as suggesting that you shouldn’t try to measure the effectiveness of security; it seems that people want to hang language on these topics and debate that instead of the core issue itself.

The threat models dictate how investments are made and how they are perceived to be advantageous or not.  They’re also cyclical and temporal, so over time, their value depreciates until the next wave requires more investment.  Basic economics.

Generalizing about security and privacy as not being competitive advantages is a waste of time.  I’d love to see an ad from a company that says they’re NOT investing in security and privacy and that their Corporate credo is "screw it, you don’t care, anyway…"

I’m going to get on my bike and ride down to the store to buy a cup of coffee with my credit card now…

/Hoff

How the DOD/Intel Communities Can Help Save Virtualization from the Security Trash Heap…

September 3rd, 2007 5 comments

If you’ve been paying attention closely over the last year or so, you will have noticed louder-than-normal sucking sounds coming from the virtualization sausage machine as it grinds the various ingredients driving virtualization’s re-emergence and popularity together to form the ideal tube of tasty technology bologna. 

{I rather liked that double entendre, but if you find it too corrosive, feel free to substitute your own favorite banger marque in its stead. 😉 }

Virtualization is a hot topic; from clients to servers, applications to datastores, and networking to storage, virtualization is coming back full-circle from its MULTICS and LPAR roots and promises to change everything.  Again.

Unfortunately, one of the things virtualization isn’t changing quickly enough (for my liking) and in enough visible ways is the industry’s approach to engineering security into the virtualization product lifecycle early enough to allow us to deploy a more secure product out of the box.   

Sadly, most of the commercial virtualization offerings as well as the open source platforms have lacked much in the way of guidance as to how to secure VMs beyond the common-sense approach of securing non-virtualized instances, and the security industry has been slow to produce more than a few innovative solutions to the problems virtualization introduces or in some ways intensifies.

You can imagine then the position that leaves customers.

I’m from the Government and I’m here to help…

However, here’s where some innovation from what some might consider an unlikely source may save this go-round from another security wreck stacked in the IT boneyard: the DoD and Intelligence communities and a high-profile partnering strategy for virtualized security.

Both the DoD and Intel agencies are driven, just like the private sector, to improve efficiency, cut costs, consolidate operationally and still maintain an ever vigilant high level of security.

An example of this dictate is the Global Information Grid (GIG).  The GIG represents:

"…a net-centric system
operating in a global context to provide processing, storage,
management, and transport of information to support all Department of
Defense (DoD), national security, and related Intelligence Community
missions and functions-strategic, operational, tactical, and
business-in war, in crisis, and in peace.

GIG capabilities
will be available from all operating locations: bases, posts, camps,
stations, facilities, mobile platforms, and deployed sites. The GIG
will interface with allied, coalition, and non-GIG systems.

One of the core components of the GIG is building the capability and capacity to securely collapse and consolidate what are today physically separate computing enclaves (computers, networks and data) based upon the classification, sensitivity and clearances of information and personnel which govern the access to data by those who try to access it.

Multi-Level Security Marketing…

This represents the notion of multilevel security or MLS.  I am going to borrow liberally from this site authored by Dr. Rick Smith to provide a quick overview, as the concepts and challenges of MLS are really critical to fully appreciate what I’m about to describe.  Oddly enough, the concept and work is also 30+ years old and you’d recognize the constructs as being those you’ll find in your CISSP test materials…You remember the Bell-LaPadula model, don’t you?

The MLS Problem

We use the term multilevel because the defense community has classified both people and information into different levels of trust and sensitivity. These levels represent the well-known security classifications: Confidential, Secret, and Top Secret. Before people are allowed to look at classified information, they must be granted individual clearances that are based on individual investigations to establish their trustworthiness. People who have earned a Confidential clearance are authorized to see Confidential documents, but they are not trusted to look at Secret or Top Secret information any more than any member of the general public. These levels form the simple hierarchy shown in Figure 1. The dashed arrows in the figure illustrate the direction in which the rules allow data to flow: from "lower" levels to "higher" levels, and not vice versa.

Figure 1: The hierarchical security levels
 

When speaking about these levels, we use three different terms:

  • Clearance level indicates the level of trust given to a person with a security clearance, or a computer that processes classified information, or an area that has been physically secured for storing classified information. The level indicates the highest level of classified information to be stored or handled by the person, device, or location.
  • Classification level indicates the level of sensitivity associated with some information, like that in a document or a computer file. The level is supposed to indicate the degree of damage the country could suffer if the information is disclosed to an enemy.
  • Security level is a generic term for either a clearance level or a classification level.

The defense community was the first and biggest customer for computing technology, and computers were still very expensive when they became routine fixtures in defense organizations. However, few organizations could afford separate computers to handle information at every different level: they had to develop procedures to share the computer without leaking classified information to uncleared (or insufficiently cleared) users. This was not as easy as it might sound. Even when people "took turns" running the computer at different security levels (a technique called periods processing), security officers had to worry about whether Top Secret information may have been left behind in memory or on the operating system’s hard drive. Some sites purchased computers to dedicate exclusively to highly classified work, despite the cost, simply because they did not want to take the risk of leaking information.

Multiuser systems, like the early timesharing systems, made such sharing particularly challenging. Ideally, people with Secret clearances should be able to work at the same time others were working on Top Secret data, and everyone should be able to share common programs and unclassified files. While typical operating system mechanisms could usually protect different user programs from one another, they could not prevent a Confidential or Secret user from tricking a Top Secret user into releasing Top Secret information via a Trojan horse.

When a user runs the word processing program, the program inherits that user’s access permissions to the user’s own files. Thus the Trojan horse circumvents the access permissions by performing its hidden function when the unsuspecting user runs it. This is true whether the function is implemented in a macro or embedded in the word processor itself. Viruses and network worms are Trojan horses in the sense that their replication logic is run under the context of the infected user. Occasionally, worms and viruses may include an additional Trojan horse mechanism that collects secret files from their victims. If the victim of a Trojan horse is someone with access to Top Secret information on a system with lesser-cleared users, then there’s nothing on a conventional system to prevent leakage of the Top Secret information. Multiuser systems clearly need a special mechanism to protect multilevel data from leakage.
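To make the hierarchy and the "data flows up, not down" rule concrete, here is a minimal Python sketch of the two classic Bell-LaPadula checks referenced earlier: the simple-security property ("no read up") and the *-property ("no write down").  It illustrates the model only; it is not a depiction of any particular MLS implementation.

from enum import IntEnum

class Level(IntEnum):
    """The hierarchical security levels from Figure 1 (higher value = more sensitive)."""
    UNCLASSIFIED = 0
    CONFIDENTIAL = 1
    SECRET = 2
    TOP_SECRET = 3

def can_read(subject_clearance: Level, object_classification: Level) -> bool:
    """Simple-security property ("no read up"): a subject may only read
    objects classified at or below its clearance."""
    return subject_clearance >= object_classification

def can_write(subject_clearance: Level, object_classification: Level) -> bool:
    """*-property ("no write down"): a subject may only write to objects at
    or above its own level, so data can flow from lower levels to higher levels only."""
    return object_classification >= subject_clearance

# A Confidential-cleared user can read Confidential data but not Secret data,
# and a Secret-cleared subject (or a Trojan horse running as that subject)
# cannot write what it knows down into an Unclassified file.
assert can_read(Level.CONFIDENTIAL, Level.CONFIDENTIAL)
assert not can_read(Level.CONFIDENTIAL, Level.SECRET)
assert not can_write(Level.SECRET, Level.UNCLASSIFIED)

The second check is exactly what defeats the Trojan horse scenario described above: even if malicious code runs with a Top Secret user’s permissions, it has no path to push that data down to a lower level.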

Think about the challenges of supporting modern-day multiuser Windows operating systems (virtualized or not) together on a single compute platform while also consolidating multiple networks of various classifications (including the Internet) onto a single network transport, all with ZERO tolerance for breach.

What’s also different here from the compartmentalization requirements of "basic" virtualization is that the segmentation and isolation are critically driven by the classification and sensitivity of the data itself and the clearance of those trying to access it.

To wit:

VMware and General Dynamics are partnering to provide the NSA with the next evolution of their High Assurance Platform (HAP) to solve the following problem:

… users with multiple security clearances, such as members of the U.S. Armed Forces and Homeland Security personnel, must use separate physical workstations. The result is a so-called "air gap" between systems to access information in each security clearance level in order to uphold the government’s security standards.

VMware said it will provide an extra layer of security in its virtualization software, which lets these users run the equivalent of physically isolated machines with separate levels of security clearance on the same workstation.

HAP builds on the current solution based on VMware, called NetTop, which allows simultaneous access to classified information on the same platform in what the agency refers to as low-risk environments.

For HAP, VMware has added a thin API of fewer than 5,000 lines of code to its virtualization software that can evolve over time. NetTop is more static and has to go through a lengthy re-approval process as changes are made. "This code can evolve over time as needs change and the accreditation process is much quicker than just addressing what’s new."

HAP encompasses standard Intel-based commercial hardware that could range from notebooks and desktops to traditional workstations. Government agencies will see a minimum 60 percent reduction in their hardware footprints and greatly reduced energy requirements.

HAP will allow for one system to maintain up to six simultaneous virtual machines. In addition to Windows and Linux, support for Sun’s Solaris operating system is planned."

This could yield some readily apparent opportunities for improving the security of virtualized environments in many sensitive applications.  There are also other products on the market that offer this sort of functionality, such as Googun’s Coccoon and Raytheon’s Guard offerings, but they are complex and costly and geared for non-commercial spaces.  Also, with VMware’s market-force dominance and near ubiquity, this capability has the real potential of bleeding over into the commercial space.

Today we see MLS systems featured in low risk environments, but it’s still not uncommon to see an operator tasked with using 3-4 different computers which are sometimes located in physically isolated facilities.

While this offers a level of security that has physical air gaps to help protect against unauthorized access, it is costly, complex, inefficient and does not provide for the real-time access needed to support the complex mission of today’s intelligence operatives, coalition forces or battlefield warfighters.

It may sound like a simple and mundane problem to solve, but in today’s distributed and collaborative Web 2.0 world (which the DoD/Intel crowd is beginning to utilize) we find it more and more difficult to achieve.  Couple the information compartmentalization issue with the recent virtualization security grumblings: breaking out of VM jails, hypervisor rootkits and exploiting VM APIs for fun and profit…

This functionality has many opportunities to provide for more secure virtualization deployments that will utilize MLS-capable OS’s in conjunction with strong authentication, encryption, memory firewalling and process isolation.  We’ve seen the first steps toward that already.

I look forward to what this may bring to the commercial space and the development of more secure virtualization platforms in general.  It’s building on decades of work in the information assurance space, but it’s getting closer to being cost-effective and reasonable enough for deployment.

/Hoff

Reflections on Recent Failures in the Fragile Internet Ecosystem Due to Service Monoculture…

September 2nd, 2007 3 comments

Our digital lives and the transactions that enable them are based upon crumbling service delivery foundations and we’re being left without a leg to stand on…

I’ve blogged about this subject before, and it’s all a matter of perspective, but the latest high-profile Internet-based service failure which has had a crippling effect on users dependent upon its offerings is PayPal.

Due to what looks to be a recent roll-out of code gone bad, subscription payment processing went belly-up. 

On September 1st, PayPal advised those affected that the issue should be fixed "…by September 5 or 6, and that all outstanding subscription payments would be collected."  That’s 4 days on top of the downtime sustained already.

This has been a tough last few weeks for parent company eBay, as one of its other famous children, Skype, suffered its own highly visible flame-out due to an issue the company blamed on overwhelmed infrastructure following a Microsoft Patch Tuesday download.  This outage left several million users who were "dependent" upon Skype for communicating with others without a service to do so.

This is getting to the point that the services we take for granted as always being up are showing their vulnerable side, for lots of different reasons.  Some of these services are free, which introduces a confusing debate about service levels and availability when one doesn’t pay for said service.

The failures are increasing in frequency and downtime.  Scarier still is that I now count five recent service failures in the last four months that have affected me directly.  Not all of them are Internet-based, but they indicate a reliance on networked infrastructure that is obviously fragile:

1) United Airlines Flight Operations Computer System Failure
2) San Francisco Power Grid Failure
3) LAX Passenger Screening System Computer System Failure
4) Skype Down for Days, and finally…
5) PayPal Subscription Processing Down

That’s quite a few, isn’t it?  Did you realize these were all during the last few months?

Most of these failures caused me inconvenience at worst: some missed flights, inability to blog, failed subscription processing for web services, inability to communicate with folks…none of them life-threatening, and none of them dramatically impacting my ability to earn a wage.  But that’s me and my "luck."  Other people have not been so lucky.

Some have reasonably argued that these services do not represent "critical" infrastructure and at the level of things such as national defense, health and safety, etc. I’d have to agree.  But they could, and if our dependence on these services increases, they will.

As these services evolve and enable the economic plumbing of an entire generation of folks who expect ever-presence and conduct the bulk of their lives online, this sort of thing will turn from an inconvenience to a disaster. 

Even more interesting is a number of these services are now owned and delivered by what I call service monocultures; eBay provides not only the auction services, but PayPal and Skype, too.  Google gives you mail, apps, search, video, ads and soon wireless and payment.

While the investment these M&A/consolidation activities generates means bigger and better services, it also increases the likelihood of cascading failure domains in an ever-expanding connectedness, especially when they are operated by a single entity.
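As a back-of-the-envelope illustration of that point (a toy model with made-up numbers, not a claim about any particular provider’s architecture), compare two services that fail independently with two that also share a common owner or dependency:

def p_both_down_independent(p_a: float, p_b: float) -> float:
    """Probability both services are down at once when their failures are independent."""
    return p_a * p_b

def p_both_down_shared(p_a: float, p_b: float, p_shared: float) -> float:
    """Probability both are down when, on top of their own independent failures,
    both depend on a common component that fails with probability p_shared."""
    return p_shared + (1 - p_shared) * (p_a * p_b)

# Toy numbers: each service on its own is down 0.1% of the time.
print(p_both_down_independent(0.001, 0.001))   # 1e-06
print(p_both_down_shared(0.001, 0.001, 0.001)) # ~1e-03, dominated by the shared dependency

The shared dependency dominates: consolidation buys efficiency, but it also correlates your outages.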

There’s a lot of run-and-gun architecture servicing these utilities in the software-driven world that isn’t as resilient as it ought to be, up and down the stack.  We haven’t even scratched the tip of the iceberg on this one, folks…it’s going to get nasty.  Web 2.0 is just the beginning.

I think we’d have a civil war if YouTube, FaceBook, Orkut or MySpace went down.

What would people do without Google if it were to disappear for 2-3 days?

Yikes.

Knock on (virtual) wood.

/Hoff

Categories: Software as a Service (SaaS) Tags:

Failure Modality Responses Different in Firewalls versus IPS devices?

September 2nd, 2007 9 comments

I got an email this last week from a former co-worker that I found philosophically interesting (if not alarming).  It was slightly baited, but the sender is a smart cookie who was obviously looking for a little backup.

Not being one to shy away from discourse (or a good old-fashioned geek debate on security philosophy) I pondered the topic.

Specifically, the query posed was centered on a suggested diametrically-opposed set of opinions on how, if at all, IPS devices and firewalls ought to behave differently when they fail:

I was having a philosophical discussion with [He who shall not be named]
today about uptime expectations of IPS vs. Firewall. The discussion was
in reference to a security admin's expectation of IPS "upness" vs. Firewall's.


Basic question: if a firewall goes down we naturally expect it to BLOCK
all traffic. However, if an IPS goes down, the prevailing theory is that
the IPS should ALLOW all traffic, or in other words fail open.

[He who shall not be named] says this is because best practices say that
a firewall is a default DENY ALL device, whereas an IPS is a default ALLOW ALL
device.


My thinking is trying to be a little more progressive. If Firewalls
protect at Layer 3 and IPSes at L4-7, then why would you open yourself
up at L4-7 when the device fails? I know that the concept of "firewall"
is morphing these days especially to include more L4-7 inspection. But
the question is the same. Are security admins starting to consider
protocol and payload analysis as important as IP and Port protection? Or
are we all still playing with sticks and fire in the mud?

I know you're all focused on virtualization these days, but how about a
good old religious firewall debate!

I responded to this email with my own set of beliefs and foundational arguments which challenged several of the statements above, but I’m interested in two things from you, dear reader, and hope you’ll comment back with your opinions:

  1. Do you recognize that there are two valid perspectives here?  Would you fail open on one and closed on another?
  2. If your answer to question #1 is yes, which do you support and why?

You can assume, for the sake of argument, that you have only a firewall, only an IPS, or both devices in-line with one another.   Talk amongst yourselves…
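If it helps frame the debate, here is a minimal, hypothetical Python sketch of the two dispositions in an inline device: the packet path wraps the inspection engine, and a single policy knob decides whether traffic passes (fail open) or drops (fail closed) when the engine itself dies.  The names and structure are mine, not any vendor’s implementation.

from enum import Enum

class FailureMode(Enum):
    FAIL_OPEN = "open"      # the posture commonly argued for IPS: availability first
    FAIL_CLOSED = "closed"  # the posture commonly argued for firewalls: containment first

def handle_packet(packet: bytes, inspect, mode: FailureMode) -> bool:
    """Return True to forward the packet, False to drop it.

    `inspect` is the detection/enforcement engine; if it blows up, the
    disposition falls back to the configured failure mode instead of
    leaving the decision undefined.
    """
    try:
        return inspect(packet)
    except Exception:
        return mode is FailureMode.FAIL_OPEN

# The same wrapper gives two very different answers once the engine breaks.
def broken_engine(packet: bytes) -> bool:
    raise RuntimeError("signature engine wedged")

print(handle_packet(b"GET / HTTP/1.1", broken_engine, FailureMode.FAIL_OPEN))    # True: traffic flows
print(handle_packet(b"GET / HTTP/1.1", broken_engine, FailureMode.FAIL_CLOSED))  # False: traffic blocked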

General comments on the setup are also welcomed 😉

/Hoff


Categories: Firewalls Tags:

So a Bayesian and an Objectivist Walk Into a Bar…

September 2nd, 2007 1 comment

As Shrdlu and CWalsh point out, there’s a fight brewin’ and it’s a good one.

A pugilistic pummeling of perplexing probability proportions!

Cage match.  Two men enter, one man leaves.

Bejtlich and Hutton.

Pay-Per-View?

/Hoff

Categories: Risk Management Tags: