Archive for May, 2007

The Last Word on Schneier’s “Why Security Shouldn’t Matter” Post…

May 10th, 2007 No comments

All this brouhaha over Schneier’s commentary in Wired regarding the existence of and need for IT Security is addressed brilliantly by Paul McNamara here.  Read it and let Bruce get back to posting about bombs, the government and giant squids, won’t you?

Anyone else who took the bait (as Bruce designed, obviously) and actually attempted to argue against what were admittedly unarguable, circuitous and rhetorical sets of disjointed constructs paid service and tribute to the process as designed.  There’s one born every minute.  Yes, this is a candidate for the "Captain Obvious Award" and Bruce is no dummy, but obviously some of us who read this stuff and treat everything as a literal next-action need to chill.

Obviously Bruce has made a career from IT Security — and he recently sold his company to another that hopes to do the same, so accept the piece for what it is: a provocation to challenge the status quo and improve Technorati ratings 😉

This piece was meant to agitate us, as was Art Coviello’s address at RSA wherein he stated that the security industry would cease to exist within three years.

Thinking about this stuff is good for business — in all senses.

/Hoff

Categories: General Rants & Raves Tags:

Liability of Reverse Engineering Security Vulnerability Research?

May 8th, 2007 5 comments

(Ed.: Wow, some really great comments came out of this question.  I did a crappy job framing the query, but there exists a cohesiveness to both the comments and private emails I have received that shows there is confusion in both the terminology and execution of reverse engineering.

I suppose the entire issue of reverse engineering legality can just be washed away by what appeared logical to me and what I stated in the first place — there is no implied violation of an EULA or IP if one didn’t agree to it in the first place (duh!), but I wanted to make sure that my supposition was correct.)

I have a question that hopefully someone can answer for me in a straightforward manner.  It popped into my mind yesterday during an unrelated matter, and perhaps it’s one of those obvious questions, but I’m not convinced I’ve ever seen an obvious answer.

If I, as an individual or as a representative of a company that performs vulnerability research and assurance, engage in reverse engineering of a product that is covered by patent/IP protection and/or EULAs that expressly forbid reverse engineering, how would I deflect liability for violating these terms if I disclose that I have indeed engaged in reverse engineering?

HID and Cisco have both shown that when backed into a corner, they will litigate, and the researcher and/or company is forced to either back down or defend (usually the former).  (Ed.: Poor examples, as these do not really fall into the same camp as the example I give below.)

Do you folks who do this for a living (or own/manage a company that does) simply count on the understanding that if one can show "purity" of non-malicious motivation, nothing bad will occur?

It’s painfully clear that the slippery slope of full-disclosure plays into this, but help me understand how the principle of the act (finding a vulnerability and telling the company/world about it) outweighs the liability involved.

Do people argue that if you don’t purchase the equipment you’re not covered under the EULA?  I’m trying to rationalize this.  How does one side-step the law in these cases without playing Russian Roulette?

Here’s an example of what I mean.  If you watch this video, the researchers who demonstrated the Cisco NAC attack @ Black Hat clearly articulate the methods they used to reverse engineer Cisco’s products.

I’m not looking for a debate on the up/downside of full disclosure, but more specifically the mechanics of the process used to identify that a vulnerability exists in the first place — especially if reverse engineering is used.

Perhaps this is a naive question or an uncomfortable one to answer, but I’m really interested.

Thanks,

/Hoff

Cisco as a Bellwether…where’s all the commentary?

May 7th, 2007 4 comments

(Ed.: I wanted to clarify that issues external to security vulnerabilities and advanced technology most definitely caused the impact and commentary noted here — global economic dynamics notwithstanding, I’m just surprised at the lack of chatter around the ol’ Blogosphere on this.)

From the "I meant to comment on this last week" Department…

A couple of weeks ago, analyst reports announced that Cisco was indicating a general slow-down of its enterprise business and was placing pressure on the service provider business units to make up the difference.  Furthermore, deep discounts to the channel and partners were crafted in order to incentivize Q2 customer purchases:

Cisco is headed for a disappointing quarter, according to a cautionary research note issued Monday from a research analyst, reports Barron’s Online.

Samuel Wilson, an analyst at JMP Securities, writes that the slowdown in U.S. enterprise business during Cisco’s fiscal second quarter has continued into its current quarter, according to Barron’s.

According to the Barron’s story: "Wilson writes that ‘according to resellers, top Cisco sales staff have recently expressed concerns about making their April quarter numbers.’ He says that the company has apparently increased ‘partner-focused incentives’ designed to shift business in from the July quarter. ‘Based on the past three months, many resellers now believe that U.S. enterprises have begun to delay discretionary spending above and beyond normal seasonality typical of the [calendar] first quarter.’"

Wilson also wrote that Cisco has cut headcount and expenses in its enterprise switching business unit. He forecasts Cisco’s fiscal third quarter revenue to be $38.1 billion, down from the consensus estimates of $39.4 billion, according to Barron’s.

Given how Cisco is a bellwether stock for not only IT but in many cases an indicator of overall enterprise spend trends, why isn’t there more concern in the air?  Maybe it’s just rumor and innuendo, but when analysts start issuing cautionary notes about Mr. Chambers’ neighborhood, they’re usually pretty conservative.

Rothman practically needed a Wet-Nap when he commented on Cisco’s Q1 announcement (Cisco Takes it to the Next Level) but nary a word from the "All things including the kitchen sink will go into a Cat65K" camp on this news?  What, no gleeful prognostication on rebounds or doom?

Interestingly, from here, Goldman advises buying ahead of the Q3 announcement:

We believe that management will put concerns around slower U.S. large cap tech spending to rest. It represents only 13% of sales and we believe is seeing indications of a rebound. We believe management is likely to reaffirm positive longer-term trends in emerging markets, new technologies and the impact of video on networks as key drivers of sustained double-digit top-line growth.

We’ll see.  Focusing on all the advanced technology projects while neglecting core competencies can bite a company — even Cisco — when it least expects it.  Couple that with the continued vulnerabilities in their security products (another one today across PIX/ASA) and I’d say folks might start talking…

I wonder how the security products have weathered all this?

…but that’s just me.  Lash away, boys.

/Hoff

Categories: Cisco, Information Security Tags:

Clean Pipes – Less Sewerage or More Potable Water?

May 6th, 2007 2 comments

Jeff Bardin over on the CSO blog planted an interesting stake in the ground when he posited "Connectivity As A Utility: Where are My Clean Pipes?"

Specifically, Jeff expects that his (corporate?) Internet service functions in the same manner as his telephone service via something similar to a "do not call list."  Basically, he opts out by placing himself on the no-call list and telemarketers cease to call. Others might liken it to turning on a tap and getting clean, potable water; you pay for a utility and expect it to be usable.  All of it.

Many telecommunications providers want to charge you for having clean pipes, deploying a suite of DDoS services that you have to buy to enhance your security posture. Protection of last mile bandwidth is very key to network availability as well as confidentiality and integrity. If I am subscribing for a full T1, shouldn’t I get the full T1 as part of the price and not just a segment of the T1? Why do I have to pay for the spam, probes, scans, and malicious activity that my telecommunications service provider should prevent at 3 miles out versus my having to subscribe to another service to attain clean pipes at my doorstep?

I think that most people would agree with the concept of clean pipes in principle.  I can’t think of any other utility where the service levels delivered are taken with such a lackadaisical, best-effort approach and where the consumer can almost always expect that some amount (if not the majority) of the utility is unusable.

Over the last year, I’ve met with many of the largest ISPs, MSSPs, telcos and mobile operators on the planet, and all are in some phase of deploying some sort of clean pipes variant.  Gartner even predicts that a large amount of security will move "into the cloud."

In terms of adoption, EMEA is leaps and bounds ahead of the US and APAC in these sorts of services and will continue to be.  The relative oligopolies associated with smaller nation-states allow for much more agile and flexible service definition and roll-outs — no less complex, mind you.  It’s incredible to see just how wide the gap is between what consumers (SME/SMB/mobile as well as large enterprise) are offered in EMEA as opposed to the good-ol’ U S of A.

However, the stark reality is that the implementation of clean pipes by your service provider(s) comes down to a balance of two issues: efficacy and economics, with each varying dramatically with the market being served; the large enterprise’s expectations and requirements look very, very different from the SME/SMB.

Let’s take a look at both of these elements.

ECONOMICS

If you asked most service providers about so-called clean pipes up to a year ago, you could expect to get an answer based upon a "selfish" initiative aimed at stopping wasteful bandwidth usage upstream in the service provider’s network, not really protecting the consumer.

The main focus here is really on DDoS and virus/worm propagation.  Today, the closest you’ll come to "clean pipes" is usually some combination of the following services, deployed both (still) at the customer premises as well as somewhere upstream:

  • DoS/DDoS
  • Anti-Virus
  • Anti-Spam
  • URL Filtering/Parental Controls
  • Managed Firewall/IDS/IPS
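
Of that list, DoS/DDoS is the piece that most obviously belongs upstream, and at its heart it is just traffic-profile enforcement.  Here is a minimal sketch of the token-bucket primitive that underlies most rate-based scrubbing (my toy illustration, not any particular provider’s implementation; all rates and burst sizes are invented):

```python
# Toy token-bucket rate limiter: the classic primitive behind upstream
# DoS/DDoS scrubbing. Rates and burst sizes are invented for illustration.
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: float):
        self.rate = rate_per_sec      # tokens replenished per second
        self.capacity = burst         # maximum burst the profile allows
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        """Return True if a packet costing 'cost' tokens may pass."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False                  # over profile: shed upstream

# One bucket per subscriber: excess traffic is dropped before it ever
# consumes the customer's last-mile bandwidth.
bucket = TokenBucket(rate_per_sec=1_000, burst=5_000)
dropped = sum(1 for _ in range(20_000) if not bucket.allow())
print(f"shed {dropped} of 20000 packets in a flood")
```

The point of doing this three miles out rather than at your doorstep is simply that dropped packets upstream never touch the T1 you paid for.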

What is interesting about these services is that they basically define the same functions you can now get in those small little UTM boxes that consolidate security functionality at the "perimeter."  The capital cost of these devices and the operational cost of their upkeep are pretty close in the SME/SMB, and when you balance what you get in "good enough" services for this market against the overall availability of these "in the cloud" offerings, UTM makes more sense for many in the near term.

For the large enterprise, the story is different.  Outsourcing some level of security to an MSSP (or perhaps even the entire operation) or moving some amount upstream is a matter of core competence: letting internal teams focus on the things that matter most while the low-hanging fruit is filtered out and monitored by someone else.  I describe that as filtering out the lumps.  Some enormous companies have outsourced not only their security functions but their entire IT operations and data center assets in this manner.  It’s not pretty, but it works.

I’m not sure they are any more secure than they were before, however.  The risk simply was transferred whilst the tolerance/appetite for it didn’t change at all.  Puzzling.

Is it really wrong to think that companies (you’ll notice I said companies, not "people" in the general sense) should pay for clean pipes?  I don’t think it is.  The reality is that for non-commercial subscribers such as home users, broadband or mobile users, some amount of bandwidth hygiene should be free — the potable water approach.

I think, however, that a company which expects elevated service levels, and commensurate guarantees of such, should expect to ante up for more secure connectivity.  Why?  Because the investment required to deliver this sort of service costs a LOT of money — both to spin up and to sustain over time.  You’re going to have to pay for that somewhere.

I very much like Jeff’s statistics:

We stop on average for our organization nearly 600 million malicious emails per year at our doorstep averaging 2.8 gigabytes of garbage per day. You add it up and we are looking at nearly a terabyte of malicious email we have to stop. Now add in probes and scans against HTTP and HTTPS sites and the number continues to skyrocket.
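
A quick back-of-envelope check, using only the figures quoted above, shows the numbers hang together:

```python
# Back-of-envelope check of the figures quoted above.
emails_per_year = 600_000_000    # "nearly 600 million malicious emails per year"
garbage_gb_per_day = 2.8         # "2.8 gigabytes of garbage per day"

gb_per_year = garbage_gb_per_day * 365              # ~1,022 GB
tb_per_year = gb_per_year / 1024                    # ~1.0 TB: "nearly a terabyte"
kb_per_email = gb_per_year * 1024 * 1024 / emails_per_year  # ~1.8 KB average

print(f"{tb_per_year:.2f} TB/year, ~{kb_per_email:.1f} KB per blocked email")
```

That works out to roughly 1.6 million blocked messages a day at under 2 KB apiece, which is exactly the profile of bulk spam.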

Again, even though Jeff’s organization isn’t small by any means, the stuff he’s complaining about here is really the low-hanging fruit.  It doesn’t make a dent in the targeted, malicious and financially-impacting security threats that really demand a level of service no service provider will be able to deliver without a huge cost premium.

I won’t bore you with the details, but the level of high-availability, resilience, performance, manageability, and provisioning required to deliver even this sort of service is enormous.  Most vendors simply can’t do it and most service providers are slow to invest in proprietary solutions that won’t scale economically with the operational models in place.

Interestingly, as recently as 2005 vendors such as McAfee announced with much fanfare that they were going to deliver technology, services and a united consortium of participating service providers with the following lofty clean-pipes goals (besides selling more product, that is):

The initiative is one part of a major product and services push from McAfee, which is developing its next generation of carrier-grade security appliances and ramping up its enterprise security offerings with NAC and secure content management product releases planned for the first half of next year, said Vatsal Sonecha, vice president of market development and strategic alliances at McAfee, in Santa Clara, Calif.

Clean Pipes will be a major expansion of McAfee’s managed services offerings. The company will sell managed intrusion prevention; secure content management; vulnerability management; malware protection, including anti-virus, anti-spam and anti-spyware services; and mobile device security, Sonecha said.

McAfee is working with Cable and Wireless PLC, British Telecommunications PLC (British Telecom), Telefónica SA and China Network Communications (China Netcom) to tailor its offerings through an invitation-only group it calls the Clean Pipes Consortium.

http://www.eweek.com/article2/0,1895,1855188,00.asp

Look at all those services!  What have they delivered as a service in the cloud or clean pipes?  Nada. 

The chassis-based products which were to deliver these services never materialized and neither did the services.  Why?  Because it’s really damned hard to do correctly.  Just ask Inkra, Nexsi, CoSine, etc.  Or you can ask me.  The difference is, we’re still in business and they’re not.  It’s interesting to note that every one of those "consortium members," with the exception of Cable and Wireless, is a Crossbeam customer.  Go figure.

EFFICACY

Once the provider starts filtering at the ingress/egress, one must trust that the filtering won’t have an impact on performance — or on confidentiality, integrity and availability.  Truth be told, as simple as it seems, it’s not just about raw bandwidth.  Service levels must be maintained, and the moment something that is expected doesn’t make its way down the pipe, someone will be screaming bloody murder about "slightly clean" pipes.

Ask me how I know.  I’ve lived through inconsistent application of policies, non-logged protocol filtering, dropped traffic and asymmetric issues introduced by on-prem and in-the-cloud MSSP offerings.  Once the filtering moves past your prem. as a customer, your visibility does too.  Those fancy dashboards don’t do a damned bit of good, either.  Ever consider the forensic impact?

Today, if you asked a service provider what constitutes their approach to clean pipes, most will refer you back to the same list I referenced above:

  • DoS/DDoS
  • Anti-Virus
  • Anti-Spam
  • URL Filtering/Parental Controls
  • Managed Firewall/IDS/IPS

The problem is that most of these solutions are disparate point products run by different business units at different parts of the network.  Most are still aimed at the perimeter service — it’s just that the perimeter has moved outward a notch in the belt.

Look, for the SME/SMB (or mobile user), "good enough" is, for the most part, good enough.  Having an upstream provider filter out a bunch of spam and viruses is a good thing, and most firewall rules in place in the SME/SMB block everything but a few inbound ports to DMZ hosts (if there are any) and allow everything from the inside to go out.  Not very complicated, and it doesn’t take a rocket scientist to see, from the perspective of what is at risk, that this service doesn’t pay off handsomely.
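
That default posture is simple enough to capture in a few lines.  A toy sketch of the policy just described (the DMZ hosts and ports are hypothetical, and a real edge device obviously tracks state, NAT and logging too):

```python
# Toy model of the typical SME/SMB edge policy described above: deny all
# inbound except a few DMZ pinholes, permit everything outbound.
# Hosts (RFC 5737 documentation addresses) and ports are hypothetical.
DMZ_ALLOWED = {
    ("203.0.113.10", 25),    # mail relay
    ("203.0.113.20", 80),    # web server
    ("203.0.113.20", 443),
}

def permit(direction: str, dst_host: str, dst_port: int) -> bool:
    if direction == "outbound":
        return True                              # everything inside-out flows freely
    return (dst_host, dst_port) in DMZ_ALLOWED   # inbound: DMZ pinholes only

assert permit("outbound", "198.51.100.7", 6667)   # a bot phoning home sails right out
assert not permit("inbound", "203.0.113.10", 22)  # but no inbound SSH to the DMZ
```

Note the asymmetry: the permissive outbound rule is precisely what lets a co-opted client join a botnet unnoticed, which is why upstream filtering alone doesn’t fix the end-point problem.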

For the large enterprise, I’d say that if you are going to expect that operational service levels will be met, think again.  What happens when you introduce web services, SOA and heavy XML onto externally-exposed network stubs?  What happens when Web 2/3/4.x technologies demand more and more security layers deployed alongside the mechanics and messaging of the service?

You can expect issues, and the lack of transparency will be a problem in all but the simplest of cases.

Think your third party due diligence requirements are heady now?  Wait until this little transference of risk gets analyzed when something bad happens — and it will.  Oh how quickly the pendulum will swing back to managing this stuff in-house again.

This model doesn’t scale and it doesn’t address the underlying deficiencies in the most critical elements of the chain: applications, databases and end-point threats such as co-opted clients as unwilling botnet participants.

But to Jeff’s point, if he didn’t have to spend money on the small stuff above, he could probably spend it elsewhere where he needs it most.

I think services in the cloud/clean pipes make a lot of sense.  I’d sure as hell like to invest less in commoditized functions at the perimeter and on my desktop.  I’m just not sure we’re going to get there anytime soon.

/Hoff

*Image Credit: CleanPipes


Unified Risk Management (URM) and the Secure Architecture Blueprint

May 6th, 2007 5 comments

Gunnar once again hits home with an excellent post defining what he calls the Security Architecture Blueprint (SAB):

The purpose of the security architecture blueprint is to bring focus to the key areas of concern for the enterprise, highlighting decision criteria and context for each domain. Since security is a system property it can be difficult for Enterprise Security groups to separate the disparate concerns that exist at different system layers and to understand their role in the system as a whole. This blueprint provides a framework for understanding disparate design and process considerations; to organize architecture and actions toward improving enterprise security.

(Figure: Security Architecture Roadmap)

I appreciated the graphical representation of the security architecture blueprint, as it provides some striking parallels to the diagram that I created about a year ago to demonstrate a similar concept that I call the Unified Risk Management (URM) framework.

(Ed.: URM focuses on business-driven information survivability architectures that describe as much risk tolerance as they do risk management.)

Here are both the textual and graphical representations of URM: 

Managing risk is fast becoming a lost art.  As the pace of technology’s evolution and adoption overtakes our ability to assess and manage its impact on the business, the overrun has created massive governance and operational gaps resulting in exposure and misalignment.  This has caused organizations to lose focus on the things that matter most: the survivability and ultimate growth of the business.

Overwhelmed with the escalation of increasingly complex threats, the alarming ubiquity of vulnerable systems and the constant onslaught of rapidly evolving exploits, security practitioners are ultimately forced to choose between the unending grind of tactical practices focused on deploying and managing security infrastructure versus the strategic art of managing and institutionalizing risk-driven architecture as a business process.

URM illustrates the gap between pure technology-focused information security infrastructure and business-driven, risk-focused information survivability architectures, and shows how this gap is bridged using sound risk management practices in conjunction with best-of-breed consolidated Unified Threat Management (UTM) solutions as the technology anchor tenant in a consolidated risk management model.

URM demonstrates how governance organizations, business stakeholders, network and security teams can harmonize their efforts to produce a true business protection and enablement strategy utilizing best of breed consolidated UTM solutions as a core component to effectively arrive at managing risk and delivering security as an on-demand service layer at the speed of business.  This is a process we call Unified Risk Management or URM.

(Figure: Unified Risk Management (URM) model)

(Updated on 5/8/07 with revisions to the URM model)

The point of URM is to provide a holistic framework against which one may measure and effectively manage risk.  Each one of the blocks above has a set of sub-components that breaks out the specifics of each section.  Further, my thinking on URM became the foundation of my exploration of the Security Services Oriented Architecture (SSOA) model. 
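
To make "measure and effectively manage risk" a bit more concrete: underneath any such framework sits some form of risk-register arithmetic.  Here is a generic likelihood-times-impact sketch (classic ALE-style scoring with made-up entries and numbers; this is not the URM model itself):

```python
# Generic risk-register arithmetic: classic annualized loss expectancy
# (ALE = annual rate of occurrence x single loss expectancy). The entries
# and numbers are invented; this is not the URM model itself.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    annual_rate: float     # expected occurrences per year
    loss_per_event: float  # expected loss per occurrence, in dollars

    @property
    def ale(self) -> float:
        return self.annual_rate * self.loss_per_event

register = [
    Risk("DMZ web app compromise",     annual_rate=0.30, loss_per_event=250_000),
    Risk("Lost laptop with PII",       annual_rate=0.10, loss_per_event=400_000),
    Risk("Commodity malware outbreak", annual_rate=2.00, loss_per_event=15_000),
]

# Rank by annualized loss to decide where the security budget goes first.
for r in sorted(register, key=lambda r: r.ale, reverse=True):
    print(f"{r.name}: ${r.ale:,.0f}/yr")
```

The value of a framework like URM lies in everything this sketch leaves out: who owns each number, how risk tolerance is set, and how the answers drive architecture rather than a spreadsheet.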

You might also want to check out Skybox Security’s Security Risk Management (SRM) Blueprint.

Thanks again to Gunnar as I see some gaps that I have to think about based upon what I read in his SAB document.

/Hoff

The Operational Impact of Virtualizing Security…

May 6th, 2007 No comments

A benefit of a show such as Infosec UK is that one is given the opportunity to organize customer meetings and unique roundtables, because everyone clusters around the show.

Last year we organized a really interesting roundtable discussion with 13 compelling members of the UK’s financial services and telco/service provider industries.  This year we did another similar event with equal representation from industry.

The agenda of these meetings revolves around a central topic: the group members first introduce themselves and then add color and experiential commentary regarding the issue at hand.  The interesting thing is that by the time the "introductions" are complete, we’ve all engaged in fantastic discussion, with most people sharing key experiential data and debate that has stretched the time allotment of the event.

This year’s topic was "The Operational Impact of Virtualizing Security."  It was a fascinating topic for me since I was quite interested in seeing how virtualized security was taking hold in these organizations and how operationalizing security was impacted by virtualizing it.

Virtualization (in the classic data center consolidation via virtual machine implementations) is ultimately fueled by two things: the reclamation and reduction of (a) time and (b) money spent.  In the large enterprise it’s about fewer boxes and services on demand to serve the business.  With telcos/mobile operators/service providers it’s about increasing the average revenue per subscriber/customer and leveraging common infrastructure to deliver security as a service (for fun and profit).

The single largest differentiator between the two (or so) markets really boils down to scale: how many things are you trying to protect, and at what cost?  Novel idea, eh?

It was evident that those considering virtualizing their security were motivated primarily by the same criteria, but in many cases politics, religion, regulatory requirements, imprecise use cases, bad (or non-existent) metrics, failure to align security to the business goals, fear, and some very real concerns from the security or network "purists" dramatically impacted people’s opinions regarding whether or not to virtualize their security architecture.

In most cases, it became evident that the most critical issues related to separation of duties, single points of failure, transparency (or lack thereof), fault-isolation domains, silos of administration, and the fact that many of the largest networks on the planet are largely still "flat," which makes virtualization hard.  There were some hefty visualization and management concerns, but almost none of the issues were really technical.

I related a story wherein I had to spend an hour on the phone trying to convince some senior security folks at a very large company that VLANs, while they could be misconfigured and misused like any other technology, were not inherently evil.  Imagine the fun involved when I recounted the virtualization of transport, policy and security applications across a cluster of load-balanced application processing modules in a completely virtualized overlaid security services layer!

So, what the discussion boiled down to was that the operational impact of virtualizing security is compelling on many fronts, especially when discussing the economics of time and money.  When it came to downsides, most were the same old song: with companies the size of the Fortune 2000, where budgets are certainly larger than anywhere else, it’s still "easier" to just deploy single-function boxes because one doesn’t need to think, organize differently, re-architect or buck the status quo.

It takes more than a simple firewall refresh to start thinking differently about how, why and where we deploy security.   Sometimes one has to think outside the box, and other times it just takes redefining what the box looks like in the first place.

/Hoff

Categories: Virtualization Tags: