
Archive for the ‘Clean Pipes’ Category

On-Demand SaaS Vendors Able to Secure Assets Better than Customers?

August 16th, 2007 4 comments

I’m a big advocate of software as a service (SaaS) — have been for years. This evangelism started for me almost 5 years ago when I became a Qualys MSSP customer, listening to Philippe Courtot espouse the benefits of SaaS for vulnerability management. It was an opportunity to manage my VA problem more efficiently, effectively and cheaply. They demonstrated how they were good custodians of the data (my data) that they housed and how I could expect they would protect it.

I did not, however, feel *more* secure because they housed my VA data. I felt secure enough in how they housed it that my data was unlikely to fall into the wrong hands. That’s called an assessment of risk and exposure. I performed it and was satisfied the result matched my company’s appetite and business requirements.

Not one to appear unclear on where I stand, I maintain that SaaS can bring utility, efficiency, cost effectiveness, enhanced capabilities and improved service levels to a corporation depending upon who, what, why, how, where and when the service is deployed. Sometimes it can bring a higher level of security to an organization, but so can a squadron of pissed-off, armed Oompa Loompas — it’s all a matter of perspective.

I suggest that attempting to qualify the benefits of SaaS by generalizing in any sense is, well, generally a risky thing to do.  It often turns what could be a valid point of interest into a point of contention.

Such is the case with a story I read in a UK edition of IT Week by Phil Muncaster titled "On Demand Security Issues Raised." In it, the author describes the methods by which the security posture of SaaS vendors may be measured, comparing the value, capabilities and capacity of the various options and venues for evaluating a SaaS MSSP: hire an external contractor, or rely on the MSSP to furnish you the results of an internally generated assessment.

I think this is actually a very useful and valid discussion to have — whom to trust and why?  In many cases, these vendors house sensitive and sometimes confidential data regarding an enterprise, so security is paramount.  One would suggest that anyone looking to engage an MSSP of any sort, especially one offering a critical SaaS, would perform due diligence in one form or another before signing on the dotted line.

That’s not really what I wanted to discuss, however.

What I *did* want to address was the comment in the article coming from Andy Kellett, an analyst for Burton, that read thusly:

"Security is probably less a problem than in the end-user organisations
because [on-demand app providers] are measured by the service they provide,"
Kellett argued.

I *think* I probably understand what he’s saying here…that security is "less of a problem" for an MSSP because the implied penalties associated with violating an SLA are so motivating to get security "right" that they can do it far more effectively and efficiently than a customer could.

This is a selling point, I suppose? Do you, dear reader, agree? Does outsourcing security to a SaaS MSSP actually mean that you "feel" — or can prove — that you’re more secure or better secured than you could be on your own?

"I don’t agree the end-user organisation’s pen tester of choice
should be doing the testing. The service provider should do it and make that
information available."

Um, why?  I can understand not wanting hundreds of scans against my service in an unscheduled way, but what do you have to hide?  You want me to *trust* you that you’re more secure or holding up your end of the bargain?  Um, no thanks.  It’s clear that this person has never seen the results of an internally generated PenTest and how real threats can be rationalized away into nothingness…

Clarence So of Salesforce.com agreed, adding that most chief information officers today understand that software-as-a-service (SaaS) vendors are able to secure data more effectively than they can themselves.

Really!? It’s not just that they gave in to budget pressures, agreed to transfer the risk and reduce OpEx and CapEx? Care to generalize more thoroughly, Clarence? Can you reference proof points for me here? My last company used Salesforce.com, but as the person who inherited the relationship, I can tell you that I didn’t feel at all more "secure" because SF was hosting my data. In fact, I felt more exposed.

"I’m sure training companies have their own motives for advocating the need
for in-house skills such as penetration testing," he argued. "But any
suggestions the SaaS model is less secure than client-server software are well
wide of the mark."

…and any suggestion that they are *more* secure is pure horsecock marketing at its finest.  Prove it.  And please don’t send me your SAS-70 report as your example of security fu.

So just to be clear, I believe in SaaS. I encourage its use if it makes good business sense. I don’t, however, agree that you will automagically be *more* secure. You may be just *as* secure, but it should be more cost-effective to deploy and manage. There may very well be cases (I can even think of some) where one could be more or less secure, but I’m not into generalizations.

Whaddya think?

/Hoff

Tell Me Again How Google Isn’t Entering the Security Market? GooglePOPs will Bring Clean Pipes…

July 9th, 2007 2 comments

Not to single out Jeremiah, but in my Take5 interview with him, I asked him the following:

3) What do you make of Google’s foray into security?  We’ve seen them crawl sites and index malware.  They’ve launched a security  blog.  They acquired GreenBorder.  Do you see them as an emerging force to be reckoned with in the security space?

…to which he responded:

I doubt Google has plans to make this a direct revenue generating  exercise. They are a platform for advertising, not a security company. The plan is probably to use the malware/solution research  for building in better security in Google Toolbar for their users.  That would seem to make the most sense. Google could monitor a user’s  surfing habits and protect them from their search results at the same time.

To be fair, this was a loaded question because my opinion is diametrically opposed to his. I believe Google *is* entering the security space, will do so along many vectors, and it *will* be revenue generating.

This morning’s news that Google is acquiring Postini for $625 million doesn’t surprise me at all and I believe it proves the point.

In fact, I reckon that in the long term we’ll see the Google Toolbar morph into a much more intelligent and rich client-side security application proxy service, whereby Google pairs the client-side security of the Toolbar with the GreenBorder browsing environment and tunnels/proxies all outgoing requests to GooglePOPs.

What’s a GooglePOP?

These GooglePOPs (Google Points of Presence) will house large search and caching repositories that will — in conjunction with services such as those from Postini — provide a "clean pipes" service to the consumer. Don’t forget the utility services that recent acquisitions such as GrandCentral and FeedBurner provide…it’s too bad that eBay snatched up Skype…

Google will, in fact, become a monster ASP.  Note that I said ASP and not ISP.  ISP is a commoditized function.  Serving applications and content as close to the user as possible is fantastic.  So pair all the client side goodness with security functions AND add GoogleApps and you’ve got what amounts to a thin client version of the Internet.
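
Purely speculative on my part, but the client-side flow might look something like this minimal sketch, in which every name (the POP directory, the hostnames, the port) is invented for illustration:

    # Speculative sketch only: a Toolbar-like local agent tunnels every
    # outbound HTTP request through the nearest "GooglePOP", which would
    # scan and cache the content before handing it back. The POP
    # hostnames and port below are made up; no such service exists.
    import urllib.request

    POP_DIRECTORY = ["gpop-us-east.example.net", "gpop-eu-west.example.net"]

    def nearest_pop() -> str:
        # A real client would probe latency (or ride anycast); we just
        # pretend the first entry is closest.
        return POP_DIRECTORY[0]

    def fetch_via_pop(url: str) -> bytes:
        # Point a standard proxy handler at the chosen POP so the
        # request rides the "clean pipe" instead of going direct.
        handler = urllib.request.ProxyHandler(
            {"http": f"http://{nearest_pop()}:3128"})
        opener = urllib.request.build_opener(handler)
        with opener.open(url) as resp:
            return resp.read()

The plumbing is trivial; the interesting part is who sits at the other end of it.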

Remember all those large sealed shipping containers (not unlike Sun’s Project Blackbox) that Google is rumored to place strategically around the world — in conjunction with their mega datacenters? I think it was Cringely who talked about this back in 2005:

In one of Google’s underground parking garages in Mountain View … in a secret area off-limits even to regular GoogleFolk, is a shipping container. But it isn’t just any shipping container. This shipping container is a prototype data center.

Google hired a pair of very bright industrial designers to figure out how to cram the greatest number of CPUs, the most storage, memory and power support into a 20- or 40-foot box. We’re talking about 5000 Opteron processors and 3.5 petabytes of disk storage that can be dropped-off overnight by a tractor-trailer rig.

The idea is to plant one of these puppies anywhere Google owns access to fiber, basically turning the entire Internet into a giant processing and storage grid.

Imagine that. Buy a ton of dark fiber, sprout hundreds of these PortaPOPs/GooglePOPs, and you’ve got the Internet v3.0.

Existing transit folks that aren’t Yahoo/MSN will ultimately yield to the model because it will reduce their costs for service and they will basically pay Google to lease these services for resale back to their customers (with re-branding?) without the need to pay for all the expensive backhaul.

Your Internet will be served out of cache…"securely."  So now instead of just harvesting your search queries, Google will have intimate knowledge of ALL of your browsing — scratch that — all of your network-based activity.   This will provide for not only much more targeted ads, but also the potential for ad insertion, traffic prioritization to preferred Google advertisers all the while offering "protection" to the consumer.

SMBs and average Joe consumers will be the first to embrace this as cost-based S^2aaS (Secure Software as a Service) becomes mainstream, and this will then yield a trickle-up to the enterprise and service providers as demand pressures them into providing like levels of service…for free.

It’s not all scary, but think about it…

Akamai ought to be worried.  Yahoo and MSN should be worried.  The ISP’s of the world investing in clean pipes technologies ought to be worried (I’ve blogged about Clean Pipes here.)

Should you be worried?  Methinks the privacy elements of all this will spur some very interesting discussions.

Talk amongst yourselves.

/Hoff

(Didn’t see Newby’s post here prior to writing this…good on-topic commentary.  Dennis Fisher over at the SearchSecurity Blog has an interesting Microsoft == Google perspective.)

My IPS (and FW, WAF, XML, DBF, URL, AV, AS) *IS* Bigger Than Yours Is…

May 23rd, 2007 No comments

Interop has been great thus far. One of the most visible themes of this year’s show is (not surprisingly) the hyped emergence of 10Gb/s Ethernet. 10G isn’t new, but the market is now ripe with products supporting it: routers, switches, servers and, of course, security kit.

With this uptick in connectivity, the corresponding float in compute power thanks to Mr. Moore, AND some nifty evolution of very fast, low latency, reasonably accurate deep packet inspection (including behavioral technology), the marketing wars have begun over who has the biggest, baddest toys on the block.

Whenever this discussion arises, without question the notion of "carrier class" gets bandied about in order to essentially qualify a product as being able to withstand enormous amounts of traffic load without imposing latency. 

One of the most compelling reasons for these big pieces of iron (which are ultimately a means to an end to run software, after all) is the service provider/carrier/mobile operator market, which certainly has its fair share of challenges in terms of not only scale and performance but also security.

I blogged a couple of weeks ago regarding the resurgence of what can be described as "clean pipes" wherein a service provider applies some technology that gets rid of the big lumps upstream of the customer premises in order to deliver more sanitary network transport.

What’s interesting about clean pipes is that much of what security providers talk about today covers only a small portion of what is actually needed. Security providers, most notably IPS vendors, anchor their entire clean pipes strategy around "threat protection" that appears somewhat one dimensional.

This normally means getting rid of what is generically referred to today as "malware," arresting worm propagation and quashing DoS/DDoS attacks. It doesn’t speak at all to the need for things that aren’t purely "security" in nature such as parental controls (URL filtering), anti-spam, P2P, etc. It appears that in the strictest definition, these aren’t threats?

So, this week we’ve seen the following announcements:

  • ISS announces their new appliance that offers 6Gb/s of IPS
  • McAfee announces their new appliance that offers 10Gb/s of IPS

The trumpets sounded and the heavens parted as these products were announced touting threat protection via IPS at levels supposedly never approached before. More appliances. Lots of interfaces. Big numbers. Yet to be seen in action. Also, to be clear, a 2U rackmount appliance that is not DC powered and not NEBS-certified isn’t normally called "Carrier-Class."

I find these announcements interesting because even with our existing products (which run ISS and Sourcefire’s IDS/IPS software, by the way) we can deliver 8Gb/s of firewall and IPS today and have been able to for some time.

Lisa Vaas over @ eWeek just covered the ISS and McAfee announcements, and she was nice enough to talk about our products and positioning. One super-critical difference is that along with high throughput and low latency you get to actually CHOOSE which IPS you want to run — ISS, Sourcefire and shortly Check Point’s IPS-1.

You can then combine that with firewall, AV, AS, URL filtering, web app. and database firewalls and XML security gateways in the same chassis to name a few other functions — all best of breed from top-tier players — and this is what we call Enterprise and Provider-Class UTM folks.

Holistically approaching threat management across the entire spectrum is really important, along with the speeds and feeds, and we’ve all seen what happens when more and more functionality is added to the feature stack — you turn a feature on and you pay for it performance-wise somewhere else. It’s robbing Peter to pay Paul. The processing required to do IPS at 10G line rates is a different animal once you add AV to the mix.
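
To put rough numbers behind that, here’s a quick back-of-the-envelope sketch of the per-packet time budget at 10G line rate (standard Ethernet framing assumptions, nothing vendor-specific):

    # Back-of-the-envelope: per-packet time budget at 10Gb/s line rate.
    # Frame sizes include the 4-byte FCS; add 20 bytes per packet for
    # preamble, SFD and inter-frame gap, per standard Ethernet.

    LINE_RATE_BPS = 10e9

    def budget(frame_bytes: int):
        wire_bits = (frame_bytes + 20) * 8
        pps = LINE_RATE_BPS / wire_bits
        return pps, 1e9 / pps  # packets/sec, nanoseconds per packet

    for size in (64, 512, 1518):
        pps, ns = budget(size)
        print(f"{size:>5}B frames: {pps / 1e6:5.2f} Mpps -> {ns:7.1f} ns/packet")

At 64-byte frames that’s roughly 14.88 Mpps and about 67 nanoseconds per packet for firewall *and* IPS combined; bolting AV onto the stack doesn’t buy any of that budget back.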

The next steps will be interesting and we’ll have to see how the switch and overlay vendors rev up to make their move to have the biggest on the block.  Hey, what ever did happen to that 3Com M160?

Then there’s that little company called Cisco…

{Ed: Oops.  I made a boo-boo and talked about some stuff I shouldn’t have.  You didn’t notice, did you?  Ah, the perils of the intersection of Corporate Blvd. and Personal Way!  Lesson learned. 😉 }

 

Clean Pipes – Less Sewerage or More Potable Water?

May 6th, 2007 2 comments

Jeff Bardin over on the CSO blog pitched an interesting stake in the ground when he posited "Connectivity As A Utility: Where are My Clean Pipes?"

Specifically, Jeff expects that his (corporate?) Internet service functions in the same manner as his telephone service via something similar to a "do not call list."  Basically, he opts out by placing himself on the no-call list and telemarketers cease to call. Others might liken it to turning on a tap and getting clean, potable water; you pay for a utility and expect it to be usable.  All of it.

Many telecommunications providers want to charge you for having clean pipes, deploying a suite of DDoS services that you have to buy to enhance your security posture. Protection of last mile bandwidth is very key to network availability as well as confidentiality and integrity. If I am subscribing for a full T1, shouldn’t I get the full T1 as part of the price and not just a segment of the T1? Why do I have to pay for the spam, probes, scans, and malicious activity that my telecommunications service provider should prevent at 3 miles out versus my having to subscribe to another service to attain clean pipes at my doorstep?

I think that most people would agree with the concept of clean pipes in principle.  I can’t think of any other utility where the service levels delivered are taken with such a lackadaisical best effort approach and where the consumer can almost always expect that some amount (if not the majority) of the utility is unusable. 

Over the last year, I’ve met with many of the largest ISP’s, MSSP’s, TelCo’s and Mobile Operators on the planet and all are in some phase of deploying some sort of clean pipes variant. Gartner even predicts that a large amount of security will move "into the cloud."

In terms of adoption, EMEA is leaps and bounds ahead of the US and APAC in these sorts of services and will continue to be. The relative oligopolies associated with smaller nation states allow for much more agile and flexible service definition and roll-outs — no less complex, mind you. It’s incredible to see just how wide the gap is between what consumers (SME/SMB/Mobile as well as large enterprise) are offered in EMEA as opposed to the good-ol’ U S of A.

However, the stark reality is that the implementation of clean pipes by your service provider(s) comes down to a balance of two issues: efficacy and economics, with each varying dramatically with the market being served; the large enterprise’s expectations and requirements look very, very different from the SME/SMB’s.

Let’s take a look at both of these elements.

ECONOMICS

If you had asked most service providers about so-called clean pipes up to a year ago, you could have expected an answer based upon a "selfish" initiative aimed at stopping wasteful bandwidth usage upstream in the service provider’s network, not really protecting the consumer.

The main focus here is really on DDoS and viri/worm propagation.  Today, the closest you’ll come to "clean pipes" is usually some combination of the following services deployed both (still) at the customer premises as well as somewhere upstream:

  • DoS/DDoS
  • Anti-Virus
  • Anti-Spam
  • URL Filtering/Parental Controls
  • Managed Firewall/IDS/IPS

What is interesting about these services is that they basically define the same functions you can now get in those small little UTM boxes that consolidate security functionality at the "perimeter." The capital cost of these devices and the operational levies associated with their upkeep are pretty close for the SME/SMB, and when you balance the "good enough" services this market needs against the overall availability of the "in the cloud" offerings, UTM makes more sense for many in the near term.

For the large enterprise, the story is different. Outsourcing some level of security to an MSSP (or perhaps even the entire operation) or moving some amount upstream is a matter of core competence: letting internal teams focus on the things that matter most while the low-hanging fruit is filtered out and monitored by someone else. I describe that as filtering out the lumps. Some enormous companies have outsourced not only their security functions but their entire IT operations and data center assets in this manner. It’s not pretty, but it works.

I’m not sure they are any more secure than they were before, however.  The risk simply was transferred whilst the tolerance/appetite for it didn’t change at all.  Puzzling.

Is it really wrong to think that companies (you’ll notice I said companies, not "people" in the general sense) should pay for clean pipes?  I don’t think it is.  The reality is that for non-commercial subscribers such as home users, broadband or mobile users, some amount of bandwidth hygiene should be free — the potable water approach.

I think, however, that should a company expect elevated service levels and commensurate guarantees of more secure connectivity, it can also expect to ante up. Why? Because the investment required to deliver this sort of service costs a LOT of money — both to spin up and to sustain over time. You’re going to have to pay for that somewhere.

I very much like Jeff’s statistics:

We stop on average for our organization nearly 600 million malicious emails per year at our doorstep, averaging 2.8 gigabytes of garbage per day. You add it up and we are looking at nearly a terabyte of malicious email we have to stop. Now add in probes and scans against HTTP and HTTPS sites and the number continues to skyrocket.
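
A quick back-of-the-napkin check shows those figures hang together, and hints at just how small the individual lumps are:

    # Sanity-check Jeff's numbers from the quote above.
    emails_per_year = 600e6      # ~600M malicious emails/year
    garbage_gb_per_day = 2.8     # ~2.8GB of garbage/day

    tb_per_year = garbage_gb_per_day * 365 / 1024
    emails_per_day = emails_per_year / 365
    kb_per_email = garbage_gb_per_day * 1024 ** 2 / emails_per_day

    print(f"~{tb_per_year:.2f} TB/year")               # ~1.00 TB: matches his claim
    print(f"~{emails_per_day / 1e6:.2f}M emails/day")  # ~1.64M
    print(f"~{kb_per_email:.1f} KB per message")       # ~1.8KB: spam-sized lumps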

Again, even though Jeff’s organization isn’t small by any means, the stuff he’s complaining about here is really the low-hanging fruit. It doesn’t make a dent in the targeted, malicious and financially-impacting security threats that really demand a level of service no service provider will be able to deliver without a huge cost premium.

I won’t bore you with the details, but the level of high-availability, resilience, performance, manageability, and provisioning required to deliver even this sort of service is enormous. Most vendors simply can’t do it and most service providers are slow to invest in proprietary solutions that won’t scale economically with the operational models in place.

Interestingly, vendors such as McAfee even as recently as 2005 announced with much fanfare that they were going to deliver technology, services and a united consortium of participating service providers with the following lofty clean pipe goals (besides selling more product, that is):

The initiative is one part of a major product and services push from McAfee, which is developing its next generation of carrier-grade security appliances and ramping up its enterprise security offerings with NAC and secure content management product releases planned for the first half of next year, said Vatsal Sonecha, vice president of market development and strategic alliances at McAfee, in Santa Clara, Calif.

Clean Pipes will be a major expansion of McAfee’s managed services offerings. The company will sell managed intrusion prevention; secure content management; vulnerability management; malware protection, including anti-virus, anti-spam and anti-spyware services; and mobile device security, Sonecha said.

McAfee is working with Cable and Wireless PLC, British Telecommunications PLC (British Telecom), Telefónica SA and China Network Communications (China Netcom) to tailor its offerings through an invitation-only group it calls the Clean Pipes Consortium.

http://www.eweek.com/article2/0,1895,1855188,00.asp

Look at all those services!  What have they delivered as a service in the cloud or clean pipes?  Nada. 

The chassis-based products which were to deliver these services never materialized, and neither did the services. Why? Because it’s really damned hard to do correctly. Just ask Inkra, Nexi, CoSine, etc. Or you can ask me. The difference is, we’re still in business and they’re not. It’s interesting to note that every one of those "consortium members," with the exception of Cable and Wireless, is a Crossbeam customer. Go figure.

EFFICACY

Once the provider starts filtering at the ingress/egress, one must trust that the things being filtered won’t have an impact on performance — or confidentiality, integrity and availability.  Truth be told, as simple as it seems, it’s not just about raw bandwidth.  Service levels must be maintained and the moment something that is expected doesn’t make its way down the pipe, someone will be screaming bloody murder for "slightly clean" pipes.

Ask me how I know.  I’ve lived through inconsistent application of policies, non-logged protocol filtering, dropped traffic and asymmetric issues introduced by on-prem and in-the-cloud MSSP offerings.  Once the filtering moves past your prem. as a customer, your visibility does too.  Those fancy dashboards don’t do a damned bit of good, either.  Ever consider the forensic impact?

Today, if you asked a service provider what constitutes their approach to clean pipes, most will refer you back to the same list I referenced above:

  • DoS/DDoS
  • Anti-Virus
  • Anti-Spam
  • URL Filtering/Parental Controls
  • Managed Firewall/IDS/IPS

The problem is that most of these solutions are disparate point products run by different business units at different parts of the network.  Most are still aimed at the perimeter service — it’s just that the perimeter has moved outward a notch in the belt.

Look, for the SME/SMB (or mobile user), "good enough" is, for the most part, good enough. Having an upstream provider filter out a bunch of spam and viri is a good thing, and most firewall rules in place in the SME/SMB block everything but a few inbound ports to DMZ hosts (if there are any) and allow everything from the inside to go out. Not very complicated, and it doesn’t take a rocket scientist to see how, from the perspective of what is at risk, this service doesn’t pay off handsomely.
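
That canonical ruleset is simple enough to sketch in a few lines (the addresses and ports are invented for illustration):

    # Minimal model of the typical SME/SMB ruleset described above:
    # default-deny inbound except a couple of DMZ pinholes, allow
    # everything outbound. IPs and ports are made up.

    DMZ_PINHOLES = {
        ("203.0.113.10", 80),   # web server
        ("203.0.113.10", 443),
        ("203.0.113.25", 25),   # mail relay
    }

    def permit(direction: str, dst_ip: str, dst_port: int) -> bool:
        if direction == "outbound":
            return True                            # inside-out: allow all
        return (dst_ip, dst_port) in DMZ_PINHOLES  # inbound: DMZ only

    assert permit("outbound", "198.51.100.7", 6667)   # even IRC sails out
    assert not permit("inbound", "203.0.113.25", 22)  # no SSH from outside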

For the large enterprise, I’d say that if you are going to expect operational service levels to be met, think again. What happens when you introduce web services, SOA and heavy XML onto externally-exposed network stubs? What happens when Web2/3/4.x technologies demand more and more security layers deployed alongside the mechanics and messaging of the service?

You can expect problems, and the lack of transparency will be an issue in all but the most simple of cases.

Think your third party due diligence requirements are heady now?  Wait until this little transference of risk gets analyzed when something bad happens — and it will.  Oh how quickly the pendulum will swing back to managing this stuff in-house again.

This model doesn’t scale and it doesn’t address the underlying deficiencies in the most critical elements of the chain: applications, databases and end-point threats such as co-opted clients as unwilling botnet participants.

But to Jeff’s point, if he didn’t have to spend money on the small stuff above, he could probably spend it elsewhere where he needs it most.

I think services in the cloud/clean pipes makes a lot of sense.  I’d sure as hell like to invest less in commoditizing functions at the perimeter and on my desktop.  I’m just not sure we’re going to get there anytime soon.

/Hoff

