
Archive for the ‘De-Perimeterization’ Category

Security: “Built-in, Overlay or Something More Radical?”

May 10th, 2007 No comments

I was reading Joseph Tardo’s (Nevis Networks) new Illuminations blog and found the topic of his latest post, "Built-in, Overlay or Something More Radical?", regarding the possible future of network security quite interesting.

Joseph (may I call you Joseph?) recaps the topic of a research draft from Stanford funded by the "Stanford Clean Slate Design for the Internet" project that discusses an approach to network security called SANE.   The notion of SANE (AKA Ethane) is a policy-driven security services layer that utilizes intelligent centrally-located services to replace many of the underlying functions provided by routers, switches and security products today:

Ethane is a new architecture for enterprise networks which provides a powerful yet simple management model and strong security guarantees.  Ethane allows network managers to define a single, network-wide, fine-grain policy, and then enforces it at every switch.  Ethane policy is defined over human-friendly names (such as "bob", "payroll-server", or "http-proxy") and dictates who can talk to who and in which manner.  For example, a policy rule may specify that all guest users who have not authenticated can only use HTTP and that all of their traffic must traverse a local web proxy.

Ethane has a number of salient properties difficult to achieve with network technologies today.  First, the global security policy is enforced at each switch in a manner that is resistant to spoofing.  Second, all packets on an Ethane network can be attributed back to the sending host and the physical location in which the packet entered the network.  In fact, packets collected in the past can also be attributed to the sending host at the time the packets were sent — a feature that can be used to aid in auditing and forensics.  Finally, all the functionality within Ethane is provided by very simple hardware switches.

The trick behind the Ethane design is that all complex functionality, including routing, naming, policy declaration and security checks, are performed by a central controller (rather than in the switches as is done today).  Each flow on the network must first get permission from the controller, which verifies that the communication is permissible by the network policy.  If the controller allows a flow, it computes a route for the flow to take, and adds an entry for that flow in each of the switches along the path.

With all complex function subsumed by the controller, switches in Ethane are reduced to managed flow tables whose entries can only be populated by the controller (which it does after each successful permission check).  This allows a very simple design for Ethane switches using only SRAM (no power-hungry TCAMs) and a little bit of logic.

I like many of the concepts here, but I’m really wrestling with the scaling concerns that arise when I forecast the potential bottleneck that the centralized admission/access control proposed therein could become.
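For the sake of illustration, here’s a toy sketch (mine, not code from the Stanford SANE/Ethane work) of the flow-setup model described above: a central controller checks each new flow against policy and, only if permitted, populates the flow tables of the switches along the path.

    # Toy illustration of Ethane-style centralized flow admission.
    # This is my own sketch, not code from the SANE/Ethane project.

    class Switch:
        def __init__(self, name):
            self.name = name
            self.flow_table = {}          # populated only by the controller

    class Controller:
        def __init__(self, policy, routes):
            self.policy = policy          # {("bob", "payroll-server"): "allow", ...}
            self.routes = routes          # {("bob", "payroll-server"): [switch, ...], ...}

        def request_flow(self, src, dst):
            # Every new flow asks the controller for permission first.
            if self.policy.get((src, dst)) != "allow":
                return False              # denied: no forwarding state installed anywhere
            for switch in self.routes.get((src, dst), []):
                # Switches are reduced to simple flow tables keyed on the approved flow.
                switch.flow_table[(src, dst)] = "forward"
            return True

    # Example: permit bob -> payroll-server through two switches, deny everything else.
    s1, s2 = Switch("edge-1"), Switch("core-1")
    ctl = Controller(policy={("bob", "payroll-server"): "allow"},
                     routes={("bob", "payroll-server"): [s1, s2]})
    assert ctl.request_flow("bob", "payroll-server") is True
    assert ctl.request_flow("guest", "payroll-server") is False

Note that every single new flow has to transit request_flow() before a packet is forwarded, and that is exactly the choke point that worries me.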

Furthermore, and more importantly, while SANE speaks to being able to define who "Bob" is and what infrastructure makes up the "payroll server," this solution seems to provide no way of enforcing policy based on content in the context of the data flowing across it.  Integrating access control with the pseudonymity offered by folding identity management into policy enforcement is only half the battle.

The security solutions of the future must evolve to divine and control not only vectors of transport but also the content and relative access that the content itself defines dynamically.

I’m going to suggest that by bastardizing one of the Jericho Forum’s commandments for my own selfish use, the network/security layer of the future must ultimately respect and effect disposition of content based upon the following rule (independent of the network/host):

Access to data should be controlled by security attributes of the data itself.

  • Attributes can be held within the data (DRM/Metadata) or could be a separate system.
  • Access / security could be implemented by encryption.
  • Some data may have “public, non-confidential” attributes.
  • Access and access rights have a temporal component. 

 

Deviating somewhat from Jericho’s actual meaning, I am intimating that somehow, somewhere, data must be classified and self-describe the policies that govern how it is published and consumed and ultimately this security metadata can then be used by the central policy enforcement mechanisms to describe who is allowed to access the data, from where, and where it is allowed to go.
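To make that notion a bit more concrete, here’s a rough, hypothetical sketch of what self-describing data might look like to a central policy enforcement point; the field names and logic are mine, purely for illustration, and aren’t drawn from Jericho or any particular product.

    # Hypothetical sketch of data carrying its own security attributes (metadata),
    # which a central enforcement point consults before releasing the content.
    # Field names and logic are illustrative only.

    from datetime import datetime, timezone

    document = {
        "content": "<encrypted payload>",
        "security_attributes": {
            "classification": "confidential",          # could also be "public, non-confidential"
            "allowed_identities": ["bob", "payroll-admins"],
            "allowed_locations": ["corp-lan"],
            "expires": "2007-12-31T23:59:59+00:00",    # temporal component of access rights
        },
    }

    def may_access(doc, identity, location, now):
        attrs = doc["security_attributes"]
        if attrs["classification"] == "public, non-confidential":
            return True
        return (identity in attrs["allowed_identities"]
                and location in attrs["allowed_locations"]
                and now < datetime.fromisoformat(attrs["expires"]))

    when = datetime(2007, 5, 10, tzinfo=timezone.utc)
    print(may_access(document, "bob", "corp-lan", when))      # True: right identity, place and time
    print(may_access(document, "mallory", "hotspot", when))   # False: the policy travels with the data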

…Back to the topic at hand, SANE:

As Joseph alluded, SANE would require replacing (or not using much of the functionality of) currently-deployed routers, switches and security kit.  I’ll let your imagination address the obvious challenges with this design.

Without delving deeply, I’ll use Joseph’s categorization of "interesting-but-impractical."

/Hoff

Clean Pipes – Less Sewerage or More Potable Water?

May 6th, 2007 2 comments

Jeff Bardin over on the CSO blog pitched an interesting stake in the ground when he posited "Connectivity As A Utility: Where are My Clean Pipes?"

Specifically, Jeff expects that his (corporate?) Internet service functions in the same manner as his telephone service via something similar to a "do not call list."  Basically, he opts out by placing himself on the no-call list and telemarketers cease to call. Others might liken it to turning on a tap and getting clean, potable water; you pay for a utility and expect it to be usable.  All of it.

Many telecommunications providers want to charge you for having clean pipes, deploying a suite of DDoS services that you have to buy to enhance your security posture.  Protection of last mile bandwidth is very key to network availability as well as confidentiality and integrity. If I am subscribing for a full T1, shouldn’t I get the full T1 as part of the price and not just a segment of the T1? Why do I have to pay for the spam, probes, scans, and malicious activity that my telecommunications service provider should prevent at 3 miles out versus my having to subscribe to another service to attain clean pipes at my doorstep?

I think that most people would agree with the concept of clean pipes in principle.  I can’t think of any other utility where the service levels delivered are taken with such a lackadaisical best effort approach and where the consumer can almost always expect that some amount (if not the majority) of the utility is unusable. 

Over the last year, I’ve met with many of the largest ISP’s, MSSP’s, TelCo’s and Mobile Operators on the planet and all are in some phase of deploying some sort of clean pipes variant.  Gartner even predicts a large amount of security to move "into the cloud."

In terms of adoption, EMEA is leaps and bounds ahead of the US and APAC in these sorts of services and will continue to be.  The relative oligopolies associated with smaller nation states allow for much more agile and flexible service definition and roll-outs — no less complex, mind you.  It’s incredible to see just how disparate and divergent the gap is between what consumers (SME/SMB/Mobile as well as large enterprise) are offered in EMEA as opposed to the good-ol’ U S of A.

However, the stark reality is that the implementation of clean pipes by your service provider(s) comes down to a balance of two issues: efficacy and economics, with each varying dramatically with the market being served; the large enterprise’s expectations and requirements look very, very different from the SME/SMB.

Let’s take a look at both of these elements.

ECONOMICS

If you had asked most service providers about so-called clean pipes up to a year ago, you could expect to get an answer that was based upon a "selfish" initiative aimed at stopping wasteful bandwidth usage upstream in the service provider’s network, not really protecting the consumer.

The main focus here is really on DDoS and viri/worm propagation.  Today, the closest you’ll come to "clean pipes" is usually some combination of the following services deployed both (still) at the customer premises as well as somewhere upstream:

  • DoS/DDoS
  • Anti-Virus
  • Anti-Spam
  • URL Filtering/Parental Controls
  • Managed Firewall/IDS/IPS

What is interesting about these services is that they basically define the same functions you can now get in those small little UTM boxes that consolidate security functionality at the "perimeter."  The capital cost of these devices and the operational levies associated with their upkeep are pretty close in the SME/SMB and when you balance what you get in "good enough" services for this market as well as the overall availability of these "in the cloud" offerings, UTM makes more sense for many in the near term.

For the large enterprise, the story is different.  Outsourcing some level of security to an MSSP (or perhaps even the entire operation) or moving some amount upstream is a matter of core competence and of letting internal teams focus on the things that matter most while the low hanging fruit is filtered out and monitored by someone else.  I describe that as filtering out the lumps.  Some enormous companies have outsourced not only their security functions but their entire IT operations and data center assets in this manner.  It’s not pretty, but it works.

I’m not sure they are any more secure than they were before, however.  The risk simply was transferred whilst the tolerance/appetite for it didn’t change at all.  Puzzling.

Is it really wrong to think that companies (you’ll notice I said companies, not "people" in the general sense) should pay for clean pipes?  I don’t think it is.  The reality is that for non-commercial subscribers such as home users, broadband or mobile users, some amount of bandwidth hygiene should be free — the potable water approach.

I think, however, that should a company which expects elevated service levels and commensurate guarantees of such, want more secure connectivity, they can expect to ante up.  Why?  Because the investment required to deliver this sort of service costs a LOT of money — both to spin up and to instantiate over time.  You’re going to have to pay for that somewhere.

I very much like Jeff’s statistics:

We stop on average for our organization nearly 600 million malicious emails per year at our doorstep averaging 2.8 gigabytes of garbage per day. You add it up and we are looking at nearly a terabyte of malicious email we have to stop. Now add in probes and scans against HTTP and HTTPS sites and the number continues to skyrocket.
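A quick back-of-the-envelope check of those numbers (my arithmetic, assuming a 365-day year and Jeff’s 2.8 GB/day figure):

    # Rough sanity check of the quoted figures.
    gb_per_day = 2.8
    gb_per_year = gb_per_day * 365
    print(round(gb_per_year))   # ~1022 GB, i.e. right around the "nearly a terabyte" Jeff cites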

Again, even though Jeff’s organization isn’t small by any means, the stuff he’s complaining about here is really the low-hanging fruit.  It doesn’t make a dent against the targeted, malicious and financially-impacting security threats that really demand a level of service no service provider will be able to deliver without a huge cost premium.

I won’t bore you with the details, but the level of high-availability, resilience, performance, manageability, and provisioning required to deliver even this sort of service is enormous.  Most vendors simply can’t do it and most service providers are slow to invest in proprietary solutions that won’t scale economically with the operational models in place.

Interestingly, vendors such as McAfee even as recently as 2005 announced with much fanfare that they were going to deliver technology, services and a united consortium of participating service providers with the following lofty clean pipe goals (besides selling more product, that is):

The initiative is one part of a major product and services push from McAfee, which is developing its next generation of carrier-grade security appliances and ramping up its enterprise security offerings with NAC and secure content management product releases planned for the first half of next year, said Vatsal Sonecha, vice president of market development and strategic alliances at McAfee, in Santa Clara, Calif.

Clean Pipes will be a major expansion of McAfee’s managed services offerings. The company will sell managed intrusion prevention; secure content management; vulnerability management; malware protection, including anti-virus, anti-spam and anti-spyware services; and mobile device security, Sonecha said.

McAfee is working with Cable and Wireless PLC, British Telecommunications PLC (British Telecom), Telefónica SA and China Network Communications (China Netcom) to tailor its offerings through an invitation-only group it calls the Clean Pipes Consortium.

http://www.eweek.com/article2/0,1895,1855188,00.asp

Look at all those services!  What have they delivered as a service in the cloud or clean pipes?  Nada. 

The chassis-based products which were to deliver these services never materialized and neither did the services.  Why?  Because it’s really damned hard to do correctly.  Just ask Inkra, Nexi, CoSine, etc.  Or you can ask me.  The difference is, we’re still in business and they’re not.  It’s interesting to note that every one of those "consortium members" with the exception of Cable and Wireless are Crossbeam customers.  Go figure.

EFFICACY

Once the provider starts filtering at the ingress/egress, one must trust that the things being filtered won’t have an impact on performance — or confidentiality, integrity and availability.  Truth be told, as simple as it seems, it’s not just about raw bandwidth.  Service levels must be maintained and the moment something that is expected doesn’t make its way down the pipe, someone will be screaming bloody murder for "slightly clean" pipes.

Ask me how I know.  I’ve lived through inconsistent application of policies, non-logged protocol filtering, dropped traffic and asymmetric issues introduced by on-prem and in-the-cloud MSSP offerings.  Once the filtering moves past your prem. as a customer, your visibility does too.  Those fancy dashboards don’t do a damned bit of good, either.  Ever consider the forensic impact?

Today, if you asked a service provider what constitutes their approach to clean pipes, most will refer you back to the same list I referenced above:

  • DoS/DDoS
  • Anti-Virus
  • Anti-Spam
  • URL Filtering/Parental Controls
  • Managed Firewall/IDS/IPS

The problem is that most of these solutions are disparate point products run by different business units at different parts of the network.  Most are still aimed at the perimeter service — it’s just that the perimeter has moved outward a notch in the belt.

Look, for the SME/SMB (or mobile user), "good enough" is, for the most part, good enough.  Having an upstream provider filter out a bunch of spam and viri is a good thing and most firewall rules in place in the SME/SMB block everything but a few inbound ports to DMZ hosts (if there are any) and allow everything from the inside to go out.  Not very complicated and it doesn’t take a rocket scientist to see how, from the perspective of what is at risk, this service doesn’t pay off handsomely.

For the large enterprise, I’d say that if you are going to expect that operational service levels will be met, think again.  What happens when you introduce web services, SOA and heavy XML onto externally-exposed network stubs?  What happens when Web2/3/4.x technologies demand more and more security layers deployed alongside the mechanics and messaging of the service?

You can expect problems, and the lack of transparency will be an issue on all but the simplest of them.

Think your third party due diligence requirements are heady now?  Wait until this little transference of risk gets analyzed when something bad happens — and it will.  Oh how quickly the pendulum will swing back to managing this stuff in-house again.

This model doesn’t scale and it doesn’t address the underlying deficiencies in the most critical elements of the chain: applications, databases and end-point threats such as co-opted clients as unwilling botnet participants.

But to Jeff’s point, if he didn’t have to spend money on the small stuff above, he could probably spend it elsewhere where he needs it most.

I think services in the cloud/clean pipes make a lot of sense.  I’d sure as hell like to invest less in commoditizing functions at the perimeter and on my desktop.  I’m just not sure we’re going to get there anytime soon.

/Hoff



The Philosophy of Network Security Design

April 3rd, 2007 1 comment

Thomas and I were barking at each other regarding something last night and today he left a salient and thought-provoking comment that provided a very concise, pragmatic and objective summation of the embedded vs. overlay security quagmire:

     "I think the jury is still out on
how much security policy we   
     should be pushing to middleboxes, and how
smart those   
     middleboxes should be. What I know right now is we spend
     way, way too much time, effort, and money on 19" rack
     mountable chasses
that suck in packets and spit them back
     out again without providing any
measurable impact on the
     security of our networks.  Not a fan."

I couldn’t agree more.  Most of the security components today, including those that run in our little security ecosystem, really don’t intercommunicate.  There is no shared understanding of telemetry or instrumentation and there’s certainly little or no correlation of threats, vulnerabilities, risk or disposition.

The problem is bad inasmuch as even best-of-breed solutions usually require box sprawl and stacking and don’t necessarily provide for a more secure posture, especially within context of another of Thomas’ interesting posts on defense in depth/mesh…

That’s changing, however.  Our latest generation of NPMs (Network Processing Modules) allow discrete security ISV’s (which run on intelligently load-balanced Application Processor Modules — Intel blades in the same chassis) to interact with and control the network hardware through defined API’s — this provides the first step in that common telemetry such that while application A doesn’t need to know about the specifics of application B, they can functionally interact based upon the common output of disposition and/or classification of flows between them.

Later, they’ll be able to perhaps control each other through the same set of API’s.
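As a purely hypothetical sketch of what sharing "the common output of disposition and/or classification" could look like (this is not the actual NPM/APM API, just my illustration of the idea):

    # Hypothetical illustration of security applications sharing only a flow's
    # classification/disposition rather than each other's internals.
    # None of this reflects an actual vendor API.

    from enum import Enum

    class Disposition(Enum):
        ALLOW = "allow"
        BLOCK = "block"
        QUARANTINE = "quarantine"

    class FlowVerdict:
        def __init__(self, flow_id, classification, disposition):
            self.flow_id = flow_id
            self.classification = classification    # e.g., "http", "p2p", "malware-c2"
            self.disposition = disposition          # the only thing other apps need to see

    def firewall_module(flow_id):
        # Application A publishes a verdict without exposing its rule base.
        return FlowVerdict(flow_id, "http", Disposition.ALLOW)

    def ips_module(verdict):
        # Application B acts purely on the common output of A's classification.
        if verdict.classification == "malware-c2":
            return FlowVerdict(verdict.flow_id, verdict.classification, Disposition.BLOCK)
        return verdict

    final = ips_module(firewall_module(flow_id=42))
    print(final.disposition)    # Disposition.ALLOW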

So, I don’t think we’re going to solve the interoperability issue completely anytime soon (we won’t go from 0 to 100% overnight), but I think that the consolidation of these functions into smaller footprints that allow for intelligent traffic classification and disposition is a good first step.

I don’t expect Thomas to agree or even resonate with my statements below, but I found his explanation of the problem space to be dead on.  Here’s my explanation of an incremental step towards solving some of the bigger classes of problems in that space which I believe hinges on consolidation of security functionality first and foremost.

The three options for reducing this footprint are as follows:

  1. Proprietary Embedded security in routers/switches (Cisco, Juniper)

    Pros: Supposedly less boxes, better communication between components and good coverage given the fact that the security stuff is in the infrastructure.  One vendor from which you get your infrastructure and your protection.  Correlation across the network "fabric" will ultimately allow for near-time zoning and quarantine.  Single management pane across the Enterprise for availability and security.  Did I mention the platform is already there?

    Cons: You rely on a single vendor’s version of the truth and you get closer to a monoculture wherein the safeguards protecting the network put at risk the very assets they seek to protect because there is no separation of "church and state."  Also, the expertise and coverage as well as the agility for product development based upon evolving threats is hampered by the many moving parts in this machine.  Utility vs Security?  Utility wins.  Good enough vs. Best of breed?  Probably somewhere in between.

  2. Proprietary Overlay security in a Consolidated Platform (Fortinet 5000, Tipping Point, etc.)

    Pros:  Reduced footprint, consolidated functionality, single management pane across multiple security functions within the box.  Usually excels in one specific area like AV and can add "good enough" functionality as the needs arise.  Software moves up and down the scalability stack depending upon performance needed.

    Cons:  You again rely on a single vendor’s version of the truth.  These boxes tend to want to replace switching infrastructure.  Many of these platforms utilize ASICs to accelerate certain functions with the bulk of functionality residing in pure software with limited application or network-level intelligence.  You pay the price in terms of performance and scale given the architectures of these boxes which do not easily allow for the addition of new classes of solutions to thwart new threats.  Not really routers/switches.

  3. Open Overlay security in a Consolidated Platform (Crossbeam)

    Pros:  The customer defines best of breed and can rapidly add new security functionality at a speed that keeps pace with the threats the customer needs to mitigate.  Utilizing a scalable and high-performance switching architecture combined with all the benefits of an open blade-based security application/appliance delivery mechanism gives the best of all worlds: self-healing, highly resilient, high performance and highly-available while utilizing hardened Linux OS across load-balanced, virtualized security applications running on optimized hardware.

    Cons: Currently based upon proprietary (even though Intel reference design) hardware for the application processing while also utilizing proprietary networking switching fabric and load balancing.  Can only offer software as quickly as it can be adapted and tested on the platforms.  No ASICs means small packet performance @ 64byte zero loss isn’t as high as ASIC based packet-forwarding engines.  No single pane of management.

I think that option #3 is a damned good start towards solving the consolidation issues whilst balancing the need to overlay synergistically with the network infrastructure.  You’re not locked into a single vendor’s version of the truth and although the hardware may be "proprietary," the operating system and choice in software is not.  You can choose from COTS, Open Source or write your own, all in a scalable platform that is just as much a collapsed switching/routing platform as it is a consolidated blade server.

I think it has the best chance of evolving to solve more classes of problems than the other two at a rate and level of cost-effectiveness balanced with higher efficacy due to best of breed.

This, of course, depends upon how high the level of integration is between the apps — or at least their dispositions.  We’re working very, very hard on that.

At any rate, Thomas ended with:

"I am a believer in
freezing development of the core protocols and building new
functionality on top of them. I like NAT. I like Paul Francis. I think
the IETF has been hijacked by the leftovers from the OSI standards
committees. I don’t know what you call that philosophy, besides
"end2end originalist".

I like NAT.  I think this is Paul Francis.  The IETF has been hijacked by aliens, actually, and I’m getting a new tattoo:



If it walks like a duck, and quacks like duck, it must be…?

April 2nd, 2007 5 comments

Seriously, this really wasn’t a thread about NAC.  It’s a great soundbite to get people chatting (arguing) but there’s a bit more to it than that.  I didn’t really mean to offend those NAC-Addicts out there.

My last post was the exploration of security functions and their status (or even migration/transformation)  as either a market or feature included in a larger set of features.  Alan Shimel responded to my comments; specifically regarding my opinion that NAC is now rapidly becoming a feature and won’t be a competitive market for much longer. 

Always the quick wit, Alan suggested that UTM was a "technology" that is going to become a feature, much like my description of NAC’s fate.  Besides the fact that UTM isn’t a technology but rather a consolidation of lots of other technologies that won’t stand alone, a completely orthogonal statement that Alan made caused my head to spin as a security practitioner.

My reaction stems from the repeated belief that there should be separation of delivery between the network plumbing, the security service layers and ultimately the application(s) that run across them.  Note well that I’m not suggesting that common instrumentation, telemetry and disposition shouldn’t be collaboratively shared, but their delivery and execution ought to be discrete.  Best tool for the job.

Of course, this very contention is the source of much of the disagreement between me and many others who believe that security will just become absorbed into the "network."  It seems now that Alan is suggesting that the model of combining all three is going to be something in high demand (at least in the SME/SMB) — much in the same way Cisco does:

The day is rapidly coming when people will ask why would they buy a box that all it does is a bunch of security stuff.  If it is going to live on the network, why would the network stuff not be on there too or the security stuff on the network box.

Firstly, multi-function devices that blend security and other features on the "network" aren’t exactly new.

That’s what the Cisco ISR platform is becoming now what with the whole Branch Office battle waging, and back in ’99 (the first thing that pops into my mind) a bunch of my customers bought and deployed WhistleJet multi-function servers which had DHCP, print server, email server, web server, file server, and security functions such as a firewall/NAT baked in.

But that’s neither here nor there, because the thing I’m really, really interested in is Alan’s decidedly non-security-focused approach to prioritizing utility over security, given that he works for a security company, that is.

I’m all for bang for the buck, but I’m really surprised that he would make a statement like this within the context of a security discussion.

That is what Mitchell has been talking about in terms of what we are doing and we are going to go public Monday.  Check back then to see the first small step in the leap of UTM’s becoming a feature of Unified Network Platforms.

Virtualization is a wonderful thing.  It’s also got some major shortcomings.  Just because you *can* run everything under the sun on a platform doesn’t always mean that you *should*, and often it means you very much get what you pay for.  This is what I meant when I quoted Lee Iacocca when he said "People want economy and they will pay any price to get it."

How many times have you tried to consolidate all those multi-function devices (PDA, phone, portable media player, camera, etc.) down into one device?  Never works out, does it?  Ultimately you get fed up with inconsistent quality levels, and you buy the next megapixel camera that comes out with image stabilization.  Then you get the new video iPod, then…

Alan’s basically agreed with me on my original point discussing features vs. markets and the UTM vs. UNP thing is merely a handwaving marketing exercise.  Move on folks, nothing to see here.

’nuff said.

/Hoff

(Written sitting in front of my TV watching Bill Maher drinking a Latte)

NAC is a Feature not a Market…

March 30th, 2007 7 comments

I’m picking on NAC in the title of this entry because it will drive Alan Shimel ape-shit and NAC has become the most over-hyped hooplah next to Britney’s hair shaving/rehab incident…besides, the pundits come a-flockin’ when the NAC blood is in the water…

Speaking of chumming for big fish, love ’em or hate ’em, Gartner’s Hype Cycles do a good job of allowing one to visualize where and when a specific technology appears, lives and dies as a function of time, adoption rate and utility.

We’ve recently seen a lot of activity in the security space that I would personally describe as natural evolution along the continuum, but is often instead described by others as market "consolidation" due to saturation.

I’m not sure they are the same thing, but really, I don’t care to argue that point.  It’s boring.  I think that anyone arguing either side is probably right.  That means that Lindstrom would disagree with both.

What I do want to do is summarize a couple of points regarding some of this "evolution" because I use my blog as a virtual jot pad against which I can measure my own consistency of thought and opinion.  That and the chicks dig it.

Without my usual PhD Doctoral thesis brevity, here are just a few network security technologies I reckon are already doomed to succeed as features and not markets — those technologies that will, within the next 24 months, be absorbed into other delivery mechanisms that incorporate multiple technologies into a platform for virtualized security service layers:

  1. Network Admission Control
  2. Network Access Control
  3. XML Security Gateways
  4. Web Application Firewalls
  5. NBAD for the purpose of DoS/DDoS
  6. Content Security Accelerators
  7. Network-based Vulnerability Assessment Toolsets
  8. Database Security Gateways
  9. Patch Management (Virtual or otherwise)
  10. Hypervisor-based virtual NIDS/NIPS tools
  11. Single Sign-on
  12. Intellectual Property Leakage/Extrusion Prevention

…there are lots more.  Components like gateway AV, FW, VPN, SSL accelerators, IDS/IPS, etc. are already settling to the bottom of UTM suites as table stakes.  Many other functions are moving to SaaS models.  These are just the ones that occurred to me without much thought.

Now, I’m not suggesting that Uncle Art is right and there will be no stand-alone security vendors in three years, but I do think some of this stuff is being absorbed into the bedrock that will form the next 5 years of evolutionary activity.

Of course, some folks will argue that all of the above will just be absorbed into the "network" (which means routers and switches).  Switch or multi-function device…doesn’t matter.  The "smoosh" is what I’m after, not what color it is when it happens.

What’d I miss?

/Hoff

(Written from SFO Airport sitting @ Peet’s Coffee.  Drinking a two-shot extra large iced coffee)

Breaking News: SOA, Web services security hinge on XML gateways!

March 20th, 2007 No comments

Bloody Hell!

The article below is dated today, but perhaps this was just the TechTarget AutoBlogCronPoster gone awry from 2004? 

Besides the fact that this revelation garners another vote for the RationalSecurity "Captain Obvious" award, the simple fact that XML gateways are being highlighted here as a stand-alone market is laughable — especially since the article clearly shows that XML security gateways are being consolidated and bundled with application delivery controllers and WAF solutions by vendors such as IBM and Cisco.

XML is, and will be everywhere.  SOA/Web Services is only one element in a greater ecosystem impacted by XML.

Of course the functionality provided by XML security gateways are critical to the secure deployment of SOA environments; they should be considered table stakes, just like secure coding…but of course we know how consistently-applied compensating controls are painted onto network and application architectures. 

The dirty little secret is that while they are very useful and ultimately an excellent tool in the arsenal, these solutions are disruptive, difficult to configure and maintain, performance pigs and add complexity to an already complex model.  In many cases, asking a security team to manage this sort of problem introduces more operational risk than it mitigates. 

Can you imagine security, network and developers actually having to talk to one another?!  *gasp*

Here is the link to the entire story.  I’ve snipped pieces out for relevant mockery.

ORLANDO, Fla. — Enterprises are moving forward with service oriented architecture (SOA) projects to reduce complexity and increase flexibility between systems and applications, but some security pros fear they’re being left behind and must scramble to learn new ways to protect those systems from Web-based attacks.

<snip>

"Most network firewalls aren’t designed to handle the latest
Web services standards, resulting in new avenues of attack for digital
miscreants, said Tim Bond, a senior security engineer at webMethods
Inc. In his presentation at the Infosec World Conference and Expo, Bond
said a growing number of vendors are selling XML security gateways,
appliances that can be plugged into a network and act as an
intermediary, decrypting and encrypting Web services data to determine
the authenticity and lock out attackers.

"It’s not just passing a message through, it’s actually taking
action," Bond said. "It needs to be customized for each deployment, but
it can be very effective in protecting from many attacks."

Bond said that most SOA layouts further expose applications by placing them just behind an outer layer of defense, rather than placing them within the inner walls of a company’s security defenses along with other critical applications and systems. Those applications are vulnerable, because they’re being exposed to partners, customer relationship management and supply chain management systems. Attackers can scan Web services description language (WSDL) — the XML language used in Web service calls — to find out where vulnerabilities lie, Bond said.

<snip>

A whole market has grown around protecting WSDL, Bond said. Canada-based Layer 7 Technologies Inc. and UK-based Vordel are producing gateway appliances to protect XML and SOAP language in Web service calls. Reactivity, which was recently acquired by Cisco Systems Inc., and DataPower, now a division of IBM, also address Web services security.

Transaction values will be much higher and traditional SSL, security communications protocol for point-to-point communications, won’t be enough to protect transactions, Bond said.

<snip>

In addition to SQL-injection attacks, XML is potentially vulnerable to schema poisoning — a method of attack in which the XML schema can be manipulated to alter processing information. A sophisticated attacker can also conduct an XML routing detour, redirecting sensitive data within the XML path, Bond said.

Security becomes complicated with distributed systems in an SOA environment, said Dindo Roberts, an application security manager at New York City-based MetLife Inc. Web services with active interfaces allow the usage of applications that were previously restricted to using conventional custom authentication. Security pros need new methods, such as an XML security gateway to protect those applications, Roberts said.

<snip>
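To put a little meat on Bond’s schema-poisoning point: the most basic thing an XML security gateway does is validate every inbound message against a strict, trusted schema before it ever reaches the back-end service. Here’s a minimal, illustrative sketch using Python’s lxml; the schema and messages are invented for the example, and a real gateway obviously layers on signature checking, WS-Security handling, rate limiting and more.

    # Minimal illustration of strict schema validation, the most basic function an
    # XML security gateway performs before a message reaches the back-end service.
    # The schema and messages are made up for this example.

    from lxml import etree

    schema_doc = etree.XML(b"""
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
      <xs:element name="transfer">
        <xs:complexType>
          <xs:sequence>
            <xs:element name="account" type="xs:string"/>
            <xs:element name="amount" type="xs:decimal"/>
          </xs:sequence>
        </xs:complexType>
      </xs:element>
    </xs:schema>
    """)
    schema = etree.XMLSchema(schema_doc)

    def accept(message_bytes):
        # Reject anything that doesn't match the trusted schema exactly,
        # rather than trusting whatever schema reference the message carries.
        try:
            doc = etree.fromstring(message_bytes)
        except etree.XMLSyntaxError:
            return False
        return schema.validate(doc)

    print(accept(b"<transfer><account>12-34</account><amount>9.99</amount></transfer>"))    # True
    print(accept(b"<transfer><account>12-34</account><script>evil()</script></transfer>"))  # False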

Another Virtualized Solution for VM Security…

March 19th, 2007 10 comments

I got an email reminder from my buddy Grant Bourzikas today pointing me to another virtualized security solution for servers from Reflex Security called Reflex VSA.  VSA stands for Virtual Security Appliance and the premise appears to be that you deploy this software within each guest VM and it provides what looks a lot like host-based intrusion prevention functionality per VM.

The functionality is defined thusly:

Reflex VSA solves the problem that traditional network security such as IPS and firewall appliances currently can not solve: detecting and preventing attacks within a virtual server. Because Reflex VSA runs as virtualized application inside the virtualized environment, it can detect and mitigate threats between virtual hosts and networks.

Reflex VSA Features:
        • Access firewall for permission enforcement for intra-host and external network communication
        • Intrusion Prevention with inline blocking and filtering for virtualized networks
        • Anomaly, signature, and rate-based threat detection capability
        • Network Discovery to discover and map all virtual machines and applications
        • Reflex Command Center, providing a centralized configuration and management console, comprehensive reporting tools, and real-time event aggregation and correlation

It does not appear to wrap around or plug-in to the HyperVisor natively, so I’m a little confused as to the difference between deploying VSA and whatever HIPS/NIPS agent a customer might already have deployed on "physical" server instantiations.

Blue Lane’s product addresses this at the HyperVisor layer and it would be interesting to me to have the pundits/experts argue the pros/cons of each approach. {Ed. This is incorrect.  Blue Lane’s product runs as a VM/virtual appliance also.  With the exposure via API of the hypervisor/virtual switches, products like Blue Lane and Reflex would take advantage to be more flexible, effective and higher performing.}

I’m surprised most of the other "security configuration management" folks haven’t already re-branded their agents as being "Virtualization Compliant" to attack this nascent marketspace. < :rolleyes here: >

It’s good to see that folks are at least owning up to the fact that intra-VM communications via virtual switches are going to drive a spin on risk models, detection and mitigation tools and techniques.  This is what I was getting at in this blog entry here.

I would enjoy speaking to someone from Reflex to understand their positioning and differentiation better, but isn’t this just HIPS per VM?  How’s that different than firewall, AV, etc. per VM?

/Hoff

Good News! SOA Will Make Your Life Easier…and Easier to Secure!

February 28th, 2007 No comments

I read ZDNet’s coverage of the Wharton Technology Conference in Philadelphia by Larry Dignan and was astounded by what Larry reported was said in regards to comments made by TD Ameritrade’s Chief Security Officer, Bill Edwards.

I’m not trying to pick on Mr. Edwards as I have never met the man, but his comments regarding SOA left me disillusioned about how security and emerging technologies are approached in what continues to be a purely reactive, naive and disconnected manner.

Specifically, SOA is not exactly "new."  The evolution of technology, maturing of standards, proliferation of Web 2.0 and massive deployments of SOA’s in some of the world’s largest companies shouldn’t come as a surprise to anyone…even in the risk averse financial services sector.  That being said, SOA is disruptive and innovative and needs to be approached both strategically as well as tactically.

As a former CISO of a $25 Billion financial services firm, I was embroiled in our first SOA deployments 2.5 years ago.  It’s blood and guts.  It involves dealing with the business, business partners, IT and development staffs in ways you never have.  It takes communication, education, expertise and business acumen.  It’s not something you wait to be dragged into.

The notion that a security team would be "dragged" into SOA rather than embrace and approach it proactively and from the perspective of a thought leader and collaborative contributor astounds me.

That said, here’s what I had a problem with:

TD Ameritrade Chief Security Officer Bill Edwards figures that he’s going to be pulled onto the service oriented architecture (SOA) bandwagon soon. He might as well use it to enhance security.

"When the architects approached me about SOA my first reaction was ‘no you can’t do that,’" said Edwards, who spoke at a financial services online fraud panel at Wharton Technology Conference in Philadelphia on Friday. "But then I realized I’m going to be dragged along with SOA anyway so I should use it to rebuild security from the ground up. I know it’s coming so my team got friendly with the architecture group."

What disturbs me is that SOA represents potentially monumental impact to business, technology and security and instead of embracing (see below) this in a proactive manner, the ad hoc formation of a "strategic" response is "…if you can’t beat ’em, join ’em" and perhaps leverage this to fix problems that weren’t fixed prior.

Paying for sins of the past with currency of the future and confusion in the present isn’t exactly showing alignment to the business as an enabler.  But that’s just me.

It’s clear that the first reaction of saying "no, you can’t do that" is so incredibly typical and representative of the security industry in general; fear what you don’t understand and can it. I can’t imagine how making decisions on risk without an effective model is doing the business justice.

Realizing that this is a train on the tracks that can’t be ducked and that he’s going to be "dragged along with SOA" and that something must be done to head off disaster at the pass (or at least get more budget,) I’m having trouble reconciling this:

"SOA is going to be embraced by security. I don’t know if the industry
is ready for security on SOA, but I’m looking forward to it as it will
make my job easier," he said. "SOA allows you to get granular on
security and focus on specific modules."

I am really having trouble understanding whether this is a statement or a question, but I just cannot comprehend how much sense that last sentence fails to make. 

You’re not embracing SOA when you describe being "dragged into it" and your first reaction is "no." Further, if you’re deploying SOA and you’re not baking in security, you should be fired.

Secondly, explain to me how SOA is going to make security (his job) easier?  Because you can get "granular on security?"  Huh?  SOA is complex.  If you don’t have your "stuff" together in the first place, it’s only going to make your life more difficult.

I’m sorry for this reading like I’m a grumpy bastard (I am) and that I’m singling out Mr. Edwards (he chose to be on a panel) but this just doesn’t jibe.

My advice to Mr. Edwards and anyone else looking for the right approach to take with SOA and security is to read Gunnar Peterson’s blog or some more of his work.
 

/Hoff

Virtualization is Risky Business?

February 28th, 2007 6 comments

Over the last couple of months, the topic of virtualization and security (or lack thereof) continues to surface as one of the more intriguing topics of relevance in both the enterprise and service provider environments and those who cover them.  From bloggers to analysts to vendors, virtualization is a greenfield for security opportunity and a minefield for the risk models used to describe it.

There are many excellent arguments being discussed which highlight in an ad hoc manner the most serious risks posed by virtualization, and I find many of them accurate, compelling, frightening and relevant.  However, I find that overall, the risk model(s) we have for gauging in relative terms the impact that these new combinations of attack surfaces, vectors and actors pose are immature and incomplete.

Most of the arguments are currently based on hyperbole and anecdotal references to attacks that could happen.  It reminds me much of the ballyhooed security risks currently held up for scrutiny for mobile handsets.  We know bad things could happen, but for the most part, we’re not being proactive about solving some of the issues before they see the light of day.

The panel I was on at the RSA show highlighted this very problem.  We had folks from VMWare and RedHat in the audience who assured us that we were just being Chicken Littles and that the risk is both quantifiable and manageable today.  We also had other indications that customers felt that while the benefits of virtualization from a cost perspective were huge, the perceived downside from the unknown risks (mostly theoretical) was making them very uncomfortable.

Out of the 150+ folks in the room, approximately 20 had virtualized systems in production roles.  About 25% of them had collapsed multiple tiers of an n-tier application stack (including SOA environments) onto a single host VM.  NONE of them had yet had these systems audited by any third party or regulatory agency.

Rot Roh.

The interesting thing to me was the dichotomy regarding the top-down versus bottom-up approach to describing the problem.  There was lots of discussion regarding hypervisor (in)security and privilege escalation and the like, but I thought it interesting that most people were not thinking about the impact on the network and how security would have to change to accommodate it from a bottoms-up (infrastructure and architecture) approach.

The notions of guest VM hopping and malware detection in hypervisors/VM’s are reasonably well discussed (yet not resolved) so I thought I would approach it from the perspective of what role, if any, the traditional network infrastructure plays in this.

Thomas Ptacek was right when he said "…I also think modern enterprises are so far from having reasonable access control between the VLANs they already use without virtualization that it’s not a “next 18 month” priority to install them." And I agree with him there.  So, I posit that if one accepts this as true then what to do about the following:

Virtualization
If now we see the consolidation of multiple OS and applications on a single VM host in which the bulk of traffic and data interchange is between the VM’s themselves and utilize the virtual switching fabrics in the VM Host and never hit the actual physical network infrastructure, where, exactly, does this leave the self-defending "network" without VM-level security functionality at the "micro perimeters" of the VM’s?

I recall a question I asked at a recent Goldman Sachs security conference where I asked Jayshree Ullal from Cisco who was presenting Cisco’s strategy regarding virtualized security about how their approach to securing the network was impacted by virtualization in the situation I describe above. 

You could hear crickets chirp in the answer.

Talk amongst yourselves….

P.S. More excellent discussions from Matasano (Ptacek) here and Rothman’s bloggy.  I also recommend Greg Ness’ commentary on virtualization and security @ the HyperVisor here.

Uncle Mike says “Virtualization hasn’t changed the fundamental laws of network architecture.”

January 16th, 2007 2 comments

Despite Mike completely missing the point of my last post regarding Alan Shimel’s rant on Tippingpoint (he defaults to the "Hoff is defending Big Iron" blurb), Mike made a bold statement:

Virtualization hasn’t changed the fundamental laws of network architecture

I am astounded by this statement.  I violently disagree with this assertion.

Virtualization may have not changed the underlying mechanisms of CSMA/CD or provided the capability to exceed the speed of light, but virtualization has absolutely and fundamentally affected the manner in which networks are designed, deployed, managed and used.   You know, network architecture.

Whether we’re talking about VLAN’s, MPLS, SOA, Grid Computing or Storage, almost every example of data center operations and network design today is profoundly impacted by the V-word.

Furthermore, virtualization (of transport, storage, application, policy, data) has also fundamentally changed the manner in which computing is employed and resources consumed.  What you deploy, where, and how are really, really important.

More importantly (and relevant here), virtualization has caused architects to revisit the way in which these assets, and the data that flows through them, are secured.

And to defray yet another "blah blah…big iron…large enterprise….blah blah" retort, I’m referring not just to the Crossbeam way (which is heavily virtualized,) but that of Cisco and Juniper also.  All Next Generation Network Services are in a low-earth orbit of the mass that is virtualization.

"Virtualization of the routed core. Virtualization of the data and control planes.  Virtualization of Transport.  Extending the virtualized enterprise over the WAN.  The virtualized access layer."  You know what those are?  Chapters out of a Cisco Press book on Network Virtualization which provides "…design guidance" for architects of virtualized Enterprises.

I suppose it’s only fair that I ask Mike to qualify his comment, because perhaps it’s another "out-of-context-ism" or I misunderstood (of course I did) but it made me itchy reading it.

Mike?